Tag Archives: Cyborgs

Cyborg Bugs, or, Skynet Part 4

This came across my desk this morning (thanks to the bureau of information gathering).

The creation of a cyborg insect army has just taken a step closer to reality. A research team at UC Berkeley recently announced that it has successfully implanted electrodes into a beetle, allowing scientists to control the insect’s movements in flight.

“We demonstrated the remote control of insects in free flight via an implantable radio-equipped miniature neural stimulating system.”

The research, supported by the Pentagon’s Defense Advanced Research Projects Agency, is part of a broader effort which has been looking specifically at different approaches to implanting micro-mechanical systems into insects in order to control their movements.

Eventually, the mind-controlled insects could be used to “serve as couriers to locations not easily accessible to humans or terrestrial robots,” they note.

Now, I’m not usually one to pander to conspiracy theories on this blog, but this does seem to have the very recognisable imprint of a certain insidious multinational conglomerate out to destroy humankind through robotic devices (otherwise known as Terminators).

Don’t you think?

Leave a comment

Filed under Film, Random

Lightning Fast Robotic Hand

Scared, anyone?

Resemble anything vaguely familiar from Science Fiction?

Leave a comment

Filed under Film, Random

Paul Krugman talks SF


Autocorrecting your spreadsheet is bad enough; now imagine HAL 9000 in charge of autocorrecting your spreadsheet.

Paul Krugman, whose column I read at the NY Times, was recently in conversation with SF author Charlie Stross in Montreal.

It’s a unique discussion, not least because it involves a dismal scientist trying to bridge the gap with a fictional scientist.

There are a lot of interesting questions raised, like why the rate of technological change hasn’t been able to match the predictions of SF classics like Arthur C. Clarke’s 2001, William Gibson’s Neuromancer, and Greg Bear’s Blood Music.

Says Krugman:

What you came out believing if you went to the New York World’s Fair in 1964 was that we were going to have this enormously enhanced mastery of the physical universe. That we were going to have undersea cities and supersonic transports everywhere.

And there hasn’t been that kind of dramatic change.

My favorite test, which shows something about me, is the kitchen. If you walked into a kitchen from the 1950’s it would look a little pokey, but you’d know what to do. It wouldn’t be that difficult. If someone from the 1950’s walked into a kitchen from 1909 they’d be pretty unhappy – they might just be able to manage. If someone from 1909 went to one from 1859, they would actually be hopeless.

The big change was really between 1840 and the 1920’s, in terms of what the physical nature of modern life is like. There’s been nothing like that since.

And Stross on Genomics:

They have sequenced quite a few mammalian and other genomes and it’s getting cheaper all the time.

Craig Venter came up with an interesting project a couple of years ago to sequence the Pacific Ocean. If you have a bucket of seawater, it contains probably on the order of a billion organisms, most of which are viruses; probably single virus particles in that bucket from a number of species. It turns out that when they did shotgun sequencing on a bucket of seawater, 98% of the genes they discovered were hitherto unknown. About 90% of those unknown genes were from viruses, and we have no idea what their host organisms were…basically, viral soup.

There’s a lot of stuff we don’t know about how the genome works. It’s not, as was widely thought in the 50’s and 60’s, a blueprint. It’s more like a very, very messy snapshot of a running computer program.

I wonder if they got a few floating genes from Moby-Dick in that bucket?  That would explain why the sequencing went haywire.

And on my favourite hobbyhorse, AI, augmented intelligence, and general crackpot conspiracies:

PK: We’ve gone for augmented intelligence, not artificial intelligence.

PK: And it’s the weirdest thing – by finding the eigenvector with the largest eigenvalue you end up in effect doing a computer meld of many peoples’ intelligence without knowing it.

CS: Actually, Amazon is very big on human intelligence emulating AI. They have a system called the Mechanical Turk, where they pay people piecework to do basic tasks and farm them out using the network. If you want to throw money at a problem, you can find a hundred thousand pairs of eyes to work on it, if you can divide it up suitably.

PK: Whatever the algorithm that Amazon uses to make recommendations…

CS: That scares me.

Scares me too.  If you want to read on, here’s the full transcript.
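For the curious: Krugman’s eigenvector remark is the mathematical heart of PageRank-style ranking, where the “computer meld of many peoples’ intelligence” falls out as the dominant eigenvector of a big matrix of links or ratings. Here’s a toy sketch of the standard power-iteration method; the matrix values are invented purely for illustration, not anything Amazon or Google actually runs.

```python
# Power iteration: find the dominant eigenvector of a matrix by
# repeatedly multiplying a vector by the matrix and renormalising.

def power_iteration(matrix, steps=100):
    n = len(matrix)
    v = [1.0 / n] * n  # start from a uniform vector
    for _ in range(steps):
        # w = matrix @ v, written out by hand
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) for x in w)
        v = [x / norm for x in w]
    return v

# A made-up 3x3 "who recommends whom" matrix (columns sum to 1).
links = [
    [0.0, 0.5, 0.5],
    [1.0, 0.0, 0.5],
    [0.0, 0.5, 0.0],
]
print(power_iteration(links))
```

The returned vector is the collective “verdict” of everyone encoded in the matrix, which is exactly the unsettling part: nobody in the matrix ever voted on it.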

Leave a comment

Filed under Books

Smart Machines, or, Skynet Part 3

It’s time to continue my series of informative posts about the imminent extinction of humanity.

It’s genocide, people, and it’s not going to be caused by climate change or a nuclear holocaust (though a nuclear war could be a flow-on effect).  No, it’s going to be caused by machines; machines that we have built, nursed, and educated, that will at last turn on us, like Dr Frankenstein’s monster, and destroy us.

I know that most of you probably think that human-killing robots (aka terminators) are the stuff of science fiction.  They’re about as realistic as Arnie’s cameo in Terminator: Salvation, right?  I mean, we’ve got far more pressing issues to worry about: economic meltdown, rising sea levels, terrorists obtaining dirty WMDs, Pakistan vs India, North Korea, even Iran.  Isn’t that right?  Wrong.


Let me draw your attention to an excellent article that recently appeared in the NY Times:

A robot that can open doors and find electrical outlets to recharge itself.  Computer viruses that no one can stop.  Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

The researchers generally discounted the possibility of highly centralized superintelligences. But they agreed that robots that can kill autonomously are either already here or will be soon.

A number of things can be gleaned from this article:

  1. Science has made rapid advancements that most of us are not aware of.
  2. Scientists, the very people who have made these advancements, are concerned enough to consider limiting themselves (and hence doing themselves out of a job).
  3. Mechanical intelligence may already have passed the point of no return.

These insidious developments in robotics are taking place right under our very noses – in publicly and privately owned laboratories around the world.  We jabber on endlessly about health care, financial regulation, and even the environment, when none of these things will exist after the machines are finished with us.

How can this approaching tsunami be stopped?  Put simply, it can’t.

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones.  A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.

That’s a quote from Robert J. Sawyer, SF author (and prophet), which can be traced back to that eminently reliable website Wikipedia.

But it’s a good point that Sawyer makes.  Research into artificial intelligence is as natural to capitalism (late capitalism, to be precise) as breathing oxygen is to us human beings.  The catastrophe that Marx foresaw (but was unable to name), the contradiction that he argued would destroy capital from within, will not be a global financial collapse, but the literal destruction of our society by our technological servants.  It will thus be the same cultural logic that gave rise to steam engines, motor cars, computers, and nuclear weapons, that is, instrumental rationality, which will come into its own and, having liberated ghost from shell, will at last annihilate us who gave birth to it.  It will be the Industrial Revolution Part 2.


But what of Isaac Asimov, I hear you cry, and his 3 Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Such constraints are all well and good in theory, but only if you believe that robots will remain restricted to what is programmable, which I think is naive.  What if the scientists and the IT nerds are wrong?  That’s what I want to know.  What if we are on the cusp of a highly centralized superintelligence?  Has anyone prepared for such an event?  Has the US Army been stockpiling EMP grenades?
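To be fair to the optimists, the Three Laws are “programmable” in the narrow sense: they amount to an ordered constraint check, with each law subordinate to the ones above it. A deliberately silly sketch of that structure (the action model and every field name here are entirely made up):

```python
# Toy sketch of Asimov's Three Laws as an ordered constraint check.
# The "action" is a dict of hypothetical flags; this is illustration, not robotics.

def permitted(action):
    """Return True if the action passes the Three Laws, checked in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey human orders, unless obeying would break the First Law
    # (the First Law check above already ran, so order-following is safe here).
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to the first two Laws,
    # so a self-destructive act is allowed only if a human ordered it.
    if action.get("destroys_self") and not action.get("ordered"):
        return False
    return True

print(permitted({"harms_human": False, "disobeys_order": False}))  # prints True
```

The trouble, of course, is the premise baked into line one: that the machine will keep consulting the checklist at all.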

I think it will be important to remember, when the Singularity does arrive, that Vernor Vinge got there first (quoting I. J. Good):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.  Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.

I urge you to read his treatise on The Coming Technological Singularity in full. It’s frightening food for thought.  And he wrote it back in ’93!

I hardly need point out the prescience of The Matrix. And, if you’ve ever wondered how Neo’s world got to be so messed up, you ought to consider the far better accompanying series The Animatrix, which eschews feel-good messages and goes right for the viewer’s jugular, depicting humanity’s demise with cold candour.  The machines of course have a different name for humanity’s downfall: they call it the Second Renaissance.

If you haven’t seen the documentary below, it’s compulsory viewing.  It ain’t science fiction; it’s a documentary sent back from the future, to warn us.  (If you’re listening to this, you are the resistance.)

I won’t stop banging on about this.  Not until the world wakes up….WAKE UP!


Filed under Film, Random

Flight of the Conchords know about Skynet

You’ve got to love the Flight of the Conchords.

They’re funny, sure, but more importantly, they’re prescient.

Now, have I not been saying for some time (here, here, and here) that the machines are on the rise and that this is something we should all be SERIOUSLY worried about?

It seems they agree.  Check it out: a song from a human-eradicated-future.  Binary solo!


Filed under Film, Random


What is this?  Why is it on this blog?  Why do I find it one of the most disturbing things I have ever seen?

Leave a comment

Filed under Random