Smart Machines, or, Skynet Part 3

It’s time to continue my series of informative posts about the imminent extinction of humanity.

It’s genocide, people, and it’s not going to be caused by climate change, or a nuclear holocaust (though a nuclear war could be a flow-on effect).  No, it’s going to be caused by machines; machines that we have built, nursed, and educated, that will at last turn on us, like Dr Frankenstein’s monster, and destroy us.

I know that most of you probably think that human-killing robots (aka terminators) are the stuff of science fiction.  They’re about as realistic as Arnie’s cameo in Terminator: Salvation, right?  I mean, we’ve got far more pressing issues to worry about: economic meltdown, rising sea levels, terrorists obtaining dirty WMDs, Pakistan vs India, North Korea, even Iran.  Isn’t that right?  Wrong.

[Image: the Terminator]

Let me draw your attention to an excellent article that recently appeared in the NY Times:

A robot that can open doors and find electrical outlets to recharge itself.  Computer viruses that no one can stop.  Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

The researchers generally discounted the possibility of highly centralized superintelligences. But they agreed that robots that can kill autonomously are either already here or will be soon.

A number of things can be gleaned from this article:

  1. science has made rapid advancements that most of us are not aware of;
  2. scientists, the very people who have made these advancements, are concerned enough to consider limiting themselves (and hence, doing themselves out of a job);
  3. mechanical intelligence may already have passed the point of no return.

These insidious developments in robotics are taking place right under our very noses – in publicly and privately owned laboratories around the world.  We jabber on endlessly about health care, financial regulation, and even the environment, when none of these things will exist after the machines are finished with us.

How can this approaching tsunami be stopped?  Put simply, it can’t.

The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones.  A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.

That’s a quote from Robert J. Sawyer, SF author (and prophet), which can be traced back to that eminently reliable website Wikipedia.

But it’s a good point that Sawyer makes.  Research into artificial intelligence is as natural to capitalism (late capitalism, to be precise) as breathing oxygen is to us human beings.  The catastrophe that Marx foresaw (but was unable to name), the contradiction that he argued would destroy capital from within, will not be a global financial collapse, but the literal destruction of our society by our technological servants.  It will thus be the same cultural logic that gave rise to steam engines, motor cars, computers, and nuclear weapons, that is, instrumental rationality, which will come into its own and, having liberated ghost from shell, will at last annihilate us, who gave birth to it.  It will be the Industrial Revolution Part 2.

[Image: Isaac Asimov’s I, Robot (“Runaround”)]

But what of Isaac Asimov, I hear you cry, and his Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
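
The Laws are easy enough to state in code; the hard part hides in the predicates.  Here’s a minimal Python sketch, where harms_human, inaction_harms_human, and self_destructive are hypothetical stand-ins I’ve invented for illustration (deciding them is precisely the unsolved problem):

    def permitted(action, ordered_by_human,
                  harms_human, inaction_harms_human, self_destructive):
        """Return True if `action` survives all three Laws, checked in priority order."""
        # First Law: never harm a human, by action or by inaction.
        if harms_human(action) or inaction_harms_human(action):
            return False
        # Second Law: obey humans, unless obedience would break the First Law
        # (which was already checked above).
        if ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two Laws.
        if self_destructive(action):
            return False
        return True

    # e.g. a human-ordered, harmless action is allowed:
    # permitted("open the door", True,
    #           harms_human=lambda a: False,
    #           inaction_harms_human=lambda a: False,
    #           self_destructive=lambda a: False)   # -> True

The whole scheme stands or falls on those three stub tests, which is the sense in which the Laws are “programmable” only on paper.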

Such constraints are all well and good in theory, but only if you believe that robots will remain restricted to what is programmable, which I think is naive.  What if the scientists and the IT nerds are wrong?  That’s what I want to know.  What if we are on the cusp of a highly centralized superintelligence?  Has anyone prepared for such an event?  Has the US Army been stockpiling EMP grenades?

I think it will be important to remember, when the Singularity does arrive, that Vernor Vinge got there first, quoting I. J. Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.  Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.

I urge you to read his treatise on The Coming Technological Singularity in full. It’s frightening food for thought.  And he wrote it back in ’93!
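
The arithmetic behind that “explosion” is brutally simple.  Here’s a deliberately crude toy model in Python; the self-improvement factor k = 1.1 is a number I’ve made up purely for illustration (it appears nowhere in Vinge’s essay), and the point is only that any k greater than 1 compounds into a runaway:

    # Toy model of Good's feedback loop. Premise: each generation of machine
    # designs its successor, and design skill scales with intelligence, so
    # intelligence compounds by some factor k per generation.
    intelligence = 1.0   # generation zero: as clever as its human designers
    k = 1.1              # assumed (invented) self-improvement factor

    for generation in range(1, 101):
        intelligence *= k
        if intelligence >= 1000:
            print(f"1000x human level by generation {generation}")
            break
    # -> 1000x human level by generation 73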

I hardly need point out the prescience of The Matrix. And, if you’ve ever wondered how Neo’s world got so messed up, you ought to consider the far better accompanying series The Animatrix, which eschews feel-good messages and goes right for the viewer’s jugular, depicting humanity’s demise with cold candour.  The machines, of course, have a different name for humanity’s downfall: they call it the Second Renaissance.

If you haven’t seen the documentary below, it’s compulsory viewing.  It ain’t science fiction; it’s a documentary sent back from the future, to warn us.  (If you’re listening to this, you are the resistance.)

I won’t stop banging on about this.  Not until the world wakes up… WAKE UP!



6 responses to “Smart Machines, or, Skynet Part 3”

  1. It’s not the big machines that you need to worry about. It’s them nanobots.

    • Nanobots. Yes. A big worry.

      I can’t even see them crawling on my skin, and there’s hundreds of thousands of them.

      Along with the aphids. They’re a worry too. I’ve got them all over.

      My biggest fear is that they will soon invent nanoaphids.

      A very worrying thought…

  2. There are two aspects to viruses – the vulnerability and the virus itself. The vulnerabilities on people’s computers are what a virus uses to play havoc, and I’ve yet to hear of a case where a vulnerability can’t be fixed. The other side is the virus itself, and there are some interesting developments in that area that involve the virus replicating itself in such a way that each copy of the virus is unique (which makes the job of virus-scanning programs a lot more difficult, though not necessarily impossible). So yes, there may well be “unstoppable viruses” out there in the near future, but unstoppable doesn’t mean “causing harm forever”.
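
    To make the each-copy-unique trick concrete, here’s a toy Python sketch (purely illustrative: the “payload” is a harmless string, and real polymorphic engines mutate their decoder too):

        import hashlib
        import os

        PAYLOAD = b"a perfectly harmless stand-in payload"

        def make_copy(payload):
            key = os.urandom(1)[0] or 1              # fresh random XOR key per copy
            body = bytes(b ^ key for b in payload)   # same payload, re-encoded
            return bytes([key]) + body               # prepend key so the copy can decode itself

        def decode(copy):
            key, body = copy[0], copy[1:]
            return bytes(b ^ key for b in body)

        copies = [make_copy(PAYLOAD) for _ in range(3)]
        print([hashlib.sha256(c).hexdigest()[:12] for c in copies])  # three different "signatures"
        print(all(decode(c) == PAYLOAD for c in copies))             # True: identical behaviour

    Three copies, three different signatures, identical behaviour – which is why scanners have to look at what code does rather than what it hashes to.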

    As for super-intelligent robots – that’s not going to happen anytime in the near future either. For a start, no one has much of an idea how to create something sentient (if there even is such a thing as “consciousness”, but that’s another topic entirely!), though the one way that theoretically would guarantee it is modelling the human brain in a computer environment, which, even with all our technology today, is such a mind-bogglingly large problem that I doubt it will be solved in our lifetime, or our children’s, or their children’s, etc. (Plus we’re nowhere even near trying to implement Asimov’s laws!)

    I do worry about the predator drones, though – not because they may gain sentience (which they won’t), but because if you ask any computer programmer (including me) whether they would fly in an aeroplane controlled by software they’d written, 99.999999% wouldn’t even go through the doors of the airport. So while the drones won’t turn evil, there’s a reasonable chance they’ll shoot the wrong thing.

    Then again, what I’m saying is basically what every scientist has said in SF novels – “it can’t possibly happen!”. Huh. (At the risk of alarming you further, here’s something rather disturbing: http://www.youtube.com/watch?v=cHJJQ0zNNOM )

    • Well, a pleasure to host the musings of a philologist. And how many dead languages have you under your belt? I hear the reflexive pronoun in Ancient Greek is a slippery slope into madness.

      I take your point about viruses. The more recent strains are definitely where it is starting to get interesting. As you say, with ‘each copy of the virus unique’, what we are essentially talking about is a technological entity mimicking DNA. I personally don’t think that viruses are much of a problem or a threat; it’s more what they might lead to in terms of adaptation. Just as biological cells in the beginning evolved into more complex organisms, so too could technological entities, hosted on the internet, conjoin to form some sort of intelligence.

      Now, I think it’s very likely, as you say, that our knowledge as it currently stands is pitifully inadequate in terms of creating sentience. But, a disclaimer: we do not fully understand even half of the technologies we are currently playing with, and so it is not unreasonable to imagine that, for example, an intelligence might simply ‘wake up’. Perhaps in the Googleplex in California; that makes some innate sense.

      The drones are an entirely different (and quite frightening) subject. They simply extend man’s ability to destroy man. And I totally agree with you that it will only be a matter of time before one of those things gets loose in Afghanistan or wherever and massacres a whole stack of people.

      I’ve already seen the Muleborg, though. Not “Big Dog”: the Muleborg. I posted about that exact same thing a few weeks ago. Disturbing: incredibly so.

  3. Well, when I made it my nickname I was under the impression that it referred to “a love of literature and learning”, which, as I found out later, is apparently an obsolete definition – and now WordPress won’t let me change it and I’m stuck with it! Perhaps the WordPress servers have turned sentient and are messing with my mind…

    So as you can see I have enough trouble getting the definitions for English right, let alone those for dead languages!

    As for the viruses, there’s an area of computer science known as Genetic Programming which basically tries to do what you’re suggesting. Not much luck with it at the moment, though. Also, I’m very interested to know what these technologies are that we don’t fully understand yet!
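
    For the curious, the mutate-and-select loop at the heart of it fits in a few lines of Python. Strictly speaking this toy is a genetic algorithm rather than genetic programming, since it evolves a string instead of a program (and the target string is just one I made up), but the core loop – copy with random variation, keep the fittest – is the same idea:

        import random

        TARGET = "machines that build machines"
        ALPHABET = "abcdefghijklmnopqrstuvwxyz "

        def fitness(candidate):                     # how many characters already match
            return sum(a == b for a, b in zip(candidate, TARGET))

        def mutate(parent, rate=0.05):              # copy with random variation
            return "".join(random.choice(ALPHABET) if random.random() < rate else c
                           for c in parent)

        best = "".join(random.choice(ALPHABET) for _ in TARGET)
        generation = 0
        while fitness(best) < len(TARGET):
            generation += 1
            children = [mutate(best) for _ in range(100)]
            best = max(children + [best], key=fitness)  # selection: keep the fittest
        print(f"converged after {generation} generations: {best!r}")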

    And at the risk of showing you something you’ve already seen again – did you see the beta tests for Muleborg? http://www.youtube.com/watch?v=VXJZVZFRFJc
