It’s time to continue my series of informative posts about the imminent extinction of humanity.
It’s genocide, people, and it’s not going to be caused by climate change or a nuclear holocaust (though a nuclear war could be a flow-on effect). No, it’s going to be caused by machines; machines that we have built, nursed, and educated, that will at last turn on us, like Dr Frankenstein’s monster, and destroy us.
I know that most of you probably think that human-killing robots (aka terminators) are the stuff of science fiction. They’re about as realistic as Arnie’s cameo in Terminator Salvation, right? I mean, we’ve got far more pressing issues to worry about: economic meltdown, rising sea levels, terrorists obtaining dirty WMDs, Pakistan vs India, North Korea, even Iran. Isn’t that right? Wrong.
Let me draw your attention to an excellent article that recently appeared in the NY Times:
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.
The researchers generally discounted the possibility of highly centralized superintelligences. But they agreed that robots that can kill autonomously are either already here or will be soon.
A number of things can be gleaned from this article:
- science has made rapid advancements that most of us are not aware of;
- scientists, the very people who have made these advancements, are concerned enough to consider limiting themselves (and hence doing themselves out of a job);
- mechanical intelligence may already have passed the point of no return.
These insidious developments in robotics are taking place under our very noses, in publicly and privately owned laboratories around the world. We jabber on endlessly about health care, financial regulation, and even the environment, when none of these things will exist once the machines are finished with us.
How can this approaching tsunami be stopped? Put simply, it can’t.
The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.
That’s a quote from Robert J. Sawyer, SF author (and prophet), which can be traced back to that eminently reliable website Wikipedia.
But it’s a good point that Sawyer makes. Research into artificial intelligence is as natural to capitalism (late capitalism, to be precise) as breathing oxygen is to us human beings. The catastrophe that Marx foresaw (but was unable to name), the contradiction that he argued would destroy capital from within, will not be a global financial collapse, but the literal destruction of our society by our technological servants. It will thus be the same cultural logic that gave rise to steam engines, motor cars, computers, and nuclear weapons, that is, instrumental rationality, which will come into its own and, having liberated ghost from shell, will at last annihilate us who gave birth to it. It will be the Industrial Revolution, Part 2.
But what of Isaac Asimov, I hear you cry, and his Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
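On paper the Laws are eminently programmable: they’re nothing but a strict priority ordering over a robot’s possible actions. Here’s a minimal Python sketch of that ordering — every name in it (`Action`, `law_violations`, `choose`) is invented for illustration, not taken from any real robotics system:

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering over
# candidate actions. All names here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False              # First Law: harm by act
    allows_harm_by_inaction: bool = False  # First Law: harm by omission
    disobeys_human_order: bool = False     # Second Law
    endangers_self: bool = False           # Third Law

def law_violations(action: Action) -> tuple:
    """Violation flags, ordered by the Laws' priority (First, Second, Third)."""
    return (
        int(action.harms_human or action.allows_harm_by_inaction),
        int(action.disobeys_human_order),
        int(action.endangers_self),
    )

def choose(actions: list) -> Action:
    # Lexicographic comparison of the tuples encodes the hierarchy:
    # the First Law dominates the Second, which dominates the Third.
    return min(actions, key=law_violations)
```

Forced to pick between disobeying an order and wrecking its own chassis, such a robot dutifully sacrifices itself. Which is exactly as far as the guarantee goes: it constrains only behaviour that can be enumerated and flagged in advance.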
Such constraints are all well and good in theory, but only if you believe that robots will remain restricted to what is programmable, which I think is naive. What if the scientists and the IT nerds are wrong? That’s what I want to know. What if we are on the cusp of a highly centralized superintelligence? Has anyone prepared for such an event? Has the US Army been stockpiling EMP grenades?
I think it will be important to remember, when the Singularity does arrive, that Vernor Vinge got there first, quoting I. J. Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind.
I urge you to read his treatise on The Coming Technological Singularity in full. It’s frightening food for thought. And he wrote it back in ’93!
I hardly need point out the prescience of The Matrix. And, if you’ve ever wondered how Neo’s world got so messed up, you ought to consider the far better accompanying series The Animatrix, which eschews feel-good messages and goes right for the viewer’s jugular, depicting humanity’s demise with cold candour. The machines of course have a different name for humanity’s downfall: they call it the Second Renaissance.
If you haven’t seen the below documentary, it’s compulsory viewing. It ain’t science fiction; it’s a documentary sent back from the future, to warn us. (If you’re listening to this, you are the resistance.)
I won’t stop banging on about this. Not until the world wakes up….WAKE UP!