Killing Ourselves With Technology


What do we do when technology spirals out of our control? Or, to put it more bluntly, when does humanity’s ability to build ever more dangerous weapons become a self-fulfilling prophecy?

Albert Einstein is said to have remarked that he didn’t know what weapons the third world war would be fought with, but the fourth would be waged with sticks and rocks. Even that classic of science fiction optimism, Star Trek, had humanity surviving a third world war. (Spock recounted the tolls of Earth’s three world wars in one episode.)

But we wouldn’t, would we? Or we might wish we didn’t. One story that has long lingered in my mind is an early Philip K. Dick story, “Second Variety,” published in 1953, a time when the cold war was looking decidedly hot. The story takes place in a post-apocalyptic France, in a world in which nuclear bombs and other equally nightmarish weapons have reduced most of North America and Europe to gray ash, with only a stubby tree trunk or a blasted wall dotting barren, depopulated landscapes.

The West’s governments have retreated to a bunker somewhere on the Moon, with scattered groups of soldiers huddled in hidden underground bunkers on Earth trying to “win” the world war. The land is uninhabitable because of a super-weapon developed by the U.S. — autonomous machines that home in on any living being and rip it to shreds with whirring metal blades that make short work of whatever they encounter. The Western soldiers are protected by belts that force the death machines to back off. This is the weapon that turns the tide of the war in the U.S. favor after years of “losing” to the Soviet Union.

But what is there to “win”? Much of the world is uninhabitable, not only because of the total destruction and residual radiation from countless bombs but also because of the new weapon. There is no alternative but to huddle in underground bunkers. As Dick’s story unfolds, the nightmare gets progressively worse — the weapons are not only autonomous, they are self-replicating, continually inventing newer and deadlier varieties of themselves. The last pockets of U.S. and Soviet soldiers in this slice of the French countryside are systematically killed as the machines learn to build robots difficult to distinguish from humans: robots allowed into bunkers as refugees, only to suddenly become unstoppable killing machines that don’t distinguish one side from the other.

Although they shudder at the mere thought of the weapons’ deadliness, more than once a soldier tries to justify these ultimate weapons by saying, “If we hadn’t invented them, they would have.”

If we didn’t shoot first, bomb first, destroy first, they would have. Whatever we do is justified. No culture has a monopoly on such thoughts. But such thoughts, combined with present-day technological progress, rising nationalism and ballooning military budgets, make the end of the human race a concrete possibility rather than merely a science fiction allegory.

Philip K. Dick was no prophet — no one is — but the nightmare world he created is chillingly tangible. What would happen if a technology of war were given autonomy? Such a weapon would be purposefully designed to kill swiftly and without mercy. The Pentagon has already begun a program designed to create autonomous weapons systems.

But what if an artificial intelligence decided humans were in the way? Isaac Asimov famously had his robots programmed with three laws that blocked them from doing any harm to any human. The other side of this equation was explored in another Star Trek episode, when the Enterprise encountered a planet populated by advanced robots. The robots had killed their creators so far back in time that the robots couldn’t remember when, but had done so because their creators “had begun to fear us and started to turn us off.”

Technology need not be feared, nor is it necessarily fated to escape all control. There are no von Neumann machines swarming everywhere (at least in this part of the galaxy!), and I am inclined to agree with Arthur C. Clarke’s maxim that there is no evil technology, only evil applications of technology. Yet we live in a world with plenty of opportunities for technology to be put to evil purposes. We see some of this all around us as workplaces become sites of tightening surveillance and control, from computers that report on us to bosses, to the endless treadmill of work speedups. Technology is today a tool of capitalists: to extract ever more work out of us, to outsource work on scales never before possible, and to facilitate ever faster and more frequent speculation in dubious financial instruments.

Technology in these hands also makes waging war easier — a drone operator can sit in a control room thousands of miles from the targets, safe from the carnage rained down on far-away peoples. If autonomous weaponry is ever unleashed, how could it be controlled? It couldn’t. Humanity won’t survive a third world war.

When we think of existential threats to our descendants’ world, we tend to focus on global warming, environmental degradation and the looming collapse of capitalist industrialism confronted with the impossibility of infinite growth on a finite planet. That is properly so, and these do seem to be the gravest challenges that will face us across the 21st century. But technology applied to perfecting military killing machines is well within the human imagination: Dick conjured it at the midpoint of the 20th century, and he is far from the only one.

Yes, a warning and not a prophecy. But in a world of vast inequality, of an industrial and financial elite willing to do anything, even put the planet’s health at risk, for the sake of acquiring more wealth, the potential for evil applications of technology is ever present.

One more reason, if we didn’t already have enough, to bring into being a better world, one built for human need and environmental harmony rather than private profit. We then wouldn’t need to endure a mad pursuit of fetishized technological advancement; instead we could harness technology for the greater good as necessary. Barbarism remains the likely alternative.

Pete Dolack writes the Systemic Disorder blog and has been an activist with several groups. His first book, It’s Not Over: Learning From the Socialist Experiment, is available from Zero Books and his second book, What Do We Need Bosses For?, is forthcoming from Autonomedia.