Ray Kurzweil appeared as a hologram in Tucson delivering a lecture from Boston about The Future. It was a fine performance, and when it was over we in the audience clapped delightedly, smiling and thrilled at our prospects. I liked him and liked the human spirit and felt relieved that something I considered unstoppable anyway (Technology) might turn out so well. Little did I know.
He said there was accelerating progress in many technologies and that, while hard to detect in early stages, where the difference between exponential and linear processes appeared slight, such progress inevitably snowballed to the point of changing the world. Recent examples included the Genome Project, which pessimists had said would take a hundred years (and still thought they were right several years into it before researchers ‘hit the knee of the curve’); the Internet, which looked insignificant in the 1980s; the cell phone, which at first seemed destined only for the rich; and computing power itself.
In the question period, I wanted to ask what he thought of moral and political developments: were we at that early part of the curve where it was difficult to see that two times almost nothing was really something, soon to be great, or was it rather that two times nothing is nothing? To ask my question, though, I would have had to scramble over about twenty laps and there were soon more people lined up at the mikes than there was time for.
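The arithmetic behind that question is easy to make concrete. As a toy illustration of my own (the numbers are invented, not Kurzweil’s), here is how a doubling process and a linear one compare, step by step:

```python
# Early on, doubling growth and linear growth look alike; later they
# diverge spectacularly -- the "knee of the curve" Kurzweil describes.
# Toy numbers of my own, purely for illustration.

linear = [0.01 * n for n in range(31)]            # grows by 0.01 per step
exponential = [0.01 * 2 ** n for n in range(31)]  # doubles each step

# After 5 steps the two are still the same order of magnitude...
print(round(linear[5], 2), round(exponential[5], 2))    # 0.05 vs 0.32
# ...after 30 steps they are not remotely comparable.
print(round(linear[30], 2), round(exponential[30], 2))  # 0.3 vs 10737418.24
```

Two times almost nothing is still almost nothing for a long while; the question is whether it stays that way.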
Thinking that in 652 pages Kurzweil would answer that question and others, I read his book, The Singularity is Near: When Humans Transcend Biology. On page 423 he shows his political stripe: “A contentious contemporary political issue is the need for preemptive action to combat threats, such as terrorists with access to weapons of mass destruction or rogue nations that support such terrorists. Such measures will always be controversial, but the need for them is clear. A nuclear explosion can destroy a city in seconds. A self-replicating pathogen, whether biological or nanotechnology based, could destroy our civilization in a matter of days or weeks. We cannot always afford to wait for the massing of armies or other overt indications of ill intent before taking protective action.” In other words, kill first.
Describing military technology, he seems to take a restrained but decided pleasure in advances: a new uniform that is stab-resistant; the Abrams tank, with only three casualties in twenty years of use; and Predator drones:
“The trend toward unmanned aerial vehicles (UAVs) … will accelerate. Army research includes the development of micro-UAVs the size of birds that will be fast, accurate and capable of performing both reconnaissance and combat missions. Even smaller UAVs, the size of bumblebees, are envisioned…
“[This] is not a one-shot program; it represents a pervasive focus of military systems toward remotely guided, autonomous, miniaturized, and robotic systems…
“One of the programs … envisions a drone army of unmanned, autonomous robots in the water, on the ground and in the air. The swarms will have human commanders with decentralized command and control and what project head Allen Moshfegh calls an ‘impregnable Internet in the sky.’” (pp. 332–3)
After these furies will come ‘Smart Dust,’ and beyond that ‘Nanoweapons,’ which an enemy will be unable to resist, except with some of its own.
In sum, Kurzweil’s answer to the old problem of human enmity would seem to be the Old Testament one: Smite ’em.
But so as not to prejudice a reader against this big tome over small moral concerns, it is worth considering the book in general. In truth, I was educated and entertained by it, and can recommend it, both for mind-stretching and for laughs. There are many wonderful quotations throughout: Yogi Berra’s ‘The future ain’t what it used to be’; Marvin Minsky’s ‘Will robots inherit the earth? Yes, but they will be our children’; and Giulio Giorelli’s ‘Yes, we have a soul. But it’s made up of lots of tiny robots.’
Is technology unstoppable? Kurzweil quotes Nick Bostrom (p. 259): “…there is no end to the list of consumer-benefits. There is also a strong military motive to develop artificial intelligence. And nowhere on the path is there any natural stopping point where technophobics could plausibly argue ‘hither but not further.’”
I think this position correct and obvious. Whatever one wants, whether to battle poverty and disease, to capture energy, or to destroy others, knowledge is key, and the pursuit of it now pivots on physical means. Information may not be wisdom, but wisdom uses information.
Kurzweil predicts so many Earth-shaking developments it is doubtful he is right about all of them. On the other hand, he is almost certainly right in his central thesis that technological developments are, and will continue to be, Earth-shaking. Hence his term, singularity, borrowed from mathematics and physics to describe a situation in which the outcome of an operation is undefined, or where conditions develop such that the usual rules no longer pertain.
He envisages six epochs in the history of the cosmos and fits them to a logarithmic plot of time. First, the era of physics and chemistry, where matter came out of the big bang and substances mixed in regular ways; second, the era of biology, where RNA and DNA got established to create even more organization; third, brains, when animal intelligence got rolling; fourth, technology, where brains and opposable thumbs produced machines that were (or will be) themselves intelligent; fifth, a merger of technology and biology, during which time many things happen but, most importantly, machines surpass humans in the design and invention of themselves, thereby creating the singularity; and finally, epoch six, where ‘the universe wakes up’ and becomes saturated with intelligence that mechanically spreads out from Earth.
One question that comes up with such speculation is what machines might do with us once they had the upper hand. They would have read all our books and seen and known us inside out. Kurzweil proposes that “Our primary strategy in this area should be to optimize the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward.” p. 424.
It goes without saying that the machines would be aware of our professed values, but I think we can assume they would also be able to read between the lines. We might hope they had insight as to where our ‘values’ lead; otherwise, they too would get into a dust-up.
Machines, in this imaginary future, would have attained the upper hand by intelligence. The idea is that nothing trumps intelligence: it implies ability, mobility, strength, etc. We could not unplug the computer but it could unplug us.
Kurzweil quotes Seth Shostak: “We come from goldfish, essentially, but that [doesn’t] mean we turned around and killed all the goldfish…. If you had a machine with a 10 to the 18th IQ over humans, wouldn’t you want it to govern, or at least control your economy?”
I think, yes, we would want it to govern (there’s no choice if you’re playing this speculative game honestly), but I don’t see how we could expect it to love us, or to believe Kurzweil’s saccharine words about values. That being said, I don’t think we’d have to fear it either, except as another force of Nature, neither malevolent nor benevolent.
If Kurzweil’s main claim is only that technology develops at an accelerating rate, why does one so often feel suspicious of his predictions? We can all see that accelerating growth has characterized technology in the past, but it seems unimaginable that any specific exponential progression should continue. This might be because: A) we simply can’t imagine it; B) we know that ‘no exponential is forever’; but also, I think, C) there is a tendency in all of us to color and influence the future as we might like.
The following sample of Kurzweil’s assertions is presented as a list of astonishing facts, possibilities, or claims I have learned of (each followed in parentheses by a remark or two of my own). The list is incomplete for the sake of brevity, but it is given largely in Kurzweil’s words.
1) p. 105 “By the end of this decade, computers will disappear as distinct physical objects, with displays built in our eyeglasses, and electronics woven in our clothing, providing full-immersion visual virtual reality.” (This cannot be. I just bought a computer and plan to keep it for at least two years. It seems probable that Kurzweil intended something more conservative than my literal reading here, but if he did, he is showing disregard for precision. If he meant what he said, he needs reminding that ‘Nothing ruins the truth like stretching it.’)
2) p. 130 Computation need consume no energy. “[The] act of erasing data generates heat and therefore requires energy… if you keep all the intermediate results and then run the algorithm backward when you’ve finished your calculations, you end up where you started, have used no energy and generated no heat. Along the way, however, you’ve calculated the result of your algorithm.” (I am not competent to evaluate this.)
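The idea Kurzweil is paraphrasing here, reversible computation, can at least be mimicked in ordinary code. The sketch below is my own toy illustration, not anything from the book: every step is an invertible function, so after reading off the answer we can run the steps backward and restore the starting state without ever erasing anything.

```python
# A toy model of reversible computation: each step is a bijection paired
# with its inverse, so the whole computation can be run backward. After
# copying out the answer, undoing the steps restores the initial state --
# the "keep the intermediates, then run backward" trick quoted above.

def forward(x, steps):
    """Apply the reversible steps, keeping every intermediate result."""
    history = [x]
    for f, _ in steps:
        x = f(x)
        history.append(x)
    return x, history

def backward(x, steps):
    """Undo the steps in reverse order using their inverses."""
    for _, inv in reversed(steps):
        x = inv(x)
    return x

# Each step is a (function, inverse) pair -- all invertible on integers.
steps = [
    (lambda v: v + 7,      lambda v: v - 7),
    (lambda v: v * 3,      lambda v: v // 3),
    (lambda v: v ^ 0b1010, lambda v: v ^ 0b1010),  # XOR is its own inverse
]

start = 5
result, history = forward(start, steps)
answer = result                     # "copy out" the answer...
restored = backward(result, steps)  # ...then uncompute back to the start
assert restored == start
```

Of course, an ordinary computer still dissipates heat at every step; the claim is only that the logic itself need not destroy information, and it is information destruction that carries an unavoidable energy cost.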
3) p. 136 The date for the singularity is set at 2045. The possibilities for life after this period, or even after medical advances prior to this time, amount to virtual immortality. (This doesn’t interest me; I don’t want to be a trapped ghost, which is how I would feel if uploaded into a machine.)
4) p. 230 One of the major achievements of nanotechnology will be molecular assembly. “…the typical assembler has been described as a tabletop unit that can manufacture almost any physical product for which we have a software description, ranging from computers, clothes, and works of art to cooked meals. Larger products, such as furniture, cars, or even houses, can be built in a modular fashion or using larger assemblers… Drexler estimates a total manufacturing cost for a molecular-manufacturing process in the range of ten cents to fifty cents per kilogram, regardless of whether the manufactured product were clothing, massively parallel supercomputers, or additional manufacturing systems.” (I am pleased to know about this, but it is so far beyond my ken as to resist imagining; therefore, I also resist pooh-poohing it.)
5) p. 28 Respirocytes, little mechanical gizmos in the blood, will deliver oxygen far more efficiently than our own red blood cells. (I bet there are problems with unnaturally efficient oxygen delivery. We would still need fuel for oxygen to combine with. What if all your glycogen were gone in one minute instead of ten? Would that be progress? It isn’t always better to be faster in one way.)
6) p. 416 Nanotechnology will come with a risk of malevolently replicating machines, but do not be dismayed. “The reality is that … our defensive knowledge and technologies will grow along with the dangers. A phenomenon like gray goo (unrestrained nanobot replication) will be countered with ‘blue goo’ (‘police’ nanobots that combat the ‘bad’ nanobots).”
It may have been at this point that I wondered if Kurzweil had a cloven hoof, and it was only by reminding myself that his company built the first reading machines for the blind that I disabused myself of the idea. There is probably more imagination than truth in this book, but Kurzweil is broadly informed and always offers that germ of factuality that fascinates.
PETER HARLEY lives in Newfoundland. He can be reached at: email@example.com