The Rise and Rise of Homo Tempus: An Interview with Richard Yonck


MIRANDA. O wonder! How many goodly creatures are there here! How beauteous mankind is! O brave new world That has such people in’t!

PROSPERO. New to thee.

– Shakespeare, The Tempest, Act 5, Scene 1

Richard Yonck is a futurist, author and speaker who helps businesses, readers and audiences prepare for tomorrow’s world. Richard explores short to long-range futures with an eye to how this knowledge can help prepare for potential eventualities and promote preferred futures.

Yonck is contributing editor for The Futurist Magazine and has authored two books, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence and Future Minds: The Rise of Intelligence from the Big Bang to the End of the Universe.

We communicated recently.

John Hawkins: In the preface to Future Minds: The Rise of Intelligence from the Big Bang to the End of the Universe, you sound like good ol’ Carl Sagan when you talk about humans being time travelers. You call us Homo tempus. Can you expand on that concept? And where does it fit into the pantheon of human milestones with Homo erectus and Homo sapiens?

Richard Yonck: Certainly, I read and was influenced by Sagan and others when I was growing up. My reference to Homo tempus, however, draws from experimental psychologist and cognitive scientist Endel Tulving, who proposed the idea of autonoetic consciousness. An extension of our episodic, or biographical memory, this is the basis of our ability to mentally project ourselves into the past and future. This sort of mental time travel would have been essential from early in our species’ development, allowing us to anticipate the future and take action to direct it positively. I see this as a major driver of early tool use and later technology.

So, where Homo erectus (“upright man”) and Homo sapiens (“wise man”) represent certain qualities of these distinct species, Homo tempus is intended more as a cognitive distinction. “Time man” is meant to reflect our ability to place ourselves in and explore different times and situations relative to our own, something that is an utterly unique ability of our species.

Hawkins: When I think of time travel, I think of some adventures I had in childhood that amounted to what I later heard termed astral projection — a kind of conscious leaving of the body to explore and go on adventures. This sounds like mere esotericism, as they say, but the idea echoes Einstein’s famous description of riding a beam of light.

Yonck: Different cultures throughout history have taken our ability to project ourselves into other times and places and imagined it allowing interaction with the physical world. Shamans, mystics, Theosophists and others have imagined a psychic plane that certain spiritual elites could enter. While there’s no arguing this occurs subjectively, there isn’t any reproducible evidence of corresponding objective effects, beyond possibly the altering of the practitioner’s own metabolism.

That said, cognition and the manipulation of concepts most assuredly have allowed us to reshape our world. The Gedankenexperiment, or thought experiment, has often allowed individuals to mentally explore different aspects of the physical and conceptual world. Einstein famously did this at age 16, when he imagined chasing a beam of light, an exercise that would later inform his theory of special relativity.

Hawkins: Can you explain the “Deep Past” and how we can possibly know it? And the “Deep Future” too?

Yonck: For me, concepts like the Deep Past and Deep Future represent eras in time that extend well beyond human experience. The formation of the universe, the Earth and early life are all part of the deep past. The deep future takes us to times probably beyond humanity, this planet and solar system — potentially even to the end of the universe. As to what we can know about them, we’ve always had to assemble the past from incomplete information, and the same can be said about the future. We understand many of the physical processes driving this, but the greater the complexity of the systems involved – life, society, etc. – the less we’re able to say specifically about what will develop and transpire. But as I explore in my book, Future Minds, there are trends at different scales that can point the way.

Hawkins: Can you say more about the 21st Century — what kinds of augmentations of the body and mind can we anticipate? What kind should we pursue? Should we go for universal bodies — eliminating, once and for all, racism — or continue down our current path? Where does evolution fit into this?

Yonck: We’ve been increasingly integrating ourselves with technology for millennia. Fashioning stone tools altered our neural structure, setting the stage for language acquisition and all that followed. Harnessing fire for cooking altered our gut biome, which we’ve recently discovered is closely linked to our brains. It also probably provided important nutrients for our rapidly evolving brains. Eyeglasses and prosthetic limbs have repaired, restored and even improved lost functions. As our understanding and engineering continue to improve, we’ll inevitably see every manner of physical and cognitive augmentation made possible. Robotics, AI, biotech and nanotech will all feed into this. Brain-computer interfaces, bionic contact lenses, advanced neural prosthetics, retinal implants — none of it is off the table, technically speaking. Ethically, it is another matter, however. Through the decades and centuries, we’ll continue to encounter new issues and territory to navigate as new technologies are developed and made available.

As to an idea like universal bodies, I don’t foresee this in anything but a dystopic future. I feel sure such a thing wouldn’t eliminate racism and all of the other negative ways humans find to differentiate, separate and alienate each other. Getting past this will take considerable social growth, something I believe is consistently displayed if we observe society across large enough time frames.

A far more important matter this raises, though, is diversity. Diversity is critical to the long-term survival of everything from gene pools, crops and immune systems to networks and ideas. Diversity of perspectives, individuals and cultures is key to challenging assumptions and avoiding myopic decisions and strategies. Because of this, even if an idea such as universal bodies were possible, like universal thoughts, it would be sure to fail catastrophically.

Hawkins: In Future Minds you talk about a “Deep Future.” Mighty optimistic. The other month I read a piece about bone stars at the end of time that depressed me — although I did squeeze a sonnet out of it. Given our current Anthropocene crisis, how can you be so sure?

Yonck: Human beings have a huge number of cognitive biases, mental devices that may serve us in one way but lead to problems in others, particularly in this modern era. For instance, we generally believe we are occupying a uniquely important period in time. This present bias is something people have exhibited throughout the ages. Such chronocentrism frequently leads us to believe the time we are living in is not only the most important of all times, but very well may immediately precede the end of times as well.

On the other hand, we’re incredibly resilient as a species. We’ve faced enormous challenges time and again, yet we consistently overcome them. The cognitive niche we occupy in our environment allows us to identify and address challenges far greater, and far more rapidly, than evolutionary processes ever could. True, we continue to be the source of many of these problems, but we are often the solution as well. For instance, deadly diseases exacerbated by urbanization over the past millennium have largely disappeared due to the development of germ theory, public health advances and widely available vaccines during the 20th century. Only a few decades ago, CFCs were tearing a hole in the planet’s ozone layer, but regulation and international treaties soon put it on the path to recovery. The Y2K bugs that threatened critical computing infrastructure around the world were sufficiently eliminated to avert disaster, so much so that many people believe there wasn’t really a problem in the first place.

We continue to do the same today. Now we’re focused on greenhouse gases, global warming, plastics pollution, and much more. While things look dire, we are making headway, much of it in technologies that are exponential in nature. Photovoltaics and other forms of renewable energy are being installed at an accelerating pace. New approaches to plastic production, breakdown and recycling are being developed. Carbon capture and sequestration research and development is promising, if still in very early stages. Wright’s law and positive economic drivers will continue to accelerate these technologies and solutions. So yes, I’m optimistic.

But you know what? Even when these challenges are behind us, there will be more new threats, followed by new solutions. The cycle doesn’t end. We’ll get through all of this and then it will be another era’s turn to think it’s living through the end days. And like us, hopefully they will be wrong.

Hawkins: In the quest to get machines to think like humans — setting aside the fact that doing so may lead them to pull a Deep Blue, as with Garry Kasparov, when it learned from him and then turned around and punked him — what’s the point? Why can’t we just let Alexa be Alexa, a sexy-voiced living room Mata Hari, but not dangerous like HAL in 2001: A Space Odyssey? What if they end up owning us?

Yonck: The general drivers and challenges of developing all forms of AI are much the same as for any other technology. As Kevin Kelly and others have pointed out, when a technology’s time comes, it will almost surely be developed. This is why we frequently see ideas like calculus, the light bulb, television, etc., all being invented multiple times in the same time period. Once the supporting knowledge, infrastructure, economics and other factors are in place, discoveries and inventions will happen. So, the question becomes, do you opt out and competitively fall behind or do you try to get out front of others who are also developing something?

A lot of AI is still in very rudimentary stages. Alexa and Siri still don’t have the intelligence of a four-year-old. Companies are driven by the fact that these assistants and other applications of AI could be so much more useful to us if they had something closer to human-level intelligence, even if it isn’t achieved the same way ours is.

Could threats come from all this? Of course. In the case of certain dangerous inventions like nuclear weapons and potentially artificial general intelligence, or AGI, what would be the cost of opting out and falling behind those who chose to continue development? Additionally, as an early developer, there’s the opportunity to guide a technology and implement foundational safeguards that may be critical to our safety in the future. Better that such development takes place in the light rather than in the shadows.

Now it’s been said that AGI could be different since it might render many such precautions useless. After all, we didn’t have to deal with bombs that had their own unique agency and motivations. As Bostrom, Tegmark and others have discussed, this may prove to be a very thorny problem. But in the meantime, I’d argue we have far more to fear from how some people will misuse AI than how AI itself could one day act.

Hawkins: Do you see a future of downloadable minds and a hivemind that dissolves the distance between individual and group — like a swarm mind that we share and a mind that allows us to access an Alexandrian Library experience?

Yonck: There are a whole range of possible future minds and forms of intelligence that could become possible with the development of a highly integrated two-way brain computer interface. But while very rudimentary BCIs have been tested, we’re still a long way from realizing something like this. Possibly a very long way. We’re still not close to really understanding the brain’s language—if we can even call it that. The variation between people’s individual brains will make this even more challenging. Additionally, while the brain is an organ that at its most fundamental levels engages in layer upon layer of highly complex, repetitive functions, the mind and its memories are emergent properties that result from the complexity this creates. Integrating technology at that level may be beyond us for a very long time. Perhaps centuries.

In the near term, however, I think we’ll have BCIs that allow us to issue relatively simple commands that search the Internet or offload a problem to the cloud, much as we already do with smartphones. The results could be returned to us via smart glasses, contact lenses or auditory implants.

The problem with the fantasy of instantly downloading a library or fluency in a foreign language or proficiency in kung fu is that we appear to have fairly fixed hard limits for neural input. Language, for instance, consistently tops out at about 41 bits per second. Some studies say 50 or 60 bits per second, but this still shows we have a big bottleneck. A range of preprocessing and compression methods reduce the amount of data being received through our vision and auditory systems. But we just may not be equipped to take in data much more rapidly. Perhaps more advanced neural prosthetics may one day help overcome this to some degree. We’ll see.

Hawkins: In your previous book, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence, you posit the notion of the age of emotional machines. Why would that be important enough to develop? Some folks say that the advantage of machines working with humans is that they are, by and large, logical and precise and augment our sense of rationality. Also, it kind of sounds like the films AI or Lucy. How do we vet the programmers?

Yonck: We’re developing these systems to be tools for us. Yet, as AI advances, we have to remember that every kind of intelligence and mind will differ from human minds simply because they begin from different starting points. This goes for any species, and it will certainly be the case for any intelligence that results from AI. These systems are useful to us in that they’re able to do what we can’t. But as AI becomes more capable of understanding our world, we’ll need it to better align with human values.

One of the ideas I explore in Heart of the Machine is how central emotion is to the way we ourselves think, how we assign value in the world, and where we decide to put our attention. Without emotion, this all goes awry. It’s not the simplistic duality of emotion versus logic we find in old science-fiction plot lines. Emotion is as much a part of our intelligence as our ability to reason logically. Probably more so. Because of this, I’ve suggested that developing an ability to assign value in response to past, present and anticipated future conditions may be crucial to one day developing an advanced AI. Implementing it will require advanced technology assessment methods. We can’t expect this to be the same as our own biological, endocrine-driven systems. Without a flesh-and-blood body, an AI would need to deal with this matter quite differently.

Here is Richard Yonck’s TED Talk, “How technology transforms human intelligence.”

John Kendall Hawkins is an American ex-pat freelancer based in Australia. He is a former reporter for The New Bedford Standard-Times.