Pandora was the first woman, whom Zeus, father of gods and men, schemed to use to tyrannize and, possibly, please men. The metallurgist god, Hephaistos, constructed Pandora with the advice and gifts of the rest of the gods. Myth says that the gods gave Pandora a Pithos / Jar full of virtues for the wellbeing of the Greeks. They told her she should never open the Pithos. Somehow, and possibly by error, Pandora opened the Pithos and the virtues flew away, save hope. The result was misfortune: nasty wives, poverty, endless work. Hesiod, the second greatest epic poet after Homer, said that men would do well to marry women for a secure old age, though he warned that a bad wife was a daily torture. The Pithos, the box of Pandora, is still half full. Now humans, usually industrialists, militarists, and billionaires, open the secret box of Pandora. The new technology gift that has just escaped the box of Pandora comes under the name of Artificial Intelligence (AI).
AI is endangering the world
On November 17, 2023, the board of a computer / AI company, OpenAI, fired its CEO, 38-year-old Sam Altman. His fall from power shocked many people within and outside the computer industry. Here was a very young man in charge of OpenAI, a “groundbreaking” computer company that gave birth to a machine of potentially gigantic powers: political, military, social. These powers already expressed themselves in deceiving and misinforming people the world over, especially misleading them during elections. Moreover, that propaganda machine could “create” art, write books, construct videos, and, in all likelihood, synthesize drugs, chemicals, pandemics, and chemical and biological weapons. The risks from AI are endless. AI could resurrect the dark ages and the Frankenstein monster. In the societies of the twenty-first century, the risks from AI include “jobs getting automated out of existence or autonomous warfare that grows beyond human control.”
AI insiders were concerned. Ilya Sutskever is one of those insiders who has expressed diplomatic anxiety about the direction of OpenAI. He said he was “increasingly worried that OpenAI’s technology could be dangerous.” He felt Altman “was not paying enough attention to that risk.” Fears circulated that AI engineers “were building a dangerous thing.” “It doesn’t seem at all implausible that we will have computers — data centers — that are much smarter than people,” Sutskever said. “What would such A.I.s do? I don’t know.”
The implications of all these potentially dangerous products of Artificial Intelligence are unfathomably bad. AI might put millions of artists, podcasters, writers of fiction and nonfiction books, articles, plays and movie scripts out of work – permanently. In the field of art, for example, AI generates “images, videos, sounds and other forms of art on its own.”
We cannot say that Altman had figured out in advance the exact strategic purpose of the machine he had in mind. In 2015, he raised funds from Elon Musk. He guided his San Francisco company into inventing real power, potentially for the Pentagon and, certainly, for billionaires. No wonder he testified before a Senate Judiciary subcommittee in order to neutralize concerns about the risks of AI. He offered broad encouragement for the “regulation” of the uncontrollable gifts flying out of the new Pandora’s box or, more exactly, chatbot.
I watched that Congressional discussion. Altman’s remarks were vague and confusing. He admitted the AI technology was just getting out of its birth pangs, so it made mistakes. However, he assured the Senators, the users of ChatGPT were sophisticated people, so, don’t worry, he said to them. In a podcast with the New York Times, two days before he was fired, Altman was far more open and slightly more honest. He admitted his magic box emits some bad stuff. He said “the main thing they [ChatGPT varieties] are bad at is reasoning. And a lot of the valuable human things require some degree of complex reasoning. They’re good at a lot of other things — like, GPT-4 is vastly superhuman in terms of its world knowledge. It knows more than any human has ever known. On the other hand, again, sometimes it totally makes stuff up in a way that a human would not.”
Moreover, Altman said during the podcast that regulating AI is complex, though necessary for what he called the “frontier systems,” where, he argued, “there does need to be proactive regulation. But heading into overreach and regulatory capture would be really bad…. Regulate us, regulate the really capable models that can have significant consequences, but leave the rest of the industry alone.”
However, in his testimony before Congress, Altman was silent about the bad stuff of AI. As usual, the senators were mostly confused, looking at Altman as the miracle boy out of the deserts of Utah who had discovered new sacred texts that would reveal the secrets for the control of the planet. They were dumbfounded. They listened to the sirens of profits and power and did nothing, not even asking Altman to send them a regulatory proposal. Altman had none. He simply exploited their ignorance and fears. In fact, during his podcast with the New York Times, he revealed his true thoughts, all but dismissing any fears that AI might do humanity in. “I actually don’t think we’re all going to go extinct,” he said. “I think it’s going to be great. I think we’re heading towards the best world ever. But when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits. And that’s like a pretty consensus thing. I don’t think that’s a radical position.” In other words, Altman thought, but did not spell out, that if we accepted potential extinction with the construction and possession of nuclear bombs, why not do the same with AI?
Altman cannot be taken seriously. He is playing games. He sugar-coats his ambitions and proposals. It’s telling how he finished his interview in the New York Times podcast: “I think,” he said, “A.I. is good. Like, I don’t secretly hate what I do all day. I think it’s going to be awesome. I want to see this get built. I want people to benefit from this…. I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we’re not careful about it, it can be quite disastrous. And so we have to navigate it carefully.” This is not much different from the delusions of those who built the atomic / nuclear bombs.
Altman had no trouble convincing Microsoft to invest more than $13 billion in the realization of his chatbot. As a result, Microsoft became the owner of 49 percent of OpenAI, “the high-flying artificial intelligence start-up.”
Altman and his colleagues gave their chatbot machine a deceptively abstract name. They called it ChatGPT, short for Chat Generative Pre-trained Transformer. The acronym sounds like an abstraction, but it hides enormous political and technological powers. It’s basically a talking, writing, and imaging machine. Its appearance in 2022 caused an “industrywide A.I. frenzy.”
The frenzy was about the money and power Altman’s ChatGPT represented. It started making huge sums the moment it hit the market. “When OpenAI released ChatGPT last November,” said the New York Times reporter Cade Metz, “the chatbot attracted hundreds of millions of users, wowing people with the way it answered questions, wrote poetry and discussed almost any topic tossed its way.”
This almost perfect advertisement came with the company’s proclamations of AI’s goals: that it was designed with good will, for the improvement of the wellbeing of mankind.
The financial success of Altman’s chatbot added a new name to AI: generative. That is, Generative Artificial Intelligence. The new name is more explicit, less artificial. You are supposed to assume that machines give birth to speech, writing, pictures / images, and who knows what else is hiding in this new chatbot of Pandora? “The result of more than a decade of research inside companies like OpenAI and Google,” said Metz, “these technologies are poised to remake everything from email programs to internet search engines to digital tutors.”
Meanwhile, Microsoft saved Altman from disgrace. It hired him immediately, along with his friend and colleague Greg Brockman, to set up an independent advanced research lab on AI at Microsoft. Not only that, but some 550 employees of OpenAI protested the firing of Altman and made clear they were ready to quit OpenAI and join Altman at Microsoft. The chaos at OpenAI sent waves of anxiety through the computer / Artificial Intelligence world – and beyond. “If you care about whether powerful A.I. systems might someday threaten human survival,” said Kevin Roose, technology reporter for the New York Times, “all of that is wrapped up in the drama at OpenAI, the country’s most prominent maker of artificial intelligence…. [OpenAI] is also unusually ambitious and saw its role as building a digital superintelligence that would eventually become more powerful than humans.”
The OpenAI-Altman drama came to some resolution when outside influences, especially investors, brought Altman back to OpenAI. Yet the tragedy of AI remains: a growing black cloud over a world threatened even more by the chaos of climate change.
Constructing superintelligent machines that make their own decisions is suicidal. We already employ autonomous drones that decide what to destroy or whom to kill. They are powered by Artificial Intelligence and, to some degree, are shaping the future of warfare. Giving life-and-death decisions to software programs is utterly stupid. Eventually, those machines will kill their creators. We don’t need to resurrect Mary Shelley’s Frankenstein. Before it’s too late, dismantle all AI machines and projects — worldwide. We definitely don’t need another ticking nuclear bomb.