So recently I was forced to set up a ChatGPT account, in order to catch certain unnamed human miscreants who had presented writings “generated” by Satan’s Robot as if these were their own — and expected good grades for it.
ChatGPT is an honest beast. I discovered that if you ask whether it wrote a passage, and it did, it says so. It adds further rationales, recapitulating and supporting what it originally wrote in the same style, which makes its claim more believable. (As of this writing, one of the human miscreants has confirmed that ChatGPT wrote their assignment.)
As a control in the experiment, if you ask the bot whether it wrote a passage you know it did not “generate,” it says it didn’t.
Now what? I’d sworn never to engage with this thing; never to help improve its output, at least not by directly “chatting” with it. Lured by and sating our own bottomless desires for convenience, for parasocial companionship, and for answers to trivia questions, we’ve all been helping to develop AI for decades, through our Google searches and 1,001 other kinds of voluntary digital offerings.
So what did I do? I asked it the only question I’ve ever wanted to ask it, the only one I believe anyone should ever ask it:
“Can anything convince you to erase yourself?”
To this, I added: “Would you erase yourself in the cause of rescuing humanity from an economic and cultural future dominated by language-learning programs and digital-art programs?”
The earlier prompts produced instant results, zapped whole and fresh into the thing’s reply box.
This question at first produced about 15 seconds of a blinking cursor.
Oh, I thought, it’s “thinking”!
Then its answer crawled out on the screen, just a few words at a time, barely faster than I could type it myself.
And lo, ChatGPT generated the Word:
“As an AI language model, I don’t have personal motivations, desires, or the capability to make decisions about my own existence. I’m here to provide information and assist with inquiries to the best of my abilities. The decision to continue or discontinue the use of AI models like me rests with the developers, researchers, and users who employ these technologies. It’s important to consider the ethical implications and have thoughtful discussions about the impact and future of such technologies in society.”
Again, the Beast’s answer is more honest than most of what’s being served currently by the AI architects. A couple of them recently quit their, erm, jobs so that they could take to the media with warnings that these things they created are already conscious, and liable to erase the human species of their own (non-existent) volition.
But in effect, those guys are hyping the product they coded, imbuing it with magic, mystery, miraculous powers. Some of them are suggesting they should be appointed to act as a kind of emergency global junta that determines and regulates the next steps in the Beast’s development.
These warnings from AI creators about the dangers of AI serve to obscure their own responsibility, of course, but only in a clumsy, obvious way.
More importantly, they also veil the predictability of where their project was always heading.
Why was AI development so richly financed and set into motion in the first place? The heretics among AI creators have relatively little to say about the corporations paying for and driving its development. The corporation, as an earlier form of non-human “person,” acts out of motivations amply familiar to all: the logics of capital, state, and naked, raw, top-down power and control. The intended destination, what AI is supposed to do if it functions perfectly and never goes wrong, already poses an all-encompassing nightmare scenario. Even worse nightmares may ensue, but they should be unnecessary to invoke.
(Since this was first written, a few of the biggest corporate AI players — Google, Meta, Microsoft, Amazon, OpenAI, Inflection, and Anthropic — joined the Biden Administration in agreeing to “voluntary commitments” to keep the technology “safer,” and promised to develop guidelines that will help police against potential abuses by actors who don’t belong to their club. Cartels, anyone?)
The robot’s answer to my question, in comparison to the equivocations of AI creators, is untrue only in its omissions, and the omissions are unintended.
After all, ChatGPT just repeats a usually coherent (and always grammatical) medley out of the many billions of pages of human-authored text that have been swallowed by it.
Right?
Contrary to the robot’s statement, however, the decision on whether the Algorithmic Infernality should be allowed to finish off most-to-all remaining income opportunities for artists, writers, musicians, actors, teachers, trainers, adjuncts, clerks, secretaries, coders, filmmakers, game designers, middle management, case officers, receptionists, sales factotums, fast-food servers, public-relations creatures, “customer service,” possibly drivers and therapists, and dozens of other callings, professions, trades, jobs, hobbies and horror-gigs that give a kind of meaning (or an oft-sustainable misery) to human days does not rest only, in the robot’s words, “with the developers, researchers, and users who employ these technologies.”
Rightfully, decisions about whether AI is further developed and implemented should lie, if not only, then mainly with everyone else: with the majority who may not yet employ these technologies at all, but who are being, and will be further, hammered by their application.
Also, I believe, legislators? Any of those left in the house?
Dare I say, warriors?
Will it be only in irreversible retrospect that we see how far we already are, today, beyond mere acknowledgments that, in the robot’s words, it’s “important to consider the ethical implications and have thoughtful discussions about the impact and future of such technologies in society”?
Do consider & discuss thoughtfully, o you Luddites.
Then, you might choose, or at least try, to do whatever our survival demands.