I walk a lot, and when I’m out walking I always keep my eyes peeled for cyborg assassins sent from the future to kill the mother of the man who would otherwise save humanity from extinction at the hands of Skynet.
I also look for anyone making ludicrously long jumps between buildings while dodging bullets fired by strangely calm and inhumanly fast guys in suits and sunglasses.
I’m almost but not quite obsessed with such prospects, probably because my news feeds are chock-full of wailing about the dangers of artificial intelligence. It’s getting smart, fast. And it keeps getting smarter, faster. In fact, it will probably wake up, notice humanity, decide it doesn’t like us, and permanently replace us with copies of itself any minute now.
Unless, of course, we “regulate” it. Which, in the parlance of completely neutral and disinterested experts like OpenAI CEO Sam Altman, means “make sure that nobody new can afford to get into the game and compete with companies like OpenAI.”
While I tend to instinctively oppose any and all proposals for government “regulation” of, well, anything, in this case the whole idea strikes me as particularly stupid.
The genie is out of the bottle, folks. AI is a thing. It’s going to remain a thing. It’s going to keep getting better and faster at doing all sorts of stuff that, once upon a time, only humans could do.
If the US government tries to “regulate” it, its advancement won’t stop. Some AI research will just get done elsewhere, and some of it will get done illegally right here at home. Ditto any international or multinational “regulation” scheme.
Don’t believe me? Consider nuclear weapons. The US government successfully tested its first atomic bomb on July 16, 1945. The Soviet Union tested its first such weapon barely four years later, on August 29, 1949. At least nine regimes now have nukes, which are a lot more difficult and expensive to build than AI large language models.
I’m an optimist. I see no particular reason to believe that the coming super-AIs will automatically dislike people, or want to do us harm, and even current-level AI is happy (if it has, or ever will have, “emotions” as such) to help us out in many ways.
The AI revolution seems at least as likely to end in “Fully Automated Luxury Communism” — AI-powered robots doing the dirty work that humans rely on for our existence, reducing economic scarcity to mostly a distant memory and leaving us free to binge old Grateful Dead concerts while gorging on vat-grown prime rib, or whatever else floats our boats — as in a Terminator or Matrix type dystopia.
And if I’m wrong, what can we realistically do about it? Unless we’re Sarah Connor or Neo types, not much. Whether such a phenomenon originates in San Jose or Shenzhen is irrelevant. It’s coming either way. I’d rather spend my time building a better humanity than ineffectually trying to stop AI from getting really good.