Regulatory Capture Won’t Stop the Singularity

In a May 15 talk in Toronto, OpenAI CEO Sam Altman called for a “global licensing and regulatory framework” for artificial intelligence (AI). As I write this, he’s preparing to offer similar recommendations in testimony before a US Senate subcommittee.

The whole idea of “regulating AI” fails on at least three levels.

Level One: Regulation wouldn’t prevent the development of AI to some notional “singularity” point beyond which it surpassed humankind (and could, if it chose, control or even destroy us). If that’s going to happen, it’s going to happen no matter what we do.

Level Two: Even a “global” regulatory framework wouldn’t work — some regimes would openly ignore it, others would secretly evade it, and the regimes which did the best job of ignoring/evading it would enjoy the benefits of AI before, and to a greater degree than, other regimes.

Level Three: Regulation would be VERY effective at one, and only one, thing: protecting the current big players (like, say, OpenAI) from competition. Any set of government AI regulators would consist of “experts” in the field — “experts” on their way to or from lucrative jobs in the very industry they’d be regulating. If you don’t believe me, just look for yourself at any other highly regulated field (securities, aviation, and “defense,” to name three) and at the revolving doors between the regulatory authorities and the regulated industries.

If regulation won’t stop technological singularity — and the accompanying obsolescence or even extinction of humankind — what will?

Nothing.

“Unfortunately,” J. Mauricio Gaona writes at The Hill, “AI singularity is already underway. … the use of unsupervised learning algorithms (such as Chat-GPT3 and BARD) show that machines can do things that humans do today. These results, along with AI’s most ambitious development yet (AI empowered through quantum technology), constitute the last warning to humanity: Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity.”

I’m not sure why Gaona considers that “unfortunate,” or why his recommendation is the same as Altman’s — ineffectual government regulation that won’t prevent it.

We’ve been hurtling toward the “singularity” for at least 3.3 million years, ever since one of our ancestor hominins (probably Australopithecus or Kenyanthropus) started using tools to make their work easier.

Over those millions of years, we’ve continuously improved our tools … and our tools have continuously improved us. We’re not really the same animal we were before the automobile, let alone before the wheel. We can do things our grandparents, and those hominins, never dreamed of.

Once we started developing tools that could crack nuts better than us, speak across greater distances than us, travel faster than us, etc., it was inevitable that we’d eventually develop tools which could think better than us.

And having now done that, we’ll have to accept the consequences. Which may not be wholly negative. Maybe our AI descendants will like us and choose to assist us in continuing to improve our lives, instead of merely superseding us.

Thomas L. Knapp is director and senior news analyst at the William Lloyd Garrison Center for Libertarian Advocacy Journalism (thegarrisoncenter.org). He lives and works in north central Florida.