AI Q&A

Photograph Source: Pixabay content is made available to you under the following terms ("Pixabay License") – Public Domain

Artificial Intelligence is here, there, and everywhere. Pundits wax eloquent—and sometimes not so eloquent. YouTube videos reveal inner secrets and possibilities—and sometimes stuff that is not so inner, not so secret, and not even possible. AI will destroy us. AI will resoundingly uplift us. What questions actually matter? What answers might matter? Debates rapidly multiply, but confusion reigns supreme.

Among all the noise, here are some questions that seem central.

+ What’s all the fuss? What can today’s AI do that yesterday’s couldn’t?

+ Even just roughly, how does the new AI work? If it isn’t magic, what is it?

+ When AI does things we do, does it do those things the way we do them?

+ Can AI do things we do as well or better than we do those things?

+ When AI can do things we do better than we humans do those things, how much better?

+ And mainly, what are important short run consequences of AI progress? What are long run possibilities of AI progress? And how should advocates of progressive or revolutionary social change respond to AI?

If you know all the answers, congratulations, you are the only person on the planet who does. But despite the state of ignorance and flux, can we say anything with at least some confidence? Let’s take it a bit at a time.

Why the sudden fuss? What can AI do now that it couldn’t do before?

The short answer is a whole lot. Before—let’s say two decades back—machines didn’t trespass overly much on terrain that typically only humans trod. Well, wait. Machines did play games like chess and Go. And machines could act like a mediocre expert on some very specific topics. But two decades later, and mainly in the last six years, and overwhelmingly in the last three years, and actually even just in the last year, and—as I write—even just in the last month or week, machines paint pictures, compose music, diagnose diseases, and research and prepare legal opinions. They write technical manuals, news reports, essays, stories (and even novels?). Machines code software, design buildings, and ace incredibly diverse exams. Right now, in most states, machines could pass the bar exam. For all I know, they have passed medical exams too. Machines provide mental health counseling, elderly care, personal support, and even intimate companionship. Machines converse. They find patterns. They solve complex problems (like protein folding). And as of this week, they collaborate and can even make requests of one another. And much more.

So, is all that what we mean by AI? Yes, because what typically qualifies as artificial intelligence is machines doing things that we humans do with our brains. It is machines doing things that we do mentally or, to use a more highfalutin word, that we do cognitively. And the kicker is that today’s AI, much less tomorrow’s, doesn’t just do mental things in a rudimentary manner. No. Even today’s AI, much less next week’s, much less next year’s, does many mental things as well as nearly all humans, and in some significant respects not only hundreds of times faster but also qualitatively better. And in some cases, with more to come, better than any human does or ever will do these things. Remember when it was big news that a computer program defeated Garry Kasparov, then the world chess champion, in 1997? Well, the program that beat him would be annihilated by current AI, and the same holds for other games. The gap between the best human players of chess, Go, poker, and even video games and the best AI player of each has become enormous. And this differential isn’t just about games.

Even just roughly, how does the new AI work? If it isn’t magic, what is it?

You may find this hard to believe but, beyond some limited observations, the best sources I could find say that no one can fully answer this question. And I mean no one. For example, the AIs that have been trained on English so as to read, write, and converse use, as we have all heard, “neural nets” trained on essentially as much data as can be utilized, which turns out to be millions of books and nearly everything on the internet. Once trained, these AIs generate the next word, and then the next, and so on, to cumulatively fulfill requests made to them for written, graphic, or other responses. Each step the AI takes involves a huge number of calculations. According to some estimates, the most up-to-date trained neural network, GPT-4, includes about 150 trillion numbers, or weights, each associated with connections between nodes that are loosely modeled on neurons found in organic brains. My guess is that that number, 150 trillion, is a loose-lipped, provocative exaggeration that some journalist ran with and which then became false gospel, but even so, we can be quite sure that the true number, not yet released, is incredibly high. Whatever number of numbers characterizes GPT-4, they are there to act on inputs, which is to say to act on the request you make to the AI, a request that is itself first translated into numbers. This “acting on” yields numeric outputs that the AI in turn translates into the text (or pictures or tunes or whatever else) we receive. In the midst of all that calculating, and again by way of the best sources, various additional parameters and features are set by essentially trial and error.

Yes, trial and error. In other words, the engineers didn’t go from GPT-2 in 2019, to GPT-3 in 2020, to GPT-3.5 in 2022, to GPT-4 months later in 2023, by having a steadily enriched theory of their product’s operations and making big changes guided by that theory. No. Instead, on the one hand the engineers simply enlarged the neural net, increasing its numbers of nodes and parameters and watching to see if that improved results, which, so far, it has. And beyond that, the best descriptions I can find say the programmers essentially guessed at lots and lots of possible modest changes, tried out their guesses, retained what worked, and jettisoned what failed, without actually knowing why some worked and others failed. And, yes, that implies that for the most part the programmers can’t answer, “Why did that choice work? Why did that other choice fail?” It also implies that each new version of GPT was due to a combination of modest changes that summed to very large gains, all in very short time spans. But whatever the logic/theory/explanation of AI’s recent success and progress may turn out to be, we do know that the progress in human-level outputs has recently been not just eye-opening but also accelerating.
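
To make the “generate the next word, and then the next” idea above a bit more concrete, here is a minimal, purely illustrative Python sketch. It is a toy: it conditions only on the single previous word and uses a tiny hand-made probability table, whereas a real model like GPT-4 conditions on the whole preceding text and computes its probabilities from billions or trillions of learned weights. The table, names, and sample sentences are my own stand-ins, not anything drawn from an actual system.

```python
import random

# Toy stand-in for a language model: for each previous word, a hand-made
# probability table over possible next words. A real LLM computes such
# probabilities on the fly from its learned weights and the full context.
NEXT_WORD_PROBS = {
    "<start>": {"machines": 0.6, "AI": 0.4},
    "machines": {"paint": 0.3, "write": 0.4, "converse": 0.3},
    "AI": {"writes": 0.5, "codes": 0.5},
    "paint": {"pictures": 1.0},
    "write": {"essays": 0.6, "code": 0.4},
    "writes": {"essays": 1.0},
    "codes": {"software": 1.0},
    "converse": {"<end>": 1.0},
    "pictures": {"<end>": 1.0},
    "essays": {"<end>": 1.0},
    "code": {"<end>": 1.0},
    "software": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Produce text one word at a time, sampling each next word from the
    (toy) probability distribution conditioned on the previous word."""
    word = "<start>"
    output = []
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS[word]
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "machines write essays"
```

The point of the sketch is only the loop structure: pick a next word from a probability distribution, append it, repeat. Everything interesting in a real system lives in how those probabilities get computed.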

When AI does things we do, does it do those things the way we do them?

The old AI most often tried to explicitly embody in its innards lessons conveyed to it by humans who were consulted about their methods in specific domains—say, playing chess, diagnosing certain medical symptoms, or whatever. The insights that programmers gained by talking with experts were then stored by engineers in a database that the AI searched when asked to accomplish some related task. The new AIs instead first “examine” (are trained on) huge arrays of data to arrive on their own at internal arrangements of their vast array of parameters. The resulting arrangement of numbers then accomplishes various ends. It turns out, therefore, that when we convey a request to an AI we are conversing with an incredibly immense array of numbers that in turn acts on input numbers to yield output numbers. Is this how you talk?

Well, there is a problem with definitively answering that question. Mostly, we don’t know how we humans produce sentences, much less how we arrive at views, decisions, etc. We do know that much, probably most, of what happens in us occurs pre-consciously. We also don’t know how AI arrives at its views, decisions, etc. We know that the current AIs use neural networks, have been trained on massive amounts of data so as to set countless parameters, and have then also had human programmers set some additional parameters by trial and error, but beyond that we know nearly nothing of “why” they do well. We do know, however, that whatever the underlying logic may be, AI is accomplishing diverse kinds of tasks in ways that yield ever more human-like results.

So is AI doing what it does the way we do the same things? The highly likely answer is no. Maybe in some respects there are analogies, if there is even that much similarity. And the difference is of considerable scientific interest because it strongly suggests that scientifically understanding AIs will not yield much if any scientific understanding of humans. But for AI as engineering, all this “why” stuff is of much less consequence. The “how it happens” or “why it works” isn’t the central point for AI as engineering. The “what happens” is the point. And while the AI’s “how it happens” is not much or perhaps not at all like for humans, the AI’s “what happens” is very human-like.
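
As a crude illustration of the contrast described above, and under my own simplified framing: the old approach stores rules elicited from human experts and looks them up, while the new approach stores nothing but numbers that act on a numeric encoding of the request. Both functions below are toy stand-ins I invented for the comparison, not real systems.

```python
# Old-style AI (illustrative): explicit rules elicited from human experts
# and stored for lookup.
RULES = {
    ("fever", "cough"): "possible flu",
    ("rash", "itching"): "possible allergy",
}

def old_ai(symptoms: tuple) -> str:
    """Answer by searching the human-supplied rule base."""
    return RULES.get(symptoms, "no rule found")

# New-style AI (illustrative): the "knowledge" is just learned numbers that
# act on a numeric encoding of the input to produce a numeric output.
WEIGHTS = [0.8, -0.3, 1.2]  # stand-in for trillions of trained weights

def new_ai(encoded_input: list) -> float:
    """Answer by arithmetic on numbers; no human-readable rules anywhere."""
    return sum(w * x for w, x in zip(WEIGHTS, encoded_input))

print(old_ai(("fever", "cough")))  # rule lookup: "possible flu"
print(new_ai([1.0, 0.0, 0.5]))     # numbers in, a number out: 1.4
```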

So, can AI do what it does as well or better than we do what it does? When it can do things better than we humans can do them, how much better?

By the factual evidence of AI’s current practice, the answer is that yes, AIs can already do many tasks as well as or better than humans do those tasks. Indeed, AIs can do lots of what we do not just vastly quicker but also qualitatively better. How many humans can create pictures, compose music, read and summarize reports, and write and program better than even today’s AIs can do these things right now? Very, very few. Does current AI make mistakes? Definitely, including many humdingers. Then again, so do humans. And in any event, what matters is the trajectory. Anecdotes about weird failures now are amusing. Assessments of next year are a whole different matter.

GPT-2 wouldn’t have known a legal bar exam from a broom. GPT-3 took a legal bar exam and scored in the bottom 10%. Lots of mistakes. A year later GPT-4 scored in the top 10%. Many fewer mistakes. See the trajectory, not the snapshot at a moment. And this was not compared to random humans plucked off the street. It was compared to law students. What do you think GPT-5 will score next year? What will happen to its number of errors, however many it is still making, when a stone’s throw down the road one neural net will routinely send results to a second to check, and then the first will correct errors reported back by the second before delivering its results to us? Will it be better than 99% of law students? Will all its current silly and easily fact-checkable errors be gone?

On another front, scholars point out that AI doesn’t understand the answers to law board questions the way law students do. And depending on what we mean by the word “understand,” AI arguably doesn’t understand any answers it gives at all. This is true, but would you bet on the AI or on a random student or even on a random law school graduate to get a better score? And not to beat up on scholars, but, really, what does “understand” even mean?

A more general but related technical observation is that GPT-4 does not contain a “theory of language” like what resides in human brains. GPT-4 just contains a gazillion parameters that yield results much as if it had been fed a perfect theory of language. It delivers grammatically sound and compelling text. What does “understand” mean? And does the AI have a “theory of language” even though its “theory” is hidden amidst a trillion numbers? Humans don’t have an explicit “theory of language” either; ours too is hidden deep inside.

So now we come to what matters for policy. What are short and long run consequences that are already happening or that without regulation are highly likely to happen? What’s potentially good? What’s likely bad?

First, I should acknowledge that there is a big unknown lurking over this entire essay and how to assess AI. That is, will it keep getting more “intelligent” or will it hit a wall? Will more nodes and numbers and clever alterations diminish errors and yield ever more functionality, or will there come a point with the neural network approach—perhaps even soon—when scaling up the numbers encounters diminishing returns? We don’t know what is coming because it depends on how much further AIs keep getting more powerful.

So what is potentially good and what is likely bad about AI? At one extreme, and in the long run (which some say is a matter of only a decade or two, or even less), we hear horror predictions about AI enslaving or terminating humanity from thousands of engineers, scientists, and even officials who work with, program, and otherwise utilize or produce AI and who, for that matter, made the big breakthroughs. At the other extreme, from equally informed, involved, and embedded folks, we hear about AI creating a virtual utopia on earth by finding cures for everything from cancer to dementia to who knows what, plus eliminating drudge work and thereby facilitating enlarged human creativity. Sometimes, in fact I suspect pretty often, the same person, for example the CEO of OpenAI, says both outcomes are possible and that we have to find a way to get only the positive result.

In the short run we can ourselves easily see prospects for false voice recordings and phony pictures and videos flooding not just social media, but also mainstream media, alternative media, and even legal proceedings. That is, we can see prospects for massive, ubiquitous intentional fraud, individual or mass manipulation, mass intense surveillance, and new forms of violence, all controlled by AIs which are in turn controlled by corporations that seek profit (think Facebook…), by governments that seek control and power (think your own government…), but also even by particular smaller scale entities (think Proud Boys or even distasteful individuals…) who seek joyful havoc or group or personal advantage. If an AI can help find a chemical compound to cure cancer, it can no doubt find one to kill people highly effectively.

And then there is the question of jobs. It very much appears that AI can or will soon be able to do many tasks fully in place of the humans who now do them, or at the very least will be able to dramatically augment the productivity of the humans who now do them. The good side of this is attaining similar economic output with fewer labor hours, and thus, for example, potentially allowing a shorter work week with full income for all, or even with more equitable incomes. The bad side is that instead of allocating less work but full income to all, corporations will keep some employees working as much as now, but with twice the output, pay them reduced income, and pink-slip the rest into unemployment.

Consider, as but one of countless examples, the roughly 400,000 paralegals in the U.S. alone. Suppose by 2024 AI enables each paralegal to do twice as much work per hour as before. Suppose paralegals in 2023 work 50 hours a week. In 2024, do law firms retain them all, maintain their full pay, and have them each work 25 hours per week? Or do law firms retain half of them, keep them at 50 hours a week and full salary, and fire the other half? And then, with 200,000 unemployed paralegals seeking work and reducing the bargaining power of those who still have a job due to their fear of being replaced, do the law firms further reduce pay and enlarge the required output and work week of those retained, while they fire still more paralegals? With no effective regulations or system change, profit will rule, and we know the outcome of that. And this is not just about paralegals, of course. AI can deliver personal aides to educate, to deliver day care, to diagnose and medicate, to write manuals, to conduct correspondence, to make and deliver product orders, to compose music, to sing, to write stories, to create films, and even to design buildings. With no powerful regulations, if we have profit in command, is there any doubt about whether AI would bring utopia or impose dystopia?
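
To make the arithmetic in the paralegal example explicit, here is a small sketch using the round numbers above (400,000 paralegals, 50-hour weeks, output per hour doubling). The variable names, and the assumption that total output stays constant, are mine.

```python
PARALEGALS = 400_000      # rough U.S. total cited above
HOURS_PER_WEEK = 50       # supposed 2023 work week
PRODUCTIVITY_GAIN = 2.0   # AI doubles output per hour (assumed)

# Total weekly output, measured in old-productivity hours, that must still be produced.
required_output = PARALEGALS * HOURS_PER_WEEK

# Option A: keep everyone, share the remaining work, maintain full pay.
hours_if_shared = required_output / (PARALEGALS * PRODUCTIVITY_GAIN)

# Option B: keep hours (and pay per worker) fixed, cut headcount instead.
workers_if_cut = required_output / (HOURS_PER_WEEK * PRODUCTIVITY_GAIN)

print(f"Option A: everyone works {hours_if_shared:.0f} hours/week")   # 25
print(f"Option B: {int(workers_if_cut):,} keep jobs, "
      f"{PARALEGALS - int(workers_if_cut):,} are laid off")           # 200,000 each
```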

The above enumeration could go on. Incredibly, in the past week, and as far as I am aware not even contemplated a month before, there is now a firm training AI in managerial functions, financial functions, policy making functions, and so on. Or, if there isn’t, might there be next week?

Before moving on from crystal balling the future, we might also consider some unintended consequences of trying to do good with AI. Short of worst case nefarious agendas, what will be the impact of AI doing tasks that we welcome it to do but that are part and parcel of our being human? Let’s even suppose AIs do these functions as well as we do, rather than just well enough for it to be profitable for corporate entities to utilize them in our place.

Day care? Care for the elderly? Psychological and medical counseling? Planning our own daily agendas? Teaching? Cooking? Intimate conversation? If AIs do these things what happens to our capacity to do them? If AIs crowd us out of such human-defining activities, are they becoming like people, or are we becoming like machines?

Try conversing with even the current AIs. I would wager that before long you will move from referring to it as it, to referring to it as he or she, or by name. Now imagine the AIs are doing the teaching, counseling, caretaking, agenda setting, drawing, designing, medicating, and what all—and you are doing what? Uplifted and liberated from responsibilities, you watch movies AI makes. You eat food AI prepares. You read stories AI writes. You do errands AI organizes. Assume income is handled well. Assume remaining work for humans is allocated well. You want something, you ask an AI for it. Ecstasy. And if AI’s development doesn’t hit a wall, this is the non-nefarious utopian scenario.

What is a sensible response to the short and long run possibilities?

We humans have at our disposal something called the “precautionary principle.” First proposed as a guide for environmental decision making, it tells us how we should address innovations that have potential to cause great harm. The principle emphasizes caution. It says pause and review before you leap into innovations that may prove disastrous. It says take preventive action in the face of uncertainty. It shifts the burden of proof to the proponents of a risky activity. It says explore a wide range of alternatives to possibly harmful actions. It says increase public participation in decision making. Boiled down, it says look before you leap.

So, it seems to me that we have our answer. A sensible response to the emergence of steadily more powerful AI is to pump the brakes. Hard. Impose a moratorium. Then, during the ensuing hiatus, establish regulatory mechanisms, rules, and means of enforcement able to ward off dangers as well as to benefit from possibilities. This is simple to say but in our world it is hard to do. In our world, owners and investors seek profits regardless of wider implications for others. Pushed by market competition and by short term agendas, they proceed full speed ahead. Their feet avoid brakes. Their feet pound gas. It is a suicide ride. Yet unusually, and indicative of the seriousness of the situation, hundreds and even thousands of central actors inside AI firms are concerned/scared enough to issue warnings. And even so, we know that markets are unlikely to heed them. Investors will mutter about risk and safety but will barrel on.

So, can we win time to look before corporate suicide pilots leap? If human needs are to replace competitive, profit-seeking corporate insanity regarding further development and deployment of AI, we who have our heads screwed on properly will have to make demands and exert very serious pressure to win them.

This first ran on ZNet.

Michael Albert is the co-founder of ZNet and Z Magazine.