Bob & Alice’s Cryptic AI Talk

Source: Bob & Alice by https://deepai.org/machine-learning-model/cyberpunk-portrait-generator

With the seemingly unstoppable rise of artificial intelligence (AI), we, as human beings, can talk to machines that pretend to understand what we say and react to what we tell them. This is referred to as human-to-machine communication.

This form of communication changed in 2017, when Facebook set two AI-driven chatbots to talk to each other. They were called Bob and Alice. It was one of the first times – as far as we know – that an AI talked with an AI.

Despite their rather deceptive, human-sounding names, Bob and Alice were not humans but machines. Yet, they engaged in mutual communication – for a short while.

In current AI, human language is increasingly used as a means of communication between humans and computers, and between computers themselves. Both rely on conversational interfaces, whether text-based or spoken. For some time, this has been a standard feature of consumer technologies such as smartphones.

In other words, AI’s human-like language is used at the level of the human-machine interface. The underlying process can even generate linguistic outputs – speech – from computational code.

Reaching beyond that was a pair of chatbot-like figures. Bob and Alice were developed by Facebook’s AI Research Lab in 2017. Yet, both were shut down after they began to converse in a non-human and incomprehensible language. Here is the key point:

it was a new non-human language, a language of their own.

This event in 2017 marked the end of communication being an exclusively human-to-machine affair. After 2017, an AI could talk to an AI. Machine-to-machine talk had become a reality.

Bob and Alice challenged what we have thought for more than 2,000 years. Up until 2017, Western philosophy’s idea of language – at least since the philosopher Aristotle – had tended to see the ability to use language as an essentially human form of reasoning and expression. And it is that which distinguishes humans from animals – as the cruelty documented in the BBC-backed documentary Project Nim, about the chimpanzee Nim Chimpsky, shows.

With AI, this has changed, and it was not animal-to-human communication. The emergence of two computational AI agents enabled them to generate innovative language outputs. Yet very soon after the AI showed this ability, Facebook shut down the pair of experimental AIs – even though, or perhaps because, they had created their own language.

Some have argued that Facebook’s AI researchers were forced to pull the plug in a state of utter panic. The rather sensationalist claim was that the experiment had to be stopped because the AI-to-AI talk had created an entirely new language.

And worse, their language was indecipherable to human beings. It was framed as rogue AI with creepy potential. The AI apocalypse was in sight, with your toaster about to become a killer robot.

Media sensationalism cemented the image of a secretive AI super-intelligence turning on its human creators. The reality was far from it. In reality, their so-called super-intelligent conversation boiled down to a few lines of transcript:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

What became relatively clear was that the AI “talk” was a highly efficient form of conversation in which each sought to be perfectly clear to the other. Even more interesting, they followed the script. Bob and Alice followed the purpose they were created for: a simulated negotiation.

Facebook’s goal in the field of machine intelligence was to create a technology that gave Facebook customers a better way to communicate. Set up that way, the AI was to imitate human-like linguistic behavior. From that experiment, Facebook’s programmers found five things:

1. using their algorithms, Bob and Alice were able to reach a negotiated agreement;
2. they even established some sort of fluent language;
3. both, actually, made relatively poor negotiators;
4. both were excessively willing to compromise; and,
5. their performance became optimized for negotiating with each other.

On the upswing – and this does not indicate current AI becoming AGI or ASI – the two AIs developed innovative strategies for their mutual negotiation in order to achieve good outcomes. Both AI machines were also given the same collection of items to discuss or, better, to negotiate over: two books, one hat, and three balls. Their task was to divide these between themselves by negotiating.
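For illustration only, here is a minimal sketch of what such an item-division negotiation loop could look like. The pool of items matches the published description of the task (two books, one hat, three balls), but the valuations, the concession schedule, and the acceptance threshold are assumptions invented for this example; it is not Facebook’s actual system, which learned its negotiation strategies rather than following hand-written rules like these.

```python
# A toy sketch of an item-division negotiation, loosely modelled on the
# published description of the task: two books, one hat, three balls.
# Valuations, the concession schedule, and the acceptance threshold are
# invented for illustration; they are not Facebook's actual setup.

ITEMS = {"book": 2, "hat": 1, "ball": 3}  # the shared pool to divide

def utility(values, allocation):
    """Total worth of an allocation under one agent's private values."""
    return sum(values[item] * count for item, count in allocation.items())

def propose(keep_fraction):
    """Claim roughly `keep_fraction` of each item: a deliberately crude rule."""
    return {item: round(count * keep_fraction) for item, count in ITEMS.items()}

def negotiate(values_a, values_b, min_share=0.4, max_rounds=10):
    """Alternate offers; the responder accepts once the items left to it
    are worth at least `min_share` of its best possible outcome."""
    keep = 0.9  # start greedy, concede a little every round
    for r in range(max_rounds):
        proposer, responder = (values_a, values_b) if r % 2 == 0 else (values_b, values_a)
        offer = propose(keep)
        leftover = {item: ITEMS[item] - offer[item] for item in ITEMS}
        if utility(responder, leftover) >= min_share * utility(responder, ITEMS):
            return offer, leftover  # deal reached
        keep -= 0.1  # concede and try again
    return None  # no agreement within the round limit

if __name__ == "__main__":
    # Private per-item valuations, assumed for the example.
    alice = {"book": 1, "hat": 3, "ball": 2}
    bob = {"book": 3, "hat": 1, "ball": 1}
    print(negotiate(alice, bob))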

Beyond all that, both AI machines used forms of expression that were eventually transformed to the point where the language they used was no longer recognizable as a human language. Instead, both AI bots started to invent practical and highly efficient code-words for themselves.

Strangely, both also showed a habit of drifting away from human language and into a highly coded form of communication. In short, they limited their communication to what was useful for the purpose for which they were developed.

Yet, the highly coded words they used were no more than a kind of shorthand. Most importantly, they were “not” introduced for the purpose of secrecy. Instead, the shorthand came about to make communication more efficient – as the bots were instructed to do by their algorithms.

What was often termed a new language should – more accurately – be called a kind of data compression. Far from being mysterious or secretive, this is a rather common practice. It is an entirely normal thing to do in communication systems. There is nothing untoward about it.
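As a rough illustration of that point, and nothing more: the sketch below treats a repeated phrase as something that can be losslessly compressed into a count and expanded back again. The sample line is adapted from the published transcript; the compress/expand shorthand and the marker format are assumptions made up for this example and say nothing about the bots’ internal representation.

```python
import re

def compress(utterance, phrase="to me"):
    """Replace n consecutive repetitions of `phrase` with a compact
    marker like '<to me x8>'. A purely illustrative shorthand."""
    pattern = re.compile(rf"(?:{re.escape(phrase)}\s*)+")
    def shorten(match):
        n = len(re.findall(re.escape(phrase), match.group(0)))
        return f"<{phrase} x{n}> "
    return pattern.sub(shorten, utterance).strip()

def expand(utterance):
    """Invert the shorthand: '<to me x8>' becomes eight repetitions again."""
    pattern = re.compile(r"<(.+?) x(\d+)>")
    return pattern.sub(lambda m: (m.group(1) + " ") * int(m.group(2)), utterance).strip()

if __name__ == "__main__":
    # Adapted from the published transcript (trailing fragment dropped).
    line = "balls have zero to me to me to me to me to me to me to me to me"
    short = compress(line)
    print(short)                   # balls have zero <to me x8>
    print(expand(short) == line)   # True: nothing is lost, only shortened
```

The point of the round-trip check is simply that such a shorthand discards no information: it is compression in the service of efficiency, not encryption in the service of secrecy.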

Indeed, it is entirely plausible that two such bots would drift away from a recognizable language. It was not aberrant behavior for an AI system at all.

Instead, it was rather a logical outcome for machine learning where AI machines have been given the means and motivation to communicate. In other words, the entire “Bob & Alice” affair was a bit like Shakespeare’s Much Ado About Nothing.

Behind the media hype and the sensationalist reporting about Bob and Alice’s so-called secretive new language lurks a rather tedious interaction between machine language and machine code, set up by a specific algorithm.

It was a kind of super-advanced version of what a code actually was before it became computer code, namely the codex – an inscription on wax-covered wooden tablets setting out ancient law.

In other words, Bob and Alice’s code was in fact rather similar to Samuel Morse’s Morse code (1838) – only now embedded in a computer program that could communicate.

Despite all this, it was still the first time ever that an automated coding system of writing had optimized itself according to technical criteria. Yet, it occurred with no regard for semantics. It was functional and technical, without understanding – in short, it was syntax, not semantics. It was simple rule-following.

Almost 90 years after the Entscheidungsproblem (decision problem) that Alan Turing famously answered, Bob and Alice were conditioned to use what was embedded in their coded and logical architecture. They simply reduced the vocabulary of their new language to reflect the condensing logic that underpinned the efficiency of their code. That is pretty much it.

What they did wasn’t miscommunication, and it wasn’t motivated by secrecy. They simply stripped away all redundant aspects of language. What they did was logical, and it was decodable.

Yet, Bob and Alice were able to alter grammatical rules. At the same time, some of the compositional traits of human speech were dropped in order to establish an efficient mode of expression. They eliminated all words that did not help to move them towards a result.

Yet, unlike Orwell’s Newspeak, their reduction of vocabulary was guided by a pre-programmed quest for efficiency. It was not driven by power. For efficiency’s sake, they substituted the diversity of natural language with systematic repetitions, signs that represented numerical values, while significantly altering grammar.

Yet, the speed of this alteration in syntax made their language unrecognizable to humans. It was a language that had distanced itself from human language, evolving at a rather different speed until it became detached from what we recognize as human speech.

Worse for the apostles of the killer robot and the promoters of a looming AI apocalypse, Bob and Alice’s system remained strictly rule-bound at all levels of their two-way communication. In other words, syntax (algorithmic rules) took absolute priority over semantics – the creation of meaning. At all times, what they did was governed by a strict set of syntactic rules that they themselves had co-evolved.

On the upswing, the example of “Bob and Alice” also showed that AI machines are able to produce language outputs that surprise even the writers of their source code.

Still, the idea of the machine – even one with rather repetitive and deceptively unimaginative behavior – may explain why this story became somewhat of an apocalyptic popular fixation.

At its most basic level, it was no more than the simple execution of algorithmic computer code that allowed the syntactical rules of speech to metamorphose.

Bob and Alice’s language was definitely not a work of poetry. It contained not even a hint of what the philosopher Kant would call intention – beyond making language more efficient, as prescribed by the algorithm.

In the end, Bob and Alice demonstrated – despite their so-called new language – that their machine language remained defined by its algorithm. They merely refined computer language to a point unrecognizable to human beings.

Neither became aware of what it was doing, nor did either understand what it was doing. They created neither AGI nor ASI. Nor is the singularity already here. Well, Bob and Alice will not turn your toaster into a killer robot any time soon.

 

Thomas Klikauer is the author of German Conspiracy Fantasies.