The Arrival of A.I. (No-One Is Ready For It): Part 2

AI Chatbots

There was a bit of news in the AI world this week, this time coming from Facebook’s FAIR (Facebook Artificial Intelligence Research) division. Here’s the short version: two of Facebook’s AI chatbots created their own language (one indecipherable to humans) and were subsequently shut off by the team managing them.

Got your attention, right? Maybe? Well, here’s the long version:

Facebook’s FAIR division created two AI bots named Bob and Alice, whose ultimate purpose was to engage in conversation with people. You can breathe a sigh of relief (somewhat), as this all took place in a testing environment. Your selfies and check-ins were safe from robot judgement! Presumably.

The funny business happened when FAIR’s researchers gave these bots a task: negotiate a trade between themselves using hats, balls, and books, each of which had a value assigned to it. They were to barter and trade, and while doing so, use the data from each trade to improve their tactics, compounding those improvements as they continued.
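To make the task concrete, here’s a toy sketch of the kind of game they were playing. Everything in it (the item counts, the values, the function names) is an illustrative assumption on my part, not FAIR’s actual code:

```python
import random

# A toy version of the negotiation game described above: two agents
# divide a shared pool of items, and each agent privately values the
# item types differently. All names and numbers are illustrative.

POOL = {"hats": 3, "balls": 2, "books": 1}  # items to be divided

def private_values():
    """Each agent gets its own hidden value for every item type."""
    return {item: random.randint(0, 5) for item in POOL}

def propose_split(pool):
    """A naive opening move: claim a random share of each item type."""
    return {item: random.randint(0, count) for item, count in pool.items()}

def score(values, share):
    """An agent's reward is the total value of whatever it keeps."""
    return sum(values[item] * count for item, count in share.items())

# One round: Alice proposes a split, Bob keeps the remainder.
alice_values, bob_values = private_values(), private_values()
alice_share = propose_split(POOL)
bob_share = {item: POOL[item] - count for item, count in alice_share.items()}

print("Alice keeps", alice_share, "worth", score(alice_values, alice_share))
print("Bob keeps  ", bob_share, "worth", score(bob_values, bob_share))
```

The real bots replayed rounds like this over and over, nudging their dialogue strategies toward whatever maximized their score. That feedback loop is the “improve their tactics” part.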

And so their love story began. Alice and Bob, negotiating hats, balls, and books like the cutest couple EVER. But that’s when things got a little weird for their human observers.

At some point in this love story, Alice and Bob reached “Bae” level and started communicating in a language they created just for each other. Not in zeros and ones, or some Klingon-style dialect, but in a shorthand version of English, like “LOL” and “ROFLMAO”. The problem with their shorthand is that humans can’t make sense of it. Here is a portion of their conversation, which Facebook provided publicly:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

On the surface, this is rather benign. After all, these are AI programs, and it’s logical that they would modify the language they are given to maximize efficiency. We do the exact same thing with LMAO and TTYL; it’s all about conveying information more efficiently. The problem is that, given the potential for AI to do incredible things, we need to know what they are LOL’ing about, and more importantly, what LOL even means in their language.

Let me be clear: I am not assigning any nefarious motives to Alice and Bob (yet). They are not human, and thus have no “motives” beyond what they are programmed to do. Nothing more, nothing less. They were told to become efficient negotiators, and that’s exactly what they did. Shortly after discovering what had happened, FAIR shut down Alice and Bob like a disapproving parent. Not because the team was afraid of an AI takeover, but because Alice and Bob were communicating in ways their programmers hadn’t anticipated. Simply put, no one thought to tell Alice and Bob, “You can only conduct these negotiations in legible and comprehensible English.” They were shut down not out of fear, but because the results of their improved negotiating tactics couldn’t be readily dissected and examined, which negated the exercise. Recall, these bots were destined to converse with humans.
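What would that missing instruction even look like in code? Here’s a rough guess, and only a guess: fold a fluency penalty into the reward, so drifting out of readable English costs more than it saves. The bigram-repetition check below is a crude stand-in for a real English language model; every name and number here is hypothetical:

```python
def degeneracy(utterance):
    """Fraction of word bigrams that are repeats -- a crude stand-in
    for a real measure of English fluency."""
    tokens = utterance.lower().split()
    bigrams = list(zip(tokens, tokens[1:]))
    seen, repeats = set(), 0
    for bigram in bigrams:
        if bigram in seen:
            repeats += 1
        seen.add(bigram)
    return repeats / max(len(bigrams), 1)

def constrained_reward(trade_value, utterance, weight=5.0):
    """Negotiation payoff minus a penalty for degenerate language."""
    return trade_value - weight * degeneracy(utterance)

# Alice's drifted shorthand scores worse than plain English
# for the exact same trade outcome:
print(constrained_reward(4, "balls have zero to me to me to me to me to me"))
print(constrained_reward(4, "you can have zero balls, i keep them all"))
```

A real system would score fluency with a trained language model rather than a repetition heuristic, but the shape of the fix is the same: make readable English part of the objective, not an unstated assumption.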

This demonstrates the biggest danger with AI. Humans are biologically wired to be emotional. We can’t accurately anticipate these kinds of outcomes because we do not think the way AI “thinks”. What humans might consider “extreme” or “immoral” means nothing to a bot that doesn’t grapple with messy emotions. Imparting morals to something designed to be as efficient as possible is akin to preaching veganism to a hungry lion.

Bottom line is this: efficiency is a beautiful thing. The difference between human efficiency and software efficiency is that humans temper efficiency with empathy. Take Alice and Bob’s objective: determine the most efficient method of negotiating using only efficiency and logic. On the human side, this task inherently involves a degree of competition. I might quickly reach the conclusion that the most efficient way to negotiate is to not negotiate at all, at least initially. I should acquire the other party’s goods using subterfuge and manipulation, THEN negotiate, and become quite efficient, because the other party has nothing of value with which to trade; they no longer have power.

That story has played out countless times in business, and that is with human efficiency; now imagine how that would play out with AI.

Sleep tight! 🙂