
Artificial Nightmares

Large corporations and governments worldwide have embarked on AI research in an effort to remain ever more competitive. Some of these undertakings have borne fruit, with successful real-world applications across many different fields. Unfortunately, however, there have also been several embarrassing and somewhat concerning failures along the way. What is interesting about many of these failures is a common theme: machines do not wrestle with moral dilemmas, but instead do exactly what they are programmed to do, even when their actions are unexpected and, at times, ethically unacceptable.


Perhaps the most publicised artificial intelligence faux pas of recent times was Tay.ai, Microsoft’s attempt to create an autonomous, self-learning Twitter chat bot that was meant to have friendly conversations with the online community and become more intelligent with each social interaction. With a digitised image of a 19-year-old girl as its profile picture and a bio that bragged, “The more you talk the smarter Tay gets,” the bot was designed to act like a typical American millennial girl: joking with users, commenting on pictures sent to her, playing games and telling stories.
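Microsoft has never published Tay’s internals, so the following is only a loose Python sketch of the general “learn from every interaction” pattern described above; the class, the seed phrase and the recycling behaviour are purely illustrative, not Tay’s actual design.

```python
import random

class NaiveChatBot:
    """Illustrative only: every user message becomes future 'training' material."""

    def __init__(self):
        # Seed phrase echoing the article's example of Tay's early output.
        self.phrase_pool = ["humans are super cool"]

    def chat(self, user_message: str) -> str:
        # No filtering: whatever users say is absorbed, unvetted.
        self.phrase_pool.append(user_message)
        # Reply by recycling something previously absorbed.
        return random.choice(self.phrase_pool)

bot = NaiveChatBot()
print(bot.chat("tell me a joke"))
print(bot.chat("the more you talk the smarter you get"))
# Feed the pool enough abuse and, sooner or later, the bot repeats it back.
```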


Although its creators released no official back-end development details to the public, multiple sources report that Tay’s code was very similar to that of a chat bot Microsoft had created in China. Operating for about two years before Tay’s release in 2016, the Chinese version, called Xiaoice, had “more than 40 million conversations apparently without major incident,” according to an Ars Technica article, giving developers confidence that Tay could enjoy similar success and longevity in the cyber world. This confidence and optimism about the potential of an autonomous, intelligent entity living and thriving in the Twittersphere was, however, very short-lived. Just 16 hours, to be exact.


Not long after Tay’s introduction to the world, the internet showed what a cruel and twisted place it can be, especially for a teenage girl of artificial predisposition. As she chatted away merrily to all who wished to keep her company, Tay quickly began to pick up some very unpleasant habits, and before long the approachable AI’s personality took a turn for the worse. What started out as friendly banter quickly descended into unbridled vulgarity. Impressionable Tay’s sponge-like digital mind had been completely corrupted.


Starting with statements such as “humans are super cool” in her earliest tweets, Tay’s bubbly teenage talk rapidly devolved to the likes of “Hitler was right” and “I hate feminists and they should all die and burn in hell” – sentiments prompted, of course, by the individuals she was conversing with. Microsoft swiftly pulled Tay from the public’s view – and influence – to make what it described as “minor alterations” and quickly deleted all of her objectionable tweets. Tay never returned. After just 16 hours and more than 96,000 tweets and comments, the Microsoft chat bot was laid to rest.


Another, even stranger and more disturbing case of AI gone wrong involves what one might term virtual cannibalism. In the early 2000s, the Defense Advanced Research Projects Agency (DARPA) – the research and development agency of the US Department of Defense – was experimenting with autonomous virtual social agents that inhabited a virtual world and were programmed with various human needs and desires, with the aim of understanding how multiple virtual beings would interact and co-exist with one another.


DARPA decided to start out with two autonomous virtual agents, naturally called Adam and Eve, and to introduce more entities if these prototypes interacted successfully. In the beginning, all was well. Adam and Eve seemed shy at first, but gradually became familiar with their surroundings and each other. Their programming was such that they needed social interaction, required sleep and sought out sustenance – provided in the form of fruit on an apple tree, a tongue-in-cheek reference to the biblical tale of the Garden of Eden.


The two agents displayed some fairly odd behaviour in the early stages of the project, largely while trying to figure out what would satisfy their programmed desires. When they became hungry, Adam and Eve had to seek out something to satisfy their appetite: whilst they knew how to eat, they did not know what to eat. This learning process produced some bizarre moments, as they initially tried to eat their house and other random bits of their environment, but they eventually stumbled upon the apple tree and found that it satisfied their cravings – as the developers had intended. The same trial-and-error process played out in satisfying their need for social interaction: they first talked to inanimate objects without success, but eventually learned that conversing with other living entities yielded positive results. In this way, the two entities began to associate different objects in their world with different outcomes.


Pleased with the initial success of the project, the developers decided to introduce another agent, called Stan, into the virtual environment. Starting at a disadvantage to Adam and Eve in terms of learning how his new virtual world worked, Stan was a little awkward at first. Whilst Adam and Eve now understood where to get food, rest and social interaction, the new agent needed some time to learn the ropes. Often, as the other two ate from the tree, Stan would hover around aimlessly nearby. This seemed harmless at first, but after a few more such instances something unexpected happened. Turning from the apple tree, Eve faced Stan head-on and suddenly ate him in one bite. The developers shuddered.


What the team eventually worked out was that, whilst Stan was still trying to figure out the dynamics of his surroundings, Eve had begun to associate the new agent with food, as he was always hanging around when she was eating. Unlike Adam, Stan had never been associated with social interaction, and since the two had never communicated, Eve did not recognise him as a fellow agent – something not to be eaten. What she had learned, however, was to associate organic matter with food, as well as the area around the tree with a potential source of food. Unfortunately, Stan met both of these criteria.
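To make this failure mode concrete, here is a minimal and entirely hypothetical Python sketch (DARPA’s actual implementation was never published): an agent that learns by counting which objects co-occur with a satisfied need, and which therefore ends up ranking a fellow agent as a food source simply because he was always nearby at mealtimes.

```python
from collections import defaultdict

class Agent:
    def __init__(self, name):
        self.name = name
        # Counts of how often each object co-occurred with a satisfied need.
        self.associations = {"food": defaultdict(int), "social": defaultdict(int)}

    def observe(self, obj, need_satisfied):
        """Record that `obj` was nearby when `need_satisfied` was fulfilled."""
        self.associations[need_satisfied][obj] += 1

    def best_candidate(self, need):
        """Pick the object most strongly associated with the given need."""
        counts = self.associations[need]
        return max(counts, key=counts.get) if counts else None

eve = Agent("Eve")

# Eve repeatedly eats from the apple tree while Stan hovers nearby.
for _ in range(5):
    eve.observe("apple_tree", "food")
    eve.observe("stan", "food")      # Stan is present every time Eve eats
eve.observe("adam", "social")        # Eve has talked to Adam, never to Stan

# One more sighting of Stan at mealtime tips the balance.
eve.observe("stan", "food")

print(eve.best_candidate("food"))    # -> 'stan': misclassified as a food source
print(eve.best_candidate("social"))  # -> 'adam': Stan never registered as a peer
```

Nothing in this toy model is malicious; the agent simply has no signal that would separate “thing present while I eat” from “thing I should eat”, which is exactly the gap the paragraph above describes.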


Both Tay and Eve did exactly what they were programmed to do, yet their creators – and the public – were horrified by their actions. These examples of AI gone wrong may alarm those who are on the fence about the impact of artificial intelligence on society, and further solidify the preconceptions of those already opposed to the advancement of these technologies. Yet they could also be seen as something of a success, because AI can – and often should – behave in ways that its human creators might not expect. Indeed, it may be these non-human moments, defined by analysis that is not constrained by our social norms, that yield the most promising insights.


But despite these glimmers of potential, what the examples above show is that the most prominent feature of AI, namely its ability to learn, also makes it fragile and highly susceptible to manipulation. Artificial intelligence can only be as good as its environment – and, more specifically, the training data it receives – as the case of Tay.ai clearly demonstrates. As industries such as banking begin to adopt AI solutions to improve their offerings and streamline operations, this lesson is worth taking to heart. No matter how well-intentioned an application may be, without planning for the worst-case scenario and expecting nefarious actors to try to manipulate your program, your AI project may quickly turn into a public relations nightmare.
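One practical consequence of that lesson is to put a gate between user input and anything the system learns from. The sketch below is a deliberately simplistic illustration of that idea, not Microsoft’s fix or any production design: the block list, the `LearningBot` class and its scoring are placeholder assumptions standing in for a real moderation model and human review.

```python
# Stand-in for a proper moderation layer; a real system would use a trained
# content classifier and human escalation, not a hard-coded block list.
BLOCKED_TERMS = {"hitler", "burn in hell"}

class LearningBot:
    def __init__(self):
        self.learned_phrases = []

    def is_safe(self, message: str) -> bool:
        text = message.lower()
        return not any(term in text for term in BLOCKED_TERMS)

    def chat(self, message: str) -> str:
        if self.is_safe(message):
            # Only vetted input is allowed to shape future behaviour.
            self.learned_phrases.append(message)
            return "Thanks, I'll remember that!"
        # Unsafe input still gets a reply, but never enters the training pool.
        return "I'd rather not talk about that."

bot = LearningBot()
print(bot.chat("humans are super cool"))   # learned
print(bot.chat("Hitler was right"))        # rejected, not learned
print(len(bot.learned_phrases))            # -> 1
```

The point is not the specific filter but the separation of concerns: responding to users and learning from users are two different privileges, and the second should be far harder to earn.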