
The Black Box Problem

In Ancient Greece, the most respected source of advice was the Oracle of Delphi, or the Pythia – the priestess of Apollo. She could be found perched on a tripod seat in the Temple of Apollo in Delphi, which was built around a sacred spring and considered to literally be the centre of the world. Seated above a fissure from which the divine pneuma or spirit was said to emanate, and gazing into a dish of water from the spring, she would receive enquirers who believed that the messages she communicated came directly from the gods. Her advice was reserved for only the most important questions, and once given it was accepted without reproach and followed without question. No military attack was planned, no political decision passed, no investment made, and no great journey undertaken without consulting the Oracle first.


Modern historians have since found evidence suggesting that the pneuma that emerged from the fissure in the temple was actually a mixture of gases, produced naturally by the geology of the region and capable of intoxicating anyone who breathed them in. Although the ancient Greeks believed deeply in the advice the Oracle provided, the reality is that they were likely basing their most important decisions on the words of a woman intoxicated by something far more banal than the spirit of a god. We can only wonder whether, in her drugged state, she had any clear understanding of the problems posed to her, or of the consequences of the advice she gave her anxious audiences. But people simply trusted that what she advised would help them.


THE ORACLE OF DELPHI

Today, the oracle we consult requires far less ceremony than that of ancient Greece. No goats need to be slaughtered or mysterious fumes inhaled. We do not need to journey up mountains or wait for the right time of year to ask our questions. We only need to input large datasets, coded in a way that makes sense to our diviner, and await its response to the queries we have posed. But as with the Oracle of Delphi, the layman has no idea how our deep learning AI reaches its conclusions. And it has no way to tell us.


Developments in big data and machine learning have rapidly advanced the capabilities of AI in recent years. Every day, machines are learning to perform ever more difficult and impressive tasks, often faster and more accurately than humans can. AI has also reached the point of being able to teach itself how to do things, no longer relying on the rules and commands of programmers. In 2016, for example, technology company Nvidia demonstrated a self-driving car that taught itself to drive simply by watching humans do it: instead of following rules hand-coded by programmers, its deep learning system learned driving behaviour directly from recordings of human drivers. Because this kind of technology is not given explicit rules for its decision-making, the question arises: how does it make its decisions? And can we assume that it will always make choices in line with the formal and informal ethical codes that govern human behaviour? Can we trust it?
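
The general recipe behind such a system is end-to-end behavioural cloning: a network is trained to map raw camera frames to the steering commands a human produced in the same situations, so the "rules" of driving are never written down anywhere. The sketch below illustrates the idea only; the architecture, layer sizes and stand-in data are assumptions chosen for illustration, not Nvidia's actual model or training pipeline.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Toy end-to-end network: dashboard-camera frame in, steering angle out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(48 * 5 * 22, 100), nn.ReLU(),  # 5x22 feature map for a 66x200 input
            nn.Linear(100, 1),                       # predicted steering angle
        )

    def forward(self, frames):
        return self.net(frames)

model = SteeringNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in for a logged dataset of (camera frame, human steering angle) pairs.
frames = torch.rand(32, 3, 66, 200)
human_angles = torch.rand(32, 1) * 2 - 1

for epoch in range(10):
    predicted = model(frames)
    loss = loss_fn(predicted, human_angles)  # learn to imitate the human driver
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Nothing in this loop encodes what a lane, a pedestrian or a stop sign is; whatever the trained network "knows" about driving is buried in millions of weights, which is precisely why its reasoning is so hard to interrogate afterwards.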


In a 2017 MIT Technology Review article, Will Knight highlights that the neural networks that make up AI have become so complex that even the engineers who design them often cannot isolate the reason for any particular action their creation takes. In a bid to better understand how AI “thinks”, Google researchers experimented with a deep learning image recognition algorithm in 2015, altering it to generate images rather than identify features in them. Essentially, the recognition process was run in reverse: instead of extracting features from an image, the network modified the image to strengthen the features it detected. The experiment revealed what the algorithm concentrates on during image recognition, such as a bird’s beak or the scales of an amphibian in an animal scene. The modified algorithm, known as Deep Dream, accentuated these features in often grotesque ways, producing a series of artworks that looked like something from a psychedelic-fuelled nightmare. It over-emphasised certain features whilst under-representing others required to construct the full context of the image. It perceived the picture in an entirely different way from how a human would.
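
In concrete terms, "running the network in reverse" means gradient ascent on the image itself: rather than adjusting the network's weights, you repeatedly adjust the pixels so that a chosen layer's activations grow stronger, exaggerating whatever patterns that layer has learned to detect. The sketch below shows the general technique; the use of a torchvision VGG16 model, the layer index and the step size are illustrative assumptions, not the details of Google's original experiment.

```python
import torch
import torchvision.models as models

# Pretrained classifier standing in for the recognition network
# (torchvision >= 0.13; older versions use pretrained=True instead of weights=).
model = models.vgg16(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate convolutional layer.
activations = {}
model.features[20].register_forward_hook(
    lambda mod, inp, out: activations.update(out=out)
)

# Start from random noise; a real photograph works too.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for step in range(100):
    model(image)
    # Gradient *ascent* on the pixels: make the chosen layer's activations
    # stronger, which exaggerates whatever patterns that layer detects.
    loss = activations["out"].norm()
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0.0, 1.0)
```

The hallucinatory results are a side effect of the technique: the image is pulled towards whatever internal features the network happens to care about, with no regard for whether the result still makes sense to a human eye.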


And this is where the crux of our discomfort with AI lies. In any society, there is a degree of trust that people will act in an altruistic manner towards one another. If someone actively violates this trust, they are considered a criminal and they are separated from society. If someone is unable to demonstrate an understanding of causality, intent and meaning in their actions, we consider them to be insane, and they too are separated from normal society. In either case, we would not trust them to make decisions as important as who receives a loan for a house or what medical treatment will save a life, and we would not let them fly a plane or drive a military tank. AI is increasingly playing a central role in the daily decisions that influence our lives, yet it has not fully demonstrated that it will adhere to the social contract that governs human decision-making. It still needs to earn our trust.


As a result of AI’s “black box problem”, the idea that it should be a legal right to interrogate an AI about how it reaches its decisions is gaining momentum globally, reflected in laws such as the US’s Equal Credit Opportunity Act, the EU’s General Data Protection Regulation, and France’s Digital Republic Act. The central problem these regulations are trying to solve is one of trust, but the source of this distrust does not, in fact, reside within AI itself. AI’s continued development should always be based on the assumption that it is nothing more than an algorithmic extension of the data that we feed it. What this means is that whatever biases are inherent in our data will re-emerge as the same biases in the AI’s behaviour.
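
A toy example makes the point concrete: train a model on decisions that were themselves biased, and the model faithfully reproduces the bias. The data below is entirely synthetic and the "group" feature is hypothetical; the sketch only illustrates the mechanism, not any real lending system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
income = rng.normal(50, 10, n)   # identically distributed in both groups
group = rng.integers(0, 2, n)    # hypothetical protected attribute

# Historical decisions: identical incomes, but group 1 was approved half as often.
base_prob = 1 / (1 + np.exp(-(income - 50) / 5))
approved = rng.random(n) < base_prob * np.where(group == 1, 0.5, 1.0)

# Train a model on those historical decisions.
X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

predictions = model.predict(X)
print("approval rate, group 0:", predictions[group == 0].mean())
print("approval rate, group 1:", predictions[group == 1].mean())
# The model approves group 1 far less often: the bias in the historical
# data has re-emerged, unexamined, in the model's behaviour.
```

The model has done exactly what it was asked to do, which is the point: it passes no judgment on the pattern it has learned.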


Artificial intelligence does not instinctively possess a guiding set of principles or social mores. It does not pass judgment on the results it produces; it simply processes the data it is given. Much like the group of boys in William Golding’s Lord of the Flies (1954), who lose all sense of right and wrong once they are stranded on a deserted island without adult supervision or the morals of a social contract, AI has the potential to spiral out of control. And much like those adolescents, who end up murdering their own friends in an almost trance-like mob mentality, artificial intelligence requires the structures of human reasoning and moral decision-making to ensure that the biases that exist in the data do not wholly corrupt the system, with potentially horrific outcomes.