October 17, 2018 - Monocle Research Department
What we know for certain is that artificial intelligence is a powerful tool in mankind’s never-ending quest to become more dominant as a species. Whether we should respect or fear artificial intelligence’s future potential is less certain. In the last decade, AI has made great strides in its practical applications, infiltrating almost every sector and industry in some way or another. And yet, despite the many new and exciting uses that AI has achieved, it is perhaps surprising that even the most “intelligent” machines still cannot perform functions that we as humans take for granted every day.
While AI has become very good at performing certain tasks – such as recognising images, for example – other natural human endowments have largely escaped the grasp of machines. One such barrier to achieving what we deem true intelligence is the difficulty machines have in grasping perhaps the most human of all traits – the natural acquisition of language and the multitude of social cues intertwined with conveying a specific message from one person to another. This failure of artificial intelligence to truly understand the nuances and contexts of seemingly simple linguistic utterances has been made painfully obvious in a number of highly publicised and extremely embarrassing instances. The most obvious and distressing of these is perhaps the case of Microsoft’s Tay.ai Twitter chatbot, which after only half a day online turned from a sweet and chatty teenage girl – as per her programming – into a Nazi-loving misogynist, thanks to the nefarious influences she encountered while talking to her fellow Twitter users.
If language has largely eluded machines, replicating human vision – itself no small feat – has proved far more tractable. Image recognition is one of the earliest pursuits of artificial intelligence research – dating back to the creation of the Perceptron in 1957 – and it is a domain in which AI has thrived, to the extent that machines are now in some respects better at the task than humans. This has been evident since 2015, when the winning team’s application in the annual ImageNet Challenge classified images into categories more accurately than the average person. By 2017, human-level image classification was exceeded by 29 of the 38 competing teams, all of which achieved an accuracy of over 95% in assigning images to one of a thousand prescribed categories. And in 2018, given the success of these now almost-commonplace applications, the difficulty of the challenge is set to increase greatly, with the building of a database of 3D images that competitors’ computer vision programs must recognise.
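To make the figures above concrete, the ImageNet Challenge is typically scored by “top-5 accuracy”: an image counts as correctly classified if its true category is among the model’s five highest-scoring guesses. A minimal sketch of that metric, using made-up scores rather than real ImageNet data, might look like this:

```python
import numpy as np

def top_k_accuracy(scores, labels, k=5):
    """Fraction of images whose true label is among the k highest-scoring classes."""
    # indices of the k highest-scoring classes for each image
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = [label in row for row, label in zip(topk, labels)]
    return float(np.mean(hits))

# three images scored over six classes (toy numbers, not a real model's output)
scores = np.array([
    [0.10, 0.50, 0.20, 0.05, 0.10, 0.05],
    [0.30, 0.10, 0.40, 0.10, 0.05, 0.05],
    [0.20, 0.20, 0.10, 0.10, 0.30, 0.10],
])
labels = np.array([1, 2, 4])  # the true category of each image
print(top_k_accuracy(scores, labels, k=1))  # → 1.0 (each top score matches)
```

The real challenge uses a thousand categories and over a million images, but the arithmetic of the metric is no more complicated than this.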
Computer vision – perhaps the most closely human trait artificial intelligence has achieved thus far – has opened up a world of possibilities. In medicine, for example, AI can today diagnose the easily curable but potentially blinding illness of diabetic retinopathy, as well as pneumonia, more accurately than doctors, by applying machine learning techniques to thousands of retinal scans and chest x-rays. In the increasingly competitive and well-funded industry of self-driving cars, computer vision has given vehicles the ability to sense their environment with great accuracy and to navigate ever more safely, all without any input from a human driver. And in banking, AI has made great strides in identifying patterns pertaining to fraud, tax evasion, terrorist funding and money laundering, tackling an unprecedented mountain of financial and personal data far better and far more quickly than any human possibly could. As governments worldwide have sought to police their financial systems, individual banks – as de facto extensions of the state – have been coerced into monitoring and reporting abuses of the financial system on the government’s behalf. This has placed a significant burden of cost and time on banks, which must still operate as private enterprises in a liberal free-market economy and seek equity investment in ever more turbulent times – and it is AI that could relieve them of this significant stress.
Along with these remarkable breakthroughs, however, has come a natural fear of the implications this technology may have in both the short run and the long run. This fear of the unknown – and specifically of an artificial intelligence, or even a super-intelligence – has manifested itself in the many “evil AI” characters that have pervaded literature, television and film in the 20th and 21st centuries. From Frankenstein to The Terminator, this popular image of an artificial super-intelligence gone rogue demonstrates our deep-rooted anxiety towards our own creations.
For the Canadian-American cognitive scientist Steven Pinker, this fear is nothing more than a projection of our own megalomaniacal tendencies as human beings. For Pinker, the idea of a super-intelligence that instinctively tries to subjugate or destroy the human race is illogical: it is in fact we humans – products of our fiercely competitive Darwinian design – not machines, who naturally seek to dominate, domesticate and destroy our environment and the beings in it. Furthermore, Pinker argues that, as with the greatest and most important technological advancements that have come before, we are in fact very effective at curbing the dangers of any new technology. To illustrate this point, he describes how the car – at first very dangerous, lacking safety features entirely – has over time been made safer through many intuitive adaptations, such as bumpers, airbags, ABS, seatbelts and, more recently, intelligent technologies including automatic emergency braking and driver-inattention warning systems.
Perhaps the most significant danger of artificial intelligence lies in the misunderstanding of, and hysteria around, the concept itself. It is too commonly perceived as some kind of external super-technology, rather than what it actually is: relatively simple and accessible mathematics applied to very large data sets. Our main concern should be that these datasets are now largely dominated by Big Tech, which continues to collect more data about us than has ever been collected before. Lawmakers have yet to come to terms with the significant power that resides in our personal data, or with how best to make use of the institutions that form the framework of our economic systems, including banking. In the meantime, Big Tech has exploited the lack of effective regulation to continue to plunder our personal information – and to do so, ironically, with our tacit approval. It is not AI itself that we should fear, but rather the oxygen that it breathes, which is in fact our data.
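The claim that much of applied AI is “relatively simple and accessible mathematics” can be made concrete with a sketch. Logistic regression – a decades-old statistical technique that still underpins many real-world classification systems, including fraud screens of the kind described above – amounts to little more than repeated multiplication and averaging over a data set. All numbers below are toy values, not real financial data:

```python
import numpy as np

def sigmoid(z):
    # squashes any number into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Train a logistic regression classifier by plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)            # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)   # average gradient of the loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w                  # nudge the weights downhill
        b -= lr * grad_b
    return w, b

# toy "transaction" data: one feature, where larger values are labelled suspicious (1)
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0, 0, 1, 1])
w, b = fit_logistic(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print(preds)  # the model learns to separate the two groups: [0 0 1 1]
```

The mathematics here is genuinely modest; what gives such systems their power – and what raises the concerns that follow – is the scale of the data they are fed.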
For this reason, it is now more critical than ever to dispel the misconceptions associated with artificial intelligence. Until a machine can acquire language and the full range of human emotion, it seems that artificial intelligence is just that – artificial. It is thus nothing more than a useful tool and nothing to be feared. What we must fear, however, is the colonisation of our personal data, and what we must scrutinise is the balance between regulation and the invasion of our privacy – and banking, as an extension of the state, is on the very frontlines of this war.