
The Singularity

In mathematics, a singularity is a point at which the normal rules break down and an object does not behave in the manner that is typically expected. The simplest example is the function f(x) = 1/x: at x = 0 the function is undefined, and as x approaches zero its value grows without bound towards positive or negative infinity. In physics, a singularity can theoretically occur in a similar manner, most commonly exemplified in the physics of black holes. Karl Schwarzschild described this in 1916 with his famous equation for what is now called the Schwarzschild radius: every object with mass has an associated radius that corresponds to the event horizon of a black hole of that mass. According to this equation, any object compressed to a physical radius smaller than its Schwarzschild radius, whilst maintaining the same mass, would create a situation in which nothing within that radius could escape, because the escape velocity would exceed the speed of light. In this scenario, not even light would be able to escape the Schwarzschild radius – creating a black hole.
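To make the Schwarzschild radius concrete, here is a minimal sketch – assuming only rounded standard values for G and c – that computes r_s = 2GM/c² for two familiar objects: compressed to within roughly three kilometres, the Sun would become a black hole, as would the Earth at roughly nine millimetres.

```python
# A minimal illustration of the Schwarzschild radius, r_s = 2GM / c^2.
# Constants are rounded standard values; the Sun and Earth are chosen purely as examples.

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light in vacuum, m/s

def schwarzschild_radius(mass_kg):
    """Radius below which an object of the given mass would form a black hole."""
    return 2 * G * mass_kg / c ** 2

print(f"Sun   (1.989e30 kg): r_s ≈ {schwarzschild_radius(1.989e30) / 1000:.2f} km")  # ~2.95 km
print(f"Earth (5.972e24 kg): r_s ≈ {schwarzschild_radius(5.972e24) * 1000:.1f} mm")  # ~8.9 mm
```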

 

There is, however, another kind of singularity that has become a favourite topic of debate – the technological singularity. This theory is based on the notion that, one day, an artificial super-intelligence will be created that is so far superior to its creators that it will begin a cycle of self-learning and self-improvement, spiralling beyond human control. But opinions on how close we are to this technological singularity – or whether it is even possible – vary greatly.

 

RAYMOND KURZWEIL

The first mention of a technological singularity came in the 1950s and was, aptly and perhaps somewhat tellingly, made by the Hungarian-American mathematician, physicist and computer scientist John von Neumann, who is widely recognised as a founding figure in the world of computing. Von Neumann was no stranger to potentially world-ending technological advancements: as one of the leading scientists on the Manhattan Project during World War II, he helped to produce the nuclear weapons that were dropped on Hiroshima and Nagasaki in August 1945. In the 1950s, a peer of von Neumann, Stanislaw Ulam, recalled a conversation with him that “centred on the accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

 

In recent years, perhaps the most prominent voice on the singularity has been that of computer scientist and futurist Raymond Kurzweil. Among Kurzweil’s predictions were the disintegration of the Soviet Union, driven in part by the advancement of technologies such as cellphones, and the explosion in internet usage from the 1990s onwards. He also foresaw that chess software would beat the best human player by the year 2000 – a feat achieved in 1997, when IBM’s Deep Blue beat world champion Garry Kasparov in a globally broadcast match. As for the technological singularity itself, Kurzweil predicts that by the year 2045, “the pace of change will be so astonishingly quick that we won’t be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating.”

 

Kurzweil’s prediction, as described in his book The Singularity is Near (2005), relies heavily on a theory he calls “the law of accelerating returns.” He argues that the singularity is closer than many think, because humans tend to reason in terms of linear progression. Yet, as he describes in his book, technology – like many of our most important advancements – progresses at an exponential rate, a reality observed by Gordon Moore, co-founder of Intel, in 1965. Moore observed that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented, and he predicted that this would continue for the foreseeable future. In recent years the pace of technological development has slowed, but only slightly, with the capacity of computer chips roughly doubling every two years, according to what has become known as “Moore’s Law”. At certain times the rate of this growth seems linear, Kurzweil explains, because the first half of the curve, viewed in hindsight, is much flatter than what comes after the “elbow” of the curve. Beyond that elbow, advancements that previously took decades to show major progress can suddenly double and then quadruple in effectiveness, usability and adoption. One such advancement – one that many have deemed slow and laborious in its development and practical applications over the last few decades – is artificial intelligence. Just as Kurzweil explains, when we stand at this point in time and look back at the rate of progress in the field of AI since the middle of the 20th century, it can certainly seem linear in nature, if not pedestrian.
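As a rough illustration of why exponential growth looks deceptively flat before the “elbow”, the sketch below compares Moore’s-Law-style doubling with a hypothetical linear trend, starting from the roughly 2 300 transistors of Intel’s first microprocessor in 1971; the linear baseline is an arbitrary assumption chosen purely for contrast.

```python
# Illustrative sketch of Moore's Law-style doubling versus linear growth.
# The 1971 starting point (~2,300 transistors on Intel's 4004) is a well-known
# reference; the linear trend is a hypothetical baseline chosen only for contrast.

def transistors_exponential(year, base_year=1971, base_count=2300, doubling_years=2):
    """Transistor count assuming a doubling every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

def transistors_linear(year, base_year=1971, base_count=2300, added_per_year=2300):
    """A hypothetical linear trend adding a fixed number of transistors per year."""
    return base_count + added_per_year * (year - base_year)

for year in (1971, 1991, 2011, 2021):
    print(f"{year}: exponential ≈ {transistors_exponential(year):>14,.0f}   "
          f"linear ≈ {transistors_linear(year):>9,.0f}")
```

After fifty years the doubling curve sits in the tens of billions – broadly where the largest modern chips actually are – while the linear one has barely crept past a hundred thousand.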

 

However, what has predominantly been holding AI back is not a lack of ideas or useful implementations, but a shortage of both computing power and the data necessary to achieve deep learning. In recent years both of these necessities have grown substantially, giving the major players who have collected these vast amounts of data a seemingly endless number of possibilities for pushing artificial intelligence into every industry imaginable, as well as into almost every sphere of our daily lives. In many ways, if we are indeed currently situated at the “elbow of the curve”, the conditions do seem perfect for AI to accelerate exponentially in the coming years – perhaps even in time to realise Kurzweil’s expectation of a technological singularity by the year 2045.

 

The idea of technological singularity is not, however, one that everybody views with as much optimism as Kurzweil, owing to the widespread fear that machines may gain the intelligence to one day rise up and, without empathy or compassion, overcome and annihilate the human species. This fear has a long history and has been manifested in many cultural expressions – from literature, to film, to art – and it has only grown more intense as the power of artificial intelligence has accelerated.

 

This fear, however, is predicated on the belief that humans and machines are completely separate and competing entities. It ignores the possibility that we could be moving towards the singularity that Kurzweil describes – one that envisions the ultimate advancement of human intelligence through the corporeal merging of human and machine. Kurzweil believes that as humans continue to evolve, we will inevitably reach a point where computational capacity will supersede the raw processing power of the human brain, enabling us to move beyond the present limits of our biological bodies, and our minds.

 

Kurzweil’s enthusiasm for the singularity is echoed in a 2017 article in Informatica: An International Journal of Computing and Informatics, in which researchers Mikhail Batin, Alexey Turchin, Sergey Markov, Alisa Zhila and David Denkenberger assert that there will be three stages of AI development, and that we are currently only in the first stage of “narrow” AI. They predict that what will follow is artificial general intelligence and then super-intelligence, by which point the possibility of uploading human minds and creating disease-fighting nanotechnological bodies will lower the probability of human death to close to zero. Dr Aubrey de Grey, biomedical gerontologist and chief scientist at the Strategies for Engineered Negligible Senescence (SENS) Research Foundation, raised eyebrows when he proclaimed that the first person who will live to be 1 000 is probably already alive today. De Grey’s long white beard and sometimes eccentric opinions have perhaps – to some extent – made him easy to dismiss, especially within the scientific community. But the acceleration of medical research as a result of AI – and the possibilities it opens for radically improved approaches to health and medical care – is not easy to disregard. Companies and projects such as Insilico Medicine, IBM’s Medical Sieve, Google DeepMind Health and Turbine.ai are already working to advance disease detection and treatment.

 

The enhancement of the human body through technological means has intrigued us for years and has been explored extensively in fiction through the character of the cyborg – from the James Bond supervillain Dr No and his bionic metal hands, to the replicants in Ridley Scott’s Blade Runner, Molly Millions in William Gibson’s Neuromancer, and Tony Stark in Marvel’s Iron Man comics, to name a few. Literature scholars have highlighted that the recurring use of the cyborg character reflects our concerns about the changes in human nature and identity taking place through the blending of technology and corporeality. The willingness of writers to mix elements that are human with those that are not has been hailed as a potentially significant transgressive act, producing characters whose identities are fluid and permeable – for they are neither strictly human nor machine. The power of the cyborg was articulated perhaps most famously by Donna Haraway in her academic essay A Cyborg Manifesto (1985), in which she argues that the figure of the cyborg allows for the possibility of envisioning a world where the human and the non-human merge seamlessly.

 

Haraway’s argument is an important one to consider, as fiction increasingly becomes reality. Humans have been augmenting themselves for years and it could be argued that, in fact, almost all of us are already cyborgs to some extent. We use synthetic drugs to improve our health, to stave off life-threatening disease, and to enhance our performance both mentally and physically. Artificial devices are routinely used to improve our eyesight or hearing, to give people new limbs, or to keep hearts beating. As a species, we are getting smarter, running faster, and living longer thanks to artificial augmentation. We have always been altering the limits of what the human body is capable of. It is therefore perplexing that we should fear the more advanced physical enhancements that artificial intelligence is likely to facilitate in the not-too-distant future. Perhaps it is because in the case of pacemakers and prosthetics, we feel that technology is only restoring a normal level of physical functionality, rather than enhancing the natural body. But this is not entirely true.

 

Before he made headlines for his involvement in a dramatic murder, South African double-amputee Oscar Pistorius was the subject of an international debate of a rather different kind. In 2008, the International Association of Athletics Federations (IAAF) banned Pistorius from competing against able-bodied runners, claiming that his prosthetic limbs gave him an unfair advantage over human legs. These artificial appendages were reported to make Pistorius more energy efficient than able-bodied sprinters and to reduce the time between strides to such a degree that researchers estimated he would have as much as a seven-second advantage in a 400m race. These small but significant advancements had given a damaged human body more functionality than the average one – and, collectively, we tend to view such enhancements with suspicion.

 

A far more advanced development in prosthetics is robotic limbs, which rely on brain-computer interfaces to help amputees regain an unprecedented level of movement and control over their bodies. Jesse Sullivan – who had both of his arms amputated following an electrocution accident – underwent a nerve graft to join the nerves of his shoulder to his pectoral muscles, and a computerised prosthesis was joined to his body where his right arm used to be. Using thought control, Sullivan is able to contract the muscles in his chest, and the computer in the arm interprets these signals to perform the desired motion. When he thinks “close hand”, the chain of communication through his body – and its artificial addition – works seamlessly and his prosthetic hand closes. Moreover, researchers in Utah announced in 2017 that they had developed a hand that can simulate over 1 000 unique touch sensations in the brain of a user, enabling them to interact with their environment in a tactile sense. These prosthetics are far more technologically advanced than Pistorius’s legs – and yet, society views them as medical marvels.
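As a loose illustration of the signal interpretation described above – a toy sketch rather than the actual clinical system Sullivan uses – the snippet below maps a window of normalised muscle-activation readings to a hand command using hypothetical thresholds.

```python
# A toy sketch (not the actual clinical system) of how a myoelectric prosthesis
# might map muscle-activation signals to motor commands: a window of normalised
# EMG samples is smoothed and compared against hypothetical thresholds.

from statistics import mean

CLOSE_THRESHOLD = 0.6   # hypothetical activation level interpreted as "close hand"
OPEN_THRESHOLD = 0.2    # below this, the hand is relaxed open

def decode_intent(emg_window):
    """Return a hand command from a window of normalised EMG samples (0.0-1.0)."""
    activation = mean(emg_window)   # simple smoothing over the window
    if activation >= CLOSE_THRESHOLD:
        return "close hand"
    if activation <= OPEN_THRESHOLD:
        return "open hand"
    return "hold position"

print(decode_intent([0.70, 0.80, 0.75, 0.82]))  # strong contraction -> "close hand"
print(decode_intent([0.05, 0.10, 0.08, 0.12]))  # relaxed muscle    -> "open hand"
```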

 

Generally, our first response is one of fear when futurists like Kurzweil talk about the singularity, or when businessmen like Elon Musk talk about a neural lace enabling human brains to interface directly with computers. These ideas may seem outlandish, and yet to consider them so is to draw a fairly arbitrary line between which augmentations to the human body are acceptable and which are not. Humans have always found ways to enhance themselves, and as AI advances, the possibilities for pushing beyond our present physical limits are only growing. What is perhaps more interesting than the current debate about the potential impact of AI is to question ourselves as human beings more deeply – to ask what we mean when we speak of human intelligence. After all, if we believe that we can create an artificial intelligence, then it is incumbent upon us to have a better grip on the meaning and history of what we understand as human intelligence itself.