Deep Learning

A True Test of Intelligence

A key area of interest in AI development is natural language processing (NLP), which aims to program computers to communicate with humans through natural language recognition, natural language understanding, and natural language generation. AI developers have had several successes in this area – just think of the popularity of digital assistants such as Siri and Alexa – but they have also faced many challenges. And this is unsurprising, for although we use it every day, language remains a highly mysterious feature of human intelligence.

 

Language has long been a key focus in the debate about human intelligence, because it is only through language that we are able to communicate our thoughts and advance our knowledge collectively. For years, linguists have engaged in heated debates about how it is that we have acquired this crucial skill, with their arguments often falling on one side or the other of a nature versus nurture dispute. In Verbal Behavior (1957), behaviourist BF Skinner famously suggested that language is a learned skill, acquired through a process of operant conditioning. He maintained that children learn language by being exposed to it, imitating it, and then receiving positive reinforcement when they use it correctly. For example, a child learns that if he says “up” when he wants to be picked up, he will achieve the desired outcome. The achievement of the goal – being picked up – reinforces in his mind what the word “up” means, and he will be more likely to use it again in the future in the same scenario.

 

Many people agreed with Skinner’s theory or developed their own similar theories, which supported the idea that language was not innate in humans but learned through our engagement with other people and our environment. American linguist Noam Chomsky, however, took a strongly opposing view, arguing that the human ability to acquire and use language is a biological feature, hard-wired in the neural networks of the brain. This is not to say that children are born with the ability to fluently speak their mother tongue, but rather that children are bestowed with a natural syntactic knowledge, a fundamental ability to understand and apply the rules of language – regardless of whether they will learn to speak English, Spanish, French or any other language from their parents. Chomsky argued that humans are naturally programmed with a universal grammar, and that we need only learn the parochial features of any particular language in order to speak it.

 

Chomsky’s theory gained credibility as it accounted for the fact that children do not have to be explicitly taught every specific word and sentence that could possibly exist in order to use them. Rather, children are exposed to a limited vocabulary in their early years and learn very quickly to apply the rules of their language to construct an infinite number of sentences. Chomsky highlighted that children often utter things they are highly unlikely to have heard from any adult, such as the erroneous application of the “ed” suffix to irregular past tense verbs, as in “runned”. This overgeneralisation of “ed” shows that the child is clearly attempting to apply a rule, rather than simply imitating something they have heard – pointing to an internal aptitude for language.
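
The distinction at stake here – a general rule plus a memorised list of exceptions – can be made concrete with a small, purely illustrative Python sketch; the verbs and functions below are invented for the example and make no claim about how the brain actually stores either the rule or the exceptions.

    # A purely illustrative model of the regular past-tense rule. Applied
    # blindly to an irregular verb, the rule alone overgeneralises, which is
    # the kind of error ("goed", "runned") described above.

    IRREGULAR_PAST = {"go": "went", "run": "ran", "eat": "ate"}

    def regular_past(verb):
        # The general rule: add "d" after a final "e", otherwise add "ed".
        return verb + "d" if verb.endswith("e") else verb + "ed"

    def past_tense(verb, exceptions=IRREGULAR_PAST):
        # A speaker who has memorised the exception uses it; one relying on
        # the rule alone produces the overgeneralised form.
        return exceptions.get(verb, regular_past(verb))

    print(regular_past("go"))   # "goed"  (rule alone, overgeneralised)
    print(past_tense("go"))     # "went"  (rule plus memorised exception)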

 

Chomsky’s argument led to a wave of linguistic research into the biological mechanisms that might facilitate language acquisition, and investigation into which aspects of language may be consistent across different languages. If similarities in the structure of all languages known to exist in the world could be proven, this would further support the idea that all humans are endowed with universal grammar.

 

The idea of universal grammar was further popularised through the work of cognitive psychologist Steven Pinker, who offered an evolutionary perspective on Chomsky’s theory in his 1994 book The Language Instinct. Chomsky had described universal grammar as something innate, a part of our very DNA, and the special feature that elevates humans above other animals. But, unwilling to accept that language is simply part of us, Pinker sought to provide a biological explanation for why humans had acquired this skill, arguing that universal grammar is the result of evolution and natural selection. He agreed that language is a uniquely human ability but posited that it has developed as a specialised adaptation that ensures that we can survive and thrive in our environment – much like a spider’s natural instinct to create a web. He described universal grammar as being representative of the structures in the brain that recognise the language rules and patterns in another person’s speech. This natural affinity speeds up the process of language acquisition in children when they are exposed to language in their external environment and is disassembled to some degree as the child grows up – having mastered language, the brain frees up capacity for other functions, no longer prioritising the learning of language. This is a phenomenon that would be well understood by anyone who has tried to learn a language as an adult.

 

However, it is important to note that the theory of universal grammar has not gone uncontested. Linguistic anthropologist Daniel Everett, for example, claimed to have found a language that does not display the key evidence for universal grammar – namely, recursion, which enables a limited number of words to be combined in an infinite way – in the Pirahã tribe of the Amazon. He maintained that this was sufficient evidence to dismantle the possibility of language being innate, and many other linguists have followed suit, arguing against universal grammar in increasingly technically nuanced debates. That linguistic theory almost always orientates itself in relation to Chomsky’s universal grammar is, however, telling.

 

In fact, academic debate as to whether or not a universal grammar exists has reached fever pitch and, if one reads deeply into the arguments, they can sometimes appear quite petulant and almost personal in many regards. The reasons for this may lie not only in the fact that people’s careers would be somewhat undermined if the existence of universal grammar were to be definitively proven, but also in the fact that a universal grammar is viewed by many as somewhat akin to a linguistic argument for the uniqueness of human intelligence. The debate therefore appears to be less about linguistics itself and more about whether humans are special and distinct from other species that are not bestowed with the so-called gift of language.

 

Learning language

When contemplating one of the great unsolved problems of AI – that of natural language acquisition – it is fundamentally important to note that if a universal grammar were to exist, then no amount of data, neural network engineering, hardware or cloud capacity would ever enable a machine to acquire language as humans do. The only way a machine could ever acquire language would be to solve the problem with a closed-form, parametric solution. To be precise, if a universal grammar exists in humans, then neuroscientists would need to understand exactly how the human brain works in this specific regard, and exactly where in the brain this universal grammar resides, and this would then have to be replicated exactly in a machine. This is something quite distinct from the goal of unsupervised deep learning, which seeks to replicate the process of learning in general rather than to endow machines with an architectural structure containing the specific, preordained rules and parameters that are accessible to human beings and to no other species.

 

On the other hand, if we suppose that people do not have some kind of “language acquisition device” in their brain, as Chomsky referred to it, and that a child learns language by hearing it, then it would seem logical that machines could similarly master human language simply by analysing enough data. But despite the wealth of data currently at our disposal, a number of examples show that NLP remains a challenging area for AI. NLP research has certainly made great strides, not only teaching computers the words that exist in a language and the rules that govern their grammatical combination, but also teaching them when to use certain sentences. One of the first NLP programs, Eliza, was created in the 1960s. Although Eliza could hold a conversation with humans, she lacked any understanding of the exchange, using a pattern-recognition methodology to select responses from a pre-determined script. More recently, virtual assistants have advanced voice-recognition AI, performing tasks as directed by voice commands from users. However, these programs have not been without their problems and have been criticised for misinterpreting instructions and for requiring that commands be given with an unnatural degree of stiffness. These devices can certainly hear, interpret and respond to language – which in the most basic terms means they can communicate – but this communication is limited, has been prone to problems, and is certainly nothing akin to what even young children are capable of.
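
To make “selecting responses from a pre-determined script” concrete, here is a minimal, purely illustrative Python sketch in the spirit of Eliza; the patterns and canned replies are invented for the example and are far simpler than the original program’s script.

    import re

    # A toy, Eliza-style responder: match the input against hand-written
    # patterns and echo fragments of it back from a fixed script. There is
    # no understanding here, only string matching and substitution.
    SCRIPT = [
        (r"I need (.*)",   "Why do you need {0}?"),
        (r"I am (.*)",     "How long have you been {0}?"),
        (r"because (.*)",  "Is that the real reason?"),
        (r".*",            "Please tell me more."),   # catch-all fallback
    ]

    def reply(message):
        for pattern, template in SCRIPT:
            match = re.match(pattern, message, re.IGNORECASE)
            if match:
                return template.format(*match.groups())

    print(reply("I need a holiday"))    # Why do you need a holiday?
    print(reply("I am feeling tired"))  # How long have you been feeling tired?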

 

In 2011, a chatbot called Cleverbot was able to fool 59.3% of the human participants at the Techniche festival at the Indian Institute of Technology Guwahati into thinking they were chatting, using text messages, with another human. The chatbot had been trained on millions of conversations with humans and worked by searching through these conversations to select the most fitting responses to the messages it received. This marked a key development for chatbots, with Cleverbot proving itself capable of communicating in a far more conversational manner than the NLP devices that had preceded it. But it still lacked any real understanding of context or social propriety, a shortfall that would likely become more obvious in a more spontaneous or lengthy real-world interaction.
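
Cleverbot’s actual ranking machinery is proprietary, but the retrieval idea described above can be sketched in a few lines: store past exchanges, find the stored message most similar to the new one, and return the reply that followed it. The tiny corpus and the word-overlap score below are invented purely for illustration.

    # A toy retrieval-based responder: reply with whatever followed the most
    # similar message in a store of past conversations.
    PAST_EXCHANGES = [
        ("hello how are you",           "I am fine, thanks. And you?"),
        ("what is your favourite film", "I like science fiction films."),
        ("do you like music",           "Yes, mostly jazz."),
    ]

    def similarity(a, b):
        # Jaccard overlap between the two sets of words (crude but simple).
        words_a, words_b = set(a.lower().split()), set(b.lower().split())
        return len(words_a & words_b) / max(len(words_a | words_b), 1)

    def respond(message):
        _, best_reply = max(PAST_EXCHANGES,
                            key=lambda pair: similarity(message, pair[0]))
        return best_reply

    print(respond("hello how are you today"))  # I am fine, thanks. And you?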

 

A 2018 WIRED article further explains how X.ai – an American company active in the digital assistant market – is trying to create a chatbot that can schedule meetings for busy professionals. But even this seemingly straightforward task is proving immensely complex, thanks to the quirks of natural human language. The developers have found that people often send meeting requests that are muddled by conversational niceties and “small talk”, or that are ambiguous when describing their availability. The chatbots are programmed to send responses asking for clarification, but it can be arduous and frustrating for the user if they must respond to several emails just to schedule a meeting. In a bid to prepare for every conversational possibility, X.ai’s trainers are feeding a vast amount of data into the system, in a tireless quest to refine the algorithm to the point where it will be able to communicate with all the nuance and flexibility of a human. Whether or not the machines will ever be able to communicate like a human remains uncertain. And this is because language does not only perform a practical function – enabling us to share content with one another – but a far more abstract social one as well.
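
The clarification loop described above can be illustrated with a short, hypothetical sketch: try to extract an explicit day and time from the request, and ask a follow-up question rather than guess when either is missing. The regular expressions and wording are invented for the example and bear no relation to X.ai’s actual system.

    import re

    # A toy scheduling assistant: extract a weekday and a time from the
    # request if possible; otherwise ask for clarification instead of guessing.
    DAY = r"(monday|tuesday|wednesday|thursday|friday)"
    TIME = r"(\d{1,2}(?::\d{2})?\s*(?:am|pm))"

    def handle_request(text):
        day = re.search(DAY, text, re.IGNORECASE)
        time = re.search(TIME, text, re.IGNORECASE)
        if day and time:
            return f"Booking the meeting on {day.group(1)} at {time.group(1)}."
        if day:
            return f"What time on {day.group(1)} would suit you?"
        return "Which day and time would suit you?"

    print(handle_request("Lovely to hear from you! Shall we talk on Tuesday at 3pm?"))
    print(handle_request("Let's catch up sometime next week."))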

 

The dual functions of language

In The Stuff of Thought: Language as a Window into Human Nature (2007), Steven Pinker cements the idea that language is a distinctly human trait, by focusing on how our use of language reveals insights into the internal dynamics of how we think. As arbitrary – and at times, irksome – as using correct grammar may seem, Pinker argues that we have come to a consensus to use certain words in certain ways for a particular reason. The way we speak reflects the way we think, betraying the intuitive physics that underpin our understandings of the world. Prepositions, for example, reflect our conceptions of space, whilst nouns reflect our conceptions of matter, tenses our conception of time and verbs our conception of causality. The words we use are anything but arbitrary – they are intended to communicate very specific meanings, which are aligned with the mental models and cognition processes we use to make sense of the world.

 

Pinker does not, however, advocate the idea that language is merely the tool by which we communicate our thoughts to others, much like a memory stick that can take information from one computer and transfer it to another. After all, we do not only use language to describe when an event occurred, or where an object is currently placed. He reminds us that we are always communicating with someone else, explaining that language therefore has a dual purpose: it must convey content whilst at the same time negotiating a social relationship. The phrase “if you could pass me the salt, that would be awesome,” for example, does not make much literal sense and is a highly inefficient way to communicate, if all the speaker wants is the salt. But the speaker is aware that they are making a request of another person and will not wish to sound overly demanding – so they hedge the request in a way that is more polite. Language, then, is doing something far more complex than simply transferring our internal thoughts to one another. It is continuously confirming that we belong to a social group.

 

The social function of language is evident in the fact that when we communicate, very rarely do we do so in a manner that would convey content in the simplest and clearest way. Often, we use language in a very abstract way, relying on the listener’s human ability to understand what is being said in context. Metaphor, together with the combinatorial power of language – what Pinker refers to as “the infinite use of finite means” – makes possible infinite creativity in language, and infinite meaning. When we interpret an utterance, we draw on a vast body of knowledge about the world and about people, combining this with our own experiences to instinctively interpret what is being said. In addition, we temper our interpretation with a host of other information – such as the person’s tone, the length of their pauses, their body language and facial expressions – which helps us to understand what the person actually means.

 

It is unlikely that neuroscientists will ever be able to prove the existence of Chomsky’s universal grammar – for this was a concept that he invented to produce a general theory of language acquisition, rather than to describe a particular biological structure. But, if AI research and development is to replicate one of the crucial pillars of intelligence – the acquisition of natural language – it would, in any event, have to do it without human intervention in the form of scripts or templates. Whether universal grammar exists or not, AI would still have to find a way to address not only the transfer of content, but the ongoing negotiation of the social function that language performs – with all the verbal and physical nuances and contextual variations that affect the meaning of the words we utter. Until AI can achieve this, it will not have demonstrated genuine intelligence – no matter how many chess or Go champions it beats.

 

In May 2018, Bloomberg Businessweek published an article entitled “These tech revolutions are further away than you think.” The list included a practical use for blockchain technology, the mainstream use of augmented reality, the death of cable television, the full-scale implementation of renewable energy sources, total data portability, and fully robotic factories. Natural language processing was, however, curiously absent from the list. Given the limitations that virtual assistants and chatbots have displayed in performing even the most menial communication-based tasks, it seems clear that whilst vast amounts of data and highly refined neural network structures may threaten jobs, entire industries, and even our values, it is highly improbable that machines will, in the near future, master the distinctly human capacity to acquire and use language effectively.