Yes, you’ve read that correctly. You really should forget about ‘artificial intelligence’. Well, at least the term, as it is meaningless and often leads to confusion. It would, however, be unwise to neglect the technology itself. After all, the current development of AI technologies is bringing us ever more ways of improving our lives, as well as opening up new and exciting commercial and social opportunities. Indeed, many companies thrive on helping businesses introduce such new technological capabilities.
That being said, I still believe that the term ‘artificial intelligence’ (or simply ‘AI’) says very little. Over the past couple of years, the term has been so overused that the hype increasingly recalls the dot-com boom of the late ’90s. The problem back then was that many companies and opportunists made exaggerated claims about what the technologies could really do, and that same feeling is in the air now with AI.
Not long ago, executives were peppering their earnings calls with talk of ‘big data’ to spark investors’ interest. Lately, AI has become the new buzzword for creating excitement. I vividly remember that, at the height of the dot-com boom, merely adding ‘.com’ to the end of a company’s name signalled corporate renewal, or that the company was en route to new growth prospects.
In the Internet gold rush, a substantial number of new e-commerce businesses emerged, poised to unseat traditional incumbents, only to witness their own demise. ‘AI’ now seems to be what ‘e-commerce’ once was. I see many AI startups claiming they are ready to help clients build fully integrated AI capabilities when, in fact, they mainly produce proofs of concept and have little to show beyond that, often because they lack the experience, or even the knowledge and ability, required for full implementation.
There is no I in AI
In fact, ever since the term ‘artificial intelligence’ was first used by John McCarthy in 1956, when he held the first academic conference on the subject, the term has been misleading. Just ask yourself, what does ‘intelligence’ mean? You may associate intelligence with the following: logic, understanding, self-awareness, emotional knowledge, planning, creativity, problem-solving and learning. However, with the exception of the last two concepts, machines arguably cannot achieve any of these things.
Sure, machines can solve problems. But at the moment they can only solve well-defined problems. The level of intelligence of machines right now is mostly good for helping with automation – taking over the most time-consuming, repetitive and labour-intensive tasks, such as standard document reading for onboarding new customers.
The term ‘machine learning’ is likewise a misnomer. The deep-learning techniques currently in use do not mean that machines can learn the way human beings do. They ‘learn’ by gradually improving in accuracy: as more data is fed into them, they guess the right answer with increasing frequency. Through such training, they can come to recognise pictures, but they will never understand what they are looking at, let alone the context. Machines can, at times, guess the right words to complete a sentence fairly accurately, but they never understand what the words or the sentence actually mean.
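The point about guessing words without understanding them can be made concrete with a deliberately crude sketch: a bigram model that counts which word follows which in a tiny invented corpus and ‘completes’ a sentence from raw frequencies. (The corpus and word choices here are made up for illustration; real language models are vastly larger, but the principle of statistical guessing rather than comprehension is the same.)

```python
from collections import Counter, defaultdict

# A tiny invented corpus -- purely illustrative.
corpus = ("the cat sat on the mat . the cat sat on the sofa . "
          "the dog ate the fish .").split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def complete(word):
    """Guess the next word: the most frequent follower in the corpus."""
    return follows[word].most_common(1)[0][0]

print(complete("cat"))  # → 'sat', from raw counts, not comprehension
```

The model ‘knows’ that ‘sat’ tends to follow ‘cat’ only because it counted it, not because it has any idea what a cat is or what sitting means.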
Just consider this famous formula: king – man + woman = queen. Computers can figure out the ‘meaning’ of a sentence by knowing the relationship between the concepts of king and queen or man and woman. But the computer still hasn’t got a clue what royalty or gender are.
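The king/queen formula is pure vector arithmetic, which a few lines of code can show. The 2-d ‘embeddings’ below are hand-invented for illustration (real word vectors such as word2vec’s have hundreds of dimensions and are learned from text), but the mechanics are the same: subtract, add, and find the nearest vector by cosine similarity.

```python
import numpy as np

# Toy, hand-crafted 'embeddings' -- invented for illustration.
# Dimension 0 loosely encodes 'royalty', dimension 1 'gender'.
vectors = {
    "king":  np.array([0.9,  0.8]),
    "queen": np.array([0.9, -0.8]),
    "man":   np.array([0.1,  0.8]),
    "woman": np.array([0.1, -0.8]),
}

def nearest(v, vocab):
    """Return the word whose vector is closest to v by cosine similarity."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vocab, key=lambda w: cos(v, vocab[w]))

# king - man + woman lands nearest to queen: arithmetic on coordinates,
# with no notion of what royalty or gender actually are.
result = nearest(vectors["king"] - vectors["man"] + vectors["woman"], vectors)
print(result)  # → 'queen'
```

The answer falls out of the geometry alone; nothing in the computation involves any concept of monarchy or gender.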
On this basis, in an upcoming book with my colleagues Mark Esposito and Danny Goh, we argue that there is simply no intelligence in artificial intelligence. Some machines and computers may be smart and excel in undertaking a single task superbly, but they are certainly not intelligent. For the moment at least, AI is much more of a mindless robot and much less a thinking machine.
“There is simply no intelligence in artificial intelligence”.
I realise that I may sound pedantic and that this is all a matter of semantics. After all, what’s the big deal with using a term that is broad and not particularly telling, but that everyone can understand? “Policymakers don’t read the scientific literature, but they do read the clickbait that goes around,” said Zachary Lipton at an MIT Technology Review EmTech conference. He warns that if they cannot separate hype from reality, they may put too much faith in algorithms governing things like autonomous vehicles and clinical diagnoses. It is necessary to have a good understanding of the technologies and, perhaps more importantly, their limitations.
For instance, a fundamental limitation of AI that is not discussed often enough is what happens when a model is trained on data collected in the past but the real world has since changed. (Think driverless cars). How can governments get policy and regulation right if they are not well informed? How can voters make the right decisions if they cannot separate facts from fake news? (Think referendums).
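This training-versus-reality gap can be sketched numerically. The toy model below (a simple perceptron on invented 2-d data; the task, thresholds, and data are all made up for illustration) is fitted on ‘yesterday’s’ data and then evaluated on ‘today’s’ data, where the true rule has shifted, so its accuracy drops even though nothing about the model changed.

```python
import numpy as np

rng = np.random.default_rng(1)

def data(n, threshold):
    """Invented task: label a 2-d point 1 if x + y > threshold, else 0."""
    X = rng.normal(size=(n, 2))
    y = (X.sum(axis=1) > threshold).astype(int)
    return X, y

def fit(X, y, epochs=30, lr=0.1):
    """Train a single linear unit with the classic perceptron rule."""
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - int(xi @ w + b > 0)
            w += lr * err * xi
            b += lr * err
    return w, b

def acc(w, b, X, y):
    return float(np.mean((X @ w + b > 0).astype(int) == y))

w, b = fit(*data(500, threshold=0.0))       # trained on the old world
X_old, y_old = data(1000, threshold=0.0)
X_new, y_new = data(1000, threshold=1.0)    # the world has changed

acc_old = acc(w, b, X_old, y_old)
acc_new = acc(w, b, X_new, y_new)
print(round(acc_old, 3), round(acc_new, 3))  # accuracy drops on new data
```

The model keeps applying the rule it was trained on; when the underlying reality shifts, its performance quietly degrades, which is exactly the risk for a driverless car trained on yesterday’s roads.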
The same applies to companies. With more and more executives saying they intend to use AI to run this and that, we risk seeing AI-related technologies turned into hyperbole and hearing many false promises. Tellingly, the mere announcement of a pivot to blockchain, an emerging technology that still has a long way to go, has been enough to send a company’s share price skyrocketing. Just as important, if companies are unable to see the limitations of these technologies, they can end up deploying them in the wrong ways in their business practices. In addition, if we don’t know better, we will be less able to fend off a widening mix of dodgy AI products and services that are at best useless and at worst harmful. Indeed, a growing number of unscrupulous business outfits are happy to sell AI-related myths in exchange for profit.
Whereas there may be no intelligence in AI, it certainly exists among us humans. It is time for us to use it.
Written by our professor Terence Tse