AIExplained
The AI revolution is not what you expect it to be

The AI revolution is taking place right now. In contrast to what scary headlines and stories suggest, it is not about robots or computers taking over humanity. The real revolution does have, and will continue to have, an impact on all facets of society, but in a more subtle way.

This blog will keep you informed about developments in AI by emphasising the actual practical implications for society and business rather than making futuristic claims about what may happen.

What is AI?

Let us start with the concept of artificial intelligence, or AI for short. The research field of artificial intelligence originates from the work of Alan Turing, the brilliant mathematician who helped decipher the Nazis’ Enigma code in the Second World War. Turing’s work forms the foundation for the way computers are structured and programmed. As early as 1950, he wrote about the possibilities of imitating human intelligence with artificial computing systems. Reviewing AI research, it is fruitful to make a coarse division into two phases: the first 40 years (roughly 1950-1990) and the last 30 years (1990-now).

The first 40 years: expert systems

In the early days of AI, the emphasis was on devising computer programs that could exhibit “intelligent” behaviour. The interpretation of “intelligent” corresponded to what is generally considered to be intelligent, such as being good at logical reasoning or having excellent chess-playing skills. As a result, AI researchers focussed on the development of automatic reasoning programs and chess-playing programs. So-called expert systems were developed to capture human expertise in certain knowledge domains. Given a set of observations, expert systems could reason automatically towards conclusions. MYCIN is a famous example of such a system; it contained several hundred rules. By automatic reasoning, MYCIN was able to recommend treatments (which antibiotics to administer) for infections.

The Cyc project was more ambitious: it attempted to put all human knowledge (common sense) into a computer. For instance, the fact that you become wet if it rains could be “programmed” into a computer with an IF-THEN statement. Example:

IF it rains - THEN you become wet

You can imagine that human knowledge (common sense) encompasses a huge amount of such facts. In addition, there are many exceptions and conditions that have to be taken into account. For instance, if you have an umbrella, then the aforementioned rule may not apply. Despite the great expectations of the Cyc project – it promised to achieve a computer with knowledge superior to humans – it ultimately failed to fulfil its promise.
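
To give a flavour of what such rules look like when written down explicitly, here is a minimal sketch in Python (a hypothetical illustration; Cyc used its own logic-based representation language, not Python) of the rain rule together with its umbrella exception:

# A hypothetical sketch of a common-sense rule and one of its exceptions.
def becomes_wet(it_rains: bool, has_umbrella: bool) -> bool:
    # IF it rains THEN you become wet ...
    # ... UNLESS you are carrying an umbrella (one of many possible exceptions).
    return it_rains and not has_umbrella

print(becomes_wet(it_rains=True, has_umbrella=False))  # True
print(becomes_wet(it_rains=True, has_umbrella=True))   # False: the exception applies

Even this toy example hints at the difficulty: every further exception (standing under a roof, only a few drops of rain, and so on) has to be anticipated and programmed by hand.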

Somehow, it turned out to be very difficult to program common sense into a computer, because a considerable part of human knowledge is hard to express in IF-THEN rules. Many consider the Cyc project a failure because it did not meet its overoptimistic promises. Still, the enormous amount of work performed in creating Cyc became publicly available as OpenCyc in 2002. The results of Cyc and other attempts to formalise knowledge are not at the centre of the current AI hype, but may become more relevant in the near future.

Despite the failure to build successful common-sense systems, the first 40 years of AI had many successes. Most notable was the defeat in 1997 of the reigning world chess champion Garry Kasparov by Deep Blue, a chess computer developed by IBM.

The last 30 years: machine learning

About 30 years ago, many AI researchers became aware that existing “intelligent” programs were severely limited in their scope. Deep Blue could only deal with the abstract world of a chess board. Cyc’s knowledge rules seemed more like descriptions of knowledge than knowledge itself, because there were no AI programs that could recognise natural speech or real-world objects and thereby connect those descriptions to the real world. Part of the problem was that AI programs were not able to learn from examples: everything had to be entered into the computer by means of IF-THEN rules or procedures. Humans, in contrast, learn from examples rather than from explicit instruction only. As a result, the field of machine learning became popular in AI.

What is machine learning?

Machine learning is the name for algorithms that are trained by providing them with examples, also called instances. Instances often consist of a sequence of numbers, because computers can deal very well with numbers. For instance, a client of a bank may be represented by the following numbers: age, income, total debt, and number of children. Numbers may also be binary (two-valued). For example, marital status can be expressed by a 0 (unmarried) or 1 (married). In supervised machine learning, the instances are labelled. In our example, the client may be labelled as reliable or unreliable in the payment of their mortgage. Of course, such labels are available only after the fact. For novel clients, the labels can be predicted with a machine learning algorithm that is trained on many previous clients.
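
To make this concrete, here is a minimal sketch in Python (using the scikit-learn library; the client data, feature values, and labels are invented for this illustration) of supervised learning on such labelled instances:

# Each instance describes a client: [age, income, total debt, number of children, married (0/1)].
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [45, 60000, 12000, 2, 1],
    [23, 21000, 30000, 0, 0],
    [52, 80000,  5000, 3, 1],
    [31, 35000, 45000, 1, 0],
]
# Labels known after the fact: 1 = reliable mortgage payer, 0 = unreliable.
y_train = [1, 0, 1, 0]

# Train the algorithm on previous clients...
model = DecisionTreeClassifier().fit(X_train, y_train)

# ...and predict the label of a novel client.
new_client = [[38, 50000, 20000, 2, 1]]
print(model.predict(new_client))  # e.g. [1], i.e. predicted to be reliable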

Predictive ability

The core strength of machine learning algorithms lies in their predictive ability. Hence, machine learning is at the heart of the algorithms used by the big technology companies to predict whether individual users of their services are interested in certain products. So if you are using a mail service, the company providing the service may extract all kinds of information from your messages to create prediction models. To this end, numerical statistics are collected about the words and sentences in your messages.
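
As an illustration of such word statistics, here is a small sketch in Python (using a recent version of scikit-learn; the messages are invented) that turns text into the kind of numbers a prediction model can be trained on, a so-called bag-of-words representation:

# Count how often each word occurs in each message.
from sklearn.feature_extraction.text import CountVectorizer

messages = [
    "cheap flights to sunny destinations",
    "meeting moved to friday afternoon",
]
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(messages)

print(vectorizer.get_feature_names_out())  # the words encountered in the messages
print(counts.toarray())                    # per message, the count of each word

These word counts can then serve as the numerical instances for a supervised learning algorithm such as the one sketched above.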

The AI revolution

The use and predictive power of machine learning algorithms increased enormously during the last decades. The main reason for this increase was the large-scale availability (and collection) of data. This led to an increased popularity of data science, a branch of computer science that focusses on all aspects of data. The AI revolution was ignited by a specific type of machine learning algorithm called the neural network. These algorithms learn from examples and bear a very superficial resemblance to the biological neural networks in the brains of humans and other animals.

Neural networks are nothing new

Neural networks are nothing new. The first neural networks were proposed even before AI became a research domain. In the 1990s, there was a brief upsurge of interest in neural networks. It was only in 2012 that a so-called deep neural network was applied to the recognition of natural images. The network outperformed all existing image-recognition methods. Similar breakthroughs in performance were reported for speech recognition and text analysis.

Deep neural networks

Deep neural networks, commonly referred to as “deep learning algorithms”, have the ability to learn very complex tasks (such as the recognition of images), provided they are trained on many labelled instances. The overhyped expectations of the (near) future of AI are unwarranted, but the considerable improvements in prediction and recognition accuracy in the domains of images, signals, and spoken and written language are reason for enthusiasm. Hence, deep learning represents the core of the AI revolution. From an application perspective, in the coming decade we will see the integration of deep learning algorithms in all facets of our daily environment. Computers will become better able to understand our speech, to recognise our faces and facial expressions, and to aid in the identification of patterns.
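
For readers who are curious what such a network looks like in code, here is a minimal sketch in Python (using the Keras API of TensorFlow; the data is random and the layer sizes are arbitrary choices for illustration only):

# A small "deep" network: several layers of artificial neurons stacked on top of each other.
import numpy as np
from tensorflow import keras

# Invented training data: 1000 labelled instances with 20 numerical features each.
X_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 2, size=1000)  # labels: 0 or 1

model = keras.Sequential([
    keras.Input(shape=(20,)),                     # 20 input features per instance
    keras.layers.Dense(64, activation="relu"),    # first hidden layer
    keras.layers.Dense(64, activation="relu"),    # second hidden layer
    keras.layers.Dense(1, activation="sigmoid"),  # output: estimated probability of label 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, verbose=0)

# Predict the label probability of a novel instance (meaningless here, since the data is random).
print(model.predict(np.random.rand(1, 20)))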

The most important take-home message is that the real AI revolution pertains to only a small fraction of the AI research domain. Deep learning represents roughly 1% of all AI research, but in the media it has been rebranded as “AI”. This is one of the major reasons why the AI revolution is misunderstood.

In this blog, we will highlight many ongoing and expected innovations.

AIExplained

A blog on AI, by Eric Postma (father) & Wouter Postma (son).
