AI is the New Electricity

Brain Cells and Deep Space

People have always been fascinated with the mystery of intelligent robots. For most of history this fascination has been confined to myths and legends. From the legend of the Golem to HAL 9000 in the 1968 film 2001: A Space Odyssey, our imagination has been fuelled by visions of some unknown intelligence that will take over our lives. Today, artificial intelligence (AI) is entering a stage of being, to paraphrase the film’s co-author Arthur C. Clarke, indistinguishable from magic. The impact of AI is going to be huge in the coming years. At a conference in May 2016, Andrew Ng, Chief Scientist at Baidu and one of the leading researchers in AI, stated: “AI is the New Electricity.” We are seeing the beginning of a shift to a world where software will dominate and control our lives.

The idea of intelligent machines is closely tied to computers. The first computers of the 1950s and 1960s were called “electronic brains”. Ironically, they were far from intelligent; they were simply good at calculating, fast and accurately. Despite the early perception that these machines possessed some form of intelligence, it quickly became apparent that they were good at things humans are not good at, or at least are very slow at on average. Adding up 1,000 five-digit numbers is both tedious and slow for a human, and the risk of mistakes is high. For a computer this is straightforward, fast and accurate. However, it turned out that tasks humans find easy, such as understanding language or recognising objects in a picture, are notoriously hard to program a computer to do.
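As a simple illustration of the kind of calculation a computer finds trivial, here is a short Python sketch that adds up 1,000 randomly generated five-digit numbers; the numbers themselves are made up for the example.

```python
import random

# Generate 1,000 random five-digit numbers (between 10,000 and 99,999).
numbers = [random.randint(10_000, 99_999) for _ in range(1_000)]

# Adding them up is effectively instantaneous and error-free for a computer.
total = sum(numbers)
print(f"Sum of {len(numbers)} five-digit numbers: {total}")
```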

Artificial Intelligence started as a field sixty years ago, at a 1956 summer workshop at Dartmouth College in the USA. The workshop was organised by John McCarthy and attended by Marvin Minsky, Claude Shannon, Nathaniel Rochester and others who would become very influential in the field in the decades that followed. The goal of the workshop was to “solve kinds of problems now reserved for humans…if a carefully selected group of scientists work on it together for a summer”. That proved to be embarrassingly optimistic.

The history of AI is full of “springs”, when new ideas bring new hope, and “winters”, when people realise that the hope was misplaced or that the ideas simply did not work. The general perception has been that AI never delivers on its promise. However, many advances in computer science are due to research in AI. As soon as something became practical and worked, for example a new way of searching through a vast number of possibilities, it became known as something else. Some ideas simply did not work because of the limited capacity of the computers of the time. For example, the idea of building a computer system that resembles the brain, using the notion of neurons and the connections between them, came as early as the 1950s. Some of the mathematical work was even done before the first computers existed. However, the computers of the 1950s and 1960s were simply not powerful enough to achieve any success with this approach.

The first true public success of AI came in 1997, when IBM’s Deep Blue defeated world chess champion Garry Kasparov. People realised that machines could become better than people at some cognitive tasks. In 2011, AI hit another milestone when IBM’s Watson supercomputer won the television quiz show Jeopardy!. Pitted against the show’s two most successful players, the AI managed to win. The game requires understanding of language, so this signalled a new era in natural language processing.

In 2012, Google posted a seemingly uninteresting blog entry titled “Using large-scale brain simulations for machine learning and A.I.”. In the post, Google explained how a neural network they had built, using a form of machine learning known as deep learning, had learned to recognise cats in YouTube videos. If there is anything in abundance in this world, it is YouTube videos of cats.

So how does this work? We know how traditional programming works. You write programs: series of commands such as expressions, variable assignments, if-statements, while-loops and so on. These instructions tell the computer what to do, and the computer executes the commands. If there is an error, a “bug”, you edit your program and run it again. Neural networks are not like this. They are of course programs, but instead of programming the task itself, such as finding cats or understanding language, we build a neural network, an artificial “brain”, and train the network to learn how to do the task. For example, Google’s DeepMind created an artificial intelligence program that uses deep learning to play Atari games. The only input the program was given was how to control the game (for example, move a bar left or right) and that the score should be as high as possible. The program then trained itself to master the game.
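To make the contrast concrete, here is a minimal sketch of the training idea, written in Python with NumPy purely for illustration; the tiny network, the XOR task and every parameter are my own toy choices, not how Google’s or DeepMind’s systems actually work. Instead of writing if-statements that encode the answer, we define a small network and let a training loop adjust its weights until the right behaviour emerges from examples.

```python
import numpy as np

# Training data: the XOR function, a classic task a network must *learn*
# from examples rather than have spelled out as a rule.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 4))   # weights: input -> hidden layer
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # weights: hidden layer -> output
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge the weights to reduce the error (gradient descent).
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

# After training, the outputs should be close to 0, 1, 1, 0: the network has
# learned XOR without us programming the rule (a toy run; the exact result
# depends on the random initialisation).
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

The same principle, scaled up enormously in data, network size and computing power, is what lets a network learn to recognise cats or play Atari games.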

The cat discovery was the beginning of a new AI spring. And of course, those who have been following AI research for a long time, like myself, took this with the usual scepticism, a sort of “here we go again” attitude. Neural networks did not work in the past, so why would they work now?

Three things are now different. First, machine learning algorithms have improved over the years. Many academic papers are published every year and the body of knowledge keeps growing; a quick search on Google Scholar returned 638,000 hits dated since 2012. Second, vast computing resources are now available: you can build a computer cluster with 20,000 GPUs (Graphics Processing Units), a far cry from the computers of the 1960s. Third, there is a huge amount of data available to train the networks. The data generated every day by people and devices, Big Data, is input for machine learning.

In just the last few years, there has been an explosion of AI solutions coming to the market. In most cases this is not obvious, since AI, just like electricity, will not be a product in itself but an enhancement to our lives. Just as people wanted light in their houses, not electricity for its own sake, people will want the products that AI brings. It will come in hidden form, making the tools we use more clever and convenient. In a few years our personal digital assistants will be something we cannot live without.

This text is based on a new addition to the 2017 edition of my textbook, New Technology.