When it comes to Artificial Intelligence (AI), people’s responses vary: from “Terminator and Skynet are coming to kill us all” to “Will the bots take my job?” to “Awesome, now I can sit back and do the fun stuff while the bots take care of tedious tasks for me.”

There’s some truth to all of these sentiments.

But there are also misperceptions and misinformation. It’s always useful to have a basic grasp of AI because, whether you like it or not, it is already making its way into many aspects of our lives.

For instance, you can now order Domino’s pizza by talking to your phone. The pizza giant also says it is moving from a “mobile first” to an “AI first” philosophy.

Some “job-threatening” examples include KPMG’s deal to use IBM’s Watson supercomputer for financial audits, along with similar moves by other Big Four accounting firms.

Self-driving trucks and drones have led to the prediction that professional drivers of road and rail vehicles have an 80% chance of being replaced within the next two decades.

If you’re still curious about your future job prospects, head to this website and input your profession to find out the risk level (Note: this is based on US data).

[Figure: Probability of automation for software developers. Source: Will Robots Take My Job]

Now onto even more alarming claims [cue scary organ music].

Tesla’s CEO Elon Musk recently said that without AI oversight, “we are summoning the demon.”

In fact, here is a graph summarising various thinkers and tech leaders’ views on AI, ranging from anxious (Not so fast) to untroubled (Hit the gas).

[Figure: Tech leaders’ and thinkers’ views on AI. Source: Vanity Fair]

If there’s any consolation for concerned citizens like us, it’s that most advances in AI so far have focused on solving specific problems rather than on general intelligence.

And there’s still a long way to go before we can emulate human-like intelligence. But first, let’s go over some fundamental concepts in this emerging field.

Defining Artificial Intelligence & related terms

Lately, AI, machine learning, and deep learning get thrown around as if they were interchangeable terms. It’s time to clear up the confusion.

As the umbrella term, there is Artificial Intelligence (AI): a branch of computer science that tries to reproduce human behaviour in machines - or, put another way, the ability of machines to reproduce human behaviour.

[Figure: How AI intersects with other branches of computer science. Adapted from PwC]

One point to note here: robots are not synonymous with AI. Think of robots as the physical structure carrying out the decisions made by the AI’s “mind.”

Next, within AI, we have Machine Learning (ML) - a range of techniques for computers to perform cognitive functions.

These techniques are about learning from examples to reproduce a given behaviour. For instance, Netflix’s recommendation engine uses ML algorithms to analyse your activity, compare it to that of millions of other users, and then determine what you might like to binge-watch next.
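To make that concrete, here is a toy sketch in Python (with made-up ratings - not Netflix’s actual system) of one classic approach: score a user’s unwatched titles by how similar other users’ tastes are to theirs.

```python
import numpy as np

# Toy viewing matrix: rows are users, columns are shows,
# values are ratings (0 = not watched). Purely made-up data.
ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 0, 0],   # user A - similar taste to yours
    [0, 1, 5, 4],   # user B - very different taste
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

you = ratings[0]
sims = np.array([cosine_sim(you, other) for other in ratings[1:]])

# Predict a score for each show as a similarity-weighted average
# of what the other users rated it.
predicted = sims @ ratings[1:] / sims.sum()
unwatched = you == 0
print("Recommend show index:", np.argmax(np.where(unwatched, predicted, -np.inf)))
```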

Next, within ML, we have Deep Learning. This technique uses layers of artificial neural networks - loosely inspired by the biology of animal brains - to have a machine learn how to recognise patterns.
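Here is a minimal sketch of what “layers” means in practice, using plain NumPy with random (untrained) weights; in a real system the weights would be learned from labelled examples via gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer neural network: each layer is a weight matrix
# followed by a non-linearity.
W1 = rng.normal(size=(4, 8))   # layer 1: 4 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 3))   # layer 2: 8 hidden units -> 3 output classes

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(x):
    """Pass an input through both layers to get class probabilities."""
    hidden = relu(x @ W1)        # layer 1 detects simple patterns
    return softmax(hidden @ W2)  # layer 2 combines them into a prediction

print(forward(np.array([0.5, -1.2, 3.0, 0.1])))  # three probabilities summing to 1
```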

Different characteristics of AI

There are also three terms you may hear a lot around AI. They essentially describe the levels or characteristics of AI.

Narrow AI

Also known as: weak AI.

This is the level of AI that humanity has achieved so far. It is the ability for a machine to reproduce a specific human behaviour without consciousness. Basically, narrow AI is powerful at automating narrow, single tasks. Examples:

  • Playing chess or Go. Machines have already beaten the best human players at both.
  • Voice assistants like Siri, Cortana, Alexa
  • Self-driving car tech, which is actually a coordination of several narrow AIs

[Figure: Google’s AlphaGo AI wins three-match series against the world’s best Go player. Source: TechCrunch]

General AI

Other names: Artificial General Intelligence (AGI), Strong AI, Full AI.

This depicts a true “thinking machine” with “real” intelligence. Supposedly, General AI can reproduce human intelligence, including context awareness and adaptation, memory retrieval, abstraction, natural language processing, and so on.

Some have been saying AGI is just around the corner for years now, but the fact is it still hasn’t happened. In this article, we’ll touch on the progress towards General AI and why it seems harder to achieve the deeper we delve into it.

Super AI

Also known as Artificial Superintelligence.

Even more elusive at this stage is super AI, which is more intelligent than all humans combined. But let’s not get ahead of ourselves.

How are we approaching General AI?

At this stage, the best results in our attempts to build an “AI brain” have come from Deep Learning.

This technique was not very successful when first attempted, owing to the limited data and computing power available at the time. But recent advances in cloud computing, together with the collection of enormous datasets, have improved the state of the art in almost every field where Deep Learning has been applied.

In many of these fields, the results are better than what humans can achieve (e.g. cancer detection, big data sorting, stock market trading). Nonetheless, as mentioned above, Deep Learning is currently a problem-specific technology; it is not General AI.

There are some attempts to generalise it, such as Neural Turing Machines and Differentiable Neural Computers, both of which are active areas of research.

[Figure: AI reproduces the Mona Lisa in the styles of Picasso, van Gogh, and Monet. Source: Gene Kogan]

Different players in the AGI space

Facebook, Google, Microsoft, and Baidu are the big players in this space. This is because they have the enormous datasets and computing resources needed to train their Neural Networks.

Many universities and researchers have contributed to Neural Networks and Deep Learning. The current resurgence was popularised by Geoff Hinton and Alex Krizhevsky at the University of Toronto when they used the massive ImageNet database - created by Fei-Fei Li’s team at Stanford - to train the first modern Deep Neural Network to do image classification. This is a task where the machine describes what is in an image using a single word (e.g. labelling it a “dog” or “cat” image).
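To get a feel for what image classification looks like in code today, here is a hedged sketch using PyTorch and torchvision (an assumption - any modern Deep Learning framework would do) to load a network pre-trained on that same ImageNet data and classify a local image:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Assumes torchvision is installed and "dog.jpg" exists locally.
model = models.resnet18(pretrained=True)  # weights learned on ImageNet
model.eval()

# Standard ImageNet preprocessing: resize, crop, and normalise.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("dog.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    logits = model(image)
print("Predicted ImageNet class index:", logits.argmax(dim=1).item())
```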

Google has applied Deep Learning to almost all of its products at this point - including understanding the intent of a search, recognising street numbers in Google Street View images, powering Google Now’s voice recognition, and improving Google Translate.

OpenAI is a not-for-profit research firm, funded by people like Elon Musk, that is attempting to ensure that any breakthroughs in AI are available to everyone rather than siloed within any single company.

Major discoveries and breakthroughs in the last decade

If you’re interested in digging deeper, below is a list of relevant research papers that show measurable progress in this field.

AlexNet (2012) - the first Deep Learning paper to show state-of-the-art performance; it was used to win the ImageNet Large-Scale Visual Recognition Challenge, an Olympics equivalent for computer vision.

After this, everyone in computer vision competitions started using Deep Learning, and Deep Learning entries won successive challenges.

For example, Google’s Inception Net (2014) set another benchmark for image classification and detection in the aforementioned competition for the year 2014.

Microsoft’s ResNet (2015) introduced a new architecture called the Deep Residual Network, which had 152 layers - 8 times deeper than a comparable network.
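The key idea is the “residual” (skip) connection: each block adds its input back to its output, which keeps gradients flowing through very deep stacks of layers. A simplified sketch in PyTorch (real ResNet blocks also include batch normalisation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """A stripped-down residual block."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = self.conv2(out)
        return F.relu(out + x)  # the residual "shortcut": add the input back

block = ResidualBlock(channels=16)
x = torch.randn(1, 16, 32, 32)  # a dummy feature map
print(block(x).shape)           # torch.Size([1, 16, 32, 32])
```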

Google DeepMind’s Neural Turing Machine is an attempt to create a “computer that mimics the short-term memory of the human brain.” This machine can store and retrieve its memory when learning a solution to a given problem.
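At its core, the machine reads memory with soft, content-based addressing: the controller emits a key vector, and the read is a blend of memory rows weighted by their similarity to that key, so the whole operation stays differentiable. A stripped-down sketch (using a dot product where the paper uses cosine similarity):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A tiny, made-up memory matrix: each row is one stored "memory".
memory = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.9, 0.1],
])

key = np.array([0.0, 1.0, 0.0])   # what the controller wants to recall
scores = memory @ key             # similarity of each row to the key
weights = softmax(scores * 5.0)   # sharpened attention over the rows
read_vector = weights @ memory    # a soft, differentiable "read"
print(weights, read_vector)
```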

Deep Learning has its challenges and limitations

Deep Learning is not well understood. It is a very active area of research that can best be described as chaotic and somewhat unprincipled.

Most research activity is the result of researchers experimenting with ideas and reporting their results. The theory and mathematical models behind why this approach works so well and where it does not work well are still in a nascent stage.

A concrete example of Deep Learning’s limitations is so-called “Adversarial Examples”. This is a technique where an attacker carefully crafts small changes to an image to make the machine confidently label it as anything the attacker chooses, even though the image looks unchanged to a human.

[Figure: An adversarial input, overlaid on a typical image, can cause a machine to mislabel a panda as a gibbon. Source: OpenAI]

This could be used to fool face recognition systems into thinking that one person is another, for instance. Some researchers have equated these examples to “optical illusions” in humans. They think that these sorts of edge cases are always going to exist, but we need to understand what causes these issues and how to control them if we are going to rely on these systems in the future.
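One well-known recipe for crafting such inputs is the “fast gradient sign method”: nudge every pixel a tiny step in the direction that most increases the model’s loss. A minimal PyTorch sketch, assuming a trained `model`, an input `image` tensor, and its true `label` are already in scope:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """Fast Gradient Sign Method: perturb each pixel by +/- epsilon
    along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The step is imperceptible to humans but can flip the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```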

General AI is still a long way away

Machine learning is only “at the beginning of the S-Curve,” as shown in the diagram below.

[Figure: Machine learning at the beginning of the S-curve. Source: a16z]

Our current Neural Network architectures have not reached human-level performance in general computing tasks. However, there are tools like Universe to help create training sets that can be used to train the AI to use a computer in the same way humans do. This includes tasks from using a web browser or email program to playing games.

What about more complex tasks? The computations carried out among the neurons of the human brain are on the order of several trillion operations per second.
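As a rough back-of-envelope (with illustrative figures chosen only to show the order of magnitude - real estimates vary widely):

```python
# Illustrative numbers, not measurements.
neurons = 86e9             # roughly 86 billion neurons in the human brain
avg_firing_rate_hz = 50    # assumed average spikes per neuron per second
spike_events_per_sec = neurons * avg_firing_rate_hz
print(f"{spike_events_per_sec:.1e} spike events per second")  # ~4.3e12: trillions
```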

Deep Learning may provide the algorithms that lead to general computation. But assuming an AI needs a similar number of operations per second, reaching human-level intelligence with our current hardware would require building-sized computers with very high associated costs.

For AI to become ubiquitous, we will need to keep improving the density of computation and lowering the power usage of computers. IBM has attempted to do this with its TrueNorth chip, which runs neural networks on hardware that is thousands of times more power- and heat-efficient than general-purpose computers.

Right now, “we don’t have a computer that can function with the capabilities of a six year old, or even a three year old, and so we’re very far from general intelligence.”

Will we achieve it someday? Technically, we could. But with a caveat: “We will never get to Super Intelligence if we don’t solve the societal issues around Narrow Artificial Intelligence.”