Artificial general intelligence is arguably the holy grail of computer science. Despite tangible progress in machine learning in recent years, many computer scientists believe that we are still far from creating truly intelligent machines, and that even human-level artificial general intelligence is probably decades away. The main problem is that we have to integrate machine learning systems with reasoning and planning. So, what can we do about that?
Intelligence includes many components, but it is widely accepted that the prime function of intelligence is modeling the world in order to predict the future. As we have seen, nothing, including natural intelligence, can simulate reality precisely. Because of this, in order to comprehend the world, we need to generate abstract concepts and operate with them; in other words, to think. The process of thinking, or reasoning, can be considered as adjusting our models of the world.
The idea that our mind contains a hierarchical structure of patterns, or models, has been known probably since the discovery of the structure of the neocortex. Individual patterns, which can be words or ideas, represent simpler models from which more complex models of the world can be constructed, and the overall model of the world in our mind consists of all those smaller models. At the physical level, this overall model is the entire neural network of the brain, or at least of the cerebral cortex, with different areas of the cortex forming the parts of this biological network that are responsible for smaller models such as individual words or ideas.
In his book How to Create a Mind: The Secret of Human Thought Revealed, Ray Kurzweil proposed using hierarchies of Markov chains, a well-known method in probability theory and machine learning, to reproduce this natural hierarchical approach in intelligent computer programs. In fact, Markov chains are already used successfully in many machine learning systems, including Siri. However, deep artificial neural networks also allow building complex concepts out of simpler ones, so we do not necessarily need an architecture based on Markov chains.
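To make the basic idea concrete, here is a minimal sketch of a first-order Markov chain over words. The tiny corpus and the function names are invented for illustration only; this is not Kurzweil's actual architecture, just the simplest form of the underlying technique.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Record, for each word, the words observed to follow it."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length):
    """Walk the chain, sampling each next word from the observed followers."""
    word = start
    result = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the word was only ever seen last
        word = random.choice(followers)
        result.append(word)
    return result

corpus = "the cat sat on the mat the cat ran".split()
chain = build_chain(corpus)
print(generate(chain, "the", 5))
```

Because the chain stores duplicate followers, frequent transitions are sampled proportionally more often; a hierarchical system would stack such models, with higher levels predicting sequences of lower-level patterns.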
Obviously, our artificial general intelligence system should be endowed with goals. In a simple case, these could be just exploring, modeling, and answering questions.
We have to build a system flexible enough to generate different kinds of models of reality, from numerous very specific models to extremely abstract ones, depending on the situation, and these models should be interconnected. In my opinion, this is currently the most important challenge. Switching between different sub-models, rather than constantly running an overall model of the world, is the preferable strategy, because running the entire model would be far more computationally expensive. We have many computational techniques for modeling, and artificial neural networks are one of them.
As we have seen, recurrent artificial neural networks’ ability to remember is very limited. Because of this, an artificial general intelligence system should be augmented with memory. There are already several examples of combining artificial neural networks with memory, such as the Neural Turing Machine and the Differentiable Neural Computer. Recently, a new strategy for augmenting ANNs with memory has been proposed, called the Recurrent Entity Network (EntNet) (https://arxiv.org/pdf/1612.03969.pdf).
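As an illustration of the shared underlying idea, here is a minimal sketch of content-based memory reading, the differentiable addressing mechanism used, in more elaborate forms, by the Neural Turing Machine and the Differentiable Neural Computer. The memory contents, sizes, and values below are arbitrary, chosen only for the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

def read(memory, key):
    """Blend memory rows by how well each row matches the query key."""
    scores = memory @ key      # one similarity score per memory slot
    weights = softmax(scores)  # soft, differentiable addressing weights
    return weights @ memory    # weighted sum of the memory rows

memory = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [-1.0, 1.0]])
key = np.array([10.0, 0.0])   # strongly matches the first slot
print(read(memory, key))      # close to the first row, [1.0, 0.0]
```

Because the read is a soft weighted sum rather than a hard lookup, gradients flow through the addressing weights, which is what lets such memories be trained end to end with the network that uses them.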
Also, in order to perform various human-level intelligent tasks, computer systems have to acquire an appropriate amount of common human knowledge. I do not think this is a very hard problem. We will probably soon see various new artificial intelligence personal assistants collecting immense amounts of data, including data about our everyday lives.