Artificial Intelligence 2019

Deep Learning uses deep neural networks to learn models that can then be applied to specific problems. These are usually very narrowly focused, for example face recognition, emotion recognition, object recognition or speech recognition. That is why such AI systems are called narrow AI or “weak AI”. Most of the time they can perform one specially learned task quite reliably. The AI benefits from being able to learn from large amounts of data and to process information quickly. One should not, however, succumb to the fallacy that the system is truly intelligent.

In one study, computer scientists found that artificial intelligence systems failed a simple visual test that a child could easily pass. The researchers present a computer vision system with a living room scene, and it correctly identifies the objects: a chair, a person and books on a shelf. Then an out-of-place object is introduced into the scene: the image of an elephant. The mere presence of the elephant confuses the system. It now mistakes the chair for a couch and the elephant for a chair, while other objects are no longer recognized at all.
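
The spirit of that experiment can be reproduced with an off-the-shelf detector. The following sketch runs a pretrained Faster R-CNN from torchvision on a scene image and on a copy with an out-of-context object pasted in, so the two sets of detections can be compared. The file names living_room.jpg and elephant.png are hypothetical placeholders, and whether the effect actually appears depends on the images used.

    # Sketch: compare detections before and after pasting an out-of-context object.
    # "living_room.jpg" and "elephant.png" are hypothetical example files.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    def detect(image, score_threshold=0.5):
        """Return (label_id, score) pairs for detections above the threshold."""
        with torch.no_grad():
            output = model([to_tensor(image)])[0]
        return [(int(label), float(score))
                for label, score in zip(output["labels"], output["scores"])
                if score >= score_threshold]

    scene = Image.open("living_room.jpg").convert("RGB")
    elephant = Image.open("elephant.png").convert("RGB")

    # Paste the elephant into the upper-left corner of the scene.
    altered = scene.copy()
    altered.paste(elephant, (0, 0))

    print("original scene:", detect(scene))
    print("scene with elephant:", detect(altered))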

This is interesting behavior. A person perceives the scene with the elephant as a whole and immediately recognizes the presence of the elephant in the room as “wrong”. Artificial intelligence, in contrast, builds its visual impression from individual pieces of information, as if it were reading a description in Braille. It essentially processes the image pixel by pixel and forms ever more complex representations from that input, but it never recognizes the absurdity of the elephant’s presence. Here the model reaches its limits.
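
This layer-by-layer build-up can be illustrated with a minimal convolutional network sketch in PyTorch: the first layer only sees small pixel neighbourhoods, and each further layer combines those outputs into more abstract features. The architecture and sizes below are arbitrary, chosen purely for illustration, and the comments describe what such layers typically learn rather than what this toy network would learn.

    # Minimal sketch of hierarchical feature extraction in a CNN (arbitrary sizes).
    import torch
    import torch.nn as nn

    features = nn.Sequential(
        # Layer 1: looks at 3x3 pixel neighbourhoods -> edges, colour gradients
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Layer 2: combines edge maps -> simple shapes and textures
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        # Layer 3: combines shapes -> object parts
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    )
    classifier = nn.Linear(64, 10)  # maps pooled features to 10 example classes

    x = torch.randn(1, 3, 224, 224)          # one RGB input image
    h = features(x)                          # increasingly abstract feature maps
    logits = classifier(h.mean(dim=(2, 3)))  # global average pooling + classification
    print(h.shape, logits.shape)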

AI processors: why the new chips are the future

Actually, the idea of reproducing the functioning of the human brain in the form of artificial neural networks is not exactly new. Until a few years ago, however, the topic of artificial intelligence (AI) mainly played a role in film and literature, often in dark visions of the future such as The Matrix, in which the machines eventually wage war against their creators.


Not entirely without reason, because things have already reached the point where the creators themselves sometimes no longer know exactly what is going on inside their systems: in 2016, Google converted its translation service Translate from many separate systems to a single unified neural network. Until then, each supported language pair had to be trained with millions of example sentences. After the consolidation, the AI developed the ability to translate between language pairs for which it had no training data at all. The developers’ comment: “We interpret this as a sign of the existence of a universal language within the network.”
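
Google’s published description of this multilingual system attributes the zero-shot ability to a simple trick: all language pairs share one model, and the desired target language is signalled by an artificial token prepended to the source sentence. The sketch below only illustrates that data layout with hypothetical example sentences; it is not Google’s actual pipeline and trains no model.

    # Sketch of the shared-model, target-token idea behind zero-shot translation.
    # All sentences are hypothetical examples; no real model is trained here.

    # Training data: every example is tagged with the *target* language.
    training_pairs = [
        ("<2es> How are you?",    "¿Cómo estás?"),   # English    -> Spanish
        ("<2en> Como você está?", "How are you?"),   # Portuguese -> English
    ]

    # A single network would be trained on all of these pairs at once.
    # At inference time the same token mechanism can request a direction
    # that never appeared in training, e.g. Portuguese -> Spanish:
    zero_shot_input = "<2es> Como você está?"

    print("unseen direction requested via token:", zero_shot_input)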

So far, such systems outperform humans only in individual disciplines, and they require an enormous amount of effort even for tasks that our brain handles practically on the side, without any explicit learning. Nevertheless, some researchers consider the development of artificial intelligence that is generally superior to human intelligence to be entirely possible. And in view of the rapid progress in this area, nobody can say exactly what future systems will be capable of.

At its core, however, the topic is precisely this: through deep learning, computers are meant to acquire abilities that could not be programmed into them directly, or only at unreasonable cost. To do this, they are trained with thousands or even millions of data samples, such as images or spoken language. With each example they receive feedback on their recognition performance and, over time, they learn which details are important for solving the intended task.
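
In code, this feedback loop usually looks like the following minimal PyTorch sketch: for every batch of examples the network makes a prediction, the loss measures how wrong it was, and the gradients nudge the weights toward a better answer. The data, model size and hyperparameters here are placeholders.

    # Minimal supervised training loop: prediction, feedback (loss), weight update.
    # Random data stands in for real images or audio; all sizes are placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    inputs = torch.randn(1000, 784)           # placeholder "images"
    targets = torch.randint(0, 10, (1000,))   # placeholder labels

    for epoch in range(5):
        for i in range(0, len(inputs), 32):            # mini-batches of 32 examples
            x, y = inputs[i:i + 32], targets[i:i + 32]
            logits = model(x)                          # current recognition attempt
            loss = loss_fn(logits, y)                  # feedback: how wrong was it?
            optimizer.zero_grad()
            loss.backward()                            # which details mattered?
            optimizer.step()                           # adjust the weights slightly
        print(f"epoch {epoch}: loss {loss.item():.3f}")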

Michael Brandt, Expert for Neuromorphic Research

Numerous experts recognize the immense potential of artificial intelligence and neuromorphic computing. Experts like Michael Brandt (Head of Research for Neuromorphic Engineering) recognized the influence of neural networks early on and have developed numerous research approaches. According to Brandt, the influence of neuromorphic computing on artificial intelligence will be decisive over the next ten years.

However, this process requires enormous computing power, for which ordinary standard processors are no longer sufficient. Instead, graphics processors (GPUs) have mainly been used so far, because deep learning, like image processing, consists of relatively simple arithmetic operations that must be carried out in huge numbers and in rapid succession. There are even dedicated AI accelerator cards such as Nvidia’s Tesla V100 and AMD’s Radeon Vega Frontier Edition.
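
Why GPUs fit so well can be made visible with a simple experiment: the bulk of deep learning consists of large matrix multiplications, i.e. many simple operations that can run in parallel. The sketch below times the same multiplication on the CPU and, if one is available, on a CUDA GPU; the exact speed-up depends entirely on the hardware at hand.

    # Sketch: time one large matrix multiplication on CPU and (if present) on GPU.
    import time
    import torch

    def timed_matmul(device):
        a = torch.randn(4096, 4096, device=device)
        b = torch.randn(4096, 4096, device=device)
        start = time.perf_counter()
        c = a @ b
        if device.type == "cuda":
            torch.cuda.synchronize()   # wait until the GPU has actually finished
        return time.perf_counter() - start, c

    cpu_time, _ = timed_matmul(torch.device("cpu"))
    print(f"CPU: {cpu_time:.3f} s")

    if torch.cuda.is_available():
        gpu_time, _ = timed_matmul(torch.device("cuda"))
        print(f"GPU: {gpu_time:.3f} s")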