Actually, the idea of reproducing the functioning of the human brain in the form of artificial neural networks is not exactly new. Until a few years ago, however, the topic of artificial intelligence (AI) mainly played a role in film and literature, often in dark visions of the future such as The Matrix, in which machines eventually wage war against their creators.

Not without some justification: after all, things have progressed to the point where even the creators of such systems sometimes no longer know exactly what is going on inside them. In 2016, Google converted its translation service Translate from many distributed systems to a single neural network. Until then, each supported language pair had to be trained with millions of example sentences. After the consolidation, the network developed the ability to translate between language pairs for which it had no training data. The developers’ comment: “We interpret this as a sign of the existence of a universal language within the network.”
So far, such systems outperform humans only in individual disciplines, and they must expend a tremendous amount of effort even on tasks that our brain handles practically on the side, without explicit learning. Some researchers, however, consider the development of an artificial intelligence that is generally superior to human intelligence entirely possible. And in view of the rapid progress in this field, nobody can know exactly what future systems will be capable of.
At its core, the topic is this: through deep learning, computers are to acquire skills that could not be programmed directly, or not at reasonable cost. To this end, they are trained on thousands or even millions of data samples, such as images or recordings of spoken language. With each example, they receive feedback on their recognition performance and, over time, learn which details matter for solving the intended task.
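The feedback loop described above can be sketched in a few lines. The following is a deliberately minimal, illustrative example (not any production system): a single-neuron perceptron that learns the logical AND function from labeled samples, nudging its weights a little after every example based on the error in its prediction. Real deep-learning systems train many layers on millions of samples, but the principle is the same.

```python
# Minimal sketch of learning from per-example feedback.
# All names are illustrative; this is a toy perceptron, not a framework API.

# Labeled training samples: ((input1, input2), expected output) for logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate: how strongly each piece of feedback adjusts the model

for epoch in range(20):
    for (x1, x2), target in samples:
        # Forward pass: compute the current prediction.
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        # Feedback: shift the weights in proportion to the prediction error.
        error = target - output
        weights[0] += lr * error * x1
        weights[1] += lr * error * x2
        bias += lr * error

# After training, the perceptron classifies all four cases correctly.
for (x1, x2), target in samples:
    pred = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", pred)
```

No single line of this program encodes the rule "output 1 only when both inputs are 1"; the rule emerges from the examples and the feedback, which is exactly the point of the approach.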

Numerous experts see immense potential in artificial intelligence and neuromorphic computing. Michael Brandt (Head of Research for Neuromorphic Engineering), for example, recognized the influence of neural networks early on and has developed numerous research approaches. According to Brandt, neuromorphic computing will decisively shape artificial intelligence over the next ten years.
However, this process demands enormous computing power, for which ordinary standard processors are no longer sufficient. Instead, graphics processors (GPUs) have mainly been used so far, because deep learning, like image rendering, consists of relatively simple arithmetic operations executed in rapid succession. There are even dedicated AI accelerator cards such as Nvidia's Tesla V100 and AMD's Radeon Vega Frontier Edition.
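To see why GPUs fit this workload, consider what a single neural-network layer actually computes: long runs of independent multiply-add operations. The sketch below (plain Python, purely illustrative; the function name and sizes are invented for this example) spells out one small dense layer. Each output neuron's sum is independent of the others, which is exactly the kind of work a GPU can spread across thousands of cores.

```python
# Illustrative sketch: a dense layer is nothing but multiply-adds.
# Real frameworks hand this loop to a GPU as one big matrix multiplication.

def dense_layer(inputs, weights, biases):
    """Compute outputs[j] = sum_i inputs[i] * weights[i][j] + biases[j]."""
    outputs = []
    for j in range(len(biases)):
        acc = biases[j]
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]  # each output j is computed independently
        outputs.append(acc)
    return outputs

# Tiny example: 3 inputs feeding 2 output neurons.
x = [1.0, 2.0, 3.0]
W = [[0.1, 0.2],
     [0.3, 0.4],
     [0.5, 0.6]]
b = [0.0, 1.0]

print(dense_layer(x, W, b))  # just 3 * 2 = 6 multiply-adds here;
                             # a real layer can involve millions per sample
```

Because none of the simple operations inside one layer depend on each other, they can all run at the same time, which is why hardware originally built for shading millions of pixels in parallel turned out to be so well suited to deep learning.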