Artificial intelligence (AI) has captured the world's attention with its impressive capabilities, but concerns are mounting about its ravenous energy consumption. Training a sophisticated AI model such as GPT-3 consumes roughly as much electricity as hundreds of German households use in an entire year. A novel approach proposed by researchers at the Max Planck Institute for the Science of Light in Erlangen, Germany, promises to transform the future of AI by making it far more energy-efficient.


The Neuromorphic Computing Paradigm

Conventional AI is built on digital artificial neural networks, but a revolutionary alternative is on the horizon: neuromorphic computing.

According to Florian Marquardt, director at the Max Planck Institute and professor at the University of Erlangen, the energy-intensive transfer of data between processors and memory is a major inefficiency in conventional AI hardware. Neuromorphic computing takes its inspiration from the human brain, processing data in parallel much as the synapses in our brains do when they process and store memories. Devices such as photonic circuits, which use light to perform calculations, aim to mimic this efficiency.
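
To make the memory-transfer bottleneck described above concrete, here is a minimal back-of-envelope sketch in Python. It simply counts how many weight values a conventional digital processor must fetch from memory for a single pass through a fully connected network; the layer sizes are hypothetical and chosen only for illustration. In a neuromorphic device the weights are part of the physical system itself, so this traffic largely disappears.

```python
def weights_moved_per_step(layer_sizes):
    """Number of weight values fetched from memory for one forward pass of a
    fully connected network on conventional (von Neumann-style) hardware."""
    return sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical layer sizes, for illustration only.
toy_network = [784, 4096, 4096, 10]
print(f"Weights fetched per step: {weights_moved_per_step(toy_network):,}")
```

For a model on the scale of GPT-3, with on the order of 175 billion parameters, every training step moves at least that many values between memory and processor, which is precisely the traffic a neuromorphic design aims to avoid.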


Self-Learning Physical Machines: A Game-Changer for AI Training

Together with PhD student Victor López-Pastor, Marquardt has developed a ground-breaking training method for neuromorphic computers, known as a "self-learning physical machine." The training happens as a physical process in which the machine optimises its own parameters, doing away with the need for external feedback.

Marquardt emphasises that this approach not only makes training considerably more efficient but also saves time and energy. To minimise energy loss, the underlying physical process must be reversible, able to run forwards and backwards; at the same time, it must be non-linear, because only sufficiently complex processes can carry out the intricate transformations between input data and results.
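
The requirement that a process be both reversible and non-linear can sound contradictory, so the following sketch illustrates the purely mathematical point with a toy example in Python: an additive coupling step, a construction borrowed from invertible neural networks, is non-linear yet can be undone exactly, so no information (and, in a physical realisation, ideally no energy) needs to be discarded. This is only an analogy, not the physical process Marquardt and López-Pastor have in mind.

```python
import numpy as np

def forward(x1, x2):
    """Non-linear but exactly invertible map: the second half of the data
    is shifted by a non-linear function of the first half."""
    return x1, x2 + np.tanh(x1)

def inverse(y1, y2):
    """Undo the transformation exactly -- nothing is lost."""
    return y1, y2 - np.tanh(y1)

x1, x2 = np.array([0.3, -1.2]), np.array([2.0, 0.5])
y1, y2 = forward(x1, x2)
rx1, rx2 = inverse(y1, y2)
assert np.allclose(x1, rx1) and np.allclose(x2, rx2)  # inputs perfectly recovered
```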


Towards Real-World Implementation

Marquardt and López-Pastor's theoretical work is already moving towards practical application. They are collaborating with an experimental team to develop an optical neuromorphic computer that processes information with superimposed light waves, and they have set an ambitious goal of presenting the first self-learning physical machine within three years.
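
How can superimposed light waves process information at all? The key is that coherent light fields add as complex amplitudes, so superimposing weighted copies of several input beams performs a weighted sum, which is the matrix-vector product at the heart of every neural-network layer. The NumPy sketch below is a numerical cartoon of that principle under assumed random weights; it is not a description of the team's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex amplitudes of four input light fields.
inputs = rng.normal(size=4) + 1j * rng.normal(size=4)

# Hypothetical complex weights, e.g. set by beam splitters and phase shifters.
weights = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))

# Each output port receives the interference (superposition) of all weighted
# input fields; photodetectors then measure the resulting intensities.
output_fields = weights @ inputs
intensities = np.abs(output_fields) ** 2

print(intensities)
```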

These future networks promise to process more data and be trained on larger datasets than today's systems, meeting the growing demand for AI while avoiding the inefficiencies of current training approaches. Marquardt states firmly, "We are confident that self-learning physical machines stand a solid chance in the ongoing evolution of artificial intelligence." The scientific community and AI enthusiasts alike are watching eagerly.