Neuromorphic AI Processing Uses 16X Less Energy

Author: Ally Winning

Date: 05/24/2022

Researchers have demonstrated that neuromorphic techniques can reduce the energy required for AI processing by up to 16X.

© Tim Herman/Intel Corporation

One of Intel’s Nahuku boards, each of which contains 8 to 32 Intel Loihi neuromorphic chips


The next big step forward in electronics will be the migration of artificial intelligence (AI) from the datacentre to the edge. As systems grow more complex, the size of the code required to handle every eventuality grows exponentially, so training a system to “think” for itself and make decisions based on its programming and past experience makes a lot of sense if it can be properly implemented. Only now can we manufacture silicon chips that allow real AI implementations to happen outside of a large computing centre. The amount of processing required for AI calculations is enormous, and power consumption scales with processing. Until now, most AI systems have been installed in datacentres as part of the cloud, which makes powering them easier.

But there is now a real need to have AI processing closer to where it is used, rather than introducing latency by sending captured data over the internet to a datacentre and back. An autonomous vehicle, for example, needs to know immediately whether a collision is imminent, not several seconds later once the data has been uploaded and control information returned. To perform at the edge, AI chips have to be small, which has led to unprecedented power density requirements. We either need to find new ways to power these chips or to make them consume less power. Even in the datacentre, reducing the power requirements of AI systems would save operators millions of dollars.


One solution could be to design ICs that mimic the way the human brain works. The brain’s hundred billion or so neurons consume only about 20 watts. If we could get AI processing to operate more like our brains, we could reduce the energy used by AI algorithms to a much more manageable level. TU Graz’s Institute of Theoretical Computer Science and Intel Labs have been working on this problem and have now demonstrated that a large neural network can process sequences, such as sentences, while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The new research applies insights from neuroscience to build chips that function similarly to neurons in the brain. The experiment ran on 32 of Intel Labs’ Loihi neuromorphic research chips.
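
The building block behind chips like Loihi is the spiking neuron, which consumes energy only when it actually fires. The minimal leaky integrate-and-fire (LIF) sketch below, in plain Python, is purely illustrative: the simulate_lif function and all parameter values are assumptions for demonstration, not Loihi’s real configuration.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the family of spiking
# models that neuromorphic chips implement in silicon. All values here
# are illustrative assumptions, not Loihi's actual parameters.
TAU = 20.0       # membrane time constant (ms)
V_THRESH = 1.0   # firing threshold
V_RESET = 0.0    # membrane potential after a spike
DT = 1.0         # simulation time step (ms)

def simulate_lif(input_current, v=0.0):
    """Return the time steps at which the neuron spikes."""
    spikes = []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest and integrates input.
        v += (DT / TAU) * (-v + i_in)
        if v >= V_THRESH:        # threshold crossing -> emit a spike
            spikes.append(t)
            v = V_RESET          # reset; the neuron is silent otherwise
    return spikes

rng = np.random.default_rng(0)
current = rng.random(200) * 0.3   # mostly sub-threshold background drive
current[50:80] = 2.0              # brief strong stimulus
print(simulate_lif(current))      # spikes cluster inside the stimulus window
```

Because nothing happens between spikes, the energy cost of such hardware scales with the number of spikes rather than with a fixed clock, which is where the efficiency advantage over conventional accelerators comes from.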


The research focused on algorithms that work with temporal processes. For example, the system answers questions about a previously told story and extracts the relationships between objects or people from the context. The researchers link two types of deep learning networks for this purpose. Recurrent neural modules provide “short-term memory”: they filter possibly relevant information out of the input signal and store it. A feed-forward network then determines which of the relationships found are important for solving the task. Meaningless relationships are screened out, and neurons fire only in those modules where relevant information has been found. This sparse firing is what produces the energy savings.
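
As a rough sketch of that two-stage idea, the plain-Python example below pairs a recurrent module (short-term memory carried across the sequence) with a thresholded feed-forward readout that leaves irrelevant units at zero. The sizes, weights, and the process_sequence function are hypothetical placeholders; the actual experiment ran spiking versions of these networks on Loihi hardware.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_HID = 16, 32   # hypothetical feature and hidden sizes

# Recurrent module: the "short-term memory" that filters and stores
# possibly relevant information from the input sequence.
W_in = rng.normal(0.0, 0.3, (D_HID, D_IN))
W_rec = rng.normal(0.0, 0.3, (D_HID, D_HID))

# Feed-forward readout: decides which stored relationships matter.
W_out = rng.normal(0.0, 0.3, (D_HID, D_HID))
THRESHOLD = 0.5        # screens out weak, "meaningless" relationships

def process_sequence(tokens):
    """tokens: (T, D_IN) array, e.g. embedded words of a story."""
    h = np.zeros(D_HID)
    for x in tokens:
        # Recurrent update accumulates information over time.
        h = np.tanh(W_in @ x + W_rec @ h)
    # Thresholded readout: units below the threshold stay at exactly
    # zero, mirroring modules whose neurons simply do not fire.
    out = np.maximum(W_out @ h - THRESHOLD, 0.0)
    print(f"active units: {(out > 0).sum()} / {D_HID}")
    return out

process_sequence(rng.normal(0.0, 1.0, (10, D_IN)))   # toy input sequence
```

On neuromorphic hardware the units that stay at zero cost essentially nothing, which is the mechanism behind the reported four- to sixteen-fold energy saving.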


This research was financially supported by Intel and the European Human Brain Project, which connects neuroscience, medicine, and brain-inspired technologies in the EU.


https://www.tugraz.at/
