Artificial neural networks, as they are typically known, are computer programs that loosely mimic the way the human brain solves problems. Researchers can train these programs to perform remarkable tasks that were unthinkable just a few years ago. Their core strength is efficiently analyzing complex data: they can recognize patterns and relationships within it, which is how Google uses the technology for image recognition. Now new research is bringing similar neural network technology to hardware.
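The "simulated neuron" at the heart of such a program can be reduced to a few lines of code. The sketch below is a minimal, hypothetical illustration (not any production system): a single artificial neuron that learns to recognize a simple pattern by adjusting the strength of its connections.

```python
# A minimal artificial neuron (perceptron): it computes a weighted sum
# of its inputs and adjusts the weights whenever it misclassifies,
# mimicking -- very loosely -- how the brain strengthens connections.

def train_neuron(samples, epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Weighted sum followed by a hard threshold (the "activation")
            output = 1 if inputs[0] * weights[0] + inputs[1] * weights[1] + bias > 0 else 0
            error = target - output
            # Nudge each connection in proportion to its input and the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if inputs[0] * weights[0] + inputs[1] * weights[1] + bias > 0 else 0

# Learn the logical-AND pattern purely from labeled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
```

After training, the neuron correctly classifies all four examples without ever being given an explicit rule; real networks chain thousands or millions of such units together.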
In these hardware versions of neural networks, data is processed directly on the device rather than sent over the Internet. This saves time and allows processing to continue even when there is no WiFi connection. Special circuits route and re-route the data, and are configured to make receiving and processing it faster and more efficient, since everything happens locally instead of in the cloud. These printed-circuit-board neural networks have been evolving quickly, especially over the last two years.
Two of the most respected engineering schools in the country have developed systems that mimic the human brain's capacity to process information. In 2014, Stanford engineers created circuit boards whose underlying chips process information in a way similar to the human brain. What makes this innovative system significant is that it not only operates like a human brain, it does so more efficiently.
The Stanford neural network, called Neurogrid, uses dramatically less power than a desktop computer processing the same information: it operates 100,000 times more efficiently and 9,000 times more rapidly. This matters because, in the future, robots outfitted with these printed circuit boards would not need to be tethered to large power supplies. Scientists studying brain activity already use the system, which can be configured to mimic specific layers of the brain's cortex.
Fast forward to 2016, and MIT is working on its own artificial intelligence neural network system, called Eyeriss, designed to be ten times as efficient as a mobile GPU (graphics processing unit). Mobile devices using this system would not need to send data to the cloud for processing; the data would be processed by programs running on chips in the devices themselves.
The key innovation in Eyeriss is that each processing core has its own memory, which eliminates the inefficient shuttling of data back and forth between the cores and a central memory; hence Eyeriss's lower power requirements. These hardware-based neural systems for mobile devices could also share data more easily between cores. They could usher in a new era of devices that analyze data, learn, and find new ways to solve problems on their own.
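Why per-core memory saves so much power can be shown with a toy accounting model. The numbers below are illustrative placeholders, not Eyeriss's actual figures; the point is only that an access to a distant central memory costs far more energy than a read from a small buffer next to the core, so reusing data locally means far fewer expensive transfers.

```python
# Toy energy model contrasting two accelerator designs.
# Assumed (hypothetical) costs: a central-memory access is ~100x
# more expensive than a read from a core's own local buffer.

CENTRAL_READ_COST = 100  # energy units per access to shared central memory
LOCAL_READ_COST = 1      # energy units per access to a per-core buffer

def central_only_energy(n_values, n_reuses):
    # Every reuse of every value goes all the way back to central memory.
    return n_values * n_reuses * CENTRAL_READ_COST

def per_core_buffer_energy(n_values, n_reuses):
    # Each value is fetched from central memory once into the core's
    # buffer; every subsequent reuse is a cheap local read.
    return n_values * (CENTRAL_READ_COST + n_reuses * LOCAL_READ_COST)

# e.g. 256 filter weights, each reused across 64 image positions
n_values, n_reuses = 256, 64
baseline = central_only_energy(n_values, n_reuses)
local = per_core_buffer_energy(n_values, n_reuses)
print(f"central-only: {baseline} units, per-core buffers: {local} units")
```

Under these made-up costs the per-core design spends roughly one-fortieth the energy on data movement, which is the intuition behind keeping memory next to each core.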