Most recent achievements in artificial intelligence (AI) rely on very large neural networks. These consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very ‘deep’ network structures. Such large, deep neural networks are energy-hungry to run. Networks used for image classification (e.g. face and object recognition) are particularly energy-intensive, since in every time cycle they must transmit a great many numerical values at high precision from one neuron layer to the next.
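To give a rough sense of scale, a back-of-the-envelope sketch follows; the layer sizes and bit width below are illustrative assumptions, not figures from the article:

```python
# Rough, illustrative numbers (assumed for this sketch, not taken from the article):
# one fully connected layer with 1,000 inputs and 1,000 outputs.
inputs = 1_000
outputs = 1_000
bits_per_value = 32  # standard single-precision floating point

values_sent = inputs                    # activation values crossing to the next layer
mac_ops = inputs * outputs              # multiply-accumulate operations per pass
bits_moved = values_sent * bits_per_value

print(f"{values_sent:,} values sent, {mac_ops:,} MACs, {bits_moved:,} bits moved per pass")
```

Every one of those high-precision transfers and multiply-accumulates costs energy, and deep networks repeat this across hundreds of layers in every time cycle.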
Computer scientist Wolfgang Maass, together with his Ph.D. student Christoph Stöckl, has now found a design method for artificial neural networks that paves the way for energy-efficient high-performance AI hardware (e.g. chips for driver assistance systems, smartphones and other mobile devices). The two researchers from the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) have optimized artificial neural networks for image classification in computer simulations so that the neurons, similar to neurons in the brain, need to send out signals only relatively rarely, and the signals they do send are very simple. The classification accuracy achieved with this design is nevertheless very close to the state of the art of current image classification tools.
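The principle of sending few, simple signals can be sketched in code. The following is a minimal illustration of temporal coding in the spirit of the approach described, assuming an activation value bounded in [0, 1) that is approximated by the timing of a few binary spikes with geometrically decreasing weights; the function names, the weighting scheme and the step count are assumptions for this sketch, not the researchers' actual implementation:

```python
import numpy as np

def encode_few_spikes(value, num_steps=8):
    """Encode an activation value in [0, 1) as a short train of binary
    spikes, one potential spike per time step, with weight 2**-(t+1)
    at step t (essentially a binary expansion of the value)."""
    spikes = np.zeros(num_steps, dtype=np.uint8)
    remainder = value
    for t in range(num_steps):
        weight = 2.0 ** -(t + 1)
        if remainder >= weight:   # fire only when the weight still "fits"
            spikes[t] = 1
            remainder -= weight
    return spikes

def decode_few_spikes(spikes):
    """Reconstruct the approximate activation value from spike timings."""
    weights = 2.0 ** -(np.arange(len(spikes)) + 1)
    return float(np.sum(weights * spikes))

activation = 0.7134                  # a high-precision value to transmit
spikes = encode_few_spikes(activation)
print(spikes)                        # [1 0 1 1 0 1 1 0] -> only 5 simple events
print(decode_few_spikes(spikes))     # 0.7109375, close to the original value
```

Instead of transmitting one high-precision number per neuron per cycle, only the time steps at which a spike occurs need to be communicated, and each spike carries a single bit, which is what makes such signalling so much cheaper in hardware.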