Classical computer processors are marvels of human ingenuity, representing more than 70 years’ accumulated research and development, but their processing power is limited because:
- They are deterministic, meaning their output is fully determined by the initial conditions and parameter values, whereas most real-world problems (e.g. predicting the weather, human behaviour, or financial markets) are stochastic, meaning they vary randomly and so must be analysed using statistical probability.
- They use digital representations of data and instructions, in a binary on/off code of ones and zeros, whereas the real world is analogue, meaning continuously variable. An analogue clock with a second hand that moves continuously is a better representation of time than a digital clock with a display that jumps from one second to the next.
- They work sequentially, solving problems slowly, one step at a time, rather than solving multiple threads of the problem concurrently, even out of sequence.
To process large amounts of data, classical computers use large amounts of energy, because they shuttle information back and forth between the central processing unit and memory storage. This also creates bottlenecks that slow processing down. All of these characteristics make classical processors unsuitable for the fast-growing number of real-world applications where large amounts of data must be processed in real time, under ‘noisy’ conditions, using minimal energy.
The current solution to processing large amounts of data is to use deep convolutional neural networks (CNNs). CNNs comprise stacked layers of artificial neurons (nerve cells), in which the neurons in each layer are connected to neurons in the next, enabling concurrent processing. For image processing, CNNs are run on graphics processing units (GPUs) to create output for visual displays used for recognition, localisation, and mapping.
But CNNs require precise processing by every neuron in every layer on every input (30-60 frames per second for standard video), whether the input carries important information or none at all. This is analogous to every single neuron in an animal's brain firing at its maximum rate continuously. Because they operate in this inefficient way, CNNs cannot meet the high-speed, low-energy processing needs of mobile applications such as drones, robots, and autonomous vehicles.
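The cost gap between frame-by-frame and event-driven processing can be sketched with some back-of-envelope arithmetic. All the numbers below (resolution, operations per pixel, fraction of pixels that change) are illustrative assumptions, not measurements of any particular system:

```python
# Back-of-envelope comparison: dense frame-based processing vs
# event-driven processing that skips unchanged pixels.
# All parameter values are illustrative assumptions.

def dense_ops(width, height, fps, ops_per_pixel):
    """Every pixel of every frame is processed, informative or not."""
    return width * height * fps * ops_per_pixel

def event_ops(width, height, fps, ops_per_pixel, active_fraction):
    """Only the fraction of pixels carrying new information is processed."""
    return dense_ops(width, height, fps, ops_per_pixel) * active_fraction

dense = dense_ops(640, 480, 30, 100)         # standard video, dense CNN-style
sparse = event_ops(640, 480, 30, 100, 0.05)  # assume only 5% of pixels change
print(f"dense: {dense:.2e} ops/s, event-driven: {sparse:.2e} ops/s")
print(f"reduction: {dense / sparse:.0f}x")
```

Under these assumed numbers the event-driven scheme does 20x less work; the real saving depends entirely on how sparse the scene's changes are.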
ICNS researchers study the human retina (the light-sensitive lining at the back of the eyeball) and brain to build processors that mimic natural neural networks. In these networks, each layer of neurons filters the incoming data. Inspired by biology, our neuromorphic processors are adaptive: each neuron monitors its input and only activates downstream neurons at points in time and space where new, important information is detected. Data transmission is sparse and event-driven. Redundant or insignificant data is suppressed at every stage (layer) of processing. This is how the brain of the most successful apex predator on Earth (yours) operates on a power budget of only twenty watts.
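The adaptive, event-driven behaviour described above can be sketched as a neuron that watches its input and emits a time-stamped event only when the input changes significantly. This is a minimal illustration of the principle, not ICNS's actual design; the class name and threshold value are hypothetical:

```python
# Minimal sketch of an event-driven neuron: it monitors its input and
# only emits an event when the change exceeds a threshold, suppressing
# redundant data. Names and parameter values are illustrative.

class EventNeuron:
    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.last = None  # last input value that triggered an event

    def observe(self, t, value):
        """Return a time-stamped event if the change is significant, else None."""
        if self.last is None or abs(value - self.last) > self.threshold:
            self.last = value
            return (t, value)   # sparse, event-driven output
        return None             # redundant input suppressed

neuron = EventNeuron(threshold=0.2)
stream = [0.0, 0.05, 0.1, 0.5, 0.52, 1.0]  # slow drift, then two jumps
events = [e for t, v in enumerate(stream) if (e := neuron.observe(t, v))]
print(events)  # only the informative changes survive, each with a time-stamp
```

Out of six input samples, only the first reading and the two large jumps produce events; the small fluctuations are filtered out at the source, before any downstream processing is triggered.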
When neurons are activated, they transmit data as spikes of electrical voltage. The pattern of spikes over time creates a temporal code capable of transmitting large amounts of information quickly, with a time-stamp usefully built into each message. Neuron spiking is stochastic, not deterministic: it is subject to random variation, which enables stochastic processing – another advantage of neuromorphic systems.
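One common way to model stochastic spiking is to have a neuron fire randomly at each time step with a probability that grows with its input, yielding a time-stamped spike train. The rate-coding scheme and parameters below are illustrative assumptions, not a description of ICNS hardware:

```python
import random

# Sketch of stochastic spiking: at each time step the neuron fires with
# probability equal to its input strength, producing a time-stamped
# spike train. Scheme and parameters are illustrative assumptions.

def spike_train(input_strength, duration_steps, rng):
    """Return the time-stamps at which the neuron fired."""
    return [t for t in range(duration_steps) if rng.random() < input_strength]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
weak = spike_train(0.1, 1000, rng)    # weak input -> sparse spikes
strong = spike_train(0.8, 1000, rng)  # strong input -> dense spikes
print(len(weak), len(strong))
```

The random variation means no two runs (with different seeds) produce identical spike trains, yet the average spike rate still reliably encodes the input strength, and each spike carries its own time-stamp.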
At ICNS, we create customised hardware platforms to rapidly prototype, test, and compare our processors to competing solutions. When our neuromorphic processors are combined with our neuromorphic sensors and algorithms, the reduction in energy consumption is even more dramatic, and the required depth of the neural network is further reduced. Like animal brains, our neuromorphic systems are small and energy-efficient but powerful, making them ideally suited to distributed computing or mobile applications that solve real-world problems in real time for positive impact.