Neuromorphic Processing

The standard processors that power our laptops and PCs are marvels of human ingenuity. They are the engines of our information age and represent more than half a century of accumulated research and development. However, all such processors are based on the 1945 design by John von Neumann and share the following properties: they are deterministic, they operate on digital representations of instructions and numbers, and they do so in a sequential manner. These properties can create informational bottlenecks that are unsuitable for applications where large amounts of temporal data must be processed in real time, under noisy conditions, while using minimal power. The growing demand for adaptive, automated, portable systems means that the number of such applications is increasing rapidly.

The current standard solution for processing large amounts of complex data is the deep convolutional neural network. Deep convolutional networks running on GPU platforms and processing frame-based data represent the state-of-the-art solution to recognition, localisation, and mapping in academic, civilian, and commercial settings. Yet the data-rate issues that make frame-based sensors unsuitable for many mobile, power- and bandwidth-constrained applications pale in comparison to the difficulty of porting deep artificial neural networks onto mobile, high-speed, low-power platforms.

To a first approximation, every layer of feedforward processing in the brain can be viewed as operating in a similar regime to the first layer of retinal cells: each layer detects changes in its input, using localised competition among neurons to extract features, and encodes their intensity using ever sparser spatio-temporal spike patterns. Every cortical layer thus acts as a mesh of dynamic filters, blocking activation of subsequent layers except at the points in time and space where new, higher-level salient features have been detected. It is only through such a hierarchical, event-based feature-detecting architecture, with its suppression of redundant activation at every stage of processing, that the brain of the most successful apex predator on earth can operate on a power budget of a mere twenty watts. In contrast, deep convolutional neural networks require high-precision numerical processing at every node of every layer on every input frame, regardless of whether the input carries highly salient information or none at all. This highly inefficient mode of operation is analogous to every neuron in an animal's brain firing at its maximum rate.
When the event-based paradigm used in neuromorphic sensors is extended to deep neural networks, the data-rate reduction becomes even more dramatic, since the total number of processing nodes in a deep convolutional neural network dwarfs the number of input channels.
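As a rough illustration of this data-rate argument, the sketch below compares a dense frame representation with a simple delta/threshold event encoding of the kind used by neuromorphic sensors. It is a minimal toy model with an invented threshold and scene, not ICNS code: events are emitted only where a pixel changes by more than a threshold since its last event.

```python
import numpy as np

def delta_encode(frames, threshold=0.1):
    """Emit an event (t, i, j, sign) only where a pixel changes by more
    than `threshold` relative to the last event at that pixel.
    Returns the event list and the dense sample count for comparison."""
    ref = frames[0].astype(float).copy()   # last transmitted value per pixel
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(float) - ref
        changed = np.abs(diff) > threshold
        for i, j in np.argwhere(changed):
            events.append((t, i, j, np.sign(diff[i, j])))
            ref[i, j] = frame[i, j]        # update reference at the event
    dense_samples = frames.size            # what a frame-based pipeline sends
    return events, dense_samples

# A mostly static scene: a single pixel steps once over 100 frames.
frames = np.zeros((100, 8, 8))
frames[50:, 3, 3] = 1.0
events, dense = delta_encode(frames, threshold=0.1)
print(len(events), "events vs", dense, "dense samples")  # → 1 events vs 6400 dense samples
```

The static pixels generate no traffic at all; a dense frame pipeline would process all 6400 samples regardless, which is the redundancy the event-based paradigm removes.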

In contrast to standard processors, the bio-inspired spiking processor designs that are the focus of the ICNS research mimic the way the brain processes information: they are stochastic, adaptive, and distributed, and they use time itself as the central processing element. With the help of customised hardware platforms, these alternative processor architectures and algorithms can be rapidly prototyped, tested, and compared against the traditional solutions with which they must compete in performance, power efficiency, and speed. In this way our research probes the large search space of potential solutions to real-time event-based processing in hardware for real-world applications.
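One way to make "time itself as the central processing element" concrete is the leaky integrate-and-fire model, the simplest common spiking-neuron abstraction. The sketch below uses invented parameters purely for illustration and is not a description of any ICNS processor design: the neuron integrates its input over time, leaks toward rest, and emits a spike only when the input is salient enough to drive it across threshold, so information is carried by spike timing rather than by dense numerical values.

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks
    toward rest and integrates the input; a spike is emitted only when v
    crosses threshold, after which v is reset."""
    v = 0.0
    spike_times = []
    for t, i in enumerate(input_current):
        v += dt * (-v / tau + i)       # leaky integration of the input
        if v >= v_thresh:
            spike_times.append(t * dt)
            v = v_reset                # reset after the spike
    return spike_times

# A weak constant input saturates below threshold and produces no output;
# a brief strong transient is the only thing that elicits spikes.
weak = [0.02] * 100
strong = [0.02] * 40 + [0.5] * 10 + [0.02] * 50
print(lif_neuron(weak))    # → []
print(lif_neuron(strong))  # a few spike times during the transient
```

The weak input settles at a sub-threshold potential and generates no downstream traffic at all, which is the per-neuron analogue of the redundant-activation suppression described above.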