Understanding the Human Visual System via Forensic Neuromorphic Engineering


This project will investigate event-based sensors and networks in the presence of brightness, color, geometry, and motion-based optical illusions. The project will involve the design of networks and systems that replicate these illusions. For each illusion, the visual processing performance of the developed sensor-processor system will be investigated in the context of a relevant task under natural, cluttered visual conditions with the aim of better understanding the ecological pressures that caused the evolution of the system.


How does our brain process visual information? The neuroscience of vision is among the most intensely studied fields in science. Yet due to the vast complexity of the visual system and our technical limitations in probing the living human brain, we are still unable to definitively answer many of the most basic questions about how the visual system operates. The field of Neuromorphic Engineering turns the problem of understanding the brain on its head. Instead of attempting to study the vast array of electrical and chemical pathways in the brain in minute phenomenological detail, Neuromorphic Engineers attempt to replicate the superior performance of its systems under the same power, speed, accuracy, and structural constraints. The assumption is that these constraints significantly narrow the solution space, so that any solution meeting the same requirements is likely to share functional properties with those used in the brain. Such solutions are likely to provide deep insights into how the brain processes information.

Neuromorphic engineers have developed a range of biologically inspired sensors that perceive the world in a similar manner to the human eye. One such sensor, called the Dynamic Vision Sensor (DVS) or Silicon Retina, operates entirely differently to a normal camera. The DVS operates in an event-based manner, that is, without sequentially capturing frames via a global shutter. Instead, the event-based DVS models the photoreceptor circuits present in the retina: each pixel operates independently and generates spikes (events) only in response to detected changes in the visual scene.
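The per-pixel change detection described above can be sketched in a few lines. The following is a minimal simulation, not the actual DVS circuit: each pixel tracks the log intensity at its last event and fires an ON (+1) or OFF (-1) event when the current log intensity differs from that reference by more than a contrast threshold. The threshold value and the reset-to-current-level behaviour are illustrative simplifications.

```python
import numpy as np

def dvs_events(frames, threshold=0.2, eps=1e-6):
    """Toy per-pixel DVS model: emit +1/-1 events when log intensity
    changes by more than `threshold` since the pixel's last event."""
    log_frames = np.log(frames + eps)
    ref = log_frames[0].copy()            # per-pixel reference level
    events = []                           # (t, y, x, polarity)
    for t, frame in enumerate(log_frames[1:], start=1):
        diff = frame - ref
        on = diff >= threshold
        off = diff <= -threshold
        for y, x in zip(*np.where(on)):
            events.append((t, y, x, +1))
        for y, x in zip(*np.where(off)):
            events.append((t, y, x, -1))
        # simplification: reset reference to the current level at firing pixels
        ref[on | off] = frame[on | off]
    return events

# A static scene emits nothing; a brightening step emits one ON event per pixel.
frames = np.stack([np.full((2, 2), 0.5)] * 3 + [np.full((2, 2), 1.0)])
evts = dvs_events(frames)
```

Note that nothing is emitted for the unchanging frames, which is the key efficiency property of event-based sensing: bandwidth is spent only where the scene changes.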

As every good engineer knows, one of the best ways to understand how a system works is to investigate its failure modes. The investigation and replication of modes of failure, called Forensic Engineering, allows us to understand how the design of a complex system when interacting with a complex environment can result in unexpected outputs. By studying the edge cases where the biological visual system responds in an unexpected or incorrect manner, we can gain deep insights into its internal functioning.

Event-based sensors such as the DVS allow us to perform a wide range of experiments in controlled as well as natural environments. By capturing stimuli that cause optical illusions in the same event-based way as the human eye, we can attempt to design and investigate processing networks that reproduce the same effects. This would provide us with real-time working models of the key functional pathways that cause the illusions. Once developed, these working "faulty" visual processing systems can be tested in the type of natural, cluttered visual environments where their biological counterparts originally evolved, and have their performance measured on the types of visual detection, recognition, and tracking tasks that made the difference between life and death. In this way, real, working, neuromorphic, event-based sensor-processor systems operating in the real world may allow us not only to investigate optical illusions but also to gain insights into why they evolved in the first place and what functions they serve.

Tasks to be completed

Investigation of event-based greyscale image reconstruction networks and event-based lateral inhibition networks on lightness-constancy test stimuli such as the Munker–White's Illusion and Gradient Illusion.
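One simple baseline for the reconstruction part of this task is direct event integration: because each DVS event nominally signals a fixed step in log intensity, accumulating signed events per pixel recovers log intensity up to the unknown starting level. The sketch below assumes events in a hypothetical (t, y, x, polarity) format and an illustrative contrast threshold; real reconstruction networks would additionally regularise noise and threshold mismatch.

```python
import numpy as np

def reconstruct_log_intensity(events, shape, threshold=0.2, init=None):
    """Naive reconstruction: treat every ON/OFF event as a +/- `threshold`
    step in log intensity and accumulate the steps per pixel."""
    log_img = np.zeros(shape) if init is None else init.copy()
    for _, y, x, polarity in events:      # events as (t, y, x, +/-1)
        log_img[y, x] += polarity * threshold
    return log_img

# Two ON events at pixel (1, 1) raise its estimate by 2 * threshold.
est = reconstruct_log_intensity([(0, 1, 1, +1), (1, 1, 1, +1)], (3, 3))
```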

A working event-based model of retinal photopigment depletion which replicates the Negative Afterimage illusion. This can potentially be extended to an investigation of Positive Afterimages and also the color domain.
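The afterimage mechanism can be previewed with a leaky adaptation model, sketched below under illustrative assumptions (a single adaptation state per pixel and a hypothetical time constant tau standing in for photopigment depletion and regeneration). Each pixel's adaptation state slowly drifts toward the incident intensity, and the perceived response is the input minus that state; when a bright stimulus is removed, the still-elevated adaptation state drives the response negative, i.e. a negative afterimage.

```python
import numpy as np

def adapt(stimulus_frames, tau=20.0):
    """Leaky adaptation model: response = input - adaptation, where the
    adaptation state drifts toward the input with time constant `tau`."""
    adaptation = np.zeros_like(stimulus_frames[0])
    responses = []
    for frame in stimulus_frames:
        responses.append(frame - adaptation)       # adapted (perceived) signal
        adaptation += (frame - adaptation) / tau   # slow drift toward input
    return responses

# Stare at a bright patch for 100 frames, then darkness:
# the previously bright region swings negative (the afterimage).
bright = np.zeros((4, 4)); bright[1:3, 1:3] = 1.0
frames = [bright] * 100 + [np.zeros((4, 4))] * 10
resp = adapt(frames)
```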

Design of event-based lateral inhibition networks which replicate the Hermann Grid and Scintillating Grid illusions. An extension could include an examination of the benefits of the developed event-based system for edge detection and enhancement in real-world cluttered environments.
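The frame-based core of the lateral inhibition mechanism is easy to demonstrate before moving to events. The sketch below uses an on-center/off-surround cell (excitatory center minus the mean of the 8-neighbour surround) on a toy Hermann grid whose street width and spacing are illustrative choices; at street intersections the surround contains more white, so inhibition is stronger and the response dips, matching the illusory dark spots.

```python
import numpy as np

def center_surround(img):
    """On-center/off-surround lateral inhibition: each pixel's response is
    its own value minus the mean of its 8 neighbours (edge-padded)."""
    padded = np.pad(img, 1, mode="edge")
    surround = sum(
        padded[1 + dy: 1 + dy + img.shape[0], 1 + dx: 1 + dx + img.shape[1]]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    ) / 8.0
    return img - surround

# Hermann grid: black squares separated by white streets one pixel wide.
grid = np.zeros((13, 13))
grid[::4, :] = 1.0   # horizontal streets
grid[:, ::4] = 1.0   # vertical streets
resp = center_surround(grid)
# Response at an intersection (4, 4) is weaker than mid-street (4, 2):
# the illusory grey spot.
```

The same center-surround structure is what makes lateral inhibition useful for edge enhancement in cluttered scenes, which is the proposed extension.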

Investigation of color constancy using event-based color reconstruction networks in the presence of spatial color mixing stimuli such as in the von Bezold effect.

Investigation of event-based laterally inhibited edge detection networks in the presence of geometrical-optical illusions such as the Café Wall and Münsterberg Checkerboard illusions. An extension could include a comparison of the network with and without lateral inhibition for object identification in cluttered environments.

Investigation of event-based laterally inhibited visual motion estimation networks in the presence of motion inducing optical illusions such as the Dynamic Luminance-Gradient effect, Fraser Wilcox illusion, and Barberpole illusion. An extension could involve testing of developed networks to estimate motion in natural visual scenes.

Tools/Skills required

Experience with C++, Python or Matlab for network design and testing


Dr Saeed Afshar