APROVIS3D

The APROVIS3D project targets analog computing for artificial intelligence in the form of Spiking Neural Networks (SNNs) on a digital architecture. The project relies on SpiNNaker, applied to a stereopsis system dedicated to coastal surveillance using an aerial robot.

Computer vision systems rely widely on artificial intelligence, and especially on neural-network-based machine learning, which has recently gained huge visibility. The training stage for deep convolutional neural networks is both time and energy consuming. In contrast, the human brain performs visual tasks with unrivalled computational and energy efficiency. One major factor of this efficiency is believed to be that information is largely represented by short pulses (spikes) emitted at analog, not discrete, times. However, computer vision algorithms using such a representation are still scarce in practice, and their high potential remains largely underexploited.

Inspired by biology, the project addresses the scientific question of developing a low-power, end-to-end analog architecture for sensing and processing 3D visual scenes, without a central clock, and aims to validate it in real-life situations. More specifically, the project develops new paradigms for biologically inspired vision, from sensing to processing, to help machines such as Unmanned Aerial Vehicles (UAVs), autonomous vehicles, and robots gain high-level understanding of visual scenes. Event-based neuromorphic vision sensors have led to increased interest in studying and developing a new class of fast and accurate vision systems. This project aims to develop a new design of event-based vision system, based on (1) improved event-based vision sensors, (2) new neuromorphic algorithms, and (3) their implementation on SpiNNaker. With this approach, which diverges radically from mainstream frame-based approaches, we expect more efficient processing for vision tasks such as object detection and optical flow characterization. Moreover, we aim to reach higher information fidelity, without the noise induced by conventional capture and pre-processing, and at a much lower energy cost.

The ambitious long-term vision of the project is to develop the next-generation AI paradigm that will eventually compete with deep learning. We believe that neuromorphic computing, studied mainly in EU countries, will be a key technology in the next decade. It is therefore both a scientific and a strategic challenge for the EU to foster this technological breakthrough.
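As a concrete illustration of point (3), the sketch below describes a small spiking network in PyNN, the simulator-independent Python API that SpiNNaker's sPyNNaker software stack implements. The population sizes, cell model, and weights are illustrative assumptions, not the project's actual architecture; note also that the stock sPyNNaker toolchain is time-stepped, whereas the project's clockless goal goes beyond this standard flow.

```python
# Minimal PyNN sketch of a spiking network that could run on SpiNNaker.
# Population sizes, cell parameters and weights are illustrative only,
# not the APROVIS3D network.
import pyNN.spiNNaker as sim  # sPyNNaker's PyNN backend

sim.setup(timestep=1.0)  # ms; the standard toolchain discretizes time

# Stand-in for an event-based sensor: Poisson spike sources.
events = sim.Population(128, sim.SpikeSourcePoisson(rate=20.0),
                        label="sensor_events")

# One layer of leaky integrate-and-fire neurons.
layer = sim.Population(128, sim.IF_curr_exp(tau_m=10.0, v_thresh=-55.0),
                       label="processing_layer")

# Feed each spike source into one neuron with a fixed synaptic weight.
sim.Projection(events, layer, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.75, delay=1.0))

layer.record("spikes")
sim.run(1000.0)  # ms of simulated time
spikes = layer.get_data("spikes")
sim.end()
```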

The consortium, spanning four European countries, offers the unique combination of expertise the project requires. SNN specialists from various fields, including visual sensors (IMSE, Spain), neural network architectures and computer vision (Uni. of Lille, France), and computational neuroscience (INT, France), will team up with robotics and automatic control specialists (NTUA, Greece) and low-power integrated systems designers (ETHZ, Switzerland) to help geoinformatics researchers (UNIWA, Greece) build a demonstrator UAV for coastal surveillance (TRL 5). Beyond their shared interest in analog computing and computer vision, all team members bring different and complementary points of view and expertise.

Key challenges of the project are end-to-end analog system design (from sensing to AI-based control of the UAV and 3D volumetric reconstruction of the coast), energy efficiency, and practical usability in real conditions. We aim to show that such a bio-inspired analog design brings large benefits in power efficiency and adaptability, making coastal surveillance with UAVs practical and more efficient than digital approaches.

The original event sensor being developed at IMSE is fully analog: it pre-processes the photocurrent generated by the incident illumination, computing the relative temporal changes of illumination with fully analog circuits. The full temporal resolution enables continuous-time operation, which is fundamentally different from standard frame-based cameras, where time is discretized and data is output sequentially at every time step. We expect this sensor to be fabricated during Y2 of the project. The signal information is coded as events, later interpreted as spikes, that occur continuously in time, in an analog fashion. The combination of the asynchronous signal from the sensor with asynchronous computation on the neuromorphic hardware is the most innovative aspect of our approach. We intend to build a fully asynchronous continuous-time system without a central clock, never adding timestamps to the events and spikes.
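To make the sensor's operating principle concrete, the following is a hypothetical discrete-time emulation of the contrast-change rule described above: a pixel emits an ON or OFF event whenever its log-illumination has drifted by more than a threshold since the last event. In the real sensor this comparison happens in continuous time inside analog circuits; the sampling loop, threshold value, and function name below are assumptions for illustration only.

```python
# Hypothetical discrete-time emulation of the analog contrast-change rule:
# emit an ON (+1) or OFF (-1) event when log-illumination changes by more
# than a threshold since the last event. In the actual sensor this happens
# in continuous time with analog circuits; the threshold and the sampling
# loop here are illustrative assumptions.
import math

CONTRAST_THRESHOLD = 0.15  # assumed relative-change threshold


def events_from_intensity(samples):
    """Yield polarity events (+1/-1) from a stream of intensity samples."""
    it = iter(samples)
    ref = math.log(next(it))  # log-intensity at the last emitted event
    for intensity in it:
        delta = math.log(intensity) - ref
        # A large change may cross the threshold several times at once.
        while abs(delta) >= CONTRAST_THRESHOLD:
            polarity = 1 if delta > 0 else -1
            yield polarity
            ref += polarity * CONTRAST_THRESHOLD
            delta = math.log(intensity) - ref


# Example: a brightening then darkening pixel produces ON then OFF events.
trace = [1.0, 1.1, 1.3, 1.6, 1.2, 0.9]
print(list(events_from_intensity(trace)))  # -> [1, 1, 1, -1, -1, -1]
```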