
Saturday, June 19, 2021. First day of CVPR. Virtual workshop.
Starts at 10 am Eastern Time (4 pm Central European Summer Time).
Held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition 2021.

Welcome to the Third International Workshop on Event-Based Vision!

Objectives

This workshop is dedicated to event-based cameras, smart cameras, and algorithms processing data from these sensors. Event-based cameras are bio-inspired sensors with the key advantages of microsecond temporal resolution, low latency, very high dynamic range, and low power consumption. Because of these advantages, event-based cameras open frontiers that are unthinkable with standard frame-based cameras (which have been the main sensing technology of the past 60 years). These revolutionary sensors enable the design of a new class of algorithms to track a baseball in the moonlight, build a flying robot with the agility of a fly, and perform structure from motion in challenging lighting conditions and at remarkable speeds.

These sensors became commercially available in 2008 and are slowly being adopted in computer vision and robotics. In recent years they have received attention from large companies: the event-sensor company Prophesee collaborated with Intel and Bosch on a high spatial resolution sensor, Samsung announced mass production of a sensor to be used on hand-held devices, and event cameras have been used in various applications on neuromorphic chips such as IBM’s TrueNorth and Intel’s Loihi.

The workshop also considers novel vision sensors, such as pixel processor arrays (PPAs), that perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor, the resulting systems have high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance. This workshop will cover the sensing hardware, as well as the processing and learning methods needed to take advantage of the above-mentioned novel cameras.
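To make the data format concrete: each event is an asynchronous tuple of pixel coordinates, a timestamp, and a brightness-change polarity, and a common first processing step is to accumulate events over a short time window into a frame-like image. The Python sketch below illustrates this under the assumption that events arrive as (x, y, t, polarity) rows; real sensor APIs and drivers differ in layout and units.

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate event polarities over [t_start, t_end) into a 2D frame.

    `events` is assumed to be an array of (x, y, t, polarity) rows with
    polarity in {-1, +1}; actual camera SDKs use different structures.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            frame[int(y), int(x)] += int(p)
    return frame

# Toy example: three synthetic events within a 10 ms window.
events = np.array([
    [12, 30, 0.001, +1],   # brightness increase at pixel (12, 30)
    [12, 31, 0.004, -1],   # brightness decrease at pixel (12, 31)
    [40,  5, 0.009, +1],
])
print(events_to_frame(events, height=64, width=64, t_start=0.0, t_end=0.01).sum())
```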

Topics Covered

List of Talks and Speakers

Ralph Etienne-Cummings (Johns Hopkins Univ., USA)
Learning Spatiotemporal Filters to Track Event-Based Visual Saliency.

Abstract: Uncovering the nuances behind visual saliency, or the tendency to gaze in a particular direction or toward a specific object, is critical to understanding what the human mind focuses on in a field of vision, and why. There are a wide variety of applications in which saliency would provide significant steps forward, such as tele-tourism, high-accuracy drone cameras, live-data analysis for traffic, and criminal investigations. More specifically, visual saliency in the form of event-based information is particularly attractive because event-based data encodes information in a more compressed and power-efficient manner. In this talk, we discuss an unsupervised learning scheme to learn spatiotemporal filters that can identify and track salient features in an event-based data stream. We show how decision trees and threshold tracking can learn interesting features that are not easily discernible by the human eye, and further compare our findings to a ground-truth human-based saliency experiment with event-based data. We compare hand-crafted and learned filters against the ground-truth human-based data and stress the need for the first event-based visual saliency ground-truth dataset.
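As a rough, hypothetical illustration of the kind of operation involved (not the learned filters of the talk), the sketch below convolves a spatiotemporal volume of accumulated events with a hand-crafted 3D filter and thresholds the response to pick a candidate salient location; all names and parameter values are made up.

```python
import numpy as np
from scipy.ndimage import convolve

def salient_location(event_volume, st_filter, threshold):
    """Convolve a (T, H, W) volume of accumulated events with a 3D
    spatiotemporal filter and return the location of the strongest
    response if it exceeds `threshold`, otherwise None."""
    response = convolve(event_volume.astype(float), st_filter, mode="constant")
    peak = np.unravel_index(np.argmax(response), response.shape)
    return peak if response[peak] > threshold else None

# Toy example: a uniform 3x3x3 filter applied to a random binary volume.
rng = np.random.default_rng(0)
volume = rng.integers(0, 2, size=(10, 32, 32))
st_filter = np.ones((3, 3, 3)) / 27.0
print(salient_location(volume, st_filter, threshold=0.5))
```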

Biography: Ralph Etienne-Cummings, an IEEE Fellow, received his B.Sc. in physics in 1988 from Lincoln University, Pennsylvania. He completed his M.S.E.E. ('91) and Ph.D. ('94) in electrical engineering at the University of Pennsylvania. Currently, Dr. Etienne-Cummings is a Professor and previous (7/2014 – 7/2020) Chairman of the Department of Electrical and Computer Engineering at Johns Hopkins University (JHU). He was the founding Director of the Institute of Neuromorphic Engineering. He has served as Chairman of various IEEE Circuits and Systems (CAS) Technical Committees and was elected as a member of the CAS Board of Governors. He also serves on numerous editorial boards and was recently appointed Deputy Editor in Chief for the IEEE Transactions on Biomedical Circuits and Systems. He is the recipient of the NSF CAREER and Office of Naval Research Young Investigator Program Awards, among many other recognitions. He was a Visiting African Fellow at U. Cape Town, a Fulbright Fellowship Grantee, and an Eminent Visiting Scholar at U. Western Sydney, and has also won numerous publication awards, most recently the 2012 Most Outstanding Paper of the IEEE Transactions on Neural Systems and Rehabilitation Engineering. He was also recognized as a "ScienceMaker" in an African American history archive and for the "Indispensable Roles of African Americans at JHU" exhibit. He has published over 250 peer-reviewed articles and 11 books/chapters, and holds 20 patents/applications on his work.

Bernabé Linares-Barranco (IMSE-CNM, CSIC and Univ. Seville, Spain)
Event-driven convolution based processing.

Abstract: We will review some of the event-driven hardware developments in which our lab has been involved, ranging from sensitive DVS sensors to event-driven convolutions on dedicated ASICs, FPGAs, and the SpiNNaker platform, with applications in object recognition and stereo vision. We will show how to train event-driven convnets to minimize the number of required spikes, reducing energy consumption for the same recognition tasks. Additionally, we will present some results on a type of spike-timing-dependent plasticity that uses only binary weights combined with stochasticity, resulting in hardware that requires fewer resources and less energy for the same accuracy.
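To give a flavor of what event-driven convolution means in software terms, here is a minimal sketch in which each incoming event adds a weighted kernel patch to an array of neuron potentials, and output events are emitted wherever a potential crosses a threshold. This is only an analogy under assumed names and parameters, not a description of the ASIC, FPGA, or SpiNNaker implementations discussed in the talk.

```python
import numpy as np

def event_driven_conv(events, kernel, out_shape, threshold):
    """Event-driven convolution: each input event adds the kernel to a
    neighborhood of accumulated membrane potentials; neurons crossing the
    threshold emit an output event and reset."""
    H, W = out_shape
    kh, kw = kernel.shape
    potentials = np.zeros((H, W))
    out_events = []
    for x, y, t, p in events:                      # (x, y, timestamp, polarity)
        y0, x0 = int(y) - kh // 2, int(x) - kw // 2
        for dy in range(kh):
            for dx in range(kw):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= yy < H and 0 <= xx < W:
                    potentials[yy, xx] += p * kernel[dy, dx]
                    if potentials[yy, xx] >= threshold:
                        out_events.append((xx, yy, t))
                        potentials[yy, xx] = 0.0   # reset after firing
    return out_events

# Toy example: a few positive events pushed through a 3x3 averaging kernel.
print(event_driven_conv([(5, 5, 0.001, 1), (5, 6, 0.002, 1), (6, 5, 0.003, 1)],
                        np.ones((3, 3)) / 9.0, out_shape=(16, 16), threshold=0.3))
```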

Biography: Bernabé Linares-Barranco received a first Ph.D. degree in high-frequency OTA-C oscillator design in June 1990 from the University of Seville, Spain, and a second Ph.D. degree in analog neural network design in December 1991 from Texas A&M University, College Station, USA. Since June 1991, he has been a Tenured Scientist at the "Instituto de Microelectrónica de Sevilla". From September 1996 to August 1997, he was on a sabbatical stay at the Department of Electrical and Computer Engineering of Johns Hopkins University. During Spring 2002 he was a Visiting Associate Professor at the Electrical Engineering Department of Texas A&M University, College Station, USA. In January 2003 he was promoted to Tenured Researcher, and in January 2004 to Full Professor. Since February 2018, he has been the Director of the "Instituto de Microelectrónica de Sevilla".

He has been involved with circuit design for telecommunication circuits, VLSI emulators of biological neurons, VLSI neural-based pattern recognition systems, hearing aids, precision circuit design for instrumentation equipment, and VLSI transistor mismatch parameter characterization, and over the past 20 years has been deeply involved with neuromorphic spiking circuits and systems, with a strong emphasis on vision and on exploiting nanoscale memristive devices for learning. He is co-founder of two start-ups, Prophesee SA (www.prophesee.ai) and GrAI-Matter-Labs SAS (www.graimatterlabs.ai), both on neuromorphic hardware. He has been Associate Editor of the IEEE Transactions on Circuits and Systems Part II, the IEEE Transactions on Neural Networks, and "Frontiers in Neuromorphic Engineering". Since January 2021 he has been Chief Editor of "Frontiers in Neuromorphic Engineering". He has been an IEEE Fellow since January 2010. He is listed among the Stanford top 2% most cited scientists worldwide in Electrical and Electronic Engineering (top 0.62%).

Oliver Cossairt (Northwestern Univ., USA)
Hardware and Algorithm Co-design with Event Sensors.

Abstract: In this talk I will provide an overview of our research developing hardware/software co-designs with event sensors, focusing on methods to fuse together information acquired from multiple sensing modalities for task-specific processing such as image reconstruction, object detection, and tracking. I will discuss three main thrusts of research: 1) extracting 3D information from event data using structured light and inverse rendering; 2) fusing event sensor data together with conventional frame-based camera images; and 3) a feedback-driven, chip-host architecture built for lightweight on-camera processing, equipped with novel data compression algorithms for high-bandwidth, task-specific chip/host communication. Finally, I will wrap up by briefly discussing our current work in progress developing spiking neural network (SNN) algorithms to leverage similar co-design principles.
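As a toy illustration of frame/event fusion in its most direct form (not the methods presented in this talk), the sketch below propagates a conventional frame forward in time by integrating event polarities in the log-intensity domain, following the standard event generation model; the contrast threshold value is an assumption.

```python
import numpy as np

def fuse_frame_and_events(frame, events, contrast_threshold=0.2):
    """Propagate a conventional frame forward in time by integrating event
    polarities in the log-intensity domain (log I += p * C per event).

    `events` are assumed to be (x, y, t, polarity) tuples arriving after the
    frame's timestamp; `contrast_threshold` is a made-up sensor constant.
    """
    log_img = np.log(frame.astype(float) + 1e-3)
    for x, y, t, p in events:
        log_img[int(y), int(x)] += p * contrast_threshold
    return np.exp(log_img)

# Toy example: brighten one pixel of a flat gray frame with two ON events.
frame = np.full((8, 8), 128.0)
print(fuse_frame_and_events(frame, [(3, 4, 0.001, +1), (3, 4, 0.002, +1)])[4, 3])
```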

Biography: Oliver Cossairt is an Associate Professor in the Computer Science (CS) and Electrical and Computer Engineering (ECE) departments at Northwestern University. Prof. Cossairt is director of the Computational Photography Laboratory (CPL) at Northwestern University (compphotolab.northwestern.edu), whose research consists of a diverse portfolio spanning optics/photonics, computer graphics, computer vision, machine learning, and image processing. The general goal of CPL is to develop imaging hardware and algorithms that can be applied across a broad range of physical scales, from nanometer to astronomical. This includes active projects on 3D nano-tomography, computational microscopy, cultural heritage imaging analysis of paintings, structured light and ToF 3D scanning of macroscopic scenes, de-scattering through fog for remote sensing, and coded aperture imaging for astronomy. Prof. Cossairt has garnered funding from numerous corporate sponsorships (Google, Rambus, Samsung, Omron, Oculus/Facebook, Zoloz/Alibaba) and federal funding agencies (ONR, NIH, DOE, DARPA, IARPA, NSF CAREER Award).

Gregory Cohen (Western Sydney Univ., Australia)
Neuromorphic Vision Applications: From Robotic Foosball to Tracking Space Junk.

Abstract: Neuromorphic event-based cameras offer a different way to approach visual imaging tasks and excel at problems in which they can leverage the unique way the hardware works. This talk will introduce a range of applications for neuromorphic cameras, from tracking space junk and satellites to robotic foosball and pinball. We will demonstrate real-world results from space tracking with event-based cameras, and introduce our Astrosite mobile neuromorphic telescope observatories, built specifically to leverage the benefits of neuromorphic space imaging. We will describe some of the problems with benchmarking and comparing neuromorphic systems, and show how robotic foosball and robotic pinball machines may be a great way to demonstrate the benefits of neuromorphic systems.

Biography: Gregory Cohen is an Associate Professor in Neuromorphic Systems at the International Centre for Neuromorphic Systems (ICNS) at Western Sydney University and program lead for neuromorphic algorithms and space applications. Prior to returning to research from industry, he worked in several start-ups and established engineering and consulting firms including working as a consulting engineer in the field of large-scale HVAC from 2007 to 2009, as an electronic design engineer from 2009 to 2011, and as an expert consultant for Kaiser Economic Development Practice in 2012. He is a pioneer of event-based and neuromorphic sensing for space imaging applications and his research interests include unsupervised feature extraction, bio-inspired machine learning, and neuromorphic computation systems. Greg holds a BSc(Eng), MSc(Eng), and BCom(Hons) from the University of Cape Town, South Africa and a joint PhD from Western Sydney University, Sydney, Australia and the University of Pierre and Marie Curie in Paris, France.

Guido de Croon (TU Delft, Netherlands)
Event-based vision and processing for tiny drones.

Abstract: Event-based vision and processing hold an important promise for creating autonomous tiny drones. Both promise to be lightweight and highly energy efficient, while allowing for high-speed perception and control. For tiny drones, these characteristics are essential, as they are extremely restricted in terms of size, weight, and power, while at smaller scales drones become even more agile. In my talk, I will present our work on developing event-based perception and control for tiny autonomous drones. I will delve into the approach we followed for having spiking neural networks learn visual tasks such as optical flow estimation. Furthermore, I will explain our ongoing effort to integrate these networks in autonomously flying drones.
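For readers unfamiliar with spiking networks, the sketch below simulates a single leaky integrate-and-fire neuron, the basic building block from which such networks are assembled; it is a generic textbook model with made-up parameters, not the trained optical-flow networks described in the talk.

```python
import numpy as np

def lif_neuron(input_spikes, weight=0.5, tau=20.0, threshold=1.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron driven by a binary
    input spike train; returns the output spike train."""
    v, out = 0.0, []
    decay = np.exp(-dt / tau)          # membrane leak per time step
    for s in input_spikes:
        v = v * decay + weight * s     # leak, then integrate weighted input
        if v >= threshold:             # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

# Toy example: a short input spike train producing sparse output spikes.
print(lif_neuron([1, 0, 1, 1, 0, 1, 1, 1]))
```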

Biography: Guido de Croon received his M.Sc. and Ph.D. in the field of Artificial Intelligence (AI) at Maastricht University, the Netherlands. His research interest lies in computationally efficient, bio-inspired algorithms for robot autonomy, with an emphasis on computer vision. Since 2008 he has worked on algorithms for achieving autonomous flight with small and lightweight flying robots, such as the DelFly flapping-wing MAV. In 2011-2012, he was a research fellow in the Advanced Concepts Team of the European Space Agency, where he studied topics such as optical flow based control algorithms for extraterrestrial landing scenarios. After his return to TU Delft, his work has included fully autonomous flight of a 20-gram DelFly, a new theory on active distance perception with optical flow, and a swarm of tiny drones able to explore unknown environments. Currently, he is Full Professor at TU Delft and scientific lead of the Micro Air Vehicle lab (MAVLab) of Delft University of Technology.

Kynan Eng (CEO of iniVation, Switzerland)
High-Performance Neuromorphic Vision: From Core Technologies to Applications.

Abstract: Neuromorphic event-based vision can enable new levels of enhanced vision sensing in situations where current technologies fail. In this presentation, we provide an overview of our technology, our DV open developer environment, and some real-world application examples.

Biography: Kynan Eng is co-founder and CEO at iniVation. Prior to co-founding iniVation, he was PI of a research group at the Institute of Neuroinformatics at the University of Zurich and ETH Zurich. He also worked in the past at ABB and Alstom. He holds a PhD from the ETH Zurich, and degrees in computer science and mechanical engineering from Monash University.

Shoushun Chen (Founder of CelePixel, now part of Will Semiconductor, China)
Development of Event-based Sensor and Applications.

Abstract: Event cameras have demonstrated great potential to solve problems in many applications such as robotics, mobile, automotive, gaming, and computer vision. This talk will introduce recent developments by CelePixel. We will first revisit the pixel architecture, then discuss the limiting factors of temporal resolution, which may also apply to other event sensors, and finally introduce an efficient event-based HCI framework.

Biography: Dr. Shoushun Chen received his B.S., M.E., and Ph.D. degrees in 2000, 2003, and 2007, respectively. He held a postdoctoral research fellowship at the Hong Kong University of Science and Technology for one year after graduation. From February 2008 to May 2009 he was a postdoctoral research associate at Yale University. In July 2009, he joined Nanyang Technological University as a faculty member. Dr. Chen is a founder of CelePixel Technology, which is now part of Will Semiconductor.

Dr. Chen is a senior member of the IEEE. He serves as a member and Chair-Elect of the Sensory Systems Technical Committee of the IEEE Circuits and Systems Society (CASS); Associate Editor of the IEEE Sensors Journal; Program Director (Smart Sensors) of VIRTUS, IC Design Centre of Excellence; and a regular reviewer for a number of international conferences and journals such as TVLSI, TCAS-I/II, TBioCAS, TPAMI, Sensors, and TCSVT.

His research interests include smart image sensors and imaging systems, remote sensing imaging systems, and mixed-signal integrated circuits.

Christian Brändli (CEO of Sony Advanced Visual Sensing AG, Switzerland)
Event-Based Computer Vision At Sony AVS.

Abstract: Sony Advanced Visual Sensing is a research center of Sony Semiconductor Solutions, the world leader in image sensors. With a long history in the field, Sony AVS works on event-based vision sensors (EVS) and computer vision algorithms. First, the talk will introduce some core principles of event-based processing which have been gathered over the years. The second part of the talk will then highlight some recent applications of event-based algorithms developed at Sony AVS.

Biography: Christian Brändli did his PhD at ETH Zurich in the research group of EVS pioneer Prof. Tobi Delbruck at the Institute of Neuroinformatics where he contributed to early event-based vision sensors and algorithms. After his graduation he co-founded the startup Insightness which developed the first stacked event-based image sensor and benchmark-beating algorithms. After the Insightness team joined Sony in 2019 he became the CEO of Sony AVS.

Anthony Bisulco, Daewon Lee, Volkan Isler (Samsung AI Center NY, USA)
High Speed Perception-Action Systems with Event-Based Cameras.

Abstract: High-speed perception-action systems are important for mobile robot systems to react in dynamic environments. Event-based cameras have attractive properties for these systems such as high dynamic range, efficient energy use and low latency sensing. At Samsung’s AI Center in NY (SAIC-NY) we have been working on novel DVS-based systems and algorithms to capitalize on these properties. Our previous work in this domain includes a near-chip architecture for low-complexity pedestrian detection on bandwidth-limited networks. In this talk, we will present an overview of our most recent work where the goal is to create high speed perception-action systems for collision avoidance.

The introduction of robots into kitchen environments will require avoiding incoming high-speed obstacles such as falling spices, liquids, or sharp objects. Our experimental test-bed for exploring these systems consists of shooting a toy dart (22 m/s) at a target mounted on a linear actuator, with a static event-based camera observing the motion head-on. We developed a perception system that, during the dart's flight, extracts the time to collision and the impact location on the camera plane from the event stream in order to trigger a collision avoidance system. The entire dart flight lasts around 150 ms, so we also analyze the various latencies of the perception-action system and the system tradeoffs for collision avoidance. This analysis revealed an initial observability latency of up to 100 ms for the dart, which led us to use a telescopic lens to reduce this delay to 20 ms. A benefit of using an event camera in this scenario, as opposed to a 60 Hz frame-based imager, is that the perception process can acquire ~100 ms of in-focus events rather than one or two motion-blurred frames. Evaluating on event data, our perception system estimates the time to collision to within 24.73% and the impact location to within 18.4 mm on our testing dataset. Overall, our perception system and minimal system latency allow the robot to successfully avoid a fast incoming toy dart.
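As a back-of-the-envelope illustration of how time to collision can be recovered from an approaching object's apparent growth (not the SAIC-NY pipeline itself), the sketch below fits a line to the inverse of the tracked object's apparent linear size over time and extrapolates to its zero crossing; all numbers are invented.

```python
import numpy as np

def time_to_collision(times, areas):
    """Estimate time to collision from the growth of the tracked object's
    apparent size: at constant approach speed, 1/sqrt(area) decreases
    roughly linearly with time and reaches zero at impact."""
    inv_scale = 1.0 / np.sqrt(np.asarray(areas, dtype=float))
    slope, intercept = np.polyfit(times, inv_scale, 1)
    t_impact = -intercept / slope          # where the fitted line crosses zero
    return t_impact - times[-1]            # seconds remaining after last sample

# Toy numbers: apparent area grows as the object approaches over 60 ms.
print(time_to_collision([0.00, 0.02, 0.04, 0.06], [100, 180, 420, 1600]))
```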

Biography: Anthony Bisulco is a researcher at the Samsung Artificial Intelligence Center New York, where he works on projects at the intersection of robotics, machine learning, and neuroscience. Anthony received a Master of Engineering degree from Cornell University and a Bachelor of Science degree from Northeastern University. Anthony has performed research in a variety of fields at the European Center for Nuclear Research (CERN), the Sensing, Imaging, Control, and Actuation Laboratory, Google, MIT Lincoln Laboratory, and Brookhaven National Laboratory.

Schedule

The tentative schedule is the following:

  1. Session: Neuromorphic cameras and computing.
  2. Session: Event-based sensors in computer vision.
  3. Session: Algorithms and Architectures.
  4. Session: Industrial companies and applications.
  5. Final Panel Discussion.

Accepted Papers

Courtesy presentations

We also invite courtesy presentations (short talks) of related papers that are accepted at CVPR main conference or at other conferences. These presentations provide visibility to your work and help to build a community around the topics of the workshop. Please contact the organizers to make arrangements to showcase your work at the workshop.

Organizers

FAQs