

Saturday, June 19, 2021. First day of CVPR. Virtual workshop.
Starts at 10 am Eastern Time (4 pm Central European Time).
Held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition 2021.

Welcome to the Third International Workshop on Event-Based Vision!

Pre-recorded videos

Watch all videos here (YouTube playlist).

Invited Speakers


Schedule

Time (EST) | Speaker | Title
10:00 Welcome and Organization. Video, Slides
10:10 SESSION 1 VIDEO OF LIVE SESSION
10:10 Ryad Benosman
(University of Pittsburgh, CMU, Sorbonne)
Event Computer Vision 10 years Assessment: Where We Came From, Where We Are and Where We Are Heading To. Video, Slides
10:20 Gregory Cohen
(Western Sydney Univ.)
Neuromorphic Vision Applications: From Robotic Foosball to Tracking Space Junk. Video
10:30 Robert Mahony
(Australian National Univ.)
Fusing Frame and Event data for High Dynamic Range Video. Video, Slides, Paper
10:40 Kynan Eng
(CEO of iniVation)
High-Performance Neuromorphic Vision: From Core Technologies to Applications. Video
10:50 Panel discussion
11:10 SESSION 2 VIDEO OF LIVE SESSION
11:10 Bernabé Linares-Barranco
(IMSE-CNM, CSIC and Univ. Seville)
Event-driven convolution based processing. Video, Slides
11:20 Kwabena Boahen
(Stanford)
Routing Events in Two-Dimensional Arrays with a Tree. Video, Slides
11:30 Ralph Etienne-Cummings
(Johns Hopkins Univ.)
Learning Spatiotemporal Filters to Track Event-Based Visual Saliency. Video, Slides
11:40 Ignacio Alzugaray
(ETH Zurich)
Towards Asynchronous SLAM with Event Cameras. Video, Slides
11:50 Panel discussion
12:10 INTERMISSION
12:10 Mathias Gehrig: DSEC competition. VIDEO OF LIVE SESSION
12:20 Award Ceremony, VIDEO OF LIVE SESSION
12:30 Poster session of accepted papers and courtesy presentations
Yi Zhou
(HKUST)
Event-based Visual Odometry: A Short Tutorial. Video, Slides
S. Tulyakov (Huawei) and
D. Gehrig (UZH)
Time Lens: Event-based Video Frame Interpolation. Video, Slides
Federico Paredes-Vallés
(TU Delft)
Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy. Video, Slides
Daqi Liu
(U. Adelaide)
Spatiotemporal Registration for Event-based Visual Odometry. Video
Cornelia Fermüller
(U. Maryland)
0-MMS: Zero-Shot Multi-Motion Segmentation With A Monocular Event Camera. Video
Cornelia Fermüller
(U. Maryland)
EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing And Following. Video
Joe Maljian
(Oculi, Inc)
Oculi products portfolio. Video, Slides
14:00 SESSION 3 VIDEO OF LIVE SESSION
14:00 Yulia Sandamirskaya
(Intel Labs)
Neuromorphic computing hardware and event-based vision: a perfect match? Video
14:10 Anthony Bisulco, Daewon Lee, Daniel D. Lee, Volkan Isler
(Samsung AI Center NY)
High Speed Perception-Action Systems with Event-Based Cameras. Video, Slides
14:20 Guido de Croon
(TU Delft, Netherlands)
Event-based vision and processing for tiny drones. Video, Slides
14:30 Chiara Bartolozzi
(IIT, Italy)
Neuromorphic vision for humanoid robots. Video
14:40 Panel discussion
15:00 SESSION 4 VIDEO OF LIVE SESSION
15:00 Luca Verre
(Co-founder and CEO of Prophesee)
From the lab to the real world: event-based vision evolves as a commercial force. Video, Slides
15:10 Oliver Cossairt
(Northwestern Univ.)
Hardware and Algorithm Co-design with Event Sensors. Video, Slides
15:20 Shoushun Chen
(Founder of CelePixel, Will Semiconductor)
Development of Event-based Sensor and Applications. Video, Slides
15:30 Christian Brändli
(CEO of Sony AVS)
Event-Based Computer Vision At Sony Advanced Visual Sensing. Video, Slides
15:40 Panel discussion


Accepted Papers

Reviewer Acknowledgement

We thank our reviewers for their thorough reviews.

Objectives

This workshop is dedicated to event-based cameras, smart cameras, and the algorithms that process data from these sensors. Event-based cameras are bio-inspired sensors whose pixels asynchronously report brightness changes; their key advantages are microsecond temporal resolution, low latency, very high dynamic range, and low power consumption. These advantages open frontiers that are unthinkable with standard frame-based cameras (the dominant sensing technology of the past 60 years): algorithms that track a baseball in the moonlight, fly a robot with the agility of a fly, and perform structure from motion in challenging lighting conditions and at remarkable speeds.

These sensors became commercially available in 2008 and are gradually being adopted in computer vision and robotics. In recent years they have received attention from large companies: the event-sensor company Prophesee collaborated with Intel and Bosch on a high-spatial-resolution sensor, Samsung announced mass production of a sensor for hand-held devices, and event cameras have been paired with neuromorphic chips such as IBM's TrueNorth and Intel's Loihi in various applications.

The workshop also considers novel vision sensors, such as pixel processor arrays (PPAs), that perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor, the resulting systems offer high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance.

The workshop covers the sensing hardware as well as the processing and learning methods needed to take advantage of these novel cameras.
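
To make the data model concrete: an event camera outputs a sparse, asynchronous stream of events, each typically a tuple (t, x, y, polarity) marking a brightness increase or decrease at one pixel. The sketch below is a minimal illustration, not workshop material; the function name and the (t, x, y, polarity) array layout are our own assumptions. It accumulates such a stream into a signed event-count image, a common first step when feeding event data to frame-based algorithms.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate DVS events into a signed event-count image.

    `events` is an (N, 4) array of (t, x, y, polarity) rows,
    with polarity in {-1, +1} and timestamps in microseconds.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(int)
    # np.add.at performs unbuffered addition, so repeated events at the
    # same pixel all count (frame[y, x] += p would drop duplicates).
    np.add.at(frame, (y, x), p)
    return frame

# Example: three synthetic events at two pixels of a 32x32 sensor.
events = np.array([
    [10,  5, 7, +1],   # t = 10 us, pixel (x=5, y=7), brightness increase
    [12,  5, 7, +1],
    [15, 20, 3, -1],   # brightness decrease at (x=20, y=3)
])
frame = events_to_frame(events, height=32, width=32)
print(frame[7, 5], frame[3, 20])  # -> 2 -1
```

In practice, pipelines usually slice the stream into fixed-duration windows or fixed event counts before accumulating, and conventions vary between sensors and libraries (e.g., polarity encoded as {0, 1} rather than {-1, +1}), so check the output format of the device at hand.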

Topics Covered

Organizers


This workshop is sponsored by an event camera manufacturer.


FAQs