

Monday, June 19, 2023 (2nd day of CVPR), Vancouver, Canada. Starts at 8 am local time (4 pm Europe time).
Held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition 2023.

Welcome to the 4th International Workshop on Event-Based Vision!

Group photo by S. Shiba. Many thanks to all who contributed and made this workshop possible!

Photo Album of the Workshop

Videos! YouTube Playlist

Speakers


Schedule (starts at 8 am local time)

Time Speaker and Title
08:00 SESSION 1
08:00 Welcome and Organization. Video, Slides
08:05 Kaushik Roy (Purdue University). Re-thinking Computing with Neuro-Inspired Learning: Algorithms, Sensors, and Hardware Architecture. Video, Slides
08:35 Ryad Benosman (Meta). The Interplay Between Events and Frames: A Comprehensive Explanation. Video, Slides
09:05 Katie Schuman (University of Tennessee). A Workflow for Low-Power Neuromorphic Computing for Event-Based Vision Applications. Video
09:35 André van Schaik (Western Sydney University). Applications of Event-based Vision at the International Centre for Neuromorphic Systems. Video
10:00 Coffee break. Set up posters
10:30 SESSION 2
10:30 Poster session: contributed papers, demos and courtesy papers (as posters). See links below. Boards 1–49.
12:30 Lunch break
13:30 SESSION 3
13:30 Felix Heide (Princeton University). Neural Nanophotonic Cameras. Video
14:00 Arren Glover (Italian Institute of Technology). Real-time, Speed-invariant, Vision for Robotics. Video, Slides
14:30 NeuroPAC. Video
14:35 Cornelia Fermüller (University of Maryland). When do neuromorphic sensors outperform cameras? Learning from dynamic features. Video, Slides
14:40 Daniel Gehrig (Scaramuzza's Lab, University of Zurich). Efficient event processing with geometric deep learning. Video, Slides
14:45 Guillermo Gallego (TU Berlin, ECDF, SCIoI). Event-based Robot Vision for Autonomous Systems and Animal Observation. Video, Slides
14:50 Kenneth Chaney and Fernando Cladera (Daniilidis' Lab, University of Pennsylvania). M3ED: Multi-robot, Multi-Sensor, Multi-Environment Event Dataset. Video
15:00 Boxin Shi (Peking University). NeuCAP: Neuromorphic Camera Aided Photography. Video, Slides
15:30 Coffee break
16:00 SESSION 4
16:00 Yulia Sandamirskaya and Andreas Wild (Intel Labs). Visual Processing with Loihi 2. Video, Slides
16:20 Kynan Eng (iniVation). Beyond Frames and Events: Next Generation Visual Sensing. Video
16:40 Atsumi Niwa (SONY). Event-based Vision Sensor and On-chip Processing Development. Video, Slides
17:00 Andreas Suess (OmniVision Technologies). Towards hybrid event/image vision. Video
17:20 Nandan Nayampally (Brainchip). Enabling Ultra-Low Power Edge Inference and On-Device Learning with Akida. Video
17:35 Christoph Posch (Prophesee). Event sensors for embedded AI vision applications. Video
17:50 Award Ceremony. Video, Slides

Proceedings at The Computer Vision Foundation (CVF)

List of accepted papers and live demos

  1. M3ED: Multi-Robot, Multi-Sensor, Multi-Environment Event Dataset, Dataset, Code
  2. Asynchronous Events-based Panoptic Segmentation using Graph Mixer Neural Network, and Suppl mat, Code and data
  3. Event-IMU fusion strategies for faster-than-IMU estimation throughput, and Suppl mat
  4. Fast Trajectory End-Point Prediction with Event Cameras for Reactive Robot Control, and Suppl mat, Code
  5. Exploring Joint Embedding Architectures and Data Augmentations for Self-Supervised Representation Learning in Event-Based Vision, and Suppl mat, Poster, Code
  6. Event-based Blur Kernel Estimation For Blind Motion Deblurring
  7. Neuromorphic Event-based Facial Expression Recognition, Dataset
  8. Low-latency monocular depth estimation using event timing on neuromorphic hardware
  9. Frugal event data: how small is too small? A human performance assessment with shrinking data, Poster
  10. Flow cytometry with event-based vision and spiking neuromorphic hardware, Code, Poster
  11. How Many Events Make an Object? Improving Single-frame Object Detection on the 1 Mpx Dataset, and Suppl mat, Code
  12. EVREAL: Towards a Comprehensive Benchmark and Analysis Suite for Event-based Video Reconstruction, and Suppl mat, Code, Poster
  13. X-maps: Direct Depth Lookup for Event-based Structured Light Systems, and Poster, Code
  14. PEDRo: an Event-based Dataset for Person Detection in Robotics, Dataset
  15. Density Invariant Contrast Maximization for Neuromorphic Earth Observations, and Suppl mat, Code, Poster
  16. Entropy Coding-based Lossless Compression of Asynchronous Event Sequences, and Suppl mat, Poster
  17. MoveEnet: Online High-Frequency Human Pose Estimation with an Event Camera, and Suppl mat, Code
  18. Predictive Coding Light: learning compact visual codes by combining excitatory and inhibitory spike timing-dependent plasticity
  19. Neuromorphic Optical Flow and Real-time Implementation with Event Cameras, and Suppl mat, Poster, Video
  20. HUGNet: Hemi-Spherical Update Graph Neural Network applied to low-latency event-based optical flow, and Suppl mat
  21. End-to-end Neuromorphic Lip-reading, Poster
  22. Sparse-E2VID: A Sparse Convolutional Model for Event-Based Video Reconstruction Trained with Real Event Noise, and Suppl mat, Video
  23. Interpolation-Based Event Visual Data Filtering Algorithms, Code
  24. Within-Camera Multilayer Perceptron DVS Denoising, Suppl mat, Code, Poster
  25. Shining light on the DVS pixel: A tutorial and discussion about biasing and optimization, Tool
  26. PDAVIS: Bio-inspired Polarization Event Camera, Suppl mat, Code
  27. Live Demonstration: E2P–Events to Polarization Reconstruction from PDAVIS Events, and Suppl mat, Code
  28. Live Demonstration: PINK: Polarity-based Anti-flicker for Event Cameras, Video
  29. Live Demonstration: Event-based Visual Microphone, and Suppl mat
  30. Live Demonstration: SCAMP-7
  31. Live Demonstration: ANN vs SNN vs Hybrid Architectures for Event-based Real-time Gesture Recognition and Optical Flow Estimation
  32. Live Demonstration: Tangentially Elongated Gaussian Belief Propagation for Event-based Incremental Optical Flow Estimation, Code
  33. Live Demonstration: Real-time Event-based Speed Detection using Spiking Neural Networks
  34. Live Demonstration: Integrating Event-based Hand Tracking Into TouchFree Interactions, and Suppl mat

List of courtesy papers (as posters, during session #2)

  1. Event Collapse in Motion Estimation using Contrast Maximization, by Shintaro Shiba, from Keio University and TU Berlin. Poster
  2. Deep Asynchronous Graph Neural Networks for Events and Frames, by Daniel Gehrig, from the University of Zurich.
  3. Animal behavior observation with event cameras, by Friedhelm Hamann, from TU Berlin and the Science of Intelligence Excellence Cluster (SCIoI). Poster
  4. Recurrent Vision Transformers for Object Detection with Event Cameras, by Mathias Gehrig, from the University of Zurich, CVPR 2023.
  5. Learning to Estimate Two Dense Depths from LiDAR and Event Data, by Vincent Brebion (Université de Technologie de Compiègne), Julien Moreau and Franck Davoine, SCIA 2023. Project Page (suppl. material, poster, code, dataset, videos). PDF
  6. ESS: Learning Event-based Semantic Segmentation from Still Images, by Nico Messikommer, from the University of Zurich, ECCV 2022.
  7. Multi-event-camera Depth Estimation, by Suman Ghosh, from TU Berlin. Poster
  8. Event-based shape from polarization, by Manasi Muglikar, from the University of Zurich, CVPR 2023.
  9. All-in-focus Imaging from Event Focal Stack, by Hanyue Lou, Minggui Teng, Yixin Yang, and Boxin Shi, CVPR 2023.
  10. Event-aided Direct Sparse Odometry, by Javier Hidalgo-Carrió, from the University of Zurich. Poster
  11. High-fidelity Event-Radiance Recovery via Transient Event Frequency, by Jin Han, Yuta Asano, Boxin Shi, Yinqiang Zheng, and Imari Sato, from the University of Tokyo, NII and Peking University, CVPR 2023. Poster

Papers at the main conference (CVPR 2023)

  1. Adaptive Global Decay Process for Event Cameras, Code
  2. All-in-focus Imaging from Event Focal Stack, Video
  3. Data-driven Feature Tracking for Event Cameras, Video, Code
  4. Deep Polarization Reconstruction with PDAVIS Events, Video, Code
  5. Event-based Blurry Frame Interpolation under Blind Exposure, Code
  6. Event-Based Frame Interpolation with Ad-hoc Deblurring, Video, Code
  7. Event-based Shape from Polarization, Video, Code
  8. Event-based Video Frame Interpolation with Cross-Modal Asymmetric Bidirectional Motion Fields, Video
  9. Event-guided Person Re-Identification via Sparse-Dense Complementary Learning, Code
  10. EventNeRF: Neural Radiance Fields from a Single Colour Event Camera, Video, Project page
  11. EvShutter: Transforming Events for Unconstrained Rolling Shutter Correction, Video
  12. Frame-Event Alignment and Fusion Network for High Frame Rate Tracking, Code
  13. Hierarchical Neural Memory Network for Low Latency Event Processing, Video, Project page
  14. High-fidelity Event-Radiance Recovery via Transient Event Frequency, Video, Code
  15. Learning Adaptive Dense Event Stereo from the Image Domain
  16. Learning Event Guided High Dynamic Range Video Reconstruction, Video
  17. Learning Spatial-Temporal Implicit Neural Representations for Event-Guided Video Super-Resolution, Video
  18. Progressive Spatio-temporal Alignment for Efficient Event-based Motion Estimation, Code
  19. Recurrent Vision Transformers for Object Detection with Event Cameras, Video, Code
  20. “Seeing” Electric Network Frequency from Events, Project page
  21. Tangentially Elongated Gaussian Belief Propagation for Event-based Incremental Optical Flow Estimation, Video, Code

Objectives

This workshop is dedicated to event-based cameras, smart cameras, and algorithms processing data from these sensors. Event-based cameras are bio-inspired sensors with the key advantages of microsecond temporal resolution, low latency, very high dynamic range, and low power consumption. Because of these advantages, event-based cameras open frontiers that are unthinkable with standard frame-based cameras (which have been the main sensing technology for the past 60 years). These revolutionary sensors enable the design of a new class of algorithms to track a baseball in the moonlight, build a flying robot with the agility of a bee, and perform structure from motion in challenging lighting conditions and at remarkable speeds.

These sensors became commercially available in 2008 and are slowly being adopted in computer vision and robotics. In recent years they have received attention from large companies: the event-sensor company Prophesee collaborated with Intel and Bosch on a high-spatial-resolution sensor, Samsung announced mass production of a sensor to be used in hand-held devices, and event cameras have been used in various applications on neuromorphic chips such as IBM's TrueNorth and Intel's Loihi.

The workshop also considers novel vision sensors, such as pixel processor arrays (PPAs), which perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor, the resulting systems have high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance. This workshop covers the sensing hardware as well as the processing and learning methods needed to take advantage of these novel cameras.
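To make the data model concrete, here is a minimal sketch (in plain NumPy, not tied to any particular camera or SDK) of how an event stream is commonly represented, as a list of (t, x, y, polarity) tuples, and accumulated into a simple event-count image that frame-based algorithms can consume. The sensor resolution, the synthetic random stream, and the function name are illustrative assumptions, not any specific vendor's API.

```python
import numpy as np

# An event camera outputs a sparse, asynchronous stream of events, each
# recording a per-pixel brightness change:
#   t: timestamp (microsecond resolution)
#   x, y: pixel coordinates
#   p: polarity (+1 brightness increase, -1 decrease)
# Synthetic stream for illustration only; real streams come from a camera/SDK.
rng = np.random.default_rng(0)
H, W, N = 180, 240, 10_000  # assumed sensor resolution and event count
events = np.zeros(N, dtype=[("t", "u8"), ("x", "u2"), ("y", "u2"), ("p", "i1")])
events["t"] = np.sort(rng.integers(0, 1_000_000, N))  # microseconds
events["x"] = rng.integers(0, W, N)
events["y"] = rng.integers(0, H, N)
events["p"] = rng.choice([-1, 1], N)

def event_count_image(ev, shape):
    """Accumulate signed event counts into a frame-like 2D array.

    One common, simple event representation; many processing and learning
    pipelines start from variants of it (count images, time surfaces, voxels).
    """
    img = np.zeros(shape, dtype=np.int32)
    np.add.at(img, (ev["y"], ev["x"]), ev["p"])  # unbuffered scatter-add
    return img

img = event_count_image(events, (H, W))
print(img.shape, img.min(), img.max())
```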

Topics Covered

Organizers


Important Dates

FAQs

Supported by

Acknowledgments

The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.