June 12th, 2025, CVPR (Full day workshop), Nashville (TN), USA.
Held in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025.
Welcome to the 5th International Workshop on Event-Based Vision!
Important Dates
- Paper submission deadline: March 19 (was March 12), 2025 (23:59h PST). Submission website (CMT).
- Demo abstract submission: March 19 (was March 12), 2025 (23:59h PST).
- Reviewers fill in review reports: March 28.
- Organizers notify authors: March 31st.
- Authors of accepted papers submit meta-data via Google form: March 31st.
- Authors receive instructions for camera-ready preparation: by April 6th.
- Authors submit camera-ready paper: April 14, 2025 (as per the CVPR website, a firm deadline set by IEEE), using IEEE Computer Society’s Conference Publishing Services (CPS).
- Early-bird registration ends: April 30th (23:59h ET).
- Standard registration begins: May 1st.
- Workshop day: June 12th, 2025. Full-day workshop.
Invited Speakers
- Davide Scaramuzza (University of Zurich, Switzerland)
- Vladislav Golyanik (Max-Planck-Institut für Informatik, Germany), “Event-based Non-rigid 3D Reconstruction and Novel-View Synthesis”
- Priya Panda (Yale University, USA)
- Daniel Cremers (TUM, Germany)
- Ziwei Wang (ANU, Australia)
- Christopher Metzler (University of Maryland, USA)
- Norimasa Kobori and Shintaro Shiba (Woven by Toyota, Japan)
- Scott McCloskey (Kitware, USA)
- Davide Migliore (Tempo Sense, USA)
- Kynan Eng (SynSense Group, Switzerland)
- Prophesee, France.
Objectives
Topics Covered
- Event-based / neuromorphic vision.
- Algorithms: motion estimation, visual(-inertial) odometry, SLAM, 3D reconstruction, image intensity reconstruction, optical flow estimation, recognition, segmentation, feature/object detection, visual tracking, calibration, action understanding, sensor fusion (video synthesis, events and RGB, events and LiDAR, etc.), model-based, embedded, or learning-based approaches.
- Event-based representation, signal processing, and control.
- Event-based active vision, event-based sensorimotor integration.
- Event camera datasets and/or simulators.
- Applications in: computational photography, robotics (navigation, manipulation, drones, obstacle avoidance, human-robot interaction,…), automotive, IoT, AR/VR (e.g., smart eyewear), space science, automated inspection, surveillance, crowd counting, physics, biology.
- Novel hardware (cameras, neuromorphic processors, etc.) and/or software platforms, such as fully event-based systems (end-to-end).
- New trends and challenges in event-based and/or biologically-inspired vision (SNNs, Reservoir Computing, etc.).
- Efficient computing architectures for event-based processing (e.g., HD computing, state space models).
- Near-focal plane processing, such as pixel processor arrays (PPAs).
A longer list of related topics is available in the table of contents of the List of Event-based Vision Resources.
Call for Contributions
Research papers
- Paper submissions must adhere to the CVPR 2025 paper submission style, format, and length restrictions. See the author guidelines and template provided by the CVPR main conference. These submissions are meant to represent novel contributions, i.e., unpublished work (submissions must not have been published, accepted, or be under review elsewhere). Accepted papers will be published open access through the Computer Vision Foundation (CVF) (see examples from the CVPR Workshops of 2023, 2021, and 2019). We encourage authors of accepted papers to write a paragraph about the ethical considerations and impact of their work.
- For demo abstract submission, authors are encouraged to submit an abstract of up to 2 pages using the same template as CVPR 2025 paper submissions.
List of accepted papers and live demos
- Dynamic EventNeRF: Reconstructing General Dynamic Scenes from Multi-view RGB and Event Streams
- Event Quality Score (EQS): Assessing the Realism of Simulated Event Camera Streams via Distances in Latent Space
- E-VLC: A Real-World Dataset for Event-based Visible Light Communication And Localization
- EV-Flying: an Event-based Dataset for In-The-Wild Recognition of Flying Objects
- BiasBench: A reproducible benchmark for tuning the biases of event cameras
- MTevent: A Multi-Task Event Camera Dataset for 6D Pose Estimation and Moving Object Detection
- Looking into the Shadow: Recording a Total Solar Eclipse with High-resolution Event Cameras
- Probabilistic Online Event Downsampling
- Every Event Counts: Balancing Data Efficiency and Accuracy in Event Camera Subsampling
- Learning from Noise: Enhancing DNNs for Event-Based Vision through Controlled Noise Injection
- Towards Low-Latency Event-based Obstacle Avoidance on a FPGA-Drone
- Real-Time Pedestrian Detection at the Edge on a Fully Asynchronous Neuromorphic System
- Reading Events Hierarchically to Reduce Latency
- DELTA: Dense Depth from Events and LiDAR using Transformer’s Attention
- Quadrocular, Neuromorphic Stereo Triangulation and Asynchronous Data Fusion for 3D Object Tracking
- Nanoparticle Diameter Measurements With Event Camera Tracking
- Egocentric Event-Based Vision for Ping Pong Ball Trajectory Prediction
- Event-Driven Dynamic Attention for Multi-Object Tracking on Neuromorphic Hardware
- Event-based Tracking and Imaging of Randomly Moving Objects in Dense Dynamical Scattering Media
- Perturbed State Space Feature Encoders for Optical Flow with Event Cameras
- Spatio-Temporal State Space Model For Efficient Event-Based Optical Flow
- Best Linear Unbiased Estimation for 2D and 3D Flow with Event-based Cameras
- Iterative Event-based Motion Segmentation by Variational Contrast Maximization
- EV-LayerSegNet: Self-supervised Motion Segmentation using Event Cameras
- Seeing like a Cephalopod: Colour Vision with a Monochrome Event Camera
- Event-based Continuous Color Video Decompression from Single Frames
- Reading in the Dark with Foveated Event Vision
- Human-Robot Navigation using Event-based Cameras and Reinforcement Learning
- E-BARF: Bundle Adjusting Neural Radiance Fields from a Moving Event Camera
- Event-based Eye Tracking. Event-based Vision Workshop 2025
- BRAT: Bidirectional Relative Positional Attention Transformer for Event-based Eye tracking
- Exploring Temporal Dynamics in Event-based Eye Tracker
- Dual-Path Enhancements in Event-Based Eye Tracking: Augmented Robustness and Adaptive Temporal Modeling
- Live Demonstration: AR Application Using Active Markers With An Event Camera
- Live Demonstration: Point-Feature Tracking for Pixel Processor Arrays
- Live Demonstration: NeuroTouch - A Neuromorphic Vision-based Tactile Sensor for Real-Time Gesture Recognition
- Live Demonstration: Real-time event-data processing with Graph Convolutional Neural Networks and SoC FPGA
Courtesy papers (in the poster session)
List of Courtesy papers
- NTIRE 2025 Challenge on Event-Based Image Deblurring: Methods and Results, CVPRW 2025.
- Graph Neural Network Combining Event Stream and Periodic Aggregation for Low-Latency Event-based Vision, CVPR 2025.
- Repurposing Pre-trained Video Diffusion Models for Event-based Video Interpolation, CVPR 2025.
- ETAP: Event-based Tracking of Any Point, CVPR 2025.
- EBS-EKF: Accurate and High Frequency Event-based Star Tracking, CVPR 2025.
- Event fields: Capturing light fields at high speed, resolution, and dynamic range, CVPR 2025.
- EMBA: Event-based Mosaicing Bundle Adjustment, ECCV 2024.
- Neuromorphic Facial Analysis with Cross-Modal Supervision, ECCVW 2024.
- Distance Estimation in Outdoor Driving Environments Using Phase-only Correlation Method with Event Cameras, IEEE IV 2025.
- Evaluation of Mobile Environment for Vehicular Visible Light Communication Using Multiple LEDs and Event Cameras, IEEE IV 2025.
- There is still time to submit!
Competitions / Challenges
1. Eye-tracking
Timeline:
- Challenge Start: February 15, 2025
- Challenge End: March 15, 2025
- Top-ranking teams will be invited to submit a factsheet, code, and a workshop paper after the competition ends; submission deadline: March 25, 2025
- Top-ranking teams will be invited to co-author the challenge report; deadline: April 5, 2025
- Paper review deadline: April 5, 2025
Contact:
- Prof. Qinyu Chen (q.chen [at] liacs [dot] leidenuniv [dot] nl)
- Prof. Chang Gao (chang.gao [at] tudelft [dot] nl)
2. Space-time Instance Segmentation (SIS) Challenge
Overview:
- Task: Predict mask-accurate tracks of all mouse instances from input events (and optional frames); a simplified mask-IoU sketch follows this list.
- Data: This challenge is based on the MouseSIS dataset.
- Two Tracks: (1) Frame + Events Track, and (2) Events-only Track.
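For intuition about what “mask-accurate tracks” means, the sketch below computes per-frame mask IoU averaged over a track. This is only an illustrative simplification in NumPy; the official SIS evaluation protocol is defined on the challenge website and may differ.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two boolean instance masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union > 0 else 0.0

def track_iou(pred_track, gt_track) -> float:
    """Average per-frame mask IoU over a track (lists of H x W boolean masks)."""
    ious = [mask_iou(p, g) for p, g in zip(pred_track, gt_track)]
    return float(np.mean(ious)) if ious else 0.0
```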
Timeline:
- February 7, 2025: Challenge opens for submissions
- May 23, 2025: Challenge closes, final submission deadline
- May 26, 2025: Winners announced. Top teams are invited to submit factsheets and code, collaborate on the challenge report, and present a poster at the CVPR workshop.
- June 6, 2025: Deadline for top teams to submit factsheets, code, and the challenge report.
- June 12, 2025: Results presentation (posters) at the CVPR 2025 Workshop on Event-based Vision.
Contact: Friedhelm Hamann (f.hamann [at] tu-berlin [dot] de)
3. Event-Based Image Deblurring Challenge
Overview:
This challenge focuses on leveraging the high-temporal-resolution events from event cameras to improve image deblurring. We hope that this challenge will serve as a starting point for promoting event-based image enhancement on a broader stage and contribute to the thriving development of the event-based vision community.
- Task: Obtain a network design / solution that fuses events and images to produce high-quality deblurred results with the best performance (measured by PSNR; see the sketch after this list).
- Data: This challenge is based on the HighREV dataset.
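Since entries are ranked by PSNR, it may help to see exactly what is measured. Below is a minimal NumPy sketch of standard PSNR for 8-bit images; the challenge’s exact evaluation settings may differ, so treat this as a reference point, not the official scoring code.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a deblurred image and ground truth."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```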
Timeline:
- February 10, 2025: Challenge opens for submissions
- March 15, 2025: Final test data release
- March 21, 2025: Challenge ends: submission deadline to upload results on the final test data
- March 22, 2025: Fact sheets and code/executable submission deadline
- March 24, 2025: Preliminary test results release to the participants
- March 29, 2025: Paper submission deadline for entries from the challenge, as per the NTIRE-W page
- June 11-12, 2025: Results presentation at CVPR 2025 Workshop NTIRE (June 11) and/or Workshop on Event-based Vision (June 12), as a poster.
Contact: Lei Sun (leo_sun [at] zju [dot] edu [dot] cn)
4. Event-Based SLAM Challenge
Overview:
The goal of this challenge is to leverage the high temporal and spatial resolution of HD event cameras for SLAM and pose estimation applications; an illustrative trajectory-error sketch follows the track list below.
Please refer to the challenge website for more information about participation.
Tracks:
- Event (+ IMU): if you obtain your pose using a single or a pair of event cameras, with or without IMU.
- Event + Mono (+ IMU): if you obtain your pose using a single or a pair of event cameras fused with monocular global shutter cameras, with or without IMU.
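SLAM entries of this kind are typically compared against ground truth with trajectory error metrics. The sketch below computes a root-mean-square absolute trajectory error (ATE) on translation, assuming the trajectories are already time-associated and aligned; it is an illustration only, not the challenge’s official evaluation code.

```python
import numpy as np

def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """RMS absolute trajectory error over N associated positions (N x 3 arrays).

    Assumes the estimated trajectory has already been aligned to the
    ground-truth frame (e.g., via an SE(3) or similarity alignment).
    """
    errors = np.linalg.norm(est_xyz - gt_xyz, axis=1)  # per-pose position error
    return float(np.sqrt(np.mean(errors ** 2)))
```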
Timeline:
- March 1, 2025: Challenge opens for submissions
- June 2, 2025: Challenge ends
- June 4, 2025: Winners announced
- June 8, 2025: Top submissions send their code (for manual evaluation), report, and posters.
- June 12, 2025: Posters presented at the CVPR Workshop on Event-based Vision.
- After the workshop: The top submissions are invited to collaborate on a report for the challenge.
Contact:
- Fernando Cladera (fclad [at] seas.upenn.edu)
- Dr. Kenneth Chaney (chaneyk [at] seas.upenn.edu)
Location
- On site (Music City Center, Nashville TN): Room TBD
Schedule
The tentative schedule is the following:
| Time (local) | Session |
|---|---|
| 8:00 | Welcome. Session 1: Event cameras: Algorithms and applications I (invited speakers) |
| 10:10 | Coffee break. Set up posters. |
| 10:30 | Session 2: Poster session: contributed papers, competitions, demos, and courtesy presentations (as posters) |
| 12:30 | Lunch break |
| 13:30 | Session 3: Event cameras: Algorithms and applications II (invited speakers) |
| 15:30 | Coffee break |
| 16:00 | Session 4: Hardware architectures and sensors (invited speakers) |
| 17:45 | Award ceremony and final panel discussion |
| 18:00 | End |
Organizers
- Guillermo Gallego, TU Berlin, Germany.
- Kostas Daniilidis, University of Pennsylvania, USA.
- Cornelia Fermüller, University of Maryland, USA.
- Daniele Perrone, Prophesee, France.
- Davide Migliore, Tempo Sense, USA.
FAQs
- What is an event camera? Watch this video explanation.
- What are possible applications of event cameras? Check the TPAMI 2022 review paper.
- Where can I buy an event camera? From iniVation, Prophesee, Lucid, etc.
- Are there datasets and simulators that I can play with? Yes: Dataset, Simulator, More (see also the toy event-generation sketch after this list).
- Is there any online course about event-based vision? Yes, check this course at TU Berlin.
- What is the SCAMP sensor? Read this page explanation.
- What are possible applications of the SCAMP sensor? Some applications can be found here.
- Where can I buy a SCAMP sensor? Contact Prof. Piotr Dudek.
- Where can I find more information? Check out this List of Event-based Vision Resources.
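As a complement to the pointers above, the idealized event-camera model fits in a few lines: each pixel fires an event whenever its log intensity deviates from a stored reference by a contrast threshold C, with polarity giving the sign of the change. The toy, frame-sampled sketch below is purely illustrative; real sensors and simulators additionally model noise, refractory periods, and per-pixel threshold mismatch.

```python
import numpy as np

def events_from_frames(frames, timestamps, C=0.2, eps=1e-6):
    """Toy event generation from grayscale frames (H x W float arrays).

    Yields (t, x, y, polarity) whenever the log intensity at a pixel
    deviates from its per-pixel reference by at least the threshold C.
    """
    it = iter(zip(frames, timestamps))
    frame0, _ = next(it)
    ref = np.log(frame0 + eps)  # per-pixel log-intensity reference
    for frame, t in it:
        log_i = np.log(frame + eps)
        diff = log_i - ref
        fired = np.abs(diff) >= C
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            yield (t, int(x), int(y), 1 if diff[y, x] > 0 else -1)
        # Simplification: snap the reference to the current value where
        # events fired (real models step the reference by C per event).
        ref[fired] = log_i[fired]
```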
Related Workshops
- CVPR 2023 Fourth International Workshop on Event-based Vision. Videos
- CVPR 2021 Third International Workshop on Event-based Vision. Videos
- CVPR 2019 Second International Workshop on Event-based Vision. Videos
- ICRA 2017 First International Workshop on Event-based Vision. Videos
See also this link.
Acknowledgments
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.