

June 11th or 12th (TBD), 2025, CVPR, Nashville (TN), USA.
Held in conjunction with the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025.

Welcome to the 5th International Workshop on Event-Based Vision!

Important Dates

CVPRW 2023 edition photo by S. Shiba

Objectives

Event-based cameras are bio-inspired, asynchronous sensors that offer key advantages: microsecond temporal resolution, low latency, high dynamic range, and low power consumption. Because of these advantages, event-based cameras open frontiers that are unthinkable with traditional (frame-based) cameras, which have been the main sensing technology for the past 60 years. These revolutionary sensors enable the design of a new class of efficient algorithms to track a baseball in the moonlight, build a flying robot with the agility of a bee, and perform structure from motion in challenging lighting conditions and at remarkable speeds. In the last decade, research on these sensors has attracted the attention of industry and academia, fostering exciting advances in the field.

The workshop covers the sensing hardware, as well as the processing, data, and learning methods needed to take advantage of these novel cameras. The workshop also considers novel vision sensors, such as pixel processor arrays, which perform massively parallel processing near the image plane. Because early vision computations are carried out on-sensor (mimicking the retina), the resulting systems have high speed and low power consumption, enabling new embedded vision applications in areas such as robotics, AR/VR, automotive, gaming, and surveillance.
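As a toy illustration of the data such a sensor produces (independent of any specific camera SDK or dataset format, so the field names and layout below are assumptions), a chunk of events, each a tuple (x, y, timestamp, polarity), can be accumulated into a two-channel histogram image that frame-based pipelines can consume:

```python
import numpy as np

def events_to_histogram(events, height, width):
    """Accumulate events into a 2-channel (positive / negative polarity) count image.

    `events` is assumed to be a NumPy structured array with fields x, y, t, p,
    where t is a microsecond timestamp and p is the polarity (+1 / -1).
    """
    hist = np.zeros((2, height, width), dtype=np.float32)
    pos = events["p"] > 0
    np.add.at(hist[0], (events["y"][pos], events["x"][pos]), 1.0)
    np.add.at(hist[1], (events["y"][~pos], events["x"][~pos]), 1.0)
    return hist

# Synthetic example; real events would come from a camera driver or dataset loader.
rng = np.random.default_rng(0)
n = 10_000
events = np.zeros(n, dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "i1")])
events["x"] = rng.integers(0, 640, n)
events["y"] = rng.integers(0, 480, n)
events["t"] = np.sort(rng.integers(0, 10_000, n))  # microsecond timestamps
events["p"] = rng.choice([-1, 1], n)
print(events_to_histogram(events, 480, 640).shape)  # (2, 480, 640)
```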

Topics Covered

A longer list of related topics is available in the table of contents of the List of Event-based Vision Resources.

Call for Contributions

Research papers

Research papers and demos are solicited on, but not limited to, the topics listed above.

Courtesy papers (in the poster session)

We also solicit contributions of papers relevant to the workshop that have been accepted at the CVPR main conference or at other peer-reviewed conferences or journals. These contributions will be checked for suitability (soft review) and will not be published in the workshop proceedings. Papers should be submitted in single-blind format (e.g., the accepted version is fine) and should mention if and where the paper has been accepted/published. These contributions give visibility to your work and help build a community around the topics of the workshop.

Competitions / Challenges

Eye-tracking

We are excited to organize a challenge focused on advancing event-based eye tracking, a key technology for driving innovations in interaction technology and extended reality (XR). While current state-of-the-art devices such as Apple's Vision Pro or Meta's Aria glasses use frame-based eye tracking with frame rates from 10 to 100 Hz and latency around 11 ms, there is a pressing need for smoother, faster, and more efficient methods to enhance the user experience. By leveraging two different event-based eye tracking datasets (the Enhanced Ev-Eye dataset and the 3ET+ dataset), this challenge offers participants the opportunity to contribute cutting-edge solutions that push beyond current limitations. Both datasets are readily available, have been ethically collected with full consent and strict privacy protections, and have been validated. Submissions will be evaluated on accuracy and model efficiency to ensure low latency. We believe the outcomes of this challenge will play an important role in shaping the future of XR and interaction technology by pushing the boundaries of what's possible in eye tracking.
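The official metrics and submission format are defined on the challenge website below; purely as a hedged illustration of one plausible accuracy measure (the 10-pixel tolerance and the function name are assumptions, not the official protocol), predicted pupil centres could be scored like this:

```python
import numpy as np

def within_tolerance_accuracy(pred_xy, gt_xy, tol_px=10.0):
    """Fraction of labelled frames whose predicted pupil centre lies within
    `tol_px` pixels (Euclidean distance) of the ground truth.

    pred_xy, gt_xy: float arrays of shape (N, 2) holding (x, y) per labelled frame.
    """
    err = np.linalg.norm(pred_xy - gt_xy, axis=1)
    return float((err <= tol_px).mean())

# Toy example: the first prediction is ~3.2 px off (a hit), the second 20 px off (a miss).
pred = np.array([[100.0, 50.0], [120.0, 60.0]])
gt = np.array([[103.0, 49.0], [140.0, 60.0]])
print(within_tolerance_accuracy(pred, gt))  # 0.5
```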

Eye-tracking Challenge website

Challenge timeline:

Contact:

Space-time Instance Segmentation (SIS) using frames and events

Event cameras react to moving objects in the scene and are a natural fit for all kinds of tracking problems. In object tracking, however, event-based solutions still lag far behind their conventional frame-based counterparts, in part due to the lack of annotated data. The new open-source MouseSIS dataset (ECCVW 2024) aims to close this gap with annotations for a task called Space-time Instance Segmentation (SIS), which requires algorithms to track and segment all objects (in this case mice) in the scene. We are excited to announce the first SIS Challenge. It will be hosted on Codalab and evaluated on a withheld (non-public) test set. The dataset contains recordings and video instance segmentation annotations for mice; all recordings adhered to the ethical guidelines under German law. Algorithms will be evaluated on the quality of their tracking predictions, with Higher Order Tracking Accuracy (HOTA) as the main metric. Since one of the interesting properties of event cameras is the sparsity of the data they produce, we want to incentivize efficient algorithms and will additionally evaluate FLOPs and runtime per ground-truth step (in ms).

In short, submissions will be evaluated in two tracks (frames + events, event-only) and on two figures of merit: accuracy and efficiency. Participants can train their models on any publicly available open-source dataset, but are requested to submit a technical report with all details alongside their submission and to open-source their code. This challenge aims to advance the state of the art in event-based fine-grained tracking for tasks that are useful for scientific purposes, for example for biologists and ecologists, as in the recent breakthrough of DeepLabCut.
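HOTA itself jointly scores detection and association and is computed by the official evaluation code; as a rough, illustrative sketch of two ingredients mentioned above (mask overlap and runtime per ground-truth step), with hypothetical function names:

```python
import time
import numpy as np

def mask_iou(pred_mask, gt_mask):
    """Intersection-over-union of two boolean instance masks of equal size."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter / union) if union > 0 else 1.0

def runtime_per_step_ms(predict_step, inputs):
    """Mean wall-clock time (ms) of `predict_step` over the ground-truth steps."""
    times = []
    for x in inputs:
        t0 = time.perf_counter()
        predict_step(x)
        times.append((time.perf_counter() - t0) * 1e3)
    return float(np.mean(times))
```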

Contact: Friedhelm Hamann

Speakers

Location

Schedule

The tentative schedule is the following:

Time (local) Session
8:00 Welcome. Session 1: Event cameras: Algorithms and applications I (Invited speakers)
10:10 Coffee break. Set up posters.
10:30 Session 2: Poster session: contributed papers, competitions, demos and courtesy presentations (as posters).
12:30 Lunch break
13:30 Session 3: Event cameras: Algorithms and applications II (Invited speakers)
15:30 Coffee break
16:00 Session 4: Hardware architectures and sensors (Invited speakers)
17:45 Award Ceremony and Final Panel Discussion.
18:00 End

Organizers

FAQs

See also this link