The CVPR 2025 Workshop on Autonomous Driving (WAD) brings together leading researchers and engineers from academia and industry to discuss the latest advances in autonomous driving. Now in its 8th year, the workshop has evolved with this rapidly changing field and covers all areas of autonomy, including perception, behavior prediction, and motion planning. In this full-day workshop, our keynote speakers will provide insights into the ongoing commercialization of autonomous vehicles, as well as progress in related fundamental research areas. Furthermore, we will host a series of technical benchmark challenges to help quantify recent advances in the field, and invite authors of accepted workshop papers to present their work.


Full Day Recording - CVPR Virtual Website
Individual recordings are linked below.
08:45am – 09:00am | Opening Remarks
09:00am – 09:30am | Title: End-to-end Autonomous Driving: Past, Current and Onwards
09:30am – 10:00am | Title: Probabilistic and Deep Learning Approaches for Mobile Robotics and Automated Driving
10:00am – 10:30am | CVPR AM Coffee Break
10:30am – 11:00am | Title: Repurposing Generative Models for 3D Data
11:00am – 11:30am | Title: Perception and Simulation for Self-Driving Vehicles
11:30am – 12:00pm |
12:00pm – 01:30pm | Lunch Break & Poster Session (Poster Location: ExHall D, #308 - 325)
01:30pm – 02:00pm | Title: Scalable Neural Simulation for Autonomy
02:00pm – 02:30pm |
02:30pm – 03:00pm | CVPR PM Coffee Break
03:00pm – 03:30pm | Title: Solving Real-World Challenges of Large-Scale AV Deployment
03:30pm – 04:30pm |
04:30pm – 05:00pm | Title: Scaling up Autonomous Driving via Large Foundation Models
05:00pm – 05:05pm | Closing Remarks
- [Sep 8] Recordings are now publicly available on our YouTube Channel and linked in the schedule.
- [June 30] Recordings are now available on the CVPR virtual website for registered attendees.
- [June 12] Thank you all for attending! Recordings of the workshop talks will be released on the workshop website and our YouTube channel.
- [June 9] Technical reports for the provisional finalists of the Waymo Open Dataset Challenges are now available.
- [June 5] The workshop & poster board locations are now available.
- [Apr 11] A preliminary program is now available (and remains subject to change).
- [Mar 31] The 2025 Waymo Open Dataset Challenges are now open for submissions!
- [Mar 19] The 2025 Argoverse Challenges are now open for submissions!
- [Mar 18] The workshop will take place on Wednesday, June 11.
- [Mar 17] Our paper track is now closed. Thanks to everyone submitting their work!
- [Feb 24] We are pleased to also host the Nexar Dashcam Crash Prediction Challenge this year.
- [Feb 23] We released our call for papers.
- [Dec 20] The workshop got accepted. More updates to follow soon.
Challenges
The workshop will host a variety of challenges to promote research in computer vision, behavior prediction and planning for autonomous driving. Our partners Waymo, Argoverse and Nexar have prepared large-scale benchmark datasets with high-quality ground-truth annotations. We invite researchers around the world to tackle a range of challenging autonomous driving tasks.
Waymo Open Dataset Challenges
The 6th annual edition of the Waymo Open Dataset Challenges includes the following tracks:
- Scenario Generation
- Vision-based End-to-End Driving
- Interaction Prediction
- Sim Agents
Challenges closed on May 22, 2025. Refer to the challenge website for more details.
Nexar Dashcam Crash Prediction Challenge
Autonomous vehicles and advanced driver assistance systems (ADAS) rely on accurate accident prediction to enhance road safety. In this competition, you will analyze dashcam videos and develop models that predict vehicle collisions before they occur, under real-world conditions such as diverse weather, occlusions, and unexpected road events. The earlier and more accurately your model detects an impending accident, the better your score. The challenge closed on May 3, 2025. Details can be found here.
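As a rough illustration of that early-versus-accurate trade-off, the sketch below scores a single video by how far ahead of the crash a model's per-frame collision probability first crosses a threshold. The function name, threshold, horizon, and scoring rule are illustrative assumptions, not the official challenge metric.

```python
import numpy as np

def early_warning_score(frame_probs, frame_times, crash_time,
                        threshold=0.5, horizon=5.0):
    """Illustrative scorer (NOT the official metric): reward an alert by
    how many seconds before the crash it first crosses `threshold`,
    capped and normalized by a maximum useful lead time `horizon` (s)."""
    probs = np.asarray(frame_probs)
    times = np.asarray(frame_times)
    alerts = times[(probs >= threshold) & (times <= crash_time)]  # pre-crash alerts only
    if alerts.size == 0:
        return 0.0                              # collision missed entirely
    lead = crash_time - alerts.min()            # lead time of the earliest alert
    return float(min(lead, horizon) / horizon)  # earlier alert -> higher score

# Model output crossing 0.5 at t = 7.2 s for a crash at t = 10 s -> 0.56
print(early_warning_score([0.1, 0.3, 0.55, 0.8], [6.0, 7.0, 7.2, 8.0],
                          crash_time=10.0))
```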
Argoverse Challenges
Argoverse is hosting three competitions this year. Top entries will be highlighted at the workshop.
- Multi-agent Motion Forecasting: Given the position, orientation, and category of actors in a scene, predict the motion of several key actors in the future (a sketch of a typical evaluation metric follows below).
- Scenario Mining: Find safety-critical scenarios with natural language.
- Lidar Scene Flow: Capture the motion of pedestrians and other vulnerable road users.
Competitions closed on June 8, 2025. Details can be found here.
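For context on how forecasting tracks like the first one are typically scored, below is a minimal sketch of minADE (minimum average displacement error over K predicted modes), a metric commonly used in Argoverse-style motion forecasting benchmarks. The array shapes are assumptions for illustration, not the official evaluation code.

```python
import numpy as np

def min_ade(pred, gt):
    """minADE: for each agent, the average L2 displacement of the best
    of K predicted trajectories, then averaged over agents.
    pred: (A, K, T, 2) predicted xy trajectories (A agents, K modes, T steps).
    gt:   (A, T, 2)    ground-truth xy trajectories."""
    # Average displacement per agent and mode: (A, K)
    ade = np.linalg.norm(pred - gt[:, None], axis=-1).mean(axis=-1)
    # Best mode per agent, averaged over agents
    return ade.min(axis=-1).mean()

# Example: two modes, one matching ground truth exactly -> minADE = 0
gt = np.zeros((1, 3, 2))
pred = np.stack([gt[0] + 1.0, gt[0]])[None]  # shape (1, 2, 3, 2)
print(min_ade(pred, gt))  # 0.0
```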
Call for Papers
Important Dates
- Workshop paper submission deadline: Sunday, March 16th, 2025 (23:59 PST), extended from Friday, March 14th, 2025
- Notification to authors: Monday, March 30th, 2025 (23:59 PST)
- Camera ready deadline: Sunday, April 6th, 2025 (23:59 PST)
Topics Covered
We invite submissions of original research contributions in machine perception, computer vision, prediction, planning and simulation related to autonomous vehicles, such as (but not limited to):
- Foundational models for autonomous driving.
- Vision language models (VLMs) and large language models (LLMs) for solving autonomous vehicle related tasks such as prediction or planning.
- Autonomous navigation and exploration based on camera, laser, radar or related measurements.
- Embodied AI for autonomous driving.
- Sensor fusion and multi-modal perception algorithms for scene understanding.
- Bird's eye view methods for autonomous driving, such as BEV-based 3D detection, BEV segmentation, occupancy grids, HD-maps, and topological lane graphs.
- Vision-based driving assistance, driver monitoring and advanced interfaces.
- Sensor simulation, neural rendering / NeRFs, 3D Gaussian Splatting, generative models for 3D assets or driving environments.
- Diffusion models for prediction and planning.
- Mapless autonomous driving.
- Cooperative perception and planning based on vehicle-to-everything (V2X) / vehicle-to-vehicle communication.
- Transfer learning and domain adaptation in the autonomous vehicle domain.
- Simulation for autonomous driving.
- Online sensor calibration.
- SLAM and 3D reconstruction algorithms.
- Validation and interpretability of autonomous systems.
- Adversarial learning, adversarial attacks, robustness and handling of uncertainty in autonomous systems.
Presentation Guidelines
All accepted papers will be presented as posters. The poster guidelines are the same as for the main conference.
Submission Instructions
- We follow the CVPR paper format: https://cvpr.thecvf.com/Conferences/2025/AuthorGuidelines
- LaTeX/Word templates can be found here: CVPR 2025 Paper Template
- We accept full-length submissions (max 8 pages, excluding references).
- All submissions will be peer-reviewed by at least two reviewers.
- Blind review: we adopt double-blind review for this workshop. Submitted papers and supplementary materials should not reveal any information about the authors.
- Dual submission: we do not accept papers that have been published or are under review at other conferences or workshops. Accepted papers are expected to be published in the CVPR proceedings.
- By submitting a manuscript to the WAD workshop, the authors agree to the review process and agree to contribute to the reviewing process.
Submit your papers through CMT: https://cmt3.research.microsoft.com/WAD2025
Acknowledgment
The Microsoft CMT service was used for managing the peer-reviewing process for this workshop. This service was provided for free by Microsoft, which bore all expenses, including costs for Azure cloud services as well as for software development and support.
Accepted Papers
AttentiveGRU: Recurrent Spatio-Temporal Modeling for Advanced Radar-Based BEV Object Detection
Authors: Loveneet Saini; Mirko Meuter; Hasan Tercan; Tobias Meisen
Inferring Driving Maps by Deep Learning-based Trail Map Extraction
Authors: Michael Hubbertz; Pascal Colling; Qi Han; Tobias Meisen
LMFormer: Lane based Motion Prediction Transformer
Authors: Harsh Yadav; Maximilian Schaefer; Kun Zhao; Tobias Meisen
TB-Bench: Training and Testing Multi-Modal AI for Understanding Spatio-Temporal Traffic Behaviors from Dashcam Images/Videos
Authors: Korawat Charoenpitaks; Van-Quang Nguyen; Masanori Suganuma; Kentaro Arai; Seiji Totsuka; Hiroshi Ino; Takayuki Okatani
PatchContrast: Self-Supervised Pre-Training for 3D Object Detection
Authors: Oren Shrout; Ori Nizan; Itzik Ben-Shabat; Ayellet Tal
PS4PRO: Pixel-to-pixel Supervision for Photorealistic Rendering and Optimization
Authors: Yezhi Shen; Qiuchen Zhai; Fengqing Zhu
Exploring Semi-Supervised Learning for Online Mapping
Authors: Adam Lilja; Erik Wallin; Junsheng Fu; Lars Hammarstrand
NeuRadar: Neural Radiance Fields for Automotive Radar Point Clouds
Authors: Mahan Rafidashti; Ji Lan; Maryam Fatemi; Junsheng Fu; Lars Hammarstrand; Lennart Svensson
Multimodal 3D Object Detection on Unseen Domains
Authors: Deepti Hegde; Suhas Lohit; Kuan-Chuan Peng; Michael Jones; Vishal Patel
What is the Added Value of UDA in the VFM Era?
Authors: Brunó Englert; Tommie Kerssies; Gijs Dubbelman
Camera-Only 3D Panoptic Scene Completion for Autonomous Driving through Differentiable Object Shapes
Authors: Nicola Marinello; Simen Cassiman; Jonas Heylen; Marc Proesmans; Luc Van Gool
TrajGNAS: Heterogeneous Multiagent Trajectory Prediction Based on a Graph Neural Architecture Search
Authors: Xu Yunheng; Chen Jie; Wang Shuoheng; Wang Xinwen
CE-NPBG: Connectivity Enhanced Neural Point-Based Graphics for Novel View Synthesis in Autonomous Driving Scenes
Authors: Mohammad Altillawi; Fengyi Shen; Liudi Yang; Sai Manoj Prakhya; Ziyuan Liu
DuoSpaceNet: Leveraging Both Bird's-Eye-View and Perspective View Representations for 3D Object Detection
Authors: Zhe Huang; Yizhe Zhao; Hao Xiao; Chenyan Wu; Lingting Ge
Data Scaling Laws for End-to-End Autonomous Driving
Authors: Alexander Naumann; Xunjiang Gu; Tolga Dimlioglu; Mariusz Bojarski; Alperen Degirmenci; Alexander Popov; Devansh Bisla; Marco Pavone; Urs Muller; Boris Ivanovic
Nexar Dashcam Collision Prediction Dataset and Challenge
Authors: Daniel Moura; Shizhan Zhu; Orly Zvitia
DySS: Dynamic Queries and State-Space Learning for Efficient 3D Object Detection from Multi-Camera Videos
Authors: Rajeev Yasarla; Shizhong Han; Hong Cai; Fatih Porikli
Waymo Open Dataset Challenge Reports
Vision-based End-to-End Driving Challenge
First Place: UniPlan - Lan Feng, Alexandre Alahi - EPFL (Report)
Second Place: DiffusionLTF - Long Nguyen, Micha Fauth, Bernhard Jaeger, Daniel Dauner, Maximilian Igl, Andreas Geiger, Kashyap Chitta - University of Tübingen, Tübingen AI Center, NVIDIA Research (Report)
Third Place: Swin-Trajectory - Sungjin Park, Gwangik Shin, Jaeha Song, Sumin Lee, Hyukju Shon, Byounggun Park, Jinhee Na, Hawook Jeong, Soonmin Hwang - Hanyang University, RideFlux Inc (Report)
Special Mention: Poutine - Luke Rowe, Rodrigue de Schaetzen, Roger Girgis, Christopher Pal, Liam Paull - Mila - Quebec AI Institute, Université de Montréal, Polytechnique Montréal, CIFAR AI Chair (Report)
Interaction Prediction Challenge
First Place: Parallel ModeSeq - Zikang Zhou, Haibo Hu, Yifan Zhang, Yung-Hui Li, Jianping Wang, Nan Guan, Chun Jason Xue - City University of Hong Kong, Hon Hai Research Institute, Mohamed bin Zayed University of Artificial Intelligence (Report)
Second Place: IMPACT - Jiawei Sun, Xibin Yue, Jiahui Li, Tianle Shen, Chengran Yuan, Shuo Sun, Sheng Guo, Quanyun Zhou, Marcelo H. Ang Jr - National University of Singapore, Xiaomi EV (Report)
Third Place: BeTop-ens - Haochen Liu, Li Chen, Hongyang Li, Chen Lv - Nanyang Technological University, The University of Hong Kong (Report)
Honorable Mention: RetroMotion - Royden Wagner, Ömer Şahin Taş, Felix Hauser, Marlon Steiner, Dominik Strutz, Abhishek Vivekanandan, Carlos Fernandez, Christoph Stiller - Karlsruhe Institute of Technology, FZI Research Center for Information Technology (Report)
Sim Agents Challenge
First Place: TrajTok - Zhiyuan Zhang, Xiaosong Jia, Guanyu Chen, Qifeng Li, Junchi Yan - Shanghai Jiao Tong University (Report)
Second Place: RLFTSim - Ehsan Ahmadi, Hunter Schofield - University of Alberta, York University (Report)
Third Place: comBOT - Christian Rössert, Johannes Drever, Lukas Brostek - cogniBIT GmbH (Report)
Honorable Mention: UniMM - Longzhong Lin, Xuewu Lin, Kechun Xu, Haojian Lu, Lichao Huang, Rong Xiong, Yue Wang - Zhejiang University, Horizon Robotics (Report)
Scenario Generation Challenge
First Place: SimFormer - Sen Wang, Xu Jianrong, Xiaoyong Zhang, Fangqiao Hu, Kechen Zhu, Zhijun Huang, Jiaxiang Zhu, JiaChen Luo, Yong Zhou, Zhenwu Chen - Shenzhen Urban Transport Planning Center (Report)
Second Place: UniTSG - Jianrong Xu, Baicang Guo, Xingchen Liu, Wei Hong, Liangliang Li, Chenyun Xi, Yewei Shi, Peng Wang, Ruohai Di - Tongji University, Yanshan University, Xi'an Technological University (Report)
Third Place: SHRED - Micha Fauth, Long Nguyen, Bernhard Jaeger, Daniel Dauner, Maximilian Igl, Andreas Geiger, Kashyap Chitta - University of Tübingen, Tübingen AI Center, NVIDIA Research (Report)
Honorable Mention: InfGen - Zhenghao Peng, Yuxin Liu, Bolei Zhou - University of California, Los Angeles (Report)
Contact
cvpr.wad@gmail.com
Background photo of Nashville by Larry Darling, licensed under CC BY-NC 2.0 (link)