valeo.ai

We are an open, international research lab based in Paris.

We aim

  • to conduct cutting-edge AI research for automotive applications (see our papers and code)
  • to nurture collaborations with world-class academic teams
  • to fuel Valeo’s R&D

We work towards better, lighter, clearer & safer automotive AI

The team

Hedi Ben-younes (research scientist) scholar
Alexandre Boulch (research scientist) scholar
Maxime Bucher (research scientist) scholar
Andrei Bursuc (research scientist) scholar
Charles Corbière (PhD student) scholar
Matthieu Cord (principal scientist) scholar
Spyros Gidaris (research scientist) scholar
David Hurych (research scientist) scholar
Himalaya Jain (research scientist) scholar
Maximilian Jaritz (PhD student) scholar
Renaud Marlet (principal scientist) scholar
Gabriel de Marmiesse (research engineer)
Arthur Ouaknine (PhD student)
Patrick Pérez (scientific director) scholar
Gilles Puy (research scientist) scholar
Julien Rebut (research scientist)
Simon Roburin (PhD student)
Antoine Saporta (PhD student)
Marin Toromanoff (PhD student) scholar
Huy Van Vo (PhD student) scholar
Tuan-Hung Vu (research scientist) scholar
Eloi Zablocki (research scientist) scholar

Openings

  • High-profile research scientists in machine learning or in 3D scene understanding
  • AI research engineer

Please contact us

Some projects

(for less technical updates, follow us on Medium and Twitter)

Multi-sensor perception — Automated driving relies first on a variety of sensors, such as Valeo’s fish-eye cameras, LiDARs, radars and ultrasonics. Making the best use of each sensor’s output at any instant is fundamental to understanding the complex environment of the vehicle and to gaining robustness. To this end, we explore various machine learning approaches where sensors are considered either in isolation (as radar in Carrada at ICPR’20) or collectively (as in xMUDA at CVPR’20).

3D perception — Each sensor delivers information about the 3D world around the vehicle. Making sense of this information in terms of drivable space and important objects (road users, curbs, obstacles, street furniture) in 3D is required for the driving system to plan and act in the safest and most comfortable way. This encompasses several challenging tasks, in particular the detection and segmentation of objects in point clouds, as in FKAConv at ACCV’20.

Frugal learning — Collecting sufficiently diverse data, and annotating it precisely, is complex, costly and time-consuming. To dramatically reduce these needs, we explore various alternatives to fully-supervised learning, e.g., training that is unsupervised (as rOSD at ECCV’20), self-supervised (as BoWNet at CVPR’20 and OBoW at CVPR’21), semi-supervised, active, zero-shot (as ZS3 at NeurIPS’19) or few-shot. We also investigate training with fully-synthetic data (in combination with unsupervised domain adaptation) and with GAN-augmented data (as with Semantic Palette at CVPR’21 and DummyNet at AAAI’21).

Domain adaptation — Deep learning and reinforcement learning are key technologies for autonomous driving. One of the challenges they face is adapting to conditions that differ from those met during training. To improve systems’ performance in such situations, we explore so-called “domain adaptation” techniques, as in AdvEnt at CVPR’19 and its extension DADA at ICCV’19.
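The entropy-minimization idea behind AdvEnt can be sketched in a few lines. This is a minimal plain-Python illustration with hypothetical function names, not the paper’s implementation: on unlabeled target-domain images, the per-pixel Shannon entropy of the predicted class probabilities serves as a training loss, pushing the model towards confident, source-like predictions.

```python
import math

def pixel_entropy(probs):
    """Shannon entropy of one pixel's class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def entropy_loss(prob_map):
    """Mean per-pixel entropy over a (hypothetical) H x W x C probability map.
    Minimizing this loss on unlabeled target-domain images pushes the
    segmentation model towards confident predictions."""
    entropies = [pixel_entropy(p) for row in prob_map for p in row]
    return sum(entropies) / len(entropies)

# A confident prediction has near-zero entropy; a uniform one reaches log(C).
confident = [[[0.98, 0.01, 0.01]]]
uniform = [[[1/3, 1/3, 1/3]]]
```

In the actual method, this scalar loss (or an adversarial variant of it) is added to the supervised loss computed on labeled source-domain images.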

Reliability — When the unexpected happens, when the weather degrades badly, when a sensor gets blocked, the on-board perception system should continue working or, at least, diagnose the situation and react accordingly, e.g., by calling an alternative system or the human driver. With this in mind, we investigate ways to improve the robustness of neural nets to input variations, including adversarial attacks, and to automatically predict the performance and confidence of their predictions, as in ConfidNet at NeurIPS’19.
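A standard baseline for such confidence estimation, which learned approaches like ConfidNet aim to improve upon, is the maximum class probability (MCP). A minimal sketch, in plain Python with hypothetical names:

```python
def mcp_confidence(probs):
    """Maximum class probability: a standard baseline confidence score
    for a classifier's softmax output."""
    return max(probs)

def flag_unreliable(prob_list, threshold=0.5):
    """Indices of predictions whose confidence falls below a (hypothetical)
    threshold, i.e. candidates for deferring to a fallback system or to
    the human driver."""
    return [i for i, p in enumerate(prob_list) if mcp_confidence(p) < threshold]

preds = [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25], [0.6, 0.3, 0.1]]
# flag_unreliable(preds) -> [1]
```

The limitation motivating learned alternatives is that modern networks are often wrongly confident: MCP can stay high even on misclassified inputs, which is precisely the failure mode that confidence-learning approaches target.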

Driving in action — Getting from sensory inputs to car control goes either through a modular stack (perception > localization > forecast > planning > actuation) or, more radically, through a single end-to-end model. We work on both strategies, more specifically on action forecasting, automatic interpretation of decisions taken by a driving system, and reinforcement / imitation learning for end-to-end systems (as in our RL work at CVPR’20).

Core Deep Learning — Deep learning is now a key component of AD systems, so it is important to better understand its inner workings, in particular the link between the specifics of the learning optimization and the key properties (performance, regularity, robustness, generalization) of the trained models. Among other things, we investigate the impact of the popular batch normalization on standard learning procedures, and the ability to learn through unsupervised distillation.
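For reference, the batch-normalization step mentioned above can be sketched in a few lines. This is a minimal 1-D, plain-Python illustration, not the form used in our experiments: activations are normalized to zero mean and unit variance over the batch, then rescaled by learnable parameters.

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of scalar activations to zero mean and unit
    variance, then apply the learnable affine transform (gamma, beta).
    eps guards against division by zero for constant batches."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
```

It is the coupling of these per-batch statistics with the optimizer dynamics (and their replacement by running averages at test time) that makes the interaction between normalization and training subtle.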

Code, data and posts

On our github:

  • BF3S: Boosting few-shot visual learning with self-supervision (ICCV’19) - PyTorch
  • ConfidNet: Addressing failure prediction by learning model confidence (NeurIPS’19) - PyTorch
  • Rainbow-IQN Ape-X: effective RL combination for Atari games - PyTorch
  • DADA: Depth-aware Domain Adaptation in Semantic Segmentation (ICCV’19) - PyTorch
  • AdvEnt: Adversarial Entropy minimization for domain adaptation in semantic segmentation (CVPR’19) - PyTorch

Our academic partners

Czech Technical University in Prague (Josef Sivic)
EPFL (Alexandre Alahi)
INRIA (Jean Ponce, Inria Paris and Karteek Alahari, Inria Grenoble)
le CNAM (Nicolas Thome and Avner Bar-hen)
Max Planck Institute (Christian Theobalt)
Ponts ParisTech (Mathieu Aubry)
Sorbonne Université (Matthieu Cord)
Telecom ParisTech (Florence Tupin and Alasdair Newson)

Recent news

  • 02/2020: Five papers accepted at CVPR’20 (22% acceptance rate), inc. one oral, see here.
  • 01/2020: Spyros Gidaris, Andrei Bursuc and Karteek Alahari (Inria) to deliver a tutorial on Few-Shot, Self-Supervised, and Incremental Learning at CVPR’20.
  • 01/2020: Pedestrian monitoring demo at CES, Las Vegas.
  • 12/2019: Medium post: Is deep Reinforcement Learning really superhuman on Atari?
  • 12/2019: Codes for our ICCV’19 papers “Boosting few-shot visual learning with self-supervision” and “DADA: Depth-Aware Domain Adaptation in semantic segmentation” available on our github (BF3S and DADA).
  • 11/2019: Spyros Gidaris receives the Thesis Prize from Université Paris Est.
  • 10/2019: Valeo.ai researchers present 5 papers at ICCV in Seoul, Korea, and Valeo participates in associated workshops on Autonomous Driving and on Autonomous Navigation in Unconstrained Environments.
  • 10/2019: PRAIRIE research institute is officially launched, with a nice day of talks and panels (inc. one on future mobility, program in French). Followed by PRAIRIE AI Summer School (PAISS), where Patrick Pérez delivered a lecture (slides).
  • 09/2019: Code of our NeurIPS’19 paper “Addressing failure prediction by learning model confidence” is available on valeo.ai github.
  • 09/2019: Work of Marin Toromanoff et al. discussed by Andrew Ng in The Batch (deeplearning.ai newsletter).
  • 09/2019: Two papers accepted at NeurIPS (21% acceptance rate), including one on the new problem of zero-shot semantic segmentation (“ZS3”). Matthieu Cord among the top contributors according to conference stats.
  • 07/2019: Medium post: ADVENT: Adversarial entropy minimization for domain adaptation in semantic segmentation.
  • 07/2019: Three papers accepted at ICCV’19 (24% acceptance rate), including one oral (4.6% acceptance rate), see here.
  • 07/2019: Code of our CVPR’19 paper “AdvEnt: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation” is available on valeo.ai github.
  • 07/2019: Marin Toromanoff (PhD student with Mines ParisTech, Valeo DAR and Valeo.ai) ranks 1st on Track 2 of Carla Challenge 2019, and 2nd on Track 1.
  • 06/2019: Spyros Gidaris receives the Best Thesis Prize from Ponts Foundation.
  • 06/2019: Valeo.ai researchers present 8 papers (25% acceptance rate), including 4 orals (5.6% acceptance rate), at CVPR. Patrick Pérez delivers keynote on “Sustainable supervision with application to autonomous driving” at the Safe AI for Automated Driving (SAIAD) CVPR’19 workshop.
  • 05/2019: Hedi Ben-younes defends his PhD at Sorbonne Université, committee: M. Cord, V. Ferrari, Y. LeCun, P. Pérez, L. Soulier, N. Thome, J. Verbeek, Ch. Wolf.
  • 05/2019: Himalaya Jain receives the Best Thesis Prize from Rennes 1 Foundation.
  • 05/2019: Valeo is proud to be part of Prairie, the new Paris Interdisciplinary Artificial Intelligence Institute. Stay tuned.

Communication

Human Resource Partner: Pascal Le Herisse
Assistant: Ouardia Moussouni
Location: 15 rue de La Baume, Paris