valeo.ai

An international team based in Paris that conducts AI research for Valeo automotive applications, in collaboration with world-class academics. Our main research is towards better, clearer & safer automotive AI. See our papers, projects, code, posts and tweets.

Team


Hédi Ben-younes, research scientist | scholar | twitter
Florent Bartoccioni, PhD student
Alexandre Boulch, research scientist | page | scholar | github | twitter
Andrei Bursuc, research scientist | page | scholar | twitter
Laura Calem, PhD student | page | twitter
Mickaël Chen, research scientist | scholar
Charles Corbière, PhD student | page | scholar | twitter
Matthieu Cord, principal scientist | page | scholar | twitter
Spyros Gidaris, research scientist | scholar
David Hurych, research scientist | scholar
Himalaya Jain, research scientist | page | scholar
Renaud Marlet, principal scientist | page | scholar
Arthur Ouaknine, PhD student | twitter
Patrick Pérez, scientific director | page | scholar
Gilles Puy, research scientist | page | scholar
Julien Rebut, research scientist
Simon Roburin, PhD student | page
Antoine Saporta, PhD student | scholar
Tristan Schultz, research engineer
Oriane Siméoni, research scientist | scholar
Marin Toromanoff, PhD student | scholar
Huy Van Vo, PhD student | scholar | github
Tuan-Hung Vu, research scientist | page | scholar | twitter
Eloi Zablocki, research scientist | scholar | twitter

Human Resource Partner: Pascal Le Hérissé
Assistant: Ouardia Moussouni
Location: 15 rue de La Baume, Paris

Some projects

Multi-sensor perception — Automated driving relies first on a variety of sensors, like Valeo’s fish-eye cameras, LiDARs, radars and ultrasonics. Making the best use of the output of each of these sensors at any instant is fundamental to understanding the complex environment of the vehicle and to gaining robustness. To this end, we explore various machine learning approaches where sensors are considered either in isolation (as radar in CARRADA at ICPR’20) or collectively (as in xMUDA at CVPR’20).
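
As an illustration of the collective use of sensors, here is a minimal sketch in the spirit of xMUDA’s cross-modal learning: a symmetric consistency loss between per-point predictions of a camera branch and a LiDAR branch. Tensor shapes and names are illustrative assumptions, not the actual xMUDA implementation.

```python
import torch
import torch.nn.functional as F

def cross_modal_loss(logits_2d, logits_3d):
    """Symmetric KL divergence making each modality mimic the other.
    Both inputs are per-point class scores of shape [num_points, num_classes]."""
    p_2d = F.softmax(logits_2d, dim=1)
    p_3d = F.softmax(logits_3d, dim=1)
    log_p_2d = F.log_softmax(logits_2d, dim=1)
    log_p_3d = F.log_softmax(logits_3d, dim=1)
    # Each branch learns from the (detached) prediction of the other.
    kl_2d_to_3d = F.kl_div(log_p_3d, p_2d.detach(), reduction="batchmean")
    kl_3d_to_2d = F.kl_div(log_p_2d, p_3d.detach(), reduction="batchmean")
    return kl_2d_to_3d + kl_3d_to_2d

# Usage with random per-point logits from a camera branch and a LiDAR branch.
logits_cam = torch.randn(1024, 10, requires_grad=True)
logits_lidar = torch.randn(1024, 10, requires_grad=True)
loss = cross_modal_loss(logits_cam, logits_lidar)
loss.backward()
```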

3D perception — Each sensor delivers information about the 3D world around the vehicle. Making sense of this information in terms of drivable space and important objects (road users, curbs, obstacles, street furniture) in 3D is required for the driving system to plan and act in the safest and most comfortable way. This encompasses several challenging tasks, in particular the detection and segmentation of objects in point clouds, as in FKAConv at ACCV’20.
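
To give a flavor of learning on point clouds, the sketch below implements a simple PointNet-style neighborhood aggregation (k nearest neighbors, shared MLP, max-pooling). It is a toy stand-in, not the FKAConv operator; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class SimplePointConv(nn.Module):
    """Toy point convolution: for each point, aggregate the features of its
    k nearest neighbours with a shared MLP followed by max-pooling."""
    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(in_dim + 3, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, xyz, feats):
        # xyz: [N, 3] point coordinates, feats: [N, C] per-point features
        dists = torch.cdist(xyz, xyz)                        # [N, N] pairwise distances
        knn_idx = dists.topk(self.k, largest=False).indices  # [N, k] neighbour indices
        neigh_xyz = xyz[knn_idx] - xyz.unsqueeze(1)          # [N, k, 3] relative coordinates
        neigh_feats = feats[knn_idx]                         # [N, k, C]
        h = self.mlp(torch.cat([neigh_xyz, neigh_feats], dim=-1))
        return h.max(dim=1).values                           # [N, out_dim]

# Usage on a random point cloud.
xyz = torch.randn(2048, 3)
feats = torch.randn(2048, 8)
conv = SimplePointConv(in_dim=8, out_dim=32)
out = conv(xyz, feats)  # [2048, 32]
```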

Frugal learning — Collecting sufficiently diverse data, and annotating it precisely, is complex, costly and time-consuming. To dramatically reduce these needs, we explore various alternatives to fully-supervised learning, e.g., training that is unsupervised (as rOSD at ECCV’20), self-supervised (as BoWNet at CVPR’20), semi-supervised, active, zero-shot (as ZS3 at NeurIPS’19) or few-shot. We also investigate training with fully-synthetic data (in combination with unsupervised domain adaptation) and with GAN-augmented data.
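
To give a flavor of self-supervision, here is a minimal sketch of a generic pretext task, rotation prediction, on unlabeled images. It is a stand-in example rather than the bag-of-words objective of BoWNet, and the tiny encoder is purely illustrative.

```python
import torch
import torch.nn as nn

# A tiny encoder; in practice this would be a full backbone (e.g. a ResNet).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rotation_head = nn.Linear(32, 4)  # predict one of 4 rotations (0/90/180/270 degrees)

images = torch.randn(8, 3, 64, 64)      # unlabeled images
labels = torch.randint(0, 4, (8,))      # random rotation applied to each image
rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                       for img, k in zip(images, labels)])

# The pretext labels come for free, so no manual annotation is needed.
logits = rotation_head(encoder(rotated))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```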

Domain adaptation — Deep learning and reinforcement learning are key technologies for autonomous driving. One of the challenges they face is adapting to conditions that differ from those met during training. To improve systems’ performance in such situations, we explore so-called “domain adaptation” techniques, as in AdvEnt at CVPR’19 and its extension DADA at ICCV’19.
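
The entropy-minimization idea behind AdvEnt can be sketched in a few lines: on unlabeled target-domain images, the segmentation network is pushed towards confident (low-entropy) predictions. The snippet below illustrates this term only, leaving out the adversarial variant and the supervised source loss; shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Mean per-pixel entropy of the softmax predictions.
    logits: [B, C, H, W] segmentation scores on (unlabeled) target images."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(probs * log_probs).sum(dim=1)  # [B, H, W]
    return entropy.mean()

# On target-domain images, only the entropy term is minimized;
# source-domain images keep the usual supervised cross-entropy.
target_logits = torch.randn(2, 19, 64, 128, requires_grad=True)
loss = entropy_loss(target_logits)
loss.backward()
```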

Reliability — When the unexpected happens, when the weather badly degrades, when a sensor gets blocked, the on-board perception system should continue working or, at least, diagnose the situation to react accordingly, e.g., by calling an alternative system or the human driver. With this in mind, we investigate ways to improve the robustness of neural nets to input variations, including adversarial attacks, and to automatically predict the performance and confidence of their predictions, as in ConfidNet at NeurIPS’19.
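
In the spirit of ConfidNet, the sketch below trains a small auxiliary head to regress the probability that the classifier assigns to the true class, and uses it as a confidence estimate. The head, feature dimensions and shapes are hypothetical, not the actual ConfidNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical inputs: 'features' come from a (frozen) classifier backbone,
# 'logits' are its class scores, 'labels' the ground-truth classes.
features = torch.randn(16, 128)
logits = torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))

confidence_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

# Regression target: the probability the classifier assigns to the *true* class.
with torch.no_grad():
    tcp = F.softmax(logits, dim=1).gather(1, labels.unsqueeze(1))  # [16, 1]

pred_conf = torch.sigmoid(confidence_head(features))               # [16, 1]
loss = F.mse_loss(pred_conf, tcp)
loss.backward()
```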

Driving in action — Getting from sensory inputs to car control goes either through a modular stack (perception > localization > forecast > planning > actuation) or, more radically, through a single end-to-end model. We work on both strategies, more specifically on action forecasting, automatic interpretation of decisions taken by a driving system, and reinforcement / imitation learning for end-to-end systems (as in our RL work at CVPR’20).
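
As a minimal illustration of the end-to-end strategy, the sketch below maps a camera image directly to two control commands. It is a toy network with illustrative names and shapes, not one of our driving models.

```python
import torch
import torch.nn as nn

# Toy end-to-end policy: camera image in, control commands out.
# Real systems add temporal context, multiple sensors, affordances, etc.
policy = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # [steering, throttle]
)

image = torch.randn(1, 3, 128, 256)
steering, throttle = policy(image).tanh().squeeze(0)
print(float(steering), float(throttle))
```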

Core Deep Learning — As deep learning is now a key component of AD systems, it is important to better understand its inner workings, in particular the link between the specifics of the learning optimization and the key properties (performance, regularity, robustness, generalization) of the trained models. Among other things, we investigate the impact of the popular batch normalization on standard learning procedures and the ability to learn through unsupervised distillation.
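
One concrete example of such inner workings is the scale invariance induced by batch normalization: rescaling the weights of a convolution followed by BN leaves the output unchanged, which is the kind of invariance studied in AdamSRT. The snippet below is a small numerical check of this property, not the optimizer itself.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 3, 32, 32)

conv = nn.Conv2d(3, 16, 3, bias=False)
bn = nn.BatchNorm2d(16)

y1 = bn(conv(x))

# Rescale the convolution weights by a positive constant: the BN output
# is unchanged, because the batch statistics rescale by the same factor.
with torch.no_grad():
    conv.weight.mul_(5.0)
y2 = bn(conv(x))

print(torch.allclose(y1, y2, atol=1e-4))  # True, up to numerical precision
```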

Code

  • OBoW: Online BoW generation for unsupervised representation learning (arXiv 2020)
  • CARRADA: Camera and Automotive Radar with Range-Angle-Doppler Annotations dataset (ICPR’20)
  • ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in Semantic Segmentation (CVPRw’20)
  • FLOT: Scene flow on point clouds guided by optimal transport (ECCV’20)
  • AdamSRT: Adam exploiting the BN-induced spherical invariance of CNNs (arXiv 2020)
  • LightConvPoint: Convolution for points (ACCV’20)
  • xMUDA: Cross-modal UDA for 3D semantic segmentation (CVPR’20)
  • LearningByCheating: End-to-End driving using implicit affordances (CVPR’20)
  • rOSD: Unsupervised object discovery at scale (ECCV’20)
  • ConvPoint: Convolutions for unstructured point clouds (Computers & Graphics 2020)
  • ZS3: Zero-Shot Semantic Segmentation (NeurIPS’19)
  • BF3S: Boosting few-shot visual learning with self-supervision (ICCV’19)
  • ConfidNet: Addressing failure prediction by learning model confidence (NeurIPS’19)
  • Rainbow-IQN Ape-X: Effective RL combination for Atari games
  • DADA: Depth-aware Domain Adaptation in Semantic Segmentation (ICCV’19)
  • AdvEnt: Adversarial Entropy minimization for domain adaptation in semantic segmentation (CVPR’19)
  • OSD: Unsupervised object discovery as optimization (CVPR’19)

Academic partners

CNAM (Nicolas Thome)
CTU Prague (Josef Sivic)
EPFL (Alexandre Alahi)
INRIA (Jean Ponce, Karteek Alahari)
MPI (Christian Theobalt)
Ponts (Mathieu Aubry)
Sorbonne (Matthieu Cord)
Télécom (Florence Tupin, Alasdair Newson, Florence d’Alché-Buc)

News

Communication

Alumni

Maxime Bucher, research scientist (page, scholar), now at Augustus Intelligence
Maximilian Jaritz, PhD student (page, scholar), now at Amazon
Gabriel de Marmiesse, research engineer (github), now at EarthCube
Emilie Wirbel, research scientist (scholar), now at NVIDIA