valeo.ai

An international team based in Paris that conducts AI research for Valeo automotive applications, in collaboration with world-class academics. Our main research is on reliable and sustainable automotive AI. See our papers, projects, code, posts and tweets.

Team


Florent Bartoccioni, PhD student
Hédi Ben-younes, research scientist | scholar | twitter
Alexandre Boulch, research scientist | page | scholar | github | twitter
Andrei Bursuc, research scientist | page | scholar | github | twitter
Laura Calem, PhD student | github | twitter
Mickaël Chen, research scientist | page | scholar | github
Charles Corbière, PhD student | page | scholar | github | twitter
Matthieu Cord, principal scientist | page | scholar | twitter
Spyros Gidaris, research scientist | scholar | github
David Hurych, research scientist | scholar
Renaud Marlet, principal scientist | page | scholar
Arthur Ouaknine, PhD student | page | scholar | twitter
Patrick Pérez, scientific director | page | scholar
Gilles Puy, research scientist | page | scholar
Julien Rebut, research scientist | scholar
Simon Roburin, PhD student | page
Antoine Saporta, PhD student | scholar
Tristan Schultz, research engineer
Oriane Siméoni, research scientist | page | scholar | github
Huy Van Vo, PhD student | scholar | github
Tuan-Hung Vu, research scientist | page | scholar | github | twitter
Eloi Zablocki, research scientist | scholar | twitter
Léon Zheng, PhD student

Human Resource Partner: Alain Phetsinghane
Assistant: Ouafa Bakrine
Location: 100 rue de Courcelles, Paris

Some projects

Multi-sensor perception — Automated driving relies first on a variety of sensors, such as Valeo’s fish-eye cameras, LiDARs, radars and ultrasonics. Making the best of each sensor’s output at every instant is fundamental to understanding the complex environment of the vehicle and to gaining robustness. To this end, we explore various machine learning approaches where sensors are considered either in isolation (as radar in CARRADA at ICPR’20) or collectively (as in xMUDA at CVPR’20).
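
As an illustration of the collective setting, here is a minimal sketch of cross-modal mutual learning in the spirit of xMUDA: two branches (e.g., camera and LiDAR) each mimic a detached copy of the other’s per-point prediction. All names and shapes are illustrative, not the actual xMUDA code.

```python
import torch
import torch.nn.functional as F

def cross_modal_loss(logits_2d, logits_3d):
    """KL mimicry between per-point predictions of two modalities.

    logits_2d, logits_3d: (N, C) class scores for the same N 3D points,
    one from a camera branch, one from a LiDAR branch (shapes illustrative).
    """
    log_p_2d = F.log_softmax(logits_2d, dim=1)
    log_p_3d = F.log_softmax(logits_3d, dim=1)
    # Each branch mimics a detached copy of the other (no gradient through the target).
    loss_2d = F.kl_div(log_p_2d, log_p_3d.detach().exp(), reduction="batchmean")
    loss_3d = F.kl_div(log_p_3d, log_p_2d.detach().exp(), reduction="batchmean")
    return loss_2d + loss_3d
```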

3D perception — Each sensor delivers information about the 3D world around the vehicle. Making sense of this information in terms of drivable space and important objects (road users, curbs, obstacles, street furniture) in 3D is required for the driving system to plan and act in the safest and most comfortable way. This encompasses several challenging tasks, in particular the detection and segmentation of objects in point clouds, as in FKAConv at ACCV’20.
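
For a feel of the input/output contract of such tasks, below is a toy per-point segmentation network in PyTorch: a PointNet-style shared MLP with global pooling, deliberately much simpler than FKAConv and purely illustrative.

```python
import torch
import torch.nn as nn

class PerPointSegmenter(nn.Module):
    """Toy per-point classifier: shared MLP plus global context.

    Far simpler than FKAConv/LightConvPoint, but it shows the I/O contract
    of point-cloud segmentation: (B, N, 3) points in, (B, N, C) scores out.
    """
    def __init__(self, num_classes=10):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, xyz):                      # xyz: (B, N, 3)
        feats = self.local(xyz)                  # (B, N, 64) per-point features
        global_feat = feats.max(dim=1).values    # (B, 64) permutation-invariant pooling
        global_feat = global_feat.unsqueeze(1).expand_as(feats)
        return self.head(torch.cat([feats, global_feat], dim=-1))  # (B, N, C)

logits = PerPointSegmenter()(torch.randn(2, 1024, 3))  # (2, 1024, 10)
```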

Frugal learning — Collecting sufficiently diverse data, and annotating it precisely, is complex, costly and time-consuming. To dramatically reduce these needs, we explore various alternatives to fully-supervised learning, e.g., training that is unsupervised (as rOSD at ECCV’20), self-supervised (as BoWNet at CVPR’20 and OBoW at CVPR’21), semi-supervised, active, zero-shot (as ZS3 at NeurIPS’19) or few-shot. We also investigate training with fully-synthetic data (in combination with unsupervised domain adaptation) and with GAN-augmented data (as with Semantic Palette at CVPR’21 and DummyNet at AAAI’21).
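
To make the bag-of-visual-words idea behind BoWNet/OBoW concrete, here is a hedged sketch: a teacher’s feature map is quantized against a visual-word vocabulary into a soft histogram, which a student fed a perturbed view learns to predict. Shapes, names and the hard assignment are simplifications, not the actual method.

```python
import torch
import torch.nn.functional as F

def bow_target(teacher_feats, vocab):
    """Bag-of-words target from teacher feature maps (BoWNet/OBoW spirit).

    teacher_feats: (B, D, H, W) convolutional features of the ORIGINAL image.
    vocab:         (K, D) visual-word embeddings (e.g., from clustering).
    Returns a (B, K) normalized word histogram.
    """
    B, D, H, W = teacher_feats.shape
    flat = teacher_feats.permute(0, 2, 3, 1).reshape(B, H * W, D)
    # Hard-assign each spatial feature to its nearest visual word.
    assign = torch.cdist(flat, vocab.unsqueeze(0).expand(B, -1, -1)).argmin(dim=-1)
    hist = F.one_hot(assign, vocab.size(0)).float().sum(dim=1)  # (B, K)
    return hist / hist.sum(dim=1, keepdim=True)

def bow_loss(student_logits, teacher_feats, vocab):
    """Student (fed a perturbed view) predicts the teacher's word histogram."""
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    bow_target(teacher_feats, vocab), reduction="batchmean")
```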

Domain adaptation — Deep learning and reinforcement learning are key technologies for autonomous driving. One of the challenges they face is adapting to conditions that differ from those met during training. To improve systems’ performance in such situations, we explore so-called “domain adaptation” techniques, as in AdvEnt at CVPR’19 and its extension DADA at ICCV’19.
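
As a concrete example, the direct entropy-minimization term used as a baseline in AdvEnt fits in a few lines (the full method additionally aligns entropy maps across domains with an adversarial discriminator); shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    """Shannon entropy of per-pixel predictions, averaged over the image.

    logits: (B, C, H, W) segmentation scores on *unlabeled target* images.
    Minimizing this pushes target predictions toward confident, low-entropy
    outputs (the 'MinEnt' baseline of AdvEnt).
    """
    p = F.softmax(logits, dim=1)
    ent = -(p * F.log_softmax(logits, dim=1)).sum(dim=1)  # (B, H, W)
    return ent.mean()
```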

Reliability — When the unexpected happens, when the weather badly degrades, when a sensor gets blocked, the on-board perception system should continue working or, at least, diagnose the situation so as to react accordingly, e.g., by calling an alternative system or the human driver. With this in mind, we investigate ways to improve the robustness of neural nets to input variations, including adversarial attacks, and to automatically predict the performance and confidence of their predictions, as in ConfidNet at NeurIPS’19.
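
A minimal sketch of the confidence-learning idea behind ConfidNet: an auxiliary head is trained to regress the True Class Probability (TCP) of a frozen classifier, a target that, unlike the maximum softmax score, is low on misclassified samples. Function names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def tcp_target(logits, labels):
    """True Class Probability: softmax score of the *ground-truth* class."""
    probs = F.softmax(logits, dim=1)                        # (B, C)
    return probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # (B,)

def confidence_loss(conf_pred, logits, labels):
    """Train an auxiliary confidence head to regress TCP (ConfidNet-style).

    conf_pred: (B,) output of the confidence head; logits come from the
    main classifier, which is kept frozen at this stage.
    """
    return F.mse_loss(conf_pred, tcp_target(logits, labels).detach())
```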

Driving in action — Getting from sensory inputs to car control goes either through a modular stack (perception > localization > forecast > planning > actuation) or, more radically, through a single end-to-end model. We work on both strategies, more specifically on action forecasting, on the automatic interpretation of decisions taken by a driving system, and on reinforcement / imitation learning for end-to-end systems (as in our RL work at CVPR’20).
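
For the end-to-end route, the simplest training signal is behavior cloning; the toy sketch below regresses expert controls from images (illustrative names and shapes, and a plain imitation setup rather than the CVPR’20 pipeline, which uses reinforcement learning with implicit affordances).

```python
import torch
import torch.nn as nn

class TinyDrivingPolicy(nn.Module):
    """Toy end-to-end policy: image in, (steering, throttle) out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),
        )

    def forward(self, image):          # image: (B, 3, H, W)
        return self.net(image)         # (B, 2) control commands

policy = TinyDrivingPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
images, expert_controls = torch.randn(8, 3, 128, 256), torch.randn(8, 2)
loss = nn.functional.mse_loss(policy(images), expert_controls)  # imitate the expert
opt.zero_grad(); loss.backward(); opt.step()
```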

Core deep learning — Deep learning is now a key component of AD systems, so it is important to better understand its inner workings, in particular the link between the specifics of the learning optimization and the key properties (performance, regularity, robustness, generalization) of the trained models. Among other things, we investigate the impact of the popular batch normalization on standard learning procedures and the ability to learn through unsupervised distillation.
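
One such batch-normalization property can be checked in a few lines: rescaling the filters of a convolution followed by BN leaves the output unchanged, since train-mode BN normalizes with batch statistics and cancels any positive rescaling of its input. This is the spherical invariance that AdamSRT exploits; below is a minimal PyTorch check.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, 3, bias=False)
bn = nn.BatchNorm2d(8)            # train mode: normalizes with batch statistics
x = torch.randn(4, 3, 32, 32)

with torch.no_grad():
    y1 = bn(conv(x))
    conv.weight.mul_(5.0)         # rescale the filters by a positive constant
    y2 = bn(conv(x))

print(torch.allclose(y1, y2, atol=1e-4))  # True: BN cancels the rescaling
```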

Code and data

  • LOST: Object localization with self-supervised transformers (BMVC’21)
  • MTAF: Multi-Target Adversarial Frameworks for domain adaptation (ICCV’21)
  • PCAM: Product of Cross-Attention Matrices for rigid registration of point clouds (ICCV’21)
  • SP4ASC: Separable convolutions for acoustic scene classification in the DCASE’21 Challenge
  • MVRSS: Multi-view radar semantic segmentation (ICCV’21)
  • ObsNet: Out-Of-Distribution detection by learning from local adversarial attacks in semantic segmentation (ICCV’21)
  • Semantic Palette: Guiding scene generation with class proportions (CVPR’21)
  • Attributes with Fields: Detecting 32 pedestrian attributes with composite fields (T-ITS)
  • OBoW: Online BoW generation for unsupervised representation learning (CVPR’21)
  • DummyNet: Artificial Dummies for Urban Dataset Augmentation (AAAI’21)
  • CARRADA: Camera and Automotive Radar with Range-Angle-Doppler Annotations dataset (ICPR’20)
  • ESL: Entropy-guided Self-supervised Learning for Domain Adaptation in Semantic Segmentation (workshop CVPR’20)
  • FLOT: Scene flow on point clouds guided by optimal transport (ECCV’20)
  • AdamSRT: Adam exploiting the BN-induced spherical invariance of CNNs (arXiv 2020)
  • LightConvPoint: Convolution for points (ACCV’20)
  • xMUDA: Cross-modal UDA for 3D semantic segmentation (CVPR’20)
  • LearningByCheating: End-to-End driving using implicit affordances (CVPR’20)
  • rOSD: Unsupervised object discovery at scale (ECCV’20)
  • ConvPoint: Convolutions for unstructured point clouds (Computers & Graphics 2020)
  • BEEF: Driving behavior explanation with multi-level fusion (workshop NeurIPS’20)
  • Woodscape: Driving fisheye multi-task dataset (ICCV’19)
  • ZS3: Zero-Shot Semantic Segmentation (NeurIPS’19)
  • BF3S: Boosting few-shot visual learning with self-supervision (ICCV’19)
  • ConfidNet: Addressing failure prediction by learning model confidence (NeurIPS’19)
  • Rainbow-IQN Ape-X: Effective RL combination for Atari games
  • DADA: Depth-aware Domain Adaptation in Semantic Segmentation (ICCV’19)
  • AdvEnt: Adversarial Entropy minimization for domain adaptation in semantic segmentation (CVPR’19)
  • OSD: Unsupervised object discovery as optimization (CVPR’19)

Academic partners

CNAM, Paris (Nicolas Thome)
CTU, Prague (Josef Sivic)
EPFL, Lausanne (Alexandre Alahi)
ENS & Inria, Lyon (Rémi Gribonval)
Inria & PR[AI]RIE, Paris (Jean Ponce)
Inria, Grenoble (Karteek Alahari)
MPI, Saarbrücken (Christian Theobalt)
Ponts, Paris (Mathieu Aubry, David Picard)
Sorbonne, Paris (Matthieu Cord)
Télécom, Paris (Florence Tupin, Alasdair Newson, Florence d’Alché-Buc)

News

Communication

Alumni

Himalaya Jain, research scientist (page, scholar), now at Helsing.ai
Marin Toromanoff, PhD student (scholar), now at Valeo Driving Assistance Research
Maxime Bucher, research scientist (page, scholar)
Maximilian Jaritz, PhD student (page, scholar), now at Amazon
Gabriel de Marmiesse, research engineer (github), now at Preligens
Emilie Wirbel, research scientist (scholar), now at Nvidia