Towards the self-driving era with Continental and the Budapest University of Technology and Economics

SmartLab AI
May 30, 2020

Authors: Róbert Moni, Péter Béla Almási, Fábián Füleki, Tamás Illés, András Kalapos, Zoltán Lőrincz, Zsombor Tóth, Bálint Gyires-Tóth

The Continental Deep Learning Competence Center Budapest and the SmartLabs of the Department of Telecommunications and Media Informatics (TMIT) and the Department of Control Engineering and Information Technology (IIT) at the Budapest University of Technology and Economics are working together to explore and develop novel solutions for self-driving cars. Here we summarize the results of the period leading up to 2020.

Introduction and overview

More than three decades have passed since the first computer-controlled vehicle, Navlab 1, rolled out of Carnegie Mellon University’s research lab in 1986. Another milestone followed in 1989 with ALVINN (Autonomous Land Vehicle In a Neural Network), which still serves as a reference point for machine learning approaches in the domain. Today we call them self-driving vehicles, and with the advances in electrical engineering the technology has become commercially available. Advanced Driver Assistance Systems (ADAS) have matured to provide a wide variety of automated driving solutions by combining classic and modern signal processing and control engineering methods, but the gap between automated and autonomous driving has not yet been bridged. One of the major requirements for an autonomous vehicle is reasoning even in the most severe driving scenarios. We believe that the best way to obtain reasoning in a computer system is via deep reinforcement learning (DRL); thus, our research group focuses on delivering autonomous driving solutions with the power of deep learning and reinforcement learning.

We focus on vision-based control, and the major research questions we aim to answer are the following:

  • observation processing: how should the camera image be processed and what information should the policy agent receive from it?
  • action processing: what action configuration and limitations serve the agent best?
  • reward shaping: what is the best reward that reinforces self-driving?
  • methods: which methods serve the task best?
  • transferability: how can we trust an agent in the real world when it is trained in a simulation?
  • adaptability: are the methods adaptive to new requirements?
  • robustness: can the agent ensure safety in different traffic scenarios?
  • reproducibility: are the solutions stable and easy to reproduce?
  • explainable AI: how do the methods reason and make their decisions?

During the past year, our team focused on answering these questions by reproducing, adapting and adding novelties to state-of-the-art solutions in the field. Computer vision solutions such as object detection, semantic segmentation and depth estimation are among the best deep learning approaches for environment interpretation. Reinforcement learning and imitation learning are two rival approaches capable of learning an optimal driving policy. In the following, each member of our research group presents their experience and achievements working with these solutions.
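
To make the vision-based control setting concrete, below is a minimal sketch of a convolutional policy network that maps a camera observation directly to continuous driving actions, the kind of model that both imitation learning and reinforcement learning can train. It is illustrative only: the architecture, input size and two-dimensional action output are assumptions made for this example, not our actual models.

```python
# Illustrative vision-based policy: camera image in, continuous actions out.
# The architecture and action dimensionality are assumptions for this sketch.
import torch
import torch.nn as nn

class VisionPolicy(nn.Module):
    def __init__(self, n_actions: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # e.g. wheel velocities or a speed/steering pair, squashed to [-1, 1]
        self.head = nn.Linear(64, n_actions)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.head(self.encoder(image)))

# Example: one 84x84 RGB observation -> an action tensor of shape (1, 2)
policy = VisionPolicy()
action = policy(torch.rand(1, 3, 84, 84))
```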

AI Driving Olympics

Challenges are great fuel for accelerating research work because they provide two effective motivators: a deadline and adversaries. Our chosen motivator was the AI Driving Olympics, which is organized by the Duckietown Foundation.

Duckietown is a worldwide initiative and project aiming to provide a platform for AI and mobile robotics education and research. It started as an MIT course and has since been adopted at many universities around the globe. As a platform for researching autonomous driving technology, it is much simpler and cheaper to use than full-sized, real-world vehicles, while providing many challenging problems and research opportunities of similar scientific value. The physical Duckietown environment consists of “standardized” robots (so-called Duckiebots) and road tiles, from which complex road networks can be constructed, with multiple robots navigating and interacting within them. A simulator called the Duckietown Gym (built on the OpenAI Gym framework) is also provided; it matches the real environment reasonably well and offers several features that support training supervised and reinforcement learning agents.
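
Because the simulator follows the standard OpenAI Gym interface, an agent can be trained or evaluated with the usual reset/step loop. The snippet below is a minimal sketch assuming the public gym-duckietown package; the environment id and action semantics are assumptions and may differ between package versions.

```python
# Minimal interaction loop with the Duckietown Gym simulator through the
# classic OpenAI Gym API. The environment id is an assumption and may
# differ between gym-duckietown versions.
import gym
import gym_duckietown  # importing the package registers the Duckietown-* environments

env = gym.make("Duckietown-loop_empty-v0")
obs = env.reset()  # camera image observation (H x W x 3 array)

for _ in range(500):
    action = env.action_space.sample()  # placeholder for a trained policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()

env.close()
```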

The AI Driving Olympics has been organized every six months since December 2018. Each edition features a long online selection round before the finals, which are organized at major computer science and robotics conferences (NeurIPS and ICRA). The selection round is freely open to anyone, so the competition attracts many participants from universities and research groups around the world. Participants submit their solutions via an online evaluation platform, which benchmarks them both in simulation and on real robots. The real-robot evaluations are performed by the organizers of the competition in the same Duckietown setup (called the robotarium) located at ETH Zürich. The robots and road system can also be purchased, so researchers can quickly test solutions on their own hardware.

This competition calls for submission in three challenges:

  • Lane Following (LF): The robot has to drive in the right-hand lane and travel the longest possible distance in a given amount of time without leaving the road. Leaving the road terminates the evaluation episode.
  • Lane Following with other Vehicles (LFV): The robot has to perform the same lane following task, but other vehicles are present on the roads (both in the robot’s own lane and in the oncoming one). Collisions are of course not allowed and terminate the episode.
  • Lane Following with other Vehicles and Intersections (LFVI): The road system also contains intersections with traffic lights and signs, whose directions the robot has to follow while avoiding collisions with other vehicles.

Our first appearance in the challenge was at the NeurIPS conference in December 2019, where we achieved the following results:

  • LF in simulation: 1st place
  • LFV in simulation: 1st place
  • LFVI in simulation: 2nd place
  • LF in robotarium: 3rd place

Topics

Please find below the corresponding works of the outstanding students involved in the project:

Acknowledgement

We are grateful for the support of Continental Automotive Ltd.

