Author: Zoltán Lőrincz
Throughout the previous semesters, I used Imitation Learning to carry out lane following in the Duckietown¹ environment. The agents were trained in the Duckietown simulator using different Imitation Learning methods such as Behavioral Cloning², Dataset Aggregation³ (DAgger) and Generative Adversarial Imitation Learning⁴ (GAIL). Although the models performed well in the simulated environment, they did not generalize well enough to the real-world environment and failed at the lane-following task.
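For readers unfamiliar with these methods: Behavioral Cloning is the simplest of the three — it treats imitation as plain supervised learning on (observation, expert action) pairs. A minimal NumPy sketch with synthetic data and a linear policy (purely illustrative, not the Duckietown setup):

```python
import numpy as np

# Synthetic "expert" demonstrations: observations -> steering actions.
rng = np.random.default_rng(0)
obs = rng.normal(size=(500, 4))           # e.g. lane-pose features
true_w = np.array([0.5, -1.0, 0.2, 0.0])  # the (unknown) expert policy
actions = obs @ true_w                    # expert actions for each observation

# Behavioral cloning: fit a policy to the demonstrations by least squares.
w_bc, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# The cloned policy now imitates the expert on unseen observations.
new_obs = rng.normal(size=(10, 4))
print(np.allclose(new_obs @ w_bc, new_obs @ true_w, atol=1e-6))  # prints True
```

DAgger improves on this by repeatedly querying the expert on the states the learned policy actually visits, which mitigates the distribution shift that plain cloning suffers from.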
This semester I focused on applying Transfer Learning techniques to solve the simulator-to-real problem. I present my work in the sections below.
Author: Robert Moni
Sorry for the clickbait, but no, we aren’t the engineers that put together the Perseverance Mars rover, although we do possess the right amount of perseverance to achieve our goals.
This blog post attempts to provide a summary of our latest research achievements under the aegis of the PIA project.
The Professional Intelligence for Automotive Project was founded in 2019 as a cooperation project between Continental Deep Learning Competence Center Budapest and the SmartLabs of the Department of Telecommunications and Media Informatics (TMIT) and the Department of Control Engineering and Information Technology (IIT) at the Budapest University…
Author: Zsombor Tóth
Part 1 of this blog post can be found here.
Deep Learning is one of the most important techniques of autonomous vehicles nowadays. There are many components of a self-driving vehicle that can be realized with the help of deep neural networks (e.g. car and pedestrian detection, depth estimation, scene interpretation, control, etc.). Among them, semantic segmentation is without a doubt one of the most essential components. …
Author: Gábor Lant
One of the key strengths of deep learning is that it can work with large amounts of data to train machine learning models. These models can be trained to find the underlying structure of the data. On the other hand, this also means that data is a prerequisite for these methods to work properly. Finding enough good-quality data, however, can be a challenging problem in itself, and in computer vision tasks it is a common one. Labeling images is a difficult task and usually can only be done by a trained human annotator. For example, annotating…
Author: Tamás Illés
In Machine Learning (ML), we typically care about optimizing for a particular metric, whether this is a score on a certain benchmark or a business Key Performance Indicator (KPI). In order to do this, we generally train a single model or an ensemble of models to perform our desired task. We then fine-tune and tweak these models until their performance no longer increases. While we can generally achieve acceptable performance this way, by being laser-focused on our single task, we ignore information that might help us do even better on the metric we care about. Specifically, this…
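One classic way to stop ignoring that extra information is multi-task learning with hard parameter sharing: a single shared trunk feeding several task-specific heads. A hypothetical NumPy sketch of such a forward pass (the layer sizes and the two heads are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hard parameter sharing: one shared hidden layer, two task-specific heads.
W_shared = rng.normal(size=(8, 16))  # shared representation
W_task_a = rng.normal(size=(16, 1))  # e.g. a regression head
W_task_b = rng.normal(size=(16, 3))  # e.g. a 3-class classification head

def forward(x):
    h = np.maximum(x @ W_shared, 0.0)  # shared ReLU features
    return h @ W_task_a, h @ W_task_b  # one output per task

x = rng.normal(size=(4, 8))
out_a, out_b = forward(x)
print(out_a.shape, out_b.shape)  # (4, 1) (4, 3)
```

During training, the per-task losses are summed (often with per-task weights), so gradients from every task shape the shared representation — this is the extra information a single-task model throws away.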
Author: András Béres
Recently the topic of self-driving cars has received great attention both from academia and the public. While Deep Learning can provide us tools for processing vast amounts of sensor data, Reinforcement Learning promises us the ability to take the right actions in complex interactive environments. Using these tools could be one way to solve the self-driving task; however, some problems make the real-world application of these techniques difficult.
Since having AI agents collect large amounts of experience while interacting with the real world is usually too expensive and sometimes even dangerous, we usually train the agents in…
Author: Péter Almási
My goal was to create a method for controlling vehicles to perform autonomous lane following using deep reinforcement learning. The agent is trained in a simulated environment without any real-world data and is then evaluated in the real world, including under extreme test scenarios: night-mode driving and recovery from irregular starting positions.
Deep Reinforcement Learning (DRL) is a field of machine learning in which intelligent software agents learn to attain their goals in an environment. These agents utilize deep neural networks to learn the best possible action in each state. …
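The underlying principle — learning the value of each action in each state — can be illustrated with tabular Q-learning on a toy chain environment. This is a sketch of the idea only; a DRL agent replaces the table below with a deep neural network:

```python
import numpy as np

# Toy 5-state chain; reward 1 for reaching the rightmost state.
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # the table a deep net would replace
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

def step(s, a):
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for _ in range(500):                      # episodes
    s = 0
    for _ in range(50):                   # cap the episode length
        a = int(rng.integers(n_actions))  # random exploration (off-policy)
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap on the value of the best next action.
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2
        if done:
            break

print(np.argmax(Q[:4], axis=1))  # greedy action in each non-terminal state
```

After training, the greedy policy argmax over Q should move right from every non-terminal state, i.e. the agent has learned the best possible action in each state.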
Author: Márton Tim
In Deep Reinforcement Learning (DRL), slow convergence and the low performance of the resulting agent are common issues that only intensify as the problem becomes more complex. This is one reason why end-to-end applications of DRL might soon be outperformed by solutions that break the original problem down into smaller, meaningful subtasks.
Last semester, I worked on a robust, obstacle-avoiding lane-follower agent that uses segmentation to simplify its observations. The idea, motivation and details of my solution are all discussed below, so keep on reading :)
Author: Dániel Unyi
Link prediction is the task of predicting whether two components in a network are likely to interact with each other. It is a fundamental task in network science, with a wide variety of real-world applications. Examples include predicting friendship links on social media, identifying hidden communities, or discovering drug-drug interactions in pharmacology. However, current state-of-the-art algorithms are unable to scale efficiently to large graphs. My goal was to work out a scalable, accurate link prediction method by exploiting the modeling power of deep neural networks.
Graph-based deep learning generalizes traditional methods by allowing connectivity between data points. Accordingly…
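As a toy illustration of link prediction — using simple spectral embeddings rather than the deep neural networks discussed in the post — one can embed the nodes of a small graph and score candidate links with an inner product:

```python
import numpy as np

# Toy graph: two triangles (nodes 0-2 and 3-5) joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Spectral node embeddings: top-k singular vectors of the adjacency matrix.
U, S, _ = np.linalg.svd(A)
k = 2
Z = U[:, :k] * S[:k]  # embedding matrix, one row per node

def link_score(i, j):
    """Inner-product decoder: a higher score means a more likely link."""
    return float(Z[i] @ Z[j])

# A within-cluster pair should score higher than a cross-cluster pair.
print(link_score(0, 1), link_score(0, 5))
```

Within-cluster pairs score higher than cross-cluster pairs, which is exactly the signal a link predictor exploits; GNN-based methods learn such embeddings end-to-end instead of reading them off the spectrum, which is what makes them both accurate and, with the right architecture, scalable.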
Author: Robert Moni
This year we competed with 6 different solutions at the 5th edition of the AI Driving Olympics (AIDO), which was part of the 34th Conference on Neural Information Processing Systems (NeurIPS). With a total of 94 competitors and 1326 submitted solutions, we are proud to announce that our team ranked at the top in 2 out of the 3 challenges.