
Development of a Method for the Fusion of Environment Perception Sensor Data for an Automated Vehicle

Uecker, Marc (2021)
Development of a Method for the Fusion of Environment Perception Sensor Data for an Automated Vehicle.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00018613
Master Thesis, Primary publication, Publisher's Version

Text: Masterthesis_Marc_Uecker_tuprints.pdf (36 MB)
Copyright Information: CC BY-NC-ND 4.0 International - Creative Commons, Attribution-NonCommercial-NoDerivs
Item Type: Master Thesis
Type of entry: Primary publication
Title: Development of a Method for the Fusion of Environment Perception Sensor Data for an Automated Vehicle
Language: English
Referees: Kuijper, Prof. Dr. Arjan ; Winner, Prof. Dr. Hermann ; Linnhoff, M.Sc. Clemens
Date: 2021
Place of Publication: Darmstadt
Collation: XI, 98 pages
DOI: 10.26083/tuprints-00018613
Abstract:

Autonomous driving is currently one of the most anticipated future technologies in the automotive industry, and researchers around the world are dedicated to making it a reality. In the same pursuit, the aDDa project at TU Darmstadt is a collaboration of researchers and students focused on jointly engineering a car into a fully autonomous vehicle. As such, the aDDa research vehicle is outfitted with a wide array of sensors for environment perception.

Within the scope of the aDDa project, this thesis covers the fusion of data from LIDAR, RADAR, and camera sensors into a unified environment model. Specifically, this work focuses on providing real-time environment perception, including the fusion and interpretation of data from different sensors, using only on-board hardware resources.
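A core building block of such low-level LIDAR-camera fusion is projecting LIDAR points into the camera image using the sensors' calibration. The following Python sketch illustrates only that generic step; the pinhole camera model, the matrix names, and the near-plane cutoff are illustrative assumptions, not the implementation from the thesis.

import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K, image_shape):
    """Project Nx3 LIDAR points (sensor frame) into pixel coordinates.
    T_cam_lidar: 4x4 extrinsic transform, K: 3x3 camera intrinsics
    (both hypothetical calibration inputs)."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]   # LIDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]       # drop points behind the camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                # perspective divide
    h, w = image_shape[:2]
    keep = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[keep].astype(int)                  # pixel coordinates (u, v)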

The developed method is a software pipeline consisting of an analytical low-level sensor fusion stage, a 3D semantic segmentation model based on deep learning, and analytical clustering and tracking methods, as well as a proof of concept for estimating drivable space. The method is designed to maximize robustness by minimizing the influence of the machine learning approach on the reliability of obstacle detection. The sensor fusion pipeline runs in real time with an output frequency of 10 Hz and a pipeline delay of 120 to 190 milliseconds in the situations encountered on public roads.
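For orientation, the stage structure described above can be sketched as a simple pipeline skeleton. All names and interfaces below are hypothetical placeholders that only mirror the stated data flow (fused cloud, per-point semantics, clusters, tracks, drivable space); they are not taken from the thesis.

class PerceptionPipeline:
    """Hypothetical skeleton mirroring the described stages; the callables
    passed in are placeholders, not components from the thesis."""
    def __init__(self, fuse, segment, cluster, track, estimate_drivable):
        self.fuse = fuse                            # analytical low-level sensor fusion
        self.segment = segment                      # learned 3D semantic segmentation
        self.cluster = cluster                      # analytical clustering
        self.track = track                          # analytical object tracking
        self.estimate_drivable = estimate_drivable  # proof-of-concept stage

    def step(self, lidar, radar, camera):
        cloud = self.fuse(lidar, radar, camera)     # unified point cloud
        labels = self.segment(cloud)                # per-point semantic classes
        # Clustering operates on geometry, so obstacle detection does not
        # hinge on the learned model alone (the robustness argument above).
        objects = self.track(self.cluster(cloud, labels))
        return objects, self.estimate_drivable(cloud, labels)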

An evaluation of several scenarios shows that the developed system can reliably detect a target vehicle in a variety of real-world situations.

The contributions of this work include not only the development of a sensor fusion pipeline, but also methods for sensor calibration, as well as a novel method for generating training data for the machine learning approach used. In contrast to existing manual methods of data annotation, this work presents a scalable solution for annotating real-world sensor recordings to generate training data for 3D machine perception approaches for autonomous driving.
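A common pattern for scalable point cloud annotation, shown here purely as an illustration (the thesis presents its own method), is to let a pretrained 2D segmentation network label camera images and then transfer those labels to the projected LIDAR points:

import numpy as np

def auto_label_pointcloud(points_xyz, seg_mask, T_cam_lidar, K):
    """Give each LIDAR point the class id of the pixel it projects to.
    seg_mask: HxW integer class map from a 2D segmentation model
    (all inputs are hypothetical examples)."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    z = pts_cam[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        uv = (K @ pts_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]
    h, w = seg_mask.shape
    valid = (z > 0.1) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    labels = np.full(len(points_xyz), -1, dtype=int)   # -1 = not visible
    u = uv[valid, 0].astype(int)
    v = uv[valid, 1].astype(int)
    labels[valid] = seg_mask[v, u]
    return labels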

Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-186134
Keywords: sensor fusion, environment perception, machine perception, machine learning, object detection, semantic segmentation, pointclouds, point cloud, pointcloud, deep learning, computer vision, autonomous driving, LIDAR, RADAR, camera, calibration, sensor calibration, camera calibration, motion compensation, data annotation, training data, dataset, labeling

Classification DDC: 000 Generalities, computers, information > 004 Computer science
600 Technology, medicine, applied sciences > 620 Engineering and machine engineering
Divisions: 20 Department of Computer Science > Interactive Graphics Systems
Date Deposited: 29 Jun 2021 09:42
Last Modified: 29 Jun 2021 09:42
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/18613
PPN: 48123098X