Furletov, Yury (2022)
Sound Processing for Autonomous Driving.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00022090
Ph.D. Thesis, Primary publication, Publisher's Version
Furletov_Thesis_RMR.pdf (12 MB). Copyright information: CC BY-SA 4.0 International (Creative Commons Attribution-ShareAlike).
Item Type: Ph.D. Thesis
Type of entry: Primary publication
Title: Sound Processing for Autonomous Driving
Language: English
Referees: Adamy, Prof. Dr. Jürgen; Willert, Prof. Dr. Volker; Hohmann, Prof. Dr. Sören
Date: 2022
Place of Publication: Darmstadt
Collation: XIV, 133 pages
Date of oral examination: 22 June 2022
DOI: 10.26083/tuprints-00022090
Abstract: A variety of intelligent systems for autonomous driving have been developed and have already demonstrated a high level of capability. One prerequisite for autonomous driving is an accurate and reliable representation of the environment around the vehicle. Current systems rely on cameras, RADAR, and LiDAR to capture the visual environment and to locate and track other traffic participants. Human drivers, however, also use hearing and draw on a great deal of auditory information to understand the environment in addition to visual cues. In this thesis, we present a sound signal processing system for auditory-based environment representation. Sound propagation is less affected by occlusion than the modalities used by other sensor types and, in some situations, is less sensitive to weather conditions such as snow, ice, fog, or rain. Various audio processing algorithms provide detection and classification of audio signals specific to certain types of vehicles, as well as their localization. First, ambient sound is classified into fourteen major categories covering traffic objects and the actions they perform; in addition, three specific types of emergency vehicle sirens are classified. Second, each object is localized using a combined localization algorithm based on time difference of arrival and amplitude. The system is evaluated on real data, with a focus on reliable detection and accurate localization of emergency vehicles. Third, the detected sound source can be visualized in the image from the autonomous vehicle's camera system; for this purpose, a method for camera-to-microphone calibration has been developed. The presented approaches and methods have great potential to increase the accuracy of environment perception and, consequently, to improve the reliability and safety of autonomous driving systems in general.
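As an illustration of the localization step mentioned in the abstract, the following is a minimal sketch, not the thesis implementation, of estimating a time difference of arrival (TDOA) for a single two-microphone pair with GCC-PHAT and converting it to a far-field angle of arrival. The sampling rate, microphone spacing, and synthetic test signal are assumed values chosen for the demo.

```python
# Minimal illustrative sketch: single-pair TDOA estimation via GCC-PHAT.
# All numeric parameters below are assumptions for the demo, not values from the thesis.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C

def gcc_phat_delay(x, y, fs):
    """Return the delay (seconds) of signal x relative to signal y via GCC-PHAT."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                     # PHAT weighting: keep only the phase
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_from_tdoa(tau, mic_distance):
    """Angle of arrival (degrees from broadside) for a far-field source and a two-mic pair."""
    arg = np.clip(SPEED_OF_SOUND * tau / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(arg))

if __name__ == "__main__":
    fs = 48_000                                # assumed sampling rate
    d = 0.2                                    # assumed microphone spacing in metres
    t = np.arange(0, 0.1, 1.0 / fs)
    siren = np.sin(2 * np.pi * 440 * t) * np.sin(2 * np.pi * 5 * t)  # toy test signal
    mic1 = siren                               # reference microphone
    mic2 = np.roll(siren, 10)                  # same signal, delayed by 10 samples
    tau = gcc_phat_delay(mic2, mic1, fs)       # delay of mic 2 relative to mic 1
    print(f"estimated TDOA: {tau * 1e6:.1f} us, angle: {doa_from_tdoa(tau, d):.1f} deg")
```

The thesis combines time-difference and amplitude cues over a full microphone array; this sketch covers only the single-pair TDOA part under a far-field assumption.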
Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-220908
Classification DDC: 600 Technology, medicine, applied sciences > 600 Technology
Divisions: 18 Department of Electrical Engineering and Information Technology > Institut für Automatisierungstechnik und Mechatronik > Control Methods and Intelligent Systems
Date Deposited: 20 Oct 2022 12:03
Last Modified: 21 Oct 2022 13:05
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/22090
PPN: 500654468