Weber, Lukas Max (2023)
Novel Architectures for Offloading and Accelerating Computations in Artificial Intelligence and Big Data.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00024349
Ph.D. Thesis, Primary publication, Publisher's Version
File: Dissertation_LMW.pdf (4 MB), Copyright Information: In Copyright
Item Type: | Ph.D. Thesis |
---|---|
Type of entry: | Primary publication |
Title: | Novel Architectures for Offloading and Accelerating Computations in Artificial Intelligence and Big Data |
Language: | English |
Referees: | Koch, Prof. Dr. Andreas ; Sinnen, Prof. Oliver |
Date: | 19 October 2023 |
Place of Publication: | Darmstadt |
Collation: | xxx, 231 pages |
Date of oral examination: | 15 September 2023 |
DOI: | 10.26083/tuprints-00024349 |
Abstract: | Due to the end of Moore's Law and Dennard Scaling, performance gains in general-purpose architectures have slowed significantly in recent years. While increasing the number of cores has been a viable approach for further performance increases, Amdahl's Law and its implications for parallelization also limit further gains. Consequently, research has shifted towards different approaches, including domain-specific custom architectures tailored to specific workloads. This has led to a new golden age for computer architecture, as noted in the Turing Award Lecture by Hennessy and Patterson, which has spawned several new architectures and architectural advances specifically targeted at highly relevant current workloads, including Machine Learning. This thesis introduces a hierarchy of architectural improvements ranging from minor incremental changes, such as High-Bandwidth Memory, to more complex architectural extensions that offload workloads from the general-purpose CPU to more specialized accelerators. Finally, we introduce novel architectural paradigms, namely Near-Data or In-Network Processing, as the most complex architectural improvements. This cumulative dissertation then investigates several architectural improvements to accelerate Sum-Product Networks, a novel Machine Learning approach from the class of Probabilistic Graphical Models. Furthermore, we use these improvements as case studies to discuss the impact of novel architectures, showing that both minor and major architectural changes can significantly increase performance in Machine Learning applications. In addition, this thesis presents recent work on Near-Data Processing, which introduces Smart Storage Devices as a novel architectural paradigm that is especially interesting in the context of Big Data. We discuss how Near-Data Processing can be applied to improve performance in different database settings by offloading database operations to smart storage devices. Offloading data-reductive operations, such as selections, reduces the amount of data transferred, thus improving performance and alleviating bandwidth-related bottlenecks. Using Near-Data Processing as a use case, we also discuss how Machine Learning approaches, such as Sum-Product Networks, can improve novel architectures. Specifically, we introduce an approach for offloading Cardinality Estimation using Sum-Product Networks that could enable more intelligent decision-making in smart storage devices. Overall, we show that Machine Learning can benefit from the development of novel architectures, while also showing that Machine Learning can be applied to improve the applications of novel architectures. |
Alternative Abstract: | |
Uncontrolled Keywords: | Computer Architecture, FPGA, Machine Learning, Probabilistic Models |
Status: | Publisher's Version |
URN: | urn:nbn:de:tuda-tuprints-243490 |
Classification DDC: | 000 Generalities, computers, information > 004 Computer science |
Divisions: | 20 Department of Computer Science > Embedded Systems and Applications |
Date Deposited: | 19 Oct 2023 12:10 |
Last Modified: | 20 Oct 2023 08:03 |
URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/24349 |
PPN: | 512591121 |
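
The abstract's central Near-Data Processing claim, that offloading data-reductive operations such as selections reduces the volume of data moved between storage and host, can be illustrated with a minimal sketch. The Python example below is not taken from the thesis; the row size, table contents, predicate, and the idea of running the filter "inside" the device are assumptions made purely for illustration.

```python
import random

ROW_BYTES = 64          # assumed fixed-size row, for illustration only
NUM_ROWS = 1_000_000

# Synthetic table: one integer column per row (hypothetical data).
table = [random.randint(0, 99) for _ in range(NUM_ROWS)]

def host_side_selection(rows, predicate):
    """Baseline: every row is transferred to the host, which then filters."""
    transferred = len(rows) * ROW_BYTES
    result = [r for r in rows if predicate(r)]
    return result, transferred

def near_data_selection(rows, predicate):
    """Pushdown: a (hypothetical) smart storage device evaluates the
    predicate in place, so only matching rows cross the interconnect."""
    result = [r for r in rows if predicate(r)]   # runs "inside" the device
    transferred = len(result) * ROW_BYTES
    return result, transferred

predicate = lambda value: value < 5              # roughly 5% selectivity

_, baseline_bytes = host_side_selection(table, predicate)
_, pushdown_bytes = near_data_selection(table, predicate)

print(f"host-side filter : {baseline_bytes / 2**20:.1f} MiB transferred")
print(f"near-data filter : {pushdown_bytes / 2**20:.1f} MiB transferred")
```

With the assumed 5% selectivity, the pushdown variant moves only about 5% of the bytes of the baseline, which is the bandwidth saving the abstract refers to.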
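The Cardinality Estimation use case can be sketched in a similarly hedged way: a Sum-Product Network encodes a joint distribution over table columns as a tree of weighted sum nodes (mixtures) and product nodes (independence assumptions), and an estimated cardinality is the probability mass matching a predicate multiplied by the table size. The toy network below, including its structure, weights, column names, and table size, is entirely hypothetical and only meant to show the evaluation principle, not the offloading architecture developed in the thesis.

```python
# Leaves are categorical distributions over a single column; a product node
# multiplies the probabilities of independent children; a sum node forms a
# weighted mixture over its children.

class Leaf:
    def __init__(self, column, probs):
        self.column = column
        self.probs = probs                      # value -> probability

    def prob(self, evidence):
        # Marginalize the column out if the query does not constrain it.
        if self.column not in evidence:
            return 1.0
        return self.probs.get(evidence[self.column], 0.0)

class Product:
    def __init__(self, children):
        self.children = children

    def prob(self, evidence):
        p = 1.0
        for child in self.children:
            p *= child.prob(evidence)
        return p

class Sum:
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children   # list of (weight, node)

    def prob(self, evidence):
        return sum(w * child.prob(evidence) for w, child in self.weighted_children)

# Toy SPN over two columns of an assumed 'orders' table with 1,000,000 rows.
spn = Sum([
    (0.6, Product([Leaf("country", {"DE": 0.7, "US": 0.3}),
                   Leaf("status",  {"open": 0.2, "done": 0.8})])),
    (0.4, Product([Leaf("country", {"DE": 0.1, "US": 0.9}),
                   Leaf("status",  {"open": 0.5, "done": 0.5})])),
])

TABLE_ROWS = 1_000_000
selectivity = spn.prob({"country": "DE", "status": "open"})
print(f"estimated cardinality: {selectivity * TABLE_ROWS:,.0f} rows")
```

Evaluating such a model is a small tree traversal over mostly multiply-accumulate operations, which is why, in the spirit of the abstract, it is a plausible candidate for execution on a smart storage device rather than on the host CPU.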