Vinçon, Tobias (2022)
Data-intensive Systems on Modern Hardware : Leveraging Near-Data Processing to Counter the Growth of Data.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00023016
Ph.D. Thesis, Primary publication, Publisher's Version
Full text: Thesis_TobiasVincon.pdf (PDF, 11 MB). Copyright information: CC BY-NC-ND 4.0 International (Creative Commons Attribution-NonCommercial-NoDerivatives).
Item Type: | Ph.D. Thesis |
---|---|
Type of entry: | Primary publication |
Title: | Data-intensive Systems on Modern Hardware : Leveraging Near-Data Processing to Counter the Growth of Data |
Language: | German |
Referees: | Koch, Prof. Dr. Andreas; Teubner, Prof. Dr. Jens; Petrov, Prof. Dr. Ilia |
Date: | 2022 |
Place of Publication: | Darmstadt |
Collation: | xxiv, 225 pages |
Date of oral examination: | 8 December 2022 |
DOI: | 10.26083/tuprints-00023016 |
Abstract: | Over the last decades, a tremendous shift toward using information technology in almost every daily routine of our lives can be perceived in our society, entailing an enormous growth of data collected day by day by Web, IoT, and AI applications. At the same time, magneto-mechanical HDDs are being replaced by semiconductor storage such as SSDs, equipped with modern non-volatile memories like Flash, which yield significantly lower access latencies and higher levels of parallelism. Likewise, the execution speed of processing units has increased considerably, as today's server architectures comprise up to several hundred independently working CPU cores along with a variety of specialized computing co-processors such as GPUs or FPGAs. However, the burden of moving the continuously growing data to the best-fitting processing unit is inherent to today's computer architecture, which is based on the data-to-code paradigm. In light of Amdahl's Law, this leads to the conclusion that even with today's powerful processing units, the speedup of systems is limited, since the fraction of parallel work is largely I/O-bound. Therefore, throughout this cumulative dissertation, we investigate the paradigm shift toward code-to-data, known as Near-Data Processing (NDP), which relieves contention on the I/O bus by offloading processing to intelligent computational storage devices, where the data is originally located. Firstly, we identify Native Storage Management as the essential foundation for NDP due to its direct control of physical storage management within the database. On this basis, the interface is extended to propagate address-mapping information and to invoke NDP functionality on the storage device. As the former can become very large, we introduce Physical Page Pointers as a novel NDP abstraction for self-contained, immutable database objects. Secondly, the on-device navigation and interpretation of data are elaborated. To this end, we introduce cross-layer Parsers and Accessors as another NDP abstraction that can be executed on the heterogeneous processing capabilities of modern computational storage devices. The compute placement and resource configuration per NDP request are identified as major performance criteria. Our experimental evaluation shows an improvement in execution time of 1.4x to 2.7x compared to traditional systems. Moreover, we propose a framework for the automatic generation of Parsers and Accessors on FPGAs to ease their application in NDP. Thirdly, we investigate the interplay of NDP and modern workload characteristics such as HTAP. We present different offloading models and focus on intervention-free execution. By propagating the Shared State, containing the latest modifications of the database, to the computational storage device, the device is able to process data with transactional guarantees. Thus, we extend the design space of HTAP with NDP by providing a solution that optimizes for performance isolation, data freshness, and reduced data transfers. In contrast to traditional systems, we observe no significant drop in performance when an OLAP query is invoked, but a steady and 30% higher throughput. Lastly, in-situ result-set management and consumption as well as NDP pipelines are proposed to achieve flexibility in processing data on heterogeneous hardware. As these produce final and intermediate results, we further investigate their management and find that on-device materialization comes at a low cost while enabling novel consumption modes and reuse semantics. Thereby, we achieve significant performance improvements of up to 400x by reusing once-materialized results multiple times. |
Alternative Abstract: | |
Uncontrolled Keywords: | Near-data processing, In-situ processing, database architecture, computational storage, growth of data, storage management, HTAP, result-set handling |
Status: | Publisher's Version |
URN: | urn:nbn:de:tuda-tuprints-230162 |
Classification DDC: | 000 Generalities, computers, information > 004 Computer science |
Divisions: | 20 Department of Computer Science > Embedded Systems and Applications |
Date Deposited: | 23 Dec 2022 13:39 |
Last Modified: | 05 Jan 2023 07:02 |
URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/23016 |
PPN: | 503271764 |
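
The abstract above argues from Amdahl's Law that faster processing units alone cannot speed up data-intensive systems whose parallel work is I/O-bound. As a minimal worked illustration of that argument (the concrete numbers below are illustrative assumptions, not figures from the thesis), the overall speedup with parallelizable fraction p and speedup s applied to that fraction is

\[
S(p, s) = \frac{1}{(1 - p) + \frac{p}{s}}
\]

If p = 0.9 of the work is parallel scan processing but it is dominated by data movement, so that additional compute only yields s ≈ 2, then S ≈ 1 / (0.1 + 0.45) ≈ 1.8x; even with unlimited compute (s → ∞) the speedup is capped at 1 / (1 - p) = 10x. Near-Data Processing targets the bottleneck itself: executing the work on the storage device reduces the data crossing the I/O bus instead of merely adding compute behind it.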
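To make the code-to-data flow described in the abstract more concrete, here is a deliberately simplified, hypothetical sketch in Python: the host ships an NDP request that names a Parser and an Accessor, a compute placement, and the Shared State of recent modifications; the device then filters the data in place and can materialize the result set in situ for later reuse. None of the class, field, or function names below come from the thesis or from any real device API; they are assumptions for illustration only.

```python
# Hypothetical sketch of an NDP (code-to-data) request flow. All names are
# illustrative assumptions; they do not reflect the thesis' actual interfaces.
from dataclasses import dataclass, field


@dataclass
class NdpRequest:
    parser: str        # on-device format interpretation, e.g. "heap_page_parser"
    accessor: str      # on-device navigation/filter logic, e.g. "scan_ge"
    placement: str     # compute placement on the device, e.g. "arm_core" or "fpga"
    predicate: int     # illustrative filter argument
    shared_state: dict = field(default_factory=dict)  # latest host-side modifications


class ComputationalStorageDevice:
    """Toy device: stores 'pages' of integers and executes NDP requests in place."""

    def __init__(self, pages):
        self.pages = pages          # raw on-device data
        self.materialized = {}      # in-situ result sets, keyed for reuse

    def execute(self, req: NdpRequest, reuse_key=None):
        # Reuse a previously materialized result set instead of recomputing it.
        if reuse_key in self.materialized:
            return self.materialized[reuse_key]
        # Overlay the freshest host-side modifications (Shared State) so the
        # on-device execution sees transactionally consistent data.
        visible, rid = [], 0
        for page in self.pages:
            for value in page:
                visible.append(req.shared_state.get(rid, value))
                rid += 1
        # "Parser + Accessor": interpret and filter on the device, so only the
        # small result set crosses the I/O bus instead of whole pages.
        result = [v for v in visible if v >= req.predicate]
        if reuse_key is not None:
            self.materialized[reuse_key] = result  # in-situ materialization
        return result


# Usage: the host ships the request (code-to-data) instead of pulling raw pages.
device = ComputationalStorageDevice(pages=[[1, 5, 9], [3, 7, 11]])
req = NdpRequest(parser="heap_page_parser", accessor="scan_ge",
                 placement="fpga", predicate=7, shared_state={0: 8})
print(device.execute(req, reuse_key="q1"))  # [8, 9, 7, 11], computed on the device
print(device.execute(req, reuse_key="q1"))  # same list, served from the materialized result
```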