Jourdan, Sara (2024)
From Assistance to Empowerment: Human-AI Collaboration in High-Risk Decision Making.
Technische Universität Darmstadt
doi: 10.26083/tuprints-00028727
Ph.D. Thesis, Primary publication, Publisher's Version
Dissertation_Jourdan_2024_Human_AI_Collaboration.pdf (3 MB)
Copyright Information: CC BY-SA 4.0 International - Creative Commons, Attribution-ShareAlike
Item Type: Ph.D. Thesis
Type of entry: Primary publication
Title: From Assistance to Empowerment: Human-AI Collaboration in High-Risk Decision Making
Language: English
Referees: Buxmann, Prof. Dr. Peter; Benlian, Prof. Dr. Alexander
Date: 28 November 2024
Place of Publication: Darmstadt
Collation: XVII, 158 pages
Date of oral examination: 11 November 2024
DOI: 10.26083/tuprints-00028727
Abstract:

The increasing availability of large amounts of valuable data and the development of ever more powerful machine learning (ML) algorithms enable ML systems to quickly and independently identify complex relationships in data. As a result, ML systems not only generate new knowledge but also offer significant potential to augment human capabilities and assist decision makers in challenging tasks. In high-risk areas such as aviation or healthcare, humans retain final decision-making responsibility but will increasingly collaborate with ML systems to improve decision-making processes. However, since ML systems rely on statistical approaches, they are susceptible to error, and the complexity of modern algorithms often renders their output opaque to humans. While initial approaches from the field of explainable artificial intelligence (XAI) aim to make the output of ML systems more understandable to humans, current research on the impact of ML systems on human decision makers is limited and lacks approaches for how humans can improve their capabilities through collaboration and thus make better decisions in the long run. To fully exploit the potential of ML systems in high-risk areas, both humans and ML systems should be able to learn from each other to enhance their performance in the context of collaboration. Furthermore, it is essential to design effective collaboration that considers the unique characteristics of ML systems and enables humans to critically assess system decisions. This dissertation comprises five published papers that use a mixed-methods study, two quantitative experiments, and two qualitative design science research (DSR) studies to explore the collaboration and bilateral influences between humans and ML systems in decision-making contexts within high-risk areas from three perspectives: (1) the human perspective, (2) the ML system perspective, and (3) the collaborative perspective.

From the human perspective, this dissertation examines how humans can learn from ML systems in collaboration to enhance their own capabilities while avoiding the risk of false learning due to erroneous ML output. In a mixed-methods study, radiologists segmented 690 brain tumors in MRI scans supported by either a high-performing or a low-performing ML system that provided either explainable or non-explainable output. The study shows that human decision makers can learn from ML systems to improve their decision performance and confidence. However, incorrect system outputs also lead to false learning and pose risks for decision makers. Explanations from the XAI field can significantly improve the learning success of radiologists and prevent false learning in the case of incorrect ML system output. In fact, some radiologists were even able to learn from mistakes made by low-performing ML systems when local explanations were provided with the system output. This study provides initial empirical insights into the human learning potential in the context of collaborating with ML systems. The finding that explainable design of ML systems enables radiologists to identify erroneous output may facilitate earlier adoption of explainable ML systems that can improve their performance over time.

The ML system perspective, in turn, examines how ML systems must be designed to respond flexibly to changes in human problem perception and in their dynamic deployment environment. This allows the systems to also learn from humans and ensures reliable system performance in dynamic collaborative environments. Through 15 qualitative interviews with data science and ML experts in the context of a DSR study, challenges for the long-term deployment of ML systems are identified. The results show that the requirements for flexible adaptation of systems in long-term use must be established in the early phases of the ML development process. Tangible design requirements and principles for ML systems that can learn from their environment and from humans are derived for all phases of the CRISP-ML(Q) process model for the development and deployment of ML models. Implementing these principles allows ML systems to maintain or even improve their performance in the long run despite such changes, thus creating the prerequisites for a sustainable lifecycle of ML systems.

Finally, the collaborative perspective examines how the collaboration between humans and ML systems should be designed to account for the unique characteristics of ML systems, such as error proneness and opacity, as well as for the cognitive biases inherent to human decision making. In this context, pilots were provided with different ML systems for the visual detection of other aircraft in the airspace during 222 recorded flight simulations. The experiment examines the influence of different ML error types and XAI approaches in collaboration and shows that an explainable output design can significantly reduce the error-induced degradation of pilot trust and performance for individual error types. However, processing explanations from the XAI field increases the pilots' mental workload. Because ML errors erode the trust of human decision makers, a DSR study is conducted to derive design principles for acceptance-promoting artifacts for the collaboration between humans and ML systems. The last part of the analysis shows how cognitive biases such as the IKEA effect cause humans to overvalue the results of collaboration with ML systems when a high level of personal effort is invested in the collaboration.

The findings provide a broad foundation for designing effective human-AI collaboration in organizations, especially in high-risk areas where humans will remain involved in decision making for the long term. Overall, the papers show how, through effectively designed collaboration, both humans and ML systems can benefit from each other in the long run and enhance their own capabilities. The explainable design of ML system outputs can serve as a catalyst for the adoption of ML systems, especially in high-risk areas. This dissertation defines novel requirements for the collaboration between humans and ML systems and provides guidance for ML developers, scientists, and organizations that aspire to involve both human decision makers and ML systems in decision-making processes and to ensure high and robust performance in the long term.
Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-287274
Classification DDC: 000 Generalities, computers, information > 004 Computer science; 300 Social sciences > 330 Economics
Divisions: 01 Department of Law and Economics > Betriebswirtschaftliche Fachgebiete > Information Systems
Date Deposited: 28 Nov 2024 14:33
Last Modified: 02 Dec 2024 08:47
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/28727
PPN: 524258317