
Towards Understanding and Arguing with Classifiers: Recent Progress

Shao, Xiaoting ; Rienstra, Tjitze ; Thimm, Matthias ; Kersting, Kristian (2024)
Towards Understanding and Arguing with Classifiers: Recent Progress.
In: Datenbank-Spektrum : Zeitschrift für Datenbanktechnologien und Information Retrieval, 2020, 20 (2)
doi: 10.26083/tuprints-00024012
Article, Secondary publication, Publisher's Version

File: s13222-020-00351-x.pdf
Copyright Information: CC BY 4.0 International - Creative Commons, Attribution.

Item Type: Article
Type of entry: Secondary publication
Title: Towards Understanding and Arguing with Classifiers: Recent Progress
Language: English
Date: 26 April 2024
Place of Publication: Darmstadt
Year of primary publication: July 2020
Place of primary publication: Berlin ; Heidelberg
Publisher: Springer
Journal or Publication Title: Datenbank-Spektrum : Zeitschrift für Datenbanktechnologien und Information Retrieval
Volume of the journal: 20
Issue Number: 2
DOI: 10.26083/tuprints-00024012
Origin: Secondary publication DeepGreen
Abstract:

Machine learning and argumentation can potentially greatly benefit from each other. Combining deep classifiers with knowledge expressed in the form of rules and constraints allows one to leverage different forms of abstraction within argumentation mining. Argumentation for machine learning can yield argumentation-based learning methods where the machine and the user argue about the learned model with the common goal of providing results of maximum utility to the user. Unfortunately, both directions are currently rather challenging. For instance, combining deep neural models with logic typically only yields deterministic results, while combining probabilistic models with logic often results in intractable inference. Therefore, we review a novel deep but tractable model for conditional probability distributions that can harness the expressive power of universal function approximators such as neural networks while still maintaining a wide range of tractable inference routines. While this new model has shown appealing performance in classification tasks, humans cannot easily understand the reasons for its decisions. Therefore, we also review our recent efforts on how to "argue" with deep models. On synthetic and real data we illustrate how "arguing" with a deep model about its explanations can actually help to revise the model, if it is right for the wrong reasons.
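The abstract references two concrete techniques, named in the keywords below, that are easy to miss in prose form. First, the "deep but tractable model" is a probabilistic circuit: a computation graph of sum (mixture) and product (factorization) nodes whose structural constraints make likelihoods and marginals computable in a single bottom-up pass. A minimal toy sketch follows (illustrative only, not the authors' model; the conditional, neural-network-parameterized version reviewed in the paper is considerably richer):

```python
# Toy probabilistic circuit over two binary variables X0, X1.
# Marginalizing a variable just sets its leaves to 1, so joint and
# marginal queries both cost one bottom-up evaluation.
def leaf(var, value, evidence):
    """Indicator leaf; evaluates to 1.0 when `var` is marginalized out."""
    if evidence[var] is None:
        return 1.0
    return 1.0 if evidence[var] == value else 0.0

def circuit(evidence, w=(0.3, 0.7)):
    """Sum node mixing two product nodes over independent leaves."""
    prod1 = leaf(0, 1, evidence) * leaf(1, 1, evidence)
    prod2 = leaf(0, 0, evidence) * leaf(1, 0, evidence)
    return w[0] * prod1 + w[1] * prod2

p_joint = circuit({0: 1, 1: 1})     # P(X0=1, X1=1) = 0.3
p_marg  = circuit({0: 1, 1: None})  # P(X0=1) = 0.3, no explicit sum over X1
```

Second, "arguing" with a deep model about its explanations requires tracing a decision back to the training data, which the "Influence Function" keyword points to. Below is a minimal sketch of the classic influence-function estimate of Koh & Liang (2017) for a plain logistic-regression model, assuming the parameters sit near a minimum of the training loss; it is not the authors' code, and all names are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def grad_loss(theta, x, y):
    # Gradient of the logistic loss -log p(y|x) at one point, y in {0, 1}.
    return (sigmoid(x @ theta) - y) * x

def hessian(theta, X, damping=1e-3):
    # Exact Hessian of the average logistic loss, damped for invertibility.
    p = sigmoid(X @ theta)
    weights = p * (1.0 - p)
    H = (X * weights[:, None]).T @ X / len(X)
    return H + damping * np.eye(X.shape[1])

def influences(theta, X_train, y_train, x_test, y_test):
    # I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z): a large positive
    # value means removing z would lower the test loss, flagging training
    # points to inspect when the model is "right for the wrong reasons".
    H_inv = np.linalg.inv(hessian(theta, X_train))
    g_test = grad_loss(theta, x_test, y_test)
    return np.array([-g_test @ H_inv @ grad_loss(theta, x, y)
                     for x, y in zip(X_train, y_train)])
```

Ranking training points by this score gives the user something concrete to argue about: contested points can be relabeled or reweighted and the model refit, which is the revision loop the abstract describes.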

Uncontrolled Keywords: Argumentation-based ML, Explainable AI, Interactive ML, Influence Function, Deep Density Estimation, Probabilistic Circuits
Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-240129
Classification DDC: 000 Generalities, computers, information > 004 Computer science
Divisions: 20 Department of Computer Science > Artificial Intelligence and Machine Learning
Central Institutions > Centre for Cognitive Science (CCS)
Date Deposited: 26 Apr 2024 12:38
Last Modified: 14 Aug 2024 09:21
SWORD Depositor: Deep Green
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/24012
PPN: 520631889