Buchmann, Jan; Liu, Xiao; Gurevych, Iryna (2024):
Attribute or Abstain: Large Language Models as Long Document Assistants.
In: Al-Onaizan, Yaser; Bansal, Mohit; Chen, Yun-Nung (eds.): Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Miami, Florida (12.11.2024-16.11.2024).
DOI: 10.26083/tuprints-00028921
Conference or Workshop Item, Secondary publication, Publisher's Version
File: 2024.emnlp-main.463.pdf (2 MB). License: CC BY 4.0 International - Creative Commons, Attribution.
| Item Type: | Conference or Workshop Item |
|---|---|
| Type of entry: | Secondary publication |
| Title: | Attribute or Abstain: Large Language Models as Long Document Assistants |
| Language: | English |
| Date: | 17 December 2024 |
| Place of Publication: | Darmstadt |
| Year of primary publication: | November 2024 |
| Place of primary publication: | Kerrville, TX, USA |
| Publisher: | ACL |
| Book Title: | Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing |
| Event Title: | The 2024 Conference on Empirical Methods in Natural Language Processing |
| Event Location: | Miami, Florida |
| Event Dates: | 12.11.2024-16.11.2024 |
| DOI: | 10.26083/tuprints-00028921 |
| Origin: | Secondary publication service |
| Abstract: | LLMs can help humans working with long documents, but are known to hallucinate. Attribution can increase trust in LLM responses: the LLM provides evidence that supports its response, which enhances verifiability. Existing approaches to attribution have only been evaluated in RAG settings, where the initial retrieval confounds LLM performance. This is crucially different from the long document setting, where retrieval is not needed, but could help. Thus, a long-document-specific evaluation of attribution is missing. To fill this gap, we present LAB, a benchmark of 6 diverse long document tasks with attribution, and experiments with different approaches to attribution on 5 LLMs of different sizes. We find that citation, i.e., response generation and evidence extraction in a single step, performs best for large and fine-tuned models, while additional retrieval can help for small, prompted models. We investigate whether the "Lost in the Middle" phenomenon exists for attribution, but find no evidence for it. We also find that evidence quality can predict response quality on datasets with simple responses, but not on those with complex responses, as models struggle to provide evidence for complex claims. We release code and data for further investigation. |
| Status: | Publisher's Version |
| URN: | urn:nbn:de:tuda-tuprints-289216 |
| Classification DDC: | 000 Generalities, computers, information > 004 Computer science |
| Divisions: | 20 Department of Computer Science > Ubiquitous Knowledge Processing; Central Facilities (Zentrale Einrichtungen) > hessian.AI - The Hessian Center for Artificial Intelligence |
| Date Deposited: | 17 Dec 2024 16:29 |
| Last Modified: | 17 Dec 2024 16:29 |
| URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/28921 |
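
The abstract contrasts "citation" (generating the response and extracting evidence in a single step) with pipelines that retrieve evidence first. As a rough, hypothetical illustration of that one-step setup, the Python sketch below numbers document segments, asks a model to answer with [i] citations (or abstain), and parses the cited indices from the reply. The prompt wording, function names, and the `generate` stand-in are assumptions made for illustration; they are not the LAB benchmark's actual implementation, for which see the authors' released code and data.

```python
import re
from typing import Callable

def build_prompt(question: str, segments: list[str]) -> str:
    """Number each document segment and ask for an answer with [i] citations."""
    numbered = "\n".join(f"[{i}] {seg}" for i, seg in enumerate(segments))
    return (
        "Answer the question using only the document below. After each claim, "
        "cite the supporting segment(s) as [i]. If the document does not "
        "contain the answer, reply 'I cannot answer'.\n\n"
        f"Document:\n{numbered}\n\nQuestion: {question}\nAnswer:"
    )

def answer_with_citations(
    question: str,
    segments: list[str],
    generate: Callable[[str], str],  # hypothetical stand-in for any LLM completion call
) -> tuple[str, list[int]]:
    """One-step 'citation': the response and its evidence come from one generation."""
    response = generate(build_prompt(question, segments))
    # Parse the cited segment indices out of the generated text.
    cited = sorted({int(i) for i in re.findall(r"\[(\d+)\]", response)
                    if int(i) < len(segments)})
    return response, cited

if __name__ == "__main__":
    # Toy demo with a stub "model" so the sketch runs end to end.
    segs = ["LAB covers 6 long document tasks.", "Models may abstain."]
    reply, evidence = answer_with_citations(
        "How many tasks does LAB cover?", segs, generate=lambda p: "Six tasks [0]."
    )
    print(reply, evidence)  # -> Six tasks [0]. [0]
```

Evaluating such a setup would then score the parsed evidence indices against gold evidence and the response text against gold responses separately, mirroring the abstract's distinction between evidence quality and response quality.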