Hurdles to Progress in Long-form Question Answering - Project Page
This is a project page for our NAACL 2021 paper on long-form question answering. For more details, contact me at kalpesh@cs.umass.edu.
Abstract: The task of long-form question answering (LFQA) involves retrieving documents relevant to a given question and using them to generate a paragraph-length answer. While many models have recently been proposed for LFQA, we show in this paper that the task formulation raises fundamental challenges regarding evaluation and dataset creation that currently preclude meaningful modeling progress. To demonstrate these challenges, we first design a new system that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset. While our system tops the public leaderboard, a detailed analysis reveals several troubling trends: (1) our system’s generated answers are not actually grounded in the documents that it retrieves; (2) ELI5 contains significant train / test overlap, as at least 81% of ELI5 validation questions occur in paraphrased form in the training set; (3) ROUGE-L is not an informative metric of generated answer quality and can be easily gamed; and (4) human evaluations used for other text generation tasks are unreliable for LFQA. We provide suggestions to mitigate each of these issues, which we hope will lead to more rigorous LFQA research and meaningful progress in the future.
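The claim that ROUGE-L can be easily gamed is simple to try out yourself. Below is a minimal sketch (not the paper's actual evaluation script) that uses Google's `rouge-score` package to score two trivial "cheating" predictions against a gold answer; the question and answers are made-up illustrations, not real ELI5 data.

```python
# Minimal sketch of how ROUGE-L rewards trivial baselines.
# Requires: pip install rouge-score
# The example question/answers below are hypothetical, NOT from ELI5.
from rouge_score import rouge_scorer

question = "Why does bread go stale faster in the fridge than on the counter?"
gold_answer = (
    "Refrigeration speeds up the recrystallization of starch molecules in "
    "bread, a process called retrogradation, so the crumb firms up faster "
    "than it would at room temperature."
)

# Two "gamed" predictions: copy the question verbatim, or output a long,
# generic answer that says nothing about the question.
trivial_predictions = {
    "copy the question": question,
    "generic filler": (
        "It really depends on a lot of factors, but in general this happens "
        "because of the way the underlying process works over time."
    ),
}

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
for name, prediction in trivial_predictions.items():
    score = scorer.score(gold_answer, prediction)["rougeL"]
    print(f"{name}: ROUGE-L F1 = {score.fmeasure:.3f}")
```

Because ROUGE-L only measures lexical overlap with a single reference, long or generic outputs (or even the question itself) can land in the same score range as genuine model answers, which is the kind of behavior the paper analyzes in detail.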
arXiv: https://arxiv.org/abs/2103.06332
blogpost: https://ai.googleblog.com/2021/03/progress-and-challenges-in-long-form.html
slides: https://docs.google.com/presentation/d/1kkl0fGbhEqWnUDkcSbFsDWIKnojlR_HFiCvhAhXW2Uk/edit?usp=sharing
tweet: https://twitter.com/kalpeshk2011/status/1374443466537639939
video: https://drive.google.com/file/d/1OnArDF9tUsjDM29CI7seCbtnsCWnOkVg/view?usp=sharing
code: https://github.com/martiansideofthemoon/hurdles-longform-qa (supports inference from the pretrained generator / retriever models and includes evaluation + analysis scripts; see the README for details)
original Routing Transformer codebase: https://github.com/google-research/google-research/tree/master/routing_transformer
external summaries: Ruder’s newsletter, video1, video2, VentureBeat, SearchEngineJournal, MarkTechPost, TechStory