The challenging task of summary evaluation: an overview

Evaluation is crucial to the research and development of automatic summarization applications: it determines the appropriateness of a summary according to different criteria, such as the content it contains and the way it is presented. Adequate evaluation is essential to ensure that automatic summaries are useful for the context and/or application they are generated for. To this end, researchers must be aware of the available evaluation metrics, approaches, and datasets, so that they can decide which are the most suitable to use, or propose new ones that overcome the limitations of existing methods. This article presents a critical and historical analysis of evaluation metrics, methods, and datasets for automatic summarization systems, discussing the strengths and weaknesses of past evaluation efforts and identifying the major challenges that remain to be solved. It thus provides a clear, up-to-date overview of the evolution and progress of summarization evaluation, giving the reader useful insight into past, present, and emerging trends in the automatic evaluation of summaries.

Authors:
Lloret, Elena
Plaza, Laura
Aker, Ahmet
Publication type:
Journal article
Journal name:
Language Resources and Evaluation
Volume:
52
ISSN:
1574-020X (print)
1574-0218 (online)
Publisher:
Springer Netherlands
DOI: 
10.1007/s10579-017-9399-2
Publication year:
2018