Pyramid-based Summary Evaluation Using Abstract Meaning Representation
Josef Steinberger, Peter Krejzl and Tomáš Brychcín
Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017 (2017)
Abstract
We propose a novel metric for evaluating summary content coverage. The evaluation framework follows the Pyramid approach to measure how many summarization content units, considered important by human annotators, are contained in an automatic summary. Our approach automates the evaluation process, requiring no manual intervention on the evaluated summary side. It compares the abstract meaning representations of each content unit mention and each summary sentence. We found that the proposed metric complements the widely-used ROUGE metrics well.
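The Pyramid scoring idea the abstract builds on can be sketched as follows. This is a generic illustration of the Pyramid scheme, not the authors' implementation: each content unit (SCU) is weighted by how many human annotators marked it, and a summary's score is the weight it covers, normalised by the best weight attainable with the same number of SCUs. The SCU identifiers and weights below are invented for the example.

```python
def pyramid_score(scu_weights, covered):
    """Generic Pyramid content score (a sketch, not the paper's code).

    scu_weights: dict mapping SCU id -> weight (number of annotators
                 who marked the unit as important).
    covered:     set of SCU ids detected in the automatic summary,
                 e.g. via AMR matching against summary sentences.
    Returns the observed weight divided by the maximum weight an
    ideal summary with the same number of SCUs could achieve.
    """
    observed = sum(scu_weights[s] for s in covered)
    # Best possible: the heaviest len(covered) SCUs in the pyramid.
    best = sum(sorted(scu_weights.values(), reverse=True)[:len(covered)])
    return observed / best if best else 0.0


# Toy pyramid: SCU "a" was marked by 3 annotators, "b" by 2, "c" by 1.
weights = {"a": 3, "b": 2, "c": 1}
# A summary covering "b" and "c" gets weight 3 out of a best-case 5.
print(pyramid_score(weights, {"b", "c"}))
```

In the full method, membership in `covered` would be decided automatically by comparing the AMR graph of each SCU mention with the AMR of each summary sentence, which is what removes the manual annotation step on the summary side.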