Records found: 80

Author

Soricut, R.; Marcu, D.
Title
Abstractive headline generation using WIDL-expressions
Source
Information Processing & Management, Nov. 2007, Vol. 43 Issue 6, p1536-1548
Support
Online (04/2008) (UGR only)
Abstract
We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven by both statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows similar performance in quality with a state-of-the-art, extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation. (DB)
Keywords
Automatic abstracts; WIDL expressions
Assessment

Author

SPARCK JONES, Karen
Title
Automatic summarising: factors and directions
Support
PDF
Abstract
This position paper suggests that progress with automatic summarising demands a better research methodology and a carefully focussed research strategy. In order to develop effective procedures it is necessary to identify and respond to the context factors, i.e. input, purpose, and output factors, that bear on summarising and its evaluation. The paper analyses and illustrates these factors and their implications for evaluation. It then argues that this analysis, together with the state of the art and the intrinsic difficulty of summarising, imply a nearer-term strategy concentrating on shallow, but not surface, text analysis and on indicative summarising. This is illustrated with current work, from which a potentially productive research programme can be developed. (AU)
Keywords
automatic summarising; context factors; input factors; output factors; evaluation
Assessment

Author

SPARCK JONES, Karen
Title
Automatic summarising: The state of the art
Source
Information Processing & Management, 2007, Vol. 43 Issue 6, p1449-1481
Support
Online (04/2008) (UGR only)
Abstract
This paper reviews research on automatic summarising over the last decade. This work has grown, stimulated by technology and by evaluation programmes. The paper uses several frameworks to organise the review: for summarising itself, for the factors affecting summarising, for systems, and for evaluation. The review examines the evaluation strategies applied to summarising, the issues they raise, and the major programmes. It considers the input, purpose and output factors investigated in recent summarising research, and discusses the classes of strategy, extractive and non-extractive, that have been explored, illustrating the range of systems built. The conclusions drawn are that automatic summarisation has made valuable progress, with useful applications, better evaluation, and more task understanding. But summarising systems are still poorly motivated in relation to the factors affecting them, and evaluation needs taking much further to engage with the purposes summaries are intended to serve and the contexts in which they are used. (DB)
Keywords
Automatic summaries
Assessment

Author

Vanderwende, Lucy; Suzuki, Hisami; Brockett, Chris; Nenkova, Ani
Title
Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion
Source
Information Processing & Management, Nov. 2007, Vol. 43 Issue 6, p1606-1618
Support
Online (04/2008) (UGR only)
Abstract
In recent years, there has been increased interest in topic-focused multi-document summarization. In this task, automatic summaries are produced in response to a specific information request, or topic, stated by the user. The system we have designed to accomplish this task comprises four main components: a generic extractive summarization system, a topic-focusing component, sentence simplification, and lexical expansion of topic words. This paper details each of these components, together with experiments designed to quantify their individual contributions. We include an analysis of our results on two large datasets commonly used to evaluate task-focused summarization, the DUC2005 and DUC2006 datasets, using automatic metrics. Additionally, we include an analysis of our results on the DUC2006 task according to human evaluation metrics. In the human evaluation of system summaries compared to human summaries, i.e., the Pyramid method, our system ranked first out of 22 systems in terms of overall mean Pyramid score; and in the human evaluation of summary responsiveness to the topic, our system ranked third out of 35 systems. (DB)
Keywords
Automatic abstracts; lexical expansion; natural language processing
Assessment
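The record above builds on SumBasic, a frequency-based extractive baseline. As an illustration only (this is not the authors' code), the commonly described SumBasic procedure can be sketched as follows: score each sentence by the average probability of its words, pick the best sentence, then square the probabilities of the words it covers to discourage redundancy, and repeat. The function name and tokenization below are illustrative choices.

```python
import re
from collections import Counter

def sumbasic(sentences, max_sentences=3):
    """Illustrative SumBasic sketch: frequency-based extractive
    summarization with a redundancy discount."""
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    counts = Counter(w for toks in tokenized for w in toks)
    total = sum(counts.values())
    # initial unigram probabilities from the input documents
    prob = {w: c / total for w, c in counts.items()}

    summary = []
    candidates = list(range(len(sentences)))
    while candidates and len(summary) < max_sentences:
        # score each remaining sentence by mean word probability
        def score(i):
            toks = tokenized[i]
            return sum(prob[w] for w in toks) / len(toks) if toks else 0.0
        best = max(candidates, key=score)
        summary.append(sentences[best])
        candidates.remove(best)
        # discount words already covered, reducing redundancy
        for w in tokenized[best]:
            prob[w] *= prob[w]
    return summary
```

The paper's system adds topic focusing, sentence simplification, and lexical expansion on top of this kind of generic extractive core.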
Showing page 3 of 20


Director: © Maria Pinto (UGR)

Creation 31/07/2005 | Update 11/04/2011