Number of found records: 80

Author

MANI, Inderjeet; HOUSE, David; KLEIN, Gary [et al.]
Title
TIPSTER Text Summarization Evaluation Conference (SUMMAC)
Support
Online (11/05/2005)
Abstract
Analysis of the evaluation methods for automated summarization based on natural language processing techniques, which are intrinsic to the evaluation of summarization or of any other NLP technology.
Keywords
automated summarization; evaluation methods
Assessment

Author

MARCU, Daniel
Title
Discourse structure, rhetorical parsing and text summarization
Support
Online (11/05/2005)
Abstract
Compares algorithms for determining the validity of textual structure. Two applications are presented: one is a discourse-based abstracting system, and the other a text-planning algorithm.
Keywords
text structure
Assessment

Author

MAYBURY, Mark T.
Title
Automated event summarization techniques
Support
Online (11/05/2005)
Abstract
Automatically summarizing events from data or knowledge bases is a desirable capability for a number of application areas, including report generation from databases (e.g., weather, financial, medical) and simulations (e.g., military, manufacturing, economic). While there have been several efforts to generate narratives from underlying structures, few have focused on event summarization. This extended abstract outlines tactics for selecting and presenting summaries of events. We discuss these tactics in the context of a system which generates summaries of events from an object-oriented battle simulator. (AU)
Keywords
Automated Summarization; Report Generation
Assessment

Author

MITRA, Mandar; SINGHAL, Amit; BUCKLEY, Chris
Title
Automatic text summarization by paragraph extraction
Support
PDF
Abstract
Over the years, the amount of information available electronically has grown manifold. There is an increasing demand for automatic methods of text summarization. Domain-independent techniques for automatic summarization by paragraph extraction have been proposed in [12,15]. In this study, we attempt to evaluate these methods by comparing the automatically generated extracts to ones generated by humans. Given that extracts generated by two humans for the same article are surprisingly dissimilar, the performance of the automatic methods is satisfactory. Even though this observation calls into question the feasibility of producing perfect summaries by extraction, given the unavailability of other effective domain-independent summarization tools, we believe that this is a reasonable, though imperfect, alternative. (AU)
Keywords
automatic methods; text summarization; evaluation; extracts
Assessment

Director: © Maria Pinto (UGR)

Creation 31/07/2005 | Update 11/04/2011