The symposium on which this volume is based brought together approximately fifty scientists from a variety of backgrounds to discuss the rapidly emerging set of competing technologies for exploiting massive quantities of textual information. The group was challenged to explore new ways to take advantage of the power of on-line text. A billion words of text can be more generally useful than a few hundred logical rules, provided that advanced computation can extract useful information from streams of text and help users find what they need in the sea of available material. Extraction is a hot topic in natural language processing, and retrieval is a core concern of information retrieval; these two disciplines came together at the symposium and have been cross-fertilizing ever since.
The book is organized in three parts. The first group of papers describes current natural language processing techniques for interpreting and extracting information from large quantities of text. The second group presents the historical perspective, methodology, and current practice of information retrieval; the third covers both current and emerging applications of these techniques. This collection of readings should give students and scientists alike a good grasp of current techniques, as well as a practical sense of how to develop and test systems that handle large volumes of text.
Publisher: Taylor & Francis