Multilingual and Multimodal Information Access Evaluation: International Conference of the Cross-Language Evaluation Forum, CLEF 2010, Padua, Italy, September 20-23, 2010, Proceedings
Paperback (2010)

$54.99 


Overview

In its first ten years of activities (2000-2009), the Cross-Language Evaluation Forum (CLEF) played a leading role in stimulating investigation and research in a wide range of key areas in the information retrieval domain, such as cross-language question answering, image and geographic information retrieval, interactive retrieval, and many more. It also promoted the study and implementation of appropriate evaluation methodologies for these diverse types of tasks and media. As a result, CLEF has been extremely successful in building a wide, strong, and multidisciplinary research community, which covers and spans the different areas of expertise needed to deal with the spread of CLEF tracks and tasks. This constantly growing and almost completely voluntary community has dedicated an incredible amount of effort to making CLEF happen and is at the core of the CLEF achievements.

CLEF 2010 represented a radical innovation of the “classic CLEF” format and an experiment aimed at understanding how “next generation” evaluation campaigns might be structured. We had to face the problem of how to innovate CLEF while still preserving its traditional core business, namely the benchmarking activities carried out in the various tracks and tasks. The consensus, after lively and community-wide discussions, was to make CLEF an independent four-day event, no longer organized in conjunction with the European Conference on Research and Advanced Technology for Digital Libraries (ECDL), where CLEF had been running as a two-and-a-half-day workshop. CLEF 2010 thus consisted of two main parts: a peer-reviewed conference – the first two days – and a series of laboratories and workshops – the second two days.

Product Details

ISBN-13: 9783642159978
Publisher: Springer Berlin Heidelberg
Publication date: 09/13/2010
Series: Lecture Notes in Computer Science, #6360
Edition description: 2010
Pages: 145
Product dimensions: 6.10(w) x 9.10(h) x 0.20(d)

Table of Contents

Keynote Addresses

IR between Science and Engineering, and the Role of Experimentation (Norbert Fuhr) 1

Retrieval Evaluation in Practice (Ricardo Baeza-Yates) 2

Resources, Tools, and Methods

A Dictionary- and Corpus-Independent Statistical Lemmatizer for Information Retrieval in Low Resource Languages (Aki Loponen, Kalervo Järvelin) 3

A New Approach for Cross-Language Plagiarism Analysis (Rafael Corezola Pereira, Viviane P. Moreira, Renata Galante) 15

Creating a Persian-English Comparable Corpus (Homa Baradaran Hashemi, Azadeh Shakery, Heshaam Faili) 27

Experimental Collections and Datasets (1)

Validating Query Simulators: An Experiment Using Commercial Searches and Purchases (Bouke Huurnink, Katja Hofmann, Maarten de Rijke, Marc Bron) 40

Using Parallel Corpora for Multilingual (Multi-document) Summarisation Evaluation (Marco Turchi, Josef Steinberger, Mijail Kabadjov, Ralf Steinberger) 52

Experimental Collections and Datasets (2)

MapReduce for Information Retrieval Evaluation: "Let's Quickly Test This on 12 TB of Data" (Djoerd Hiemstra, Claudia Hauff) 64

Which Log for Which Information? Gathering Multilingual Data from Different Log File Types (Maria Gäde, Vivien Petras, Juliane Stiller) 70

Evaluation Methodologies and Metrics (1)

Examining the Robustness of Evaluation Metrics for Patent Retrieval with Incomplete Relevance Judgements (Walid Magdy, Gareth J.F. Jones) 82

On the Evaluation of Entity Profiles (Maarten de Rijke, Krisztian Balog, Toine Bogers, Antal van den Bosch) 94

Evaluation Methodologies and Metrics (2)

Evaluating Information Extraction (Andrea Esuli, Fabrizio Sebastiani) 100

Tie-Breaking Bias: Effect of an Uncontrolled Parameter on Information Retrieval Evaluation (Guillaume Cabanac, Gilles Hubert, Mohand Boughanem, Claude Chrisment) 112

Automated Component-Level Evaluation: Present and Future (Allan Hanbury, Henning Müller) 124

Panels

The Four Ladies of Experimental Evaluation (Donna Harman, Noriko Kando, Mounia Lalmas, Carol Peters) 136

A PROMISE for Experimental Evaluation (Martin Braschler, Khalid Choukri, Nicola Ferro, Allan Hanbury, Jussi Karlgren, Henning Müller, Vivien Petras, Emanuele Pianta, Maarten de Rijke, Giuseppe Santucci) 140

Author Index 145
