
Information Retrieval Evaluation in a Changing World: Lessons Learned from 20 Years of CLEF: The Information Retrieval Series, Book 41

Edited by Nicola Ferro, Carol Peters
English, Paperback – 26 Aug 2020
This volume celebrates the twentieth anniversary of CLEF (the Cross-Language Evaluation Forum for its first ten years, and the Conference and Labs of the Evaluation Forum since) and traces its evolution over these first two decades. CLEF’s main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management, in order to stimulate advances in the field of IR system experimentation and evaluation.

The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing “what has been achieved”, but above all on “what has been learnt”. The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings.

Mainly intended for researchers in academia and industry, the book also offers useful insights and tips for practitioners working on the evaluation and performance of IR tools, as well as for graduate students specializing in information retrieval.

All formats and editions

Format                                              Price          Delivery
Paperback (1)                                       759.60 lei     6-8 weeks
  Springer International Publishing – 26 Aug 2020   759.60 lei     6-8 weeks
Hardback (1)                                        1,044.91 lei   6-8 weeks
  Springer International Publishing – 26 Aug 2019   1,044.91 lei   6-8 weeks

From the series The Information Retrieval Series

Price: 759.60 lei

Old price: 949.50 lei
-20% New

Express Points: 1139

Estimated price in other currencies:
145.39€ 151.12$ 120.43£
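
As a quick check of the figures above, the sale price is simply the old price with the 20% discount applied. The short Python sketch below is purely illustrative (the variable names are not part of the listing) and reproduces the listed amount.

    # Illustrative check of the listed discount, using the prices shown above (in lei).
    old_price_lei = 949.50          # "Old price"
    discount = 0.20                 # "-20%"
    new_price_lei = round(old_price_lei * (1 - discount), 2)
    print(new_price_lei)            # 759.6 -> matches the listed price of 759.60 lei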

Print-on-demand title

Economy delivery 05-19 February 25

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9783030229504
ISBN-10: 3030229505
Pages: 595
Illustrations: XXII, 595 p. 89 illus., 75 illus. in color.
Dimensions: 155 x 235 mm
Weight: 0.86 kg
Edition: 1st ed. 2019
Publisher: Springer International Publishing
Collection: Springer
Series: The Information Retrieval Series

Place of publication: Cham, Switzerland

Contents

From Multilingual to Multimodal: The Evolution of CLEF over Two Decades
The Evolution of Cranfield
How to Run an Evaluation Task
An Innovative Approach to Data Management and Curation of Experimental Data Generated through IR Test Collections
TIRA Integrated Research Architecture
EaaS: Evaluation-as-a-Service and Experiences from the VISCERAL Project
Lessons Learnt from Experiments on the Ad-Hoc Multilingual Test Collections at CLEF
The Challenges of Language Variation in Information Access
Multi-lingual Retrieval of Pictures in ImageCLEF
Experiences From the ImageCLEF Medical Retrieval and Annotation Tasks
Automatic Image Annotation at ImageCLEF
Image Retrieval Evaluation in Specific Domains
’Bout Sound and Vision: CLEF beyond Text Retrieval Tasks
The Scholarly Impact and Strategic Intent of CLEF eHealth Labs from 2012-2017
Multilingual Patent Text Retrieval Evaluation: CLEF-IP
Biodiversity Information Retrieval through Large Scale Content-Based Identification: A Long-Term Evaluation
From XML Retrieval to Semantic Search and Beyond
Results and Lessons of the Question Answering Track at CLEF
Evolution of the PAN Lab on Digital Text Forensics
RepLab: an Evaluation Campaign for Online Monitoring Systems
Continuous Evaluation of Large-scale Information Access Systems: A Case for Living Labs
The Scholarly Impact of CLEF 2010-2017
Reproducibility and Validity in CLEF
Visual Analytics and IR Experimental Evaluation
Adopting Systematic Evaluation Benchmarks in Operational Settings

Biographical note

Nicola Ferro is an Associate Professor of Computer Science at the University of Padua, Italy. His research interests include information retrieval, its experimental evaluation, multilingual information access and digital libraries. He is the coordinator of the CLEF evaluation initiative, which includes more than 200 research groups around the globe involved in large-scale IR evaluation activities. He was also the coordinator of the EU Seventh Framework Programme Network of Excellence PROMISE on information retrieval evaluation.

Carol Peters, now a Research Associate, was a Researcher at the Italian National Research Council’s “Istituto di Scienza e Tecnologie dell'Informazione”. Her main research activities focused on the development of multilingual access mechanisms for digital libraries and evaluation methodologies for cross-language information retrieval systems. She led the EU Sixth Framework MultiMatch project and coordinated the Cross-Language Evaluation Forum (CLEF) during its first ten years of activity. In 2009, in recognition of her work for CLEF, she was awarded the Tony Kent Strix Award.

Features

Provides an overview of 20 years of research in evaluating information retrieval systems
Presents summaries of the most important experimental results and findings concerning information retrieval in various types of media
Highlights the lessons learnt over the years, providing readers with useful guidelines on the best approaches and techniques