
TREC
Text Retrieval Conference
Categories: Data Mining

Events

The following events of the series TREC are currently known in this wiki:

Event     | From   | To     | City         | Country | General chair | PC chair | Acceptance rate | Attendees
TREC 2020 | Nov 18 | Nov 20 | Gaithersburg | USA     |               |          |                 |
TREC 2016 | Nov 15 | Nov 18 | Gaithersburg | USA     |               |          |                 |
TREC 2015 | Nov 17 | Nov 20 | Gaithersburg | USA     |               |          |                 |
TREC 2014 | Nov 19 | Nov 21 | Gaithersburg | USA     |               |          |                 |
TREC 2013 | Nov 19 | Nov 22 | Gaithersburg | USA     |               |          |                 |
TREC 2012 | Nov 6  | Nov 9  | Gaithersburg | USA     |               |          |                 |
TREC 2011 | Nov 15 | Nov 18 | Gaithersburg | USA     |               |          |                 |
TREC 2010 | Nov 16 | Nov 19 | Gaithersburg | USA     |               |          |                 |
TREC 2009 | Nov 17 | Nov 20 | Gaithersburg | USA     |               |          |                 |
TREC 2008 | Nov 18 | Nov 21 | Gaithersburg | USA     |               |          |                 |
TREC 2007 | Nov 5  | Nov 9  | Gaithersburg | USA     |               |          |                 |
TREC 2006 | Nov 14 | Nov 17 | Gaithersburg | USA     |               |          |                 |
TREC 2005 | Nov 15 | Nov 18 | Gaithersburg | USA     |               |          |                 |
TREC 2004 | Nov 16 | Nov 19 | Gaithersburg | USA     |               |          |                 |
TREC 2003 | Nov 18 | Nov 21 | Gaithersburg | USA     |               |          |                 |
TREC 2002 | Nov 19 | Nov 22 | Gaithersburg | USA     |               |          |                 |
TREC 2001 | Nov 13 | Nov 16 | Gaithersburg | USA     |               |          |                 |
TREC 2000 | Nov 13 | Nov 16 | Gaithersburg | USA     |               |          |                 |
TREC 1999 | Nov 17 | Nov 19 | Gaithersburg | USA     |               |          |                 |
TREC 1998 | Nov 9  | Nov 11 | Gaithersburg | USA     |               |          |                 |
TREC 1997 | Nov 19 | Nov 21 | Gaithersburg | USA     |               |          |                 |
TREC 1996 | Nov 20 | Nov 22 | Gaithersburg | USA     |               |          |                 |
TREC 1995 | Nov 1  | Nov 3  | Gaithersburg | USA     |               |          |                 |
TREC 1994 | Nov 2  | Nov 4  | Gaithersburg | USA     |               |          |                 |
TREC 1993 | Aug 31 | Sep 2  | Gaithersburg | USA     |               |          |                 |
TREC 1992 | Nov 4  | Nov 6  | Gaithersburg | USA     |               |          |                 |


The Text REtrieval Conference (TREC), co-sponsored by the National Institute of Standards and Technology (NIST) and the U.S. Department of Defense, was started in 1992 as part of the TIPSTER Text program. Its purpose was to support research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies. In particular, the TREC workshop series has the following goals:

  • to encourage research in information retrieval based on large test collections;
  • to increase communication among industry, academia, and government by creating an open forum for the exchange of research ideas;
  • to speed the transfer of technology from research labs into commercial products by demonstrating substantial improvements in retrieval methodologies on real-world problems; and
  • to increase the availability of appropriate evaluation techniques for use by industry and academia, including development of new evaluation techniques more applicable to current systems.


TREC is overseen by a program committee consisting of representatives from government, industry, and academia. For each TREC, NIST provides a test set of documents and questions. Participants run their own retrieval systems on the data, and return to NIST a list of the retrieved top-ranked documents. NIST pools the individual results, judges the retrieved documents for correctness, and evaluates the results. The TREC cycle ends with a workshop that is a forum for participants to share their experiences.
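
The pooling step of this cycle can be sketched in a few lines. The following Python fragment is a hypothetical illustration of depth-k pooling: the top-ranked documents from each participant's run for a topic are merged into a single pool for the human assessors to judge. The run data, pool depth, and function name are invented for the example and are not part of NIST's tooling.

    # Minimal sketch of TREC-style depth-k pooling (illustrative only).
    # Each "run" is one participant's ranked list of document IDs for a topic.

    def pool_runs(runs, depth=100):
        """Merge the top-`depth` documents of every run into one judging pool."""
        pool = set()
        for ranked_docs in runs:
            pool.update(ranked_docs[:depth])  # only top-ranked documents are judged
        return pool

    # Invented ranked results from three systems for a single topic.
    runs = [
        ["doc42", "doc7", "doc99", "doc13"],
        ["doc7", "doc55", "doc42", "doc61"],
        ["doc99", "doc7", "doc3", "doc42"],
    ]

    # With depth=2, only documents in some system's top 2 enter the pool.
    print(sorted(pool_runs(runs, depth=2)))  # ['doc42', 'doc55', 'doc7', 'doc99']

Documents that fall outside every pool are treated as not relevant, which is what makes judging feasible at this scale.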

This evaluation effort has grown each year in both the number of participating systems and the number of tasks. Ninety-three groups representing 22 countries participated in TREC 2003. The TREC test collections and evaluation software are available to the retrieval research community at large, so organizations can evaluate their own retrieval systems at any time. TREC has successfully met its dual goals of improving the state of the art in information retrieval and facilitating technology transfer. Retrieval system effectiveness approximately doubled in the first six years of TREC.
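
To make concrete what such an evaluation involves, here is a simplified, hypothetical re-implementation of one standard measure, average precision; NIST's actual evaluation software is the trec_eval package, and the judgments and ranking below are invented sample data.

    # Simplified sketch of one TREC effectiveness measure: average precision (AP).
    # This only illustrates the idea; real evaluations use NIST's trec_eval.

    def average_precision(ranking, relevant):
        """AP for one topic: mean of precision@k over the ranks k of relevant hits."""
        hits, precision_sum = 0, 0.0
        for k, doc in enumerate(ranking, start=1):
            if doc in relevant:
                hits += 1
                precision_sum += hits / k  # precision at this cut-off
        return precision_sum / len(relevant) if relevant else 0.0

    # Invented relevance judgments and one system's ranked output.
    relevant = {"doc7", "doc42"}
    ranking = ["doc7", "doc3", "doc42", "doc9"]

    # Relevant documents appear at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 0.833...
    print(round(average_precision(ranking, relevant), 3))  # 0.833

Averaging this score over all topics in a test collection gives mean average precision (MAP), one of the measures commonly reported at TREC.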

TREC has also sponsored the first large-scale evaluations of the retrieval of non-English (Spanish and Chinese) documents, the retrieval of recordings of speech, and retrieval across multiple languages, and it has introduced evaluations for open-domain question answering and content-based retrieval of digital video. The TREC test collections are large enough to realistically model operational settings. Most of today's commercial search engines include technology first developed in TREC.