
Latest revision as of 20:18, 9 January 2018

TREC
Text Retrieval Conference
Categories: Data mining
Homepage: trec.nist.gov
Bibliography: dblp.uni-trier.de/db/conf/trec/

Events

The following events of the series TREC are currently known in this wiki:

Ordinal     From    To      City          Country
TREC 2020   Nov 18  Nov 20  Gaithersburg  USA
TREC 2016   Nov 15  Nov 18  Gaithersburg  USA
TREC 2015   Nov 17  Nov 20  Gaithersburg  USA
TREC 2014   Nov 19  Nov 21  Gaithersburg  USA
TREC 2013   Nov 19  Nov 22  Gaithersburg  USA
TREC 2012   Nov 6   Nov 9   Gaithersburg  USA
TREC 2011   Nov 15  Nov 18  Gaithersburg  USA
TREC 2010   Nov 16  Nov 19  Gaithersburg  USA
TREC 2009   Nov 17  Nov 20  Gaithersburg  USA
TREC 2008   Nov 18  Nov 21  Gaithersburg  USA
TREC 2007   Nov 5   Nov 9   Gaithersburg  USA
TREC 2006   Nov 14  Nov 17  Gaithersburg  USA
TREC 2005   Nov 15  Nov 18  Gaithersburg  USA
TREC 2004   Nov 16  Nov 19  Gaithersburg  USA
TREC 2003   Nov 18  Nov 21  Gaithersburg  USA
TREC 2002   Nov 19  Nov 22  Gaithersburg  USA
TREC 2001   Nov 13  Nov 16  Gaithersburg  USA
TREC 2000   Nov 13  Nov 16  Gaithersburg  USA
TREC 1999   Nov 17  Nov 19  Gaithersburg  USA
TREC 1998   Nov 9   Nov 11  Gaithersburg  USA
TREC 1997   Nov 19  Nov 21  Gaithersburg  USA
TREC 1996   Nov 20  Nov 22  Gaithersburg  USA
TREC 1995   Nov 1   Nov 3   Gaithersburg  USA
TREC 1994   Nov 2   Nov 4   Gaithersburg  USA
TREC 1993   Aug 31  Sep 2   Gaithersburg  USA
TREC 1992   Nov 4   Nov 6   Gaithersburg  USA

(The wiki records no general chair, PC chair, acceptance rate, or attendee counts for these events.)

The Text REtrieval Conference (TREC), co-sponsored by the National Institute of Standards and Technology (NIST) and the U.S. Department of Defense, was started in 1992 as part of the TIPSTER Text program. Its purpose was to support research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies. In particular, the TREC workshop series has the following goals:

  • to encourage research in information retrieval based on large test collections;
  • to increase communication among industry, academia, and government by creating an open forum for the exchange of research ideas;
  • to speed the transfer of technology from research labs into commercial products by demonstrating substantial improvements in retrieval methodologies on real-world problems; and
  • to increase the availability of appropriate evaluation techniques for use by industry and academia, including development of new evaluation techniques more applicable to current systems.

TREC is overseen by a program committee consisting of representatives from government, industry, and academia. For each TREC, NIST provides a test set of documents and questions. Participants run their own retrieval systems on the data, and return to NIST a list of the retrieved top-ranked documents. NIST pools the individual results, judges the retrieved documents for correctness, and evaluates the results. The TREC cycle ends with a workshop that is a forum for participants to share their experiences.
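The pooling step in this cycle can be sketched in a few lines. The sketch below is an illustrative simplification, not NIST's actual tooling: the function name, the run format (a dict mapping topic IDs to ranked document lists), and the pool depth are all assumptions. The idea is depth-k pooling: the union of the top k documents from every submitted run forms the pool, and only pooled documents are shown to assessors for judging.

```python
def pool_runs(runs, depth=100):
    """Depth-k pooling: per topic, take the union of the
    top-`depth` documents from every submitted run.

    `runs` is a list of dicts mapping topic_id -> ranked list
    of document IDs (best first).
    """
    pools = {}
    for run in runs:
        for topic, ranking in run.items():
            pools.setdefault(topic, set()).update(ranking[:depth])
    return pools

# Two toy runs for a single hypothetical topic "301"
run_a = {"301": ["d1", "d2", "d3"]}
run_b = {"301": ["d3", "d4"]}

pool = pool_runs([run_a, run_b], depth=2)
# The pool for topic 301 is the union of each run's top 2 documents;
# documents outside the pool are assumed non-relevant in evaluation.
```

Pooling makes judging tractable: assessors examine only the pooled documents rather than the entire collection, at the cost of treating unpooled documents as non-relevant.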

This evaluation effort has grown in both the number of participating systems and the number of tasks each year. Ninety-three groups representing 22 countries participated in TREC 2003. The TREC test collections and evaluation software are available to the retrieval research community at large, so organizations can evaluate their own retrieval systems at any time. TREC has successfully met its dual goals of improving the state of the art in information retrieval and of facilitating technology transfer. Retrieval system effectiveness approximately doubled in the first six years of TREC.
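One standard measure computed by TREC-style evaluation software is average precision. As a rough sketch (the function name and toy data below are illustrative, not part of any official tool): for one topic, precision is computed at each rank where a relevant document appears, and the sum is divided by the total number of judged-relevant documents.

```python
def average_precision(ranking, relevant):
    """Average precision for one topic.

    `ranking` is the system's ranked list of document IDs
    (best first); `relevant` is the set of documents judged
    relevant for the topic.
    """
    if not relevant:
        return 0.0
    hits, ap = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            ap += hits / k  # precision at rank k
    return ap / len(relevant)

# Relevant documents retrieved at ranks 1 and 3:
# (1/1 + 2/3) / 2 = 5/6
score = average_precision(["d1", "d9", "d3"], {"d1", "d3"})
```

Averaging this value over all topics gives mean average precision (MAP), long the headline effectiveness number in TREC ad hoc evaluations.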

TREC has also sponsored the first large-scale evaluations of the retrieval of non-English (Spanish and Chinese) documents, retrieval of recordings of speech, and retrieval across multiple languages. TREC has also introduced evaluations for open-domain question answering and content-based retrieval of digital video. The TREC test collections are large enough so that they realistically model operational settings. Most of today's commercial search engines include technology first developed in TREC.

A TREC workshop consists of a set of tracks, areas of focus in which particular retrieval tasks are defined. The tracks serve several purposes. First, tracks act as incubators for new research areas: the first running of a track often defines what the problem really is, and a track creates the necessary infrastructure (test collections, evaluation methodology, etc.) to support research on its task. The tracks also demonstrate the robustness of core retrieval technology in that the same techniques are frequently appropriate for a variety of tasks. Finally, the tracks make TREC attractive to a broader community by providing tasks that match the research interests of more groups.

Each track has a mailing list. The primary purpose of the mailing list is to discuss the details of the track's tasks in the current TREC. However, a track mailing list also serves as a place to discuss general methodological issues related to the track's retrieval tasks. TREC track mailing lists are open to all; you need not participate in TREC to join a list. Most lists do require that you become a member of the list before you can send a message to it.

The set of tracks that will be run in a given year of TREC is determined by the TREC program committee. The committee has established a procedure for proposing new tracks.