{{Event series
|Acronym=CLEF
|Title=Conference and Labs of the Evaluation Forum
|Field=Natural language processing
|Homepage=http://www.clef-initiative.eu/
}}

Latest revision as of 12:02, 19 May 2020

CLEF
Conference and Labs of the Evaluation Forum

Events

The following events of the CLEF series are currently known in this wiki:

Event     | Ordinal | From   | To     | City         | Country     | General chair / PC chair                                                            | Acceptance rate | Attendees
CLEF 2021 | 12      | Sep 21 | Sep 24 | Bucharest    | Romania     | Bogdan Ionescu, K. Selcuk Candan, Henning Müller, Lorraine Goeuriot, Birger Larsen  |                 |
CLEF 2020 |         | Sep 22 | Sep 25 | Thessaloniki | Online      | Evangelos Kanoulas, Theodora Tsikrika, Stefanos Vrochidis, Avi Arampatzis, Hideo Joho, Christina Lioma |  |
CLEF 2019 |         | Sep 9  | Sep 12 | Lugano       | Switzerland | Fabio Crestani, Martin Braschler, Jacques Savoy, Andreas Rauber                     |                 |
CLEF 2018 |         | Sep 10 | Sep 14 | Avignon      | France      | Patrice Bellot, Chiraz Trabelsi, Josiane Mothe, Fionn Murtagh                       |                 |
CLEF 2017 |         | Sep 11 | Sep 14 | Dublin       | Ireland     | Gareth J. F. Jones, Séamus Lawless, Julio Gonzalo, Liadh Kelly                      |                 |

Number of Submitted and Accepted Papers (Main Track)

No data available.

Acceptance Rate

No data available.




The CLEF Initiative (Conference and Labs of the Evaluation Forum, formerly known as Cross-Language Evaluation Forum) is a self-organized body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual and multimodal information with various levels of structure. CLEF promotes research and development by providing an infrastructure for:

  • multilingual and multimodal system testing, tuning and evaluation;
  • investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access;
  • creation of reusable test collections for benchmarking;
  • exploration of new evaluation methodologies and innovative ways of using experimental data;
  • discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.