
SemEval 2019
International Workshop on Semantic Evaluation 2019
Event in series: SemEval
Dates: 2019/06/06 - 2019/06/07
Homepage: http://alt.qcri.org/semeval2019/
Location: Minneapolis, MN, USA

Important dates
Tutorials: 2019/03/14
Papers: 2019/02/28
Submissions: 2019/02/28
Notification: 2019/04/06
Camera ready due: 2019/04/20
Committees
Organizers: Jonathan May, Ekaterina Shutova, Aurelie Herbelot, Xiaodan Zhu, Marianna Apidianaki, Saif M. Mohammad


SemEval has evolved from the SensEval word sense disambiguation evaluation series. The SemEval Wikipedia entry and the ACL SemEval Wiki provide a more detailed historical overview. SemEval-2019 will be the 13th workshop on semantic evaluation.

SemEval-2019 will be held June 6-7, 2019, in Minneapolis, USA, co-located with the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019).

Important Dates

Task Proposals:

  • 26 Mar 2018: Task proposals due
  • 04 May 2018: Task proposal notifications

Setup for the Competition:

  • 20 Aug 2018: CodaLab competition website ready and made public. It should include a basic task description and mailing group information for the task. Trial data ready. Evaluation script ready for participants to download and run on the trial data.
  • 17 Sep 2018: Training data ready. Development data ready. CodaLab competition website updated to include an evaluation script uploaded as part of the competition, so that participants can upload submissions on the development set and the script immediately checks the submission format and computes results on the development set. This is also the date by which a benchmark system should be made available to participants. The organizers should also run the submission created with the benchmark system on CodaLab, so that participants can see its results on the leaderboard.

Competition and Beyond:

  • 10 Jan 2019: Evaluation start*
  • 31 Jan 2019: Evaluation end*
  • 05 Feb 2019: Results posted
  • 28 Feb 2019: System and Task description paper submissions due by 23:59 GMT -12:00
  • 14 Mar 2019: Paper reviews due (for both systems and tasks)
  • 06 Apr 2019: Author notifications
  • 20 Apr 2019: Camera ready submissions due
  • Summer 2019: SemEval 2019
  • 10 Jan to 31 Jan 2019 is the period during which the task organizers must schedule the evaluation periods for their individual tasks. Evaluation periods for individual tasks are usually 7 to 14 days, but there is no hard-and-fast rule. Contact the organizers of the tasks you are interested in for the exact time frame of their evaluations; they should tell you the date by which they will release the test data and the date by which participant submissions are to be uploaded. Note that some tasks may involve more than one sub-task, each with a separate evaluation time frame.

Organizers

  • Jonathan May, ISI, University of Southern California
  • Ekaterina Shutova, University of Cambridge
  • Aurelie Herbelot, University of Trento
  • Xiaodan Zhu, Queen's University
  • Marianna Apidianaki, LIMSI, CNRS, Université Paris-Saclay & University of Pennsylvania
  • Saif M. Mohammad, National Research Council Canada