{{Event
|Acronym=SemEval 2017
|Title=11th International Workshop on Semantic Evaluation 2017
|Series=SemEval
|Type=Conference
|Field=Natural language processing
|Start date=2017/08/03
|End date=2017/08/04
|Submission deadline=2017/02/27
|Homepage=http://alt.qcri.org/semeval2017/
|City=Vancouver
|Country=Canada
|Notification=2017/04/03
|Camera ready=2017/04/17
|Has host organization=Association for Computational Linguistics
}}

'''International Workshop on Semantic Evaluation 2017'''

''SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics. SemEval has evolved from the SensEval word sense disambiguation evaluation series. The SemEval Wikipedia entry and the ACL SemEval wiki provide a more detailed historical overview. SemEval-2017 was the 11th workshop on semantic evaluation and was co-located with the '''55th annual meeting of the Association for Computational Linguistics (ACL)'''. SemEval-2017 was held in Vancouver, Canada, at the Westin Bayshore Hotel on August 3rd and 4th, 2017.''

'''Program and Proceedings'''

* The program is available here: http://aclweb.org/anthology/S17-2000#page=21
* The proceedings are available here: http://aclanthology.info/events/semeval-2017#S17-2

==Important Dates==

* Mon 01 Aug 2016: Trial data ready
* Mon 05 Sep 2016: Training data ready
* Mon 05 Dec 2016: CodaLab competitions ready
* Mon 09 Jan 2017: Evaluation start*
* Mon 30 Jan 2017: Evaluation end*
* Mon 06 Feb 2017: Results posted
* Mon 27 Feb 2017: System description paper submissions due by 23:59 GMT -12:00
* Mon 06 Mar 2017: Task description paper submissions due by 23:59 GMT -12:00
* Mon 20 Mar 2017: Paper reviews due (for both systems and tasks)
* Mon 03 Apr 2017: Author notifications
* Mon 17 Apr 2017: Camera-ready submissions due

==Welcome to SemEval-2017==

''The Semantic Evaluation (SemEval) series of workshops focuses on the evaluation and comparison of systems that can analyse diverse semantic phenomena in text, with the aim of extending the current state of the art in semantic analysis and creating high-quality annotated datasets for a range of increasingly challenging problems in natural language semantics. SemEval provides an exciting forum for researchers to propose challenging research problems in semantics and to build systems and techniques to address them.''

''SemEval-2017 is the eleventh workshop in the series of International Workshops on Semantic Evaluation. The first three workshops, SensEval-1 (1998), SensEval-2 (2001), and SensEval-3 (2004), focused on word sense disambiguation, each time growing in the number of languages offered, in the number of tasks, and in the number of participating teams. In 2007, the workshop was renamed SemEval, and the subsequent SemEval workshops evolved to include semantic analysis tasks beyond word sense disambiguation. In 2012, SemEval became a yearly event; it currently runs every year, but on a two-year cycle, i.e., the tasks for SemEval-2017 were proposed in 2016.''

''SemEval-2017 was co-located with the 55th annual meeting of the Association for Computational Linguistics ([[ACL 2017]]) in Vancouver, Canada. It included the following 12 shared tasks, organized in three tracks:''

'''Semantic comparison for words and texts'''
* Task 1: Semantic Textual Similarity
* Task 2: Multi-lingual and Cross-lingual Semantic Word Similarity
* Task 3: Community Question Answering

'''Detecting sentiment, humor, and truth'''
* Task 4: Sentiment Analysis in Twitter
* Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News
* Task 6: #HashtagWars: Learning a Sense of Humor
* Task 7: Detection and Interpretation of English Puns
* Task 8: RumourEval: Determining rumour veracity and support for rumours

'''Parsing semantic structures'''
* Task 9: Abstract Meaning Representation Parsing and Generation
* Task 10: Extracting Keyphrases and Relations from Scientific Publications
* Task 11: End-User Development using Natural Language
* Task 12: Clinical TempEval

==Organizers==

* Steven Bethard, University of Arizona
* Marine Carpuat, University of Maryland
* Marianna Apidianaki, LIMSI, CNRS, University Paris-Saclay & University of Pennsylvania
* Saif M. Mohammad, National Research Council Canada
* Daniel Cer, Google
* David Jurgens, Stanford University
