
SemEval 2017
11th International Workshop on Semantic Evaluation 2017
Event in series: SemEval
Dates: 2017/08/03 - 2017/08/04
Homepage: http://alt.qcri.org/semeval2017/
Location: Vancouver, Canada

International Workshop on Semantic Evaluation 2017

SemEval (Semantic Evaluation) is an ongoing series of evaluations of computational semantic analysis systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics. SemEval evolved from the SensEval word sense disambiguation evaluation series; the SemEval Wikipedia entry and the ACL SemEval wiki provide a more detailed historical overview. SemEval-2017 was the 11th workshop on semantic evaluation and was co-located with the 55th annual meeting of the Association for Computational Linguistics (ACL). The workshop was held in Vancouver, Canada, at the Westin Bayshore Hotel on August 3rd and 4th, 2017.

Program and Proceedings

Important Dates

  • Mon 01 Aug 2016: Trial data ready
  • Mon 05 Sep 2016: Training data ready
  • Mon 05 Dec 2016: CodaLab competitions ready
  • Mon 09 Jan 2017: Evaluation start*
  • Mon 30 Jan 2017: Evaluation end*
  • Mon 06 Feb 2017: Results posted
  • Mon 27 Feb 2017: System description paper submissions due by 23:59 GMT -12:00
  • Mon 06 Mar 2017: Task description paper submissions due by 23:59 GMT -12:00
  • Mon 20 Mar 2017: Paper reviews due (for both systems and tasks)
  • Mon 03 Apr 2017: Author notifications
  • Mon 17 Apr 2017: Camera ready submissions due

Welcome to SemEval-2017

The Semantic Evaluation (SemEval) series of workshops focuses on the evaluation and comparison of systems that can analyse diverse semantic phenomena in text, with the aim of extending the current state of the art in semantic analysis and creating high-quality annotated datasets for a range of increasingly challenging problems in natural language semantics. SemEval provides an exciting forum for researchers to propose challenging research problems in semantics and to build systems and techniques to address them.

SemEval-2017 is the eleventh workshop in the series of International Workshops on Semantic Evaluation. The first three workshops, SensEval-1 (1998), SensEval-2 (2001), and SensEval-3 (2004), focused on word sense disambiguation, each time growing in the number of languages offered, in the number of tasks, and in the number of participating teams. In 2007, the workshop was renamed SemEval, and the subsequent SemEval workshops evolved to include semantic analysis tasks beyond word sense disambiguation. In 2012, SemEval turned into a yearly event. It currently runs every year, but on a two-year cycle, i.e., the tasks for SemEval-2017 were proposed in 2016.

SemEval-2017 was co-located with the [[55th annual meeting of the Association for Computational Linguistics (ACL’2017)]] in Vancouver, Canada. It included the following 12 shared tasks, organized in three tracks:

'''Semantic comparison for words and texts'''
• Task 1: Semantic Textual Similarity
• Task 2: Multi-lingual and Cross-lingual Semantic Word Similarity
• Task 3: Community Question Answering

'''Detecting sentiment, humor, and truth'''
• Task 4: Sentiment Analysis in Twitter
• Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News
• Task 6: #HashtagWars: Learning a Sense of Humor
• Task 7: Detection and Interpretation of English Puns
• Task 8: RumourEval: Determining rumour veracity and support for rumours

'''Parsing semantic structures'''
• Task 9: Abstract Meaning Representation Parsing and Generation
• Task 10: Extracting Keyphrases and Relations from Scientific Publications
• Task 11: End-User Development using Natural Language
• Task 12: Clinical TempEval

This volume contains both Task Description papers that describe each of the above tasks and System Description papers that describe the systems that participated in those tasks. A total of 12 task description papers and 169 system description papers are included in this volume.

We are grateful to all task organizers as well as the large number of participants whose enthusiastic participation has made SemEval once again a successful event. We are thankful to the task organizers who also served as area chairs, and to the task organizers and participants who reviewed paper submissions. These proceedings have greatly benefited from their detailed and thoughtful feedback. We also thank the ACL 2017 conference organizers for their support. Finally, we most gratefully acknowledge the support of our sponsor, the ACL Special Interest Group on the Lexicon (SIGLEX).

Facts about "SemEval 2017"

Acronym: SemEval 2017
End date: August 4, 2017
Event in series: SemEval
Event type: Conference
Has coordinates: 49° 15' 39", -123° 6' 50" (Latitude: 49.260872222222; Longitude: -123.11395277778)
Has location city: Vancouver
Has location country: Canada
Homepage: http://alt.qcri.org/semeval2017/
IsA: Event
Start date: August 3, 2017
Title: 11th International Workshop on Semantic Evaluation 2017
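As an aside on the coordinate facts above: the location is stored both in degree/minute/second form and as decimal degrees. A minimal Python sketch of the standard DMS-to-decimal conversion that relates the two (the function name is illustrative, not part of any OpenResearch or SemEval tooling):

def dms_to_decimal(degrees, minutes, seconds):
    """Convert degrees/minutes/seconds to decimal degrees.

    The sign of `degrees` carries over to the whole value, so
    -123° 6' 50" comes out negative (i.e., west longitude).
    """
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

# Coordinates from the fact box: 49° 15' 39", -123° 6' 50"
lat = dms_to_decimal(49, 15, 39)    # 49.260833...
lon = dms_to_decimal(-123, 6, 50)   # -123.113888...
print(lat, lon)

The small differences from the stored decimals (49.260872..., -123.113952...) arise because the displayed DMS values are rounded to whole seconds.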