SRSL 2009

{{Event
| Acronym = SRSL 2009
| Title = 2nd International Workshop on the Semantic Representation of Spoken Language
| Type = Workshop
| Superevent = EACL 2009
| Series =
| Field = Natural language processing
| Homepage = www.ofai.at/~manuel.alcantara/SRSL2009
| Start date = Mar 30, 2009
| End date = Mar 31, 2009
| City = Athens
| State =
| Country = Greece
| Abstract deadline =
| Submission deadline = Dec 19, 2008
| Notification =
| Camera ready =
}}

<pre>
 
  2nd International Workshop on the Semantic Representation of Spoken Language

                                      EACL2009

                    Athens, Greece; March 30 or 31, 2009

              http://www.ofai.at/~manuel.alcantara/SRSL2009

                 Submission Deadline: December 19, 2008

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The aim of the workshop is to bring together researchers interested in
the semantic representation of spoken corpora, especially spontaneous
speech. On the one hand, the semantic gap between the contents conveyed
by natural languages and their formal representations is a pressing
problem in tasks such as information extraction and corpus annotation.
The current state of the art offers solutions from very different
backgrounds and perspectives, but important and complex issues remain,
such as the accurate segmentation of speech into semantic units. The
discussion of these issues is one of the main motivations for this
workshop. On the other hand, spoken language remains an open issue in
computational linguistics and artificial intelligence, both of which
have traditionally focused on written language, even though semantic
processing of speech is necessary for understanding both natural and
human-machine interaction. Finally, the problems encountered when
trying to give spontaneous speech a linguistic structure have led to
work focused on its semantic representation. In-depth research on the
semantic representation of speech can provide a suitable basis for
further analysis of related linguistic levels, such as prosody and
pragmatics.

Papers are invited on substantial, original, and unpublished research
concerning the semantic representation of spoken language. Possible
topics include:

  * Corpus annotation: structures (frame-banks, proposition banks,
    etc.) and concepts (ontologies, named entities, etc.).

  * Content identification and segmentation in spontaneous speech.

  * Semantic interpretation in dialogues.

  * Dialogue and discourse structures.

  * Topic detection and tracking.

  * Multimodal representations including speech.

  * Natural language understanding and reasoning in spoken dialogue
    systems.

  * Speech in embedded systems.

  * Project descriptions concerning applications of semantic
    representations of spoken corpora.

  * Standardization work.

  * Interoperability and comparison of spoken and written corpora.

Submissions must be written in English and should follow the EACL2009
style format. They can be full papers (6-8 pages including references)
or short papers (3-4 pages including references). As reviewing will be
blind, papers should not include the authors' names and affiliations.
Furthermore, self-references that reveal the authors' identities should
be avoided. Submissions that do not conform to this style will be
rejected.

The only accepted format for submitted papers is PDF. Please remember
that the paper submission deadline is December 19, 2008. Papers that
are being submitted in parallel to other conferences or workshops must
indicate this on the title page, as must papers that contain
significant overlap with previously published work. Accepted papers
will appear in the proceedings of SRSL 2009.

Both full and short papers will be presented orally at the workshop,
which takes place within the framework of the EACL2009 conference in
Athens, Greece. For this reason, at least one author of each accepted
paper has to register for the workshop.

For further information, please visit
http://www.ofai.at/~manuel.alcantara/SRSL2009

Scientific Committee (already confirmed):

Christina Alexandris (National University of Athens)
Enrique Alfonseca (Google)
Paul Buitelaar (DFKI GmbH)
Harry Bunt (Universiteit van Tilburg)
Nicoletta Calzolari (ILC-CNR)
Anette Frank (Universität Heidelberg)
Johannes Matiasek (OFAI)
Massimo Moneglia (Università degli Studi di Firenze)
Juan Carlos Moreno Cabrera (Universidad Autónoma de Madrid)
Antonio Moreno Sandoval (Universidad Autónoma de Madrid)
Gael Richard (École Nationale Supérieure des Télécommunications, GET-ENST)
Carlos Subirats (Universitat Autònoma de Barcelona)
Isabel Trancoso (Universidade Técnica de Lisboa)

Organizing Committee:

Manuel Alcantara-Pla (OFAI, Vienna). Contact: manuel(dot)alcantara(at)uam(dot)es

Thierry Declerck (DFKI GmbH, Saarbruecken). Contact: declerck(at)dfki(dot)de
</pre>
This CfP was obtained from [http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=3841&copyownerid=2 WikiCFP]

[[Category:Speech recognition]]
[[Category:Information science]]
 
