SGPVPECA
Speech and Gesture Production in Virtually and Physically Embodied Conversational Agents
Subevent of: 14th ACM International Conference on Multimodal Interaction (ICMI-2012)
Dates: 2012/10/26 - 2012/10/26
Homepage: robotics.usc.edu/~icmi
Location: Santa Monica, California, USA

Submission deadline: 2012/06/25


ICMI-2012 Workshop on

Speech and Gesture Production in Virtually and Physically Embodied Conversational Agents


This full-day workshop aims to bring together researchers from the embodied conversational agent (ECA) and sociable robotics communities to spark discussion and collaboration between these related fields. The focus of the workshop is co-verbal behavior production, specifically synchronized speech and gesture, for both virtually and physically embodied platforms, covering both the planning and the realization of multimodal behavior. Discussion will highlight the common and distinguishing factors of implementations in each respective field. The workshop will feature a panel discussion with experts from the relevant communities and a breakout session encouraging participants to identify design and implementation principles common to both virtually and physically embodied sociable agents.


Important Dates

 * Intent to submit: Monday, June 4, 2012
 * Submission deadline: Monday, June 25, 2012
 * Notification: Monday, August 13, 2012
 * Camera-ready deadline: Monday, September 10, 2012
 * Workshop: Friday, October 26, 2012

Topics

With a focus on speech- and gesture-based multimodal human-agent interaction, the workshop invites submissions describing original work, either completed or in progress, related to one or more of the following topics:

 * Computational approaches to:
   - Content and behavior planning, e.g., rule-based or probabilistic models
   - Behavior realization for virtual agents or sociable robots
 * From ECAs to physical robots: potential and challenges of cross-platform approaches
 * Behavior specification languages and standards, e.g., FML, BML, MURML (see the sketch after this list)
 * Speech-gesture synchronization, e.g., open-loop vs. closed-loop approaches
 * Situatedness within social/environmental contexts
 * Feedback-based user adaptation
 * Cognitive modeling of gesture and speech
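
As a concrete but non-authoritative illustration of what a behavior specification for synchronized speech and gesture can look like, the short Python sketch below assembles a BML-flavored XML fragment in which a gesture stroke is aligned to a word-level synchronization point in the speech. The element and attribute names are illustrative assumptions only and should not be read as a normative instance of BML, FML, or MURML.

# Illustrative sketch only: a BML-flavored fragment pairing a speech act
# with a pointing gesture whose stroke is aligned to a word-level sync
# point. Names are assumptions for illustration, not normative BML/FML/MURML.
import xml.etree.ElementTree as ET

bml = ET.Element("bml", id="bml1")

# Speech behavior with an embedded synchronization point between two words.
speech = ET.SubElement(bml, "speech", id="s1")
text = ET.SubElement(speech, "text")
text.text = "Take the red block "
sync = ET.SubElement(text, "sync", id="tm1")  # word-level sync point
sync.tail = "over there."

# Pointing gesture whose stroke phase is timed to the sync point above.
ET.SubElement(bml, "gesture", id="g1", lexeme="POINT", stroke="s1:tm1")

print(ET.tostring(bml, encoding="unicode"))

Running the sketch simply prints the assembled XML string; in an ECA or robot pipeline, a specification of this kind would be handed to a behavior realizer, which is the cross-platform step several of the topics above address.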

Submissions

Please send your intent to submit to icmi2012ws.speech.gesture@gmail.com by Monday, June 4, 2012.

Workshop contributions should be submitted via e-mail in the ACM publication style to icmi2012ws.speech.gesture@gmail.com in one of the following formats:

 * Full paper (5-6 pages, PDF file)
 * Short position paper (2-4 pages, PDF file)
 * Demo video (1-3 minutes, common file formats, e.g., AVI or MP4) including an extended abstract (1-2 pages, PDF file)

If a submission exceeds 10 MB, it should be made available online and a download URL provided instead.

Submitted papers and abstracts should conform to the ACM publication style; templates and examples are available at http://www.acm.org/sigs/pubs/proceed/template.html.

Accepted papers will be included in the workshop proceedings in the ACM Digital Library; video submissions and accompanying abstracts will be published on the workshop website. Contributors will be invited to give either an oral or a video presentation at the workshop.

Contact

 * Workshop Questions and Submissions (icmi2012ws.speech.gesture@gmail.com)
 * Ross Mead (rossmead@usc.edu)
 * Maha Salem (msalem@cor-lab.uni-bielefeld.de)

Workshop Organizers

 * Ross Mead (University of Southern California)
 * Maha Salem (Bielefeld University)

Program Committee

 * Dan Bohus (Microsoft Research)
 * Kerstin Dautenhahn (University of Hertfordshire)
 * Jonathan Gratch (USC Institute for Creative Technologies)
 * Alexis Heloir (German Research Center for Artificial Intelligence)
 * Takayuki Kanda (ATR Intelligent Robotics and Communication Laboratories)
 * Jina Lee (Sandia National Laboratories)
 * Stacy Marsella (USC Institute for Creative Technologies)
 * Maja Matarić (University of Southern California)
 * Louis-Philippe Morency (USC Institute for Creative Technologies)
 * Bilge Mutlu (University of Wisconsin-Madison)
 * Victor Ng-Thow-Hing (Honda Research Institute USA)
 * Catherine Pelachaud (TELECOM ParisTech)
Facts about "SGPVPECA"

 * Acronym: SGPVPECA
 * Title: Speech and Gesture Production in Virtually and Physically Embodied Conversational Agents
 * IsA: Event
 * Event type: Workshop
 * Start date: October 26, 2012
 * End date: October 26, 2012
 * Location: Santa Monica, California, USA
 * Coordinates: 34° 1' 10", -118° 29' 28" (latitude 34.019469444444, longitude -118.49122777778)
 * Homepage: http://robotics.usc.edu/~icmi
 * Logo: http://robotics.usc.edu/~icmi/2012/images/MaxAsimo.png
 * Subevent of: 14th ACM International Conference on Multimodal Interaction (ICMI-2012)
 * Submission deadline: June 25, 2012