
Speech and Language Processing Technical Committee Newsletter

October 2011

Welcome to the Autumn 2011 edition of the IEEE Speech and Language Processing Technical Committee's Newsletter.

In this issue we are pleased to provide another installment of brief articles representing a diversity of views and backgrounds. This issue includes 9 articles from 6 guest contributors, and our own staff reporters and editors.

We believe the newsletter is an ideal forum for updates, reports, announcements and editorials which don't fit well with traditional journals. We welcome your contributions, as well as calls for papers, job announcements, comments and suggestions. You can submit job postings here, and reach us at speechnewseds [at] listserv (dot) ieee [dot] org.

Finally, to subscribe to the Newsletter, send an email with the command "subscribe speechnewsdist" in the message body to listserv [at] listserv (dot) ieee [dot] org.

Jason Williams, Editor-in-chief
Pino Di Fabbrizio, Editor
Martin Russell, Editor
Chuck Wooters, Editor


From the SLTC and IEEE

From the IEEE SLTC chair

John Hansen

Updates on ICASSP 2012, the forthcoming article "Trends in Speech and Language Processing", and speaker and language recognition.

IEEE Signal Processing Society Newsletter

The IEEE Signal Processing Society, our parent organization, also produces a monthly newsletter, "Inside Signal Processing".


CFPs, Jobs, and book announcements

Calls for papers, proposals, and participation

Edited by Chuck Wooters

Job advertisements

Edited by Chuck Wooters


INTERSPEECH 2011: a success story

Piero Cosi, Renato De Mori, Roberto Pieraccini, Giuseppe Di Fabbrizio

Exactly 20 years after the second EUROSPEECH conference, held in Genoa, INTERSPEECH returned to Italy this year, to the cradle of the Renaissance, Florence, on 27-31 August 2011.

New Transcription System using Automatic Speech Recognition (ASR) in the Japanese Parliament (Diet)

Tatsuya Kawahara

The Japanese Parliament is now using ASR from Kyoto University and NTT for transcription of all plenary sessions and committee meetings.

Detecting Intoxication in Speech

Matthew Marge

Researchers at Columbia are investigating ways to automatically detect intoxication in speech. William Yang Wang, currently a PhD student at Carnegie Mellon who worked on this team while a Master's student, discussed the project and its goals with us.

Language research presented at SemDial 2011

Antonio Roque

The latest in the SemDial series of workshops on the semantics and pragmatics of dialogue was recently held in Los Angeles on September 21-23.

Interspeech 2011 Plenary Sessions

Martin Russell

Interspeech 2011 was held in Florence, Italy, on 27-31 August. The first three days of the conference began with excellent invited plenary talks by Julia Hirschberg, Tom Mitchell and Alex Pentland.

An Overview of Translingual Automatic Language Exploitation System (TALES)

Tara N. Sainath

Over the past few years, IBM Research has been actively involved in a project known as the Translingual Automatic Language Exploitation System (TALES). The objective of the TALES project is to translate news broadcasts and websites from foreign languages into English. TALES is built on top of the IBM Unstructured Information Management Architecture (UIMA) platform. In this article, we provide an overview of the TALES project and highlight some of its new research directions in more detail.

Speech Application Student Contest

K. W. "Bill" Scholz and Deborah Dahl

AVIOS, the Applied Voice Input/Output Society, is a non-profit foundation dedicated to informing and educating developers of speech applications on best practices for application construction and deployment. In early 2006 we decided to focus this goal on students by giving them an opportunity to demonstrate their development skills to the speech community. The competition has now grown into an annual contest whose winners are substantially remunerated for their efforts, and whose winning applications are posted on our website.

Overview of Intelligent Virtual Agents IVA2011 Conference

Svetlana Stoyanchev

IVA 2011 is a research conference on Intelligent Virtual Agents. Intelligent Virtual Agents (IVAs) are animated embodied characters with interactive human-like capabilities such as speech, gestures, facial expressions, and head and eye movements. Virtual agents can both perceive and exhibit human-like behaviours. Virtual characters enhance user interaction with a dialogue system by adding a visual modality and creating a persona for the system. They are used in interactive systems as tutors, museum guides, advisers, sign language signers, and virtual improvisational artists.

Cloud computing and crowdsourcing for speech processing: a perspective

David Suendermann

This article discusses how cloud computing and crowdsourcing are changing the speech science world in both academia and industry. How are these paradigms interrelated? How do different areas of speech processing make use of cloud computing and crowdsourcing? What roles do performance, pricing, security, and ethics play?

