
Speech and Language Processing Technical Committee Newsletter

February 2013

Welcome to the Spring 2013 edition of the IEEE Speech and Language Processing Technical Committee's Newsletter! This issue of the newsletter includes 9 articles from 17 guest contributors, and our own staff reporters and editors. Thank you all for your contributions!

We'd like to thank the retiring editor Martin Russell, and welcome our new editor Haizhou Li and staff reporter Navid Shokouhi.

We believe the newsletter is an ideal forum for updates, reports, announcements and editorials that don't fit well in traditional journals. We welcome your contributions, as well as calls for papers, job announcements, comments and suggestions. You can submit job postings here, and reach us at speechnewseds [at] listserv (dot) ieee [dot] org.

We'd like to recruit more reporters: if you are a PhD student or a recent graduate and are interested in contributing to our newsletter, please email us (speechnewseds [at] ...) with your application. The workload includes helping with the reviews of submissions and writing occasional reports for the Newsletter. Finally, to subscribe to the Newsletter, send an email with the command "subscribe speechnewsdist" in the message body to listserv [at] listserv (dot) ieee [dot] org.

Dilek Hakkani-Tür, Editor-in-chief
William Campbell, Editor
Haizhou Li, Editor
Patrick Nguyen, Editor


From the SLTC and IEEE

From the IEEE SLTC chair

Douglas O'Shaughnessy

IEEE Awards and Recognition

John H.L. Hansen

CFPs, Jobs, and Announcements

Calls for papers, proposals, and participation

Edited by William Campbell

Job advertisements

Edited by William Campbell


Call for Proposals - SLT-2014

SPS-SLTC Workshop Sub-Committee, Nick Campbell, George Saon, and Geoffrey Zweig


In Pursuit of Situated Spoken Dialog and Interaction

Dan Bohus and Eric Horvitz

Advances over the last decade in speech recognition and NLP have fueled the widespread use of spoken dialog systems, including telephony-based applications, multimodal voice search, and voice-enabled smartphone services designed to serve as mobile personal assistants. Key limitations of the systems fielded to date frame opportunities for new research on physically situated and open-world spoken dialog and interaction. Such opportunities are made especially salient for such goals as supporting efficient communication at a distance with Xbox applications and avatars, collaborating with robots in a public space, and enlisting assistance from in-car information systems while driving a vehicle.


An Overview of Selected Talks at NIPS 2012 Conference/Workshop

Tara N. Sainath

The 26th annual Conference on Neural Information Processing Systems (NIPS) took place in Lake Tahoe, Nevada, in December 2012. The NIPS conference covers a wide variety of research topics, ranging from synthetic neural systems built with machine learning and artificial intelligence algorithms to the analysis of natural neural processing systems. This article is a summary of selected talks on recent developments in neural networks and deep learning algorithms presented at NIPS 2012.


Interview: Developing the Next Generation of In-Car Interfaces

Matthew Marge

Researchers at Carnegie Mellon University’s Silicon Valley Campus and Honda Research Institute have brought together many of today’s visual and audio technologies to build a cutting-edge in-car interface. Ian Lane, Research Assistant Professor at CMU Silicon Valley, and Antoine Raux, Senior Scientist at Honda Research Institute, spoke to us regarding the latest news surrounding AIDAS: An Intelligent Driver Assistive System.


The "Spoken Web Search" task at Mediaeval 2012

Xavier Anguera, Florian Metze, Andi Buzo, Igor Szoke and Luis J. Rodriguez-Fuentes

In this article we describe the "Spoken Web Search" task within Mediaeval, which aims to foster research on language-independent search of "real-world" speech data, with a special emphasis on low-resource languages. In addition, we review the main approaches proposed in 2012 and issue a call for participation in the 2013 evaluation.

Speaker Verification Makes Its Debut in Smartphones

Kong Aik Lee, Bin Ma, and Haizhou Li

My voice tells who I am. No two individuals sound identical because their vocal tract shapes and other voice production organs differ. With speaker verification technology, we extract speaker traits, or a voiceprint, from speech samples to establish a speaker's identity. Among the various forms of biometrics, voice is believed to be the most straightforward for telephone-based applications because the telephone was built for voice communication. The recent release of the Baidu-Lenovo A586 marks an important milestone in the mass-market adoption of speaker verification technology in mobile applications. The voice-unlock feature in the smartphone allows users to unlock their phone screens with spoken passphrases.
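The verification idea described above can be sketched in a few lines: extract a fixed-length voiceprint from each utterance, then accept or reject based on the similarity between the enrollment and test voiceprints. The sketch below is purely illustrative, with a toy spectral-average embedding and synthetic sinusoidal "speakers" standing in for real speech; production systems such as the one discussed in the article use far richer features and models (e.g., MFCCs with GMM or i-vector back-ends), and the `embed`/`verify` helpers and the 0.95 threshold are all assumptions made up for this example.

```python
import numpy as np

def embed(signal, frame_len=256, hop=128, n_coeffs=20):
    """Toy 'voiceprint': average log-magnitude spectrum over frames,
    length-normalized. Real systems use MFCC/i-vector/DNN embeddings."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    window = np.hanning(frame_len)
    spectra = [np.log(np.abs(np.fft.rfft(f * window)) + 1e-8) for f in frames]
    emb = np.mean(spectra, axis=0)[:n_coeffs]
    return emb / np.linalg.norm(emb)

def verify(enroll, test, threshold=0.95):
    """Accept if cosine similarity between voiceprints exceeds a threshold."""
    score = float(np.dot(embed(enroll), embed(test)))
    return score, score >= threshold

# Synthetic stand-ins for speech: two recordings of the same 'speaker'
# (same harmonic structure, different noise) and one different 'speaker'.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
alice1 = np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*240*t) \
         + 0.01*rng.standard_normal(t.size)
alice2 = np.sin(2*np.pi*120*t) + 0.5*np.sin(2*np.pi*240*t) \
         + 0.01*rng.standard_normal(t.size)
bob    = np.sin(2*np.pi*300*t) + 0.5*np.sin(2*np.pi*600*t) \
         + 0.01*rng.standard_normal(t.size)

score_same, ok_same = verify(alice1, alice2)  # high similarity: accept
score_diff, ok_diff = verify(alice1, bob)     # low similarity: reject
```

Text-dependent voice-unlock as described in the article adds a passphrase constraint on top of this speaker-similarity decision, which makes the matching problem easier than free-text verification.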

Spoken language disorders: from screening to analysis

Tobias Bocklet, Elmar Nöth

Cleft Lip and Palate (CLP) is among the most frequent congenital abnormalities [1]: facial development is abnormal during gestation, leading to insufficient closure of the lip, palate, and jaw, which affects articulation. Due to the wide variety of malformations, speech production is affected differently in different patients.

Previous research in our group focused mostly on global, text-wide scores such as speech intelligibility [2, 3]. In current projects we focus on a more detailed automatic analysis, with the goal of providing an in-depth diagnosis with direct feedback on articulation deficits.


Overview of the 8th International Symposium on Chinese Spoken Language Processing

Helen Meng

This article gives a brief overview of the 8th International Symposium on Chinese Spoken Language Processing (ISCSLP), held in Hong Kong on 5-8 December 2012. ISCSLP is a major scientific conference for scientists, researchers, and practitioners to report and discuss the latest progress in all theoretical and technological aspects of Chinese spoken language processing. The working language of ISCSLP is English.



Subscribe to the newsletter

SLTC Home