
Speech and Language Processing Call for Papers


Dec 20, 2016

With the widespread explosion of sensing and computing, an increasing number of industrial applications and an ever-growing amount of academic research generate massive multi-modal data from multiple sources. The Gaussian distribution is ubiquitously used in statistics, signal processing, and pattern recognition. However, not all the data we process are Gaussian distributed. Recent studies have found that explicitly exploiting the non-Gaussian characteristics of data (e.g., data with bounded support, data with semi-bounded support, and data with an L1/L2-norm constraint) can significantly improve the performance of practical systems. Hence, it is of particular importance and interest to thoroughly study non-Gaussian data and the corresponding non-Gaussian statistical models (e.g., the beta distribution for bounded support data, the gamma distribution for semi-bounded support data, and the Dirichlet/vMF distributions for data with an L1/L2-norm constraint).

To analyze and understand such non-Gaussian data, the development of related learning theories, statistical models, and efficient algorithms becomes crucial. The scope of this special issue is to provide theoretical foundations as well as ground-breaking models and algorithms that address this challenge.
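As an illustrative sketch (not part of the call itself), the three support constraints named above can be matched to their non-Gaussian models using only Python's standard library: the beta distribution for bounded data in (0, 1), the gamma distribution for semi-bounded data in (0, ∞), the Dirichlet distribution (sampled here as normalized gammas) for L1-norm-constrained data on the simplex, and unit-norm vectors (the support of the vMF distribution) for the L2-norm constraint. All parameter values are arbitrary choices for demonstration.

```python
import random

random.seed(0)

# Bounded support (0, 1): beta distribution
bounded = [random.betavariate(2.0, 5.0) for _ in range(1000)]

# Semi-bounded support (0, inf): gamma distribution
semi_bounded = [random.gammavariate(3.0, 1.0) for _ in range(1000)]

# L1-norm constraint (probability simplex): Dirichlet,
# sampled as independent gammas normalized by their sum
def dirichlet_sample(alphas):
    draws = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

simplex = dirichlet_sample([1.0, 2.0, 3.0])

# L2-norm constraint (unit hypersphere, the support of the vMF
# distribution): a Gaussian vector normalized to unit length
def unit_vector(dim):
    g = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = sum(x * x for x in g) ** 0.5
    return [x / norm for x in g]

sphere = unit_vector(4)
```

Each sample respects its constraint by construction: beta draws stay in (0, 1), gamma draws stay positive, the Dirichlet sample sums to one, and the normalized Gaussian vector has unit L2 norm.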

We invite authors to submit articles addressing aspects ranging from case studies of particular problems with non-Gaussian distributed data to novel learning theories and approaches, including (but not limited to):

  • Machine Learning for Non-Gaussian Statistical Models
  • Non-Gaussian Pattern Learning and Feature Selection
  • Sparsity-aware Learning for Non-Gaussian Data
  • Visualization of Non-Gaussian Data
  • Dimension Reduction and Feature Selection for Non-Gaussian Data
  • Non-Gaussian Convex Optimization
  • Non-Gaussian Cross Domain Analysis
  • Non-Gaussian Statistical Model for Multimedia Signal Processing
  • Non-Gaussian Statistical Model for Source and/or Channel Coding
  • Non-Gaussian Statistical Model for Biomedical Signal Processing
  • Non-Gaussian Statistical Model for Bioinformatics
  • Non-Gaussian Statistical Model in Social Networks
  • Platforms and Systems for Non-Gaussian Data Processing



Apr 12, 2016

******* CFP: Machine Translation Journal ********

** Special Issue on Spoken Language Translation **

Guest editors:
Alex Waibel (Carnegie Mellon University / Karlsruhe Institute of Technology)
Sebastian Stüker (Karlsruhe Institute of Technology)
Marcello Federico (Fondazione Bruno Kessler)
Satoshi Nakamura (Nara Institute of Science and Technology)
Hermann Ney (RWTH Aachen University)
Dekai Wu (The Hong Kong University of Science and Technology)

Spoken language translation (SLT) is the science of automatic translation of spoken language. It may be tempting to view spoken language as nothing more than language (as in text) with an added spoken verbalization. Translation of speech could then be achieved by simply applying automatic speech recognition (ASR, or "speech-to-text") before applying traditional machine translation (MT). Unfortunately, such an overly simplistic approach does not address the complexities of the problem. Not only do speech recognition errors compound with errors in machine translation, but spoken language also differs so considerably in form, structure, and style that the combination of two text-based components is rendered ineffective. Moreover, automatic spoken language translation systems serve different practical goals than voice interfaces or text translators, so integrated systems and their interfaces have to be designed carefully and appropriately (mobile, low-latency, audio-visual, online/offline, interactive, etc.) around their intended deployment. Unlike written texts, human speech is not segmented into sentences, does not contain punctuation, is frequently ungrammatical, and contains many disfluencies and sentence fragments. Conversely, spoken language carries information about the speaker, gender, emotion, emphasis, and social form and relationships, and, in the case of dialog, there are discourse structure, turn-taking, and back-channeling across languages to be considered. SLT systems therefore need to consider a host of additional concerns related to integrated recognition and translation performance, use of social form and function, prosody, suitability and (depending on deployment) effectiveness of human interfaces, and task performance under various speed, latency, context, and language resource constraints.
Thanks to continuing improvements in the underlying ASR and MT components as well as in integrated system designs, spoken language translation systems have become increasingly sophisticated: they can handle more complex sentences, more natural environments, and discourse and conversational styles, leading to a variety of successful practical deployments. In light of 25 years of successful research and transition into practice, the MT Journal dedicates a special issue to the problem of spoken language translation. We invite submissions of papers that address issues and problems pertaining to the development, design, and deployment of spoken language translation systems. Papers on component technologies and methodology as well as on system designs and deployments are both encouraged.
Submission guidelines:
- Authors should follow the "Instructions for Authors" available on the MT Journal website
- Submissions must be limited to 25 pages (including references)
- Papers should be submitted directly via the MT Journal's online submission website, indicating this special issue as the 'article type'

Important dates:
- Paper submission: July 15th 2016.
- Notification to authors: August 3rd 2016.
- Camera-ready*: November 19th 2016.
* tentative - depending on the number of review rounds required

Mar 8, 2016

Paderborn University, Paderborn, Germany. October 5 – 7, 2016

Feb 29, 2016

Speech Research Lab, Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT), Gandhinagar, India

Feb 17, 2016

The emergence of virtual personal assistants such as Siri, Cortana, Echo, and Google Now is generating increasing research interest in speech understanding and spoken interaction. However, whilst the ability of these agents to recognize conversational speech is maturing rapidly, their ability to understand and interact is still limited to a few specific domains, such as weather information, local businesses, and some simple chit-chat. Their conversational capabilities are not necessarily apparent to users. Interaction typically depends on handcrafted scripts and is often guided by simple commands. Deployed dialogue models do not fully make use of the large amount of data that these agents generate. Promising approaches involving statistical models, big data analysis, knowledge representation (hierarchies, relations, etc.), utilizing and enriching semantic graphs with natural language components, multi-modality, etc. are being explored in multiple communities, such as natural language processing (NLP), speech processing, machine learning (ML), and information retrieval. However, we are still only scratching the surface in this field. The goal of the special issue is to bring together both applied and theoretical studies in spoken/natural language processing and machine learning to facilitate the emergence of new frameworks that can help advance modern conversational systems.

The special issue is a follow-up to the NIPS 2015 Workshop on Spoken Language Understanding and Interaction, but is not limited to papers from that workshop. All new and original submissions are encouraged. Papers will be peer-reviewed according to the journal's standards.

Additional information can be found here.

Submissions due by 10 April 2016

Jan 20, 2016

The Young Researchers' Roundtable on Spoken Dialog Systems (YRRSDS) 2016 is soliciting position papers and participation. YRRSDS 2016 is an open forum for spoken dialog researchers to discuss their work and research interests. This 12th edition of the annual roundtable provides a networking platform for young researchers in the field and serves as a playground for stimulating new ideas, sharing tools, and discussing current issues in spoken dialog systems research.

Participants should write a two-page position paper describing their research interests and their thoughts on future research in spoken dialogue systems.

Please note that by "young researchers" the workshop's organizers mean to target students and researchers in the field who are at a relatively early stage of their careers, and in no way mean to imply that participants must meet certain age restrictions.

The 2016 YRRSDS will be held at the University of Southern California Institute for Creative Technologies (ICT).

Additional information can be found here:
