UNTREF Speech Workshop



Introduction

A short 1-2 day workshop introducing speech recognition and speech synthesis techniques for the creation of interactive artwork. We use pre-compiled open-source tools (CMU Sphinx ASR and Festival TTS), and focus on the specifics of language model construction and creative deployment of the technologies as vehicles for creating meaning.

Background

  • If Things Can Talk, What Do They Say? If We Can Talk to Things, What Do We Say? Natalie Jeremijenko. 2005-03-05 [1]
    • also see the responses by Simon Penny, Lucy Suchman, and Natalie Jeremijenko.
  • Dialogue with a Monologue: Voice Chips and the Products of Abstract Speech. [2]

Speech Recognition

Introduction

Installing CMU Sphinx

http://cmusphinx.sourceforge.net/wiki/download/

Language Models

Acoustic models (how the words and phonemes of a language sound) versus language models (which word sequences the recognizer expects).

Grammars versus statistical language models: a hand-written grammar restricts recognition to a fixed set of phrases, while a statistical language model assigns probabilities to arbitrary word sequences (see the sketch below).

Available acoustic and language models for English, Mandarin, French, Spanish, German, Dutch, and more: http://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/
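To make the distinction concrete, the following sketch writes out a tiny finite-state grammar in JSGF form; pocketsphinx can be pointed at a file like this with the -jsgf option in place of -lm, keeping the same acoustic model and dictionary. The file name and phrases are hypothetical examples, and the sketch is in Python only for convenience.

  # A tiny JSGF grammar: only the phrases it spells out can be recognized,
  # whereas a statistical language model (like hub4.5000.DMP below) scores
  # arbitrary word sequences seen in its training text.
  lines = [
      "#JSGF V1.0;",
      "grammar commands;",
      "public <command> = (turn | switch) (on | off) the (light | fan);",
  ]

  with open("commands.gram", "w") as f:
      f.write("\n".join(lines) + "\n")

Note that every word used in a grammar still needs an entry in the pronunciation dictionary passed with -dict.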

Using Sphinx

  • open a terminal (on Windows: Run -> cmd).
  • change to the pocketsphinx directory.
    • cd Desktop\untref_speech\pocketsphinx-0.8-win32\bin\Release
  • run the pocketsphinx command to recognize English:
    • pocketsphinx_continuous.exe -hmm ..\..\model\hmm\en_US\hub4wsj_sc_8k -dict ..\..\model\lm\en_US\cmu07a.dic -lm ..\..\model\lm\en_US\hub4.5000.DMP
  • run it with the Spanish models to recognize Spanish:
    • pocketsphinx_continuous.exe -hmm ..\..\model\hmm\es_MX\hub4_spanish_itesm.cd_cont_2500 -dict ..\..\model\lm\es_MX\h4.dict -lm ..\..\model\lm\es_MX\H4.arpa.Z.DMP
    • this should transcribe live from the microphone.

Training your own Models

Writing a grammar by hand is trivial (see the JSGF sketch above).

For statistical language models, you can use online tools (such as the CMU lmtool) or try the SphinxTrain packages; a corpus-preparation sketch follows below.
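A minimal corpus-preparation sketch, with hypothetical phrases and file name: collect the sentences you want recognized, one utterance per line, into a text file. The online lmtool takes a file in this form and returns a small statistical language model plus a pronunciation dictionary for the words it contains.

  # Write a corpus file with one utterance per line, ready to upload to an
  # online language-model tool. Phrases and file name are example placeholders.
  phrases = [
      "TURN ON THE LIGHT",
      "TURN OFF THE LIGHT",
      "OPEN THE DOOR",
      "CLOSE THE DOOR",
  ]

  with open("corpus.txt", "w") as f:
      for phrase in phrases:
          f.write(phrase + "\n")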

Programming with Speech Recognition

Processing: Sphinx4, the Java interface.

Python or C++, command line, Android: pocketsphinx (see the Python sketch below).
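A minimal Python sketch of offline decoding with pocketsphinx, assuming the pocketsphinx Python bindings are installed (the Decoder API shown follows the SWIG-based bindings and may differ slightly between releases). The model paths follow the workshop folder layout used above, and utterance.wav is a hypothetical 16 kHz, 16-bit mono recording.

  # Decode a prerecorded 16 kHz, 16-bit mono WAV file and print the hypothesis.
  import wave

  from pocketsphinx import Decoder

  config = Decoder.default_config()
  config.set_string('-hmm', r'..\..\model\hmm\en_US\hub4wsj_sc_8k')
  config.set_string('-dict', r'..\..\model\lm\en_US\cmu07a.dic')
  config.set_string('-lm', r'..\..\model\lm\en_US\hub4.5000.DMP')
  decoder = Decoder(config)

  wav = wave.open('utterance.wav', 'rb')  # hypothetical test recording
  decoder.start_utt()
  while True:
      buf = wav.readframes(1024)
      if not buf:
          break
      decoder.process_raw(buf, False, False)  # no_search=False, full_utt=False
  decoder.end_utt()
  wav.close()

  hyp = decoder.hyp()
  if hyp is not None:
      print('Recognized:', hyp.hypstr)

For live microphone input, the command-line pocketsphinx_continuous tool above is the simplest route.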

Speech Synthesis

Introduction

FestVox, from the CMU speech group.

Festival, from the University of Edinburgh.

Installation

Test It
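One quick way to test a Festival install is to pipe a line of text into its --tts mode, which reads text from standard input and speaks it. A minimal Python sketch, assuming a festival binary is on the PATH (the executable name may differ on the Windows build):

  # Send a line of text to Festival's text-to-speech mode via a subprocess pipe.
  import subprocess

  text = "Hello from the UNTREF speech workshop."
  subprocess.run(["festival", "--tts"], input=text.encode(), check=True)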