UNTREF Speech Workshop

From Robert-Depot
Revision as of 06:17, 3 September 2013


Introduction

A short 1-2 day workshop introducing open-source speech recognition and speech synthesis technologies for making interactive artworks. We use pre-compiled open-source tools (CMU Sphinx ASR and Festival TTS), and focus on the importance of language-model content and interaction context for creating meaning in the work.

Background

  • If Things Can Talk, What Do They Say? If We Can Talk to Things, What Do We Say? Natalie Jeremijenko. 2005-03-05 [1]
    • also see the responses by Simon Penny, Lucy Suchman, and Natalie.
  • Dialogue with a Monologue: Voice Chips and the Products of Abstract Speech. [2]

Speech Recognition

Introduction

Installing CMU Sphinx

http://cmusphinx.sourceforge.net/wiki/download/

Language Models

Acoustic models versus language models: the acoustic model maps audio features to phonemes, while the language model constrains which word sequences the recognizer will consider.

Grammars versus statistical language models: a grammar enumerates exactly the phrases that can be recognized, while a statistical language model (SLM) assigns probabilities to word sequences estimated from a text corpus.
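To make the SLM side of this distinction concrete, here is a toy bigram language model in plain Python (standard library only; the corpus sentences and function names are invented for this sketch). It estimates how likely one word is to follow another from a tiny command corpus, which is what a statistical language model does at scale:

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count bigram frequencies from a list of sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = ["<s>"] + sentence.lower().split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def prob(counts, a, b):
    """Maximum-likelihood estimate of P(b | a), with no smoothing."""
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

corpus = ["turn the light on", "turn the light off", "turn the radio on"]
model = train_bigram(corpus)
print(prob(model, "the", "light"))  # "light" follows "the" in 2 of 3 sentences
```

A grammar, by contrast, would simply list those three commands verbatim and reject everything else; the SLM instead scores any word sequence, which matters once the vocabulary grows beyond a handful of phrases.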

Using Sphinx

  • Open a terminal. On Windows: Run → cmd.
  • Change to the pocketsphinx directory:
    • cd Desktop\untref_speech\pocketsphinx-0.8-win32\bin\Release
  • Run the pocketsphinx command:
    • pocketsphinx_continuous.exe -hmm ..\..\model\hmm\en_US\hub4wsj_sc_8k -dict ..\..\model\lm\en_US\cmu07a.dic -lm ..\..\model\lm\en_US\hub4.5000.DMP
    • This should transcribe live speech from the microphone.

Training your own Models

Writing a grammar is trivial: you list, by hand, the exact phrases you want recognized.
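As an illustration, a complete JSGF grammar for a hypothetical light-control piece can be just a few lines (the grammar name and phrases here are made up for this sketch):

```
#JSGF V1.0;

grammar lights;

public <command> = ( turn | switch ) the ( light | radio ) ( on | off );
```

Pocketsphinx can load a file like this via its -jsgf option in place of the -lm language model argument.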

For statistical language models, you can use online tools (such as the CMU lmtool), or try the SphinxTrain packages.

Programming with Speech Recognition

Processing: Sphinx4, the Java interface.

Python, C++, the command line, or Android: pocketsphinx.
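As a sketch of what the Python route can look like, here is a minimal live-transcription loop. This assumes the pocketsphinx-python bindings, whose LiveSpeech API may differ between versions, and it needs a working microphone, so treat it as a starting point rather than a definitive recipe:

```python
# Minimal live-transcription sketch using the pocketsphinx Python bindings.
# Assumes the `pocketsphinx` package (pocketsphinx-python) is installed; the
# LiveSpeech API shown here is from those bindings and may differ in other
# versions of the library.
from pocketsphinx import LiveSpeech

# With no arguments, LiveSpeech uses the bundled default US-English
# acoustic model, dictionary, and language model.
for phrase in LiveSpeech():
    print(phrase)  # each completed utterance, as recognized text
```

The same loop accepts keyword arguments for custom model paths, which is how you would swap in a grammar or language model trained for your piece.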

Speech Synthesis

Introduction

FestVox, from the CMU Speech Group.

Festival, from the University of Edinburgh.
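A common pattern in interactive work is to call Festival from the controlling program. A minimal sketch in Python, assuming the `festival` binary is installed and on the PATH (the function name is invented for this example):

```python
import subprocess

def say(text):
    """Speak text by piping it to Festival's text-to-speech mode.

    `festival --tts` reads text from stdin when no file is given.
    Requires the festival binary to be installed and on the PATH.
    """
    subprocess.run(["festival", "--tts"], input=text.encode(), check=True)

say("Hello from the workshop")
```

For finer control (changing voices, inserting pauses), Festival also has an interactive Scheme interpreter, but piping plain text is enough for many installations.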

Installation