UNTREF Speech Workshop
[[Home | <<< back to Wiki Home]]
 
 
=Introduction=
 
 
 
http://phenomenologyftw.files.wordpress.com/2011/03/saussure.gif
 
'''Talking To Machines'''

A short workshop introducing speech recognition and speech synthesis techniques for the creation of interactive artwork. We use pre-compiled open-source tools (CMU Sphinx ASR, Festival TTS, Processing, Python) and focus on the demonstrable strengths and unexpected limitations of speech technologies as vehicles for creating meaning.

Saturday Sept 21, 2-6pm. Centro Cultural de Borges, UNTREF.
  
  
=Background Reading=
 
*Natalie Jeremijenko. "If Things Can Talk, What Do They Say? If We Can Talk To Things, What Do We Say?" 2005-03-05. [http://www.electronicbookreview.com/thread/firstperson/voicechip]
 
 
**also see the responses by Simon Penny, Lucy Suchman, and Natalie linked from that page.
 
 
*Mel Bochner.  "Serial Art, Systems, Solipsism." ([[Media:Serial Art, Systems, Solipsism - Bochner 1967.pdf|pdf]])
 
=Automatic Speech Recognition=
 
https://engineering.purdue.edu/~ee649/notes/figures/ear.gif
 
*Talking to Machines.

==Engines==
*Google ASR wrapped for processing - http://stt.getflourish.com/
 
==Hands-on with Processing==
===STT Library===
*Download the STT library: http://dl.dropbox.com/u/974773/_keepalive/stt.zip
*Unzip it and copy it to the ''Processing\libraries'' folder.

===Example - Listening with Google ASR===
*Processing example:
**http://wiki.roberttwomey.com/images/c/c2/Google_listen.zip
*Try switching the recognition language: "es" vs. "en", "de", "fr". (A minimal sketch of the listening setup follows below.)
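A minimal sketch of what a listening sketch looks like with this library. The import path, <code>setLanguage()</code>, and the <code>transcribe()</code> callback below are assumptions, not a definitive reference; if they don't match your version of the library, follow the example inside google_listen.zip.

<pre>
// Minimal listening sketch with the STT library (Google ASR).
// NOTE: the import path, setLanguage(), and the transcribe() callback
// are assumptions -- check the google_listen example for the exact API.
import com.getflourish.stt.*;

STT stt;

void setup() {
  stt = new STT(this);     // starts listening to the default microphone
  stt.enableDebug();       // print status messages to the console
  stt.setLanguage("en");   // try "es", "de", "fr" ...
}

void draw() {
  background(0);
}

// called by the library when a recognition result comes back
void transcribe(String utterance, float confidence) {
  println(utterance + " (" + confidence + ")");
}
</pre>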
==Hands-on with Sphinx==
===Installation===
*Download from sourceforge: http://cmusphinx.sourceforge.net/wiki/download/
*If using Windows, you need the '''sphinxbase-0.8-win32.zip''' and '''pocketsphinx-0.8-win32.zip''' files. I already downloaded these for you; they are in the '''untref_speech''' folder.
  
===Usage===
*Open a terminal. Windows: Run -> Cmd.
*Change to the pocketsphinx directory:
**<code>cd Desktop\untref_speech\pocketsphinx-0.8-win32\bin\Release</code>
*ENGLISH: run the pocketsphinx command to recognize English:
**<code>pocketsphinx_continuous.exe -hmm ..\..\model\hmm\en_US\hub4wsj_sc_8k -dict ..\..\model\lm\en_US\cmu07a.dic -lm ..\..\model\lm\en_US\hub4.5000.DMP</code>
*SPANISH: run the pocketsphinx command to recognize Spanish:
**<code>pocketsphinx_continuous.exe -hmm ..\..\model\hmm\es_MX\hub4_spanish_itesm.cd_cont_2500 -dict ..\..\model\lm\es_MX\h4.dict -lm ..\..\model\lm\es_MX\H4.arpa.Z.DMP</code>
**This should transcribe live from the microphone (see the sketch below for one way to read this output from a program).
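If you want to use these transcriptions from a program instead of just watching the terminal, one simple approach is to launch pocketsphinx_continuous as a child process and read its standard output. Below is a rough sketch of that idea in Processing/Java; the folder path is hypothetical, so point it at the Release folder from the steps above.

<pre>
// Launch pocketsphinx_continuous as a child process and print its output.
// The directory below is a placeholder -- use your own Release folder.
import java.io.*;

void setup() {
  thread("listen");  // read the recognizer output without blocking draw()
}

void draw() {
  background(0);
}

void listen() {
  File dir = new File("C:\\Users\\you\\Desktop\\untref_speech\\pocketsphinx-0.8-win32\\bin\\Release");
  ProcessBuilder pb = new ProcessBuilder(
      new File(dir, "pocketsphinx_continuous.exe").getAbsolutePath(),
      "-hmm",  "..\\..\\model\\hmm\\en_US\\hub4wsj_sc_8k",
      "-dict", "..\\..\\model\\lm\\en_US\\cmu07a.dic",
      "-lm",   "..\\..\\model\\lm\\en_US\\hub4.5000.DMP");
  pb.directory(dir);            // so the relative model paths resolve
  pb.redirectErrorStream(true);
  try {
    Process p = pb.start();
    BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {
      println(line);            // recognized phrases appear among the engine's log lines
    }
  } catch (IOException e) {
    e.printStackTrace();
  }
}
</pre>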
  
===Language Models===
*''Acoustic models'' versus ''language models''.
*''Grammars'' versus ''statistical language models''.
*Available language models for Sphinx:
**English
**Mandarin
**French
**Spanish
**German
**Dutch
**and more: http://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/

===Training your own Models===
*Writing a grammar is trivial.
*For an SLM, you can use online tools, or try the sphinxtrain packages.
*The online tool: http://www.speech.cs.cmu.edu/tools/lmtool-new.html
**Upload a plain-text file of sentences; it will produce a language model from these (see the example input file below).
**Download the results.
**I can talk you through using the resultant model.
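The input file is nothing fancy: just the sentences you want the recognizer to know about, one per line. A made-up example:

<pre>
turn on the light
turn off the light
open the door
close the window
what time is it
</pre>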
==Hands-on with Sphinx4 Library for Processing==
*This section includes a wrapper of the CMU Sphinx4 recognizer for Processing. Read more about the CMU Sphinx project at http://cmusphinx.sourceforge.net/.
*Below we have a library for Processing, an example using a grammar of phrases for recognition, and one using a statistical language model.

===Library===
*JAR file and some necessary language and acoustic models to do Sphinx-based speech recognition.
*Download the zip file below and copy it to your Processing/libraries folder:
**Download file: http://wiki.dxarts.washington.edu/groups/general/wiki/d7564/attachments/d8bfa/sphinx.zip
  
===Example - Grammar-based Recognition===
*Simple grammar-based speech recognition with Sphinx4 in Processing.
**Download file: http://wiki.dxarts.washington.edu/groups/general/wiki/d7564/attachments/63c05/sphinxGrammarCustomdict.zip

This example uses a simple grammar. In the data folder it has a grammar file (.gram), a dictionary file (.dict), and a config file (.xml).

The grammar file (upstairs.gram) is a JSGF-format grammar file that lists the possible words your system can hear. Individual words are written in upper-case letters, with a "|" mark between each word. You should be able to edit this file and fill it with your own words (a sketch of a grammar and dictionary follows below).

The dict file (upstairs.dict) is a pronunciation dictionary file. It breaks each of those upper-case words from the grammar into phonemic units. The easiest way to make a new dictionary with your own words is to use the online language tool described below.

Finally, the config file (upstairs.config.xml) specifies various parameters and file names for the speech recognition engine. In this file you will probably need to change the paths to your data files (the grammar and dict) and to the library files you installed above. If you edit the xml file you will see that a lot of the paths are of the form "/Users/rtwomey/", which is obviously my computer; replace them with the paths on your system. Contact me if this doesn't work.
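For reference, this is roughly what those first two files look like. The words below are made up for illustration; the actual upstairs.gram and upstairs.dict in the zip will differ. A JSGF grammar listing the words the recognizer may hear:

<pre>
#JSGF V1.0;
grammar upstairs;
public <word> = LIGHT | DOOR | WINDOW ;
</pre>

And the matching pronunciation dictionary, one word per line followed by its phonemes:

<pre>
LIGHT   L AY T
DOOR    D AO R
WINDOW  W IH N D OW
</pre>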
  
===Example - SLM-based Recognition===
*This example does Sphinx4 automatic speech recognition using a statistical language model (SLM) rather than a grammar.
*I have included two SLMs in the data directory: 3990.lm/3990.dict and 7707.lm/7707.dict.
*They were generated with the CMU Sphinx Knowledge Base Tool (see below). For each, I uploaded a plain-text file of sentences and saved the resulting tar file with the dict, lm, and other results.
*As above, you may need to change some file paths in the sphinx_config.xml file to match the setup on your system.
*Download file: http://wiki.dxarts.washington.edu/groups/general/wiki/d7564/attachments/01040/sphinxSLMTest.zip
  
===Online Tool for Training Language Models===
*This produces a statistical language model and dictionary (along with various other products) for the text you upload.
*Your source file should be plain text, one sentence per line.
*Upload the file and then click "Compile Knowledge Base."
*On the results screen, click on the .TAR file to download it. Unzip this file:
**The .dic is your pronunciation dictionary. You may want to rename it to .dict to match the files in the sketch (see the example command at the end of this list), or change your config file.
**The .lm file is a 3-gram SLM file. If you are trying the SLM example above you will need this as well.
*The grammar example above runs from a grammar (.gram) and a dictionary (.dict). This online language tool generates the dictionary for your text but not the grammar. You will need to make the grammar on your own.
*The SLM example above runs from a dictionary (.dict) and a language model (.lm). This online tool generates both files.
*Sphinx Knowledge Base Tool: http://www.speech.cs.cmu.edu/tools/lmtool-new.html
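*For example, the rename is a one-line command in the Windows terminal; the numeric prefix is whatever number the tool assigned to your upload:
**<code>ren 3990.dic 3990.dict</code>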
  
==Other programming==
*Python or C++
*command line
*Android
*'''pocketsphinx'''
  
=Text To Speech Synthesis=
http://www.pixel-issue.net/wp-content/uploads/2011/11/voder-2.png
 
==Engines==
**Others...
 

==Test them online==
*MARY TTS online demo - http://mary.dfki.de:59125/
 
==Hands-on With Processing==
 
For Google TTS no library is required. You don't have to install anything. You just need an internet connection to talk to Google.
  
 
===Example 1. Speech===
 
*http://wiki.roberttwomey.com/images/d/d2/Google_speak.zip
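A minimal sketch of the idea behind this example: build a Google Translate TTS URL for your sentence, download the mp3, and play it with the Minim library. The translate_tts address below is the unofficial endpoint these examples relied on at the time and is an assumption here; see google_speak.zip for the workshop's own version.

<pre>
// Speak a sentence with Google TTS: fetch the mp3, then play it with Minim.
// The translate_tts URL is the unofficial endpoint used around 2013 and is
// an assumption here -- see google_speak.zip for the working version.
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  minim = new Minim(this);
  speak("hola, como estas", "es");
}

void speak(String text, String lang) {
  String url = "http://translate.google.com/translate_tts?tl=" + lang +
               "&q=" + text.replace(" ", "%20");
  saveStream(dataPath("speech.mp3"), url);   // download the synthesized audio
  player = minim.loadFile("speech.mp3");     // load it from the data folder
  player.play();
}

void draw() {
  background(0);
}
</pre>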
  
 
===Example 2. Daisy Bell===
 
*Daisy Bell - http://www.youtube.com/watch?v=41U78QP8nBk
**"Daisy Bell" was composed by Harry Dacre in 1892. In 1961, the IBM 7094 became the first computer to sing, singing the song Daisy Bell. Vocals were programmed by John Kelly and Carol Lochbaum and the accompaniment was programmed by Max Mathews.
  
 
*Processing Daisy Bell example using Google Text To Speech. Requires an internet connection:
 
**http://wiki.roberttwomey.com/images/4/43/Google_daisy.zip
==Hands-on with Festival==
===Installation===
*http://festvox.org/packed/festival/2.1/festival-2.1-release.tar.gz
*Tutorial - http://homepages.inf.ed.ac.uk/jyamagis/misc/Practice_of_Festival_speech_synthesizer.html
*Windows binaries: http://sourceforge.net/projects/e-guidedog/files/related%20third%20party%20software/0.3/festival-2.1-win.7z/download
*Voices: http://homepages.inf.ed.ac.uk/jyamagis/software/page54/page54.html
*Copy the festival folder to C:\

===Usage===
*Run the terminal. Start Menu, Run -> Cmd.
*Switch to the festival directory:
**<code>cd C:\festival</code>
*Start festival:
**<code>festival</code>
*To say something:
**<code>(SayText "this is what I am going to say")</code>
*To render speech to a sound file, one option is:
**<code>(utt.save.wave (SayText "this is what I am going to say") "out.wav" 'riff)</code>
*To switch voices:
**<code>(voice_rab_diphone)</code>
**<code>(voice_uw_us_rdt_clunits)</code>
*To exit festival:
**<code>(exit)</code>
*Festival is written in Scheme, a variant of LISP.
  
 
==Voices==
 
===Making a Voice===
*Portraiture
 
*Robert Voice
 
=Activity: Feedback Loop=
 
http://phenomenologyftw.files.wordpress.com/2011/03/saussure.gif
 
Construct a conversation with the machine.
  
 
==Processing Sketch==
 
http://wiki.roberttwomey.com/images/8/86/Listen_speak.zip
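One possible shape for the loop, combining the earlier pieces: the STT library listens, and each phrase it hears is spoken back out with Google TTS, which the microphone then hears again. As before, the STT class and <code>transcribe()</code> callback names are assumptions, and the translate_tts endpoint is the unofficial one; listen_speak.zip is the working version.

<pre>
// Feedback loop skeleton: the machine listens, then speaks back what it heard.
// STT class / transcribe() callback names and the translate_tts URL are
// assumptions -- see listen_speak.zip for the working sketch.
import com.getflourish.stt.*;
import ddf.minim.*;

STT stt;
Minim minim;
AudioPlayer player;

void setup() {
  stt = new STT(this);
  stt.setLanguage("en");
  minim = new Minim(this);
}

void draw() {
  background(0);
}

// each recognized phrase is spoken back, closing the loop
void transcribe(String utterance, float confidence) {
  println("heard: " + utterance);
  speak(utterance, "en");
}

void speak(String text, String lang) {
  String url = "http://translate.google.com/translate_tts?tl=" + lang +
               "&q=" + text.replace(" ", "%20");
  saveStream(dataPath("speech.mp3"), url);
  player = minim.loadFile("speech.mp3");
  player.play();
}
</pre>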
