Festival TTS
 
Getting Started

Learning

  • short tutorial - http://festvox.org/festtut-2.0/
  • exercises and hints - http://festvox.org/festtut-2.0/exercises/

Slides on HTS Synthesis

  • http://www.sp.nitech.ac.jp/~tokuda/tokuda_iscslp2006.pdf

Training Voice Models

Building a Unit Selection Cluster Voice

(from here http://festvox.org/festvox/x3086.html)

  1. make a working directory and change into it:
    mkdir uw_us_rdt
    cd uw_us_rdt
  2. uniphone setup:
    $FESTVOXDIR/src/unitsel/setup_clunits uw us rdt uniphone
  3. generate prompts and prompt files:
    festival -b festvox/build_clunits.scm '(build_prompts_waves "etc/uniphone.data")'
  4. record the prompts, e.g. in Audacity, and save them as 16 kHz, 16-bit mono WAV files.
  5. make labels:
    ./bin/make_labs prompt-wav/*.wav
  6. build the utterance structure:
    festival -b festvox/build_clunits.scm '(build_utts "etc/uniphone.data")'
  7. do pitch marking:
    ./bin/make_pm_wave etc/uniphone.data
  8. extract Mel-frequency cepstral coefficients:
    ./bin/make_mcep etc/uniphone.data
  9. build the cluster unit selection synthesizer (steps 5-9 are collected into a rebuild script below):
    festival -b festvox/build_clunits.scm '(build_clunits "etc/uniphone.data")'
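Steps 5-9 have to be re-run whenever the recordings change, so it can help to collect them in a small script. A sketch (nothing new here, just the commands above; assumes you are in the voice directory created by setup_clunits):

#!/bin/sh
# rebuild.sh -- redo labeling, utterance building, pitch marking, MFCCs, and
# clustering for the uniphone voice, using the files already in wav/ and prompt-wav/.
set -e
./bin/make_labs prompt-wav/*.wav
festival -b festvox/build_clunits.scm '(build_utts "etc/uniphone.data")'
./bin/make_pm_wave etc/uniphone.data
./bin/make_mcep etc/uniphone.data
festival -b festvox/build_clunits.scm '(build_clunits "etc/uniphone.data")'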

Using a Unit Selection Cluster Voice Synth

  1. from the uw_us_rdt directory, start festival with the voice definition:
    festival festvox/uw_us_rdt_clunits.scm
  2. at the Scheme prompt, select the voice:
    (voice_uw_us_rdt_clunits)
  3. synthesize a test sentence:
    (SayText "this is a little test.")
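The same thing can be done non-interactively, which is handy for writing the result to a file instead of playing it. A sketch, assuming the standard festival utterance calls (Utterance, utt.synth, utt.save.wave); (quote riff) is used so the whole expression survives the shell quoting:

# load the voice definition, select it, and save one synthesized sentence to test.wav
festival -b festvox/uw_us_rdt_clunits.scm \
  '(voice_uw_us_rdt_clunits)' \
  '(utt.save.wave (utt.synth (Utterance Text "this is a little test.")) "test.wav" (quote riff))'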

Building a CLUSTERGEN Statistical Parametric Synthesizer

(adapted from http://festvox.org/festvox/c3174.html#AEN3176)

  1. mkdir uw_us_rdt_arctic
  2. cd uw_us_rdt_arctic
  3. $FESTVOXDIR/src/clustergen/setup_cg uw us rdt_arctic
  4. copy your prompt text into etc/txt.done.data; you can use some of the lines from http://www.festvox.org/cmu_arctic/cmuarctic.data (an example entry is sketched below)
  5. copy your audio files into wav/
  6. use bin/get_wavs to copy the recordings in, power-normalize them, and convert them to the proper format.
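Each line of etc/txt.done.data pairs a prompt id with its text, and every id needs a matching recording wav/<id>.wav. As a sketch, the first couple of lines borrowed from the cmuarctic.data file linked above look like this (double-check them against the actual file):

# write two example prompts into the data file; ids must match the wav filenames
cat > etc/txt.done.data <<'EOF'
( arctic_a0001 "Author of the danger trail, Philip Steels, etc." )
( arctic_a0002 "Not at this particular case, Tom, apologized Whittemore." )
EOF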

Building a Unit Selection Cluster Voice from TIMIT data

(from here http://festvox.org/festvox/c2645.html#AEN2716)

  1. mkdir uw_uw_rdt_timit
  2. cd uw_uw_rdt_timit
  3. timit setup:
     $FESTVOXDIR/src/unitsel/setup_clunits uw us rdt timit
  4. generate prompts and prompt files:
    festival -b festvox/build_clunits.scm '(build_prompts_waves "etc/timit.data")'
  5. record the prompts, e.g. in Audacity, and save them as 16 kHz, 16-bit mono WAV files.
  6. copy the sound files from the recording directory into the voice directory:
     ./bin/get_wavs ~/Sounds/TIMIT_Training_Data/warehouse_omni/*.wav
  7. make labels:
    ./bin/make_labs prompt-wav/*.wav
  8. build utterance structure:
    festival -b festvox/build_clunits.scm '(build_utts "etc/timit.data")'
  9. do pitch marking:
    ./bin/make_pm_wave etc/timit.data
  10. extract Mel-frequency cepstral coefficients:
    ./bin/make_mcep etc/timit.data
  11. build cluster unit selection synth:
    festival -b festvox/build_clunits.scm '(build_clunits "etc/timit.data")'
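Before trusting the final clustering step it is worth checking that every prompt made it all the way through. A rough sanity check, assuming the usual festvox layout (lab/ from make_labs, festival/utts/ from build_utts):

# the four counts should match: one wav, one label file, and one utterance per prompt
wc -l etc/timit.data
ls wav/*.wav | wc -l
ls lab/*.lab | wc -l
ls festival/utts/*.utt | wc -l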

Improving Quality

  • fix phoneme labeling - http://sourceforge.net/projects/wavesurfer/
  • tuning a voice - http://www.cstr.ed.ac.uk/emasters/summer_school_2005/tutorial3/tutorial.html

Using Voices

Using the meghan voice

Run the Server

  • open a terminal:
cd /Users/murmur/Desktop/meghan 
festival_server -c meghans_special_sauce.scm
  • To kill the server:
Control-C

Run the Client

  • open a 2nd terminal window:
cd /Users/murmur/Desktop/meghan 
festival_client myfile.txt --ttw --output client_test.wav
  • or call the client from Python:
import os
os.popen("/Applications/festival_2.1/festival/src/main/festival_client /Users/murmur/Desktop/meghan/myfile.txt --ttw --output /Users/murmur/Desktop/meghan/client_test78.wav")
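The client call is easy to wrap in a small helper script so other programs only have to hand over a text file and an output name. A sketch, assuming festival_server is already running and festival_client is on the PATH:

#!/bin/sh
# speak.sh IN.txt OUT.wav -- send a text file to the running festival server
# and write the synthesized speech to OUT.wav (same flags as the client call above)
set -e
festival_client "$1" --ttw --output "$2"

Run it as ./speak.sh myfile.txt client_test.wav.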

Using A Newly Trained Voice

Modify the voice so festival knows it's there

  • append a proclaim_voice declaration to your newly trained voice's uw_us_rdt_clunits.scm in uw_us_rdt_clunits/festvox/:
(proclaim_voice
 'uw_us_rdt_clunits
 '((language english)
   (gender male)
   (dialect american)
   (description
    "This is Robert Twomey trained on CLUNITS, TIMIT databse.")))

(provide 'uw_us_rdt_clunits)

Install voice to festival directory

  • a pre-built voice is available at http://roberttwomey.com/downloads/uw_us_rdt_clunits.tar.gz
  • unpack that file from the festival root directory; it should extract into the correct location
  • alternatively, copy your own newly trained voice to festival/lib/voices/english/
  • the name of the voice directory (e.g. uw_us_rdt_clunits/) needs to match the voice file inside it (e.g. uw_us_rdt_clunits/festvox/uw_us_rdt_clunits.scm)
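A sketch of the download-and-unpack route. The path to the festival tree, and the assumption that the tarball unpacks into lib/voices/english/, are both guesses to adjust for your own install:

# from the festival root directory, fetch and unpack the pre-built voice
cd ~/code/tts/festival
wget http://roberttwomey.com/downloads/uw_us_rdt_clunits.tar.gz
tar -xzf uw_us_rdt_clunits.tar.gz
# the voice file should end up here (directory name and .scm name matching):
ls lib/voices/english/uw_us_rdt_clunits/festvox/uw_us_rdt_clunits.scm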

Configure festival to use your voice by default

  • to set your voice as default (and add a special pause entry), add the following to festival/etc/siteinit.scm:
(set! voice_default 'voice_uw_us_rdt_clunits)

(lex.add.entry '("<break>" n (((pau pau) 0))))

  • the lex.add.entry line adds a new word to the lexicon, <break>, that inserts a pause
  • run festival_server and it will load your new voice by default
  • http://www.cstr.ed.ac.uk/projects/festival/manual/festival_24.html
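A quick check that the default actually changed, without starting the server (assumes festival's --pipe mode, which reads commands from stdin):

# print the configured default voice, then say a short check phrase with it
echo '(print voice_default)' | festival --pipe
echo "default voice check" | festival --tts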

Add a pause as a new lexical entry

  • add the following after (provide 'siteinit) in festival/etc/siteinit.scm:
(voice_uw_us_rdt_clunits)

(lex.add.entry '("<break>" n (((pau pau) 0))))
  • this adds a new word, "<break>", that is synthesized as a brief pause in speech (see the usage sketch below).
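For example, once that entry is loaded the token can be dropped into ordinary text wherever a pause is wanted. A sketch; keep it as a separate, space-delimited word, exactly as it was added to the lexicon:

# "<break>" is looked up in the lexicon and synthesized as a short silence
echo "first phrase <break> second phrase" | festival --tts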

Using Voice on Raspberry Pi

  • install festival with apt-get, or follow these instructions: http://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)
  • these notes assume festival 2.1
  • copy the voice data into /usr/share/festival/voices/english
  • edit /usr/share/festival/voices.scm and add the new voice name uw_us_rdt_clunits at the beginning of the default-voice-priority-list (near the end of the file)
  • your new voice should now be the default for festival.
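A sketch of the copy step on the Pi itself, assuming the trained voice directory has already been copied into the Pi user's home directory:

# put the voice data where the Debian festival package looks for voices
sudo cp -r ~/uw_us_rdt_clunits /usr/share/festival/voices/english/
# after editing /usr/share/festival/voices.scm as described above,
# a plain tts call should come out in the new voice
echo "hello from the raspberry pi" | festival --tts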

RPi with an external I2S DAC

  • change the aplay command festival uses for audio output (from the Scheme prompt, or in a startup file):
(Parameter.set 'Audio_Command "aplay -q -c 2 -t raw -f s16 -r 8000 $FILE")
  • or add it at startup: https://wiki.archlinux.org/index.php/Festival#Usage_with_a_Sound_Server
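To make that setting stick, the same line can go in a festival startup file. A sketch using a per-user ~/.festivalrc (siteinit.scm, as used above, works too):

# append the audio command override to the user's festival startup file
cat >> ~/.festivalrc <<'EOF'
(Parameter.set 'Audio_Command "aplay -q -c 2 -t raw -f s16 -r 8000 $FILE")
EOF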

Tuning phrasing, prosody, etc. with SABLE

  • http://www.cstr.ed.ac.uk/projects/festival/manual/festival_10.html#SEC31

Using Festival

Run the Server

  • from anywhere: festival_server

Run the Client

  • run the client: echo "Do you really want to see all of it?" | festival_client --ttw --output test.wav
  • generates a wave file

Synthesize Speech to Audio Out

  • run festival: echo "test this" | festival --tts
  • plays through speakers.

Render a Text File In Speech

  • run the server
  • run the client: cat ~/Documents/speech\ performance/speech\ performance\ structure.txt | festival_client --ttw --output structure.wav
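If the server/client pair is not needed, the text2wave script that ships with festival can render a text file directly. A sketch; the -eval option just forces the voice and can be dropped if the default voice is already set as above:

# render the text file straight to a wav with the trained voice
text2wave -eval "(voice_uw_us_rdt_clunits)" \
  ~/Documents/speech\ performance/speech\ performance\ structure.txt \
  -o structure.wav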

Phoneme tests

Switch voices:

(voice_kal_diphone)

Switch back:

(voice_uw_us_rdt_clunits)

Pronounce phonemes:

(SayPhones '(pau ch pau m ay n ey m ih z r ah b er t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ow b er t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ow b ah t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ah b ah t pau))
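The four versions can also be auditioned back to back by piping the commands into festival instead of typing them at the prompt. A sketch, again assuming festival's --pipe mode:

# say all four candidate pronunciations in a row with the trained voice
festival --pipe <<'EOF'
(voice_uw_us_rdt_clunits)
(SayPhones '(pau ch pau m ay n ey m ih z r ah b er t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ow b er t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ow b ah t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ah b ah t pau))
EOF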