Festival TTS

Getting Started

Learning

Training Voice Models

Building a Unit Selection Cluster Voice

(from here http://festvox.org/festvox/x3086.html)

  1. mkdir uw_us_rdt
  2. cd uw_us_rdt
  3. uniphone setup:
     $FESTVOXDIR/src/unitsel/setup_clunits uw us rdt uniphone
  4. generate prompts and prompt files:
    festival -b festvox/build_clunits.scm '(build_prompts_waves "etc/uniphone.data")'
  5. record sound using Audacity; save as 16 kHz, 16-bit mono.
  6. make labels:
    ./bin/make_labs prompt-wav/*.wav
  7. build utterance structure:
    festival -b festvox/build_clunits.scm '(build_utts "etc/uniphone.data")'
  8. do pitch marking:
    ./bin/make_pm_wave etc/uniphone.data
  9. find Mel Frequency Cepstral Coefficients:
    ./bin/make_mcep etc/uniphone.data
  10. build cluster unit selection synth:
    festival -b festvox/build_clunits.scm '(build_clunits "etc/uniphone.data")'

Using a Unit Selection Cluster Voice Synth

  1. from uw_us_rdt directory:
    festival festvox/uw_us_rdt_clunits.scm
  2. in Scheme:
    (voice_uw_us_rdt_clunits) 
  3. (SayText "this is a little test.")
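
The voice can also render straight to a file instead of the speakers. A minimal sketch using Festival's standard utterance calls (the output filename here is arbitrary):

    ;; build an utterance from text, synthesize it with the current voice,
    ;; and write the result out as a RIFF wav file
    (set! utt1 (Utterance Text "this is a little test."))
    (utt.synth utt1)
    (utt.save.wave utt1 "little_test.wav" 'riff)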

Building a CLUSTERGEN Statistical Parametric Synthesizer

(adapted from http://festvox.org/festvox/c3174.html#AEN3176)

  1. mkdir uw_us_rdt_arctic
  2. cd uw_us_rdt_arctic
  3. $FESTVOXDIR/src/clustergen/setup_cg uw us rdt_arctic
  4. copy prompt text into
    etc/txt.done.data
    (some of the lines from http://www.festvox.org/cmu_arctic/cmuarctic.data can be used; see the format sketch after this list)
  5. copy audio files into
    wav/
  6. use
    bin/get_wavs
    to copy the files in, power-normalize them, and convert them to the proper format.
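
Each line in etc/txt.done.data pairs a file id with its prompt text, in the same s-expression format used by cmuarctic.data, for example:

    ( arctic_a0001 "Author of the danger trail, Philip Steels, etc." )

The id on each line should match the name of the corresponding file in wav/ (without the .wav extension).

The remaining build steps are covered in the festvox chapter linked above; once they finish, the new voice should load the same way as the clunits voice. A sketch, assuming the generated voice follows the usual festvox naming and is called uw_us_rdt_arctic_cg:

    ;; inside festival, started from the uw_us_rdt_arctic directory
    ;; (the voice name is an assumption based on the festvox naming convention)
    (load "festvox/uw_us_rdt_arctic_cg.scm")
    (voice_uw_us_rdt_arctic_cg)
    (SayText "this is a little test.")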

Building a Unit Selection Cluster Voice from TIMIT data

(from here http://festvox.org/festvox/c2645.html#AEN2716)

  1. mkdir uw_us_rdt_timit
  2. cd uw_us_rdt_timit
  3. timit setup:
     $FESTVOXDIR/src/unitsel/setup_clunits uw us rdt timit
  4. generate prompts and prompt files:
    festival -b festvox/build_clunits.scm '(build_prompts_waves "etc/timit.data")'
  5. record sound using Audacity; save as 16 kHz, 16-bit mono.
  6. copy sound files from the recording directory into the voice directory:
     ./bin/get_wavs ~/Sounds/TIMIT_Training_Data/warehouse_omni/*.wav 
  7. make labels:
    ./bin/make_labs prompt-wav/*.wav
  8. build utterance structure:
    festival -b festvox/build_clunits.scm '(build_utts "etc/timit.data")'
  9. do pitch marking:
    ./bin/make_pm_wave etc/timit.data
  10. find Mel Frequency Cepstral Coefficients:
    ./bin/make_mcep etc/timit.data
  11. build cluster unit selection synth:
    festival -b festvox/build_clunits.scm '(build_clunits "etc/timit.data")'

Improving Quality

Using Voices

Using the meghan voice

Run the Server

  • open a terminal and start the server with its config script (a sketch of what such a script can set follows this list):
cd /Users/murmur/Desktop/meghan 
festival_server -c meghans_special_sauce.scm
  • To kill the server:
Control-C
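
A festival_server config script can set things like the listening port, the hosts allowed to connect, and the voice used for client requests. A sketch of that kind of script (an illustration, not the actual contents of meghans_special_sauce.scm):

    ;; example festival_server configuration -- an illustration only
    (set! server_port 1314)                               ; default festival server port
    (set! server_access_list '("localhost" "127.0.0.1"))  ; clients allowed to connect
    (voice_uw_us_rdt_clunits)                             ; voice used to answer requests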

Run the Client

  • open a 2nd terminal window:
cd /Users/murmur/Desktop/meghan 
festival_client myfile.txt --ttw --output client_test.wav
  • calling the client from Python:
import os
# run festival_client on a text file and write the synthesized speech to a wav file
os.popen("/Applications/festival_2.1/festival/src/main/festival_client /Users/murmur/Desktop/meghan/myfile.txt --ttw --output /Users/murmur/Desktop/meghan/client_test78.wav")

Using A Newly Trained Voice

Modify the voice so festival knows it's there

  • append a proclaim_voice message to your newly trained voice's uw_us_rdt_clunits.scm in uw_us_rdt_clunits/festvox:
(proclaim_voice
 'uw_us_rdt_clunits
 '((language english)
   (gender male)
   (dialect american)
   (description
    "This is Robert Twomey trained on CLUNITS, TIMIT databse.")))

(provide 'uw_us_rdt_clunits)

Install voice to festival directory

  • http://roberttwomey.com/downloads/uw_us_rdt_clunits.tar.gz
  • extract the archive from the festival root directory; it should install to the correct location
  • copy your newly trained voice to festival/lib/voices/english/
  • the name of your new voice directory (ex: uw_us_rdt_clunits/) needs to match the voice file (ex: uw_us_rdt_clunits/festvox/uw_us_rdt_clunits.scm)
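
After the voice is installed, festival should be able to report it. Assuming the standard voice.list and voice.describe calls from the Festival manual:

(voice.list)                          ; names of the voices festival knows about
(voice.describe 'uw_us_rdt_clunits)   ; prints the description proclaimed above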

Configure festival to use your voice by default

  • to set your voice as default (and add a special pause entry), add the following to festival/etc/siteinit.scm:
(set! voice_default 'voice_uw_us_rdt_clunits)

(lex.add.entry '("<break>" n (((pau pau) 0))))

Add a pause as a new lexical entry

  • add the following after (provide 'siteinit) in festival/etc/siteinit.scm:
(voice_uw_us_rdt_clunits)

(lex.add.entry '("<break>" n (((pau pau) 0))))
  • this will add a new word, "<break>", that is synthesized as a brief pause in speech.
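
With the entry in place, the token can be used inside ordinary text. A one-line sketch, assuming the tokenizer passes "<break>" through as a single word, as the entry above intends:

(SayText "first phrase <break> second phrase")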

Using Voice on Raspberry Pi

  • install festival with apt-get; alternatively, follow these instructions: http://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)
  • the steps below are for festival 2.1
  • copy the voice data into /usr/share/festival/voices/english
  • edit /usr/share/festival/voices.scm and add the new voice uw_us_rdt_clunits at the beginning of default-voice-priority-list, near the end of the file (see the sketch after this list)
  • now your new voice should be the default for festival.
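
One way to make that edit without touching the existing list literal is to prepend the new voice at the end of voices.scm. A sketch, assuming the voice directory was copied in as described above:

;; appended at the end of /usr/share/festival/voices.scm:
;; put the new voice at the front of the priority list so it becomes the default
(set! default-voice-priority-list
      (cons 'uw_us_rdt_clunits default-voice-priority-list))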

Raspberry Pi with an external I2S DAC

  • change the aplay command within Festival's Scheme configuration:
(Parameter.set 'Audio_Command "aplay -q -c 2 -t raw -f s16 -r 8000 $FILE")
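
If playback still goes to the default device, Festival may also need to be told to use an external command rather than its built-in audio output; Audio_Method is the standard parameter for that:

(Parameter.set 'Audio_Method 'Audio_Command)  ; route playback through the aplay command above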

Tuning phrasing, prosody, etc. with SABLE

Using Festival

Run the Server

  • from anywhere: festival_server

Run the Client

  • run the client: echo "Do you really want to see all of it?" | festival_client --ttw --output test.wav
  • generates a wave file

Synthesize Speech to Audio Out

  • run festival: echo "test this" | festival --tts
  • plays through speakers.

Render a Text File In Speech

  • run the server
  • run the client: cat ~/Documents/speech\ performance/speech\ performance\ structure.txt | festival_client --ttw --output structure.wav

Phoneme tests

Switch voices:

(voice_kal_diphone)

Switch back:

(voice_uw_us_rdt_clunits)

Pronounce phonemes:

(SayPhones '(pau ch pau m ay n ey m ih z r ah b er t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ow b er t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ow b ah t pau))
(SayPhones '(pau ch pau m ay n ey m ih z r ah b ah t pau))
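
To keep one of these renderings for comparison, the phone list can be synthesized into an utterance and written to disk instead of played. A minimal sketch using Festival's standard utterance calls (the filename is arbitrary):

(set! u (Utterance Phones '(pau ch pau m ay n ey m ih z r ah b er t pau)))  ; same phones as the first test above
(utt.synth u)                            ; synthesize with the currently selected voice
(utt.save.wave u "name_test.wav" 'riff)  ; write out as a RIFF wav file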