Festival TTS

From Robert-Depot
Revision as of 16:04, 29 November 2012


Getting Started

Learning

Slides on HTS Synthesis

Training Voice Models

Building a Unit Selection Cluster Voice

(from here http://festvox.org/festvox/x3082.html)

  1. make a working directory:
     mkdir uw_us_rdt
     cd uw_us_rdt
  2. uniphone setup:
     $FESTVOXDIR/src/unitsel/setup_clunits uw us rdt uniphone
  3. generate prompts and prompt files:
     festival -b festvox/build_clunits.scm '(build_prompts_waves "etc/uniphone.data")'
  4. record sound using Audacity. Save as 16 kHz, 16-bit mono.
  5. make labels:
     ./bin/make_labs prompt-wav/*.wav
  6. build the utterance structure:
     festival -b festvox/build_clunits.scm '(build_utts "etc/uniphone.data")'
  7. do pitch marking:
     ./bin/make_pm_wave etc/uniphone.data
  8. find Mel Frequency Cepstral Coefficients:
     ./bin/make_mcep etc/uniphone.data
  9. build the cluster unit selection synth:
     festival -b festvox/build_clunits.scm '(build_clunits "etc/uniphone.data")'
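The build steps 3-9 above can be scripted so a failed stage stops the run. A minimal sketch in Python; the function name `clunits_build_commands` is made up here, and it assumes festival is on PATH and you run it from inside the voice directory (the wav glob for make_labs is expanded in Python rather than by the shell):

```python
import glob
import subprocess

def clunits_build_commands(data="etc/uniphone.data"):
    """Return the clunits build pipeline as argument lists, in order."""
    scm = "festvox/build_clunits.scm"
    return [
        ["festival", "-b", scm, '(build_prompts_waves "%s")' % data],
        ["./bin/make_labs"] + sorted(glob.glob("prompt-wav/*.wav")),
        ["festival", "-b", scm, '(build_utts "%s")' % data],
        ["./bin/make_pm_wave", data],
        ["./bin/make_mcep", data],
        ["festival", "-b", scm, '(build_clunits "%s")' % data],
    ]

def run_build(data="etc/uniphone.data"):
    """Run each stage; check=True aborts the pipeline on the first failure."""
    for cmd in clunits_build_commands(data):
        subprocess.run(cmd, check=True)
```

Passing a different data file (e.g. "etc/timit.data") reuses the same pipeline for the TIMIT build below.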

Using a Unit Selection Cluster Voice Synth

  1. from uw_us_rdt directory:
    festival festvox/uw_us_rdt_clunits.scm
  2. in Scheme:
    (voice_uw_us_rdt_clunits) 
  3. (SayText "this is a little test.")
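The same three steps can be done non-interactively, since festival evaluates parenthesized command-line arguments in batch mode. A sketch; `say_with_voice` is a made-up helper name:

```python
import subprocess

def say_with_voice(text, voice="voice_uw_us_rdt_clunits"):
    """Build a batch festival invocation that loads the voice and speaks text."""
    return ["festival", "-b",
            "festvox/uw_us_rdt_clunits.scm",   # the voice definition file
            "(%s)" % voice,                    # select the voice
            '(SayText "%s")' % text.replace('"', '\\"')]

# subprocess.run(say_with_voice("this is a little test."), check=True)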

Building a CLUSTERGEN Statistical Parametric Synthesizer

adapted from http://festvox.org/festvox/c3170.html#AEN3172

  1. make a working directory:
     mkdir uw_us_rdt_arctic
     cd uw_us_rdt_arctic
  2. clustergen setup:
     $FESTVOXDIR/src/clustergen/setup_cg uw us rdt_arctic
  3. copy text into etc/txt.done.data. Use some of the lines from http://www.festvox.org/cmu_arctic/cmuarctic.data
  4. copy audio files into wav/
  5. use
     bin/get_wavs
     to copy the files in, power-normalizing them and converting them to the proper format.
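Before building, it is worth checking that every prompt listed in etc/txt.done.data has a matching recording in wav/. A sketch; `missing_wavs` is a made-up helper, and it assumes the usual txt.done.data format of one Scheme-style line per prompt, `( id "text" )`:

```python
import os
import re

def missing_wavs(data_file="etc/txt.done.data", wav_dir="wav"):
    """Return prompt ids from the data file that have no wav/<id>.wav."""
    ids = []
    with open(data_file) as f:
        for line in f:
            m = re.match(r'\(\s*(\S+)\s+"', line)  # grab the id after the open paren
            if m:
                ids.append(m.group(1))
    return [i for i in ids if not os.path.exists(os.path.join(wav_dir, i + ".wav"))]
```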

Building a Unit Selection Cluster Voice from TIMIT data

(from here http://festvox.org/festvox/x3082.html)

  1. make a working directory:
     mkdir uw_us_rdt_timit
     cd uw_us_rdt_timit
  2. timit setup:
     $FESTVOXDIR/src/unitsel/setup_clunits uw us rdt timit
  3. generate prompts and prompt files:
     festival -b festvox/build_clunits.scm '(build_prompts_waves "etc/timit.data")'
  4. record sound using Audacity. Save as 16 kHz, 16-bit mono.
  5. make labels:
     ./bin/make_labs prompt-wav/*.wav
  6. build the utterance structure:
     festival -b festvox/build_clunits.scm '(build_utts "etc/timit.data")'
  7. do pitch marking:
     ./bin/make_pm_wave etc/timit.data
  8. find Mel Frequency Cepstral Coefficients:
     ./bin/make_mcep etc/timit.data
  9. build the cluster unit selection synth:
     festival -b festvox/build_clunits.scm '(build_clunits "etc/timit.data")'


Improving Quality

Using Voices

Using the Meghan Voice

Run the Server

  • open a terminal:

cd /Users/murmur/Desktop/meghan
festival_server -c meghans_special_sauce.scm

  • To kill the server:

Control-C

Run the Client

  • open a 2nd terminal window:
	cd /Users/murmur/Desktop/meghan 
	festival_client myfile.txt --ttw --output client_test.wav
  • Other stuff (python):
import os
os.popen("/Applications/festival_2.1/festival/src/main/festival_client /Users/murmur/Desktop/meghan/myfile.txt --ttw --output /Users/murmur/Desktop/meghan/client_test78.wav")
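os.popen works, but it discards the client's exit status; on current Python, subprocess is the usual replacement. A sketch with the same machine-specific paths; `client_command` and `render_to_wav` are made-up helper names:

```python
import subprocess

FESTIVAL_CLIENT = "/Applications/festival_2.1/festival/src/main/festival_client"

def client_command(text_file, wav_out, client=FESTIVAL_CLIENT):
    """Build the festival_client invocation: render text_file to wav_out via the server."""
    return [client, text_file, "--ttw", "--output", wav_out]

def render_to_wav(text_file, wav_out):
    # check=True raises CalledProcessError instead of failing silently like os.popen
    subprocess.run(client_command(text_file, wav_out), check=True)
```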

Using A Newly Trained Voice

Modify the voice so festival knows it's there

  • append proclaim message to your newly trained model uw_us_rdt_clunits.scm in uw_us_rdt_clunits/festvox:
(proclaim_voice
 'uw_us_rdt_clunits
 '((language english)
   (gender male)
   (dialect american)
   (description
    "This is Robert Twomey trained on CLUNITS, TIMIT database.")))

(provide 'uw_us_rdt_clunits)

Install voice to festival directory

  • http://roberttwomey.com/downloads/uw_us_rdt_clunits.tar.gz
  • unzip the file from the festival root directory; it should install to the correct location
  • copy your newly trained voice to festival/lib/voices/english/
  • the name of your new voice directory (ex: uw_us_rdt_clunits/) needs to match the voice file (ex: uw_us_rdt_clunits/festvox/uw_us_rdt_clunits.scm)

Configure festival to use your voice by default

  • to set your voice as default (and add a special pause entry), add the following to festival/etc/siteinit.scm:
(set! voice_default 'voice_uw_us_rdt_clunits)

(lex.add.entry '("<break>" n (((pau pau) 0))))

Using Voice on Raspberry Pi

  • install with apt-get, or follow these instructions: http://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis)
  • this assumes festival 2.1
  • copy the voice data into /usr/share/festival/voices/english
  • edit voices.scm and add the new voice uw_us_rdt_clunits at the beginning of the default-voice-priority-list (near the end of the file)
  • your new voice should now be the default for festival

Tuning phrasing, prosody, etc with SABLE

Using Festival

Run the Server

  • from anywhere: festival_server

Run the Client

  • run the client: echo "Do you really want to see all of it?" | festival_client --ttw --output test.wav
  • generates a wave file
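A quick way to confirm the client actually produced audio is to read the wav header. A sketch using Python's standard-library wave module; `wav_summary` is a made-up helper name:

```python
import wave

def wav_summary(path):
    """Return (sample_rate, channels, duration_seconds) for a wav file."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        return rate, w.getnchannels(), w.getnframes() / float(rate)
```

For a voice built as above you would expect a 16000 Hz mono file and a nonzero duration.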

Synthesize Speech to Audio Out

  • run festival: echo "test this" | festival --tts
  • plays through speakers.

Render a Text File In Speech

  • run the server
  • run the client: cat ~/Documents/speech\ performance/speech\ performance\ structure.txt | festival_client --ttw --output structure.wav
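For longer texts it can help to render one sentence per wav file instead of one large file. A sketch that splits naively on sentence-ending punctuation and builds one festival_client call per chunk (each sentence would be piped to the client's stdin, as in the echo example above); `chunk_commands` and the output naming are made up here:

```python
import re

def chunk_commands(text, prefix="structure"):
    """Pair each sentence with a festival_client command writing <prefix>_<n>.wav."""
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    return [(s, ["festival_client", "--ttw", "--output", "%s_%03d.wav" % (prefix, i)])
            for i, s in enumerate(sentences)]
```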