---
license: mit
language:
- en
pipeline_tag: text-to-speech
tags:
- audiocraft
- audiogen
- styletts2
- audio
- synthesis
- shift
- audeering
- dkounadis
- sound
- scene
- acoustic-scene
- audio-generation
---
# Affective TTS - SoundScape
- Affective TTS via [SHIFT TTS tool](https://github.com/audeering/shift)
- Soundscapes (e.g. trees, water, leaves) generated via [AudioGen](https://huggingface.co/dkounadis/artificial-styletts2/discussions/3)
- `landscape2soundscape.py` shows how to overlay TTS & soundscape onto images
- `134` built-in voices
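The soundscape overlay boils down to mixing an attenuated background track under the speech track. A minimal sketch of such a mix, assuming both tracks are mono sample lists with values in `[-1, 1]` (the function name and gain value are illustrative, not the actual `landscape2soundscape.py` implementation):

```python
def overlay(speech, soundscape, bg_gain=0.3):
    """Mix a background soundscape under a speech track.

    Both inputs are lists of float samples in [-1, 1]; the shorter
    track is zero-padded, the sum is clipped back into [-1, 1].
    """
    n = max(len(speech), len(soundscape))
    mixed = []
    for i in range(n):
        s = speech[i] if i < len(speech) else 0.0
        b = soundscape[i] if i < len(soundscape) else 0.0
        # attenuate the soundscape so speech stays intelligible, then clip
        mixed.append(max(-1.0, min(1.0, s + bg_gain * b)))
    return mixed

# tiny demo: 4 speech samples over a longer constant soundscape
print(overlay([0.5, -0.5, 0.0, 1.0], [1.0] * 6, bg_gain=0.2))
```

Real audio would of course arrive as 16 kHz waveforms (e.g. loaded with `soundfile` or `torchaudio`), but the mixing step is the same.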
## Available Voices
Listen to the available voices!
## Flask API
Install
```
virtualenv --python=python3 ~/.envs/.my_env
source ~/.envs/.my_env/bin/activate
cd shift/
pip install -r requirements.txt
```
Flask
```
CUDA_DEVICE_ORDER=PCI_BUS_ID HF_HOME=./hf_home CUDA_VISIBLE_DEVICES=2 python api.py
```
## Inference
The following commands require `api.py` to be running, e.g. in a `tmux` session.
### Text 2 Speech
```
# Basic TTS - See Available Voices
python tts.py --text sample.txt --voice "en_US/m-ailabs_low#mary_ann" --affective
# voice cloning
python tts.py --text sample.txt --native assets/native_voice.wav
```
### Image 2 Video
```
# Make a video narrating an image - all TTS args above apply here too!
python tts.py --text sample.txt --image assets/image_from_T31.jpg
```
### Video 2 Video
```
# Video Dubbing - from time-stamped subtitles (.srt)
python tts.py --text assets/head_of_fortuna_en.srt --video assets/head_of_fortuna.mp4
# Video narration - from text description (.txt)
python tts.py --text assets/head_of_fortuna_GPT.txt --video assets/head_of_fortuna.mp4
```
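Dubbing from an `.srt` file means aligning each synthesized segment to its subtitle's time stamps. A minimal sketch of reading those cues with the standard library (a hypothetical helper, not the parser `tts.py` actually uses):

```python
import re

def parse_srt(text):
    """Parse SubRip (.srt) cues into (start_sec, end_sec, text) tuples."""
    ts = r"(\d{2}):(\d{2}):(\d{2}),(\d{3})"
    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        m = re.search(ts + r"\s*-->\s*" + ts, block)
        if not m:
            continue
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = h1 * 3600 + m1 * 60 + s1 + ms1 / 1000
        end = h2 * 3600 + m2 * 60 + s2 + ms2 / 1000
        # cue text is everything after the "start --> end" timing line
        idx = next(i for i, line in enumerate(lines) if "-->" in line)
        cues.append((start, end, " ".join(lines[idx + 1:])))
    return cues

srt = """1
00:00:01,000 --> 00:00:03,500
The head of Fortuna."""
print(parse_srt(srt))  # [(1.0, 3.5, 'The head of Fortuna.')]
```

Each cue's duration (`end - start`) tells the dubber how much time the TTS audio for that line may occupy.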
### Landscape 2 Soundscape
```
# TTS & soundscape - overlay onto .mp4
python landscape2soundscape.py
```
## Examples
Substitute a native voice via TTS:

[Watch on YouTube](https://www.youtube.com/watch?v=tmo2UbKYAqc)

Same video, with the native voice replaced by an English TTS voice of similar emotion:

[Watch on YouTube](https://www.youtube.com/watch?v=geI1Vqn4QpY)
## Video Dubbing
Video dubbing from time-stamped subtitles (`.srt`):

[Watch on YouTube](https://www.youtube.com/watch?v=bpt7rOBENcQ)
Generate dubbed video:
```
python tts.py --text assets/head_of_fortuna_en.srt --video assets/head_of_fortuna.mp4
```
## Joint Application of D3.1 & D3.2

Create a video from an image and a text:
```
python tts.py --text sample.txt --image assets/image_from_T31.jpg
```
# Live Demo - Paplay
Flask
```
CUDA_DEVICE_ORDER=PCI_BUS_ID HF_HOME=/data/dkounadis/.hf7/ CUDA_VISIBLE_DEVICES=4 python live_api.py
```
Client (Ubuntu)
```
python live_demo.py # will ask text input & play soundscape
```
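On the client side, playback via PulseAudio comes down to invoking `paplay` on the synthesized `.wav`. A minimal sketch of building that invocation (the helper name and the optional sink are illustrative, not taken from `live_demo.py`):

```python
import shlex

def paplay_command(wav_path, device=None):
    """Build the paplay invocation for playing a synthesized .wav.

    device, if given, is a PulseAudio sink name passed via --device.
    """
    cmd = ["paplay", wav_path]
    if device:
        cmd += ["--device", device]
    return cmd

print(shlex.join(paplay_command("out.wav")))  # paplay out.wav
```

The resulting list can be passed directly to `subprocess.run` on any machine where PulseAudio is available.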