---
license: mit
tags:
- urdu-tts
- text-to-speech
- urdu-text-to-speech
- urdu-voice-cloning
---
# How to Use This Model

## Installation

1) `pip install coqui-tts`
2) Locate `TTS/tts/layers/xtts/tokenizers.py` in your site-packages directory.
3) Replace that file with the `tokenizers.py` provided in this repository.
4) You should be good to go!
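The exact location of `tokenizers.py` depends on your environment. As a rough sketch (an addition of mine, not part of the original instructions; it assumes a standard pip install, not an editable one), you can compute the path from Python:

```python
import os
import sysconfig

# Site-packages directory of the current Python environment.
site_packages = sysconfig.get_paths()["purelib"]
tokenizer_path = os.path.join(
    site_packages, "TTS", "tts", "layers", "xtts", "tokenizers.py"
)
print(tokenizer_path)

# To apply step 3, copy this repository's tokenizers.py over it, e.g.:
# shutil.copyfile("tokenizers.py", tokenizer_path)
```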

Note: The model may not perform well on very long inputs. You can write your own text splitter to break longer inputs into shorter sentences as needed.
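For reference, a minimal splitter sketch (my own assumption, not part of the model: it splits on Urdu and Latin sentence-final punctuation, so adapt the pattern to your text):

```python
import re

def split_sentences(text: str) -> list[str]:
    """Split text on sentence-final punctuation, keeping the marks.

    "۔" is the Urdu full stop and "؟" the Urdu question mark; Latin
    ".", "!", and "?" are handled as well.
    """
    parts = re.split(r"(?<=[۔.!?؟])\s+", text.strip())
    return [p for p in parts if p]

sentences = split_sentences("یہ پہلا جملہ ہے۔ یہ دوسرا جملہ ہے۔ ٹھیک ہے؟")
print(sentences)
```

The resulting list can be fed directly to the inference loop below as `tts_texts`.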


# Example

## Source Voice


<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/667b4e78a883effe7a66ebf4/mVjGFPkQgUv0by3kLTIbG.wav"></audio>

## Generated Voice


<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/667b4e78a883effe7a66ebf4/3-dsxS6GVk43iAPJoibLe.wav"></audio>

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/667b4e78a883effe7a66ebf4/UDlJ_Pv1vyB8RXI_bEzVl.wav"></audio>

# Inference Code

```python
import torch
from tqdm import tqdm
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

device = "cuda:0" if torch.cuda.is_available() else "cpu"
xtts_checkpoint = "model.pth"
xtts_config = "config.json"
xtts_vocab = "vocab.json"


config = XttsConfig()
config.load_json(xtts_config)
XTTS_MODEL = Xtts.init_from_config(config)
XTTS_MODEL.load_checkpoint(config, checkpoint_path=xtts_checkpoint, vocab_path=xtts_vocab, use_deepspeed=False)
XTTS_MODEL.to(device)

print("Model loaded successfully!")

# In case you are cloning from WhatsApp voice notes:
from pydub import AudioSegment

audio = AudioSegment.from_file("input-4.ogg", format="ogg")
audio.export("output.wav", format="wav")
print("Conversion complete!")

# Inference
tts_text = f"""یہ ٹی ٹی ایس کیسا ہے؟ اس کے بارے میں کچھ بتائیں"""
speaker_audio_file = "output.wav"
lang = "ur"

gpt_cond_latent, speaker_embedding = XTTS_MODEL.get_conditioning_latents(
    audio_path=[speaker_audio_file],
    gpt_cond_len=XTTS_MODEL.config.gpt_cond_len,
    max_ref_length=XTTS_MODEL.config.max_ref_len,
    sound_norm_refs=XTTS_MODEL.config.sound_norm_refs,
)

tts_texts = [tts_text]
wav_chunks = []
for text in tqdm(tts_texts):
    wav_chunk = XTTS_MODEL.inference(
        text=text,
        language=lang,
        gpt_cond_latent=gpt_cond_latent,
        speaker_embedding=speaker_embedding,
        temperature=0.1,
        length_penalty=0.1,
        repetition_penalty=10.0,
        top_k=10,
        top_p=0.3,
    )
    wav_chunks.append(torch.tensor(wav_chunk["wav"]))

out_wav = torch.cat(wav_chunks, dim=0).unsqueeze(0).cpu()

from IPython.display import Audio
Audio(out_wav, rate=24000)

```
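Outside a notebook, `IPython.display.Audio` won't play anything. A small stdlib sketch for writing the waveform to disk instead (my addition, assuming the 24 kHz mono float output above; `out_wav.squeeze(0).tolist()` gives the sample list):

```python
import struct
import wave

def save_wav(samples, path, sample_rate=24000):
    """Write float samples in [-1.0, 1.0] as 16-bit mono PCM."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)  # 16-bit
        f.setframerate(sample_rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        f.writeframes(frames)

save_wav([0.0] * 2400, "xtts_output.wav")  # 0.1 s of silence as a smoke test
```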