Update README.md
README.md CHANGED
@@ -34,8 +34,8 @@ First, load the processor and a checkpoint of the model:

```python
from transformers import AutoProcessor, SeamlessM4TModel

-processor = AutoProcessor.from_pretrained("ylacombe/hf-seamless-m4t-
-model = SeamlessM4TModel.from_pretrained("ylacombe/hf-seamless-m4t-
+processor = AutoProcessor.from_pretrained("ylacombe/hf-seamless-m4t-large")
+model = SeamlessM4TModel.from_pretrained("ylacombe/hf-seamless-m4t-large")
```

You can seamlessly use this model on text or on audio, to generate either translated text or translated audio.
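For context on how the checkpoint referenced in this change is used end to end, here is a minimal sketch of both output modes, based on the `SeamlessM4TModel` API in `transformers`; treat the `generate_speech` flag and the decoding pattern as assumptions to verify against your installed version:

```python
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("ylacombe/hf-seamless-m4t-large")
model = SeamlessM4TModel.from_pretrained("ylacombe/hf-seamless-m4t-large")

# Prepare an English text input; src_lang/tgt_lang take 3-letter language codes.
text_inputs = processor(text="Hello, my dog is cute", src_lang="eng", return_tensors="pt")

# Translated audio: by default generate() returns a waveform.
audio_array = model.generate(**text_inputs, tgt_lang="fra")[0].cpu().numpy().squeeze()

# Translated text: generate_speech=False returns token ids, which the processor decodes.
output_tokens = model.generate(**text_inputs, tgt_lang="fra", generate_speech=False)
translated_text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
```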
@@ -89,17 +89,17 @@ For example, you can replace the previous snippet with the model dedicated to th

```python
from transformers import SeamlessM4TForSpeechToSpeech
-model = SeamlessM4TForSpeechToSpeech.from_pretrained("ylacombe/hf-seamless-m4t-
+model = SeamlessM4TForSpeechToSpeech.from_pretrained("ylacombe/hf-seamless-m4t-large")
```


### Text

-Similarly, you can generate translated text from text or audio files
+Similarly, you can generate translated text from text or audio files. This time, let's use the dedicated models as an example.

```python
from transformers import SeamlessM4TForSpeechToText
-model = SeamlessM4TForSpeechToText.from_pretrained("ylacombe/hf-seamless-m4t-
+model = SeamlessM4TForSpeechToText.from_pretrained("ylacombe/hf-seamless-m4t-large")
audio_sample = dataset["audio"][0]["array"]
inputs = processor(audios = audio_sample, return_tensors="pt")
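The hunk above stops before the speech-to-text snippet is finished. As a rough sketch of how the two dedicated models it shows can be driven, assuming a 16 kHz mono waveform in place of the `dataset` sample that the README loads in lines this diff does not show (the placeholder array below is purely illustrative):

```python
import numpy as np
from transformers import AutoProcessor, SeamlessM4TForSpeechToSpeech, SeamlessM4TForSpeechToText

processor = AutoProcessor.from_pretrained("ylacombe/hf-seamless-m4t-large")

# Placeholder standing in for `dataset["audio"][0]["array"]`; SeamlessM4T expects 16 kHz mono audio.
audio_sample = np.zeros(16000, dtype=np.float32)
inputs = processor(audios=audio_sample, return_tensors="pt")

# Speech-to-speech: generate() returns a translated waveform.
s2st_model = SeamlessM4TForSpeechToSpeech.from_pretrained("ylacombe/hf-seamless-m4t-large")
translated_audio = s2st_model.generate(**inputs, tgt_lang="fra")[0].cpu().numpy().squeeze()

# Speech-to-text: generate() returns token ids that the processor decodes back to a string.
s2tt_model = SeamlessM4TForSpeechToText.from_pretrained("ylacombe/hf-seamless-m4t-large")
output_tokens = s2tt_model.generate(**inputs, tgt_lang="fra")
translated_text = processor.batch_decode(output_tokens, skip_special_tokens=True)[0]
```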
@@ -111,7 +111,7 @@ And from text:

```python
from transformers import SeamlessM4TForTextToText
-model = SeamlessM4TForTextToText.from_pretrained("ylacombe/hf-seamless-m4t-
+model = SeamlessM4TForTextToText.from_pretrained("ylacombe/hf-seamless-m4t-large")
inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt")

output_tokens = model.generate(**inputs, tgt_lang="fra")
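The hunk ends right at the `generate` call; turning `output_tokens` into a readable string is a one-liner with the processor, sketched here under the same assumptions as above:

```python
# Continuing the text-to-text snippet: decode the generated token ids into the translation.
translated_text = processor.decode(output_tokens[0].tolist(), skip_special_tokens=True)
print(translated_text)  # French translation of "Hello, my dog is cute"
```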