Commit 370c305
Parent(s): 1a35bfb
Update README.md
README.md CHANGED
@@ -239,7 +239,7 @@ The tokenizers for these models were built using the text transcripts of the tra
 
 ### Datasets
 
-The model was trained on
+The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
 
 The training dataset consists of private subset with 40K hours of English speech plus 25K hours from the following public datasets:
 