This tokenizer was trained on a small corpus of concatenated ARPAbet pronunciation tokens plus punctuation, produced with the Python g2p_en library over the entire synthbot/pony-speech dataset and 240k lines of generics_kb_best from community-datasets/generics_kb. For example, "But one on one, let's clean it." becomes "BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .". It uses the Unigram algorithm with a vocabulary size of 1024.
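
Below is a minimal sketch of how input text can be converted into this per-word concatenated ARPAbet format with g2p_en. The exact cleaning applied to the training corpus is not documented here, and the `to_arpabet` helper is illustrative rather than part of the dataset tooling:

```python
from g2p_en import G2p  # pip install g2p_en

g2p = G2p()

def to_arpabet(text: str) -> str:
    """Concatenate the phonemes of each word; keep punctuation as separate tokens."""
    phonemes = g2p(text)  # e.g. ['B', 'AH1', 'T', ' ', 'W', 'AH1', 'N', ...]
    words, current = [], []
    for p in phonemes:
        if p == " ":          # g2p_en emits a space token between words
            if current:
                words.append("".join(current))
                current = []
        else:
            current.append(p)
    if current:
        words.append("".join(current))
    return " ".join(words)

print(to_arpabet("But one on one, let's clean it."))
# Roughly: BAH1T WAH1N AA1N WAH1N , LEH1TS KLIY1N IH1T .
# (handling of the clitic in "let's" may differ slightly depending on g2p_en's tokenizer)
```

Strings in this format can then be encoded with the trained Unigram tokenizer, for example via `Tokenizer.from_file("tokenizer.json")` from the Hugging Face tokenizers library, assuming the repository ships a standard tokenizer.json.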
