For each voice, the given grades are intended to be estimates of the quality and quantity of its associated training data, both of which impact overall inference quality.
Voice quality is also subjective: the same voice will sound better or worse to different listeners.
Support for non-English languages may be absent or thin due to weak G2P and/or a lack of training data. Some languages are represented by only a small handful of voices, or even a single voice (French).
Most voices perform best in a "goldilocks range" of roughly 100-200 tokens out of ~500 possible, and may perform worse at the extremes:
- Weakness on short utterances, especially under 10-20 tokens. The root cause may be a lack of short-utterance training data and/or the model architecture. One inference-time mitigation is to bundle shorter utterances together.
- Rushing on long utterances, especially over 400 tokens. Chunking text into shorter utterances or adjusting the speed parameter can mitigate this.
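Both mitigations above amount to repacking text so each synthesized chunk lands in the goldilocks range. A minimal sketch of one way to do this: split at sentence boundaries, then greedily merge sentences up to a token budget, so very short sentences get bundled with neighbors instead of being synthesized alone. The `count_tokens` callable is a stand-in assumption for the model's real tokenizer, and the function name and defaults are illustrative, not part of any official API.

```python
import re

def chunk_text(text, count_tokens, target=200):
    """Split text at sentence boundaries, then greedily merge
    sentences into chunks of at most `target` tokens each, so
    no chunk is far outside the ~100-200 token sweet spot."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, buf, buf_len = [], [], 0
    for sent in sentences:
        n = count_tokens(sent)
        # Flush the current buffer before it would exceed the budget.
        if buf and buf_len + n > target:
            chunks.append(" ".join(buf))
            buf, buf_len = [], 0
        buf.append(sent)
        buf_len += n
    if buf:
        chunks.append(" ".join(buf))
    return chunks

# Crude whitespace token estimate, standing in for the real tokenizer.
approx_tokens = lambda s: len(s.split())
```

Each returned chunk can then be synthesized separately; sentence-level splits keep prosody natural at chunk boundaries, at the cost of occasionally undershooting the target when a single sentence is long.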
Target Quality
How high quality is the reference voice? This grade may be impacted by audio quality, artifacts, compression, and sample rate.
How well do the text labels match the audio? Text/audio misalignment (e.g. from hallucinations) will lower this grade.
Training Duration
How much audio was seen during training? Smaller durations result in a lower overall grade.