non-English performance
#16
by
book2957
- opened
In the Model Card: "The model often thinks in English when prompted with questions in non-English languages. We observe that this sometimes affects the output quality in non-English languages."
Can you expand on what exactly you are observing? Are the outputs more likely to be nonsensical? To use English-isms? Or just wrong? I'm trying to pin down (general LLM) cross-lingual performance, and this behavior is interesting. I'd appreciate any insight.