non-English performance

#16
by book2957 - opened

In the Model Card: "The model often thinks in English when prompted with questions in non-English languages. We observe that this sometimes affects the output quality in non-English languages."

Can you expand on what exactly you are observing? Are the outputs more likely to be nonsensical, to use English-isms, or just to be wrong? I'm trying to pin down cross-lingual performance in LLMs generally, and this behavior is interesting. Appreciate any insight.
