Language Models' Factuality Depends on the Language of Inquiry
Abstract
Multilingual language models (LMs) are expected to recall factual knowledge consistently across languages, yet they often fail to transfer knowledge between languages even when they possess the correct information in one of the languages. For example, we find that an LM may correctly identify Rashed Al Shashai as being from Saudi Arabia when asked in Arabic, yet consistently fail to do so when asked in English or Swahili. To systematically investigate this limitation, we introduce a benchmark of 10,000 country-related facts across 13 languages and propose three novel metrics, the Factual Recall Score, Knowledge Transferability Score, and Cross-Lingual Factual Knowledge Transferability Score, to quantify factual recall and knowledge transferability in LMs across different languages. Our results reveal fundamental weaknesses in today's state-of-the-art LMs, particularly in cross-lingual generalization: models fail to transfer knowledge effectively across languages, leading to performance that is inconsistent and sensitive to the language of inquiry. Our findings emphasize the need for LMs to recognize language-specific factual reliability and leverage the most trustworthy information across languages. We release our benchmark and evaluation framework to drive future research in multilingual knowledge transfer.
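For intuition, below is a minimal sketch of how per-language factual recall and cross-lingual transferability could be computed from boolean correctness judgments. This is an illustrative assumption, not the paper's exact definitions: the function names, the pairwise-agreement formulation of transferability, and the toy data are all hypothetical; see the paper (arXiv:2502.17955) and the repository for the actual formulas.

```python
# Illustrative sketch only. The paper defines a Factual Recall Score, a
# Knowledge Transferability Score, and a combined cross-lingual score; the
# formulations below are plausible stand-ins, not the authors' definitions.

from itertools import combinations

def factual_recall_score(correct: dict[str, list[bool]]) -> dict[str, float]:
    """Fraction of benchmark facts answered correctly in each language."""
    return {lang: sum(hits) / len(hits) for lang, hits in correct.items()}

def knowledge_transferability_score(correct: dict[str, list[bool]]) -> float:
    """Average pairwise agreement between languages: if a fact is (not)
    recalled in one language, is the same true in the other?
    (Assumed formulation, for illustration.)"""
    pairs = list(combinations(correct, 2))
    agreement = [
        sum(a == b for a, b in zip(correct[l1], correct[l2])) / len(correct[l1])
        for l1, l2 in pairs
    ]
    return sum(agreement) / len(agreement)

# Toy example: 4 facts judged in 3 languages.
judgments = {
    "en": [True, True, False, False],
    "ar": [True, False, True, False],
    "sw": [True, False, False, False],
}
print(factual_recall_score(judgments))             # per-language recall
print(knowledge_transferability_score(judgments))  # cross-lingual consistency
```

A model with perfect per-language recall would trivially score high on transferability; the interesting failure mode the paper highlights is high recall in one language paired with low agreement across languages.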
Community
We present a multilingual benchmark to assess language models' ability to recall and transfer factual knowledge across languages, highlighting weaknesses in cross-lingual generalization in current LMs and proposing new metrics for evaluation.
arXiv: https://arxiv.org/abs/2502.17955
Code and Data: https://github.com/kmrtanmay/X_FaKT
The following similar papers were recommended by the Semantic Scholar API:
- AdaCoT: Rethinking Cross-Lingual Factual Reasoning through Adaptive Chain-of-Thought (2025)
- CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering (2025)
- How does a Multilingual LM Handle Multiple Languages? (2025)
- BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models (2025)
- CoCo-CoLa: Evaluating Language Adherence in Multilingual LLMs (2025)
- Towards Reasoning Ability of Small Language Models (2025)
- Towards Better Understanding of Program-of-Thought Reasoning in Cross-Lingual and Multilingual Environments (2025)