IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding
Abstract
Spoken by more than 1.5 billion people in the Indian subcontinent, Indic languages present unique challenges and opportunities for natural language processing (NLP) research due to their rich cultural heritage, linguistic diversity, and complex structures. IndicMMLU-Pro is a comprehensive benchmark designed to evaluate Large Language Models (LLMs) across Indic languages, building upon the MMLU-Pro (Massive Multitask Language Understanding) framework. Covering major languages such as Hindi, Bengali, Gujarati, Marathi, Kannada, Punjabi, Tamil, Telugu, and Urdu, our benchmark addresses the unique challenges and opportunities presented by the linguistic diversity of the Indian subcontinent. This benchmark encompasses a wide range of tasks in language comprehension, reasoning, and generation, meticulously crafted to capture the intricacies of Indian languages. IndicMMLU-Pro provides a standardized evaluation framework to push the research boundaries in Indic language AI, facilitating the development of more accurate, efficient, and culturally sensitive models. This paper outlines the benchmark's design principles, task taxonomy, and data collection methodology, and presents baseline results from state-of-the-art multilingual models.
Community
IndicMMLU-Pro is a benchmark designed to evaluate Large Language Models (LLMs) across nine major Indic languages, adapting the MMLU-Pro framework to assess linguistic comprehension, reasoning, and generative capabilities.
- Comprehensive Indic NLP Benchmark: Introduces IndicMMLU-Pro, a multilingual benchmark for nine Indic languages (Hindi, Bengali, Telugu, Marathi, Tamil, Gujarati, Urdu, Kannada, and Punjabi), adapted from MMLU-Pro for robust AI evaluation.
- High-Quality Translation & Evaluation Pipeline: Utilizes IndicTrans2 for dataset creation, back-translation for quality assurance, and multiple validation metrics (chrF++, BLEU, METEOR, TER, SacreBLEU) to ensure linguistic fidelity.
- Baseline Model Performance Analysis: Establishes performance benchmarks across state-of-the-art multilingual models (GPT-4o, IndicBERT, MuRIL, XLM-RoBERTa, etc.), revealing substantial performance gaps and highlighting areas for improvement in Indic NLP.
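The translation-quality metrics named above (chrF++, BLEU, TER) are normally computed with an off-the-shelf library such as sacrebleu; the sketch below is a minimal, self-contained illustration of the core idea behind chrF (character n-gram F-score), which the chrF++ metric extends with word n-grams. It is an assumption-laden simplification for intuition, not the official implementation, and it omits chrF++'s word-order component and smoothing details.

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    # Character n-grams with whitespace removed, as in chrF.
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified chrF: averaged character n-gram precision/recall,
    combined into an F-beta score (beta=2 weights recall higher)."""
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        precisions.append(overlap / sum(hyp.values()))
        recalls.append(overlap / sum(ref.values()))
    if not precisions:
        return 0.0
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)
```

In a back-translation QA loop like the one described above, a score would be computed between the original English question and its round-trip translation, with low-scoring items flagged for human review.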
This is an automated message from the Librarian Bot. I found the following papers similar to this paper. The following papers were recommended by the Semantic Scholar API:
- Setting Standards in Turkish NLP: TR-MMLU for Large Language Model Evaluation (2024)
- Enabling Low-Resource Language Retrieval: Establishing Baselines for Urdu MS MARCO (2024)
- When LLMs Struggle: Reference-less Translation Evaluation for Low-resource Languages (2025)
- Can xLLMs Understand the Structure of Dialog? Exploring Multilingual Response Generation in Complex Scenarios (2025)
- A Review of the Marathi Natural Language Processing (2024)
- Can Large Language Models Predict the Outcome of Judicial Decisions? (2025)
- MIT-10M: A Large Scale Parallel Corpus of Multilingual Image Translation (2024)