---
title: "FilBench Leaderboard"
emoji: 🥇
colorFrom: green
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: true
license: apache-2.0
short_description: An Open LLM Leaderboard for Filipino
sdk_version: 5.19.0
---

# HF Leaderboard Backend

This is a fork of the [leaderboard demo from HuggingFace](https://huggingface.co/demo-leaderboard-backend) with some additional scripts for parsing the results from our evaluation runs.

## Set-up and installation

To start development, clone the repository and install the dependencies.
This assumes you have `uv` installed.
Make sure to also install the pre-commit hooks so that formatting is uniform across the codebase.

```sh
git clone git@github.com:filbench/hf-leaderboard-backend.git
cd hf-leaderboard-backend
uv sync
source .venv/bin/activate
pre-commit install
```
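
To check that the hooks are set up correctly, you can run them manually across the whole codebase (a standard `pre-commit` invocation, not specific to this repository):

```sh
# Run every configured pre-commit hook against all files,
# not just the ones staged for commit.
pre-commit run --all-files
```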

To view the leaderboard **locally**, run the following command:

```sh
gradio app.py
```

This will launch the leaderboard (by default at `http://localhost:7860`) in hot-reload mode, so you can see changes as you edit the source code.
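
If the default port is already taken, Gradio reads the standard `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT` environment variables, so you can serve the app on a different address, for example:

```sh
# Bind to all interfaces on port 7861 instead of the default 127.0.0.1:7860.
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7861 gradio app.py
```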

## Updating the HuggingFace Space

All development should happen in this GitHub repository.
If you want to update the HuggingFace Space, add a remote pointing to the Space and push all changes.
You might need to [add your machine's SSH key](https://huggingface.co/settings/keys) to HuggingFace.

```sh
git remote add hf git@hf.co:spaces/UD-Filipino/filbench-leaderboard
git push hf main
```
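
You can confirm that both remotes are configured before pushing:

```sh
# List the configured remotes and their URLs; you should see
# both `origin` (GitHub) and `hf` (the HuggingFace Space).
git remote -v
```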

## Cite

If you're using [FilBench](https://aclanthology.org/2025.emnlp-main.127/), please cite our work:

```bibtex
@inproceedings{miranda-etal-2025-filbench,
    title = "{F}il{B}ench: Can {LLM}s Understand and Generate {F}ilipino?",
    author = "Miranda, Lester James Validad  and
      Aco, Elyanah  and
      Manuel, Conner G.  and
      Cruz, Jan Christian Blaise  and
      Imperial, Joseph Marvin",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.127/",
    doi = "10.18653/v1/2025.emnlp-main.127",
    pages = "2496--2529",
    ISBN = "979-8-89176-332-6",
    abstract = "Despite the impressive performance of LLMs on English-based tasks, little is known about their capabilities in specific languages such as Filipino. In this work, we address this gap by introducing FilBench, a Filipino-centric benchmark designed to evaluate LLMs across a diverse set of tasks and capabilities in Filipino, Tagalog, and Cebuano. We carefully curate the tasks in FilBench to reflect the priorities and trends of NLP research in the Philippines such as Cultural Knowledge, Classical NLP, Reading Comprehension, and Generation. By evaluating 27 state-of-the-art LLMs on FilBench, we find that several LLMs suffer from reading comprehension and translation capabilities. Our results indicate that FilBench is challenging, with the best model, GPT-4o, achieving only a score of 72.23{\%}. Moreover, we also find that models trained specifically for Southeast Asian languages tend to underperform on FilBench, with the highest-performing model, SEA-LION v3 70B, achieving only a score of 61.07{\%}. Our work demonstrates the value of curating language-specific LLM benchmarks to aid in driving progress on Filipino NLP and increasing the inclusion of Philippine languages in LLM development."
}
```