---
language:
- fr
- en
license: apache-2.0
library_name: transformers
tags:
- lucie
- lucie-boosted
- llama
datasets:
- jpacifico/french-orca-dpo-pairs-revised
model-index:
- name: Lucie-Boosted-7B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 25.66
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Lucie-Boosted-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 10.26
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Lucie-Boosted-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.76
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Lucie-Boosted-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 2.24
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Lucie-Boosted-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.4
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Lucie-Boosted-7B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 7.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=jpacifico/Lucie-Boosted-7B-Instruct
      name: Open LLM Leaderboard
---
### Lucie-Boosted-7B-Instruct
Post-training optimization of [OpenLLM-France/Lucie-7B-Instruct](https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct), the instruct version of the Lucie-7B foundation model.
DPO fine-tuning using the [jpacifico/french-orca-dpo-pairs-revised](https://huggingface.co/datasets/jpacifico/french-orca-dpo-pairs-revised) preference dataset.
Although the preference pairs are in French, this training also enhances the model's overall performance.
*Lucie-7B has a context window of 32K tokens.*
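
The exact training recipe is not included in this card. The snippet below is only a minimal sketch of how such a DPO run could look with the TRL library: the hyperparameters, output directory, and dataset column handling are illustrative assumptions, and the keyword for passing the tokenizer (`processing_class` vs. `tokenizer`) depends on the TRL version.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "OpenLLM-France/Lucie-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference pairs; DPOTrainer expects "prompt", "chosen" and "rejected" columns,
# so a column mapping may be needed depending on the dataset schema.
dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

# Illustrative hyperparameters only, not the settings used for Lucie-Boosted
training_args = DPOConfig(
    output_dir="lucie-boosted-dpo",
    beta=0.1,                       # strength of the KL penalty toward the reference model
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,     # `tokenizer=` on older TRL versions
)
trainer.train()
```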
### Open LLM Leaderboard
Evaluation results are reported in the table at the end of this card.
### MT-Bench
coming soon
### Usage
You can run this model using this [Colab notebook](https://github.com/jpacifico/Chocolatine-LLM/blob/main/Chocolatine_14B_inference_test_colab.ipynb).
You can also run Lucie-Boosted with the following code:
```python
import transformers
from transformers import AutoTokenizer

model_name = "jpacifico/Lucie-Boosted-7B-Instruct"

# Format the prompt with the model's chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create the text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
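
For GPUs with limited memory, a 4-bit quantized load is a common alternative. This sketch is not part of the original card; it assumes the `bitsandbytes` package and a CUDA device, and mirrors the generation settings of the pipeline example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "jpacifico/Lucie-Boosted-7B-Instruct"

# NF4 4-bit quantization to reduce the memory footprint of the 7B model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# French prompt to exercise the model's primary fine-tuning language
messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Qu'est-ce qu'un grand modèle de langage ?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```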
### Limitations
The Lucie-Boosted model is a quick demonstration that the Lucie foundation model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.
- **Developed by:** Jonathan Pacifico, 2025
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** Apache-2.0
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/jpacifico__Lucie-Boosted-7B-Instruct-details).
| Metric |Value|
|-------------------|----:|
|Avg. | 8.22|
|IFEval (0-Shot) |25.66|
|BBH (3-Shot) |10.26|
|MATH Lvl 5 (4-Shot)| 0.76|
|GPQA (0-shot) | 2.24|
|MuSR (0-shot) | 3.40|
|MMLU-PRO (5-shot) | 7.00|