---
library_name: transformers
tags:
  - lucie
  - lucie-boosted
  - llama
license: apache-2.0
datasets:
  - jpacifico/french-orca-dpo-pairs-revised
language:
  - fr
  - en
---

# Lucie-Boosted-7B-Instruct

Post-training optimization of OpenLLM-France/Lucie-7B-Instruct, the instruction-tuned version of the Lucie-7B foundation model, obtained by DPO fine-tuning on the jpacifico/french-orca-dpo-pairs-revised RLHF dataset. Training on French data also enhances the model's overall performance. Lucie-7B has a context size of 32K tokens.
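
For reference, here is a minimal sketch of what such a DPO fine-tuning step could look like with Hugging Face's TRL library. TRL, the hyperparameters, the output directory, and the column mapping are all assumptions for illustration; this is not the exact recipe used to train this model.

```python
# Hypothetical DPO fine-tuning sketch using TRL (>= 0.12); hyperparameters
# below are placeholders, not the values used for Lucie-Boosted.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "OpenLLM-France/Lucie-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference pairs; columns may need renaming to "prompt"/"chosen"/"rejected"
# depending on the dataset schema.
dataset = load_dataset("jpacifico/french-orca-dpo-pairs-revised", split="train")

training_args = DPOConfig(
    output_dir="lucie-boosted-dpo",  # assumed output path
    beta=0.1,                        # KL penalty strength (assumed value)
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,  # the frozen reference model is created internally when omitted
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```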

## OpenLLM Leaderboard

Coming soon.

## MT-Bench

Coming soon.

## Usage

You can run this model using this Colab notebook.

You can also run Lucie-Boosted with the following code:

```python
import transformers
from transformers import AutoTokenizer

# Hugging Face repository of this model
new_model = "jpacifico/Lucie-Boosted-7B-Instruct"

# Format the prompt with the model's chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(new_model)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=new_model,
    tokenizer=tokenizer
)

# Generate a completion with sampling
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])
```
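
Note that `max_length` counts the prompt tokens as well as the generated ones; for longer answers, you may prefer setting `max_new_tokens` instead.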

## Limitations

The Lucie-Boosted model is a quick demonstration that the Lucie foundation model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanism.

- **Developed by:** Jonathan Pacifico, 2025
- **Model type:** LLM
- **Language(s) (NLP):** French, English
- **License:** Apache-2.0