# Cabuxa-7B
Cabuxa is a LLaMA-7B model for Galician that can follow instructions in the Alpaca format.
It was trained on 80% of the irlab-udc/alpaca_data_galician dataset; we keep the remaining 20% for future evaluation and research.
This work extends the Portuguese effort of 22h/cabrita-lora-v0-1 to Galician. Our working notes are available here.
## How to Get Started with Cabuxa-7B
Use the code below to get started with the model.
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, LlamaTokenizer, GenerationConfig

config = PeftConfig.from_pretrained("irlab-udc/cabuxa-7b")
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b", device_map="auto")
model = PeftModel.from_pretrained(model, "irlab-udc/cabuxa-7b")
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")


# This function builds the prompt in Alpaca format. The template stays in
# Galician because that is the format the model was fine-tuned on.
def generate_prompt(instruction, input=None):
    if input:
        return f"""Abaixo está unha instrución que describe unha tarefa, xunto cunha entrada que proporciona máis contexto.
Escribe unha resposta que responda adecuadamente a entrada.
### Instrución:
{instruction}
### Entrada:
{input}
### Resposta:"""
    else:
        return f"""Abaixo está unha instrución que describe unha tarefa.
Escribe unha resposta que responda adecuadamente a entrada.
### Instrución:
{instruction}
### Resposta:"""


def evaluate(instruction, input=None):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()  # move the prompt to the GPU
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=GenerationConfig(do_sample=True),  # sampling: answers vary between runs
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        print("Resposta:", output.split("### Resposta:")[1].strip())


evaluate("Cal é a fórmula química da auga?")
evaluate(
    "Convence ao lector por que é importante un determinado tema.",
    "Por que é esencial priorizar o sono?",
)
```
Example output (generation uses sampling, so answers will vary between runs):

```
Resposta: A fórmula química da auga é H₂O.
Resposta: O sono é esencial para todos os humanos, pero tamén é unha ferramenta importante para lograr obxectivos, aumentar a productividade, maximizar os beneficios do soño e mantenerse saudable.
```
## Training

### Configurations and Hyperparameters
The following `LoraConfig` was used during training (see the sketch after the list):
- r: 8
- lora_alpha: 16
- target_modules: ["q_proj", "v_proj"]
- lora_dropout: 0.05
- bias: "none"
- task_type: "CAUSAL_LM"
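
For reference, a minimal sketch of the same adapter setup written with PEFT (the values are copied from the list above; the variable name is illustrative):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # rank of the LoRA update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections that receive adapters
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```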
The following `TrainingArguments` were used during training (again, a sketch follows the list):
- per_device_train_batch_size: 64
- gradient_accumulation_steps: 32
- warmup_steps: 100
- num_train_epochs: 20
- learning_rate: 3e-4
- fp16: True
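
As a sketch, the equivalent `TrainingArguments` (the `output_dir` is a hypothetical placeholder, not the path used in the original run):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./cabuxa-7b-lora",  # hypothetical output path
    per_device_train_batch_size=64,
    gradient_accumulation_steps=32,
    warmup_steps=100,
    num_train_epochs=20,
    learning_rate=3e-4,
    fp16=True,
)
```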
The following `bitsandbytes` quantization config was used during training (a loading sketch follows the list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
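
A sketch of how this 8-bit setup is passed when loading the base model (since the `bnb_4bit_*` fields above are inactive when `load_in_8bit=True`, they are left at their defaults here):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
```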
### Loss
Epoch | Loss |
---|---|
0.98 | 2.6109 |
1.97 | 2.0596 |
2.95 | 1.5092 |
3.93 | 1.379 |
4.92 | 1.2849 |
5.9 | 1.208 |
6.88 | 1.1508 |
7.86 | 1.117 |
8.85 | 1.0873 |
9.83 | 1.0666 |
10.81 | 1.0513 |
11.8 | 1.0365 |
12.78 | 1.0253 |
13.76 | 1.0169 |
14.75 | 1.0118 |
15.73 | 1.0035 |
16.71 | 0.9968 |
17.7 | 0.9983 |
18.68 | 0.9924 |
19.66 | 0.9908 |
### Framework versions
- PyTorch 2.1.0
- PEFT 0.6.0.dev0
- 🤗 Transformers 4.34.0
- 🤗 Datasets 2.14.5
- 🤗 Tokenizers 0.14.0
## Environmental Impact
Carbon emissions were estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: NVIDIA RTX A6000
- Hours used: 72
- Cloud Provider: Private infrastructure
- Carbon Emitted: 9.33 kg CO₂ eq.
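
As a back-of-the-envelope check of that figure (the 300 W TDP of the RTX A6000 and the ~0.432 kg CO₂eq/kWh grid intensity are our assumptions, not values stated in the original card):

```python
# Rough reproduction of the reported carbon estimate.
power_kw = 0.300   # assumed: RTX A6000 TDP
hours = 72         # from the card
intensity = 0.432  # assumed grid intensity, kg CO2eq per kWh
print(round(power_kw * hours * intensity, 2))  # 9.33 kg CO2eq
```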