---
library_name: transformers
datasets:
- baiges/patufet-IT
- baiges/alpaCAT
- baiges/patufet-QA
- pauhidalgoo/patufet-escrits
- baiges/patufet-human-interactions
- baiges/patufet-summaries
language:
- ca
tags:
- catalan
- language-model
- transformer
- sft
model-index:
- name: cucafera-instruct
  results:
  - task:
      type: language-understanding
      name: arc_ca_challenge
    dataset:
      name: arc_ca_challenge
      type: catalan_bench
    metrics:
    - name: Accuracy
      type: acc
      value: 0.2295
    - name: Normalized Accuracy
      type: acc_norm
      value: 0.2534
    source:
      name: Eleuther AI LM Evaluation Harness
      url: https://github.com/EleutherAI/lm-evaluation-harness
  - task:
      type: language-understanding
      name: arc_ca_easy
    dataset:
      name: arc_ca_easy
      type: catalan_bench
    metrics:
    - name: Accuracy
      type: acc
      value: 0.4238
    - name: Normalized Accuracy
      type: acc_norm
      value: 0.4108
    source:
      name: Eleuther AI LM Evaluation Harness
      url: https://github.com/EleutherAI/lm-evaluation-harness
  - task:
      type: question-answering
      name: catalanqa
    dataset:
      name: catalanqa
      type: catalan_bench
    metrics:
    - name: Exact Match
      type: exact_match
      value: 0.0037
    - name: F1 Score
      type: f1
      value: 0.0991
    source:
      name: Eleuther AI LM Evaluation Harness
      url: https://github.com/EleutherAI/lm-evaluation-harness
  - task:
      type: language-understanding
      name: copa_ca
    dataset:
      name: copa_ca
      type: catalan_bench
    metrics:
    - name: Accuracy
      type: acc
      value: 0.614
    source:
      name: Eleuther AI LM Evaluation Harness
      url: https://github.com/EleutherAI/lm-evaluation-harness
  - task:
      type: machine-translation
      name: flores_ca
    dataset:
      name: flores_ca
      type: flores
    metrics:
    - name: BLEU
      type: bleu
      value: 0.5934
    source:
      name: Eleuther AI LM Evaluation Harness
      url: https://github.com/EleutherAI/lm-evaluation-harness
license: apache-2.0
base_model:
- pauhidalgoo/cucafera
---

# Model Card for cucafera 🔥🐲 (Instruct Model)


This document describes **cucafera (Instruct Model)**, a Catalan Large Language Model (LLM) fine-tuned to follow instructions and generate text in Catalan. Built on the [pauhidalgoo/cucafera](https://huggingface.co/pauhidalgoo/cucafera) base model, it leverages high-quality Catalan datasets and is optimized for instruction-following tasks.

## Model Details

### Model Description

**cucafera (Instruct Model)** is a 244-million parameter transformer-based language model inspired by the LLAMA architecture (notably LLAMA3). Despite its relatively small size compared to many contemporary models, it is optimized for generating coherent and contextually relevant text in Catalan.

- **Model Size:** 244M parameters  
- **Architecture:** Transformer-based (LLAMA-inspired) with 30 layers  
- **Embedding Size:** 768  
- **Attention Mechanism:** 4 key/value heads and 8 query heads (using Grouped Query Attention - GQA)  
- **Context Length:** 2048 tokens  
- **Tokenizer:** Byte-Pair Encoding (BPE) with a vocabulary size of 65,536  
- **Activation Function:** GeGLU

## Instruct Fine-Tuning

The instruct version of **cucafera** has been fine-tuned on a variety of instruction datasets to enhance its ability to follow user prompts. The fine-tuning was performed using Hugging Face's `SFTTrainer` and follows the ChatML format for conversation, for example:

```
<|im_start|>user
Fes un poema<|im_end|>
<|im_start|>assistant
```

### Training Data

The base model was pre-trained using the [patufet-pretrain](https://huggingface.co/datasets/pauhidalgoo/patufet-pretrain) dataset. 

The fine-tuning data utilized a mix of instruction datasets from the [patufet](https://huggingface.co/collections/pauhidalgoo/patufet-66ca6dd3888e99a28dd616ae) collection.

### Fine-tuning Procedure

The model was fine-tuned with the following setup:
- **Total fine-tuning steps:** 1500
- **Per device train batch size:** 12
- **Sequence Length:** 2048
- **Learning rate:** 3e-5
- **Optimizer:** AdamW
- **Weight decay:** 0.01
- **Epochs**: 5

Different commits correspond to different fine-tuning runs: we experimented with different data mixes, numbers of epochs, and datasets.
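The hyperparameters above can be sketched with Hugging Face's `SFTTrainer` roughly as follows. This is an illustrative reconstruction, not the original training script: the dataset chosen for the mix and the exact TRL API surface (which has changed across versions) are assumptions.

```python
# Hedged sketch of the SFT setup described above; not the original script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model = AutoModelForCausalLM.from_pretrained("pauhidalgoo/cucafera")
tokenizer = AutoTokenizer.from_pretrained("pauhidalgoo/cucafera")

# One dataset from the patufet collection; the real run mixed several.
train_dataset = load_dataset("baiges/alpaCAT", split="train")

args = TrainingArguments(
    output_dir="cucafera-instruct",
    per_device_train_batch_size=12,
    learning_rate=3e-5,        # AdamW is the default optimizer
    weight_decay=0.01,
    num_train_epochs=5,
    max_steps=1500,            # caps the run at 1500 total steps
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=train_dataset,
    max_seq_length=2048,
)
trainer.train()
```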

### Direct Use

The cucafera (Instruct Model) is designed for:

- Conversational agents and chatbots in Catalan.
- Task-specific applications such as summarization, translation into Catalan, and creative writing.
- Educational and experimental research into instruction-following LLMs.
- Creative content generation, like poems or stories

However, due to its limited size, it frequently produces factually incorrect information; keep this limitation in mind when using the model.

### Out-of-Scope Uses

- **High-Stakes Applications:**  
  The model is not recommended for uses where extremely high factual accuracy is required or where outputs could have significant real-world consequences.
- **Non-Catalan Tasks:**  
  Since the model is exclusively trained on Catalan text, it is not suited for tasks in other languages without further training or fine-tuning.
- **Sensitive or safety-critical uses:** It has not undergone RLHF/DPO tuning, so outputs should be reviewed carefully.


## Bias, Risks, and Limitations

- Despite instruction tuning, the model may **not follow complex prompts reliably**.
- It **only understands Catalan**, meaning it is unsuitable for multilingual applications.
- Due to its **small size (244M parameters)**, its knowledge and reasoning capabilities are limited.
- It was trained on **a limited dataset**, which may introduce biases in its outputs.

### Recommendations

- The goal of this model is educational. You are encouraged to train your own model.
- If used in production, **human review** of its outputs is recommended.
- Fine-tuning on task-specific data can **improve accuracy** and **mitigate biases**.
- Users should be cautious when using it in **sensitive or high-stakes applications**.

## Use the Instruct Model

You can use the instruct model via Hugging Face's `transformers` library. Make sure to format prompts using the **ChatML format**.
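A minimal sketch of such usage follows. The repository id `pauhidalgoo/cucafera-instruct` and the generation settings are assumptions; adjust them to the checkpoint you actually load.

```python
def build_chatml_prompt(user_message: str) -> str:
    """Wrap a user message in the ChatML template the model was tuned on."""
    return f"<|im_start|>user\n{user_message}<|im_end|>\n<|im_start|>assistant\n"


def chat(user_message: str, model_id: str = "pauhidalgoo/cucafera-instruct") -> str:
    """Generate a reply with transformers (downloads the model on first use)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(build_chatml_prompt(user_message), return_tensors="pt")
    outputs = model.generate(
        **inputs, max_new_tokens=200, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated tokens, dropping the prompt.
    reply_ids = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)


# Example: print(chat("Fes un poema sobre el mar."))
```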

### Acknowledgements
This model was developed as an experimental project, inspired by Karpathy's [NanoGPT Series](https://github.com/karpathy/nanoGPT).
My colleague [Roger Baiges](https://huggingface.co/baiges) also trained his own [CatGPT](https://huggingface.co/baiges/CatGPT).

For more details, updates, or to contribute to the project, please visit the [GitHub repository](https://github.com/pauhidalgoo/cucafera).