---
license: llama3.1
language:
- pl
---
<p align="center">
  <img src="https://pllum.org.pl/_nuxt/PLLuM_logo_RGB_color.DXNEc-VR.png">
</p>

# PLLuM: A Family of Polish Large Language Models

## Overview
PLLuM is a family of large language models (LLMs) specialized in Polish and other Slavic/Baltic languages, with additional English data incorporated for broader generalization. Developed through an extensive collaboration with various data providers, PLLuM models are built on high-quality text corpora and refined through instruction tuning, preference learning, and advanced alignment techniques. These models are intended to generate contextually coherent text, offer assistance in various tasks (e.g., question answering, summarization), and serve as a foundation for specialized applications such as domain-specific intelligent assistants.

### Key Highlights
- **Extensive Data Collection**  
  We gathered large-scale, high-quality text data in Polish (around 150B tokens after cleaning and deduplication) and additional text in Slavic, Baltic, and English languages. Part of these tokens (28B) can be used in fully open-source models, including for commercial use (in compliance with relevant legal regulations).

- **Organic Instruction Dataset**  
  We curated the largest Polish collection of manually created “organic instructions” (~40k prompt-response pairs, including ~3.5k multi-turn dialogs). This human-authored instruction set is based on an extensive typology of human-model interactions and covers a range of subtle aspects of supervised fine-tuning (SFT) that automated approaches (including large-scale distillation of 'strong' LLMs) tend to overlook. It was also designed to mitigate negative linguistic transfer from the non-Polish textual data used in the pre-training phase.

- **Polish Preference Corpus**  
  We created the first Polish-language preference corpus, featuring prompts and multiple model responses manually assessed by a demographically diverse team of annotators. This dataset teaches the model not only correctness (factual and linguistic) but also balance and safety—especially for potentially controversial or adversarial topics.

- **Evaluation Benchmarks**  
  We developed custom benchmarks to evaluate our models on tasks relevant to Polish public administration, where PLLuM achieved top scores among all tested models. In broader Polish-language tasks, PLLuM models also attain state-of-the-art results.

## Model Description

Below is a summary of the main PLLuM models, including their licenses, bases, and parameter sizes. Each model name links to its Hugging Face repository, while the base models and licenses link to their respective sources or license texts. Note that all *-nc-* models are intended for non-commercial use only.

| Model Name                                            | Params | License                                                                                                                   | Based On                                                                                                             |
|-------------------------------------------------------|----------------------|---------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|
| [Llama-PLLuM-8B-base](https://huggingface.co/CYFRAGOVPL/Llama-PLLuM-8B-base)             | 8B                   | [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE)                                   | [Llama3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)                                                       |
| [Llama-PLLuM-8B-instruct](https://huggingface.co/CYFRAGOVPL/Llama-PLLuM-8B-instruct)         | 8B                   | [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE)                                   | [Llama3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)                                                       |
| [Llama-PLLuM-8B-chat](https://huggingface.co/CYFRAGOVPL/Llama-PLLuM-8B-chat)             | 8B                   | [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE)                                   | [Llama3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B)                                                       |
| [PLLuM-12B-base](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-base)                  | 12B                  | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)                                                            | [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)                                    |
| [PLLuM-12B-instruct](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-instruct)              | 12B                  | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)                                                            | [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)                                    |
| [PLLuM-12B-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-chat)                  | 12B                  | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)                                                            | [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)                                    |
| [PLLuM-12B-nc-base](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-nc-base)               | 12B                  | [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)                                             | [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)                                    |
| [PLLuM-12B-nc-instruct](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-nc-instruct)           | 12B                  | [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)                                             | [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)                                    |
| [PLLuM-12B-nc-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-12B-nc-chat)               | 12B                  | [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)                                             | [Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407)                                    |
| [PLLuM-8x7B-base](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-base)                 | 8×7B                 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)                                                            | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)                                              |
| [PLLuM-8x7B-instruct](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-instruct)             | 8×7B                 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)                                                            | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)                            |
| [PLLuM-8x7B-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-chat)                 | 8×7B                 | [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt)                                                            | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)                            |
| [PLLuM-8x7B-nc-base](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-nc-base)              | 8×7B                 | [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)                                             | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)                            |
| [PLLuM-8x7B-nc-instruct](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-nc-instruct)          | 8×7B                 | [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)                                             | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)                            |
| [PLLuM-8x7B-nc-chat](https://huggingface.co/CYFRAGOVPL/PLLuM-8x7B-nc-chat)              | 8×7B                 | [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)                                             | [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)                            |
| [Llama-PLLuM-70B-base](https://huggingface.co/CYFRAGOVPL/Llama-PLLuM-70B-base)            | 70B                  | [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-70B/blob/main/LICENSE)                                  | [Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)                                                     |
| [Llama-PLLuM-70B-instruct](https://huggingface.co/CYFRAGOVPL/Llama-PLLuM-70B-instruct)        | 70B                  | [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-70B/blob/main/LICENSE)                                  | [Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)                                                     |
| [Llama-PLLuM-70B-chat](https://huggingface.co/CYFRAGOVPL/Llama-PLLuM-70B-chat)            | 70B                  | [Llama 3.1](https://huggingface.co/meta-llama/Llama-3.1-70B/blob/main/LICENSE)                                  | [Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)                                                     |

### Model Development
- **Pretraining**: All models were pretrained from scratch or continually pretrained on large-scale Polish corpora (up to 150B tokens) plus a range of additional Slavic/Baltic and English texts.
- **Instruction Fine-Tuning**: We refined the models on manually curated Polish “organic instructions” (approx. 40k), converted instructions from premium Polish corpora (approx. 50k), and synthetic instructions generated by strong LLMs (approx. 10k).
- **Alignment and Preference Learning**: Manually annotated preference data taught the models to produce safer, balanced, and contextually appropriate responses, even in adversarial or sensitive cases.
- **Domain-Specific Adaptations**: Specialized RAG-based (Retrieval Augmented Generation) models were developed for tasks like public administration, demonstrating strong performance in complex information retrieval and question answering.

## Intended Use Cases
- **General Language Tasks**: Text generation, summarization, question answering, etc.
- **Domain-Specific Assistants**: Especially effective for Polish public administration and legal or bureaucratic topics where domain-aware retrieval is required.
- **Research & Development**: Building blocks for downstream AI applications in academic or industrial settings, where a strong command of the Polish language is essential.

## How to Use
Each PLLuM model can be loaded via the Hugging Face Transformers library (or compatible frameworks). For RAG-based scenarios, pair the model with a relevant vector store or document retrieval system. 
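
For illustration only (the project does not prescribe a specific retrieval stack), a retrieved-context prompt could be assembled along the lines sketched below; the helper name and prompt wording are hypothetical, and the retrieval step itself (vector store, BM25, etc.) is left to your own infrastructure:

```python
# Hypothetical helper: stuff retrieved passages into a Polish RAG-style prompt.
def build_rag_prompt(question: str, passages: list[str]) -> str:
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        # EN: "Answer the question based solely on the context below."
        "Odpowiedz na pytanie wyłącznie na podstawie poniższego kontekstu.\n\n"
        f"Kontekst:\n{context}\n\n"
        f"Pytanie: {question}\nOdpowiedź:"
    )
```

The resulting string can then be tokenized and passed to `model.generate` exactly as in the snippets below.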

Below are some recommended steps and code snippets:

### 1. Installation
Make sure you have the latest versions of `transformers` and `torch` (or another compatible deep learning framework) installed:
```bash
pip install transformers accelerate torch
```
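
If you want to confirm which versions were actually picked up (useful when an older `transformers` is already installed in the environment), a quick check is:

```bash
python -c "import transformers, torch; print(transformers.__version__, torch.__version__)"
```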

### 2. Loading the Model
Use the following example to load one of the PLLuM models:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "CYFRAGOVPL/PLLuM-12B-chat"  # Replace with the PLLuM model name of your choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
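
Alternatively, the high-level `pipeline` API wraps tokenization and generation in a single object; a minimal sketch using the same checkpoint:

```python
from transformers import pipeline

# Build a text-generation pipeline around the chosen PLLuM checkpoint.
generator = pipeline(
    "text-generation",
    model="CYFRAGOVPL/PLLuM-12B-chat",
    device_map="auto",  # place model weights on available devices automatically
)

# EN: "Write a short poem about spring."
print(generator("Napisz krótki wiersz o wiośnie.", max_new_tokens=50)[0]["generated_text"])
```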

### 3. Using bfloat16 (BF16)
If your hardware (e.g., newer GPUs) supports bfloat16, you can reduce memory usage and potentially speed up inference:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "CYFRAGOVPL/PLLuM-12B-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load model in bfloat16 precision
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"  # automatically places model layers on available devices
)
```
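
Support for bfloat16 differs between GPU generations; if you are unsure, one way to fall back gracefully to float16 is sketched below (reusing `model_name` from the snippet above):

```python
import torch
from transformers import AutoModelForCausalLM

# Prefer bfloat16 where the GPU supports it (e.g., Ampere and newer), otherwise use float16.
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype = torch.bfloat16
else:
    dtype = torch.float16

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=dtype,
    device_map="auto",
)
```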

### 4. Generating an Example Text
```python
# Reuse the tokenizer and model loaded in the previous steps.
prompt = "Napisz krótki wiersz o wiośnie."  # EN: "Write a short poem about spring."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_k=50,
    top_p=0.9,
    temperature=0.7
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```

### 5. Expected Output
Below is a sample (hypothetical) output for the prompt above:

```
Przykładowy wiersz o tematyce wiosennej:

Wiosna, wiosna, wiosna, ach to ty!
Kwiecień plecień wciąż przeplata,
trochę zimy, trochę lata.
A ja nie mogę się już doczekać,
kiedy w kalendarzu ujrzę maj.
Wtedy wszystko wkoło rozkwita,
a ptaki tak pięknie śpiewają.
Wiosno, wiosno, czekam z utęsknieniem,
zrób mi tę przyjemność i przyjdź wreszcie, proszę!
```
Your results may vary depending on the sampling parameters (e.g., temperature, top_k, top_p), hardware, and other settings.
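
For the `*-chat` variants, responses are typically better when the prompt is wrapped in the tokenizer's chat template rather than passed as raw text. This assumes the checkpoint ships a chat template (check the tokenizer configuration if unsure); a minimal sketch:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "CYFRAGOVPL/PLLuM-12B-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# EN: "Write a short poem about spring."
messages = [{"role": "user", "content": "Napisz krótki wiersz o wiośnie."}]

# Render the conversation with the model's chat template and append the assistant turn marker.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```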




## Training Procedure
- **Datasets**: ~150B tokens from Polish and multilingual sources, with ~28B tokens available for fully open-source commercial use. 
- **Hyperparameters**: Vary based on model size, typically including Adam or AdamW optimizers, a range of batch sizes, and carefully tuned learning rates.
- **Hardware & Duration**: Training was performed on the [Bem2](https://man.e-science.pl/pl/kdm/bem2) HPC cluster (up to 300×H100 GPUs). Each model’s training time depends on parameter count and hardware configuration (~8 to ~25 days on a multi-GPU cluster for the 8B–70B sizes).

## Evaluation and Benchmarks
- **Public Administration**: PLLuM models demonstrated top-tier performance in specialized tasks relevant to government services.
- **Polish Language Tasks**: Across a variety of internal benchmarks and standard corpora, PLLuM consistently outperforms other models in accuracy, coherence, and safety metrics.
- **Custom Tests**: A unique preference corpus and alignment tests ensure robust, safe, and contextually accurate responses.

## Limitations and Bias
- **Potential Hallucinations**: Like other LLMs, PLLuM may occasionally produce factually incorrect or fabricated content.
- **Sensitivity & Bias**: While extensive preference learning has been done, biases might still emerge, especially in controversial or subjective topics.
- **Context Length**: Very long context tasks may challenge certain models, depending on memory constraints.

## Ethical Considerations
PLLuM models are designed for constructive and responsible usage. Users should exercise caution when deploying them in production scenarios, especially for sensitive or regulated domains. Despite efforts to minimize harmful outputs, there is always a risk of generating offensive, biased, or inappropriate text. Human oversight and due diligence are advised.

## Citation
If you use PLLuM models or any part of this repository in your research or deployment, please cite as follows (BibTeX):
```bibtex
@unpublished{pllum2025, 
    title={PLLuM: A Family of Polish Large Language Models}, 
    author={PLLuM Consortium}, 
    year={2025} 
}
```

## License
Different models within the PLLuM family are published under various licenses (Apache 2.0, CC-BY-NC-4.0, or Llama 3.1 license). Check each model’s entry in the table above for details.

## Creators & Consortium

The PLLuM project is a unique collaboration between leading Polish scientific institutions and experts from various fields, working together to create a groundbreaking Polish language model. This research partnership combines diverse competencies and passions, forming a robust foundation for advancing AI in Poland.

<table style="border: none; border-collapse: collapse;">
  <tr>
    <td align="center" valign="middle" style="border: none;">
      <a href="https://pwr.edu.pl/">
        <img src="https://pllum.org.pl/_nuxt/pwr.D1_x0B58.png" alt="pwr.D1_x0B58.png" width="100">
      </a>
      <br><strong>Politechnika Wrocławska</strong><br><em>– Project Leader</em>
    </td>
    <td align="center" valign="middle" style="border: none;">
      <a href="https://www.nask.pl/">
        <img src="https://pllum.org.pl/_nuxt/nask.Bz8rmSzR.png" alt="nask.Bz8rmSzR.png" width="100">
      </a>
      <br><strong>NASK PIB</strong>
    </td>
    <td align="center" valign="middle" style="border: none;">
      <a href="https://www.ipipan.waw.pl/">
        <img src="https://clarin.biz/_nuxt/img/ipipan.294d39c.png" alt="ipipan.294d39c.png" width="100">
      </a>
      <br><strong>Instytut Podstaw Informatyki PAN</strong>
    </td>
  </tr>
  <tr>
    <td align="center" valign="middle" style="border: none;">
      <a href="https://opi.org.pl/">
        <img src="https://pllum.org.pl/_nuxt/opi.CF-COwcC.png" alt="opi.CF-COwcC.png" width="100">
      </a>
      <br><strong>Ośrodek Przetwarzania Informacji PIB</strong>
    </td>
    <td align="center" valign="middle" style="border: none;">
      <a href="https://www.uni.lodz.pl/">
        <img src="https://pllum.org.pl/_nuxt/ul.aTSgr_W6.png" alt="ul.aTSgr_W6.png" width="100">
      </a>
      <br><strong>Uniwersytet Łódzki</strong>
    </td>
    <td align="center" valign="middle" style="border: none;">
      <a href="https://ispan.waw.pl/default/">
        <img src="https://pllum.org.pl/_nuxt/is.Dqb94VRb.png" alt="is.Dqb94VRb.png" width="100">
      </a>
      <br><strong>Instytut Slawistyki PAN</strong>
    </td>
  </tr>
</table>



## Contact and Support
For questions or contributions, please reach out via: <[email protected]>

We welcome feedback, collaboration, and further exploration of PLLuM models!


## Acknowledgements

Project financed by the Minister of Digital Affairs under the targeted subsidy No. 1/WI/DBiI/2023: *“Responsible development of the open large language model PLLuM (Polish Large Language Model) to support breakthrough technologies in the public and economic sector, including an open, Polish-language intelligent assistant for petitioners.”*

**Funding Amount:** 14,504,392.00 PLN  
**Contract Signing Date:** 2024-01-22