---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: peft
datasets:
- saillab/alpaca-polish-cleaned
---

# MISHANM/Polish_text_generation_Llama3_8B_instruct

This model is fine-tuned specifically for the Polish language. It targets tasks such as question answering, producing clear, context-appropriate responses that capture the nuances of Polish, so its output is reliable and useful across many situations.

## Model Details
1. Language: Polish
2. Tasks: Question Answering (Polish to Polish)
3. Base Model: meta-llama/Meta-Llama-3-8B-Instruct



## Training Details

The model was fine-tuned on approximately 52,002 instruction samples from the saillab/alpaca-polish-cleaned dataset; a sketch of a comparable training setup follows the list below.
1. GPUs: 4 × AMD Radeon™ PRO V620
2. Training Time: 22:57:03
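
The exact training configuration is not published in this card. The following is a minimal sketch of the kind of PEFT/LoRA fine-tune such a setup typically uses; the hyperparameters, target modules, dataset split, and column names (`instruction`/`input`/`output`) are all illustrative assumptions, not the values actually used for this model.

```python
# Hypothetical LoRA fine-tuning sketch; every hyperparameter here is an
# assumption, not the configuration actually used for this model.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach LoRA adapters to the attention projections (assumed targets).
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
))

# Flatten each Alpaca-style record (instruction/input/output columns assumed)
# into one training string and tokenize it.
dataset = load_dataset("saillab/alpaca-polish-cleaned", split="train")

def tokenize(batch):
    texts = [
        f"{ins}\n{inp}\n{out}"
        for ins, inp, out in zip(batch["instruction"], batch["input"], batch["output"])
    ]
    return tokenizer(texts, truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-polish-lora",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```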
  
   


## Inference with Transformers

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer
model_path = "MISHANM/Polish_text_generation_Llama3_8B_instruct"

model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Function to generate text
def generate_text(prompt, max_new_tokens=1000, temperature=0.9):
    # Format the prompt according to the chat template
    messages = [
        {
            "role": "system",
            "content": "You are a Polish language expert and linguist, with same knowledge give response in Polish language.",
        },
        {"role": "user", "content": prompt}
    ]

    # Build the prompt with the model's own chat template and append the
    # assistant header so generation starts at the model's turn
    formatted_prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    # Tokenize, move the inputs to the model's device, and generate
    inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs, max_new_tokens=max_new_tokens, temperature=temperature, do_sample=True
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Example usage
prompt = """Podaj przykład pytania, na które chatbot może odpowiedzieć."""
response_text = generate_text(prompt)
print(response_text)



```
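
As an alternative, recent `transformers` releases let the text-generation pipeline consume chat-style message lists directly and handle templating, tokenization, and decoding in one call. The snippet below is a sketch of that route, not part of the original card; it assumes a `transformers` version with chat support in `pipeline`.

```python
from transformers import pipeline

# High-level alternative: the pipeline applies the chat template,
# tokenizes, generates, and decodes in one call.
generator = pipeline(
    "text-generation",
    model="MISHANM/Polish_text_generation_Llama3_8B_instruct",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a Polish language expert and linguist. Respond in Polish."},
    {"role": "user", "content": "Podaj przykład pytania, na które chatbot może odpowiedzieć."},
]

# The pipeline returns the whole conversation; the last message is the reply.
result = generator(messages, max_new_tokens=200, do_sample=True, temperature=0.9)
print(result[0]["generated_text"][-1]["content"])
```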

## Citation Information
```
@misc{MISHANM/Polish_text_generation_Llama3_8B_instruct,
  author = {Mishan Maurya},
  title = {Introducing Fine Tuned LLM for Polish Language},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face repository}
```


## Framework versions

- PEFT 0.12.0
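
Because the card declares `library_name: peft`, the repository may ship a LoRA adapter rather than fully merged weights. If an `adapter_config.json` is present, the adapter can be loaded (and optionally merged) as sketched below; this is an assumption, since the inference example above loads the repo as a plain causal LM.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo = "MISHANM/Polish_text_generation_Llama3_8B_instruct"

# Loads the base model recorded in adapter_config.json and applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

# Optional: fold the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```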