tags:
- Llama
- trl
---

# **SmolLM2-360M-Grpo-r999**

SmolLM2-360M-Grpo-r999 is a fine-tuned version of **SmolLM2-360M-Instruct**. SmolLM2 demonstrates significant advances over its predecessor, SmolLM1, particularly in instruction following, knowledge, and reasoning. The **360M** model was trained on **2 trillion tokens** using a diverse combination of datasets: **FineWeb-Edu, DCLM, The Stack**, along with new filtered datasets that we curated and will release soon. We developed the instruct version through **supervised fine-tuning (SFT)** using a combination of public datasets and our own curated datasets.

### **How to Use**

### Transformers

```bash
pip install transformers
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "prithivMLmods/SmolLM2-360M-Grpo-r999"

device = "cuda"  # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is gravity?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
print(input_text)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
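
If you prefer the high-level API, recent `transformers` releases also accept chat messages directly in the text-generation pipeline. This is a minimal sketch under that assumption, not part of the original card:

```python
from transformers import pipeline

# The pipeline applies the model's chat template automatically
# when given a list of chat messages.
pipe = pipeline("text-generation", model="prithivMLmods/SmolLM2-360M-Grpo-r999")

messages = [{"role": "user", "content": "What is gravity?"}]
out = pipe(messages, max_new_tokens=50, do_sample=True, temperature=0.2, top_p=0.9)

# `generated_text` holds the full conversation; the last entry is the reply.
print(out[0]["generated_text"][-1]["content"])
```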

### **Limitations of SmolLM2-360M-Grpo-r999**

1. **Model Size**: While **360M parameters** provide enhanced capabilities, the model still has limitations in handling highly complex reasoning tasks or long-context dependencies compared to larger models.

2. **Bias and Inaccuracy**: Despite fine-tuning on diverse datasets, the model may generate biased, inaccurate, or factually incorrect responses, particularly for niche topics or specialized knowledge areas.

3. **Context Length**: The model might struggle with very long conversations or extended prompts, potentially leading to truncation or loss of contextual coherence.

4. **Fine-Tuning Specificity**: Performance on specialized domains may require additional fine-tuning with domain-specific datasets.

5. **Generalization**: The model may not generalize as effectively to **rare queries** or **unseen tasks** as larger models do, sometimes providing generic or incomplete answers.

6. **Limited Multi-Turn Conversations**: While it supports multi-turn interactions, its ability to retain and use context over extended conversations is **not as strong as that of larger models**; see the sketch after this list for how to carry history between turns.
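
A common way to work within this limitation is to re-send the accumulated history on every turn. The loop below is a minimal sketch (the follow-up question is an illustrative placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "prithivMLmods/SmolLM2-360M-Grpo-r999"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [{"role": "user", "content": "What is gravity?"}]
follow_up = "Can you give an everyday example?"  # illustrative placeholder

for _ in range(2):  # two assistant turns
    # Re-encode the full history so earlier turns stay in context.
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=50, do_sample=True, temperature=0.2, top_p=0.9)
    # Decode only the newly generated tokens.
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": follow_up})

print(messages)
```

Note that the prompt grows with every turn, so long conversations will eventually exceed the model's context window; see the context-length limitation above.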

### **Intended Use of SmolLM2-360M-Grpo-r999**

1. **General-Purpose Conversational AI**: Ideal for chatbots, virtual assistants, and interactive applications requiring basic reasoning and knowledge retrieval.

2. **Education & Tutoring**: Supports answering educational queries, explaining concepts, and aiding learning across multiple domains.

3. **Content Generation**: Generates short-form text, summaries, and brainstorming ideas for writing assistants and creativity tools.

4. **Code Assistance**: Fine-tuned on programming datasets, making it useful for debugging, explaining code, and assisting developers.

5. **Instruction Following**: Optimized for following structured commands, making it suitable for task-based applications.

6. **Prototyping & Experimentation**: A lightweight model for **fast deployment** in new AI applications, balancing performance with efficiency.

7. **Low-Resource Environments**: Runs on **edge devices, mobile apps, and local servers** where larger models are infeasible.

8. **Research & Development**: Can serve as a base model for **further fine-tuning** or model optimizations; a GRPO fine-tuning sketch follows this list.
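
The model name and the `trl` tag suggest the checkpoint was produced with GRPO via TRL's `GRPOTrainer`, but the exact recipe is not documented here. The following is only a minimal, hypothetical sketch: the base checkpoint, dataset, and reward function are illustrative assumptions, not the actual training setup.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt-only dataset; GRPOTrainer expects a "prompt" column.
dataset = Dataset.from_dict({"prompt": ["What is gravity?", "Explain photosynthesis."]})

def reward_concise(completions, **kwargs):
    # Placeholder reward favoring completions near ~200 characters;
    # a real run would use task-specific rewards (e.g. correctness checks).
    return [-abs(len(c) - 200) / 200.0 for c in completions]

args = GRPOConfig(
    output_dir="SmolLM2-360M-Grpo-r999",
    per_device_train_batch_size=2,
    num_generations=2,  # completions sampled per prompt for group-relative advantages
    max_completion_length=64,
)

trainer = GRPOTrainer(
    model="HuggingFaceTB/SmolLM2-360M-Instruct",  # assumed base checkpoint
    reward_funcs=reward_concise,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```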