Update README.md
This repository demonstrates how to fine-tune the **Qwen 7B** model.
## 🚀 Resources

- **Source Code**: [GitHub Repository](https://github.com/while-basic/mindcraft) <!-- todo: add mindcraft repo -->
- **Colab Notebook**: [Colab Notebook](https://colab.research.google.com/drive/1Eq5dOjc6sePEt7ltt8zV_oBRqstednUT?usp=sharing)
- **Blog Article**: [Walkthrough](https://chris-celaya-blog.vercel.app/articles/unsloth-training)
- **Teaser**: [Video](https://www.youtube.com/watch?v=KUXY5OtaPZc)

## Overview

This README provides step-by-step instructions to:

1. Install and set up the **Unsloth framework**.
2. Initialize the **Qwen 7B** model with **4-bit quantization**.
3. Implement **LoRA Adapters** for memory-efficient fine-tuning.
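The point of the 4-bit quantization in step 2 is to shrink the memory the weights occupy. A back-of-envelope sketch (assuming a round 7 billion parameters, decimal GB, and counting weights only — activations, optimizer state, and quantization scales add more):

```python
# Rough memory for model weights alone, at a given precision.
def weight_memory_gb(n_params: float, bits_per_param: float) -> float:
    return n_params * bits_per_param / 8 / 1e9

N = 7e9  # assumed parameter count for a "7B" model
print(weight_memory_gb(N, 16))  # fp16  -> 14.0 GB
print(weight_memory_gb(N, 4))   # 4-bit ->  3.5 GB
```

A ~4x reduction is what makes a 7B model loadable on a single consumer GPU.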
```python
import torch
from unsloth import FastLanguageModel

# Load Qwen 2.5 7B pre-quantized to 4 bits (bitsandbytes) via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",
    max_seq_length=2048,
    dtype=torch.bfloat16,
    load_in_4bit=True,
    trust_remote_code=True,
)
```
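The LoRA adapters from step 3 then wrap this model. The core idea can be sketched without any framework: the frozen weight matrix `W` (d_out x d_in) is left untouched, and only two small matrices `B` (d_out x r) and `A` (r x d_in) are trained, with the effective weight `W + (alpha / r) * (B @ A)`. This is a minimal pure-Python illustration of that low-rank update, not Unsloth's implementation; all names and sizes here are illustrative.

```python
# Plain-list matrix multiply, for clarity over speed.
def matmul(X, Y):
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# Merge a rank-r LoRA update into a frozen weight matrix.
def lora_merge(W, A, B, alpha, r):
    delta = matmul(B, A)          # d_out x d_in, built from two small factors
    s = alpha / r                 # LoRA scaling factor
    return [[W[i][j] + s * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# d_out=2, d_in=3, rank r=1: 6 frozen weights, but only 2 + 3 = 5 trainable.
W = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0]]
B = [[1.0], [2.0]]          # d_out x r
A = [[0.5, 0.0, 0.5]]       # r x d_in
print(lora_merge(W, A, B, alpha=1.0, r=1))
# -> [[1.5, 0.0, 0.5], [1.0, 1.0, 1.0]]
```

The memory saving is the whole point: for a d x d layer, full fine-tuning updates d² weights, while a rank-r adapter trains only 2·r·d.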