---
license: apache-2.0
datasets:
- Riyuechang/PTT-Corpus-100K_Gossiping-1400-39400_v2
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
pipeline_tag: text-generation
library_name: peft
tags:
- PTT
- PTT_Chat
---

# Version notes
Trained on new data that is (in theory) less noisy.  
LoRA now uses a larger rank (r=32).  
DoRA has been dropped,  
since its gains were limited while it also significantly slowed down both training and inference.  

# Overview
The LoRA adapter used by [Riyuechang/Breeze-7B-PTT-Chat-v2](https://huggingface.co/Riyuechang/Breeze-7B-PTT-Chat-v2), provided here unmerged from the base model [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0).  
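
A minimal usage sketch for loading the base model and attaching this adapter with `peft`; the adapter ID below is a placeholder for this repository, not a confirmed path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "MediaTek-Research/Breeze-7B-Instruct-v1_0"
adapter_id = "path/or/hub-id/of/this-adapter"  # placeholder: point this at the present repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Attach the unmerged LoRA weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# If a single merged checkpoint is preferred, the adapter can be folded into the base weights:
# model = model.merge_and_unload()
```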

# Hardware
- Ubuntu 22.04.4 LTS
- NVIDIA GeForce RTX 3060 12GB

# LoRA parameters
```python
from peft import LoraConfig

# LoRA configuration used for fine-tuning
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules="all-linear",
    bias="none",
    use_rslora=True
)
```
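
For context, a hedged sketch of how a `LoraConfig` like this is typically attached to the base model with `get_peft_model`; this is the standard PEFT workflow, not necessarily the author's exact training script:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import get_peft_model

base_model = AutoModelForCausalLM.from_pretrained(
    "MediaTek-Research/Breeze-7B-Instruct-v1_0",
    torch_dtype=torch.bfloat16,
)

# Wrap the base model so that only the LoRA layers are trainable
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```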

# Training parameters
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    per_device_train_batch_size=28,
    gradient_accumulation_steps=1,
    num_train_epochs=3,
    warmup_ratio=0.1,
    learning_rate=2e-5,
    bf16=True,
    save_strategy="steps",
    save_steps=1000,
    save_total_limit=5,
    logging_steps=10,
    output_dir=log_output,  # log_output: output directory path defined elsewhere in the training script
    optim="paged_adamw_8bit",
    gradient_checkpointing=True
)
```
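
A rough sketch of how these arguments plug into a standard Hugging Face `Trainer`; the dataset split name and the tokenization step are assumptions, since the exact preprocessing is not shown here:

```python
from datasets import load_dataset
from transformers import Trainer

# Assumed split name; the raw text still needs to be tokenized into model inputs
raw_dataset = load_dataset("Riyuechang/PTT-Corpus-100K_Gossiping-1400-39400_v2", split="train")
train_dataset = raw_dataset  # placeholder: apply the chat template and tokenizer here

trainer = Trainer(
    model=model,          # the PEFT-wrapped model from the sketch above
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```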

# Results
- loss: 0.9391