Update README.md
README.md
The model was fine-tuned using parameter-efficient methods with **LoRA** to adapt to the Solana-specific domain. Below is a visualization of the training process:

```
+---------------------------+              +-----------------------------+
| Base Model                | --- LoRA --> | Fine-Tuned Adapter          |
| LLaMa 3.1 8B              |              | Solphie-1S-Foundation-Model |
+---------------------------+              +-----------------------------+
```
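The point of the adapter step above is that LoRA trains only a low-rank update to each weight matrix rather than the matrix itself. A minimal arithmetic sketch of the parameter savings, assuming hypothetical values (a 4096×4096 projection matrix and rank 8) that this README does not state:

```python
# Hypothetical dimensions for illustration only; the actual LoRA rank and
# target modules used for Solphie-1S are not given in this README.
d, k = 4096, 4096   # shape of one weight matrix W in the base model
r = 8               # assumed LoRA rank

full_params = d * k        # parameters updated by full fine-tuning of W
lora_params = r * (d + k)  # parameters in the low-rank factors B (d x r) and A (r x k)

print(f"full fine-tune: {full_params:,} params per matrix")   # 16,777,216
print(f"LoRA (r={r}):     {lora_params:,} params per matrix")   # 65,536
print(f"reduction:      {full_params // lora_params}x")        # 256x
```

Only the small `B` and `A` factors are stored in the adapter, which is why the fine-tuned artifact ships separately from the LLaMa 3.1 8B base weights.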

### **Dataset Sources**

| Split     | Count | Description            |
|-----------|-------|------------------------|
| **Train** | 13.6k | High-quality Q&A pairs |

**Dataset Format (JSONL):**

```json
{
  "question": "How to ...",
  "answer": "...",
  "think": "..."
}
```
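Because each JSONL line is a self-contained record, the dataset can be streamed and schema-checked with the standard library alone. A minimal loader sketch, assuming the three fields shown above (the in-memory sample stands in for a real dataset file):

```python
import io
import json

REQUIRED_FIELDS = {"question", "answer", "think"}

def load_records(fp):
    """Yield one dict per JSONL line, checking the expected schema."""
    for line_no, line in enumerate(fp, start=1):
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        record = json.loads(line)
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"line {line_no}: missing fields {sorted(missing)}")
        yield record

# Tiny in-memory example in the format shown above.
sample = io.StringIO('{"question": "How to ...", "answer": "...", "think": "..."}\n')
records = list(load_records(sample))
print(len(records), records[0]["question"])  # -> 1 How to ...
```

Validating per line (rather than loading the whole file with one `json.loads`) keeps memory flat and reports the exact line number of any malformed record, which matters at the 13.6k-example scale listed in the table.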