Update README.md
README.md
@@ -10,7 +10,7 @@ ChatAllInOne_Mixtral-8x7B-v1 is a chat language model fine-tuned on the CHAT-ALL

 ## Model Details
 - **Base Model**: [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
-- **Fine-tuning Technique**: QLoRA
+- **Fine-tuning Technique**: QLoRA
 - **Dataset**: [CHAT-ALL-IN-ONE-v1](https://huggingface.co/datasets/DrNicefellow/CHAT-ALL-IN-ONE-v1)
 - **Tool Used for Fine-tuning**: [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
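For context, a minimal sketch of loading and chatting with the fine-tuned model via the Hugging Face `transformers` library. The repository id `DrNicefellow/ChatAllInOne_Mixtral-8x7B-v1` is an assumption inferred from the model name and the dataset namespace (the diff above only names the base model, dataset, and fine-tuning tool), and the generation settings are illustrative, not from the original README.

```python
# Hypothetical usage sketch -- the repo id below is assumed, not confirmed by the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DrNicefellow/ChatAllInOne_Mixtral-8x7B-v1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; shards across available GPUs
)

# Mixtral-Instruct-style prompting through the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize what QLoRA fine-tuning does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```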