---
language:
- en
- hi
- gu
license:
- apache-2.0
- cc-by-sa-4.0
tags:
- gguf
- assistant
- AI
- Mirror
- mirror_code
- LLM
- LoRA
- ollama
- llama.cpp
library_name: llama.cpp
model_creator: Dipesh Majithia
model_name: Mirror Dolly (GGUF)
datasets:
- databricks/databricks-dolly-15k
base_model:
- dipeshmajithia/MirrorCode
---

# Mirror Dolly (GGUF) – Model Card

## Summary

**Mirror Dolly** is an assistant-style language model built on top of [`dipeshmajithia/MirrorCode`](https://huggingface.co/dipeshmajithia/MirrorCode). It was fine-tuned with LoRA for **1000 iterations** on the **Dolly 15k dataset** (`databricks/databricks-dolly-15k`), then merged and converted to GGUF for local inference.

Mirror Dolly is designed for structured and emotionally aware assistant conversations and supports lightweight deployment with `llama.cpp`, `ollama`, or `text-generation-webui`.
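
For example, the GGUF build can be wrapped for Ollama with a short Modelfile. This is a minimal sketch, assuming the quantized file is named `mirror_dolly.gguf` and sits in the current directory; the local model tag and the temperature setting below are placeholders, not part of this release.

```bash
# Hypothetical Ollama setup: the model tag and temperature are assumptions.
cat > Modelfile <<'EOF'
FROM ./mirror_dolly.gguf
PARAMETER temperature 0.7
EOF

# Register the model with Ollama under a local tag, then chat with it.
ollama create mirror-dolly -f Modelfile
ollama run mirror-dolly "Who are you?"
```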

---

## Model Overview

- **Base model**: [`dipeshmajithia/MirrorCode`](https://huggingface.co/dipeshmajithia/MirrorCode)
- **LoRA fine-tuning**:
  - Dataset: Dolly 15k
  - Iterations: 1000
  - Layers: 4
  - Rank: 8
- **Merged and converted**: to GGUF via `transformers` + `convert_hf_to_gguf.py` (a conversion sketch follows this list)
- **Quantization options**: f16, q8_0, q4_0
- **Use cases**:
  - Personal assistant
  - Structured explanations
  - Lightweight offline inference
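
The merge-and-convert step can be reproduced roughly as follows. This is a minimal sketch, not the exact pipeline used for this release: the adapter path, output filenames, and the `llama-quantize` binary location are assumptions, and script/binary names vary between `llama.cpp` versions.

```bash
# Hypothetical reproduction of the merge + GGUF conversion; all paths are assumptions.
pip install transformers peft

# Merge the LoRA adapter into the base model with PEFT, then save the merged weights.
python - <<'EOF'
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("dipeshmajithia/MirrorCode")
merged = PeftModel.from_pretrained(base, "./mirror_dolly_lora").merge_and_unload()
merged.save_pretrained("./mirror_dolly_merged")
AutoTokenizer.from_pretrained("dipeshmajithia/MirrorCode").save_pretrained("./mirror_dolly_merged")
EOF

# Convert the merged checkpoint to GGUF (the script ships with llama.cpp).
python llama.cpp/convert_hf_to_gguf.py ./mirror_dolly_merged \
  --outfile mirror_dolly.gguf --outtype f16

# Optionally produce the smaller quantized variants (older builds name the binary `quantize`).
./llama.cpp/llama-quantize mirror_dolly.gguf mirror_dolly.q8_0.gguf q8_0
./llama.cpp/llama-quantize mirror_dolly.gguf mirror_dolly.q4_0.gguf q4_0
```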

---

## How to Use

### With `llama.cpp`

```bash
./main -m mirror_dolly.gguf -p "Who are you?"