---
license: cc-by-nc-4.0
datasets:
- KnutJaegersberg/trilobite
---
Some GGUF v2 quantizations of the model [KnutJaegersberg/deacon-3b](https://huggingface.co/KnutJaegersberg/deacon-3b), which is based on [conceptofmind/Open-LLongMA-3b](https://huggingface.co/conceptofmind/Open-LLongMA-3b). Because of the extended context, you will need to set linear rope_scaling to 0.25 (see the usage sketch after the prompt example below).
Prompt Example:
```
### System:
You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.
### Instruction:
How do you fine tune a large language model?
### Response:
```
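
For reference, here is a minimal sketch of loading one of these quantizations with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The GGUF filename and the 8192-token context length are assumptions (use the file you actually downloaded and the context length you need); `rope_freq_scale=0.25` applies the linear rope_scaling noted above.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="deacon-3b.Q4_K_M.gguf",  # hypothetical filename; pick the quantization you downloaded
    n_ctx=8192,                          # assumed extended context inherited from Open-LLongMA-3b
    rope_freq_scale=0.25,                # linear rope_scaling of 0.25, as required above
)

# Build the prompt in the format shown above.
prompt = (
    "### System:\n"
    "You are an AI assistant. User will you give you a task. Your goal is to complete "
    "the task as faithfully as you can. While performing the task think step-by-step "
    "and justify your steps.\n"
    "### Instruction:\n"
    "How do you fine tune a large language model?\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```

If you run the model with the llama.cpp CLI instead, the equivalent setting should be the `--rope-freq-scale 0.25` flag.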