---
library_name: transformers
tags:
- gguf
- llama.cpp
- ollama
- reasoning-llm
license: mit
datasets:
- custom/reasoning-dataset-2024v1
language:
- en
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
---

## Model Card for Azzedde/llama3.1-8b-reasoning-grpo-gguf

### Model Details
**Model Description**  
This is the GGUF version of **llama3.1-8b-reasoning-grpo**, optimized for complex reasoning and logical inference. The model was converted to **GGUF format** using the `convert-hf-to-gguf.py` script from **llama.cpp**, making it compatible with llama.cpp-based inference runtimes such as **Ollama**.

**Developed by**: Azzedine (GitHub: Azzedde)  
**Model Type**: Large Language Model (LLM) optimized for reasoning tasks  
**Language(s) (NLP)**: English  
**License**: MIT  
**Converted from**: [Azzedde/llama3.1-8b-reasoning-grpo](https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo)

### Model Sources
**Repository**: [Hugging Face](https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf)  
**Conversion Script**: `convert-hf-to-gguf.py` (llama.cpp)  
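
For reference, a conversion of this kind is typically run as sketched below. The paths, output filenames, and quantization type are placeholders, and the exact script and binary names depend on your llama.cpp checkout (newer versions use `convert_hf_to_gguf.py` and `llama-quantize`):

```bash
# Get llama.cpp and the Python dependencies for the conversion script
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint to GGUF at F16 precision
python llama.cpp/convert-hf-to-gguf.py ./llama3.1-8b-reasoning-grpo \
  --outfile model-f16.gguf --outtype f16

# Optionally quantize the F16 file (requires the llama.cpp binaries to be built first)
./llama.cpp/quantize model-f16.gguf model-q8_0.gguf Q8_0
```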

### Uses
#### Direct Use
This model is designed for **complex reasoning** and **logical inference**, particularly in:
- Analytical problem-solving
- Multi-step deduction
- Automated reasoning systems
- Advanced question-answering tasks

#### Downstream Use
- AI-driven **decision support systems**
- Multi-step **reasoning chains** in LLM applications
- **LLM-based tutoring systems**

### How to Use
#### Using with `llama.cpp`
Load the GGUF model using `llama.cpp`:

```bash
# Download the GGUF file from the Hub
wget https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf/resolve/main/model.gguf

# Run with llama.cpp (the CLI binary is named llama-cli in newer builds)
./main -m model.gguf -p "Solve the following logical problem: If all A are B, and some B are C, does it follow that some A are C?"
```
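
llama.cpp also ships an HTTP server for serving the same GGUF file. A minimal sketch, assuming a local llama.cpp build (the server binary is `./server` in older builds and `llama-server` in newer ones; the port and generation parameters are illustrative):

```bash
# Start the llama.cpp HTTP server on port 8080
./server -m model.gguf --port 8080

# In another shell, send a completion request to the /completion endpoint
curl http://localhost:8080/completion -d '{
  "prompt": "If all A are B, and some B are C, does it follow that some A are C? Explain step by step.",
  "n_predict": 256
}'
```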

#### Using with **Ollama**
You can run this model directly with **Ollama**, which can pull GGUF models straight from the Hugging Face Hub:

```bash
ollama run hf.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf
```

To run a specific quantization, append its tag (for example `Q8_0`):
```bash
ollama run hf.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf:Q8_0
```

For more details on Ollama usage, refer to [Ollama Docs](https://github.com/ollama/ollama/blob/main/docs/README.md).
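
If you want a locally customized setup (for example a fixed system prompt or sampling parameters), you can also wrap the downloaded GGUF file in an Ollama Modelfile. The local filename, model name, and parameter values below are placeholders, not part of this repository:

```bash
# Write a Modelfile that wraps the local GGUF file
cat > Modelfile <<'EOF'
FROM ./model.gguf
PARAMETER temperature 0.2
SYSTEM "You are a careful assistant that reasons step by step before answering."
EOF

# Build a local Ollama model and run a one-off reasoning prompt
ollama create llama3.1-8b-reasoning -f Modelfile
ollama run llama3.1-8b-reasoning "If all A are B, and some B are C, does it follow that some A are C?"
```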

### Citation
**BibTeX:**
```bibtex
@misc{llama3.1-8b-reasoning-grpo-gguf,
  author = {Azzedde},
  title  = {Llama3.1-8B-Reasoning-GRPO-GGUF: A Logical Reasoning LLM in GGUF Format},
  year   = {2025},
  url    = {https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf}
}
```

**Contact**: [Hugging Face Profile](https://huggingface.co/Azzedde)