# Model Card for Azzedde/llama3.1-8b-reasoning-grpo-gguf
## Model Details

### Model Description
This is the GGUF version of llama3.1-8b-reasoning-grpo, optimized for complex reasoning and logical inference. The model was converted to GGUF format using the `convert-hf-to-gguf.py` script from llama.cpp, making it compatible with optimized inference frameworks such as Ollama.
- **Developed by:** Azzedine (GitHub: Azzedde)
- **Model type:** Large Language Model (LLM) optimized for reasoning tasks
- **Language(s) (NLP):** English
- **License:** MIT
- **Converted from:** Azzedde/llama3.1-8b-reasoning-grpo
### Model Sources

- **Repository:** [Hugging Face](https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf)
- **Conversion script:** `convert-hf-to-gguf.py` (llama.cpp)
## Uses

### Direct Use
This model is designed for complex reasoning and logical inference, particularly in:
- Analytical problem-solving
- Multi-step deduction
- Automated reasoning systems
- Advanced question-answering tasks
### Downstream Use
- AI-driven decision support systems
- Multi-step reasoning chains in LLM applications
- LLM-based tutoring systems
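The multi-step reasoning chains mentioned above can be sketched as a small driver loop that feeds each step's answer back into the next prompt. This is an illustrative Python sketch, not part of the released model: `generate` stands in for any completion backend (for example, a llama-cpp-python call against the GGUF file); the stub in the demo merely echoes the step so the chaining logic itself is runnable.

```python
def reasoning_chain(question, steps, generate):
    """Run a sequence of reasoning steps, feeding each step's
    output back into the context for the next prompt.

    `generate` is any callable mapping a prompt string to a
    completion string (e.g. a wrapped llama.cpp call)."""
    context = question
    outputs = []
    for step in steps:
        prompt = f"{context}\n\nNext step: {step}"
        answer = generate(prompt)
        outputs.append(answer)
        # Accumulate the exchange so later steps see earlier answers.
        context = f"{prompt}\n{answer}"
    return outputs


if __name__ == "__main__":
    # Stub backend: echoes the final prompt line instead of calling a model.
    stub = lambda prompt: f"[model output for: {prompt.splitlines()[-1]}]"
    for answer in reasoning_chain(
        "If all A are B, and some B are C, does it follow that some A are C?",
        ["Restate the premises.", "Check the conclusion against the premises."],
        stub,
    ):
        print(answer)
```

In a real application, `generate` would wrap a call to the GGUF model served by llama.cpp or Ollama; the loop structure stays the same.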
## How to Use

### Using with llama.cpp

Download the GGUF model and load it with llama.cpp:
```bash
# Download the model
wget https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf/resolve/main/model.gguf

# Run with llama.cpp
./main -m model.gguf -p "Solve the following logical problem: If all A are B, and some B are C, does it follow that some A are C?"
```
### Using with Ollama

You can use this model directly with Ollama, which provides a seamless way to interact with GGUF models:

```bash
ollama run hf.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf
```
To select a specific quantization, append its tag:

```bash
ollama run hf.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf:Q8_0
```
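Ollama can also build a named local model from a Modelfile, which lets you pin a system prompt and sampling parameters. A minimal sketch, assuming the Q8_0 tag above is available on the Hub (the temperature value and system prompt here are illustrative choices, not the model's defaults):

```
FROM hf.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf:Q8_0
PARAMETER temperature 0.6
SYSTEM You are a careful logical reasoner. Work through each problem step by step.
```

Build and run it with `ollama create reasoning-grpo -f Modelfile`, then `ollama run reasoning-grpo`.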
For more details on Ollama usage, refer to the Ollama documentation.
## Citation

**BibTeX:**

```bibtex
@misc{llama3.1-8b-reasoning-grpo-gguf,
  author = {Azzedde},
  title  = {Llama3.1-8B-Reasoning-GRPO-GGUF: A Logical Reasoning LLM in GGUF Format},
  year   = {2025},
  url    = {https://huggingface.co/Azzedde/llama3.1-8b-reasoning-grpo-gguf}
}
```
**Contact:** Azzedde on Hugging Face