MathBite/self_corrective_llama_3.1_8B

This is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct, trained to detect and mitigate hallucinations in its own generated text.

How to Use

Because this model uses a custom architecture, you must pass trust_remote_code=True when loading it.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MathBite/self_corrective_llama_3.1_8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# trust_remote_code=True lets transformers load the custom architecture
# code shipped in the model repository.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True
)
Model size: 8.21B parameters (Safetensors; tensor types F16, F32, U8)