GGUF

This model is the official GGUF version of [Pleias-Pico](https://huggingface.co/PleIAs/Pleias-Pico).

The conversion is unquantized and should yield the same generation quality as the original model.

Model size: 353M params
Architecture: llama
Precision: 16-bit

Base model: [PleIAs/Pleias-Pico](https://huggingface.co/PleIAs/Pleias-Pico)
