altomek/Personal_4B-GGUF
Transformers · GGUF · License: other
Personal_4B

GGUF quants of https://huggingface.co/jeiku/Personal_4B
Downloads last month: 38
Model size: 4.51B params
Architecture: llama
4-bit:
  Q4_0: 2.65 GB
  Q4_0: 2.66 GB
  Q4_0: 2.65 GB
  Q4_0: 2.65 GB
8-bit:
  Q8_0: 4.8 GB
  Q8_0: 4.8 GB
Inference Providers: this model isn't deployed by any Inference Provider (the model has no pipeline_tag).
Model tree for altomek/Personal_4B-GGUF:
  Base model: IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
  Finetuned: jeiku/completion4B
  Finetuned: jeiku/instructered4B
  Finetuned: FourOhFour/Crispy_Crab_4B
  Finetuned: jeiku/Personal_4B
  Quantized (4): this model
Collection including altomek/Personal_4B-GGUF:
  Quants for ARM (Collection, 11 items, updated Jan 4)