This is a reupload of the Q4_K_M quantized GGUF version of Google's Gemma 3 1B, as quantized by Unsloth. It is used for benchmarking llama.cpp with CUDA support on the Jetson Nano.
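As a sketch of the benchmarking workflow described above (the GGUF filename and the llama.cpp build options are assumptions, not taken from this card — check the repository's file list), a run on the Jetson Nano might look like:

```shell
# Download the quantized model from the Hub (filename is assumed)
huggingface-cli download kreier/gemma3-1b gemma3-1b-Q4_K_M.gguf --local-dir ./models

# Build llama.cpp with CUDA support for the Jetson's GPU
git clone https://github.com/ggerganov/llama.cpp
cmake -B llama.cpp/build llama.cpp -DGGML_CUDA=ON
cmake --build llama.cpp/build --config Release -j

# Benchmark prompt processing (-p) and token generation (-n),
# offloading all layers to the GPU (-ngl 99)
./llama.cpp/build/bin/llama-bench -m ./models/gemma3-1b-Q4_K_M.gguf -ngl 99 -p 512 -n 128
```

`llama-bench` reports tokens per second for the prompt-processing and generation phases separately, which is what makes the Q4_K_M quantization's speed on the Nano directly comparable across builds.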

Format: GGUF
Model size: 1,000M params
Architecture: gemma3

Model tree for kreier/gemma3-1b: one of 109 quantizations of the base model.