John Leimgruber III (ubergarm)
AI & ML interests: Open LLMs and astrophotography image processing.
Recent Activity
Liked a model about 7 hours ago: unsloth/gemma-3-27b-it-GGUF
New activity 1 day ago in KVCache-ai/DeepSeek-R1-GGML-FP8-Hybrid: Documentation
New activity 1 day ago in unsloth/QwQ-32B-unsloth-bnb-4bit: How to use the bnb-4bit model?
Organizations: None yet
ubergarm's activity
Documentation · #2 opened 1 day ago by ubergarm
How to use the bnb-4bit model? (10) · #4 opened 5 days ago by neoragex2002
My 3090 TI with 24 GB of VRAM runs it very happily! Thanks to the dev team! (6) · #11 opened 8 days ago by ubergarm
Q4KS (4) · #4 opened 5 days ago by Alastar-Smith
DeepSeek-R1-UD-IQ1_M-FP8: Support and Perf Results on v0.2.3 (3) · #1 opened 7 days ago by shawxysu
Optimal `weight_block_size` for Intel AMX `amx_int8` `amx_tile`? (1) · #17 opened 4 days ago by ubergarm
How about deepseek v3 model? (1) · #15 opened 6 days ago by JohnnyBoyzzz
Something wrong (12) · #3 opened 8 days ago by wcde
What languages were you trained in? (2) · #7 opened 8 days ago by NickyNicky
Cannot Run `unsloth/DeepSeek-R1-GGUF` Model – Missing `configuration_deepseek.py` (2) · #32 opened 27 days ago by syrys4750
No think tokens visible (6) · #15 opened about 1 month ago by sudkamath
Over 2 tok/sec agg backed by NVMe SSD on 96GB RAM + 24GB VRAM AM5 rig with llama.cpp (9) · #13 opened about 1 month ago by ubergarm
Got it running after downloading some RAM! (4) · #7 opened about 1 month ago by ubergarm
Over 128k context on 1x 3090 TI FE 24GB VRAM! · #1 opened about 1 month ago by ubergarm
Inference speed (2) · #9 opened about 1 month ago by Iker
Control over output (1) · #12 opened about 2 months ago by TeachableMachine
Emotions (2) · #3 opened about 2 months ago by jujutechnology
What advantage does this have over normal algorithmic ways of turning HTML to Markdown? (5) · #5 opened about 2 months ago by MohamedRashad