Ben Shankles (warshanks) PRO
AI & ML interests: MLX, AWQ, GPTQ / AI in Healthcare
README Typo? (2 replies, #8, opened 2 months ago by warshanks)
Update with vision support (👍 2, 4 replies, #3, opened 4 months ago by warshanks)
Official llama.cpp support merged (1 reply, #1, opened 5 months ago by warshanks)
Sensitive to Quantization (2 replies, #1, opened 7 months ago by warshanks)
Improve model card: Add pipeline tag, correct base model, and add code/project links (2 replies, #1, opened 7 months ago by nielsr)
Issue with llama.cpp (18 replies, #3, opened 7 months ago by wsbagnsv1)
Avoid demo to be embedded in other sites (#8, opened 7 months ago by osanseviero)
Feature Request: Disable reasoning (👀 1, 3 replies, #22, opened 8 months ago by SomAnon)
Quantization Script (2 replies, #1, opened 8 months ago by kawchar85)
Model size? (2 replies, #1, opened 8 months ago by warshanks)
Convert in bf16 or fp16? (2 replies, #2, opened 8 months ago by remember2015)
Missing preprocessor_config.json (5 replies, #2, opened 10 months ago by warshanks)
tokenizer_config.json is not correct (12 replies, #1, opened 10 months ago by depasquale)
chat_template in tokenizer_config.json? (1 reply, #1, opened 10 months ago by nff)
mlx-community/medgemma-27b-text-it-bf16 is entirely broken on mlx-lm (👍 1, 8 replies, #1, opened 10 months ago by sjug)
Convert model with mlx-vlm instead of mlx-lm to enable vision capabilities (3 replies, #1, opened 10 months ago by ljoana)