Matthew Hendrey (mrhendrey)
AI & ML interests: None yet
Recent Activity
- updated a model 2 days ago: mrhendrey/X-ALMA-13B-Lora-Adapters
- published a model 2 days ago: mrhendrey/X-ALMA-13B-Lora-Adapters
- updated a model 3 days ago: mrhendrey/X-ALMA-13B-Pretrain-FP8-dynamic
Organizations: None yet
mrhendrey's activity
- Model only outputs "!!!!!!!!!!" (1 reply), #1 opened 3 months ago by mrhendrey
- VRAM consumption when using GPU (CUDA) (3 replies), #37 opened 9 months ago by Sunjay353
- Batch: inefficient memory (1 reply), #50 opened 8 months ago by SinanAkkoyun
- Any chance your team is working on a 4-bit Llama-3.2-90B-Vision-Instruct-quantized.w4a16 version? (2 replies), #1 opened 6 months ago by mrhendrey