Active filters: gptq
Model • Task • Downloads • Likes
astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit • Text Generation • 10.5k • 25
neuralmagic/Mistral-7B-Instruct-v0.3-GPTQ-4bit • Text Generation • 1.8k • 18
allganize/Llama-3-Alpha-Ko-8B-Instruct-marlin • Text Generation • 12 • 5
Qwen/Qwen2-7B-Instruct-GPTQ-Int4 • Text Generation • 2.26k • 24
neuralmagic/Meta-Llama-3-70B-Instruct-quantized.w8a16 • Text Generation • 464 • 4
pentagoniac/SEMIKONG-8b-GPTQ • Text Generation • 982 • 25
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 1.29k • 4
neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 • Text Generation • 19.5k • 29
Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4 • Image-Text-to-Text • 127k • 30
Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int4 • Image-Text-to-Text • 650k • 19
Qwen/Qwen2-VL-2B-Instruct-GPTQ-Int8 • Image-Text-to-Text • 5.18k • 13
Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4 • Image-Text-to-Text • 167k • 23
Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int8 • Image-Text-to-Text • 4.18k • 10
Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4 • Text Generation • 15.6k • 12
Qwen/Qwen2.5-7B-Instruct-GPTQ-Int8 • Text Generation • 23.7k • 12
Qwen/Qwen2.5-14B-Instruct-GPTQ-Int4 • Text Generation • 16.6k • 14
Qwen/Qwen2.5-14B-Instruct-GPTQ-Int8 • Text Generation • 50.9k • 14
Qwen/Qwen2.5-32B-Instruct-GPTQ-Int4 • Text Generation • 13.9k • 24
Qwen/Qwen2.5-72B-Instruct-GPTQ-Int4 • Text Generation • 26k • 31
Qwen/Qwen2.5-72B-Instruct-GPTQ-Int8 • Text Generation • 4.78k • 17
IntelLabs/sqft-phi-3.5-mini-instruct-base-gptq • Text Generation • 128 • 1
Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int8 • Text Generation • 14.2k • 16
Qwen/Qwen2.5-Coder-32B-Instruct-GPTQ-Int4 • Text Generation • 24k • 12
ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 184 • 13
Xu-Ouyang/FloatLM_2.4B-int2-GPTQ-wikitext2 • Text Generation • 81 • 1
Almheiri/Llama-3.2-1B-Instruct-GPTQ-INT4 • - • 50 • 1
MTSAIR/Cotype-Nano-CPU • Text Generation • 446 • 14
akhbar/QwQ-32B-Preview-abliterated-4bit-128g-actorder_True-GPTQ • Text Generation • 9 • 2
shuyuej/Llama-3.3-70B-Instruct-GPTQ • - • 1.58k • 5
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v2 • Text Generation • 1.42k • 16
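For context, the checkpoints above are GPTQ-quantized weights, which can typically be loaded through the standard transformers GPTQ integration (backed by optimum together with auto-gptq or gptqmodel). Below is a minimal sketch, assuming those packages plus accelerate are installed and a CUDA GPU is available; the model id is one of the text-generation entries from the list, and the prompt is only illustrative.

```python
# Minimal sketch: loading a GPTQ checkpoint from the Hub with transformers.
# Assumes: transformers with GPTQ support (optimum + auto-gptq or gptqmodel),
# accelerate for device_map="auto", and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct-GPTQ-Int4"  # any GPTQ entry from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config is read from the checkpoint; no extra arguments needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain GPTQ quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same ids also work with inference servers that support GPTQ kernels (for example vLLM), but the exact flags depend on the serving stack, so treat this snippet as a starting point rather than a definitive recipe.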