NM Testing (company)
AI & ML interests: none defined yet.

Recent Activity:
- nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors • Text Generation • 8B • Updated • 2 • 1
- nm-testing/Meta-Llama-3-8B-Instruct-FBGEMM-nonuniform • Text Generation • 8B • Updated • 2
- nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test • Text Generation • 8B • Updated • 2.03k
- nm-testing/Meta-Llama-3-8B-Instruct-W8-Channel-A8-Dynamic-Asym-Per-Token-Test • 8B • Updated • 48 • 1

Collection of State-of-the-art FP8 Block Quantized Models

Models (483):
- nm-testing/Llama-3.1-8B-Instruct-QKV-Cache-FP8 • 8B • Updated
- nm-testing/Llama-3.1-8B-Instruct-KV-Cache-FP8 • 8B • Updated • 65
- nm-testing/TinyLlama-1.1B-Chat-v1.0-W4A16_2of4-e2e • 0.3B • Updated • 153
- nm-testing/TinyLlama-1.1B-Chat-v1.0-W4A16_2of4_channel-e2e • 0.3B • Updated • 143
- nm-testing/TinyLlama-1.1B-Chat-v1.0-sparse2of4_only-e2e • 0.7B • Updated • 121
- nm-testing/TinyLlama-1.1B-Chat-v1.0-sparse2of4_fp8_dynamic-e2e • 0.7B • Updated • 117
- nm-testing/TinyLlama-1.1B-Chat-v1.0-kv_cache_default_tinyllama-e2e • 1B • Updated • 100
- nm-testing/TinyLlama-1.1B-Chat-v1.0-kv_cache_default_gptq_tinyllama-e2e • 0.3B • Updated • 119
- nm-testing/TinyLlama-1.1B-Chat-v1.0-W8A8_tensor_weight_static_per_tensor_act-e2e • 1B • Updated • 112
- nm-testing/TinyLlama-1.1B-Chat-v1.0-W8A8_channel_weight_static_per_tensor-e2e • 1B • Updated • 131
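
For orientation, below is a minimal sketch of how a compressed-tensors quantized checkpoint from this listing might be loaded for inference with vLLM. It assumes a recent vLLM install that supports the compressed-tensors format; the prompt and sampling settings are illustrative only, not part of the repositories above.

# Minimal sketch (assumption: recent vLLM with compressed-tensors support installed).
from vllm import LLM, SamplingParams

# vLLM typically picks up the quantization config stored in the checkpoint itself
# (FP8 weights here), so no extra quantization flags are passed.
llm = LLM(model="nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test")

params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate(["Explain FP8 weight quantization in one sentence."], params)
print(outputs[0].outputs[0].text)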