vllm (pretrained=/root/autodl-tmp/Qwen2.5-Coder-14B-Instruct-abliterated,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 5

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.872|±  |0.0212|
|     |       |strict-match    |     5|exact_match|↑  |0.868|±  |0.0215|

vllm (pretrained=/root/autodl-tmp/output91,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 5

|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.872|±  |0.0212|
|     |       |strict-match    |     5|exact_match|↑  |0.872|±  |0.0212|
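The headers above are the summary lines emitted by lm-evaluation-harness with its vLLM backend. As a hedged sketch, a command along these lines should reproduce the second run; the local model path is copied from the header, and the exact harness version installed is an assumption:

```shell
# Sketch: re-run the GSM8K eval via lm-evaluation-harness's vLLM backend.
# Assumes lm-eval (with the vllm extra) is installed and two GPUs are available.
lm_eval --model vllm \
  --model_args pretrained=/root/autodl-tmp/output91,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5 \
  --tasks gsm8k \
  --num_fewshot 5 \
  --limit 250 \
  --batch_size 5
```

`limit: 250` means only the first 250 GSM8K test examples are scored, which is why the reported stderr (± ~0.021) is noticeably wider than full-test-set runs.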
