This model is a finetune of jondurbin's excellent bagel model. It was trained with new datasets and a new technique, which we will share with the community soon. No form of merging was used to create this model.

Evaluation Results

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---------|-------|-----------|-------|------------|------------|-------|
| 77.29 | 74.23 | 86.76 | 76.66 | 70.22 | 83.66 | 72.18 |
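As a sanity check on the table above, the Average column is simply the arithmetic mean of the six benchmark scores. A minimal sketch using the values copied from the table:

```python
# Benchmark scores copied from the evaluation table above:
# ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K.
scores = [74.23, 86.76, 76.66, 70.22, 83.66, 72.18]

# Mean of the six scores; the card reports this rounded to 77.29.
average = sum(scores) / len(scores)
print(average)
```

The exact mean is 77.285, which rounds to the reported 77.29.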

Contamination Results

With reference model jondurbin/bagel-34b-v0.2:

| ARC | TruthfulQA | GSM8K |
|------|------------|-------|
| 0.08 | 0.38 | 0.88 |

Vanilla quantization by nold; original model: abacusai/Smaug-34B-v0.1. Created using the llm-quantizer pipeline - 465d7970507dcaac4cb50221157a68c840965774

GGUF
Model size: 34.4B params
Architecture: llama

Available quantizations: 2-bit, 4-bit, 5-bit, 6-bit, 8-bit
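A rough way to relate these bit widths to download size: a 34.4B-parameter model stored at b bits per weight needs about 34.4e9 × b / 8 bytes. The sketch below is an estimate only; it ignores GGUF metadata overhead and the mixed-precision layers that real k-quant formats use, so actual file sizes will differ somewhat.

```python
PARAMS = 34.4e9  # parameter count from the model card

def approx_size_gb(bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: params * bits_per_weight / 8 bytes.

    Ignores metadata overhead and mixed-precision quant layers, so this
    is only a ballpark figure, not the actual file size.
    """
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (2, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

By this estimate the 4-bit variant lands around 17 GB and the 8-bit variant around 34 GB.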

