Model Card

Pythia-70m-deduped fine-tuned on a cleaned version of the ShareGPT dataset.
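A minimal inference sketch, assuming the checkpoint is hosted on the Hugging Face Hub under the repo id shown on this card and loads with the standard `transformers` auto classes (this is an illustrative example, not part of the original card):

```python
# Hypothetical usage sketch: load the model from the Hub and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HWERI/pythia-70m-deduped-cleansharegpt"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion for a prompt.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

Note that a 70M-parameter model produces far weaker completions than larger chat models; this card's evaluation scores below reflect that.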

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 25.34 |
| ARC (25-shot)       | 25.68 |
| HellaSwag (10-shot) | 25.4  |
| MMLU (5-shot)       | 23.12 |
| TruthfulQA (0-shot) | 51.15 |
| Winogrande (5-shot) | 52.01 |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 0.0   |
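The reported average can be reproduced from the per-task scores above; a quick check, assuming "Avg." is the unweighted mean over the seven tasks:

```python
# Per-task scores from the leaderboard table above.
scores = {
    "ARC": 25.68,
    "HellaSwag": 25.4,
    "MMLU": 23.12,
    "TruthfulQA": 51.15,
    "Winogrande": 52.01,
    "GSM8K": 0.0,
    "DROP": 0.0,
}

# Unweighted mean across all seven tasks.
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 25.34, matching the "Avg." row
```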
Model size: 70.4M params · Tensor type: F32 · Format: Safetensors

Dataset used to train HWERI/pythia-70m-deduped-cleansharegpt