We have Llama-3 at home!

Highest MMLU and Winogrande scores among Phi-3-Mini based models on the leaderboard!

The model was trained on filtered versions of tagged datasets, plus a few thousand additional examples generated with Llama-3-70B.

Use the Zephyr chat template with any system message. The default system message should be:

You are a smart, friendly and helpful assistant.
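
Below is a minimal sketch of how a Zephyr-style prompt could be built and run with `transformers`. The repo id is taken from this page; whether the bundled tokenizer ships a Zephyr chat template is an assumption, so a hand-built fallback prompt is included as a comment. The example user message is hypothetical.

```python
# Minimal sketch (assumes the tokenizer bundles a Zephyr-style chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ba2han/Llama-Phi-3_DoRA"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a smart, friendly and helpful assistant."},
    {"role": "user", "content": "Summarize what the Zephyr prompt format looks like."},
]

# Preferred path: let the tokenizer apply its own chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Fallback: hand-build the Zephyr format if no template is bundled.
# prompt = (
#     "<|system|>\nYou are a smart, friendly and helpful assistant.</s>\n"
#     "<|user|>\nSummarize what the Zephyr prompt format looks like.</s>\n"
#     "<|assistant|>\n"
# )

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```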


Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 67.72 |
| AI2 Reasoning Challenge (25-shot) | 62.29 |
| HellaSwag (10-shot)               | 79.08 |
| MMLU (5-shot)                     | 69.44 |
| TruthfulQA (0-shot)               | 54.08 |
| Winogrande (5-shot)               | 73.40 |
| GSM8k (5-shot)                    | 68.01 |
Model size: 3.82B params (BF16, Safetensors)
