This model uses Llama 2 7B as its backbone and merges checkpoints fine-tuned on several Orca-family datasets.

Three fine-tuned models were combined, with the highest merge weight given to the model with the best ARC and MMLU scores (see the merge sketch at the end of this card).

First: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k using the NEFTune method (see the sketch after this list).

Second: Llama 2 7B fine-tuned on the SlimOrca dataset.

Third: Llama 2 7B fine-tuned on beaugogh/openorca-multiplechoice-10k without NEFTune.
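
For reference, here is a minimal sketch of how the NEFTune fine-tuning step could be reproduced with TRL's `SFTTrainer`, which exposes NEFTune via `neftune_noise_alpha`. The base model id, output directory, noise alpha, and dataset column names below are assumptions for illustration, not the settings actually used.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load the multiple-choice Orca subset used for this variant.
dataset = load_dataset("beaugogh/openorca-multiplechoice-10k", split="train")

# Hypothetical column names ("question", "answer"); adapt to the dataset's actual schema.
dataset = dataset.map(lambda ex: {"text": f"{ex['question']}\n{ex['answer']}"})

# neftune_noise_alpha enables NEFTune: uniform noise is added to the embedding
# outputs during training. The alpha value here is illustrative.
args = SFTConfig(
    output_dir="llama2-7b-orca-mc-neftune",
    dataset_text_field="text",
    neftune_noise_alpha=5.0,
)

trainer = SFTTrainer(
    model="meta-llama/Llama-2-7b-hf",  # assumed base backbone
    args=args,
    train_dataset=dataset,
)
trainer.train()
```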

We will add the benchmark results once the official evaluations are available.
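
The merge itself can be sketched as a weighted average of the checkpoints' parameters. This is a minimal sketch assuming all three checkpoints share the Llama 2 7B architecture; the checkpoint paths and merge weights are illustrative placeholders (the actual ratios were chosen from ARC and MMLU results and are not published here).

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative paths and weights; the model with the best ARC/MMLU scores
# receives the largest weight. Weights should sum to 1.0.
checkpoints = {
    "path/to/neftune-openorca-mc": 0.5,
    "path/to/slimorca": 0.25,
    "path/to/openorca-mc": 0.25,
}

# Accumulate a weighted average of all parameters in fp32 for precision.
# Note: this holds a full 7B fp32 state dict in memory, so it needs ample RAM.
merged = None
for path, weight in checkpoints.items():
    model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16)
    state = model.state_dict()
    if merged is None:
        merged = {k: v.float() * weight for k, v in state.items()}
    else:
        for k, v in state.items():
            merged[k] += v.float() * weight
    del model, state

# Load the averaged weights into one model and save the merged checkpoint in fp16.
base = AutoModelForCausalLM.from_pretrained(
    "path/to/neftune-openorca-mc", torch_dtype=torch.float16
)
base.load_state_dict({k: v.half() for k, v in merged.items()})
base.save_pretrained("llama2_7b_merge_orcafamily")
```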
