# Llama3.1-SuperDeepFuse
An 8B-parameter language model that merges three high-performance distilled models to strengthen reasoning, instruction following, and performance on mathematics and coding tasks.
## Model Highlights
- Size: 8 billion parameters
- Base: meta-llama/Llama-3.1-8B-Instruct
- Merged sources: three distilled models (not named in this card)
- Merge method: `model_stock` (a minimal sketch follows this list)
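For readers unfamiliar with `model_stock` (Model Stock, Jang et al., 2024), the sketch below shows the core merge rule as it is commonly implemented: the fine-tuned copies of each weight tensor are averaged, then interpolated back toward the base model using a ratio derived from the angle between the models' task vectors. This is an illustrative assumption about the method, not the exact mergekit recipe used to build this model.

```python
# A minimal sketch of the model_stock merge rule, applied to one weight
# tensor at a time. Illustrative only, not this model's actual recipe.
import itertools
import torch

def model_stock_merge(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge fine-tuned copies of one weight tensor with their shared base."""
    # Task vectors: how far each fine-tuned model moved from the base.
    deltas = [(w - base).flatten() for w in tuned]
    # Average pairwise cosine similarity between task vectors.
    cos = torch.stack([
        torch.nn.functional.cosine_similarity(a, b, dim=0)
        for a, b in itertools.combinations(deltas, 2)
    ]).mean()
    # Interpolation ratio from the Model Stock paper, for k fine-tuned models.
    k = len(tuned)
    t = k * cos / (1 + (k - 1) * cos)
    # Pull the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

The intuition behind the ratio: the more the fine-tuned models agree with each other (higher cosine similarity between task vectors), the more weight their average receives relative to the base.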
## Key Capabilities
- Enhanced multi-task reasoning
- Improved mathematical and coding performance
- Multilingual support
## Performance Notes
- Maintains Llama 3.1 safety standards
- Suitable for consumer-GPU deployment (see the loading example after this list)
- Balanced performance across diverse tasks
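As a concrete starting point, here is a minimal loading-and-generation sketch using Hugging Face transformers. The prompt is only a placeholder; in `bfloat16` the 8B weights take roughly 16 GB, so smaller cards may need quantization (e.g. bitsandbytes 4-bit).

```python
# A minimal sketch of single-GPU inference with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "agentlans/Llama3.1-SuperDeepFuse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~16 GB of weights for an 8B model
    device_map="auto",           # place layers on the available GPU(s)
)

messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```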
## Considerations
- Benchmarking is still in progress
- Capabilities are more limited than those of larger model variants
- Like all language models, it can produce misleading or incorrect output
- Outputs should be independently verified
## Licensing
Follows the standard Llama 3.1 usage terms.
## Open LLM Leaderboard Evaluation Results
Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Average | 27.30 |
| IFEval (0-shot) | 77.62 |
| BBH (3-shot) | 29.22 |
| MATH Lvl 5 (4-shot) | 17.75 |
| GPQA (0-shot) | 3.24 |
| MuSR (0-shot) | 5.13 |
| MMLU-PRO (5-shot) | 30.83 |
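The per-task scores above can in principle be re-run locally with EleutherAI's lm-evaluation-harness. The sketch below assumes a recent harness release where the `simple_evaluate` entry point and Open LLM Leaderboard v2 task aliases such as `leaderboard_ifeval` are available; names may differ between versions.

```python
# A hedged sketch of re-running one leaderboard metric with
# EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The task alias follows the Open LLM Leaderboard v2 naming and
# may vary between harness versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=agentlans/Llama3.1-SuperDeepFuse,dtype=bfloat16",
    tasks=["leaderboard_ifeval"],  # 0-shot IFEval, matching the table above
    batch_size="auto",
)
print(results["results"])
```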