Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks

This repository contains model checkpoints from the paper Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks.

For more details, including code and evaluation procedures, please refer to the official GitHub repository: https://github.com/rioyokotalab/optimal-sparsity
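For a quick start, the checkpoint can likely be loaded with the Hugging Face `transformers` library. This is a minimal sketch, assuming the released architecture is compatible with `AutoModelForCausalLM`; for the exact evaluation setup, follow the official repository linked above.

```python
# Sketch of loading this checkpoint with Hugging Face transformers.
# Assumption: the model is compatible with AutoModelForCausalLM; the
# authoritative usage is in the official GitHub repository.

MODEL_ID = "llm-jp/optimal-sparsity-math-d1024-E8-k8-1.1B-A1.1B"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for `prompt` using the released checkpoint."""
    # Imports are kept inside the function so the heavy dependencies
    # (transformers, torch) are only required when actually generating.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # The checkpoint is stored in BF16, so load it in that dtype.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("What is 12 * 7?"))
```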

How to cite

If you find our work helpful, please cite the paper:

```bibtex
@article{nakamura2025optimalsparsitymixtureofexpertslanguage,
  title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
  author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
  year={2025},
  eprint={2508.18672},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2508.18672},
}
```
Model details

Format: Safetensors
Model size: 1.08B params
Tensor type: BF16

This checkpoint is published as llm-jp/optimal-sparsity-math-d1024-E8-k8-1.1B-A1.1B as part of a collection of models from the paper.