---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# GroveMoE-Base

<p align="left">
πŸ€— <a href="https://huggingface.co/collections/inclusionAI/grovemoe-68a2b58acbb55827244ef664">Models</a>&nbsp;&nbsp; | &nbsp;&nbsp; πŸ“‘ <a href="https://arxiv.org/abs/2508.07785">Paper</a>&nbsp;&nbsp; | &nbsp;&nbsp; πŸ”— <a href="https://github.com/inclusionAI/GroveMoE">Github</a>
</p>


## Highlights

We introduce **GroveMoE**, a new sparse Mixture-of-Experts architecture that uses **adjugate experts** for dynamic computation allocation. Key highlights:

- **Architecture**: Novel **adjugate experts** grouped with ordinary experts; shared computation is executed once per group and then reused, cutting FLOPs (see the conceptual sketch after this list).
- **Sparse Activation**: 33B total parameters, with only **3.14–3.28B** activated per token.
- **Training**: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.
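
To make the grouping idea concrete, below is a minimal, heavily simplified PyTorch sketch of the pattern described above: each group of ordinary experts shares one small adjugate expert whose output is computed at most once per token and reused by every selected expert in that group. All module names, sizes, and the per-token loop are illustrative assumptions, not the released implementation; see the paper and GitHub repository for the actual architecture.

```python
# Conceptual sketch only -- NOT the released GroveMoE implementation.
# Assumption: each group of ordinary experts shares one small "adjugate"
# expert whose output is computed once per (token, group) and reused by
# every selected expert in that group, which is what saves FLOPs.
import torch
import torch.nn as nn


class Expert(nn.Module):
    """A plain feed-forward expert (simplified for illustration)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class GroveMoESketch(nn.Module):
    """Illustrative layer: n_groups adjugate experts, each shared by
    experts_per_group ordinary experts. Routing is deliberately naive."""

    def __init__(self, d_model=256, d_ff=512, d_ff_adj=128,
                 n_groups=4, experts_per_group=4, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.experts_per_group = experts_per_group
        self.experts = nn.ModuleList(
            Expert(d_model, d_ff) for _ in range(n_groups * experts_per_group)
        )
        # One smaller adjugate expert per group; its cost is paid at most once
        # per token even if several experts from the same group are selected.
        self.adjugate = nn.ModuleList(Expert(d_model, d_ff_adj) for _ in range(n_groups))
        self.router = nn.Linear(d_model, n_groups * experts_per_group, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        gates = self.router(x).softmax(dim=-1)
        weights, indices = gates.topk(self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):
            shared = {}  # group id -> cached adjugate output for this token
            for w, e in zip(weights[t], indices[t]):
                g = int(e) // self.experts_per_group
                if g not in shared:              # shared computation runs once
                    shared[g] = self.adjugate[g](x[t])
                out[t] += w * (self.experts[int(e)](x[t]) + shared[g])
        return out


if __name__ == "__main__":
    layer = GroveMoESketch()
    tokens = torch.randn(8, 256)
    print(layer(tokens).shape)  # torch.Size([8, 256])
```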

## Model Downloads


| **Model** | **#Total Params** | **#Activated Params** | **Download** |
|:---------:|:-----------------:|:---------------------:|:------------:|
| GroveMoE-Base | 33B | 3.14–3.28B | [πŸ€— HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Base) |
| GroveMoE-Inst | 33B | 3.14–3.28B | [πŸ€— HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Inst) |
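
A minimal text-generation sketch with πŸ€— Transformers is shown below. The model id comes from the table above; the dtype/device settings and the `trust_remote_code` flag are assumptions that may need adjusting for your environment.

```python
# Minimal generation sketch with the Hugging Face transformers library.
# dtype/device settings and trust_remote_code are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inclusionAI/GroveMoE-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",       # use the checkpoint's native precision
    device_map="auto",        # spread the 33B-parameter model across devices
    trust_remote_code=True,   # GroveMoE may ship custom modeling code
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```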

## Citation
```bibtex
@article{GroveMoE,
title = {GroveMoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts},
author = {Wu, Haoyuan and Chen, Haoxing and Chen, Xiaodong and Zhou, Zhanchao and Chen, Tieyuan and Zhuang, Yihong and Lu, Guoshan and Zhao, Junbo and Liu, Lin and Huang, Zenan and Lan, Zhenzhong and Yu, Bei and Li, Jianguo},
journal = {arXiv preprint arXiv:2508.07785},
year = {2025}
}
```