Please refer to the SA (Self-Adjust Softmax) paper and our GitHub repository for details on using this model.

To use this model checkpoint, you must install the `transformers-4.38.0.post1+sepllm-py3-none-any.whl` package released in our GitHub repository. Below are a reference script for testing and a sample of the test results. Testing was conducted with `lm_eval==0.4.0`.
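Once the wheel is installed, the checkpoint can also be loaded directly in Python. A minimal sketch, assuming the standard `transformers` auto-class API applies to this checkpoint (the helper name and prompt below are only illustrative):

```python
# Minimal usage sketch. Assumptions: the SepLLM-patched transformers wheel is
# installed, a CUDA device is available, and the checkpoint loads through the
# standard AutoModel API (the model card lists F16 weights).
def generate_sample(prompt: str, max_new_tokens: int = 32) -> str:
    # Imports are inside the function so the patched wheel is only needed at call time.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Gausson/gpt-neox-125m-deduped-SA"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
    model = model.to("cuda:0").eval()

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example call (requires a CUDA device and the patched wheel):
# print(generate_sample("The capital of France is"))
```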

```shell
CUDA_LAUNCH_BLOCKING=1 lm_eval --model hf \
    --model_args pretrained=Gausson/gpt-neox-125m-deduped-SA \
    --tasks arc_challenge,arc_easy,lambada_openai,logiqa,piqa,sciq,winogrande,wsc,wikitext \
    --num_fewshot 5 \
    --device cuda:0 \
    --batch_size 32
```
hf (pretrained=Gausson/gpt-neox-125m-deduped-SA), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 32
|    Tasks     |Version|Filter|n-shot|    Metric     |   | Value  |   |Stderr|
|--------------|------:|------|-----:|---------------|---|-------:|---|------|
|arc_challenge |      1|none  |     5|acc            |↑  |  0.2022|±  |0.0117|
|              |       |none  |     5|acc_norm       |↑  |  0.2483|±  |0.0126|
|arc_easy      |      1|none  |     5|acc            |↑  |  0.4920|±  |0.0103|
|              |       |none  |     5|acc_norm       |↑  |  0.4672|±  |0.0102|
|lambada_openai|      1|none  |     5|acc            |↑  |  0.3313|±  |0.0066|
|              |       |none  |     5|perplexity     |↓  | 31.0203|±  |1.0441|
|logiqa        |      1|none  |     5|acc            |↑  |  0.2366|±  |0.0167|
|              |       |none  |     5|acc_norm       |↑  |  0.2473|±  |0.0169|
|piqa          |      1|none  |     5|acc            |↑  |  0.6442|±  |0.0112|
|              |       |none  |     5|acc_norm       |↑  |  0.6458|±  |0.0112|
|sciq          |      1|none  |     5|acc            |↑  |  0.8210|±  |0.0121|
|              |       |none  |     5|acc_norm       |↑  |  0.7960|±  |0.0127|
|wikitext      |      2|none  |     5|bits_per_byte  |↓  |  1.2551|±  |   N/A|
|              |       |none  |     5|byte_perplexity|↓  |  2.3868|±  |   N/A|
|              |       |none  |     5|word_perplexity|↓  |104.7921|±  |   N/A|
|winogrande    |      1|none  |     5|acc            |↑  |  0.5170|±  |0.0140|
|wsc           |      1|none  |     5|acc            |↑  |  0.4519|±  |0.0490|
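As a quick sanity check (not part of the released evaluation), the two per-byte wikitext metrics above are the same quantity on different scales: `bits_per_byte = log2(byte_perplexity)`. This can be verified from the reported values:

```python
# Sanity check: lm_eval's wikitext bits_per_byte and byte_perplexity report
# the same per-byte loss on different scales: bits_per_byte = log2(byte_perplexity).
import math

byte_perplexity = 2.3868  # reported above
bits_per_byte = 1.2551    # reported above

derived = math.log2(byte_perplexity)
print(f"log2(byte_perplexity) = {derived:.4f}")  # → 1.2551, matching bits_per_byte
assert abs(derived - bits_per_byte) < 1e-3
```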

If you find our work helpful, please consider giving our GitHub repository a star ⭐ and citing our papers. We greatly appreciate your support 😄

```bibtex
@inproceedings{chen2025sepllm,
  title={{SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator}},
  author={Chen, Guoxuan and Shi, Han and Li, Jiawei and Gao, Yihang and Ren, Xiaozhe and Chen, Yimeng and Jiang, Xin and Li, Zhenguo and Liu, Weiyang and Huang, Chao},
  booktitle={International Conference on Machine Learning},
  year={2025},
  note={Also available at arXiv:2412.12094}
}

@article{zheng2025selfadjust,
  title={{Self-Adjust Softmax}},
  author={Zheng, Chuanyang and Gao, Yihang and Chen, Guoxuan and Shi, Han and Xiong, Jing and Ren, Xiaozhe and Huang, Chao and Jiang, Xin and Li, Zhenguo and Li, Yu},
  year={2025},
  eprint={2502.18277},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```