Cheems: Wonderful Matrices More Efficient and More Effective Architecture
Abstract
Recent studies have shown that relative position encoding performs well in selective state space model scanning algorithms, that architectures balancing SSM and Attention improve both efficiency and effectiveness, and that the sparse activation of a mixture of experts reduces training cost. I studied the effectiveness of different position encodings in the structured state space dual (SSD) algorithm, examined a more effective SSD-Attn method for mixing internal and external functions, and designed a more efficient cross-domain mixture of experts. I found that the same matrices work wonderfully well across these different algorithms, which allows us to establish a new hybrid sparse architecture: Cheems. Compared with other hybrid architectures, it is more efficient and more effective in language modeling tasks.
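The abstract names three ingredients: a relative (rotary-style) position encoding used inside the SSD scan, a hybrid of SSD and Attention token mixers, and a sparsely activated cross-domain mixture of experts. The sketch below shows how such a hybrid block could be wired together in plain PyTorch. It is not the paper's implementation: the `rope` helper, the simplified gated linear recurrence standing in for the SSD scan, the shared-plus-routed expert layout standing in for the cross-domain MoE, and all class names, layer sizes, and the one-attention-layer-in-four ratio are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def rope(x):
    # Rotary position embedding applied to a (batch, seq, heads, head_dim) tensor.
    b, t, h, d = x.shape
    half = d // 2
    inv = 1.0 / (10000 ** (torch.arange(half, device=x.device) / half))
    ang = torch.arange(t, device=x.device)[:, None] * inv[None, :]
    cos, sin = ang.cos()[None, :, None, :], ang.sin()[None, :, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class SSDMixer(nn.Module):
    # Stand-in for the SSD block: a gated per-head linear recurrence over RoPE-rotated inputs.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.h, self.hd = heads, dim // heads
        self.in_proj = nn.Linear(dim, 2 * dim)
        self.decay = nn.Parameter(torch.zeros(heads, 1))
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, t, _ = x.shape
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        u = rope(u.view(b, t, self.h, self.hd))
        a = torch.sigmoid(self.decay)                      # per-head decay in (0, 1)
        state = torch.zeros(b, self.h, self.hd, device=x.device)
        ys = []
        for step in range(t):                              # naive O(t) scan, for clarity only
            state = a * state + u[:, step]
            ys.append(state)
        y = torch.stack(ys, dim=1).reshape(b, t, -1)
        return self.out_proj(y * F.silu(gate))


class Attention(nn.Module):
    # Causal self-attention sharing the same rotary encoding, used in a minority of layers.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.h, self.hd = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x):
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = rope(q.view(b, t, self.h, self.hd)).transpose(1, 2)
        k = rope(k.view(b, t, self.h, self.hd)).transpose(1, 2)
        v = v.view(b, t, self.h, self.hd).transpose(1, 2)
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.out_proj(y.transpose(1, 2).reshape(b, t, -1))


class CrossDomainMoE(nn.Module):
    # Assumed layout: one always-active shared expert plus top-1 routed private experts.
    def __init__(self, dim, n_experts=4):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
        self.shared = mlp()
        self.experts = nn.ModuleList(mlp() for _ in range(n_experts))
        self.router = nn.Linear(dim, n_experts)

    def forward(self, x):
        w = self.router(x).softmax(dim=-1)                 # (batch, seq, n_experts)
        top_w, top_i = w.max(dim=-1, keepdim=True)
        y = self.shared(x)
        for e, expert in enumerate(self.experts):
            mask = (top_i == e).float()                    # tokens routed to expert e
            y = y + mask * top_w * expert(x)
        return y


class CheemsBlock(nn.Module):
    # One hybrid layer: an SSD or attention token mixer followed by the sparse MoE.
    def __init__(self, dim, use_attention=False):
        super().__init__()
        self.n1, self.n2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mixer = Attention(dim) if use_attention else SSDMixer(dim)
        self.moe = CrossDomainMoE(dim)

    def forward(self, x):
        x = x + self.mixer(self.n1(x))
        return x + self.moe(self.n2(x))


# Toy usage: mostly SSD layers with an attention layer every fourth block.
model = nn.Sequential(*[CheemsBlock(64, use_attention=(i % 4 == 3)) for i in range(8)])
print(model(torch.randn(2, 16, 64)).shape)                 # torch.Size([2, 16, 64])
```

The intent is only to show the shape of the hybrid: the same rotary encoding is applied before both the SSD scan and the attention scores, linear-time SSD layers carry most of the depth with a few attention layers interleaved, and every layer's feed-forward path is a sparse mixture of experts with one always-active shared expert.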
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Retrieval Backward Attention without Additional Training: Enhance Embeddings of Large Language Models via Repetition (2025)
- Attention Condensation via Sparsity Induced Regularized Training (2025)
- Neural Attention: A Novel Mechanism for Enhanced Expressive Power in Transformer Models (2025)
- PolaFormer: Polarity-aware Linear Attention for Vision Transformers (2025)
- The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training (2025)
- Cross-Encoder Rediscovers a Semantic Variant of BM25 (2025)
- Linear Attention for Efficient Bidirectional Sequence Modeling (2025)