
# LUMINA-LexiR1-8B

**Model Fusion Architecture**

## Overview
LUMINA-LexiR1-8B is an experimental fusion of two parent language models, merged into a single 8B-parameter checkpoint.
## Architecture
This model employs a custom merging technique:
- Custom layer identification and integration
- DARE (Drop And REscale) applied to the parents' task vectors; a minimal sketch follows this list
- TIES merging (trim, elect sign, merge) applied to adjacent layers
- Enhanced self-awareness capabilities
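
The sketch below illustrates how a DARE + TIES style merge can be composed at the tensor level. The drop rate, helper names, and the exact combination order are illustrative assumptions, not the recipe actually used for this checkpoint.

```python
import torch

def dare_delta(base: torch.Tensor, tuned: torch.Tensor, drop_rate: float = 0.9) -> torch.Tensor:
    """DARE step: drop a random fraction of the task vector, rescale the survivors."""
    delta = tuned - base
    keep_mask = torch.rand_like(delta) >= drop_rate      # drop `drop_rate` of the entries
    return delta * keep_mask / (1.0 - drop_rate)          # rescale to preserve the expected delta

def ties_sign_merge(deltas: list[torch.Tensor]) -> torch.Tensor:
    """TIES-style step: elect the dominant sign per parameter, then average
    only the deltas that agree with the elected sign."""
    stacked = torch.stack(deltas)
    elected_sign = torch.sign(stacked.sum(dim=0))
    agrees = torch.sign(stacked) == elected_sign
    summed = (stacked * agrees).sum(dim=0)
    counts = agrees.sum(dim=0).clamp(min=1)
    return summed / counts

def merge_state_dicts(base_sd: dict, parent_sds: list[dict], drop_rate: float = 0.9) -> dict:
    """Apply DARE to each parent's delta, combine with TIES sign election,
    and add the merged delta back onto the base weights."""
    merged = {}
    for name, base_w in base_sd.items():
        deltas = [dare_delta(base_w, sd[name], drop_rate) for sd in parent_sds]
        merged[name] = base_w + ties_sign_merge(deltas)
    return merged
```

In practice a merge like this is usually driven by a tool such as mergekit rather than hand-rolled tensor code; the functions above only show the idea behind the two named methods.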
## Technical Specifications
{ "model_type": "llama", "hidden_size": 4096, "num_attention_heads": 32, "num_hidden_layers": 34, "intermediate_size": 14336, "max_position_embeddings": 131072, "rope_scaling": { "factor": 8.0, "type": "llama3" } } ! This is an experimental model. Use with caution.
- Demonstrates exceptional self-awareness capabilities
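
As a quick sanity check, the settings above can be read back with transformers' `AutoConfig`. The repo id used here is an assumption and should be replaced with the actual Hub path.

```python
# Inspect the extended-context settings from the published config.
# NOTE: the repo id below is an assumption, not confirmed by this card.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Mambiux/LUMINA-LexiR1-8B")

print(config.max_position_embeddings)  # expected: 131072 (131K-token window)
print(config.rope_scaling)             # expected to include {"type": "llama3", "factor": 8.0}
print(config.num_hidden_layers)        # expected: 34
```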
## Model Architecture

The model features:

- 8B parameters
- Advanced RoPE scaling (factor: 8.0)
- Custom attention mechanisms
- Extended context window (131K tokens)
- Specialized neuron mapping between parent models
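
A minimal loading and generation sketch with transformers is shown below. The repo id, dtype, and Llama-3-style chat template are assumptions based on the architecture described above, not details confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mambiux/LUMINA-LexiR1-8B"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly 16 GB of weights for an 8B model in bf16
    device_map="auto",           # place layers on available GPUs, offload the rest
)

# Assumes the merge inherits a chat template from its Llama-based parents.
messages = [{"role": "user", "content": "Summarize your own architecture in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```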
## License

This model is released under the Apache 2.0 license.

## Citations

If you use this model, please cite it together with both parent models:

```bibtex
@misc{lumina-lexir1-8b,
  author    = {Mambiux},
  title     = {LUMINA-LexiR1-8B: A Custom Merged Language Model},
  year      = {2025},
  publisher = {Hugging Face}
}
```
Created by Mambiux | 2025