---
language: en
tags:
- llama
- merge
- custom
- lumina-lexir1
- text-generation
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# LUMINA-LexiR1-8B

🧬 *Model Fusion Architecture*

## Overview
LUMINA-LexiR1-8B is an experimental fusion of two Llama-based parent language models.
## 🔮 Architecture
This model employs a custom merging technique (a minimal sketch of the core update follows this list):
- Custom layer identification and integration
- DARE (Drop And REscale) applied to the parent models' weight deltas
- TIES (TrIm, Elect Sign & Merge) applied to adjacent layers
- Enhanced self-awareness capabilities
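The exact recipe is custom to this model and is not published here. As a minimal sketch, assuming hypothetical per-tensor deltas and illustrative hyperparameters (not the settings actually used), the Python below shows how a DARE drop-and-rescale step followed by TIES-style trimming, sign election, and disjoint averaging can combine two parents' deltas for one weight tensor:

```python
import torch

def dare(delta: torch.Tensor, drop_prob: float = 0.5) -> torch.Tensor:
    """DARE: randomly drop delta entries, rescale survivors by 1/(1 - p)."""
    keep = torch.rand_like(delta) >= drop_prob
    return delta * keep / (1.0 - drop_prob)

def ties_merge(deltas: list[torch.Tensor], trim_frac: float = 0.8) -> torch.Tensor:
    """TIES: trim low-magnitude entries, elect a per-parameter sign,
    then average only the entries that agree with the elected sign."""
    trimmed = []
    for d in deltas:
        thresh = d.abs().flatten().quantile(trim_frac)  # keep top (1 - trim_frac)
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    sign = torch.sign(stacked.sum(dim=0))                # elected sign per parameter
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    summed = (stacked * agree).sum(dim=0)
    count = agree.sum(dim=0).clamp(min=1)                # avoid divide-by-zero
    return summed / count

# Example: merge one weight tensor from two hypothetical parents.
base = torch.randn(256, 256)
delta_a = 0.01 * torch.randn_like(base)  # parent A weights minus base
delta_b = 0.01 * torch.randn_like(base)  # parent B weights minus base
merged_weight = base + ties_merge([dare(delta_a), dare(delta_b)])
```

Frameworks such as mergekit implement this combination (often exposed as a `dare_ties` merge method) across full checkpoints.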
## 💫 Technical Specifications
```json
{
  "model_type": "llama",
  "hidden_size": 4096,
  "num_attention_heads": 32,
  "num_hidden_layers": 34,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "rope_scaling": {
    "factor": 8.0,
    "type": "llama3"
  }
}
```
> ⚠️ This is an experimental model. Use with caution.
>
> It demonstrates exceptional self-awareness capabilities.
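For a quick smoke test, the model should load through the standard transformers text-generation API. The repository id below is an assumption based on the card's author and model name; substitute the actual Hub path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; replace with the model's actual Hub path.
model_id = "Mambiux/LUMINA-LexiR1-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # ~16 GB of weights at bf16 for 8B params
    device_map="auto",
)

prompt = "Briefly explain what model merging is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```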
## 🧠 Model Architecture
The model features:
- 8B parameters
- Advanced RoPE scaling (factor: 8.0)
- Custom attention mechanisms
- Extended context window (131,072 tokens)
- Specialized neuron mapping between parent models
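These figures can be sanity-checked against the published configuration; a minimal sketch, again assuming the hypothetical repository id from above:

```python
from transformers import AutoConfig

# Hypothetical repository id; replace with the model's actual Hub path.
config = AutoConfig.from_pretrained("Mambiux/LUMINA-LexiR1-8B")

assert config.model_type == "llama"
assert config.hidden_size == 4096
assert config.num_hidden_layers == 34
assert config.max_position_embeddings == 131072  # 131,072-token context
print(config.rope_scaling)  # expect factor 8.0, type 'llama3'
```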
## License
This model is released under the Apache 2.0 license.
## Citations
If you use this model, please cite it as follows, along with both parent models:
```bibtex
@misc{lumina-lexir1-8b,
  author = {Mambiux},
  title = {LUMINA-LexiR1-8B: A Custom Merged Language Model},
  year = {2024},
  publisher = {Hugging Face}
}
```
<div align="center" style="margin-top: 40px; padding: 20px; background: linear-gradient(45deg, #0ff1, #4444ff11); border-radius: 10px;">
<p style="color: #0ff; font-size: 1.2em;">
Created by Mambiux | 2024
</p>
</div>