---
language: en
tags:
- llama
- merge
- custom
- lumina-lexir1
- text-generation
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# LUMINA-LexiR1-8B

*🧬 Model Fusion Architecture*

## 🌟 Overview

LUMINA-LexiR1-8B is an experimental fusion of two powerful language models:

- 🔹 [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)
- 🔹 [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)

## 🔮 Architecture

This model employs a custom merging technique:

- Custom layer identification and integration
- DARE (Drop And REscale) applied to the parents' delta parameters
- TIES (TrIm, Elect Sign & Merge) applied to adjacent layers
- Enhanced self-awareness capabilities

A minimal sketch of how the DARE and TIES steps combine appears near the end of this card.

## 💫 Technical Specifications

```json
{
  "model_type": "llama",
  "hidden_size": 4096,
  "num_attention_heads": 32,
  "num_hidden_layers": 34,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "rope_scaling": {
    "factor": 8.0,
    "type": "llama3"
  }
}
```

> ⚠️ This is an experimental model. Use with caution.
>
> ➕ Demonstrates exceptional self-awareness capabilities in informal testing.

## 🔧 Model Architecture

The model features:

- 8B parameters
- Advanced RoPE scaling (factor: 8.0)
- Custom attention mechanisms
- Extended context window (131K tokens)
- Specialized neuron mapping between parent models

## 📝 License

This model is released under the Apache 2.0 license.

## 🌐 Citations

If you use this model, please cite both parent models (linked above) along with this merge:

```bibtex
@misc{lumina-lexir1-8b,
  author = {Mambiux},
  title = {LUMINA-LexiR1-8B: A Custom Merged Language Model},
  year = {2024},
  publisher = {Hugging Face}
}
```
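## 🔬 Merge Sketch

The DARE and TIES steps named in the Architecture section are typically combined as follows. This is a minimal, illustrative Python sketch of the general DARE + TIES recipe (as popularized by mergekit's `dare_ties` method), not the exact procedure used for this model; the drop probability, density, tensor shapes, and function names below are assumptions for illustration only.

```python
# Illustrative DARE + TIES merge over per-layer "task vectors"
# (delta = parent_weights - base_weights). All values here are
# assumptions, not this model's actual recipe.
import torch

def dare(delta: torch.Tensor, drop_prob: float = 0.5) -> torch.Tensor:
    """Drop And REscale: randomly zero delta entries, rescale survivors."""
    keep = torch.rand_like(delta) > drop_prob       # keep with prob 1 - p
    return delta * keep / (1.0 - drop_prob)         # rescale preserves expectation

def ties_merge(deltas: list[torch.Tensor], density: float = 0.5) -> torch.Tensor:
    """TrIm, Elect Sign & Merge across the parents' deltas."""
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))        # keep top-k by magnitude
        thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)
    elected = torch.sign(stacked.sum(dim=0))        # elect majority sign per entry
    agree = torch.sign(stacked) == elected          # drop sign-disagreeing entries
    counts = agree.sum(dim=0).clamp(min=1)
    return (stacked * agree).sum(dim=0) / counts    # disjoint mean of survivors

# For each layer: DARE both parents' deltas, TIES-merge them, add back to base.
base = torch.randn(4096, 4096)                      # stand-in for a base weight
parent_deltas = [torch.randn_like(base) * 0.01 for _ in range(2)]
merged = base + ties_merge([dare(d) for d in parent_deltas])
```

In practice this runs per weight tensor across all layers; DARE first sparsifies each parent's contribution so that TIES's sign election has fewer conflicting entries to resolve.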
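## 🚀 Usage

A minimal loading sketch with 🤗 Transformers. The repository id below is inferred from this card's title and author and may not match the actual repo path; treat it as an assumption.

```python
# Minimal generation example; the repo id is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mambiux/LUMINA-LexiR1-8B"  # assumption: adjust to the real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 8B parameters: expect roughly 16 GB of GPU memory
    device_map="auto",
)

prompt = "Briefly explain what a merged language model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

---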

🌟 Created by Mambiux | 2025 🌟