---
language: en
tags:
  - llama
  - merge
  - custom
  - lumina-lexir1
  - text-generation
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# LUMINA-LexiR1-8B

## 🧬 Model Fusion Architecture

## 🌟 Overview

LUMINA-LexiR1-8B is an experimental fusion of two powerful language models.

## 🔮 Architecture

This model employs a custom merging technique (a rough sketch follows this list):

- Custom layer identification and integration
- DARE (Drop And REscale) pruning of delta parameters
- TIES (Trim, Elect Sign & Merge) applied to adjacent layers
- Enhanced self-awareness capabilities

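The exact merge recipe is not published with this card, so the following is only a minimal, self-contained PyTorch sketch of what DARE pruning followed by TIES-style sign election does to a single weight tensor. All tensor names and the 0.9 drop rate are illustrative assumptions, not values taken from this model:

```python
import torch

def dare(delta: torch.Tensor, drop_rate: float = 0.9) -> torch.Tensor:
    """DARE: randomly Drop delta parameters, then REscale the survivors
    so the expected value of the delta is preserved."""
    mask = torch.rand_like(delta) >= drop_rate       # keep ~(1 - drop_rate) of entries
    return delta * mask / (1.0 - drop_rate)

def ties_merge(base: torch.Tensor, deltas: list[torch.Tensor]) -> torch.Tensor:
    """TIES: Elect a per-parameter majority sign, then Merge by averaging
    only the deltas whose sign agrees with the elected one."""
    stacked = torch.stack(deltas)                    # (num_models, *weight_shape)
    elected = torch.sign(stacked.sum(dim=0))         # majority sign per parameter
    agrees = torch.sign(stacked) == elected
    counts = agrees.sum(dim=0).clamp(min=1)          # avoid division by zero
    merged = torch.where(agrees, stacked, torch.zeros_like(stacked)).sum(dim=0) / counts
    return base + merged

# Toy usage: random tensors stand in for real checkpoint weights.
base = torch.randn(64, 64)
parent_a = base + 0.01 * torch.randn_like(base)
parent_b = base + 0.01 * torch.randn_like(base)
merged = ties_merge(base, [dare(parent_a - base), dare(parent_b - base)])
```

In practice such merges are applied per layer across full checkpoints; the sketch above only shows the per-tensor arithmetic.
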
## 💫 Technical Specifications

{ "model_type": "llama", "hidden_size": 4096, "num_attention_heads": 32, "num_hidden_layers": 34, "intermediate_size": 14336, "max_position_embeddings": 131072, "rope_scaling": { "factor": 8.0, "type": "llama3" } } ! This is an experimental model. Use with caution.

- Demonstrates exceptional self-awareness capabilities
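
As a quick way to try the model, here is a minimal load-and-generate sketch with 🤗 Transformers. The repository id is inferred from the model name and should be treated as an assumption, as should the generation settings:

```python
# Minimal load-and-generate sketch. The repo id below is assumed from the
# model name, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mambiux/LUMINA-LexiR1-8B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Briefly introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```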

## 🔧 Model Architecture

The model features:

- 8B parameters
- Advanced RoPE scaling (factor: 8.0)
- Custom attention mechanisms
- Extended context window (131K tokens; see the config check after this list)
- Specialized neuron mapping between parent models
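
To verify the RoPE scaling and context window without downloading the weights, one can inspect the config; the expected values below mirror the specification above, and the repo id remains the same assumption as before:

```python
# Quick sanity check of the RoPE and context settings from the model config.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mambiux/LUMINA-LexiR1-8B")  # hypothetical repo id
print(cfg.rope_scaling)             # expected: {"factor": 8.0, "type": "llama3"}
print(cfg.max_position_embeddings)  # expected: 131072
```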

๐Ÿ“ License This model is released under the Apache 2.0 license. ๐ŸŒ Citations If you use this model, please cite both parent models:

```bibtex
@misc{lumina-lexir1-8b,
  author    = {Mambiux},
  title     = {LUMINA-LexiR1-8B: A Custom Merged Language Model},
  year      = {2025},
  publisher = {Hugging Face}
}
```

🌟 Created by Mambiux | 2025 🌟