---
language: en
tags:
  - llama
  - merge
  - custom
  - lumina-lexir1
  - text-generation
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---

# LUMINA-LexiR1-8B

## 🧬 Model Fusion Architecture

## 🌟 Overview

LUMINA-LexiR1-8B is an experimental fusion of two powerful Llama-based language models, combined through a custom merging pipeline.

## 🔮 Architecture

This model employs a custom merging technique (a rough configuration sketch follows the list below):

- Custom layer identification and integration
- DARE (Drop And Rescale) merging
- TIES (Trim, Elect Sign & Merge) applied to adjacent layers
- Enhanced self-awareness capabilities
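
The exact recipe is not published with this card, so the sketch below is only a minimal, hypothetical DARE-TIES configuration in mergekit's format, written out from Python. The parent model IDs, densities, and weights are placeholders, not the values actually used for LUMINA-LexiR1-8B.

```python
# Hypothetical DARE-TIES merge recipe; every model ID, density, and weight below
# is an illustrative placeholder, not the recipe used for this model.
import yaml

merge_config = {
    "merge_method": "dare_ties",            # DARE pruning/rescaling + TIES sign election
    "base_model": "org/base-llama-8b",      # placeholder base model
    "models": [
        {"model": "org/parent-a-8b",        # placeholder parent A
         "parameters": {"density": 0.5, "weight": 0.5}},
        {"model": "org/parent-b-8b",        # placeholder parent B
         "parameters": {"density": 0.5, "weight": 0.5}},
    ],
    "dtype": "bfloat16",
}

# Write the recipe to disk; it could then be run with, e.g.:
#   mergekit-yaml lumina_lexir1_merge.yaml ./LUMINA-LexiR1-8B
with open("lumina_lexir1_merge.yaml", "w") as f:
    yaml.safe_dump(merge_config, f, sort_keys=False)
```

Note that this sketch does not cover the custom layer identification and integration step mentioned above.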

## 💫 Technical Specifications

```json
{
  "model_type": "llama",
  "hidden_size": 4096,
  "num_attention_heads": 32,
  "num_hidden_layers": 34,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "rope_scaling": {
    "factor": 8.0,
    "type": "llama3"
  }
}
```
> ⚠️ This is an experimental model. Use with caution.
>
> ✨ Demonstrates exceptional self-awareness capabilities.
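
These settings can be verified directly from the published configuration without downloading the weights. A minimal sketch, assuming the repository ID matches this card's path (`mambiux/lumina-lexiR1-8B`):

```python
# Load only the config (no weights) and confirm the advertised settings.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mambiux/lumina-lexiR1-8B")  # assumed repo ID

print(config.num_hidden_layers)        # expected: 34
print(config.max_position_embeddings)  # expected: 131072
print(config.rope_scaling)             # expected: factor 8.0, llama3-style scaling
```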

## 🔧 Model Architecture

The model features the following (a basic usage sketch follows this list):

- 8B parameters
- Advanced RoPE scaling (factor: 8.0)
- Custom attention mechanisms
- Extended context window (131K tokens)
- Specialized neuron mapping between parent models
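
The snippet below is a standard Transformers loading-and-generation sketch rather than an official usage guide: the repository ID and prompt are assumptions, and `bfloat16` with `device_map="auto"` are simply sensible defaults for an 8B model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mambiux/lumina-lexiR1-8B"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # sensible default for an 8B model on recent GPUs
    device_map="auto",
)

prompt = "Explain what model merging is in two sentences."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```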

๐Ÿ“ License
This model is released under the Apache 2.0 license.
๐ŸŒ Citations
If you use this model, please cite both parent models:

```bibtex
@misc{lumina-lexir1-8b,
  author = {Mambiux},
  title = {LUMINA-LexiR1-8B: A Custom Merged Language Model},
  year = {2024},
  publisher = {Hugging Face}
}
```

<div align="center" style="margin-top: 40px; padding: 20px; background: linear-gradient(45deg, #0ff1, #4444ff11); border-radius: 10px;">
  <p style="color: #0ff; font-size: 1.2em;">
    🌟 Created by Mambiux | 2024 🌟
  </p>
</div>