![DALL-E 3 prompt: a single seed growing slowly in a laboratory in desert sand, the little plant fighting to reach the sunlight, while a cute little kitty feels the plant, cute 8k anime, digital art, close up](https://i.imgur.com/uyjdkhk.jpeg)
Luminia v3 is tuned to reason its way from a short summary description to a detailed Stable Diffusion prompt; it may output NSFW content.
The LoRA adapter is included. Quants: ExLlamaV2 2.4bpw-h6, 4.25bpw-h6, 8.0bpw-h8 | GGUF Q4_K_M, IQ4_NL |
Prompt template: Alpaca
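If you are not using text-generation-webui, a minimal inference sketch with llama-cpp-python and the GGUF Q4_K_M quant might look like the following. The model file name and the `build_alpaca_prompt` helper are illustrative assumptions, not part of this release:

```python
# Minimal sketch: run the GGUF Q4_K_M quant with llama-cpp-python using the Alpaca template.
# The model file name below is an assumption; point it at whichever quant you downloaded.
from llama_cpp import Llama


def build_alpaca_prompt(instruction: str, user_input: str) -> str:
    """Illustrative helper that formats a request in the Alpaca template."""
    return (
        "Below is an instruction that describes a task, paired with an input that "
        "provides further context. Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )


llm = Llama(model_path="Luminia-13B-v3.Q4_K_M.gguf", n_ctx=4096)  # assumed file name

prompt = build_alpaca_prompt(
    "Create stable diffusion metadata based on the given english description. Luminia",
    "favorites and popular SFW",
)

out = llm(prompt, max_tokens=512, temperature=0.7, stop=["###"])
print(out["choices"][0]["text"])
```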
Output example, tested in text-generation-webui:
| Input | base llama-2-chat | QLoRA |
|---|---|---|
| [question]: Create stable diffusion metadata based on the given english description. Luminia \n### Input:\n favorites and popular SFW | Answer: Luminia, a mystical world of wonder and magic 🧝♀️✨ A place where technology and nature seamlessly blend together ... | Answer: \<lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle ... |
Output prompt from the QLoRA model, pasted into A1111/SD-WebUI:
![parameters image metadata: <lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism,
Steps: 20, Sampler: Heun, CFG scale: 7, Seed: 479539365, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0](https://i.imgur.com/rNLaobj.png)
![parameters image metadata: <lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle, shine, sparkle, dark theme, fantasy, magic, goddess, celestial, nature, peaceful, serene, tranquil, mystical, enchanting, otherworldly, mysterious, captivating, alluring, beautiful, elegant, graceful, majestic, divine, powerful, epic, grand, sweeping, breathtaking, mesmerizing, magical, fantastical, wondrous, marvelous, extraordinary, magnificent, glorious, radiant, luminous, illumination, brilliance, glow, radiance, luminescence, brightness, splendor, glory, triumph, victory, achievement, honor, celebration, recognition, praise, admiration, appreciation, love, affection, devotion, loyalty, dedication, commitment, passion, intensity, drive, determination, energy, enthusiasm, excitement, joy, happiness, fulfillment, pleasure, enjoyment, satisfaction, delight, wonder, amazement, awe, curiosity, interest, intrigue, question, exploration, discovery, adventure, journey, path, road, trail, course, pursuit, challenge, obstacle, adversity, hardship, struggle, perseverance, resilience, tenacity, courage, bravery, heroism, inspiration, motivation, spirit, heart, soul, essence, creativity, imagination, dreams, aspirations, goals, ambition, vision, purpose, meaning, significance, relevance, importance, impact, influence, change, growth, development, evolution, improvement, progress, learning, knowledge, wisdom, insight, understanding, empathy, compassion, kindness, generosity, forgiveness, gratitude, humility, patience, tolerance, acceptance, diversity, inclusivity, unity, equality, justice, fairness, honesty, integrity, accountability, responsibility, morality, ethics, principles, values, beliefs, faith, hope, optimism
Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 959582434, Size: 512x512, Model hash: 84d76a0328, Model: epicrealism_naturalSinRC1VAE, Version: v1.7.0](https://i.imgur.com/hU8Ut4p.png)
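To send the generated prompt to A1111/SD-WebUI programmatically instead of pasting it, a small sketch against the WebUI's `/sdapi/v1/txt2img` API is shown below (the WebUI must be launched with `--api`; the truncated prompt string and output file name are placeholders, and the generation settings simply mirror the screenshots above):

```python
# Sketch: send a Luminia-generated prompt to a running A1111/SD-WebUI instance
# started with the --api flag. Settings mirror the example screenshots above.
import base64
import requests

# Placeholder: use the full prompt produced by the model.
prompt = "<lora:Luminari-10:0.8> Luminari, 1girl, solo, blonde hair, long hair, blue eyes, ..."

payload = {
    "prompt": prompt,
    "steps": 20,
    "sampler_name": "Heun",
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    "seed": -1,  # random seed; set a fixed value to reproduce an image
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded PNGs in "images".
with open("luminia_output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```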
Full Prompt
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Create stable diffusion metadata based on the given english description. Luminia
### Input:
favorites and popular SFW
### Response:
"Luminia" can be any short description, more info on my SD dataset here.
Training Details
Model Description
Trained by: Nekochu. Model type: Llama. Fine-tuned from: Llama-2-13b-chat.
Continued from the LoRA base Luminia-13B-v2-QLora.
Known issue: [issue]
Trainer
hiyouga/LLaMA-Efficient-Tuning
Hardware: QLoRA training on Windows, Python 3.10.8, CUDA 12.1, 24 GB VRAM.
Training hyperparameters
The following hyperparameters were used during training (a rough Python mapping of the LoRA/quantization settings is sketched after the list):
- num_epochs: 1.0
- finetuning_type: lora
- quantization_bit: 4
- stage: sft
- learning_rate: 5e-05
- cutoff_len: 4096
- num_train_epochs: 3.0
- max_samples: 100000
- warmup_steps: 0
- train_batch_size: 1
- distributed_type: single-GPU
- num_devices: 1
- rope_scaling: linear
- lora_rank: 32
- lora_target: all
- lora_dropout: 0.15
- bnb_4bit_compute_dtype: bfloat16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
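Illustrative only: the actual run was launched through LLaMA-Efficient-Tuning, and reading `lora_target: all` as `target_modules="all-linear"` is an assumption. A rough mapping of the settings above onto PEFT/transformers configs:

```python
# Rough mapping of the listed hyperparameters onto PEFT / bitsandbytes / transformers configs.
# The actual training used hiyouga/LLaMA-Efficient-Tuning; this is not its code.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantization_bit: 4
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

lora_config = LoraConfig(
    r=32,                          # lora_rank: 32
    lora_dropout=0.15,             # lora_dropout: 0.15
    target_modules="all-linear",   # lora_target: all (assumed to mean all linear layers)
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,  # train_batch_size: 1
    num_train_epochs=3.0,           # num_train_epochs: 3.0
    learning_rate=5e-5,             # learning_rate: 5e-05
    lr_scheduler_type="cosine",     # lr_scheduler_type: cosine
    warmup_steps=0,                 # warmup_steps: 0
    optim="adamw_torch",            # Adam with betas=(0.9, 0.999), epsilon=1e-08
    seed=42,                        # seed: 42
    bf16=True,
)

# cutoff_len, max_samples, and rope_scaling are handled by the trainer's data/model
# pipeline rather than TrainingArguments, so they are omitted here.
```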
training_loss:
![Nekochu](https://i.imgur.com/qhuPG6F.jpg)
Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0