|
--- |
|
library_name: transformers |
|
tags: |
|
- flux |
|
- stable-diffusion |
|
- prompt-enhancer |
|
license: apache-2.0 |
|
language: |
|
- en |
|
base_model: |
|
- google/flan-t5-small |
|
--- |
|
|
|
# Model Details |
|
|
|
This model has been fine-tuned to expand short prompts into detailed, descriptive prompts for Stable Diffusion and Flux-based image generation. The added specificity and nuance help users produce higher-quality, more coherent images.
|
|
|
|
|
### Model Description |
|
|
|
- **Developed by:** Imran Ali

- **Model type:** T5 (Text-to-Text Transfer Transformer)

- **Language(s) (NLP):** English

- **License:** apache-2.0

- **Finetuned from model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)

- **Demo:** [Demo Space](https://huggingface.co/spaces/imranali291/flux-prompt-enhancer)
|
|
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
```python |
|
from transformers import T5Tokenizer, T5ForConditionalGeneration |
|
|
|
# Load the tokenizer and model |
|
tokenizer = T5Tokenizer.from_pretrained("imranali291/flux-prompt-enhancer") |
|
model = T5ForConditionalGeneration.from_pretrained("imranali291/flux-prompt-enhancer") |
|
|
|
# Example input |
|
input_text = "Futuristic cityscape at twilight descent." |
|
|
|
# Tokenize input |
|
input_ids = tokenizer(input_text, return_tensors="pt").input_ids |
|
|
|
# Generate output |
|
output = model.generate(
    input_ids,
    max_length=128,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    repetition_penalty=2.5,
)
|
print(tokenizer.decode(output[0], skip_special_tokens=True)) |
|
``` |
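
The enhanced prompt can then be passed directly to an image-generation pipeline. Below is a minimal, illustrative sketch using `diffusers` with a Flux checkpoint; the checkpoint ID, dtype, and sampling settings are assumptions for demonstration and are not prescribed by this model.

```python
import torch
from diffusers import FluxPipeline
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Enhance a short prompt (same steps as above)
tokenizer = T5Tokenizer.from_pretrained("imranali291/flux-prompt-enhancer")
model = T5ForConditionalGeneration.from_pretrained("imranali291/flux-prompt-enhancer")

input_ids = tokenizer("Futuristic cityscape at twilight descent.", return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_length=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
    repetition_penalty=2.5,
)
enhanced_prompt = tokenizer.decode(output[0], skip_special_tokens=True)

# Feed the enhanced prompt to a Flux pipeline (checkpoint and settings are illustrative)
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # optional: reduces GPU memory usage
image = pipe(enhanced_prompt, num_inference_steps=4, guidance_scale=0.0).images[0]
image.save("enhanced_output.png")
```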