Model card auto-generated by SimpleTuner
README.md CHANGED
@@ -13,7 +13,7 @@ inference: true
 
 ---
 
-# 
+# SahinFLUX
 
 This is a LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
 
@@ -24,7 +24,7 @@ The main validation prompt used during training was:
 
 
 ```
-A man in front of a Sahin car
+A man in front of a white Sahin car
 ```
 
 ## Validation settings
@@ -48,9 +48,9 @@ You may reuse the base model text encoder for inference.
 
 ## Training settings
 
-- Training epochs:
-- Training steps:
-- Learning rate:
+- Training epochs: 5
+- Training steps: 100
+- Learning rate: 1.0
 - Effective batch size: 1
 - Micro-batch size: 1
 - Gradient accumulation steps: 1
@@ -60,7 +60,7 @@ You may reuse the base model text encoder for inference.
 - Optimizer: AdamW, stochastic bf16
 - Precision: Pure BF16
 - Xformers: Not used
-- LoRA Rank:
+- LoRA Rank: 4
 - LoRA Alpha: None
 - LoRA Dropout: 0.1
 - LoRA initialisation style: default
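
As an aside on the LoRA hyperparameters in the hunk above (rank 4, alpha left unset, dropout 0.1): if you wanted to rebuild an adapter of the same shape outside SimpleTuner, they map roughly onto a PEFT `LoraConfig` like the sketch below. This is an illustration under assumptions, not the configuration SimpleTuner actually constructs: `target_modules` is a guess at typical FLUX attention projections, and `lora_alpha=4` assumes the common convention that alpha falls back to the rank when it is not set.

```python
# Minimal sketch of a PEFT LoraConfig matching the card's reported settings.
# Assumptions: lora_alpha mirrors the rank because the card lists "LoRA Alpha: None",
# and target_modules names typical attention projections rather than the exact
# module list used by the SimpleTuner run.
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,                     # LoRA Rank: 4
    lora_alpha=4,            # assumed fallback for "LoRA Alpha: None"
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumption, not from the card
)
```
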
@@ -70,7 +70,7 @@ You may reuse the base model text encoder for inference.
 
 ### Sahin
 - Repeats: 0
-- Total number of images:
+- Total number of images: 18
 - Total number of aspect buckets: 1
 - Resolution: 1 megapixels
 - Cropped: True
@@ -86,11 +86,11 @@ import torch
 from diffusers import DiffusionPipeline
 
 model_id = 'black-forest-labs/FLUX.1-dev'
-adapter_id = 'adaozer/
+adapter_id = 'adaozer/SahinFLUX'
 pipeline = DiffusionPipeline.from_pretrained(model_id)
 pipeline.load_lora_weights(adapter_id)
 
-prompt = "A man in front of a Sahin car"
+prompt = "A man in front of a white Sahin car"
 
 
 pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
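
The inference hunk stops at the device-placement line, so the diff does not show the generation call that follows it in the full card. A minimal sketch of how the snippet could be finished is given below; the dtype, step count, guidance scale, resolution, and seed are illustrative placeholders, not values taken from the model card.

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'adaozer/SahinFLUX'

# Load the base FLUX.1-dev pipeline and attach the LoRA, as in the card.
# bf16 here is an assumption chosen to match the reported training precision.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')

prompt = "A man in front of a white Sahin car"

# Generation settings below are illustrative defaults, not values from the card.
image = pipeline(
    prompt=prompt,
    num_inference_steps=28,
    guidance_scale=3.5,
    width=1024,
    height=1024,
    generator=torch.Generator(device='cpu').manual_seed(42),
).images[0]
image.save("sahin_validation.png")
```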