Getting only noise when generating
Please help!
I'm testing the example code, and all I get is noisy images. See the example below:
I am running with CPU offload of up to 16 blocks (about 8 s/it):
python qwen_image.py --cpu_offload --cpu_offload_blocks 16
My setup:
- Ubuntu 22.04
- Python 3.10.2
- torch 2.8.0+cu128
- RTX 3090, 46 GB RAM, 20 GB swap
UPDATE:
Generation with FLUX.1-dev-DF11 works fine.
same here
same
I tried to reproduce this problem, but it doesn't occur on my system (Ubuntu 20.04, A100-40GB, 500 GB RAM, torch==2.6.0).
What output messages are you seeing? Here is the output when I ran python qwen_image.py --cpu_offload --cpu_offload_blocks 16 --num_inference_steps 10:
The config attributes {'pooled_projection_dim': 768} were passed to QwenImageTransformer2DModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading DFloat11 safetensors (offloaded to CPU, memory pinned): 100%|████████████████| 1/1 [00:36<00:00, 36.24s/it]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████| 4/4 [00:00<00:00, 20.76it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████| 5/5 [00:00<00:00,  9.11it/s]
  0%|          | 0/10 [00:00<?, ?it/s]
Allocated 339738624 bf16 on device cuda:0
100%|██████████████████████████████████████████████████████████████████████████████| 10/10 [01:16<00:00,  7.65s/it]
Max memory: 22.34 GB
I am getting coherent images, not complete noise.
Here are a few things worth trying:
- Uninstall the diffusers, transformers, and dfloat11 packages and reinstall them.
- Try an older PyTorch release, 2.6 or 2.7.
Well, I did as you advised and uninstalled diffusers, transformers, and dfloat11, then reinstalled dfloat11 first, followed by diffusers.
After the installation I'm getting good images.
My output:
The config attributes {'pooled_projection_dim': 768} were passed to QwenImageTransformer2DModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading DFloat11 safetensors (offloaded to CPU, memory pinned): 100%|█████████████████████| 1/1 [00:33<00:00, 33.71s/it]
Fetching 16 files: 100%|████████████████████████████████████████████████████████████████| 16/16 [00:00<00:00, 40.26it/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 12.28it/s]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████| 5/5 [00:00<00:00,  5.54it/s]
  0%|          | 0/20 [00:00<?, ?it/s]
Allocated 339738624 bf16 on device cuda:0
100%|███████████████████████████████████████████████████████████████████████████████████| 20/20 [02:42<00:00,  8.13s/it]
Max memory: 22.42 GB
Thank you!
Thank you, LeanQuant!
conda activate df11
pip uninstall dfloat11 transformers diffusers -y
pip install -U dfloat11[cuda12]
pip install git+https://github.com/huggingface/diffusers
Then it works:
(df11) root@alsdfk2384:~/DF11# python qwen_image.py --cpu_offload --cpu_offload_blocks 16 --num_inference_steps 10
The config attributes {'pooled_projection_dim': 768} were passed to QwenImageTransformer2DModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading DFloat11 safetensors (offloaded to CPU, memory pinned): 100%|██████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:33<00:00, 33.86s/it]
2025-08-19 01:14:27,750 - modelscope - INFO - Intra-cloud acceleration enabled for downloading from Qwen/Qwen-Image
Downloading Model from https://www.modelscope.cn to directory: /root/.cache/modelscope/hub/models/Qwen/Qwen-Image
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 28.44it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 10.33it/s]
etrace enabl:0 traceDepth:0 enableModu:-1774445159 duration:32765 iteration:0
  0%|          | 0/10 [00:00<?, ?it/s]
Allocated 339738624 bf16 on device cuda:0
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:43<00:00,  4.37s/it]
Max memory: 22.34 GB
The image generation works well now! Really appreciate it!
My installed packages are now:
accelerate==1.10.0
aiofiles==23.2.1
annotated-types==0.7.0
anyio==4.10.0
Brotli==1.1.0
certifi==2025.8.3
charset-normalizer==3.4.3
click==8.2.1
cupy-cuda12x==13.5.1
dfloat11==0.3.2
diffusers @ git+https://github.com/huggingface/diffusers@555b6cc34f1973c36a1d168edee0960625c00c8c
exceptiongroup==1.3.0
fastapi==0.116.1
fastrlock==0.8.3
ffmpy==0.6.1
filelock==3.19.1
fsspec==2025.7.0
gradio==5.21.0
gradio_client==1.7.2
groovy==0.1.2
h11==0.16.0
hf-xet==1.1.7
httpcore==1.0.9
httpx==0.28.1
huggingface-hub==0.34.4
idna==3.10
importlib_metadata==8.7.0
Jinja2==3.1.6
markdown-it-py==4.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
modelscope==1.29.0
mpmath==1.3.0
networkx==3.4.2
numpy==2.2.6
nvidia-cublas-cu12==12.8.3.14
nvidia-cuda-cupti-cu12==12.8.57
nvidia-cuda-nvrtc-cu12==12.8.61
nvidia-cuda-runtime-cu12==12.8.57
nvidia-cudnn-cu12==9.7.1.26
nvidia-cufft-cu12==11.3.3.41
nvidia-cufile-cu12==1.13.0.11
nvidia-curand-cu12==10.3.9.55
nvidia-cusolver-cu12==11.7.2.55
nvidia-cusparse-cu12==12.5.7.53
nvidia-cusparselt-cu12==0.6.3
nvidia-nccl-cu12==2.26.2
nvidia-nvjitlink-cu12==12.8.61
nvidia-nvtx-cu12==12.8.55
orjson==3.11.2
packaging==25.0
pandas==2.3.1
pillow==11.3.0
psutil==7.0.0
pydantic==2.11.7
pydantic_core==2.33.2
pydub==0.25.1
Pygments==2.19.2
python-dateutil==2.9.0.post0
python-multipart==0.0.20
pytz==2025.2
PyYAML==6.0.2
regex==2025.7.34
requests==2.32.4
rich==14.1.0
ruff==0.12.9
safehttpx==0.1.6
safetensors==0.6.2
semantic-version==2.10.0
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
starlette==0.47.2
sympy==1.14.0
tokenizers==0.21.4
tomlkit==0.13.3
torch==2.7.1+cu128
torchaudio==2.7.1+cu128
torchvision==0.22.1+cu128
tqdm==4.67.1
transformers==4.55.2
triton==3.3.1
typer==0.16.0
typing-inspection==0.4.1
typing_extensions==4.14.1
tzdata==2025.2
urllib3==2.5.0
uvicorn==0.35.0
websockets==15.0.1
zipp==3.23.0
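For anyone else debugging this: since the fix came down to package versions, a quick way to check which versions are actually active in your environment is to query installed package metadata. This is a minimal sketch; the package list simply mirrors the ones discussed in this thread.

```python
from importlib.metadata import version, PackageNotFoundError

# Packages whose versions mattered in this thread.
PACKAGES = ["torch", "diffusers", "transformers", "dfloat11"]

def installed_version(name):
    """Return the installed version of a package, or None if it is missing."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

if __name__ == "__main__":
    for name in PACKAGES:
        print(f"{name}: {installed_version(name) or 'not installed'}")
```

Running this before and after the reinstall makes it easy to confirm that the environment you are launching qwen_image.py from is the one you just fixed (e.g. that diffusers really is the git install and not a stale release).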