
Support `TwinFlow` via `diffusers`

#6
by alvarobartt HF Staff - opened

Hey @kenshinn and team!

This PR enables inclusionAI/TwinFlow via diffusers, so that it can be loaded as:

```python
from diffusers import AutoPipelineForText2Image

# Load the Diffusers-compatible files from this PR's revision, then move
# the pipeline to the GPU (`from_pretrained` has no `device` keyword).
pipeline = AutoPipelineForText2Image.from_pretrained(
    "inclusionAI/TwinFlow", revision="refs/pr/6"
).to("cuda")
image = pipeline("<PROMPT>").images[0]
```

Note that this PR only moves the files from the nested directories into the root directory, following the same structure as other Diffusers-compatible models (e.g. https://huggingface.co/Qwen/Qwen-Image), so that Diffusers can easily identify and use those files. πŸ€—
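
For reference, here's a minimal sketch (using `huggingface_hub`; the component folder names in the comment are illustrative, assuming the usual Diffusers layout) to check that `model_index.json` and the component folders now sit at the repo root on this PR's revision:

```python
from huggingface_hub import list_repo_files

# List the files on this PR's revision: after the move, model_index.json
# and the component folders (e.g. transformer/, vae/, scheduler/) should
# sit at the repository root rather than nested one level down.
files = list_repo_files("inclusionAI/TwinFlow", revision="refs/pr/6")
print(sorted(f for f in files if f.count("/") <= 1))
```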

alvarobartt changed pull request title from WIP to Support `TwinFlow` via `diffusers`
alvarobartt changed pull request status to open
inclusionAI org

Thanks for this PR! @alvarobartt
I noticed that you moved all the files to the root directory, which was also my initial thought.
However, since I might continue to upload updated models (v1.1, v1.2, etc.), I created subdirectories. But I found that downloads of files inside subdirectories are not counted toward the repository's download count. Do you have any solutions to this problem?

Hey @kenshinn, to follow the same structure as other similar repositories on the Hub, I'd rather create multiple repositories and group them under the same collection (e.g. https://huggingface.co/collections/inclusionAI/twinflow). I'd then rename this model to e.g. inclusionAI/TwinFlow-Qwen-Image-v1.0, and create similar repositories for other versions or for other base models; from the model card, I think you were targeting https://huggingface.co/Tongyi-MAI/Z-Image-Turbo soon too πŸ€—
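
In case it helps, a minimal sketch of that setup with `huggingface_hub` (the repository name is just the one suggested above, and the collection slug is a placeholder, since real slugs carry an ID suffix):

```python
from huggingface_hub import HfApi

api = HfApi()

# One repository per model version, e.g. the renamed repo suggested above.
api.create_repo(
    "inclusionAI/TwinFlow-Qwen-Image-v1.0", repo_type="model", exist_ok=True
)

# Group the versioned repositories under a single collection.
# The slug below is a placeholder; real collection slugs include an ID suffix.
api.add_collection_item(
    collection_slug="inclusionAI/twinflow-<id>",
    item_id="inclusionAI/TwinFlow-Qwen-Image-v1.0",
    item_type="model",
)
```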

In any case, it's obviously up to you and the team; this is just a suggestion on how to make those models play better within the Hugging Face ecosystem and reduce friction for users accustomed to the structure of models like https://huggingface.co/Qwen/Qwen-Image. Also, please let me know if there's anything I can do to help further; happy to connect to discuss and help https://huggingface.co/inclusionAI with upcoming model releases!

P.S. And yes, moving those to separate repositories (one model per repository) would make the download numbers easier to track, and would also enable Inference Endpoints deployments from the Hub, among other Hub features.

