The generated character shots are all in slow motion.
Thank you for your work. I tried several times with the same prompt and found that, compared to the original model, the scenes showing character movements are all in slow motion after using this Lora.
Hi, thank you for your feedback. The T2V lora slows the motion down, we are currently finding a way to solve this.
The I2V lora is better than the T2V lora, still worse than the 40-step base model though.
On the native workflow, use 3 KSamplers, with the first stage at CFG 3.5 without the Lightning LoRA. On the wrapper, you need to insert a variable value on the first CFG sampler.
A very interesting recommendation. Pls let me know how exactly to set up the KSamplers: 2 samplers for high noise (1st sampler 1 step at CFG 3.5, 2nd sampler 2 steps with the LoRA) and 1 sampler for low... 2 steps?
If this is about the new Seko lora, I have the same issue with I2V. The videos are all in slow motion. I am using native ComfyUI components together with sageattention.
3-KSampler setup:
1st KSampler: CFG 3.5, no LoRA (1-4 steps), HIGH
2nd KSampler: CFG 1 (1-4 steps), lightx2v, HIGH
3rd KSampler: CFG 1 (1-4 steps), lightx2v 0.7, LOW
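For anyone trying to reproduce this, here is the same recipe written out as plain data. This is only a sketch, not a workflow file: I'm reading "ltrx" as the lightx2v Lightning LoRA, and the strength for sampler 2 is not stated in the post, so it's left empty.

```python
# The 3-KSampler recipe as a settings table (a sketch, not a workflow file).
# Assumption: "ltrx" = the lightx2v Lightning LoRA; sampler 2's LoRA strength
# is not given in the post, so it is left as None here.
samplers = [
    {"sampler": 1, "model": "HIGH", "cfg": 3.5, "lora": None,       "strength": None},
    {"sampler": 2, "model": "HIGH", "cfg": 1.0, "lora": "lightx2v", "strength": None},
    {"sampler": 3, "model": "LOW",  "cfg": 1.0, "lora": "lightx2v", "strength": 0.7},
]

# Only the first sampler runs with real CFG; the Lightning samplers run at CFG 1.
assert [s["cfg"] for s in samplers] == [3.5, 1.0, 1.0]
```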
It's an issue with resolutions somehow! My test: 1280x720 normal speed, 960x544 a bit slower, 832x480 slow motion... even though 832x480 is one of the recommended resolutions.
Prompt from the creators of the loras: In a warmly lit living room, an elderly man with gray hair sits in a wooden armchair adorned with a blue cushion. He wears a gray cardigan over a white shirt, engrossed in reading a book. As he turns the pages, he subtly adjusts his posture, ensuring his glasses stay in place. He then removes his glasses, holding them in his hand, and turns his head to the right, maintaining his grip on the book. The soft glow of a bedside lamp bathes the scene, creating a calm and serene atmosphere, with gentle shadows enhancing the intimate setting.
Having the same issue, everything is in slow motion.
Quality is improved, though, now that it's been updated for the native workflow.
Quality-wise it's much, much better than it was.
The only problem now is that it's all in slow motion.
Agreed. Quality has been improved, it works with very low steps, and it seems heavily optimized. I've never been able to generate above roughly 832x832, and now I can generate HD videos in about 2-3 minutes.
It seems the 3-KSampler trick doesn't really work for me in the native workflow; the result looks far too contrasty.
Honestly, that seems to have worked. I used a total of 6 steps. The first video is with CFG 3.5, the second with 6.0. I feel like the second one is a little bit faster.
The only problem here is that this doubles, and in some cases triples, the gen time...
Hmm, it just boosts my other Civitai LoRAs too much and they end up at 100x. Things are flopping all over the place, morphing, degrading, and doing all kinds of weird stuff.
Wish the 3-KSampler thing would work for me.
Are you guys using the native or the Kijai WF?
I am using native components.
The 3-sampler settings are the ultimate key!!! Thx mate!!
Generated on an RTX 4070 12 GB, 64 GB RAM, MultiGPU, Q8 GGUF.
Referring to https://huggingface.co/lightx2v/Wan2.2-Lightning/discussions/5, here are my results:
1st: 12 steps total.
1st KSampler: CFG 3.5, no LoRA (4 steps), HIGH
2nd KSampler: CFG 1 (4 steps), lightx2v 0.6, HIGH
3rd KSampler: CFG 1 (4 steps)
2nd: 6 steps total.
1st KSampler: CFG 3.5, no LoRA (2 steps), HIGH
2nd KSampler: CFG 1 (2 steps), lightx2v 0.6, HIGH
3rd KSampler: CFG 1 ( steps), lightx2v 0.95, LOW. Res 960x650, gen time 540 secs.
3rd: 6 steps total.
1st KSampler: CFG 3.5, no LoRA (2 steps), HIGH
2nd KSampler: CFG 1 (2 steps), lightx2v 0.6, HIGH
3rd KSampler: CFG 1 (2 steps), lightx2v 0.95, LOW. Res 1280x720, gen time 11:40.
2-sampler: 10 steps total.
2nd KSampler: CFG 1 (5 steps), lightx2v 0.6, HIGH
3rd KSampler: CFG 1 (5 steps), lightx2v 0.95, LOW. Res 960x650, gen time 560 secs.
3 samplers are working great pls check my examples!
👍🏼
Why is the video I generated blurry? Could you share your workflow? Thank you.
The WF is in the 1st vid from the OP. I guess your "add noise" and "return with leftover noise" settings in the KSamplers are not correct: 1st sampler enable+enable, 2nd disable+enable, 3rd disable+disable.
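Written out per sampler, that enable/disable pattern maps onto the `add_noise` and `return_with_leftover_noise` toggles of the stock KSampler (Advanced) node like this (a sketch of the settings, not a workflow):

```python
# add_noise / return_with_leftover_noise per KSamplerAdvanced, as described:
# only the first sampler injects noise, and only the last one fully denoises.
ksampler_flags = {
    1: {"add_noise": "enable",  "return_with_leftover_noise": "enable"},
    2: {"add_noise": "disable", "return_with_leftover_noise": "enable"},
    3: {"add_noise": "disable", "return_with_leftover_noise": "disable"},
}

# Middle samplers pass leftover noise through without adding any of their own.
assert ksampler_flags[2] == {"add_noise": "disable", "return_with_leftover_noise": "enable"}
```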
I use the wrapper nodes because they seem to handle my lower VRAM better. I'm currently running 8 total steps: 4 and 4. How many steps would you run at 3.5 on the first sampler, using the scheduler?
Sorry for the double post, but in the vid below you have the WF (wrapper):
EDIT:
my spec: 5080 (16 GB VRAM) + 64 GB RAM
my comfy .bat settings (portable): .\python_embeded\python.exe -s ComfyUI\main.py --use-sage-attention --windows-standalone-build --fast fp16_accumulation --listen 0.0.0.0 --lowvram --disable-smart-memory --reserve-vram 2
If you wanna use the "lightning euler" scheduler, you can find it in my git repo:
https://github.com/Mu5hr00moO/ComfyUI-WanVideoWrapper/tree/clean-pr
I will try to keep it updated with Kijai's main repo, but if you don't want to pull it, just copy two files from:
https://github.com/Mu5hr00moO/ComfyUI-WanVideoWrapper/tree/clean-pr/wanvideo/schedulers
You need "fm_solvers_euler.py" (a new file with the implementation from the lightx2v team + beta/beta57 support) and "__init__.py" (it contains the implementation of the schedulers for the sampler; you need to replace your file with this one).
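A sketch of that manual install step. The directory layout here is an assumption (a temp dir with placeholder files stands in for a real clone and a real ComfyUI install, so the snippet is runnable as-is); in practice, point `src` at your checkout of the clean-pr branch and `dst` at your installed wrapper's `wanvideo/schedulers` folder.

```python
import shutil
import tempfile
from pathlib import Path

# Demo in a temp dir so this runs as-is. Real paths are assumptions; adjust
# src to your clone of the clean-pr branch and dst to your installed wrapper.
root = Path(tempfile.mkdtemp())
src = root / "clean-pr" / "wanvideo" / "schedulers"
dst = root / "ComfyUI" / "custom_nodes" / "ComfyUI-WanVideoWrapper" / "wanvideo" / "schedulers"
for d in (src, dst):
    d.mkdir(parents=True)

for name in ("fm_solvers_euler.py", "__init__.py"):
    (src / name).write_text("# placeholder\n")   # stand-ins for the real files
    shutil.copy2(src / name, dst / name)         # the actual install step (overwrites __init__.py)

installed = sorted(p.name for p in dst.iterdir())
print(installed)
```

Remember to restart ComfyUI afterwards so the replaced `__init__.py` is re-imported.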
Thanks for the workflow! Where did you get the lightning_euler scheduler? When running your workflow I get a kinda fuzzy output, but the scheduler is set to euler only.
It's the Euler implementation made by the lightx2v team; you can find it in my git repo: https://github.com/Mu5hr00moO/ComfyUI-WanVideoWrapper/tree/testing/wanvideo/schedulers
Also, a good prompt matters, along with a bit of luck with a good seed.
Ahhh got it, thanks again! I played around a bit; I don't think it's necessary to use CFG with the LOW model to get great motion. Instead I basically replaced sampler 3 with 3 steps of sampler 4, so 5 steps in total: sampler 1 steps 0-1, sampler 2 steps 1-2, sampler 3 steps 3-5. I also changed the LoRA setup so that step 1 is loaded WITHOUT the lightx LoRA, but this results in messed-up videos after the first generation, probably from not unloading the model completely after each run.
Hey dudes, share your prompts for the videos.
I copied the fm_solvers_euler.py file into the ..\wanvideo\schedulers folder on my PC, restarted ComfyUI, and refreshed my workflow, but I still don't see the lightning_euler option. What am I missing?
You can be honest, man, am I just retarded?
Can't tell, don't know you :)
But you also need the __init__.py file; the file you copied has the class, but the implementation is in the __init__ file.
(I just added another scheduler made by the lightx2v guys with beta/beta57 support.)
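To illustrate why copying just the solver file isn't enough, here is a minimal sketch of the registration pattern described above. The names are hypothetical, not the wrapper's actual code: the scheduler class lives in `fm_solvers_euler.py`, but something at package level (the `__init__.py`) has to expose it before the sampler's dropdown can list it.

```python
# Hypothetical registry sketch; not ComfyUI-WanVideoWrapper's actual code.

class FmEulerScheduler:
    """Stand-in for the class defined in fm_solvers_euler.py."""
    name = "lightning_euler"

# What the replaced __init__.py effectively does: collect scheduler classes
# into a lookup table that the sampler node reads to populate its dropdown.
SCHEDULERS = {cls.name: cls for cls in (FmEulerScheduler,)}

# If __init__.py is not replaced, the new class never lands in this table,
# which is why the option stays invisible even after copying the solver file.
assert "lightning_euler" in SCHEDULERS
```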
I actually see this as a happy accident: I can generate 24 fps videos directly 🤣, granted only 4 seconds long.
Thank you, I got it working. Have you submitted these changes to Kijai's repository? This new lightning_euler scheduler is nice.
FYI: I'm definitely a little bit retarded. 🤪
No worries, m8, we all are in some sense ;)
Thanks to you I've edited my post above and added info about this sampler, plus my spec and Comfy settings.
For people like me who use KJ Nodes I2V:
This 3-sampler native workflow works well: https://civitai.com/models/1864506/wan22-i2v-faster-motions-lightx2v-gguf-upscale-60fps
I followed Mu5hr00moO's T2V guide and tried to set up my I2V KJ WF, but got blurry video with heavy noise.
It seems that simply changing the CFG and start/end steps on the KJ samplers doesn't work like the native KSampler (Advanced).
I thought the differences should be "add noise" and "return leftover noise", but I'm silly and can't figure out how to set up the KJ samplers.
Scheduler for native gguf pls!!
@Mu5hr00moO Sorry to keep bothering you, but you didn't answer my question asking if you'd submitted these changes to Kijai's main branch. It seems like a pretty straightforward change that he could incorporate easily. I hate having to replace the __init__ file every time I do a WanWrapper update.
@97Buckeye: I did: https://github.com/kijai/ComfyUI-WanVideoWrapper/pull/1022
No answer from Kijai. Sorry for not answering, I missed your question. Who is retarded now? 😅
Believe me, I'm definitely the retarded one. 🤪
Did you notice that your submission says it has conflicts to resolve? At least Kijai has seen it. 🤔
@97Buckeye It was not like that when I made the PR; I've resolved the conflicts and the branch is now fully compatible with main. Please let's not go further off-topic here; you can leave a msg at the git.
How should I set up the 3 samplers on Kijai's video wrapper? I always get blurry, grainy videos.
It works, but if I try to use fp8 models I get an OOM error. 5060 Ti 16 GB.
Use Q4.
I'm able to get the most recent LoRA to work with just 2 samplers. But strength is up to 3.00, and each sampler is 4 steps. So high noise fp8: CFG 1 / 4 steps / 3.00 lightx2v, and low noise fp8: CFG 1 / 4 steps / 1.00 lightx2v. 8 steps total.
When using Wan 2.1 LoRAs, we add them to both high noise and low noise. Where should I add other LoRAs in this workflow, only to low noise?
I have tested the following three scenarios for the slow motion issue:
(9:16 ratio) 480 x 832 video, 81 frames (16fps), 5 seconds === No slow motion
(9:16 ratio) 576 x 1024 video, 81 frames (16fps), 5 seconds === No slow motion
(9:16 ratio) 720 x 1280 video, 81 frames (16fps), 5 seconds === Slow motion issue occurs
(9:16 ratio) 720 x 1280 video, 49 frames (16fps), 3 seconds === No slow motion
So it is possible that the issue is related to a combination of frame count and resolution. The issue is most likely baked into the base high-noise model of Wan 2.2 (I used FP8 scaled), since I ran the high noise without the LoRA and the low noise with the LoRA.
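A quick sanity check on those four reported cases, under the assumption that the relevant load is the total spatio-temporal size (pixels times frames), shows that only the largest combination triggered the issue:

```python
# The four reported test cases: (width, height, frames, slow_motion_seen).
# Only the biggest width*height*frames combination showed slow motion,
# consistent with a combined resolution + length effect.
cases = [
    (480,  832, 81, False),   # no slow motion
    (576, 1024, 81, False),   # no slow motion
    (720, 1280, 81, True),    # slow motion occurs
    (720, 1280, 49, False),   # no slow motion
]

volumes = {(w, h, f): w * h * f for w, h, f, _ in cases}
slow = [(w, h, f) for w, h, f, s in cases if s]

# The single slow-motion case is also the largest total volume.
assert max(volumes, key=volumes.get) == slow[0] == (720, 1280, 81)
```

Note that 576x1024x81 actually has a slightly larger volume than 720x1280x49 and still plays at normal speed, so the threshold sits somewhere between those and the full 720x1280x81 case.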
You can move past CFG 1.0 with a "Skimmed CFG" node set to 1-2 and the sampler CFG at 3.5 for High and Low. That gives you negative prompts without having to rely on NAG.
Hello, I'm a bit late, but I hope you guys can help.
My settings: Q5_K_M.
8 steps I2V, LoRA on High and Low, 4 steps each. Shift 5.0, LCM + Simple. Good results.
I tried setting another KSampler for High without the LoRA in my settings, and it did not work great:
Tried to add only 1 step on High without the LoRA, no success.
Tried to add 2 steps, no success.
Tried to add 4 steps, no success.
Changed shift to 8 like in the I2V WF posted above, no success.
Changed to Euler/Beta like in the I2V WF posted above, no success.
What am I doing wrong? I also tried everything like the WF above, same number of steps with the same distribution, and nothing.
Also, I can't find a ".\wanvideo\schedulers" folder on my PC. Where should I put the files to test the lightning euler scheduler?
Is the lightning euler scheduler a must for this LoRA?
Thank you all, guys.
Where in the workflow did you put the Skimmed CFG node? I am trying it after shift right now, with full skim negative and disable flip enabled.
You can do it like this (of course with str not at 7 XD, go for 1 or 2).
I've made CFG skim support for WVW here:
https://github.com/kijai/ComfyUI-WanVideoWrapper/pull/1122
Bump for visibility.
Here are some samples with base settings:
4 steps = 2 high, 2 low, using the Lightning T2V LoRAs
CFG 4.0 on high, CFG 2.5 on low
Same CFG skimming settings for both samplers
1:
2:
3:
Wonderful! Do you have a workflow that I can test?
Gimme a sec, I will make a PR with the new additions and add a WF in a git comment.
EDIT:
done - https://github.com/kijai/ComfyUI-WanVideoWrapper/pull/1122
This might be a dumb question, but oh well... Are you Kijai but with a different name? Or do you just offer excellent additions to his nodes? Do you have any idea when this CFG Skimming will be available in his main branch? I don't usually like to jump to non-main branches, but I would love to give this a try.
Hehe, nope, I'm only myself XD.
No clue about merging though, you'd need to ask Kijai about it. He seems to be busy with the recent S2V now.
I will try to separate all the guiding logic in his code to make things simpler and easier to maintain.
In my repo, https://github.com/Mu5hr00moO/ComfyUI-WanVideoWrapper/tree/CFG_Skimming is merged with Kijai's main,
and https://github.com/Mu5hr00moO/ComfyUI-WanVideoWrapper/tree/CFG_Dev with his S2V branch.
EDIT: because I'm using both of my branches, I will keep them updated with main.
I'm new to trying forked code. So, do I JUST use your code and disable Kijai's WanWrapper nodes?
The better way is to use git.
(from the custom_nodes folder, if you don't have it downloaded yet):
git clone https://github.com/kijai/ComfyUI-WanVideoWrapper.git
(go inside the WVW folder):
cd ComfyUI-WanVideoWrapper
(add my repo as a remote):
git remote add upstream https://github.com/Mu5hr00moO/ComfyUI-WanVideoWrapper.git
(fetch all branches, from Kijai and from me):
git fetch --all
After that (inside ComfyUI-WanVideoWrapper):
if you want Kijai's main:
git checkout main
if you want my CFG_Skimming:
git checkout upstream/CFG_Skimming
Remember to refresh Comfy and the browser after swapping.
And what were the other settings? I am getting the same slow-mo :( with your settings + Linear Dual Scale in the WVW WF.