v1.5 - Lora update - New Wan 2.2 Lightx2v Moe.
v1.4 - Lynx GGUF option & minor edits.
v1.3 - Added Lynx face consistency model.
v1.2 - Color Match bypass fix and minor edits. Added new Wan 2.2 Reward loras to experiment with. Helps with overall quality, motion, color but can have weird outputs with it also.
v1.1 - Added Color Match to experiment with.
This is a modified version of this workflow.
Using the Lynx face consistency model, Pusa, and Lightx2v-MoE for Wan 2.2/Lightning loras, it's twice as fast as the example WanVideoWrapper workflow. Great face consistency with various NSFW loras loaded.
The downside is that it uses more VRAM than the example WanVideoWrapper workflow. I'm not sure why, but the Pusa scheduler is likely the reason; then again, the doubled inference speed probably comes from it too.
It's a looping workflow; by default you can create up to 5 videos with the nodes already present, with 2x upscaling and interpolation at the end.
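The looping structure boils down to feeding each clip's last frame back in as the start image for the next i2v pass. A minimal sketch in Python, where `generate_clip` is a hypothetical stand-in for the actual Wan 2.2 i2v sampling (not a real ComfyUI API):

```python
def generate_clip(start_frame, num_frames=4):
    # Placeholder: a real workflow would run the diffusion sampler here.
    # We just produce a few "frames" derived from the start frame.
    return [start_frame + i + 1 for i in range(num_frames)]

def looping_i2v(first_frame, loops=5):
    """Chain several generations: each loop starts from the previous clip's last frame."""
    clips = []
    frame = first_frame
    for _ in range(loops):
        clip = generate_clip(frame)
        clips.append(clip)
        frame = clip[-1]  # last frame seeds the next generation
    return clips
```

To make fewer videos, you'd simply run fewer loop iterations (in ComfyUI terms, bypass or delete the extra sampler groups).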
Description
Lora update - Wan 2.2 Lightx2v Moe.
Comments (5)
Hey, I saw this interesting node that takes the latent directly from the last gen and uses it on the next one instead of using the last frame. I'm about to try it in my workflow, but I would have to copy-paste my workflow 3 or 4 times and make a giant mess lol. I can't build a looping workflow as clean as this. Anyway, if you get the chance, give it a try; it might be good enough to replace Lynx and Color Match entirely.
https://github.com/synystersocks/ComfyUI-SocksLatentPatcher
Here's his example without VACE:
https://github.com/synystersocks/ComfyUI-SocksLatentPatcher/tree/main/example_workflows/i2v
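The difference the comment describes can be sketched abstractly: seeding the next generation via the last frame goes through a lossy VAE decode/encode round trip, while passing the latent skips that loss. All names below are illustrative toy stand-ins, not the ComfyUI-SocksLatentPatcher API:

```python
def vae_decode(latent):
    # Toy stand-in for VAE decoding a latent to a frame.
    return float(latent)

def vae_encode(frame):
    # Toy stand-in for VAE encoding; rounding models the lossy round trip.
    return round(frame)

def next_seed_via_frame(last_latent):
    # Conventional looping: decode the last frame, re-encode it for i2v.
    return vae_encode(vae_decode(last_latent))

def next_seed_via_latent(last_latent):
    # Latent-patching idea: carry the latent over directly, no round trip.
    return last_latent
```

In the toy version, `next_seed_via_latent(2.7)` keeps the exact value while `next_seed_via_frame(2.7)` loses precision, which is why the latent route could reduce drift between clips.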
How do I lower the number of videos to 1 or 3?
Hi dozaler, what's your latest Wan 2.2 favorite model? Do you still prefer the Wan2.2 Insight model?
Sorry, I haven't been using Comfy/AI stuff for a while. I was using the Insight models, but I haven't noticed any difference tbh.
I thought Lynx was for t2v only. How did you make it work with i2v?
