Update 08/08/2025
There was an error in the link I shared for the Lightning 2.2 I2V LoRA by Kijai. I've just fixed it, so you can download it now.
---------
Update 07/08/2025
What I’ll say next is my personal opinion — I could be wrong — but based on the few tests I’ve done, this is my conclusion.
The more steps you use, the lower the strength you should assign to the lightning LoRAs. For example, I believe that for 6+6 or 7+7 steps, a strength of 1.0 is appropriate.
I think different configurations not only affect generation speed but also the movement itself.
Personally, using 6+6 or 7+7 steps with KSampler and LoRA strength at 1.0 gives more natural character movements without losing image quality. The downside is that it's slower, whether you're using Lightning 2.2 or Lightx2v 2.1 T2V.
On the other hand, using 4+4 steps with a strength of around 1.50 for high and 1.20 for low results in simpler movements and only a slight reduction in quality, but it’s much faster. That setup might be useful for certain types of animations.
LCM Sampler also seems to work.
You can run your own tests with different sampler and lightning LoRA configurations.
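To keep the combinations above straight while testing, here is a small sketch that records them as a lookup table. The values are just the settings reported above (my own observations, not official recommendations), and the function name is made up for illustration:

```python
# Step-split -> (high LoRA strength, low LoRA strength),
# as observed in my tests above. Not official values.
PRESETS = {
    (6, 6): (1.0, 1.0),   # more natural motion, slower
    (7, 7): (1.0, 1.0),   # more natural motion, slower
    (4, 4): (1.5, 1.2),   # simpler motion, slight quality loss, much faster
}

def lora_strengths(high_steps, low_steps):
    """Return (high, low) lightning-LoRA strengths for a given step split."""
    # Fall back to 1.0/1.0 for untested splits.
    return PRESETS.get((high_steps, low_steps), (1.0, 1.0))
```

Untested splits just fall back to 1.0/1.0; adjust the table as your own tests suggest.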
Kijai's current Lightning speed LoRA for 2.2: https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Wan22-Lightning
----------
I made a compact WF for WAN 2.2 GGUF
Links:
LoRA Lightx2v (Wan 2.1, but it might still be compatible): https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Lightx2v
GGUF: https://huggingface.co/QuantStack/Wan2.2-I2V-A14B-GGUF/tree/main
or https://huggingface.co/bullerwins/Wan2.2-I2V-A14B-GGUF/tree/main
You need a High model and a Low model with the same quantization. The VAE is the same as WAN 2.1.
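Since the High and Low models must share the same quantization, a quick sanity check is to compare the quantization tag in the two GGUF filenames. A minimal sketch (the filenames below are hypothetical examples, not exact repo names):

```python
import re

def quant_tag(filename):
    """Extract a GGUF quantization tag (e.g. 'Q8_0', 'Q5_K_M') from a filename."""
    m = re.search(r"(Q\d+_[A-Z0-9_]+)", filename)
    return m.group(1) if m else None

# Hypothetical example filenames for illustration only.
high = "Wan2.2-I2V-A14B-HighNoise-Q5_K_M.gguf"
low = "Wan2.2-I2V-A14B-LowNoise-Q5_K_M.gguf"
assert quant_tag(high) == quant_tag(low), "High/Low quantization must match"
```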
Make sure to update your ComfyUI.
Comments (14)
This workflow works great. Thank you.
Is GGUF better than FP8? What is the difference?
What does ‘better’ mean?
GGUF = slower, lower VRAM usage.
FP8 = faster, higher VRAM usage.
dkjdjsww Better as in: is the result higher quality somehow? It's said that GGUF is FP8+FP16 and gives better results than FP8.
Ok, so I've tested both, and the difference is massive: GGUF Q8 is way better than FP8 scaled, and I don't see any speed loss on a 5080. It also has better compatibility with LoRAs and better motion.
Works great with 4070 12GB, just added interpolation to the end.
pretty new to all this, what nodes did you use for that?
holy f***, you were not joking. This produced a 512x304 vid, 4 secs long, in 131 secs on my 16GB VRAM card, with decent quality for that res. That's ridiculously fast, good job.
please give me a link to: wan i2v 720 e50 with trigger
Live Wallpaper Style - Wan2.1 I2V 14B 720P | Wan Video 14B i2v 720p LoRA | Civitai
The creator of the LoRA uploaded more versions for WAN 2.2; I think only the low version.
EechiZero Thank you
This workflow is super helpful, and I'm generating really high quality renders, but I have a few questions if anyone out there can help answer them:
1) Why does this workflow include a T2V LoRA as well as an I2V LoRA? I have both of them loaded into each LoRA sampler as in the workflow, but I'm curious why there's a T2V version included.
2) I'm not getting much prompt coherence, even using 2.1 Loras - any ideas how I can improve this aspect?
3) I was able to avoid an OOM issue (I have a 4080) by adding in a VRAM Debug node between Sampler 1 and Sampler 2 that clears the cache and unloads the UNET 1 model. Any downsides to this fix?
1- I forgot to take out the I2V LoRA; it's not needed in the workflow. That one's just for giving a still image a bit of animation, more like a wallpaper with tiny movements.
2- Depends on which LoRAs you're using. It looks like Wan 2.2 still has issues with some 2.1 LoRAs. Also, some Lightx2v LoRAs mess with the motion a bit; try other Lightx2v ones and see if any work better.
3- I've got no idea about that; I never had VRAM problems. You could try asking on Discord or Reddit; maybe someone with a similar setup can help.
-Workflow updated with NAG nodes for better adherence to the negative prompt.
Did you use ReActor?
