A remake of my Wan 2.1 I2V workflow to support Wan 2.2, nothing fancy
Join my Discord server for updates on new LoRAs, tips & tricks, Workflows, RunPod templates and the holy right to be close to an amazing charismatic person like myself:
https://discord.gg/fyha5Pzm
Wow, that looks great!
"Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 32, 19, 160, 88] to have 36 channels, but got 32 channels instead"
Update ComfyUI and update the GGUF nodes.
It took me several cycles of updates (and one run of that pip command against requirements.txt) for things to improve. If you don't see the new Wan 2.2 templates in 'Browse Templates', you're too far behind.
I have updated ComfyUI. This is my current version.
ComfyUI: v0.3.46-2-g7d593baf. (2025-07-29)
ComfyManager: V3.35
This is still an issue. Can you provide a more in-depth solution?
Red_Line_Studios try removing the "videohelpersuite" folder from custom_nodes. That was the issue for me.
Dumb question: do Wan 2.1 LoRAs work with 2.2?
No
I read a rumor that if you turn the LoRA strength up to 2.0 it will work; I'll try this when I install later.
DrainBamage Would love to hear if that worked!
mauriziomkr sadly I updated comyUI and I'm getting the popular torch error, unable to test. Will try again tonight and let you know results
DrainBamage That’s a pity. Hope it works mate
You mention sage attention in the workflow, but I'm failing to find the setting for it.
bump, not seeing it in the workflow
I am super impressed with Wan 2.2 so far, but there is one MAJOR fly in the soup! The LoRA I was using the most (Bouncing Boobs by ai_build_art) returns 'Lora key not loaded', and the boobs don't bounce as nicely as before! (Tragic, I know!)
The boobs in your reference video are much nicer than what I've been able to get all day. Could you comment on the 'best boob bounce' for Wan 2.2 that you've found so far? (e.g. do we really need a boob bounce LoRA now?)
No, we do not need any LoRA for this. Just write a simple prompt.
dakshroy98 Boobs do bounce, but quite stiffly... a LoRA trained for 2.2 is definitely needed for extra jiggle physics.
"Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 32, 19, 160, 88] to have 36 channels, but got 32 channels instead"
Update ComfyUI
Red_Line_Studios I got the same error; in my case it came from using the 2.2 VAE, and switching back to the 2.1 VAE fixed it.
yastim8282 The 2.2 VAE seems to be for T2I only; it doesn't work with I2V.
I was getting that error, then I saw a comment on Reddit about removing a Flow2 node, which worked for a while and then stopped working. In the end I reinstalled ComfyUI Portable from scratch and ran all the updates, and now it works (probably until the next update).
The 2.2 and 2.1 VAEs give me the same problem, and I don't have Flow2 installed either. It only happens with this workflow; every other Wan 2.2 workflow I've used works.
ultimaniac do you use the 5B model in the other workflows? I read it only works with this specific model
Are you able to make a version that contains SageAttention and TeaCache / block swap features like in the previous workflow, or is that not compatible with 2.2? Also, I know 2.1 LoRAs aren't compatible, but does that mean 2.2 is NSFW trained?
Not compatible yet
HearmemanAI yes, we need TeaCache with 2.2, because generating 3 seconds takes 35 minutes.
romanfmz373 I'm using the high noise / low noise models and getting an 8-second video in under 12 minutes, including reloading models, as I'm on 12 GB VRAM. lightx2v is very fast, and use a Q3 model if you want to add more LoRAs on top.
SynthArtSnap08 which workflow is this, with lightx2v and LoRAs?
Is it intended that the WanImageToVideo and VAE Decode nodes have missing inputs? I get a lot of warnings when opening the workflow and also in the console. The job does not fail, but there is no output!?
I2V: why does the video look accelerated?
PS: I switched to 480x832, or it is way too slow on a RunPod 5090.
Nice node for setting the number of steps, halving it, and filling all 4 fields; very useful, thanks :)
FIX FOR ERROR:
"KSamplerAdvanced
Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 32, 19, 90, 60] to have 36 channels, but got 32 channels instead"
Using all the exact same models as defaulted in the workflow on first load, but when I try to run it I get this runtime error. Seems like an issue with the diffusion model/sampler config. Updating ComfyUI did not seem to help.
I'm using:
Diffusion Models: wan2.2_i2v_high_noise_14B_fp16.safetensors
wan2.2_i2v_low_noise_14B_fp16.safetensors
Clip: umt5_xxl_fp8_e4m3fn_scaled.safetensors
VAE: wan_2.1_vae.safetensors
EDIT: It was definitely a ComfyUI version issue. If you use the portable version of ComfyUI, you will need to download it again (for me, Update ComfyUI was not enough; you want version 0.3.48). If you're like me, you'll need to move a few folders over into the new installation, e.g. 'models', 'custom_nodes', and 'user'.
LASTLY, I noticed that despite the update there was one node that still did not work, and installing missing nodes does not resolve it. The "ImageListToImageBatch" node has been renamed with spaces, "Image List To Image Batch". So I had to (and you may need to) re-add this node from the Node Library and replace the old one with the new. Same node, just spelled with spaces all of a sudden. It worked well for me. Hope this helps anyone else struggling.
Thanks, HearmemanAI. :)
How much ram/vram do you need to run this ?
minimum 8GB VRAM
You cut it REAL close with 16GB VRAM. I wish we could do block swap or utilize CPU/RAM for some processes. Sure, it will add time, but at least you'd get a video out.
What if I just want 24fps interpolated from 16fps?
Putting the first 2 Video Helper nodes to 16fps and the 3rd to 24fps worked, but the final video is just longer and slower, rather than the same length and smoother.
2.2 is not 16fps native like 2.1. Doubling speed and doubling framerate will preserve speed. Non-multiple changes will result in dropped frames, but usually not enough for a huge change in speed. Check your multiplier; you're probably adding too many frames.
@Ponder_Stibbons yep, you were right. I just botched the (simple) math lol. thanks
@Gun4hire Btw since 2.2 is native 24fps you're better off using that for the decode and upscale. Then you have a direct comparison between pre and post-upscale. The motion is so much better now and it's much, much easier, imo, to prompt for really fast motion with the intent to interpolate. When you set the decode to 16fps it has no effect whatsoever on what is actually generated; it will just be the same exact frames running a bit slow. I know this stuff can tie your head in knots, we intuitively think in terms of a camera capturing frames of the real world. But the models only understand n frames per second, per training data. To it, there is no such thing as framerate, insofar as a sampling metric is concerned. At least not the way we think of it. If that makes any sense.
@Ponder_Stibbons I think I see what you're saying. So basically the generated video is what it is, and if you lower the fps, it will just play slower?
So does 32 fps take twice as long to generate as 16 fps?
@Gun4hire WAN generates frames. It's the number of frames that affects how long it takes. FPS has nothing at all to do with it. Which is why for the initial decode you use the native framerate (what it was trained on). That's the closest the model can come to understanding what 'speed' is. They fed 2.2 videos that we play at 24 frames per second. So ideally, a 24 frame output should look like one second of training footage.
I know, it ties the brain in knots, but it will click. Just set the first preview to 24fps. When you interpolate, you're adding frames, so if you don't increase the framerate, it will slooow down. If you up the fps without interpolating, it will speed up. That's why the norm is 3x interpolation saved as 60fps: you're tripling the frames, so you need to cram them into the same amount of time. So it (roughly) triples to 60fps, and the combiner will drop superfluous frames as needed.
If you're concerned about how much time the sampler takes, don't think about rate at all, just the total frames. And the size of the latent of course. Lots of other factors really, but NONE of them are fps.
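To put that arithmetic in concrete terms, here is a minimal sketch (not part of the workflow; the helper name is made up) of how frame count, playback fps, and duration relate under the assumption above, i.e. that sampler cost scales with total frames, never with fps:

```python
# Sketch of the frames-vs-fps arithmetic from the comments above.
# Assumption: interpolation multiplies frames; raising fps by the same
# factor keeps the clip's duration (and thus its apparent speed) unchanged.

def interpolate_plan(src_frames, src_fps, multiplier):
    """Return (out_frames, out_fps) that keep the clip duration unchanged."""
    out_frames = src_frames * multiplier   # interpolation adds frames
    out_fps = src_fps * multiplier         # raise fps by the same factor
    return out_frames, out_fps

# 24 frames decoded at Wan 2.2's native 24 fps = 1 second of footage.
frames, fps = interpolate_plan(24, 24, 3)   # 3x interpolation
print(frames, fps, frames / fps)            # 72 frames at 72 fps, still 1.0 s
```

Saving those 72 frames at 60fps instead, as described above, just means the combiner drops the superfluous frames.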
This workflow is a never-ending error. Missing nodes:
DownloadAndLoadGIMMVFIModel
Get Image Size
EG_WXZ_QH
GIMMVFI_interpolate
FastFilmGrain
ReActorRestoreFace
CM_FloatToInt
FramePackFindNearestBucket
I already installed them, but it still pops up "These nodes are missing".
I think you're on the wrong page.
This workflow doesn't have these nodes.
I have an rtx 3080ti. Failing on the KSampler portion for not having enough vram. Would you happen to know why? Any workarounds to fix it? Thank you
Finding a low-VRAM workflow on this site will help. Use the site's workflow filter and search for 'low vram' or 'simple wan'.
I think maybe you should add a version that includes the nodes that are mentioned in the notes. It's not a huge problem to patch in sageattention, lora, etc. nodes for anyone familiar with WAN, but I can definitely see someone being confused by references to nodes that are not there. There are plenty of embedded WFs in posted videos that would better represent your work. IMO.
Just a warning: this will take significantly longer than a normal video to render. I usually get 5 seconds of video in 5 minutes. This has been running an hour, and without any preview, who knows what's coming out the other end.
So: 300 seconds for a 0.5-megapixel-per-frame video, versus 5000 seconds for this.
(Before someone says there's a preview: there is, but it comes very late in the process.)
That being said... yeah, the results aren't bad.
Is it possible to add a lora to this workflow? Or even a blockswap?
Yes, easy enough, even SageAttention. Simply add the LoRA manually. Works perfectly.
@SaiWeb Where do I need to place SageAttention and the LoRA? Can you please share your workflow?
@dayzsteam725 Yeah, I never really figured that part out either. That being said, I thought you could run SageAttention by disabling xFormers and telling your command line you want SageAttention instead (assuming you went through the motions of pip installing SageAttention; ChatGPT helps a ton here).
On the flip side, I "believe" you can add a LoRA to this workflow by going to the "1st Pass" group (blue) and expanding both "GET_HIGH_NOISE_MODEL" and "GET_LOW_NOISE_MODEL". You can then do what's standard and connect those nodes to a LoRA loader node and have that LoRA feed into each respective KSampler.
I'm also probably wrong in saying this, but keep a special eye out for LoRAs that have both HIGH and LOW variants. You will need both, one for each respective KSampler. You probably also need to interject a "ModelSampling" node for both as well.
So it would look like this
1: GET_HIGH_NOISE_MODEL > LORA > Model Sampling KSampler 1
2: GET_LOW_NOISE_MODEL > LORA > Model Sampling KSampler 2
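As a rough sketch of that wiring order (plain Python standing in for the node graph; the strings below name nodes as the comments above do, and none of this is a real ComfyUI API):

```python
# Illustrative only: each "node" is a string, and the list models the
# left-to-right wiring order of one sampler branch as described above.

def build_branch(noise_level, lora_file):
    # model -> LoRA -> ModelSampling -> KSampler, for one noise level
    return [
        f"GET_{noise_level}_NOISE_MODEL",
        f"LoraLoaderModelOnly({lora_file})",
        "ModelSamplingSD3",
        "KSamplerAdvanced",
    ]

# Mirror the chain for both models, using the matching HIGH/LOW LoRA
# variant for each (the filenames here are placeholders).
high = build_branch("HIGH", "my_lora_high.safetensors")
low = build_branch("LOW", "my_lora_low.safetensors")
```

The point of the sketch is just the ordering: the LoRA always sits between the model loader and the sampling node, and the whole chain is duplicated once per noise level.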
The command I use to launch ComfyUI to get more bang for my buck:
python main.py --use-sage-attention --disable-xformers --dont-upcast-attention --preview-method none
This helps out with some VRAM as well.
@dayzsteam725 I can't show my workflows right now; I don't have access to my working machine at the moment. You need to place the SageAttention node before the ModelSamplingSD3 node (I don't remember the exact name), and a LoraLoaderModelOnly node before SageAttention. Add the LoRA nodes as rawrasaurussss134 says, coming from GET_HIGH_NOISE_MODEL or something like that. Simply look at where the HIGH and LOW pins lead, because a LoRA always comes after model loading. Repeat for both the HIGH and LOW models.
These nodes don't exist in this workflow; you simply create them from ComfyUI itself. Double-click on any free space in the workflow and type the node name, and it will show up. You also need to install all missing nodes from ComfyUI Manager, plus KJNodes (I don't remember the exact name either). Without KJNodes this workflow can show some red missing nodes, in my case at least.
@SaiWeb Thanks for the clarification - I learned something new :)
Hi, can you please assist? I installed ComfyUI and the Manager, but even after installing all prerequisites it won't work, because the Get Node and Set Node nodes are not known by Comfy :(
Were you able to fix?
Go to Node Manager and install KJNodes
@josaethy254875 thx worked :)
@josaethy254875 still same errors
Way too long, even on a 5090.
works great for me. thanks
Hey, thanks for this, but I don't get something: there is an upscale model in the workflow connected to a Use Everywhere node, but the upscale model itself is not used anywhere in the workflow, and the upscale just uses a regular Upscale Image node. What am I missing here?
Thanks for this workflow! I've been using it for more than 7 months now and it's amazing! Now, with the updates, some nodes (the sliders like clip length) don't show the slider anymore. Do you have any advice?
ummm the amputee porn needs to stop lol... ffs yall