VERSION 3 NOTES
First off, this has a custom node I made for predicting flow to help smooth transitions from iteration to iteration. Place the whole folder (included in the workflow download) in your custom_nodes folder.
Version 3 is an experiment at this point. It could use some more testing and tweaking, but it has some cool features, so I wanted to share.
Predictive Flow - this is probably the coolest one and the one working best right now. Basically, at the end of an iteration it predicts what the beginning of the next iteration should look like. That prediction gets converted to a latent and blended into the subsequent iteration, which should reduce jitter from iteration to iteration (see the sketch after these notes).
Noise adaptation - if your iterations start losing quality and getting noisier, it dynamically switches to adding more steps and adjusting sampler parameters.
Face ID - not working yet since I'm running into Wan compatibility issues, but it's intended to prevent face warping. I'm still working on this.
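For the curious, the blending step roughly amounts to a weighted mix of the predicted latent and the latent that starts the next iteration. A minimal sketch of the idea (placeholder names and weight, not the actual node code):

```python
import torch

def blend_iteration_start(predicted_latent: torch.Tensor,
                          next_iter_latent: torch.Tensor,
                          blend_weight: float = 0.3) -> torch.Tensor:
    """Blend the predicted continuation of the previous iteration into the
    latent that seeds the next iteration, to soften the hand-off."""
    # Simple linear interpolation between the two latents; the real node
    # may weight things differently (e.g. a per-frame falloff).
    return (1.0 - blend_weight) * next_iter_latent + blend_weight * predicted_latent
```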
New since version 2: I added a T2V feature. With it, you can change the workflow so the first iteration starts from a T2V prompt instead of an I2V workflow. Subsequent iterations then run I2V, so you can continue your T2V prompt for however many loops you want.
Heads up that FLF on T2V still references the uploaded image. I want a future version to reference the first frame of the T2V-generated video instead, but I haven't set up the conditional logic yet.
Notes on credits:
I got the base of it from https://civarchive.com/models/1829052?modelVersionId=2070152
I got most of the florence stuff from https://civarchive.com/models/1687498/wan-2221-i2v-2-workflows-merge-fusionx-lora-2-sampler-florence-caption-last-frame-color-match?modelVersionId=2061133
The FLF parts were mainly me messing around, so I added those myself, and I also adjusted pieces of the two workflows mentioned above to what I liked more.
What does it do?
It's basically a for-loop gguf Wan2.2 workflow. You can set however many iterations you want it to run through. There is an option to send the last iteration to an FLF workflow, which should help continuity a bit.
Features:
For-loop based I2V gguf workflow
T2V first iteration to I2V gguf workflow
Auto captioning per iteration + customizable appended text per iteration
Auto sizing images (to avoid mat/tensor size mismatch errors; see the sketch after this list)
Last For-loop iteration goes to FLF (optional selection)
Upscale + Interpolate
Individual loras for the high-noise and low-noise models per iteration (power lora loader, so easier selection)
Using ClownsharKSampler
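For reference, the auto-sizing step boils down to scaling the input down if needed and snapping both sides to dimensions the model can handle. A minimal sketch of that idea (the multiple-of-16 and max-side values here are assumptions, not the workflow's exact settings):

```python
def snap_resolution(width: int, height: int,
                    multiple: int = 16, max_side: int = 1280) -> tuple[int, int]:
    """Scale the image down if it's too big, then round each side to the
    nearest multiple so the model/VAE doesn't hit size-mismatch errors."""
    scale = min(1.0, max_side / max(width, height))
    w = max(multiple, round(width * scale / multiple) * multiple)
    h = max(multiple, round(height * scale / multiple) * multiple)
    return w, h

# Example: snap_resolution(1919, 1079) -> (1280, 720)
```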
To be added at some point... maybe (once I study up on these more)
VACE integration (wanting to look into Phantom too)
Spline Integration
So far I've been finding good results with linear/euler + beta57, but I'm always looking for better options.
Description
THIS IS AN EXPERIMENTAL VERSION. I will remove this one once the final v3 is made and more testing is done.
THIS USES A CUSTOM NODE I MADE. Put the whole predictive_flow folder into your custom_nodes and restart ComfyUI for it to work.
Custom node: Motion Prediction - smooths transitions between iterations, so there are fewer abrupt jitters in motion.
Adaptive Noise Steps: calculates the amount of noise added since the start. If a certain threshold is reached, it increases the step count and reduces the denoise value (see the sketch below).
Preventative Face Degradation (not working) - tries to guide the model into keeping the reference image's face. It doesn't break anything, but I don't think it does anything currently.
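Roughly, the adaptive logic works like this (a simplified sketch; the threshold and adjustment values are placeholders, not the node's actual numbers):

```python
def adapt_sampler_settings(noise_score: float,
                           base_steps: int, base_denoise: float,
                           threshold: float = 0.35,
                           extra_steps: int = 4,
                           denoise_drop: float = 0.05) -> tuple[int, float]:
    """If the accumulated noise estimate crosses the threshold, spend more
    steps and back the denoise value off slightly on the next iteration."""
    if noise_score > threshold:
        return base_steps + extra_steps, max(0.0, base_denoise - denoise_drop)
    return base_steps, base_denoise
```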
Comments
Request for Help: Unable to find the "MotionPredictor" node. After replacing it with a similar node, the counter node reported an error.
Per the download notes and description: "this has a custom node I made for predicting flow to help smooth transitions iteration to iteration. Place the whole folder (included in workflow download) in your custom nodes."
The folder should be included in the workflow download. Simply place it in your custom_nodes folder.
Thank you very much.
I'm a newbie. Please give me some guidance: After installing all the nodes, there is still an error. I checked the cause of the error.
The MathExpression|pysssss node (ID: 437) has an input expression that contains a division by zero (e.g. x/0, or (a+b)/c where c=0), so the node can't evaluate it and throws an error.
How should we solve this? Thank you!
It sounds like it's missing an initial value at the node or I'm missing an edge case. I'll try and look into it today, since another user reported the same.
Oh to be clear though, this math expression error is only happening in v3_EXPERIMENTAL. If you still want to use the workflow apart from that, try the v2.1
I am getting a "cannot divide by zero" error in the math processing at the top right ("a/b"). I have all the nodes, including the one that comes with the workflow. Any ideas?
Hmm, this may be some edge case I'm missing? I would think that all images would get a score from the detection operation, but who knows. (Lowkey, when I submitted the workflow I think I accidentally put a/b, but it should be calculating based off b/a lol.)
I'll try to fix it up for the next version to handle 0s regardless.
Oh to be clear though, this math expression error is only happening in v3_EXPERIMENTAL. If you still want to use the workflow apart from that, try the v2.1
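The workaround boils down to guarding the ratio so a zero detection score can't crash the expression node. A rough sketch of that kind of guard in plain Python (hypothetical name, not the actual node code):

```python
def safe_noise_ratio(b: float, a: float, fallback: float = 1.0) -> float:
    """Return b/a, but fall back to a neutral value when the detection
    score is zero so the math expression doesn't blow up."""
    return b / a if a != 0 else fallback
```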
Thanks. I should have added that I also bypassed the IP-Adapter segment because, even though I did install it, it says it can't find the model. It shouldn't matter for the problem described (I think?!), but just in case.
@bowiba1265909 Ah, the IP Adapter stuff can be bypassed for now. It's not working anyway, even if you have the models. That's the 'experimental part' of this lol. I've been trying to brainstorm a solution for that, because the problem is pretty complex.
Your issue is in a separate spot though. It's happening in the top-right section under the noise detection stuff.
Looking forward to your next version
I too am getting the same error. I changed a/b to b/a but I'm still getting it. Any fix?
Whenever I generate using a real image and they reveal skin, it turns to cartoon. What can I do about that? I'm new to WAN2.2 and I'm using wan2.2_i2v_high_noise_14B_Q3_K_L as well as the low-noise version of the model, plus the wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise lora as well as the low-noise variant.
Hmm, that sounds more like a prompt + image thing on your end rather than something with the workflow pathing, I think? I would check on that.
@gumpbubba721291 I'm just a little confused, I guess, about what to change. Are the default clip models in the workflow usable for real-person generation? Because I changed one of them since I didn't have it. Also, should I be using the wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise and low-noise loras with this workflow? I've even added things like digital-art, illustration to the negative prompt.
Well, to an extent some of this is just experimentation, and you're going to have to see what works and what doesn't for your workflow. At least on my end, I haven't had any issue with the stuff I have in the workflow; it has generated realistic-looking video perfectly fine.
@gumpbubba721291 I see. I had one "decent" generation but it devolved from there lol. Guess I'll have to tweak the prompt and settings until I get something I like.
By chance, are you trying to do something like "remove clothes" or something that would expose a whole lot of newly generated space? I imagine that could be a root cause.
@gumpbubba721291 That is exactly it. I had one time where it worked okay-ish, but nothing good after that.
I would suggest using a lora for taking off clothes then. That would help reinforce what you're looking for instead of relying on the base model, and it would probably look more natural in the clothes coming off too.
@gumpbubba721291 Would Wan2.1 loras work on 2.2?
Usually. I usually try to have high noise on 2, and low noise on 1. Again, it's something you need to experiment with; every lora is different.
I'm using v2.1 and basic I2V mode. I could be blind, but the log keeps saying the Wan21 I2V 14B lightx2v lora is missing, and I don't see where that could be changed in the WF. I only see the lightning node referenced under the Low Noise section, and I manually changed it to a Wan2.2 Low Noise Lightning node, but the error still pops up in the log. I'm using ComfyUI under Stability Matrix, so the models are not in the usual places.
got prompt
Failed to validate prompt for output 63:
* LoraLoaderModelOnly 69:
- Value not in list: lora_name: 'Wan21_I2V_14B_lightx2v_cfg_step_distill_lora_rank64.safetensors' not in (list of length 187)
Output will be ignored
Failed to validate prompt for output 79:
Output will be ignored
Florence2 using sdpa for attention
Hey! Workflow looks great, I'm super curious about this continuity!
Though I have an error: "Wan21.process_out() missing 1 required positional argument: 'latent'"
It looks like the ClownsharKSampler doesn't register the latent image for some reason?
I tried your two latest workflows, but it doesn't seem to work for me :/
Anything I'm doing wrong?
Thanks!
I tried to replace it with a regular KSampler but get this one now:
"WanModel.forward() got an unexpected keyword argument 'control'"
Hmm, I haven't seen this error happen before, so I'm not sure on the resolution, but if the ClownsharKSampler isn't working, you should be able to switch it out with the standard KSampler and try that.