SMOOTHMIX WAN 2.2 T2V v3.0 UPDATE! - 03/14/2026
Just tweaked the effects of the prompts "smoothmixanime" and "smoothmixrealism", as well as realism in general.
All videos on the High and Low Showcases were made using "WAN 2.2 Smooth Workflow v4.0" with these settings: 900x600 resolution, 8 steps, Euler sampler, simple scheduler.
Just like T2V v2.0, it has lightx2v baked in.
The effects of the prompts "smoothmixanime" and "smoothmixrealism" were a little too strong - now you need to complement them with additional style-related prompts to get the full effect. Adding "Realistic Style" or "Anime Style" should be enough. ^^
By popular demand (lol) you can make more normal-sized breasts now - no flat chests though, sorry flat chest lovers.
Skin gets more detail if you go for a more realistic style - as long as you don't use the "smoothmixrealism" prompt. In that case the skin will automatically be very smooth.
Added some abstract concepts to it! They add more variety and color to the results.
GGUF MODELS For I2V v2.0 and T2V v2.0!
Great news for those who need GGUF versions!
The user @BigDannyPt managed to convert SmoothMix WAN 2.2 Img2Vid v2.0 and SmoothMix WAN 2.2 Txt2Vid v2.0!!
Be sure to thank him for his efforts! =D
GGUF - SmoothMix WAN 2.2 Img2Vid v2.0
GGUF - SmoothMix WAN 2.2 Txt2Vid v2.0
SMOOTHMIX WAN 2.2 I2V v2.0 UPDATE!
For more info about the update and differences between versions check out this article.
All videos on the High and Low Showcases were made using "WAN 2.2 S. Workflow v2.0" with default settings except the resolution - they all used 900x600 on the workflow.
Lightx2v LoRA is NOT merged this time, so be sure to pick whichever LoRA you prefer to accelerate generation, along with how much weight you give it - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set with weight 3.0 on High and 1.5 on Low.
To render futanari characters and male figures correctly, LoRAs remain essential. Try using mine or any of your favorites.
Be aware that hyper-realistic content may suffer some morphing, since the model gravitates towards the style of "SmoothMix Animations". That effect can be mitigated a bit by using LoRAs trained only on realistic content and by using prompts that push towards realism.
Sorry for the irregular posts and updates. I’m currently pretty busy and need to reorganize a lot of things, so free time has been scarce. If everything goes smoothly, I expect to have considerably more free time starting in February. Yay ^^
Have fun!
SMOOTHMIX WAN 2.2 T2V v2.0 UPDATE!
IT'S FINALLY DONE! T_T
SmoothMix WAN 2.2 Txt2Vid v2.0 is what the model should have been in the first place - now it can show what can really be done!
Merged with LoRAs made using only images and videos generated from the SmoothMix checkpoints!
Very high quality images and smooth animations! Use it with the updated version of the Smooth txt2vid Workflow if you haven't downloaded it yet!
Much - MUCH - more variety of clothes, hairstyles, poses, body types, and skin colors.
You can use captions or prompts! Both will work! Use both to ensure what you want is generated!
Fox girls, cat girls, demon girls, oni girls - all the girls (and MILFs) are here. ;)
It responds to the prompts 'SmoothMixAnime' and 'SmoothMixRealism'! All the LoRAs merged into it had those key prompts from the SmoothMix Animations Style, and they have the same effect here! Check the SmoothMix Animations Style page for details!
It's completely uncensored, so it should also work MUCH better with NSFW LoRAs. Give it a try. ;)
IT CAN'T generate male anatomy reliably! You are going to need LoRAs for that! SmoothMix's priority is the ladies.
Smooth Mix Wan 2.2
A Smooth Mix version of the Wan 2.2 A14B!
I tried to make it as versatile as I could - I hope you guys like it!
Every video on the showcase used an image from my Gallery! All of them have a comment with a link to the source image used.
Key Points:
Every video on the showcase was made using my new Wan 2.2 Workflow v2.0/Txt2Video Workflow v2.0 on its default settings. Make sure to use it!
No LoRAs were used to make the videos on the showcase. Try making a video without LoRAs first.
When using Loras, start by setting their weight between 0.3~0.5 and increase it if necessary.
Recommended Settings
Steps: 4 or 6
CFG: 1
Sampler/Scheduler: Euler a/Normal or UniPC/Simple
Resolutions:
Use the resolution your setup can handle. As a starting point, I recommend these:
For High-End Spec PCs: 560 x 940 or 940 x 560
For Mid Spec PCs: 480 x 720 or 720 x 480
After testing these resolutions adjust as needed.
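To make the recommended defaults above easy to reuse, here is a minimal sketch in Python. The dictionary keys and the `pick_resolution` helper are purely illustrative - they are not part of any ComfyUI or Wan 2.2 API, just a way to keep the starting values in one place while you test.

```python
# Hypothetical settings bundle mirroring the recommended defaults above.
# Nothing here is a real ComfyUI API - it's just a convenient reference.

RECOMMENDED = {
    "steps": 6,                     # 4 or 6
    "cfg": 1.0,
    "sampler": "euler_ancestral",   # "Euler a" pairs with "normal"
    "scheduler": "normal",          # or UniPC with "simple"
}

def pick_resolution(tier: str, landscape: bool = True) -> tuple:
    """Return a starting (width, height) for the given PC tier."""
    table = {
        "high": (940, 560),  # high-end spec PCs
        "mid": (720, 480),   # mid spec PCs
    }
    w, h = table[tier]
    return (w, h) if landscape else (h, w)
```

For example, `pick_resolution("mid", landscape=False)` gives the portrait 480 x 720 starting point; adjust up or down from there as your hardware allows.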
Have fun! =)
Description
FAQ
Comments (65)
Does the T2V v3.0 still have Lightx2v baked in? I know you decided to remove it from I2V, but you hadn't from T2V.
Yes. Sorry I forgot to say that in the description. >.<
Hi, I've been using your I2V v2 for a while now, and I wanted to tell you that you've done a really impressive job. Lots of positions, including spooning, are working, which is quite rare! :)
The only "problem" is the excessive bounciness/flabbiness occurring even with prompts supposed to prevent that effect, so I wanted to ask if there's any chance you'll release a version with more "firm" skin/bodies? That would be awesome!
V2 I2V honestly is a straight up downgrade.
I found the oldest version on huggingface, for me, that one seems to work better than all of the others. I could just be on crazy pills and the only thing DigitalPastel changed was the name though. Since they're the same size lol
LTX 2.3 version please.
It is nice
Hahaha, I downloaded it without reading that it was T2V, but oh well, I'll wait for the I2V version if you plan to make it. Your work is always perfect.
Doesn't the T2V model work with I2V? I've seen a lot of I2V works here done on the T2V model.
In both the old and new versions, the faces are the same - the same thing over and over again. It's annoying. Doesn't it know any different ones?
It's trained on a lot of SDXL images, so you'll see that face a lot. And the SDXL face is everywhere, even in top-end closed-source models. That's what makes identifying AI-generated people easy: most faces are some slight variation on that girl's face. Susan D. XL is what I've named her.
Can't you use a character LoRA?
@sea5216 I want to use these backstories for other purposes, but a large amount of backstory would be damaging.
Overall a great model! Any hints on how to make jiggly parts a bit tighter? Less "water balloon" and more "Jello"? I can't seem to get anything to look firm.... Any tips would be greatly appreciated :)
Did you ever figure this out? My girls' butts are all over the place lmao
@sploopin No, unfortunately, I haven't found anything that works.
In T2V v3.0 it says: "By popular demand (lol) you can make more normal sized breasts now". But how??? It ignores any commands regarding smaller or normal breasts, they are always unnaturally huge.
You know, "unnaturally huge" is normal size at Civitai :) But you can try the sliders that were released some months ago - they can make them smaller.
I said the same thing but deleted my comment 😅. It's actually worse than v2. They're HUGE. If you try to go smaller, it looks like a man's chest and just weird. It's obvious the creator is a big-breast man and has trained the model as such. I like big breasts like everyone else, but not ALL THE TIME. No love for us guys who like petite girls, it seems. Got to mix it up every once in a while. I wish he would train on some other type of women than what's specifically to his taste, since he's charging us money for it (buzz). I spent like 10 or 20 dollars on this model (forget what the conversion is) and I don't even use it. I'm still using v2. I just don't think the women v3 creates are worth the money. It kind of has me wanting to learn to train my own models. Maybe he'll read our comments and train the next model with the tastes of other men in mind 😅
@dft78750707 Just tried this after reading your comment, and it doesn't really work sadly. That LoRA is 6 months old and doesn't really hold up anymore. The chests are still very broad/manly - the type of body frame meant for a girl with a huge rack. It just doesn't look right. Oh well. Thanks for the suggestion though.
How do I make iterations have the exact same color oscilloscope reading at the stitching point? I've gotten close - I'm about 10% off across the board. I don't care if they differ across the video, just at the crucial transition point.
Don't know why, but I2V (V1) just gives me a black output. No issues with V2.
I have much the same issue, but for me it's related to T2V. With SmoothMix v2, no issue. With SmoothMix v3 on the exact same workflow, black screen.
Downgrade your comfyUI to stable version using the 'update_comfyui_stable' scripts
I'm not messing with my comfy installation, lol. There's no telling what'll break if I do, so I'm going to pass. I don't need to try every new model that comes around. Chances are this gets fixed with a v3.1 anyways.
@Tompte same here, I won't make my config un risk for a model. Let's wait next one
I've had that same issue on the standalone version using PyTorch (nightly) after doing a git pull. It doesn't happen in the app version. Try updating (or downgrading) PyTorch.
Has anyone else had the black screen on V1? I've tried everything, but it's still black. It worked well before; now it's just all black.
Downgrade your comfyUI to stable version using the 'update_comfyui_stable' scripts
I have the black screen with the V3 T2V. No issues running any other Wan22 model.
@sorrelservices
I’m on a Linux cloud server, not a Windows local install, so I don't have the .bat scripts. Every time I boot, it pulls the latest custom nodes from GitHub, making a manual rollback unstable.
I already matched the frontend to 1.41.21 and the backend to the March 19th commit, but I'm still hitting c10::BFloat16 runtime errors and black frames.
Is there a specific pip version or dependency lock for Wan 2.2 that your stable script uses to fix the NaN math?
This happened to me after doing git pull. The update seemed to mess up Comfy somehow. Hopefully they fix it soon. I've tried everything: changing PyTorch from nightly to stable, downgrading Python from 3.14 to 3.13, reinstalling ComfyUI. No dice. The app version still seems to work, just not the local Python version.
Any reason why the file sizes are significantly smaller than the previous versions? (Ex: 19 GB vs 13.5 GB)
Is there any benefit to using Patch Sage Attention KJ and Model Patch Settings?
Yes - optimizations and speed. Triton/Sage Attention can speed up generation. Runtimes can offload model weights to CPU RAM or NVMe, so low-VRAM GPUs (like 10 GB) plus lots of system RAM (64 GB or more) can run larger FP16/BF16 models, though slower than staying on the GPU.
Model Patch Settings (fp16_accumulation):
FP16 is more prone to NaNs, so enable it if you use an FP16 model and disable it if you use BF16 - BF16 is more stable and less likely to produce NaNs.
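Not related to the workflow nodes themselves, but a quick NumPy sketch of why FP16 is the NaN-prone format: float16 tops out around 65504, so large intermediate products overflow to infinity, and inf minus inf becomes NaN. BF16 keeps float32's exponent range, so the same product would stay finite there.

```python
import numpy as np

# float16 maxes out around 65504, so 300 * 300 = 90000 overflows to inf.
x = np.float16(300.0)
overflow = x * x           # -> inf in float16 (NumPy warns about overflow)
nan = overflow - overflow  # inf - inf -> nan

# bfloat16 (not native in NumPy) shares float32's exponent range,
# so the same 90000 would remain a finite value there.
```

This is the failure mode that shows up downstream as black frames or NaN math: once one activation overflows in FP16 accumulation, the NaN propagates through the rest of the network.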
Are the LoRAs at https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/tree/main now better than the one referenced at 3.0 and 1.5 strength? Or should I keep using the KJ LoRAs for lightx2v?
@DigitalPastel will there be an I2V for v3.0?
For some reason, if I try to run it with the Load Diffusion Model node, it always gives black output when it was working fine before. But when I switch to CheckpointLoaderKJ or the UNETLoader for GGUF, it works. Is this a bug? Can someone report this?
I think the baked-in lightx2v LoRAs are not compatible with the new Comfy loaders. If I'm not wrong, it has to do with the new stable version of PyTorch.
Yes, I encountered the same issue yesterday. Only the SmoothMix I2V v1 version produces an entirely black output, while all other models work properly. In the end, I chose to roll back my version of ComfyUI.
@ACancd I think it's fine now - I just tried V1 and it works like before. I guess they fixed it.
Black output on v2 for me; using the Q6 2.2 normal Wan files works fine though... but I'm on Forge. Maybe it doesn't like safetensors.
I noticed character LoRAs trained on base Wan2.2 T2V models lose consistency with SmoothMix (the only mitigation I've found is to use the Wan2.2 T2V low-noise model). Is it possible to retrain these LoRAs directly against SmoothMix? I tried and failed with aitoolkit (using the Wan2.2 text encoder and the SmoothMix safetensors). Thank you
GGUF of V3?
How do I get it to stop making my characters cum out of their mouths? >.>
Use f4c3spl4sh, so that it comes out from the right place. No guarantees.
@CyberCream Is that a lora or prompt?
They also keep spitting and sweating and dripping fluids that I never asked them to and I'd like it to stop >.>
@Umbra The first move would be putting words like "spit" or "wet" in the negative prompt, and randomizing generation seeds to find a seed that's less drippy. Maybe also adding some detail to the positive prompt so that "fluids" don't have a chance to fill in.
You could also try using a cum/fluids lora but give it negative weight if the prompt/seed can't keep it out, but negative weighting is more experimental for concept loras compared to sliders.
My third and least sure suggestion is changing the shift value. You can test it on one seed and see whether a flow shift of 2 vs 8 changes it. Sometimes the higher motion on a high shift 'creates' fluid. But lower shift favors detail, and fluid is a detail... The first two options are more straightforward.
@SIDK Thank you, I'll see if that helps. I'm so used to A1111's SD randomizing the seed every time I generate, I forget I have to do it manually in Comfy.
I made something good. Please take a look at my post.
Wow, T2V V3 is great - can't wait to check out I2V when it's released.
Could VACE be added to these?
Node 'ID #8' has no class_type. The workflow may be corrupted or a custom node is missing. How do I solve this problem?
My videos are always blurry with the workflow's default parameters - any tips?
This model is pretty cool.
My settings: for T2V version 3.0
Shift: 5.5
Steps: 8 = 4 + 4
CFG high: 1.5
CFG low: 1.3
Resolution: 512 x 512
Length: 81 frames
sampler: euler_ancestral
scheduler: simple
Has something happened to the model? SmoothMix I2V v2 and also the GGUF version are giving me blurry or motionless videos without my having made any changes to my workflow; other models work fine.
Lightx2v Lora is NOT merged this time so be sure to use pick any LoRA you prefer to accelerate generation as well as how much weight you use on them - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set with weight 3.0 on High and 1.5 on Low.
I have the same issue. Have you fixed it?
@ws1gbg No, for now I've given up and am using another merge. The model worked well without accelerator LoRAs, but I haven't tried using them, since I prioritize quality with more steps.