
    SMOOTHMIX WAN 2.2 T2V v3.0 UPDATE! - 03/14/2026

    Just tweaked the effects of the prompts "smoothmixanime" and "smoothmixrealism", and realism in general.

    • All videos on the High and Low Showcases were made using "WAN 2.2 Smooth Workflow v4.0" with these settings: 900x600 resolution / 8 steps / Euler sampler / simple scheduler.

    • Just like T2V v2.0, it has lightx2v baked in.

    • The effects of the prompts "smoothmixanime" and "smoothmixrealism" were a little too strong - now you need to complement them with more prompts related to the visual style for the full effect. Adding "Realistic Style" or "Anime Style" to the prompt should be enough. ^^

    • By popular demand (lol) you can make more normal-sized breasts now - no flat chests though, sorry flat chest lovers.

    • More skin detail if you go for more realistic-style videos - as long as you don't use the "smoothmixrealism" prompt. In that case the skin will automatically be very smooth.

    • Added some abstract concepts to it! They add more variety and color to the results.

    GGUF MODELS For I2V v2.0 and T2V v2.0!

    Great news for those who need GGUF versions!

    The user @BigDannyPt managed to convert SmoothMix WAN 2.2 Img2Vid v2.0 and SmoothMix WAN 2.2 Txt2Vid v2.0!!

    Be sure to thank him for his efforts! =D

    GGUF - SmoothMix WAN 2.2 Img2Vid v2.0

    GGUF - SmoothMix WAN 2.2 Txt2Vid v2.0

    SMOOTHMIX WAN 2.2 I2V v2.0 UPDATE!

    For more info about the update and differences between versions check out this article.

    • All videos on the High and Low Showcases were made using "WAN 2.2 S. Workflow v2.0" with default settings except the resolution - they all used 900x600 on the workflow.

    • The Lightx2v LoRA is NOT merged this time, so be sure to pick whichever LoRA you prefer to accelerate generation, as well as how much weight to put on it - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set to weight 3.0 on High and 1.5 on Low.

    • To render futanari characters and male figures correctly, LoRAs remain essential. Try using mine or any of your favorites.

    • Be aware that hyper-realistic content may suffer some morphing, since the model will gravitate towards the style of "SmoothMix Animations". That effect can be mitigated a bit by using LoRAs trained only on realistic content and prompts that push towards realism.

    Sorry for the irregular posts and updates. I’m currently pretty busy and need to reorganize a lot of things, so free time has been scarce. If everything goes smoothly, I expect to have considerably more free time starting in February. Yay ^^

    Have fun!

    SMOOTHMIX WAN 2.2 T2V v2.0 UPDATE!

    IT'S FINALLY DONE! T_T

    SmoothMix WAN 2.2 Txt2Vid v2.0 is what the model should have been in the first place - now it can show what can really be done!

    • Merged with Loras made with only images and videos generated from the SmoothMix Checkpoints!

    • Very high quality images and smooth animations! Use it with the updated version of the Smooth txt2vid Workflow in case you haven't downloaded it yet!

    • Much - MUCH - more variety of clothes, hairstyles, poses, body types and skin colors.

    • You can use captions or prompts! Both will work! Use both to ensure what you want is generated!

    • Fox girls, cat girls, demon girls, oni girls - all the girls (and MILFs) are here. ;)

    • It responds to the prompts 'SmoothMixAnime' and 'SmoothMixRealism'! All LoRAs merged into it had those key prompts from the SmoothMix Animations Style, and they have the same effect here! Check the SmoothMix Animations Style page for details!

    • It's completely uncensored, so it should also work MUCH better with NSFW LoRAs. Give it a try. ;)

    • IT CAN'T generate male anatomy reliably! You are going to need Loras for that! SmoothMix's priority is the ladies.

    Smooth Mix Wan 2.2

    A Smooth Mix version of the Wan 2.2 A14B!

    I tried to make it as versatile as I could, I hope you guys like it!

    Every video on the showcase used an image from my Gallery! All of them have a comment with a link to the source image used.

    Key Points:

    • Every video on the showcase was made using my new Wan 2.2 Workflow v2.0/Txt2Video Workflow v2.0 on its default settings. Make sure to use it!

    • No Loras were used to make the videos on the showcase. Try to make a video without using Loras first.

    • When using Loras, start by setting their weight between 0.3~0.5 and increase it if necessary.

    • Steps: 4 or 6

    • CFG: 1

    • Sampler/Scheduler: Euler a/Normal or UniPC/Simple

    • Resolutions:

    Use the resolution that your Setup can handle. As a start I recommend these:

    • For HighEnd Spec PCs: 560 x 940 - 940 x 560

    • For Mid Spec PCs: 480 x 720 - 720 x 480

    After testing these resolutions adjust as needed.
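The starting points above boil down to a tiny lookup. A minimal sketch of that selection logic, assuming nothing beyond the numbers listed (the tier names and the helper are made up for illustration, not any real API):

```python
# Pick a starting resolution by GPU tier and orientation, per the
# recommendations above, then adjust after testing.
PRESETS = {
    "high-end": {"portrait": (560, 940), "landscape": (940, 560)},
    "mid":      {"portrait": (480, 720), "landscape": (720, 480)},
}

def starting_resolution(tier, orientation="portrait"):
    return PRESETS[tier][orientation]

print(starting_resolution("mid"))                    # (480, 720)
print(starting_resolution("high-end", "landscape"))  # (940, 560)
```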

    Have fun! =)


    Comments (518)


    DeviantApeArtNov 10, 2025
    CivitAI

    How much VRAM is needed for the I2V model?

    qekNov 10, 2025

    It's better to ask about RAM too, it depends on multiple factors

    Setian91Nov 11, 2025· 2 reactions

    Your best experience would be with at least 16GB VRAM and 64GB RAM.
    You can run on lower specs, but your results will get worse because you'll then need to cut quality.

    I tried with 32GB of system RAM and it took me over 15 minutes to render 6 seconds, after my RAM upgrade only 5 minutes... so it does matter a lot.

    DeviantApeArtNov 13, 2025

    @Setian91 What was the specific GPU when it took 15 mins? Seems this model is poorly optimized. Is 24GB VRAM not enough to run this model fully on GPU?

    qekNov 13, 2025

    @DeviantApeArt Don't listen to him, he's wrong

    Setian91Nov 13, 2025

    @DeviantApeArt I used a 4080 super which has 16GB of VRAM and I used the scaled clip version

    qekNov 13, 2025

    @Setian91 I have less RAM, my GPU is worse, but I've been running WAN 2.2 without getting OOM

    Setian91Nov 13, 2025

    @qek we might have different settings or a different workflow. I used Smooth Mix's workflow without Sage or Triton, since my ComfyUI refuses to install them with all the compatibility issues.

    I didn't claim I got out of memory though, it just used the hard drive.

    I use DaSiWa's workflow now with some small edits like adding clip vision for better results, for me that workflow works a little better.

    I also use Windows 10, your experience might vary, but this was my experience and the slow generation was instantly fixed after adding more memory

    bhoppingNov 18, 2025

    @Setian91 fr? I also have 16GB VRAM, but it's a 4070 Ti Super, and only with 32GB of RAM. I think it took like 10-20 minutes for the Hunyuan base model, so it sounds like I'm in the same boat as you? I'm always resorting to GGUFs atm, and then I hear about people being able to use the raw models with 16GB VRAM somehow - maybe I do need the 64GB of RAM.

    Edit: RAM prices are atrocious rn nvm lol

    Setian91Nov 20, 2025· 1 reaction

    @bhopping yeah, there are ways to run it fast on lower RAM... and I tried, but the results were so bad compared to what I create now... and yeah, RAM prices have gone up a LOT sadly

    bhoppingNov 21, 2025

    @Setian91 that's crazy, how much RAM made the difference? I have 2x16 DDR5 RAM (32GB) and have 2 slots left. I could try to upgrade to 64 - is that how much you have? Sounds like it's worth the upgrade.

    iluvlamiaNov 11, 2025· 12 reactions
    CivitAI

    no I2V V2.0?

    jonk999Nov 11, 2025· 2 reactions
    CivitAI

    On the Smoothmix flow, for whatever reason, I could not get rid of the missing node error for Sampler Select and Scheduler Select. Not sure if I needed to downgrade the version of a custom node (it kept bringing up an issue with ComfyUI_Essentials), so I ended up just removing them and setting the sampler and scheduler in the sampler nodes manually.

    fairypantsNov 13, 2025· 1 reaction

    Replace ComfyUI_essentials with this version: https://github.com/cubiq/ComfyUI_essentials/tree/9d9f4bedfc9f0321c19faf71855e228c93bd0dc9 - that worked for me. Hopefully it does for you.

    jonk999Nov 13, 2025

    @fairypants Thanks. I'll give that a try. So I remove the old folder in custom_nodes and clone that one?

    fairypantsNov 14, 2025· 1 reaction

    @jonk999 Yep. I just overwrote the files in the original comfyui_essentials folder. Just make a backup in case anything goes wrong - but it should work fine.

    jonk999Nov 15, 2025

    @fairypants Thanks heaps. Will give it a go once I have some time.

    boxertwin75593Nov 11, 2025
    CivitAI

    Hi, does this work with character LoRAs trained on WAN 2.2? Mine doesn't seem to be responding well with this model/workflow. Thanks

    p2105633233Nov 12, 2025· 11 reactions
    CivitAI

    Has anyone else had issues with hips that have the irresistible urge to boogie? Like they just can't stop thrusting their hips regardless of how I prompt them. Edit: using the Wan 2.2 i2v model without any additional LoRAs, and the image used already has a man with a donger.

    kyonn92Nov 12, 2025

    Same issue. I tried to use both positive and negative prompts to suppress the body movement, but it did not work.

    doublesimp980Nov 12, 2025

    A bit; depends on what it was trained on - might have been overtrained on dance or NSFW. I'm trying to figure this out myself. What negative prompts have you used?

    Also remember that if CFG is set to 1.0 it ignores the negative prompt, so I usually try 1.5 - 2.0; above 2 tends to lead to overexposure.
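The "CFG 1.0 ignores the negative prompt" point is just arithmetic. Classifier-free guidance mixes the positive (cond) and negative (uncond) predictions roughly as out = uncond + cfg * (cond - uncond); a toy sketch with scalar stand-ins for the model's predictions:

```python
# Toy classifier-free guidance mix. At cfg=1.0 the result collapses
# to cond, so the negative-prompt branch (uncond) contributes nothing;
# raising cfg above 1 re-enables it.
def cfg_mix(cond, uncond, cfg):
    return uncond + cfg * (cond - uncond)

cond, uncond = 2.0, 1.0
print(cfg_mix(cond, uncond, 1.0))  # 2.0 -> negative prompt has no effect
print(cfg_mix(cond, uncond, 2.0))  # 3.0 -> pushed away from the negative
```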

    Setian91Nov 13, 2025

    Always, even if you specify something like "static", "natural", "at ease", "calm" or "rest" haha

    It leaves you with goofy results sometimes

    ArtificeAINov 13, 2025· 1 reaction

    Sounds like the same issue I've been seeing with the Remix checkpoint, too. Must be a Lora causing it.

    p2105633233Nov 15, 2025

    I've found that giving them something to do - walking, standing up, sitting down, etc. - sorta suppresses the urge, but the moment they stop they go into their idle animation of the boogie hips.

    mygenaiessentials138Nov 12, 2025
    CivitAI

    An absolutely brilliant model. Please keep iterating. Hoping for an improved I2V release.

    Pity_the_FooNov 13, 2025
    CivitAI

    For those wanting your character loras to work properly and have better adherence to body types and other custom details this is what worked for me:

    "For my GGUF workflow using the quantized V2 T2V checkpoints, I had to change the low-noise model to the standard quantized Q8 model for Wan 2.2 A14B T2V while leaving the high-noise SmoothMix checkpoint. I don't have a checkpoint loader with weights, so I went this route. It still has a better range of more natural motion in scenes, without the ridiculous adherence to model-specific proportions. I use the older LoRAs to add back the SmoothMix look at lower weights, using the low-noise LoRAs in the WF, while waiting for the V2 LoRAs to leave early access."

    AlberistNov 13, 2025· 1 reaction
    CivitAI

    If a new I2V model gets made, I'd love to see an attempt to fix the issue of nipples phasing through clothes. I love doing clothed vids, but most of the time, if the spot the nipples would be is out of frame, I'll end up with bright red nipples right on top of the clothing. Sometimes I'll lose half of my gens to it. Not unique to this checkpoint, but it's maybe something that could be addressed.

    boobkake22Nov 18, 2025

    I've had this issue as well. Good callout.

    evilenerjohn190Nov 13, 2025· 1 reaction
    CivitAI

    For the life of me I can't stop these videos from returning to the start position at the end of the video. I have loop set to 0.

    Lora_AddictNov 14, 2025

    You're probably trying to generate videos longer than 81 frames?

    wildkraussNov 14, 2025

    I echo Lora_Addict. In my experience this never happens when generating videos up to 81 frames (5 seconds at 16 FPS) in length. Sometimes I can push it to 96 frames (6 seconds), but beyond that the video almost always returns to the start position by the end.

    qekNov 14, 2025

    @wildkrauss It seems that's the reason why some Hunyuan Video users say that Wan 2 can't generate long videos. There are some LoRAs that make it possible to generate better long videos.

    evilenerjohn190Nov 14, 2025

    @Lora_Addict yep i was

    Lora_AddictNov 15, 2025

    @evilenerjohn190 yeah, Wan 2.2 can't do more than 5 seconds / 81 frames. Or let's say, you CAN, but it will do what you witnessed :)
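The 81-frame / 5-second limit discussed above is easy to keep straight with a little arithmetic: Wan-style models typically want frame counts on a 4k+1 grid at 16 FPS. A hypothetical helper (the function name and the snapping rule are our assumptions, not part of any tool):

```python
# seconds -> frame count at 16 fps, snapped down to the 4k+1 grid
# that Wan-style models typically expect (81 frames ~= 5 s).
def frames_for(seconds, fps=16):
    n = int(seconds * fps)
    return (n // 4) * 4 + 1

print(frames_for(5))  # 81 - the reliable maximum per the thread
print(frames_for(6))  # 97 - beyond this, expect the "return to start" artifact
```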

    MrfenderaiNov 13, 2025
    CivitAI

    What settings are recommended for an 8GB VRAM card?

    qekNov 14, 2025

    --lowvram ?

    harenchi91Nov 16, 2025

    I use the Q3_K GGUF models from BigDannyPt on a 3060ti 8GB VRAM + 32GB RAM

    Then just follow the recommended settings for Mid End. Use the Smooth Workflows V2.

    It takes around 5 minutes for a 5-second video at 1 CFG.

    copik47280549Nov 16, 2025

    And for a 7600 XT (16GB)? I keep seeing "HIP out of memory"...

    Bedovyy's GGUF of i2v: I use Q6 on my 4060 laptop GPU, 8GB VRAM + 16GB RAM, and it works pretty nicely.
    Without Sage and without any LoRAs it's like 500 secs for 81 frames at 480x848.

    androsnyNov 14, 2025· 2 reactions
    CivitAI

    I have a really hard time getting the urinary meatus to show up on a new penis in an i2v, usually the glans will be missing it. Any tips? pun intended

    rivdemon1221554Nov 14, 2025
    CivitAI

    By far the best I2V for prompting motions and movement that isn't typical to the initial image.

    5419587Nov 14, 2025
    CivitAI

    This is nice, very nice.

    One critique if you do another version: tone down the breast movements, as they are exaggerated.

    qekNov 14, 2025

    And remove the duplicate text encoder so it doesn't reload the same one, @DigitalPastel

    5419587Nov 25, 2025

    I'm also going to add that this model has an over-infatuation with making that stupid Pokimane face where she sticks out her tongue.

    singonNov 14, 2025
    CivitAI

    Does this model have an acceleration LoRA built in? Or is it still necessary to use the light acceleration LoRA?

    qekNov 14, 2025

    No need - it's built in, don't add another one.

    Stefano_038Nov 15, 2025
    CivitAI

    Is it better to write prompts in Chinese or in English?

    qekNov 15, 2025

    Can be both, thank umt5 and the Wan team for training on Chinese captions

    delta45424155Nov 15, 2025· 3 reactions
    CivitAI

    does using (token:1.5) help with getting that token to affect the output?

    ca8683Nov 16, 2025
    CivitAI

    I downloaded the Wan 2.2 i2v fp8 model and used the i2v workflow offered by the author (I use umt5 xxl fp8 e4m3fn scaled, clip vision h, Wan 2.1 VAE), then an error occurred: KSamplerAdvanced

    Given groups=1, weight of size [5120, 36, 1, 2, 2], expected input[1, 32, 21, 60, 60] to have 36 channels, but got 32 channels instead. (But the t2v workflow can run normally.)

    I may be wrong, but try using a different text encoder.

    slikvik55570Nov 16, 2025· 3 reactions
    CivitAI

    Could you please upload to Huggingface? I have to use Civit via a VPN in my country and I don't have the bandwidth to download.

    qekNov 16, 2025

    Just get the links and download without the VPN

    slikvik55570Nov 19, 2025

    @qek the links are still on the civit domain so I can't.

    asspowerNov 17, 2025
    CivitAI

    holy fuck, your checkpoint and workflow work amazingly

    ZannarkNov 17, 2025
    CivitAI

    Is there any page where I can use this for free, without a PC?

    yukiookami_8593027523Nov 17, 2025
    CivitAI

    I do not understand - does this model have the SmoothMix Animations LoRA inside of it or not?

    yukiookami_8593027523Nov 17, 2025
    CivitAI

    The T2V version does not generate dicks, and even with LoRAs sex scenes look bad.

    qekNov 17, 2025

    You can use i2v

    K3NKNov 18, 2025· 2 reactions

    @qek you can also use the damn base model and learn to combine LoRAs yourself... -.-" these merged bs models look like Wan 2.1

    qekNov 18, 2025· 1 reaction

    @K3NK or complain about women with dicks and cum leaking from everywhere because they've been merging random LoRAs, and say that it works as intended

    boobkake22Nov 18, 2025
    CivitAI

    A reminder that my Yet Another Workflow has a Smooth Mix version that follows their guidance.
    Also, by request, my RunPod template can now disable the default model downloads if you just want to use Smooth Mix, to save on space and startup time.

    DuckyDuoNov 18, 2025· 4 reactions
    CivitAI

    Is there a list of the merged loras anywhere?

    qekNov 19, 2025· 3 reactions

    smoothMixWan22I2VT2V_t2vHighV20.safetensors
    SPEED_HIGH.safetensors (outdated Wan 2.1 lightning?) strength=3.0
    SmoothMixAnimation_High.safetensors, strength=0.5
    SmoothMixStyle_High.safetensors, strength=0.2
    SmoothMixStyle_MoreOptions.safetensors, strength=0.2
    -----
    smoothMixWan22I2VT2V_t2vLowV20.safetensors
    Similar, but loras for the Low model

    RumabenNov 20, 2025· 2 reactions

    Thank you qek. :) Do you think the same LoRAs are used for the v1 T2V version? Imo it's better than v2. I found the first two but can't seem to find 'SmoothMixStyle_MoreOptions.safetensors' anywhere. :/ I guess it should be found here?: https://civitai.com/user/DigitalPastel/models

    I like how faces look with this model - less of an AI look than the vanilla model. I'm trying to tone down the massive loads of artificial-looking cum and the strange, loose-looking vagina and anus. Subtracting the first two LoRAs doesn't seem to do a lot, but I'll keep on testing.

    Kung_fu_PronNov 27, 2025

    Is there a way to remove a LoRA from the model? I would like to remove the lightning LoRA.

    131xxx0130409Nov 20, 2025· 6 reactions
    CivitAI

    It would be greatly appreciated if the upcoming i2v v2 could include better control over female breast size. Personally, I often want a flat or very small chest, but the current version consistently generates them larger. When I try to use LoRAs to achieve a flatter chest, either the LoRA doesn’t apply properly or it causes unwanted deformation/distortion in the face and other areas.

    I’d be very grateful if the team could consider adding this level of control in the next version. Thank you so much for your continued improvements!

    131xxx0130409Nov 23, 2025

    @qek I've tried using this LoRA, but in I2V, there are issues where the face becomes blurry or turns into a completely different person.

    CherrySockNov 26, 2025

    Yeah, unfortunately it seems like the model is incapable of making small breasts. Even using loras, like the one mentioned, doesn't help.

    binge5208Nov 20, 2025
    CivitAI

    I'm using Smooth's T2V model to generate video, but why does the character shake in the first few frames?

    brandschatzen1945598Nov 20, 2025
    CivitAI

    Any plans for a quantized model of it? (Because not everyone has super efficient monster local machines ^^)

    confernoNov 21, 2025· 1 reaction

    It's already here - just google "smoothmix wan gguf".

    david469Nov 20, 2025
    CivitAI

    I'm using the Yet Another Workflow with this. I copied your checkpoint to ComfyUI/models/diffusion_models.

    It's not showing SmoothMix as a choice in the Unet Loader. Also, the workflow wants a high noise and low noise loader - but you have just one JSON file.

    What am I not understanding?

    TIA

    david469Nov 20, 2025

    Ok, I changed the start from Unet Loader to Load Diffusion Model and increased the size so it doesn't need to be scaled at the end. Looking much better.

    However, there is only a single SmoothMixWan and it's High. I used that for both, but where is the Low for this checkpoint?

    TIA

    civitaisks777Nov 21, 2025

    @david469 there's a high and low download though?

    david469Nov 21, 2025· 2 reactions

    @civitaisucks777 I'm an idiot - there is. I didn't think to look at the download choices (still learning CivitAI).

    ReubyDeubsNov 21, 2025· 1 reaction
    CivitAI

    For some reason I'm getting

    Prompt execution failed TypeError: Failed to fetch

    only with this diffusion model, both high and low. I'm only experiencing it now for the first time after having it work for the last 2 weeks. I haven't re-downloaded the model, though I've read that could be a solution, because the file is large, so I'm just posting here to see if anyone else is having similar issues.
    Thanks

    DuckyDuoNov 21, 2025
    CivitAI

    Motion is better in my experience most of the time. But one thing I've noticed is that using LoRAs alongside merged models seems to produce output that is noticeably worse in overall motion/motion variance. Sometimes a T2V output will have that I2V motion look that lacks realism. Not a knock on the model, just an observation; great results otherwise.

    qekNov 21, 2025· 1 reaction
    RumabenNov 21, 2025

    Faces are pretty good on version 1 of this model imo but for best realism try t2i (960x1440 or higher) with the base model and this lora and the workflow embedded in the image: https://civitai.com/images/98215009

    Res_2s/beta57 (or bong tangent) are pretty good for t2i. You'll find these samplers/schedulers in the RES4LYF repo.

    Then do i2v with that. I tried t2i with the smoothmix merge but the images just came out distorted or strange looking.

    Something like the new Nano Banana Pro is probably best at AI realism, but I suspect they don't allow NSFW content.

    For generating i2v I recommend the enhanced merge. It doesn't go crazy with loras and improves motion I think. https://civitai.com/models/2053259/wan-22-enhanced-nsfw-or-camera-prompt-adherence-lightning-edition-i2v-and-t2v-fp8-gguf?modelVersionId=2409571

    yukiookami_8593027523Nov 21, 2025
    CivitAI

    How do I achieve smooth yet dynamic animation? This model often gives me violent movements, which is sometimes good, but interpolation is completely ineffective because it doesn't understand physics. If I simply increase the framerate, the character moves even faster, so it doesn't help me. Slowmotion prompts don't help either.

    Melty1989Nov 21, 2025

    If you're using Lightx2v speed-up LoRAs, turn those off. Those are already baked into the model.

    @Melty1989 Yes, I know. The point is that if the character's movements are dynamic and fast, then automatic interpolation does not correctly capture strongly shaking parts, e.g. breasts.

    simonishereNov 23, 2025

    Try using CFG 1.5-2.5 in HIGH noise and use negative prompts to avoid those extreme moves.

    @simonishere for me, negative prompts change absolutely nothing

    simonishereNov 24, 2025

    @yukiookami_8593027523 negative prompts only work with high CFG; that's why I said "try CFG 1.5/2.5" (even 3.5). CFG 1 ignores the negative prompt.

    civitaisks777Nov 24, 2025

    @simonishere raising cfg will increase the time it takes, @yukiookami_8593027523 try a NAG node for negatives before raising cfg from 1

    civitaisks777Nov 24, 2025

    (or try a diff checkpoint, another one recently came off early access and it's smooth as butter 90% of the time)

    @civitaisucks777 can you tell me its name?

    simonishereNov 24, 2025

    @civitaisucks777 of course it will. If you want better results, sometimes you SHOULD wait longer - that's a direct correlation.

    I type a prompt to take the dick out of the girl, and that's it, but the generated video shows a sex animation. I can type sex, doggystyle, sex from behind... in the negative prompt and increase CFG all day long, but the effect is the same.

    The second problem is that the animation is often jumpy; for example, in cowgirl position the woman rises, stops halfway for a split second, then resumes, and so on.

    simonishereNov 24, 2025

    @yukiookami_8593027523 what sampler/scheduler are you using? Shift? Are you using LoRAs?

    @simonishere just default and close-to-default settings in the Smooth Workflow Wan 2.2 (img2vid/txt2vid/first2last frame) workflows

    civitaisks777Jan 17, 2026

    @simonishere are you intentionally giving bad advice? more time for better quality makes sense, but that's not the case here. you can only raise cfg to about ~2 before you REDUCE quality. and at ~2 cfg, the negatives aren't very strong anyway.
    so you're adding 1.5 to 2x more time for barely any difference. when you could just enable NAG with zero added time. NAG also gives stronger negative guidance than ~2cfg and you can change the NAG scale to make it even stronger if you need to.

    you shouldn't ever need more than 1cfg if you're using NAG and prompting properly. it's better to use that time saved for more steps or higher res

    simonishereJan 17, 2026

    @civitaisks777 you sound like an ad bot

    androsnyNov 21, 2025
    CivitAI

    I use the i2v almost exclusively and this is the best model by far for NSFW; I bought the early access of the t2v in appreciation. The only issue I have with this model is that it doesn't render the urinary meatus of penises correctly. If that could be fixed in the next version, I think this model would be perfect without any LoRAs.

    qekNov 21, 2025

    You could just download the base models and those SmoothMix loras instead

    tenokeNov 23, 2025

    Any chance you have a workflow that works from an initial image? I got e.g. the 'WAN 2.2 Smooth Workflow v2.0 (img2vid)' workflow to work, but in the end the image doesn't appear in the video at all.

    binge5208Nov 23, 2025
    CivitAI

    Regarding the I2V model in SMOOTHMIX WAN 2.2, the generated videos consistently show visible nipples on the woman's outer clothing, which bothers me and affects the aesthetics of the video. I like the model, and I hope you can provide a solution. Thank you.


    yar4ik141Nov 23, 2025
    CivitAI

    Really new to trying video gen.

    Can someone explain how I run this model?

    I have a 3090 24GB and I don't understand how much VRAM I need. Do I need the high and low models at the same time (if yes, then wtf, 40GB of VRAM?), and I'm trying to use the smooth WF.

    Please explain, I'm so lost right now.

    qekNov 23, 2025

    "Do I need the high and low models at the same time" - We start with High and refine with Low. Maybe you should try using only one; some people have been using only Low. Also, Wan 2.1 and the Wan 2.2 5B don't need a refiner model.
    "if yes" - If High doesn't get unloaded, then yes, both will remain loaded. If it doesn't unload for some reason, then you have to separate them.
    "40GB of VRAM?" - You can use GGUF to save more memory: lower quants of the transformers and umt5.
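The savings from "lower quants" are easy to ballpark, since weight size scales with bits per parameter. A back-of-the-envelope sketch (14B parameters per expert; ignores activations, the text encoder, and container overhead, so treat the numbers as rough):

```python
# Approximate weight size (GB) for an N-billion-parameter model
# at a given precision. Purely illustrative arithmetic.
def weight_gb(params_billion, bits_per_weight):
    return params_billion * bits_per_weight / 8

print(weight_gb(14, 16))  # 28.0 GB (bf16/fp16)
print(weight_gb(14, 8))   # 14.0 GB (fp8 / ~Q8 GGUF)
print(weight_gb(14, 4))   # 7.0 GB  (~Q4 GGUF)
```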

    yar4ik141Nov 23, 2025

    @qek ty for the reply.
    Just to be clear, can I force the unload of one model before the second one gets loaded?
    Since when the second model starts loading, Comfy just stops working.

    qekNov 23, 2025

    @yar4ik141 I think you can play with some arguments related to memory management and lower quants of the model(s). It appears that the backend doesn't want to unload already loaded models unless they're no longer present in the workflow (deleted loader nodes, not just bypassed), or you press Unload Models and Execution Cache to unload everything completely. You can try memory cleaner nodes, but that didn't really help me in this case. You can save a latent made by High, then unload High and load the latent for Low to finish, though you would have to reload the models anyway.

    yar4ik141Nov 23, 2025

    @qek did you find a solution for it, or did you just decide to use a lower model?

    qekNov 24, 2025

    @yar4ik141 It's already stated

    551408Nov 25, 2025

    I also have a 3090, and I use the high- and low-noise models together; in practice one runs after the other finishes, so 24GB of VRAM runs it completely fine. The 40GB figure means your 3090 has 24GB of VRAM plus 16GB of shared memory. You don't need to worry about that, because your GPU won't use those 16GB of shared memory - shared memory is far slower than VRAM, and using it would drastically slow things down.

    haxoNov 29, 2025

    More frames/higher resolution means more VRAM; that's it. How many frames and what resolution did you render?

    charlenebelmontDec 2, 2025

    On a working workflow, one model loads at a time, so about 21GB of VRAM is needed in total (for the models and the text encoder), and when the VAE runs it should be OK. Using both High and Low is recommended, but don't worry, as only one works at a time if the workflow is set up that way. Resolution-wise, don't go higher than 480x720 (or vice versa), and if running at 16 FPS keep the video length at 5 seconds; increasing these will require more VRAM. Use their official workflow for that model to test and tweak. Once you learn more and want to branch out, then you can start experimenting.

    551408Dec 28, 2025

    @haxo A total of 113 frames, at a resolution of 1536x1024. Higher resolutions may overflow VRAM or run slowly on shared memory.

    _RUST_Nov 23, 2025· 1 reaction
    CivitAI

    Unfortunately, my LoRA character's body type changes a lot, even though I write that she has a slim figure and small breasts, and her face changes slightly. Is there a way to fix this?

    vrilismDec 1, 2025

    Only use the high model...

    TwistedJimmyNov 24, 2025
    CivitAI

    Thank you for your workflow. Is there any workflow with a starting image, or an option to add a starting-image node to this workflow?

    Yc3kNov 24, 2025

    Just use the i2v workflow.

    delta45424155Nov 24, 2025· 1 reaction
    CivitAI

    When writing a prompt, can I use (small breasts:1.5) to alter the weight?

    CernerNov 24, 2025
    CivitAI

    If I am running the standard i2v template that's in ComfyUI, using the lightx2v 4-step LoRAs and painteri2v, what is the bare minimum I need to change to make this model work?

    qekNov 28, 2025

    Without the lightx2v 4-step

    SeikkailijapoikaNov 27, 2025· 2 reactions
    CivitAI

    I can't for the life of me generate women with smaller busts. I'm not sure if I'm doing something wrong, but whatever I prompt, everyone has some pretty impressive tits.

    charlenebelmontDec 2, 2025

    Are you using any LoRAs? And did you prompt for a smaller bust? To help, we need more information, so we can try to see what the culprit is.

    roguewolfDec 2, 2025

    bigger the better :3

    SeikkailijapoikaDec 2, 2025

    @charlenebelmont Just the lightning step Lora. Here's the prompt that produced quite the knockers:
    SmoothMixAnime. A young woman wearing a skin-tight gymnast outfit spreading her legs seductively. She is seducing the viewer. She has small breasts and a flat chest. She has wide hip bones and a toned stomach. she is impossibly beautiful and cute and innocent. she is petite. Blonde hair and blue eyes. Refined and mature expression.

    charlenebelmontDec 4, 2025

    @Seikkailijapoika from a lot of my testing with this model, it gets confused by conflicting words sometimes. My suggestion is to use descriptions that won't overlap or contradict each other, and not too much detail in one area. For example: try using "her breasts are small and flat". When you give it too much detail, this model (for me at least) gets confused and breaks sometimes. What I also find works is describing the woman's features before any clothing details; this way you can fine-tune the look you want, and when you start adding clothes it won't mess with the prompt for her features and description.

    So my prompt would be like:
    (1) her look and features (physical appearance)
    (2) clothing

    (3) then the prompt for the action and/or camera and setting, or what you want the video to be about.

    This way you can, say, change her look without messing up the other stuff, or change her action or clothes and it won't interfere with what you want to do. And it's a lot more organized :)
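That three-section ordering can be kept mechanical with a trivial template. A sketch (the helper and the example text are made up for illustration):

```python
# Assemble a prompt in the suggested order: physical appearance
# first, then clothing, then action/camera/setting. Each section
# is normalized to end with a single period.
def build_prompt(appearance, clothing, action):
    return " ".join(part.strip().rstrip(".") + "." for part in (appearance, clothing, action))

prompt = build_prompt(
    "A young petite woman, her breasts are small and flat, blonde hair, blue eyes",
    "wearing a skin-tight gymnast outfit",
    "she stretches on a mat, static camera",
)
print(prompt)
```

Swapping out one argument changes one aspect of the video without disturbing the rest, which is the point of the ordering above.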

    leepeter1231Nov 28, 2025
    CivitAI

    Hi, I have a question. If I generate an image of a girl using a LoRA in WebUI Forge, then use Wan 2.2 to generate video, will the art style of the girl's face change?

    qekNov 28, 2025

    Nope

    leepeter1231Nov 29, 2025
    CivitAI

    May I know how I can add character lora (e.g. https://civitai.com/models/2175882) in the workflow ?

    qekNov 30, 2025

    Add the Load Lora node yourself

    leepeter1231Nov 30, 2025

    @qek if I add the load lora node, where should the node connect to and where should the trigger word be typed? Should it be typed in the Positive prompt node in the workflow?

    qekNov 30, 2025

    @leepeter1231 Their workflow is a mess. Should at least connect to "model". The trigger word(s) may help

    civitaisks777Dec 5, 2025

    @leepeter1231 yes, add triggers to positive prompt

    EshinioDec 2, 2025
    CivitAI

    Can I take these models and simply add them into the Wan workflow I have been using so far, or do I have to download a specific workflow for them to work?

    SeoulSeekerDec 3, 2025

    They can just be added to your workflow. No lightning loras are needed.

    singsuncolor907Dec 2, 2025
    CivitAI

    It's just that downloading takes way too long; I don't know why. Other models don't have this problem.

    sea5216Dec 3, 2025
    CivitAI

    Your T2V version is the best one I've used; thumbs up!

    billbDec 3, 2025
    CivitAI

    I tried using the T2V model with a VACE workflow, but it doesn't seem to work. Is there any way to get this to work with VACE so we can use multiple frame inputs?

    qekDec 3, 2025

    It isn't VACE

    FgonaxDec 3, 2025
    CivitAI

    Could you post the quantized I2V GGUF here? I can't for the life of me make Q6 work.

    FgonaxDec 4, 2025

    @qek That's what I meant. I've already found them and I'm trying Q6 and Q8. There are no errors during generation, but the videos come out patchy, with exaggerated movements, and bad overall. I tried changing the lightx2v LoRAs, the WAN 2.2 models that come with the workflows, the rank 32 one mentioned in the description, rank 16, rank 64, and disabling all LoRAs, including the speed-up ones. I changed the CLIP for a quantized one, then for the WAN 2.2 one, then again for an xxl bf16, an xxl fp8, and the one on the ComfyUI WAN 2.2 GitHub page. I'm trying so many options and every generation gets worse than the last one. I thought that if the models were posted on CivitAI, some people would generate with them and I could see how they used the quantized models and figure out what I am doing wrong.

    qekDec 4, 2025· 2 reactions

    @Fgonax It's a simple merge with porn loras, the users should have downloaded the base models and DigitalPastel's loras instead. I do not recommend this SmoothMix merge, it isn't the first comment about output videos with strange and/or unwanted movements

    sixpt55Dec 3, 2025· 2 reactions
    CivitAI

    I absolutely love this model and use it all the time for nsfw things, but when I try to do anything sfw the characters comically bounce/thrust their bodies rhythmically, unprompted (even with descriptors of those actions in negative prompt). Anybody else experience this? I haven't changed shift or cfg from 8 or 1 respectively... could these be the culprit? dpmpp_2m sampler and sgm_uniform scheduler, too.

    qekDec 3, 2025

    It's a simple merge with porn loras, do not use it

    This happens to me too, and it drives me nuts. Even when I am trying to get it to do other NSFW things where the woman's hips happen to be in view, she won't stop gyrating or bouncing.

    AisamplewalrusDec 4, 2025
    CivitAI

    is 32GB of Ram enough for this model and workflow?

    qekDec 4, 2025

    Of course

    AisamplewalrusDec 4, 2025

    @qek I thought that too, but it's now randomly taking up 100 percent of my RAM for a single render, using the Smooth workflow and not changing a thing. Any tips that could help?

    qekDec 4, 2025

    @Aisamplewalrus I don't know the workflow. Make sure you run the models in fp8 and use a lightweight VAE from lightx2v

    AisamplewalrusDec 4, 2025

    @qek Oh I’m using the smooth workflow that this model recommends. Sorry noob question but how do I run the models in fp8? I’ve just been using the smoothmix high and low from here and their smooth workflow.

    civitaisks777Dec 5, 2025

    it runs on a 3060 12gb just fine

    DBrepairDec 6, 2025

    I2V. Mostly no. At high resolution 1024 and a 5-second clip, I use up 60+ gigabytes of memory during upscaling.

    qekDec 6, 2025

    @DBrepair Wow, I need less

    viniandrieu845Dec 4, 2025
    CivitAI

    Can I use a realistic photo for the first frame? I don't know if my RTX 5060 Ti 16GB is capable of running this model...

    qekDec 4, 2025

    Yes

    Setian91Dec 5, 2025

    Yes, and if you add a CLIP Vision to your workflow you don't even have to describe the image as being realistic.

    viniandrieu845Dec 8, 2025

    @Setian91 Thanks a lot, where can I find a simple workflow for i2v with Wan 2.2 smooth ... ?

    Setian91Dec 12, 2025

    I kind of like DaSiWa's workflow since it doesn't require Triton installs:
    https://civitai.com/models/1823089/dasiwa-wan22-workflows

    You need to attach clip vision yourself though, I don't think the newest version has that.

    I haven't tried the newest yet, but it's fairly simple to use. Smooth's workflow is also good, but I keep getting errors after the 4th video (maybe because I disabled the Triton nodes... but whatever).

    Setian91Dec 4, 2025· 1 reaction
    CivitAI

    This is driving me nuts and I don't know how to avoid this from happening with prompting, but every time HIS penis is in front of HER vagina she gets balls...

    With and without lora's, I tried many seeds too, but she's always turning into some trans girl...

    Wtf xD

    Any tips?

    qekDec 5, 2025

    Futa transformation

    Setian91Dec 5, 2025

    Okay, I've finally managed a janky solution using the pull-out cumshot LoRA (0.5 weight) along with prompting "Woman vagina is behind man penis".

    It probably has to do with all the futa training, which I'm not against, but if it's the only result even for females having straight sex, it can be frustrating...

    KinglinkDec 5, 2025· 1 reaction
    CivitAI

    Is there anything blocking "animation"? I can do "South Park", I can do "Harley Quinn and Batman", but I can't easily do any animated version of "Harley Quinn and Batman". The second I drop either of those names, I get ultra-realism in my final video no matter what I do. Maybe I'm missing something. I'm using the stock Smooth Workflow txt2video workflow, but overall I'm not getting anime, studio cartoon, comics, or anything else with those two.

    necrophagism777Dec 6, 2025
    CivitAI

    The best dynamic one I have tried.

    yoyo12333Dec 6, 2025
    CivitAI

    Doing I2V, the faces/eyes are often blurry/weird. Does anyone know how to fix this?

    SupremeWGDec 7, 2025

    I have the same problem, especially when the person is at a distance. I understand that a face detailer is needed, but I don't quite understand how to integrate it into the author's workflow

    maxtusrordey512Dec 6, 2025· 1 reaction
    CivitAI

    I know this has NSFW added into it, but could I ask why vaginas and anuses look so bad when I generate them from an I2V image? It limits what I can do with any character. Any good compatible genital LoRAs that can improve this?

    McGreenDec 7, 2025
    CivitAI

    This model is without question fantastic! Thanks!

    qekDec 7, 2025

    No, it's just the pretrain from Wan AI + their 3 porn loras, nothing fantastic at all

    solxrac781Dec 8, 2025
    CivitAI

    Can a RTX 5070 12GB run this model?

    sea5216Dec 8, 2025· 1 reaction

    Yes, it can.

    jonk999Dec 8, 2025

    I run the GGUF versions on a 3060 12GB. Think I had some issues with these ones...

    poisas69220Dec 8, 2025

    I'm so glad I recently bought the 5070 Ti version with GDDR7 and 16GB :D it runs so fine

    qekDec 8, 2025

    The icon: no b1tches?

    chenmaini0721244Dec 8, 2025
    CivitAI

    I am a beginner in AIGC. Could anyone tell me where to download Low V2.0? There is only High in the file link.

    viniandrieu845Dec 8, 2025
    CivitAI

    Hi, is this model only for anime videos?

    qekDec 8, 2025· 1 reaction

    Do not use. It's just base models with loras from https://civitai.com/models/2040641

    pekopeko2Dec 9, 2025
    CivitAI

    Someone make a quantized I2V one; my setup keeps crashing.

    pekopeko2Dec 9, 2025

    @qek thanks mate

    GitarooManDec 9, 2025
    CivitAI

    When I try to get the I2V one I get smoothMixWan22I2VT2V_i2vHigh.safetensors, and when I get the T2V one I get smoothMixWan22I2VT2V_t2vHighV20.safetensors.

    These are really, really hard to tell apart; I'm not sure if I'm reporting a bug or asking a question.

    FreddinDec 9, 2025

    One says i2v, the other says t2v, right after the underscore.

    Vasto525iDec 10, 2025
    CivitAI

    I can't use any other model after using yours. Will we be getting a V2 for i2v? then I can wait for Danny to make a GGUF out of those as well :3

    bebicat946364Dec 10, 2025
    CivitAI

    I'm tired of watching people suffer; it works on 8GB of VRAM @wan_2_2

    qekDec 10, 2025

    They will continue posting such comments

    mnlapnDec 10, 2025
    CivitAI

    great model, quick gen, but one question : why does the face change so much (I2V) ??

    qekDec 10, 2025· 1 reaction

    What's your sampler and scheduler?

    mnlapnDec 10, 2025

    @qek Euler and Simple !

    qekDec 12, 2025

    @mnlapn My combo: Euler A + Beta

    GsssyikDec 20, 2025

    It completely changes the face on every setting. Useless for I2V.

    AginoDec 10, 2025
    CivitAI

    What makes the character look 3D rather than like a normal human? A lot of my prompts make the character look 3D.

    BobbyarttDec 29, 2025

    Same here. For me, T2V gives a "Pixar / Disney style".

    bleachigo786Dec 10, 2025
    CivitAI

    I'm very new to video generation and have just got down some basics. I believe the models won't work with my 8GB card (a 4070). Is there a GGUF version of SmoothMix for I2V? Or is it possible to run large models on 8GB without getting an OOM error?

    genetik73Dec 16, 2025

    My workflow runs on a 3070 8GB with the GGUF version (Q4_K_M):
    https://civitai.com/models/2083588?modelVersionId=2357653

    vrilismDec 10, 2025
    CivitAI

    Anyone know which lightning lora should I use for wan2.2 t2v smooth high noise model?

    moomtong269Dec 20, 2025

    I'm not sure, but in my understanding a lightning LoRA makes generation faster by reducing the steps needed.

    As this model can already achieve good quality in 4 steps, it seems unnecessary to add an extra lightning LoRA to it.

    RezhaScarletDec 11, 2025
    CivitAI

    Is there an I2V GGUF version?

    radiantraptorDec 12, 2025
    CivitAI

    Hello, according to the filename "smoothMixWan22I2VT2V_t2vHighV20.safetensors", the model I downloaded from the T2V page should be capable of both T2V and I2V, but in the I2V workflow the image seems to be ignored. So is this model really correct for I2V, or do I have to use a different one?

    ThatSoKittenDec 13, 2025

    I have the same issue; it's ignoring the image and just making a video from the prompt.

    radiantraptorDec 14, 2025

    @ThatSoKitten
    I figured it out; it really was the models. I downloaded the "I2V High" and "I2V Low" versions and now it is working.

    crafted101Dec 14, 2025· 3 reactions
    CivitAI

    I'm using this for I2V and it crashes when it gets to the low-noise model. Any tips to keep it from crashing? I have 12GB of VRAM, offloading onto 32GB of system RAM.

    ThatSoKittenDec 16, 2025

    If you're using ComfyUI, try launching with --cache-none and --reserve-vram 0.9.

    razor3208Dec 16, 2025

    @ThatSoKitten I tried that, still didn't work. It was working fine weeks ago, but for some reason Comfy crashes when loading the low-noise model. I have 16 gigs of VRAM, and all other models load just fine; something is seriously wrong with this low-noise model. I'm still unable to run it. I hope they fix this issue in the V2 version and release it soon!

    qekDec 16, 2025

    @razor3208 use --normalvram or --lowvram

    crafted101Dec 17, 2025

    @qek Yeah, running low VRAM, offloading onto my system RAM.

    razor3208Dec 22, 2025

    @images101 Does it work when set to low-VRAM mode? Because mine is set to normal.

    crafted101Dec 22, 2025

    @razor3208 For me, SmoothMix does not work; that's why I am here asking for help.

    Eeve_La_FaeDec 26, 2025

    I'm also having this problem.

    DarkEngine2024Dec 15, 2025
    CivitAI

    Any method to reduce or eliminate humping in the output? Even something like general hugging from behind, it insists the male is banging in so many images. Is it unavoidable due to how the checkpoint's been trained?

    qekDec 16, 2025· 1 reaction

    It's a simple merge with 3 porn loras, I do not recommend

    ToxicBotDec 16, 2025

    The ultimate helpful thing would be for some kind nerd out there to provide images from this checkpoint compared to its counterparts, on each variant, and so on. Out of curiosity, what is the benefit of merging the LoRAs like this? Does it work better than loading the LoRAs like normal? If you're baking in LoRAs, and new or better versions of those LoRAs release, it seems like a bad idea.

    guyuan55555Dec 17, 2025
    CivitAI

    How do I run this model with MoeKSampler? For both high and low noise, a CFG value of 1.0 and 6 steps make the picture look very bad. Shift is 8, sampler euler, scheduler simple.

    guyuan55555Dec 17, 2025

    Well, it's my fault, I used the wrong model and used I2V instead of T2V

    dlfoid23Dec 19, 2025
    CivitAI

    Is it me or does this make blurry videos?

    moomtong269Dec 20, 2025

    The 2 models ('high' and 'low' model pair) are meant to be used together and their results "combined" to get clear videos. Check the docs of wan 2.2 and the example workflow here.

    If you only use one of them alone, you get blurry videos.

    dlfoid23Jan 25, 2026

    @moomtong269 I'm using both. It's not so blurry that it's just blobs; it's just not very sharp.

    CrystalCipherDec 19, 2025· 1 reaction
    CivitAI

    What frame rate does everyone set the videos at?

    moomtong269Dec 20, 2025

    I'm using 16 fps and it looks fine

    nsfwVariantDec 23, 2025· 2 reactions

    For the I2V model most of the time it seems to output at about 20fps. Depending on your prompt it might be as low as 16 or as high as 24, but it's usually 20. I'm guessing the training data was very mixed so it's ended up in the middle between the two.

    GsssyikDec 20, 2025· 6 reactions
    CivitAI

    This completely destroys all likeness to the original image. It has lots of great movement and motion data, but I would not use this for I2V, only T2V.

    cvenggDec 26, 2025

    I agree

    AzulAuthorityDec 21, 2025· 1 reaction
    CivitAI

    Does this have the "XXX Animations" LoRAs baked in too? Or just the regular "Animation" LoRA from https://civitai.com/models/2040641?modelVersionId=2376136

    AzulAuthorityDec 22, 2025

    @qek Yes but I'm asking was the "XXX Animations" release also baked in

    StrangeLiminaDec 24, 2025· 1 reaction
    CivitAI

    This is FP16 right? Is there a scaled FP8 version anywhere?

    StrangeLiminaDec 25, 2025

    @qek Oh, nice perfect thanks.

    PdidiDec 27, 2025· 2 reactions
    CivitAI

    Stupid question, but I'm new to WAN and all this. I only downloaded the high-noise model; do I drop this into the low-noise UNet loader too? The same file?

    csjsssDec 29, 2025· 1 reaction

    Scroll to the top of the webpage and download the "low" file; these are two models.

    bobby2daniels664Dec 29, 2025· 1 reaction

    You need to download two models: one for high noise, one for low noise. Find a basic WAN 2.2 workflow and you'll see what to do.

    DreamShapeDec 27, 2025· 3 reactions
    CivitAI

    Can someone help me? I get mat errors, but I'm doing everything right with the models :( https://imgur.com/a/ZX7nmkJ

    DreamShapeDec 29, 2025

    @qek I got the T2V V2; I'll get the I2V ones and try them. Thanks for the answer!

    DreamShapeJan 6, 2026

    @qek I tried loading the I2V models in the "Load Diffusion Model" nodes, and I still got the mat problem.

    wejaster817Dec 28, 2025
    CivitAI

    Is I2V no longer working?

    obsidiancloudDec 29, 2025
    CivitAI

    This model creates too many unnecessary movements in the video; it seems to want to animate everything, even when prompted not to.

    boulbi78Dec 29, 2025· 1 reaction

    Had this feeling at first, but I was doing short, SDXL-like prompts. Now that I'm going for longer, storytelling prompts, I manage to get better results.

    wemiteDec 29, 2025· 2 reactions
    CivitAI

    where is I2V broo?

    Devilday666Dec 30, 2025
    CivitAI

    Does SmoothMix have lightx2v built in?

    Devilday666Dec 30, 2025

    @qek So that's a no to SmoothMix having lightning baked in?

    ShroomWombDec 31, 2025· 2 reactions

    @Devilday666 you've probably already worked this out yourself by now, but just in case not, yes it's baked in.

    Devilday666Dec 31, 2025

    @ShroomWomb Hey, thanks for the info. So does this mean I don't need to use a LightX2V LoRA? Would it be unwise to use the LoRA along with a checkpoint that already has it baked in?

    ShroomWombDec 31, 2025

    @Devilday666 Unless you're deliberately looking to generate something that is at hyper speed and glitchy, then no, you probably don't want lightx2v on top of SmoothMix. That being said, the output always makes me laugh.

    burakaltJan 1, 2026· 3 reactions
    CivitAI

    I think this is the ultimate guide for SmoothMix. I got it from there:

    https://civitai.com/models/2260527/wan22-fp8-kj?modelVersionId=2544663&dialog=commentThread&commentId=1060516



    - base_model High [SmoothMix i2v H] -> Lora_whatever 0.65 -> Lightx2v 0.20
    - base_model Low [KJ i2v] or any other low model you like -> Lora_whatever 0.65 -> Lightx2v 0.2 (or 1.0 if you are using an untouched base model)

    SmoothMix and most other base models shared here already have LightX2V merged.
    You can use FP16 (high) or FP8 (low); higher precision is always better, and lower-precision models will work with whatever signal they receive.
    Regardless of the LoRA or base model, if the text encoder is FP8, the prompt is limited. If the resolution is unsupported, the video will break or drift from the prompt no matter what you do.
    WAN 2.2 prefers resolutions under 1 megapixel, which is where it gives the best balance of motion, detail, and stability.
    720×1280 is supported.
    640×960 or 576×896 is where WAN 2.2 feels alive.

    Portrait video sweet spot :

    512×768 -> light, stable, but calmer motion

    576×896 -> better motion without breaking consistency

    640×960 -> magic zone

    Upper safe zone (still good if VRAM allows):

    704×1024

    768×1024

    Beyond 1 MP:

    motion weakens

    model will break
    Tip: It's hard to memorize all those sizes. Whatever size you choose, check that both dimensions are divisible by 8, e.g. 512×848: 512/8 = 64 and 848/8 = 106, both whole numbers.
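    The divisibility tip and the 1-megapixel ceiling can be folded into one quick check. A minimal sketch: the helper name is mine, and the by-8 rule and pixel budget come from the guide above, not from official WAN documentation:

    ```python
    def wan_resolution_ok(width: int, height: int, max_pixels: int = 1_000_000) -> bool:
        """True if both sides are divisible by 8 and the frame stays at or
        under roughly 1 megapixel, per the guide above."""
        divisible = width % 8 == 0 and height % 8 == 0
        under_budget = width * height <= max_pixels
        return divisible and under_budget

    print(wan_resolution_ok(640, 960))    # "magic zone"
    print(wan_resolution_ok(768, 1024))   # upper safe zone
    print(wan_resolution_ok(1088, 1920))  # divisible by 8 but beyond 1 MP
    ```

    Running it on the sizes listed above, the portrait sweet spots pass and anything past 1 MP fails.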

    gceuterpe489508Jan 1, 2026
    CivitAI

    Every time I try to use this specific template, my ComfyUI connection drops. I don't know why this happens. My PC has 48GB of RAM and an RTX 5060 Ti with 16GB of VRAM... Does anyone know anything about this?

    craftogrammerJan 2, 2026

    Check the batch size and frame count, or show us the logs, please.

    clzpetn804Jan 4, 2026

    When I run it with a 4070Ti 12GB it uses 49GB of RAM. I have 80GB and I am incredibly glad I upgraded right after I started making video. You might be OOMing

    craftogrammerJan 4, 2026

    @clzpetn804 Anyone touching this ComfyUI/AI stuff should have at least 48GB of RAM, minimum.

    Sant0zJan 5, 2026

    A month or two ago, I could run this model on a 4080 Super with 32GB of RAM, even at 720p, and other models on CivitAI work fine. Now ComfyUI crashes whenever I use it; sometimes it works once, but then stops entirely. Since updating to ComfyUI v0.7.0, it can't generate videos at all, completely filling the RAM and making the PC unusable. This is the second time this has happened after an update, with a similar issue occurring in v0.6.0.

    gceuterpe489508Feb 12, 2026

    @Sant0z I changed the font and it improved a little, but the RAM filling up issue is something that happens frequently.

    Sant0zFeb 14, 2026

    @gceuterpe489508 Turns out my SSD was nearly full, so Windows auto switched the page file to an HDD, causing the whole PC to freeze whenever I ran a high/low model. I switched it back to my SSD, and now everything works again.

    gceuterpe489508Feb 27, 2026

    @Sant0z In my case, my SSD is small, 500GB (specifically 468GB, Windows uses almost 40GB, which is absurd. I really wanted to switch to Linux, but I have things I need on Windows). I'll try cleaning up the SSD a bit and come back to report if it improved. I don't use ComfyUI on the HDD; I tried it once and it's terrible!

    gceuterpe489508Mar 3, 2026· 1 reaction

    @Sant0z I cleaned it up, managed to free up about 50GB, the model worked perfectly, so the problem was there. For anyone who is going to use it, I recommend leaving at least 50-80GB free. 50GB is already good, working with a margin of 10-15GB left on the SSD, but it's always good to leave more GB free on the SSD, because SSDs tend to malfunction with little free space.

    RL1775Jan 1, 2026· 1 reaction
    CivitAI

    Any chance whatsoever of getting a version of Smoothmix with VACE either baked in, or compatible with the fp8_scaled VACE modules?

    RL1775Jan 4, 2026

    Given no reply, I'm gonna guess "probably not". I might have a go at creating one myself then.

    fronyaxJan 2, 2026
    CivitAI

    Is this only a T2V model?

    nikolatesla20145Jan 2, 2026
    CivitAI

    Finally, no more slow-motion videos like with regular WAN and the lightx2v LoRA (it sucks, always gives slow motion). This SmoothMix model gives good motion while only needing a few steps.

    GRIJAYJan 2, 2026
    CivitAI

    So no lightx needed? Just 2 steps for each sampler?

    GrimmsterJan 7, 2026

    @qek Why does their workflow have 6 steps then (3 per sampler)?

    wormtail59Jan 2, 2026· 1 reaction
    CivitAI

    Could you release a version without the lightx speed-up LoRA baked in? For those of us with higher-end hardware, the option to add the LoRA or not is nice: when we find a seed we like, we can generate it again at higher quality.

    qekJan 2, 2026

    No, just use Base Wan with their loras, the model is nothing

    dataandmindJan 3, 2026
    CivitAI

    Great model. The only drawback is that consistency is lost when a non-slim character image is given as the input image: it makes the character very slim in almost all cases. Only if the person faces the camera directly are the body proportions kept as-is.

    clzpetn804Jan 4, 2026
    CivitAI

    It seems this model has some WAN 2.1 LoRA baked in, because it has a number of the most annoying WAN 2.1 artifacts and issues that WAN 2.2 fixed. Is there a chance of getting a WAN 2.2-only version in the future? Is the lightx2v or speed LoRA that's baked in a WAN 2.1 version?

    qekJan 4, 2026

    "Is the lightx2v or speed LoRA baked in a wan 2.1 version?" Yes :/

    billysaltzman625Jan 4, 2026· 11 reactions
    CivitAI

    This model is way too trained on porn. Normal videos are almost impossible. Tears look like semen. Women will start making sensual facial expressions in scenes that are not even sexual.

    Setian91Jan 5, 2026· 1 reaction

    Use more steps and be very specific

    I've done many non-sexual scenes and that pretty much solved it; increasing shift to 8+ works as well.

    billysaltzman625Jan 4, 2026
    CivitAI

    Also, why does the i2v always lose every character's face?

    Yc3kJan 5, 2026

    Lose in what way? You mean the faces are not consistent or?

    billysaltzman625Jan 5, 2026

    @Yc3k yeah like you'll have a character in the initial image for i2v (not even any other character in the image), and halfway through the character's face turns into someone else.

    castielaensland189Jan 5, 2026· 1 reaction

    @billysaltzman625 Use 8 steps total (4+4), set ModelSamplingSD3 to 9 on both, use Euler (not the ancestral one) and Simple, use 480x832, DON'T use any LoRA, and try.
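    That recipe written out as a plain settings sketch; the key names are mine for readability, not actual ComfyUI node parameters:

    ```python
    # Face-consistency recipe from the comment above, as a plain dict.
    i2v_face_recipe = {
        "high_noise_steps": 4,
        "low_noise_steps": 4,           # 8 steps total across both samplers
        "model_sampling_sd3_shift": 9,  # set on both the high and low models
        "sampler": "euler",             # plain Euler, not euler_ancestral
        "scheduler": "simple",
        "width": 480,
        "height": 832,
        "loras": [],                    # no LoRAs while testing
    }

    total_steps = i2v_face_recipe["high_noise_steps"] + i2v_face_recipe["low_noise_steps"]
    print(total_steps)  # 8
    ```

    Writing it down this way makes it easy to diff against your own KSampler settings when faces drift.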

    Setian91Jan 5, 2026

    Using higher steps usually fixes the issue; I use 8 steps, sometimes even 12, when the desired animation is stubborn.

    The only issue I face is when a woman's hands are not visible in the first image: the rendered hands look like creepy old witch-hooker hands lmao. It looks very weird on young adults, especially Asian ones (sometimes the fingers are even longer than the male's chest). Also, the feet turn into manly boats instead of feminine feet.

    clzpetn804Jan 5, 2026· 1 reaction

    @Setian91 When you add more steps to a checkpoint designed around a low step count to "fix" something, you are basically telling the model to keep changing things. It is designed to be "finished" in about 4 steps, and you are telling it "no, keep going". So it looks around and says "well, OK, what are the things in this video I have a crapload of dataset info about, that I can pile details or refinement into, because apparently I have to do SOMETHING but I don't know what". So it over-refines the hands and feet, changes faces, and tries to reopen its decisions about poses, animation, etc. Then, when the steps you specified finish, you get decisions that it has reopened and re-refined about 4 times. Most often they are either way overtuned and over-refined, like the veiny bloated hooker-witch hands, or you get them in an in-between state where they are mid-renegotiation, and then you get alien hands or taffy-rubbery hands, etc.

    clzpetn804Jan 5, 2026· 1 reaction

    Ultimately this is a result of the checkpoint being absolutely blasted by the overpowering speed LoRA baked in at an insanely high strength, and the general attitude that if something is breaking or isn't what you want in video generation, then screaming at the model harder and adding steps is the only answer (spoiler alert: it's not). Essentially, the checkpoint is built in such a way that its first order of business is to DECIDE NOW AND DO IT FAST. That's how speed LoRAs do the speed thing: they pick the most generic, most highly represented possibility for the video's outcome from their dataset, make the lowest-possible-risk interpretation of the prompt (including sometimes just ignoring your prompt or elements of it and doing what it wants based on the lowest-risk, lowest-cost outcomes), and then hand that over as fast as possible to the low-noise model for detailing. If the model has already decided on identity, pose, etc., but the workflow keeps telling it to go on, whether through prompt pressure, too many steps, or too high a speed-LoRA strength, then it reopens its decisions and starts reinterpreting things like faces. This is how you get changing faces. There are a lot of reasons this could happen, but it largely boils down to the model not being allowed to decide and then just chill.

    RL1775Jan 8, 2026

    @clzpetn804 sometimes you really need those extra steps though. Ever tried to get a female character to take her top off with long sleeves?

    Lucky_Plane_5587673Jan 5, 2026
    CivitAI

    What should I do if the last frame is distorted?

    Setian91Jan 5, 2026

    Shorten the generation?

    clzpetn804Jan 5, 2026

    If it's literally just the last frame and everything else is fine, then use a node to trim the last frame, or do it manually.

    Setian91Jan 7, 2026

    @clzpetn804 I mean do it again, but with 5 seconds instead of 6; that worked for me.

    haxoJan 8, 2026

    FirstLastFrame methods? As far as I know there is no cure for it. Just cut the last few frames.

    mygenaiessentials138Jan 7, 2026
    CivitAI

    Mate, can you do an I2V fp16 merge with the loras please T_T

    qekJan 8, 2026

    No, just use pretrained Wan with their porno loras

    KDwright42Jan 7, 2026
    CivitAI

    Need help. The photo I upload changes to a different person; the prompts are followed, but the photo is not.

    GeekHermitJan 8, 2026

    When I had this occur it was because I was loading the t2v instead of i2v models in my workflow 🤦 - so double check if that’s the case for you

    oldthrashbarJan 8, 2026· 2 reactions

    I've also seen this happen when using a greyscale or black-and-white image, or just anything the AI can't figure out. Actually, a poor man's way to turn I2V into T2V is to just put a blank image in and run it with a T2V prompt lol @GeekHermit

    GeekHermitJan 8, 2026

    @oldthrashbar Makes sense- never considered/tried that before, great tip thank you!!

    KDwright42Jan 10, 2026

    @oldthrashbar yes that is it

    RL1775Jan 8, 2026· 1 reaction
    CivitAI

    For anyone else struggling with VACE here in ComfyUI, I finally got it to work. For starters, you need to be using the T2V models (I2V and VACE do not play well together).

    You should already have KJNodes installed, if not go ahead and do so. Then head over to:

    https://huggingface.co/Kijai/WanVideo_comfy/tree/main/Fun/VACE

    and/or

    https://huggingface.co/Kijai/WanVideo_comfy_fp8_scaled/tree/main/VACE

    Download the VACE modules and put them into your models/diffusion_models folder. The changes needed to make VACE function with smoothmix require you to use the Diffusion Model Loader KJ and Diffusion Model Selector nodes instead of the standard Load Diffusion Model (unless you're using WanVideoWrapper, in which case you'll need to use the WanVideo VACE Module Select node). Just load the VACE modules through the model selector node, connect it to the model loader KJ node as extra_state_dict, then connect that model to your workflow as normal.

    Cheers

    garrett48Jan 8, 2026
    CivitAI

    why do I get a 'NoneType' object has no attribute 'clone' for ModelSamplingSD3? I haven't changed anything.

    vamorandJan 8, 2026· 14 reactions
    CivitAI

    Bro, if there would be Smooth mix LTX-2
    I'd fucking marry you

    simonishereJan 9, 2026· 5 reactions
    CivitAI

    THE KING IS BACK

    Checkpoint
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    15,562
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/18/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    smoothMixWan2214BI2V_t2vLowV20.safetensors
