CivArchive

    SMOOTHMIX WAN 2.2 T2V v3.0 UPDATE! - 03/14/2026

    Just tweaked the effects of the prompts "smoothmixanime" and "smoothmixrealism", plus realism in general.

    • All videos on the High and Low Showcases were made using "WAN 2.2 Smooth Workflow v4.0" with these settings: 900x600 resolution, 8 steps, Euler sampler, simple scheduler.

    • Just like T2V v2.0, it has lightx2v baked in.

    • The effects of the prompts "smoothmixanime" and "smoothmixrealism" were a little too strong - now you need to complement them with more prompts related to the visual style to get the effect. Adding "Realistic Style" or "Anime Style" to the prompt should be enough. ^^

    • By popular demand (lol) you can make more normal-sized breasts now - no flat chests though, sorry flat chest lovers.

    • More skin detail if you go for more realistic-style videos - as long as you don't use the "smoothmixrealism" prompt. In that case the skin will automatically be very smooth.

    • Added some Abstract concepts to it! It adds more variety and colors to the results.
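For reference, the showcase settings listed above can be captured as a small config sketch (a Python dict mirroring the bullet points; the key names are illustrative, not fields from any actual workflow file):

```python
# Showcase settings for SmoothMix WAN 2.2 T2V v3.0, as listed in the update notes.
# Key names are illustrative, not actual workflow-file fields.
showcase_v3 = {
    "workflow": "WAN 2.2 Smooth Workflow v4.0",
    "resolution": (900, 600),
    "steps": 8,
    "sampler": "euler",
    "scheduler": "simple",
    # Style prompts now need reinforcement, e.g.:
    "style_prompt": "smoothmixrealism, Realistic Style",
}

width, height = showcase_v3["resolution"]
print(width * height)  # 540000 pixels per frame
```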

    GGUF MODELS For I2V v2.0 and T2V v2.0!

    Great news for those who need GGUF versions!

    The user @BigDannyPt managed to convert SmoothMix WAN 2.2 Img2Vid v2.0 and SmoothMix WAN 2.2 Txt2Vid v2.0!!

    Be sure to thank him for his efforts! =D

    GGUF - SmoothMix WAN 2.2 Img2Vid v2.0

    GGUF - SmoothMix WAN 2.2 Txt2Vid v2.0

    SMOOTHMIX WAN 2.2 I2V v2.0 UPDATE!

    For more info about the update and differences between versions check out this article.

    • All videos on the High and Low Showcases were made using "WAN 2.2 S. Workflow v2.0" with default settings except the resolution - they all used 900x600 on the workflow.

    • The Lightx2v LoRA is NOT merged this time, so pick whichever LoRA you prefer to accelerate generation, and choose the weight yourself - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" at weight 3.0 on High and 1.5 on Low.

    • To render futanari characters and male figures correctly, LoRAs remain essential. Try using mine or any of your favorites.

    • Be aware that hyper-realistic content may suffer some morphing, since the model gravitates towards the style of "SmoothMix Animations". That effect can be mitigated a bit by using LoRAs trained only on realistic content and by using prompts that push towards realism.
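Since the Lightx2v LoRA is external this time, the showcase acceleration setup can be sketched as a small config (the structure is illustrative; only the LoRA filename and the two weights come from the note above):

```python
# I2V v2.0 showcase acceleration setup, per the update notes.
# Structure is illustrative; only the filename and weights are from the post.
accel_lora = "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16"

lora_weights = {
    "high": {accel_lora: 3.0},  # high-noise model pass
    "low":  {accel_lora: 1.5},  # low-noise model pass
}

# The same LoRA is loaded twice at different strengths, one per model pass.
for phase, loras in lora_weights.items():
    for name, w in loras.items():
        print(f"{phase}: {name} @ {w}")
```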

    Sorry for the irregular posts and updates. I’m currently pretty busy and need to reorganize a lot of things, so free time has been scarce. If everything goes smoothly, I expect to have considerably more free time starting in February. Yay ^^

    Have fun!

    SMOOTHMIX WAN 2.2 T2V v2.0 UPDATE!

    ITS FINALLY DONE! T_T

    SmoothMix WAN 2.2 Txt2Vid v2.0 is what the model should have been in the first place - now it can show what can really be done!

    • Merged with LoRAs trained only on images and videos generated from the SmoothMix Checkpoints!

    • Very high-quality images and smooth animations! Use it with the updated version of the Smooth txt2vid Workflow if you haven't downloaded it yet!

    • Much - MUCH - more variety in clothes, hairstyles, poses, body types, and skin colors.

    • You can use captions or prompts! Both will work! Use both to ensure what you want is generated!

    • Fox girls, cat girls, demon girls, oni girls - all the girls (and MILFs) are here. ;)

    • It responds to the prompts 'SmoothMixAnime' and 'SmoothMixRealism'! All LoRAs merged into it used those key prompts from the SmoothMix Animations Style, and they have the same effect here! Check the SmoothMix Animations Style page for details!

    • It's completely uncensored, so it should also work MUCH better with NSFW LoRAs. Give it a try. ;)

    • IT CAN'T generate male anatomy reliably! You are going to need Loras for that! SmoothMix's priority is the ladies.

    Smooth Mix Wan 2.2

    A Smooth Mix version of the Wan 2.2 A14B!

    I tried to make it as versatile as I could - I hope you guys like it!

    Every video on the showcase used an image from my Gallery! All of them have a comment with a link to the source image used.

    Key Points:

    • Every video on the showcase was made using my new Wan 2.2 Workflow v2.0/Txt2Video Workflow v2.0 at its default settings. Make sure to use it!

    • No Loras were used to make the videos on the showcase. Try to make a video without using Loras first.

    • When using Loras, start by setting their weight between 0.3~0.5 and increase it if necessary.

    • Steps: 4 or 6

    • CFG: 1

    • Sampler/Scheduler: Euler a/Normal or UniPC/Simple

    • Resolutions:

    Use the resolution that your setup can handle. As a starting point I recommend these:

    • For HighEnd Spec PCs: 560 x 940 - 940 x 560

    • For Mid Spec PCs: 480 x 720 - 720 x 480

    After testing these resolutions adjust as needed.
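The recommendations above amount to picking a starting resolution by hardware tier and adjusting from there; a throwaway helper to illustrate (the tier labels and the portrait/landscape toggle are mine, not the author's):

```python
# Starting resolutions from the post, keyed by hardware tier.
# The "high"/"mid" labels and the landscape toggle are illustrative.
START_RES = {
    "high": [(560, 940), (940, 560)],
    "mid":  [(480, 720), (720, 480)],
}

def starting_resolution(tier: str, landscape: bool = False) -> tuple:
    """Return a recommended (width, height) starting point; adjust after testing."""
    portrait, wide = START_RES[tier]
    return wide if landscape else portrait

print(starting_resolution("mid"))         # (480, 720)
print(starting_resolution("high", True))  # (940, 560)
```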

    Have fun! =)

    Description

    FAQ

    Comments (84)

    oldthrashbarSep 30, 2025· 2 reactions
    CivitAI

    Edit: I'll be damned, this actually works really well. And thats a nice workflow, that hitthatwhitham guy is right!

    Looks neat. Previews definitely got me to take the bait. Looking forward to giving it a test tonight.

    hithatwhitham895Sep 30, 2025· 6 reactions
    CivitAI

    These models, along with the workflow you made, are the best thing on this site. Well done!

    grillmaster320768Sep 30, 2025· 2 reactions
    CivitAI

    Do I need to load both models or only one? Running the high model by itself is giving me videos but the output is bright and grainy with heavy artifacting.

    DigitalPastel
    Author
    Sep 30, 2025

    To generate WAN 2.2 videos you need both the High and Low Checkpoints.

    grillmaster320768Sep 30, 2025

    Is a 53.9 gb pagefile enough to run both?

    grillmaster320768Sep 30, 2025

    42 gb is sufficient for caching both.

    GasparGamesSep 30, 2025

    Only one is loaded at a time; you don't need memory for both simultaneously. Just use the Smooth Workflow.

    Hamer66Sep 30, 2025· 3 reactions
    CivitAI

    First impression is very, very good (both the model and workflow). Even a simple longer-video test (15 seconds) gave no problems.

    Very well done, thanks!

    GasparGamesSep 30, 2025· 4 reactions
    CivitAI

    What black magic did you conjure up with this one? This is the most impressive thing I've ever seen on this site. I'm getting faster generation AND faster motion. This is pure sorcery.

    DigitalPastel
    Author
    Oct 1, 2025· 1 reaction

    Nah. Just a whole lot of me banging my head against the wall until I make something that works. xD

    GasparGamesOct 2, 2025

    @DigitalPastel Well it worked. Amazing job.

    ragnaroookSep 30, 2025· 2 reactions
    CivitAI

    Will this work with 16 GB VRam, or will I need a gguf?

    EDIT: Works with my 16 GB VRAM and others could get it to work with 12 GB.

    ragnaroookSep 30, 2025· 1 reaction

    I got it to work, but added a "purge vram" node after the first KSampler before it ran through. Not entirely sure whether I just got lucky with this or not. Will give it a few more tries.

    Pergy25Oct 1, 2025· 1 reaction

    For me it works with 16GB of VRAM with the WAN 2.2 Workflow v2.0. But I also have 64 GB of RAM. I don't know how much the RAM helps out here.

    ragnaroookOct 1, 2025

    @Pergy25 Was probably the RAM then. I only have 32 GB ram and yeah it's full to the brim.

    fatberg_slimOct 2, 2025· 2 reactions

    I'm using a 4070 with 12 GB VRAM and 32 GB RAM. I don't use the GGUF and it works fine in 480x720.

    ragnaroookOct 2, 2025

    @fatberg_slim In that case the problem was probably just random stops when Chrome ate too much RAM, and my purgeVRAM "patch" is nothing more than superstition 😂.

    clzpetn804Oct 4, 2025· 2 reactions

    works fine on my 12GB 4070Ti

    NyxxiNyxSep 30, 2025· 3 reactions
    CivitAI

    Spunked the 2k for the two models.

    Time to put it to the test...

    Liked the videos you posted yesterday. All looked nice mate. Excited to try a new workflow.

    DigitalPastel
    Author
    Sep 30, 2025

    I'm excited to see what you can make!

    Pergy25Oct 1, 2025· 2 reactions
    CivitAI

    Thank you very much! In combination with your workflow this is the fastest way for me to create videos and the quality is also the best. The 2k early access are really worth it. Very good work!

    AlberistOct 1, 2025· 2 reactions
    CivitAI

    Really enjoying these models. I'm not certain if I get better results at higher steps than with plain WAN yet, but it's definitely a great way to get very solid results at low steps. In my limited testing with a version of the Advanced Smooth I2V Workflow that I modified to stitch together multiple generated videos, the quality degraded massively when attempting to use the final frame as an input for the next video, which is a shame. That might have been user error, though.

    testuser301Oct 2, 2025· 6 reactions
    CivitAI

    How to create a GGUF version of this model?

    EricRollei21Oct 3, 2025
    CivitAI

    Works well, thanks! I think you baked in the lightning LoRA, since you suggest 4 to 6 steps? And what shift do you recommend for high and low?

    DigitalPastel
    Author
    Oct 3, 2025

    I'm using 8 on both with good results. It's the default setting for the Smooth Wan Workflow v2.0.

    nsfwVariantOct 3, 2025· 6 reactions
    CivitAI

    My dude, you have created the best wan 2.2 checkpoint.

    Minor tips for those generating:

    1. Very high quality results using 6 steps split by 2 high and 4 low, but 4 steps 2&2 is also fine

    2. NAG works well, having success around 11 strength (see my posts for a "no talking" negative prompt that helps a lot with anime style characters constantly opening their mouths)

    3. Short prompts seem best; unlike other wan checkpoints, describe each scene element only ONCE and it adheres to it well - repeating yourself seems to confuse it, don't describe the same motion twice

    4. Like all wan checkpoints it actually natively does 24fps, but works best at this if you specify "slow" actions and gen at least 73 frames

    5. Loras should start at 0.3-0.5 as OP suggested, but that's mainly for HIGH; the LOW ones can be more like 0.5-1.0 and may give better results that way
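Tips 1 and 4 above boil down to simple arithmetic: split the total steps between the high and low passes, and pick a frame count of at least 73 for smooth motion. A hedged sketch (the 4n + 1 frame constraint is the usual WAN requirement, not something stated in the comment):

```python
def split_steps(total: int, high: int) -> tuple:
    """Split total steps into (high-noise, low-noise) passes, e.g. 6 -> (2, 4)."""
    return high, total - high

def frame_count(seconds: float, fps: int = 24) -> int:
    """Smallest valid WAN frame count (4n + 1) covering the requested duration."""
    needed = round(seconds * fps)
    n = -(-max(needed - 1, 0) // 4)  # ceiling division
    return 4 * n + 1

print(split_steps(6, 2))  # (2, 4), the 6-step split from tip 1
print(frame_count(3.0))   # 73 frames for ~3 s at 24 fps, matching tip 4
```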

    crombobularOct 4, 2025

    Like all wan checkpoints it actually natively does 24fps

    this just isn't true wtf. why do you think the motion goes slow when you gen at 24fps? jesus.

    nsfwVariantOct 4, 2025

    @crombobular It... doesn't go slow when you do 24fps? If anything it goes too fast (because 24fps means faster playback).

    The actual generation doesn't care what framerate you output the video in, it only cares how many frames you ask for in total. When I say it generates in 24fps natively, I mean the motion and physics are in 24 FPS.

    HOWEVER, most speedup loras are trained on lower FPS, which biases them towards fast movement at lower FPS. Most of the time they still generate small motions that match 24 FPS though, but often with large motions being in fast-forward. You can match it back up by specifying "slow" for the big motions.

    Some of the wan models natively do 16, some do 24. This particular one we're commenting on right now seems to be doing a hybrid, because big motions are fast but small motions are slow. At least, that's what it seems like to me. You can bridge the gap by specifying SLOW large movements, which matches them both up. You can't specify slow small movements, so this is really the only way to do it when they mismatch.

    I've been running a lot of gens with this and it's just what's working for me.
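The framerate point above reduces to this: generation fixes the total frame count, and the output fps only changes how long those frames take to play back. For instance:

```python
def playback_seconds(frames: int, fps: int) -> float:
    """Duration of a clip when a fixed set of frames is played back at a given fps."""
    return frames / fps

# The same 73-frame generation, interpreted at different framerates:
print(playback_seconds(73, 16))  # ~4.56 s at 16 fps
print(playback_seconds(73, 24))  # ~3.04 s at 24 fps: same motion, faster playback
```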

    Ada321Oct 4, 2025· 4 reactions
    CivitAI

    There goes 30k buzz, worth it though, big improvements across the board with this model. I recommend everyone try it.

    CrystalVisageOct 4, 2025· 11 reactions
    CivitAI

    Could someone do GGUFs now that it's left early access?

    Adaptalab0rOct 4, 2025· 2 reactions
    CivitAI

    @those who generated the example videos: how much VRAM did you use? Usually I use quantized GGUF models, and I doubt my 16GB 4070 Ti Super will cut it.

    nsfwVariantOct 4, 2025

    I used 32GB (5090) to make mine, 928x640 resolution, 73 frames, 6 steps, took about ~3 mins to generate and was using about 25GB VRAM at any given moment. So using GGUFs on a 4070 will probably be more like 6-10 mins I'd guess. Probably much longer using the non-GGUFs.

    Adaptalab0rOct 4, 2025

    @nsfwVariant Thanks for answering. Your estimate is close to what I would expect a GGUF to need. But I'm pretty sure safetensors above 15GB in size will take forever... So, nice to know that you really do have a beefy card :-)

    nsfwVariantOct 4, 2025

    @Adaptalab0r I don't, I spun up a VM online to be able to do it. My actual GPU only has 11GB lol

    coolstradOct 4, 2025· 2 reactions

    @Adaptalab0r yep, it took 13 minutes to generate a 6-second video on my 4070 Ti Super. The OP said they will work on the GGUF

    Adaptalab0rOct 4, 2025

    @coolstrad thanks for the heads up. I don't know which resolution you used, but 13 mins doesn't sound too bad :-). Since you posted your comment, I've started downloading. Can't wait for the GGUF version to drop

    coolstradOct 4, 2025

    @Adaptalab0r I really didn't check the resolution; I just downloaded the OP's workflow for this model and used the base settings - I didn't change anything. I used one of the pictures the OP used in the showcase. I mainly did it to check how long it takes. But I'm guessing it was probably 480p, because I don't think my setup could handle all that in 720p. I can check later and let you know.

    Adaptalab0rOct 4, 2025· 1 reaction

    @coolstrad thanks

    clzpetn804Oct 4, 2025

    I'm using a 12GB 4070Ti with this model and it works fine.

    Adaptalab0rOct 4, 2025· 1 reaction

    @coolstrad and @nsfwVariant nice! thanks. In the meantime my poor internet connection managed to finish the download ^^'. AND you are totally right! It works, and woooohooooo, it is beautiful and fast! 720 x 720 x 81 frames = 6 minutes and 27 seconds (including loading the models). I posted a result. Thanks for the motivation to try the safetensors!!!

    DigitalPastel
    Author
    Oct 4, 2025· 12 reactions
    CivitAI

    Both versions are already out of early access!? O.o All right, time for the GGUF versions. Give me some time, as I need to not only convert them but also test them out.

    Ada321Oct 4, 2025

    That, and having the full versions somewhere would be nice, so we could do stuff like quant them for Nunchaku (4-bit inference that makes it about 4x faster with minimal quality loss) when the Wan version comes out. And maybe DC-Gen, if its code is made public, to make it even faster.

    MikushaOct 4, 2025· 1 reaction

    I tried to make some GGUFs but failed; not sure if the model is an fp8-scaled one or why, but it didn't work.
    The error was: 22112: GGML_ASSERT(info->ne[i] > 0)

    sewwlinaOct 11, 2025

    @Mikusha Because GGML doesn't support 5D tensors - it was designed for LLMs
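Mikusha's assert and sewwlina's explanation suggest a quick pre-check before attempting conversion: scan the checkpoint's tensor shapes and flag anything with more than 4 dimensions, since GGML's tensor layout tops out at 4. A sketch over a plain {name: shape} mapping (reading the shapes out of a .safetensors header is left out; the shapes below are made up for illustration):

```python
def find_unconvertible(shapes: dict, max_dims: int = 4) -> list:
    """Return tensor names whose rank exceeds what GGML can represent."""
    return [name for name, shape in shapes.items() if len(shape) > max_dims]

# Made-up shapes, just to show the check; real ones would come from the file header.
example = {
    "blocks.0.attn.qkv.weight": (3072, 1024),
    "patch_embedding.weight": (1536, 16, 1, 2, 2),  # 5-D: would trip GGML_ASSERT
}
print(find_unconvertible(example))  # ['patch_embedding.weight']
```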

    lamentcounterbalanceOct 4, 2025· 7 reactions
    CivitAI

    Will you release a T2V model as well?🙏

    lighthorsexajz830Oct 4, 2025· 2 reactions
    CivitAI

    Good job, but a question: is there a GGUF of this checkpoint?

    grimmygummy1769Oct 4, 2025· 1 reaction
    CivitAI

    Best checkpoint and Workflow :3

    HerasturaOct 4, 2025· 1 reaction
    CivitAI

    Amazing work :)

    coolstradOct 4, 2025· 7 reactions
    CivitAI

    In case y'all are wondering: with the .safetensors file, it took 13 minutes 12 seconds to generate on an RTX 4070 Ti Super (16GB VRAM), 32GB system RAM. The results are AMAZING @DigitalPastel, waiting for the .gguf model my guy, your model is awesome.

    coolstradOct 4, 2025

    More info: I didn't change anything, no LoRAs or anything - it was on the base config. I used the creator's (digitalpastel) workflow

    MarbellousOct 4, 2025
    CivitAI

    Hi, why do my LoRAs have no effect when I add them?

    clzpetn804Oct 4, 2025
    CivitAI

    I tend to go for a more handheld, gritty realism vibe and only do my original videos in 480p, then upscale and interpolate with RIFE. They aren't nearly as heavily conditioned, ControlNet'ed, or polished as a lot of stuff in the gallery, but for reference: I use an 11th-gen i9, 80GB of system RAM, and a 4070Ti with 12GB VRAM. The model runs fine and I average about a minute per second of video, with my typical video completing in 3.5 to 4 minutes.

    There is a lot baked into this model, so if you are used to heavy LoRA usage or loading a speed LoRA: skip the speed LoRA, load your other LoRAs at around 0.2 strength, and expect a very different reaction to standard prompting - this model takes prompts much differently. Also expect motion to be much more chaotic and fast compared to standard Wan models. Expect a period of adjustment if you have a long track record with Wan using a standard model.

    magicballoonOct 4, 2025

    Agreed with all your comments. It affects the cumshot aesthetics LoRA quite heavily, but it can be made to work well if you lower the strength to very low levels.

    Also, have you considered using GIMM-VFI? It's a newer way to interpolate than RIFE and I think it produces better results. All you have to do is install the node for it in ComfyUI. It is a little bit slower, but not by much.
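Interpolators like RIFE or GIMM-VFI just multiply the framerate after generation, so the bookkeeping is simple (a sketch of the arithmetic, not either tool's actual API):

```python
def interpolated_fps(base_fps: int, factor: int) -> int:
    """Output framerate after inserting (factor - 1) frames between each pair."""
    return base_fps * factor

def interpolated_frames(frames: int, factor: int) -> int:
    """Total frames after interpolation: only the gaps between frames multiply."""
    return (frames - 1) * factor + 1

print(interpolated_fps(16, 2))     # 16 fps source -> 32 fps output
print(interpolated_frames(73, 2))  # 73 frames -> 145 frames, same duration
```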

    clzpetn804Oct 4, 2025

    @magicballoon thanks, I'll check GIMM out; haven't really tried other kinds of interpolation

    AnonBlahOct 5, 2025

    Can you give an example of what you mean by a very different reaction to prompting? I'm having good success with it, including niche concepts the q8 GGUF couldn't even begin to understand (realistic Samus x realistic floating Metroid blob). I am using my experimental workflow though, and it had good prompt execution on the q8 GGUF before. I've found that you have to be more exact in what you prompt: for example, with "the woman takes off her t-shirt" (in the examples he posted), she turns her button-up shirt into a normal t-shirt, but when prompted correctly she unbuttons it.

    The most trouble I've had is with how the baked-in NSFW LoRA handles BJs: the woman turns the guy into a Ken doll (prompting "the male's intact penis slides in|out of the woman's mouth|vagina" mostly solves it), or licks the topside on retreat (can't figure this one out).

    Also, I'm having problems with game-screen stuff: speech blocks/bubbles, ending-screen fades, etc. But the quality of the physics on both realistic and animation is greater than fp8-scaled, and it's way smoother

    Speed LoRAs were baked in, I believe (don't know which ones though), which has killed a LoRA I used before that can't activate alongside them.

    Speed-wise, the q8 GGUF is 50% speed (5-6 thrusts), this is (11 thrusts), and the fp8-scaled is 170% speed (17 thrusts) in my tests. You can use the SD3 model shift to adjust some, but it can cause weirdness like looping or visible glitches (I settled at 15, I think, on mine)

    clzpetn804Oct 5, 2025

    @AnonBlah Ah yeah - well, mostly, in my limited experience, it was that you didn't have to be so specific. You didn't have to write a book to get what you wanted, and often it seemed like the semantic neighbors of words formed a much bigger cluster: you could use a much greater variety of words to describe something and it would still know what you were talking about, even if you mixed terms. There wasn't as much differentiation between words, which in a lot of cases is nice because you don't have to worry so much about exact phrasing. In one of the videos I posted, I did a two-video set, one from the standard GGUF and one from this model, and the motion was completely different - much faster, with everything more 'dialed up' - so it's important to know that. The only downside I have seen in my experiments so far is that this 'semantic collapse' of terms and concepts also tends to oversimplify: just about any liquid tries to turn into cum, and a dick anywhere near a mouth tries to turn into a blowjob. It just seems to pull hard towards certain concepts even when they aren't in the prompt, so it's a balancing game and will take experimentation to learn how it all works.

    magicballoonOct 5, 2025

    @clzpetn804 LoRAs help; it's just that some of them work better than others. For example, Huge Titfuck works really well because it was seemingly designed to be used with Lightning anyway, and POV Insertion really helps with penis shape if the entire penis was not in the initial frame

    PetrKOct 4, 2025
    CivitAI

    I apologize for a noob question, but where do I put this? What folder?

    Dumcluck51Oct 4, 2025

    I put it in the Diffusion Models folder

    Dumcluck51Oct 4, 2025
    CivitAI

    Loaded the diffusion models, loaded your workflow, told comfyui where to find everything, disabled the lora nodes, dragged in an image, added my prompt and ran it. Worked first time and excellent results. Not sure how you got a workflow with such big models (20GB each) to work on my 16GB VRAM but well done!

    rivdemon1221554Oct 4, 2025

    There are plenty of other methods; you just need to look. I've been making videos easily with my 4080 Super and 32GB RAM, using models like this (fp8), ever since Wan 2.2 surfaced, with no problems.

    vraimentosefOct 5, 2025

    Is there something special that I'm missing? For me, the first two frames start to animate my picture, then it just changes everything and I get a completely different style - like I input an anime picture and end up with a realistic-style scene

    Narumi_AokiOct 5, 2025· 1 reaction
    CivitAI

    Great work! But why is my generated video a few degrees brighter using the workflow you provided?

    Narumi_AokiOct 5, 2025

    Solved it by switching to UniPC. The best model ever!!!

    TheodorSidOct 5, 2025
    CivitAI

    Does it use the new updated lightning lora ?

    AnonBlahOct 5, 2025
    CivitAI

    I'm having 'good' success with this model on 12GB VRAM and 32GB system RAM. I can generate 5-second 720x720 clips in about 4-5 minutes (use the clear-VRAM node before any model switch (high, low, clip, vision)). I've deleted my q8 GGUF models after trying this. I do have a few problems due to the NSFW and light LoRAs being baked into the model. I'm using a custom experimental workflow with 4 KSamplers doing 8 passes over 7 steps (lol, yup, 8 passes over 7 steps, huh)

    The NSFW LoRA has great physics but has given me grief in how it handles a penis entering|exiting holes. The woman turns the male smooth, lol (prompting "the male's intact penis slides in|out of the woman's mouth|vagina" kinda solves this for me). When retreating during POV BJs, she licks the top of the penis (how do you fix this????)

    I'm having random game elements pop up and can't figure out where from (examples: speech blocks|bubbles, game-over screen fades, fade-outs, random black censor bars on forearms lol, screen brightness rising until it flashes, etc.)

    The stomach bulge LoRAs throw errors and don't work on this checkpoint. I know they had problems working for most people before, but on this checkpoint they actually throw errors.

    Action speed in the video is improved over the q8 GGUF but slower than the scaled fp8.

    q8 gguf(5-6 thrusts) < this (10-11 thrusts) < fp8scaled (17thrusts)

    But the smoothness of motion and physics is way better than both the q8 GGUF and the fp8-scaled.

    The render time is way better than the GGUF (7-9 minutes), so I'm guessing it's based on an fp8 model (3-5 minutes). This is due to not having to convert|decompress, though it's slightly slower than plain fp8, probably due to size and offloading to RAM.

    Visual quality is:

    q8 GGUF is poop (phantom limbs) < this (increased resolution and smooth motion, but problem elements from LoRAs) < fp8-scaled (improved, clear resolution but jumping motion due to the clarity)

    Prompt adherence is great for me, better than the q8 GGUF, and it can do niche concepts the q8 GGUF couldn't even begin to understand. Examples: panning around an object, a rotated camera in a room, controlling non-bipedal creatures (slimes etc.) - maybe needs a bit more control, but they do actions now. Handles realistic very well.

    coolstradOct 5, 2025

    What workflow and LoRAs did you use? And how were you able to generate it in 4-5 minutes? My RTX 4070 Ti Super did it in 13 min 12 seconds. I did not add any LoRAs or anything. If you're okay with it, would you mind sharing your workflow?

    AnonBlahOct 7, 2025

    @coolstrad The workflow I'm using was a standard 3-KSampler 6-step one that I modded to 4 KSamplers (I'm back down to 3, and 3-minute 720 vids) with a modified model pipeline. If you want to try it, I'll upload a copy after I've cleaned it up a bit.

    It sounds like you're offloading to the page file, given your diffusion times; if so, there is no workflow that will help. You can check by opening Task Manager and watching whether your hard drive has a bunch of activity while you diffuse. You can try closing background tasks to free VRAM space

    coolstradOct 7, 2025

    @AnonBlah Can you share the workflow? Yes it is using page file memory.

    AnonBlahOct 7, 2025

    @coolstrad  If i don't share the workflow today I won't be able to get to it for a few days super busy.

    I am going to restate that if you are already offloading to the pagefile, my workflow will not help you, tbh. You need to lessen background tasks

    Walter_BOct 5, 2025· 2 reactions
    CivitAI

    Used your workflow and worked like a charm. Even on my aging 3080 10gb.

    oldthrashbarOct 5, 2025· 2 reactions
    CivitAI

    Edit: LMAO, I don't know what's going on, but it's like between sessions this checkpoint works amazingly. I'm back to using it again. I just can't make up my mind.

    Really liked this model at first, but after fighting with the prompts for quite a few hours I've opted to just go back to a base model with LoRA stacks

    DigitalPastel
    Author
    Oct 6, 2025· 6 reactions
    CivitAI

    Here is a sneak peek for SmoothMix txt2vid Wan!

    Worked a bit on the Txt2Vid Version this weekend.

    Finally getting some consistent good results but still needs work.

    When it's done I will be posting a new workflow I made for it as well. ^^

    MikushaOct 6, 2025· 1 reaction

    looking nice, meanwhile, can we have i2v gguf's plz 🙏

    WoogyOct 6, 2025· 4 reactions
    CivitAI

    Wow... perfect.

    Running here on an RTX 4080 with 16GB VRAM.

    Set the length to 241 frames = 15 seconds. Render time approx. 7-8 minutes for 15 seconds in 960x1440px – awesome!

    arcad3147Oct 8, 2025

    I also got an RTX 4080 but cannot run this setting without running out of memory

    WoogyOct 9, 2025

    @supertype3 At first, I had ComfyUI via Stability Matrix - that didn't work properly either. Then I reinstalled ComfyUI locally, and everything worked.

    Give it a try.

    MarbellousOct 13, 2025

    Hi, where can I change the resolution to 960x1440px?

    5848052Oct 6, 2025· 2 reactions
    CivitAI

    Simply the BEST!!! And so FAST!!! 👍

    oldthrashbarOct 6, 2025
    CivitAI

    Anyone have advice for cumshots with this checkpoint? Right now I'm just switching to a base model for them because they are insane and cartoony in here pretty much no matter what I try.

    Mario1964Oct 6, 2025
    CivitAI

    Finally a really good wan model, i love it :D

    ethanfelOct 6, 2025· 6 reactions
    CivitAI

    Hi, could you share the model without any lightning LoRA merged, or share which one was merged so that it can be subtracted? That way we can use it with CFG and more steps. Thanks :)

    snappy47Oct 7, 2025· 2 reactions

    Yeah, especially since the lightning LoRAs could go through future iterations and perhaps different variants. Merging whichever one makes this checkpoint a lot less flexible. Hoping OP will reconsider.

    Checkpoint
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    28,894
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/27/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    smoothMixWan2214BI2V_i2vLow.safetensors

    Mirrors