CivArchive

    ⚠️ Important information

    All models already include the Lightning LoRAs, except the SVI models.
    Do not use additional Lightning LoRAs on models that already have Lightning integrated, or the quality will be degraded.
    For SVI models, you can use Lightning LoRAs if you want faster video generation.

    2-or-3 KSampler workflow (for SVI): https://civarchive.com/models/2079192?modelVersionId=2668801
    v2.1 (for NSFW V2): https://civarchive.com/models/2079192?modelVersionId=2562360

    v2.1 with MMAudio (for NSFW V2) by @huchukato: https://civarchive.com/models/2320999?modelVersionId=2613591

    Another triple-KSampler workflow: https://civarchive.com/models/1866565/wan22-continuous-generation-svi2-pro-or-gguf-or-32-phase-or-upscaleinterpolate-w-subgraphs-and-bus?modelVersionId=2559451

    The triple-KSampler setup allows for more motion and helps prevent slow-motion issues; in exchange, your videos will take longer to generate.

    If you are having issues with my SVI workflow, you can try Kijai's workflow here: https://github.com/user-attachments/files/24364598/Wan.-.2.2.SVI.Pro.-.Loop.native.json. Alternatively, you can try the FMLF workflows (https://github.com/wallen0322/ComfyUI-Wan22FMLF/tree/main/example_workflows), which are simpler. There are others on Civitai that work very well too.

    Qwen-VL workflow as an alternative to Grok for creating your dynamic NSFW prompts. Thanks to @huchukato for his work: https://civarchive.com/models/2320999?modelVersionId=2611094

    🟣 SVI Update – NSFW

    Model Presentation (SVI-compatible version)

    This update was made because the NSFW V2 models were not fully compatible with SVI LoRAs.
    This version was created to work smoothly with SVI, while still functioning without them (though the workflow must be adapted).

    Be careful: SVI LoRAs only work with a workflow specifically designed for them; otherwise, they won't work.

    There are two models available for SVI:

    Fast Move (FM) – Sexual scenes may differ from the Consistent Face model and will generally be faster.
    Consistent Face (CF) – Slightly better image quality, which may be preferable for anime-style videos; sexual scenes differ from Fast Move, but the difference is minimal.

    You can also mix models between High and Low LoRAs:

    • FM (Fast Move) as High + CF (Consistent Face) as Low

    • CF (Consistent Face) as High + FM (Fast Move) as Low
      Both combinations work and give slightly different results, offering more flexibility for your videos.

    For this version, the main improvements include:

    ✔ Fully adapted for SVI LoRAs
    ✔ Greater flexibility: Lightning and SVI LoRAs must be loaded manually for custom workflows


    🟣 SVI LoRAs – Strengths & Weaknesses

    ⚡ Overview

    Strengths

    • Best solution for making long videos

    • Excellent transitions between video segments

    • Reduced degradation compared to other solutions

    • Strong character coherence: the model retains information from the previous video, helping maintain consistency

    ⚠️ Weaknesses

    • Weaker prompt understanding

    • Weaker camera understanding

    • Videos are less dynamic

    • Sometimes a slow-motion effect
      (can be mitigated with proper Lightning LoRAs, dynamic prompts, or the triple KSampler)


    🟣 SVI LoRAs – Download Links

    ⚡ Download

    High LoRA
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16.safetensors

    Low LoRA
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16.safetensors

    Note: Both LoRAs must be loaded manually in your workflow.


    🟣 Suggested Lightning LoRA Combos (Optional)

    ⚡ Overview

    You don’t have to use these Lightning LoRA combos. They are optional and allow you to fine-tune motion and degradation.
    You can also use other Lightning LoRAs or assign different combos per video for more control.


    🔥 Combo 1 – More Motion (Rapid Video Degradation)

    High LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/709844db75d2e15582cf204e9a0b5e12b23a35dd/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors
    Weight: 4

    Low LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22-Lightning/old/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

    or

    https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/blob/main/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors
    Weight: 1.4


    💜 Combo 2 – Less Image Degradation

    High LoRA →
    https://civarchive.com/models/1585622/lightning-lora-massive-speed-up-for-wan21-wan22-made-by-lightx2v-kijai

    or

    https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/blob/main/wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022.safetensors
    Weight: 1

    Low LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22-Lightning/old/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

    or

    https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/blob/main/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors
    Weight: 1


    🟢 Combo 3 – Balanced Motion / Moderate Degradation

    High LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/709844db75d2e15582cf204e9a0b5e12b23a35dd/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors
    Weight: 3

    Low LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/709844db75d2e15582cf204e9a0b5e12b23a35dd/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors
    Weight: 1.5


    🧠 Advanced Usage Tip

    You can disable the global Lightning LoRAs (in my workflow) and assign different combos per video:
    Combo 1 for Video 1, Combo 3 for Video 2, Combo 2 for Video 3.

    Each combo produces different motion and degradation behavior.

    If you want to create several-minute-long videos while maintaining high quality, it is possible, but it will take a very long time. You just need to avoid using Lightning LoRAs and use the full model instead.


    🧠 Dynamic Prompts – Better Control & More Motion

    You can use dynamic prompts for better control; they also help make the video more dynamic.
    Just give the example prompt below to an LLM like ChatGPT, describe your image and the video you want, and ask it to keep the same prompt structure as in the following example.

    ⚠️ This will be an NSFW prompt; ChatGPT will not accept it.
    You can use Grok (https://grok.com), which accepts NSFW prompt modifications.

    For more examples of prompts with different poses, check the Enhanced FP8 Model.
    Give these prompts to GLM together with the dynamic prompt structure if you want.


    Example 1

    (At 0 seconds: Wide shot showing a slightly overweight man casually walking down a city street, camera fixed in front, urban environment with buildings and cars.)
    (At 1 second: Suddenly, a massive shark bursts from the pavement ahead, looking terrifying at first, pavement cracking, dust and debris flying, camera from side angle.)
    (At 2 seconds: Medium shot from the side, the man stumbles backward in shock, while the shark dramatically slows down and strikes a comically exaggerated sexy pose, revealing large, exaggerated shark breasts, covered by a colorful bikini.)
    (At 3 seconds: Close-up on the man’s face, eyes wide in disbelief, as he turns to look at the shark, small cartoon-style hearts floating above his head to emphasize his amazement, camera slightly low-angle.)
    (At 4 seconds: Dynamic travelling shot showing the man frozen in the street, the shark maintaining its sexy pose, water splashes and debris still moving realistically, urban chaos around.)
    (At 5 seconds: Wide cinematic shot pulling back, showing the man standing in the street, staring at the bikini-wearing shark with hearts above his head, epic perspective highlighting absurdity and humor.)


    Example 2 – Anime NSFW

    (At 0 seconds: The couple in a cozy bedroom, anime style, soft lighting highlighting their intimate embrace, her back arched slightly as he positions himself.)
    (At 1 second: The man’s hips moving rhythmically, the head of his penis sliding effortlessly into her vagina, her body responding with a gentle, fluid motion, anime-style motion lines emphasizing the smooth penetration.)
    (At 2 seconds: Her back arching deeply against him to intensify the pleasure, hips swaying with each thrust, breasts bouncing subtly, small hearts floating around them to capture the erotic energy.)
    (At 3 seconds: Her face, eyes closed in bliss, a soft moan escaping, hands resting behind her head, anime-style blush on her cheeks, the air filled with a seductive aura.)
    (At 4 seconds: The man penetrating her deeply, her body moving in sync with his, the bed sheets slightly rumpled, the room’s warm lighting enhancing the intimate, lustful atmosphere.)
    (At 5 seconds: The couple locked in a passionate embrace, the scene exuding vibrant, seductive energy, anime style with smooth lines and soft shadows.)
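The timed-segment structure used in these examples is mechanical enough to assemble programmatically. Below is a minimal sketch; the helper name is hypothetical and not part of any workflow or tool mentioned on this page:

```python
# Hypothetical helper: assemble the "(At N seconds: ...)" prompt structure
# used in the examples above from a list of per-second descriptions.
def timed_prompt(segments):
    return "\n".join(
        f"(At {i} second{'s' if i != 1 else ''}: {desc})"
        for i, desc in enumerate(segments)
    )

print(timed_prompt([
    "Wide shot of the subject standing still, camera static.",
    "Camera pans right as the subject starts walking.",
]))
```

Each list entry becomes one timestamped clause, so an LLM (or you) only has to fill in the per-second descriptions.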

    🟣 Lightning Edition – NSFW I2V V2

    Model Presentation (2 new versions available)

    I originally planned to release only one V2, but some people preferred the NSFW V1 over the Fast Move V1 version, so depending on what you’re looking for, one version may suit you better than the other.

    For these V2 versions, I tried a new approach:
    ✔ I made sure that most sexual poses work, while the model is also good for SFW content
    ✔ More flexible for general use


    🔥 NSFW Fast Move V2

    Improvements included in this version:

    • Better prompt understanding

    • Better camera understanding

    • Reduced unnecessary back-and-forth movements outside sexual poses
      (cannot be completely removed, but strongly reduced)

    • Improved bounce effect on the buttocks

    • If a man appears, he will no longer automatically attempt to penetrate the woman when she is nude

    This version is designed for those who want more dynamic scenes with more movement.


    💜 NSFW V2

    The difference between this version and NSFW Fast Move V2:

    • Less camera control

    • Less camera understanding

    • But body movements are less pronounced (breasts and buttocks)

    • Some preferred V1 NSFW to V1 NSFW Fast Move, and this version keeps that spirit

    For varied sexual poses, check the previews — there are many.
    You can use the shown prompts and adapt them to your images, but of course, other prompts will work as well.
    Don’t hesitate to use other LoRAs for creating specific concepts.

    • Steps: 2+2
      (Jellai recommends 2+3 for even better results, and I agree)

    • Sampler: Euler simple

    • CFG: 1

    • This model already includes the Lightning LoRAs; do not add them again, or the quality will be degraded

    • You need to download both models: H for High and L for Low.
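For clarity, a "2+2" (or "2+3") setting splits one sampling schedule between the two models: the High-noise model denoises the first steps and the Low-noise model finishes the rest (in ComfyUI this is typically wired with two KSamplerAdvanced nodes via their start_at_step/end_at_step inputs). A minimal sketch, with a hypothetical helper name:

```python
# Hypothetical helper: translate an "N+M" step setting (e.g. "2+2", "2+3")
# into the step ranges for the High- and Low-noise samplers.
def split_steps(spec: str):
    high, low = (int(x) for x in spec.split("+"))
    total = high + low
    return {
        "total_steps": total,
        # High-noise model handles the early, noisy steps...
        "high": {"start_at_step": 0, "end_at_step": high},
        # ...then the Low-noise model finishes from there to the end.
        "low": {"start_at_step": high, "end_at_step": total},
    }

print(split_steps("2+2"))  # 4 steps total, handoff after step 2
print(split_steps("2+3"))  # 5 steps total, handoff after step 2
```

This is why both the H and L files are required: each covers a different portion of the schedule, and neither works alone.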

    Here is an example made with the base workflow + the First/Last Frame workflow + upscaler (only on the starting images) + inpainting (cum).

    https://civarchive.com/images/114733916


    LoRA: Penis Insert WAN 2.2

    LoRA weight: 1

    Other examples with different LoRAs

    Recommended LoRA to obtain a very realistic vagina and anus:
    https://civarchive.com/models/2217653?modelVersionId=2496754

    💡 Tip for Anime Style:

    If you’re working in an anime style, feel free to follow the advice of @g1263495582 (thanks to him for this).
    Try these LoRAs together or separately; they help maintain face consistency:

    🔹 Note:
    For these LoRAs, use only Low Noise.
    For the examples mentioned above, use High and Low Noise as indicated.

    Many things can be done with the model, but don’t hesitate to use other LoRAs for specific purposes. And don’t hesitate to lower the LoRA strength to preserve the face as much as possible.


    📷 Dynamic prompt example with different camera angles:

    (at 0 seconds: wide shot showing the woman standing in the snowy plain, a massive giant dragon emerging behind her, snow cracking and dust rising).
    (at 1 second: the woman jumps backward onto the dragon’s back as it bursts fully from the sky, camera tracking the motion from a side angle, debris and snow flying).
    (at 2 seconds: medium shot from the side, the woman balances heroically on the dragon’s back as it begins to run forward across the snowy plain, slow-motion on her posture).
    (at 3 seconds: close-up on the woman’s determined face, camera slightly low-angle to emphasize her heroic stance, snow and debris flying around).
    (at 4 seconds: dynamic travelling shot alongside the dragon, showing the snowy plain, scattered debris and ice fragments flying everywhere as it gains speed).
    (at 5 seconds: wide cinematic shot pulling back, showing the dragon taking off with the woman riding on its back, soaring above the snowy plain, epic perspective with snow, wind, and scale emphasizing the drama).

    For available camera angles, check further down in the “cam V2” model description.


    🔞 Normal Clip vs NSFW Clip:

    You can also use the NSFW version for your clip.
    It can bring positive effects for sexual scenes, but it can also cause issues, as in this example:

    Here are the links to NSFW clips (thanks to zoot_allure855 for correcting the BF16 version):


    If you have any questions, don’t hesitate to ask!

    Many people ask me for help via private message. You can do that, no problem, but I would appreciate it if you could ask in the comments section instead, as it could help other people. Thank you.

    🟣 Lightning Edition – NSFW I2V camera prompt adherence

    ⚡ Model Presentation (2 versions available)

    I decided to release 2 versions. Both can produce different results, as you can see in the previews (the seed was the same).

    • Fast Move Version: provides more movement, and the movements will be faster, with better prompt understanding and camera handling
      (you can see it in the 4th preview "fast move high" where the man slaps the woman)

    • Natural Motion Version: offers more natural breast movements depending on the situation and produces slower scenes.

    👉 Check both and choose the one that works best for you.

    This NSFW edition is, of course, focused on sexual poses.
    You should achieve very good results.
    To create specific concepts, feel free to use specific LoRAs — they work very well with this model.


    • Steps: 2+2
      (Jellai recommends 2+3 for even better results, and I agree)

    • Sampler: Euler simple

    • CFG: 1

    • This model already includes the Lightning LoRAs; do not add them again, or the quality will be degraded

    You can also use this T5, which may improve understanding:
    https://huggingface.co/NSFW-API/NSFW-Wan-UMT5-XXL/tree/main?not-for-all-audiences=true


    📝 Usage Tips

    • Start with a clean, high-resolution image for better results.

    • The model will not change the face; if it does, increase the resolution.

    • Also adjust your prompt if you are not getting what you want.

    • Wan understands some terms well, but not all.

    • For prompts, check the video previews: adapt them to your image.
      Other prompts will work too.


    🎥 Training

    I trained 2 LoRAs:

    • The first one using several videos and images of different sexual positions.

    • The second one to bring more dynamic motion.

    Respect to everyone who creates this type of LoRA — it requires a lot of work.


    🙏 Credits

    This model wouldn’t exist without the incredible work of these creators:

    A special thanks to Alcaitiff and CubeyAI, two very kind and humble people.


    🔔 Important

    Please don’t support my work with buzzes here, I don’t need it.
    If you want to support someone, support the creators listed above — they truly deserve it.


    💬 Feedback

    Feel free to give me feedback, positive or negative, to help improve future updates.

    Update WAN 2.2 V2 CAM I2V – New Feature: Camera & Prompt Improvements

    This custom version of WAN 2.2 I2V has been updated to deliver better prompt comprehension and improved handling of camera angles and cinematic movements. It provides more accurate scene interpretation, smoother transitions, and enhanced control over dynamic motion.

    Key Features:

    • Excellent understanding of prompts and scene composition.

    • Supports various camera angles and movements, including zoom, dolly, pan, tilt, orbit, tracking, and handheld shots.

    • Ideal for cinematic storytelling, animated sequences, and creative image-to-video projects.

    • Flexible multi-step setup: standard 4 steps (2+2), which can be increased for higher fidelity.

    • Recommended sampler: Euler simple.

    You can, of course, use your usual prompts; this is just one example among many.


    Example Prompt with Different Camera Angles

    (at 0 seconds: wide frontal shot of a man standing in front of an open fridge, cinematic lighting, subtle ambient kitchen reflections, the fridge contents visible, camera static).
    (at 1 second: medium shot from the front as he opens the fridge fully, reaches for a can, slight zoom-in to emphasize the action, cinematic framing).
    (at 2 seconds: camera shifts to a side medium shot, tracking him as he lifts the can to his mouth, fluid movement, maintaining lighting and reflections).
    (at 3 seconds: camera starts a smooth 360-degree orbit around the man, following him as he drinks from the can, motion fluid, background slightly blurred for cinematic effect).
    (at 4 seconds: close-up on his face and upper body while drinking, orbit continues subtly, fridge reflections accentuating realism, cinematic polish).
    (at 5 seconds: final wide shot as he lowers the can, camera completes orbit to original angle, showcasing the kitchen space, lighting, and dynamic movement).
    

    Available Camera Movements

    Zoom / Dolly

    • zoom in

    • zoom out

    • camera zooms in on subject

    • camera zooms out gradually

    • dolly in

    • dolly out

    • camera dollies in slowly

    • camera dollies out steadily

    • crash zoom

    Pan

    • pan left

    • pan right

    • camera pans across the scene

    • gentle pan left

    • sweeping pan right

    Tilt

    • tilt up

    • tilt down

    • camera tilts up to reveal…

    • camera tilts down from…

    Orbital / Tracking / Arc / Rotation

    • orbit around subject

    • 360° orbit

    • camera circles around

    • tracking shot

    • camera tracks alongside subject

    • arc shot

    • curved camera movement

    Other Movements & Styles

    • static camera / static shot

    • handheld shot

    • camera roll

    Note:
    LoRAs work perfectly with this model, offering full compatibility and consistent results across styles and concepts.

    SUPPLEMENTARY ADVICE

    You can use negative prompts, but be careful: this will double the generation time.
    Only use them if you really want to prevent something from appearing in your video.
    In that case, enable the corresponding node; otherwise, keep it disabled.

    ⚠️ Important
    The model must be used with CFG set to 1, so negative prompts do not work by default.
    However, there is a simple way to enable them.

    How to enable negative prompts:

    1. Open the Manager

    2. Search for KJNodes and install it

    3. In your workflow, add the WAN Nag node

    4. To use it correctly:

      • Connect this node after the LoRA Loader

      • Feed it with the negative prompt

      • Then connect it to the first KSampler (High)

    👉 Only use this option when necessary, to avoid unnecessarily increasing generation time.
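Background on why negatives cost time: on the standard CFG path, CFG > 1 requires two model evaluations per step (one for the positive prompt, one for the negative/unconditional), while CFG = 1 reduces to the positive pass alone, so the negative can be skipped entirely. A minimal numeric sketch of the classifier-free guidance formula:

```python
# Classifier-free guidance combines two predictions per denoising step:
#   out = uncond + cfg * (cond - uncond)
# At cfg = 1 this collapses to `cond`, so the unconditional/negative pass
# adds nothing and can be skipped -- which is why CFG-1 models ignore
# negative prompts by default, and why enabling them roughly doubles
# the per-step cost.
def cfg_mix(cond: float, uncond: float, cfg: float) -> float:
    return uncond + cfg * (cond - uncond)

print(cfg_mix(1.0, 0.4, 1.0))  # cfg = 1: equals cond, negative has no effect
print(cfg_mix(1.0, 0.4, 3.5))  # cfg > 1: pushed away from the negative
```

The NAG node mentioned above is a separate mechanism for injecting negative guidance while keeping CFG at 1; the sketch only illustrates the default CFG behavior.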

    And here is the negative prompt for unwanted movements:
    motion artifacts, animation artifacts, movement blur, motion distortion, dynamic distortion, shifting shapes, unstable render, instability, wobbling effect, jittering effect, vibrating render, inaccurate details, visual noise, distorted surfaces, rendering errors, warped shapes, exaggerated butt movement, jiggle, overanimated hips, unnatural butt motion, hyper bounce, extreme curves, distorted hips, unnatural pose, unrealistic anatomy, deformed body, disproportionate body, floating limbs, blurry textures, clipping, stretching, low detail, messy background, artifacts, butt bounce, moving hips, swinging hips, shaking butt, wiggling butt, moving lower body, moving pelvis, jiggling buttocks, bouncing butt, unstable stance, unnatural hip motion, exaggerated hip movement, hip sway, hip rotation, bottom motion, pelvis motion, wobbling hips, fidgeting lower body, dancing hips, pelvic movement, motion blur, unnatural movement

    And here is the negative prompt from the official ComfyUI workflow:
    色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走
    (English: vivid tones, overexposed, static, blurry details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, crowded background, walking backwards. Paste the Chinese version as-is.)

    Note: If you use the default workflow without the node, this negative prompt will not work.

    UPDATE: Lightning Edition – T2V

    I’m not really experienced with T2V myself, but a colleague who works with it a lot tested this version and confirmed that it performs very well. From the few tests I managed to do on my side, I got the same impression, although I haven’t compared it directly with the base version yet.

    The settings are the same as in the image-to-video version: 2+2 steps or more if you prefer, CFG at 1, and Euler simple (though other samplers also work great).

    I don’t plan to make an NSFW version for the T2V release,
    but for the I2V version, I’m already quite happy with the first results.

    Update WAN 2.2 V1.1 I2V

    Updated version of the original Lightning merge, with the same settings (2+2 steps or more), featuring more movement and smoother flow (depending on the prompt).

    The model already works very well in NSFW. Just use the right LoRAs, and the movement will improve.

    WAN 2.2 V1 I2V

    This checkpoint is based on the original WAN 2.2, with the Lightning WAN 2.2 and Lightning WAN 2.1 LoRAs already integrated. This improves image quality, makes motion smoother and more dynamic, and removes the slow-motion effect that can occur with the Lightning models.

    A common setup is to use 2 steps on the high model and 2 steps on the low model, though other settings may work as well. Do not apply the Lightning LoRAs manually — they’re already included in this checkpoint.

    My workflow

    https://civarchive.com/models/2079192/wan-22-i2v-native-enhanced-lightning-edition


    Comments (116)

    gxbsyxh · Jan 18, 2026

    Thank you for all the time you've put into this series; your model is the best balanced among so many authors, but I haven't had much success with the NSFW series, which is a bit of a pity

    juliusmartin · Jan 18, 2026

    hi, its me again, whining about the image degradation that seems to occur in all wan models.

    and in this nearly perfect model, thats unfortunately the biggest issue in my setup. its so frustrating because without that, the model looks almost near perfect. most other issues i can usually attribute to randomness, a bad lora in the stack, or some setting mismatch. but the image degradation thing is hard to overcome once it starts.

    first of all: you went in the correct direction removing the lightning. its noticeably more stable now.

    also, this is not meant as demotivation at all, just trying to point at the one issue that still holds it back. if you prefer, i can dm this kind of feedback in the future instead of posting it publicly.

    incredible model overall. the non lightning setup finally gives room to do proper testing, and thats what i did. the biggest issue im seeing is color and detail degradation over time. im honestly wondering why almost no one mentions it here. maybe because most people are not generating long enough clips to see it clearly (like beyond 30 seconds). but technically, the degradation already starts within the first seconds, it just becomes obvious after a few sequences.

    are your latest svi workflows already aligned with your preferred "least color degrading" settings for this model, or did you publish them with more general defaults while you run something slightly different locally?

    taek75799 (Author) · Jan 18, 2026

    Hello! Don't worry, I accept all constructive criticism; it helps me. Unfortunately, the color shift throughout the video is an issue when making long videos; it's inevitable. I believe you can avoid it simply by not using Lightning LoRAs, but the generation time will be extremely long. I think using Lightning 1030 and 1022 is the best way to avoid this, but of course, it’s inevitable anyway. We aren't ready to make infinite videos just yet, lol. I tried to bypass it with nodes like Color Match, but it's useless.

    g1263495582 · Jan 18, 2026 · 1 reaction

    I use the same seed for long generation, both h + l seed identical throughout the entire process. Initially, I ran into the degradation issue, but after adjusting something (I can't recall what), the problem went away.

    fedivom111 · Jan 18, 2026

    @taek75799 Have you tried using Color Match with Reinhard at 0.5 strength? That's what I'm using based on someone else's suggestion and not seeing any color deg.

    taek75799 (Author) · Jan 18, 2026

    @fedivom111 Hello! I tried Color Match at 0.7 with MKL, and it was even worse lol. Thanks for the tip, I’ll give it a try and share the results!

    bnzarev821 · Jan 19, 2026

    @taek75799 With the correct loras the degradation is minimal, i've generated 10 min vids with almost no degradation (like this one), you should probably check this workflow for the correct loras and fix your stuff.

    taek75799 (Author) · Jan 19, 2026 · 1 reaction

    @bnzarev821 Thank you for your feedback! I'm familiar with this method: it uses the Triple KSampler, and it's been around for a long time. The first sampler is used to generate more movement with a higher CFG without Lightning LoRAs, while the other two use the Lightning LoRAs. I already have a workflow using the Triple KSampler node; it's easier to set up. I'll double-check it and share it soon, I'm sure people will be interested. Thanks again!

    norman12393420 · Jan 18, 2026

    Update: I saw the CF version!!


    Sorry, the version I see available for download is the FM version. Could you please tell me where I can download the CF version

    wewewew · Jan 18, 2026

    I don't see the SVI CF model, or quants for SVI. Maybe I missed them... maybe put up different model pages? There are way too many versions.

    qek · Jan 18, 2026

    No, here

    taek75799 (Author) · Jan 18, 2026

    Hello! I posted the CF version an hour ago. As for the GGUF files, that will be later

    dementymail195 · Jan 18, 2026

    Hello! Great Work!

    Can you share workflow for t2v version please!

    Or give advice how to create it on your workflow?

    Thank you, for your work!

    taek75799 (Author) · Jan 18, 2026

    Hello! I don't have a workflow for T2V, lol. I know it's weird, I just used the default ComfyUI workflow. I'm not very familiar with Wan T2V; I only made a model because I was asked to. Actually, someone shared a tip with me that I haven't tried yet to turn I2V models into T2V: just put a black image in "Load Image" and enter your prompt.

    Darkf0rge · Jan 18, 2026

    This does seem to have better consistency and the eyes no longer change. But Now I can't really get it to follow my prompts at all =(

    taek75799 (Author) · Jan 18, 2026

    Hello! Try changing the prompt. SVI is much more finicky when it comes to understanding, but it eventually gets there, lol!

    Don't hesitate to use an LLM to help you

    levogo2909307 · Jan 18, 2026 · 1 reaction

    I was very happy with nsfw fastmove with lightning included, but for anyone reading and wondering if the jump to SVI is worth it, it's definitely worth it. using the CF model.

    Vektor · Jan 18, 2026

    jfc Somebody please just tell me which one to dl. 😵‍💫 I have a workflow that uses lightning. Also, I have no clue what SVI is.

    mattmcfcbriggs95833 · Jan 19, 2026

    Without trying to sound dumb :) where do I put the GGUF model? I've tried loads of different folders but it never loads in.

    Hasenbein · Jan 19, 2026 · 1 reaction

    Usually unet folder. And you need a gguf loader in the workflow.

    spartanjello123 · Jan 19, 2026

    Replace the model loader with a GGUF loader node.

    ArtificialOtaku · Jan 19, 2026

    So I have all the requirements installed, but I still get this error when I enable the second group (everything is fine using groups 1 and 3):

    Prompt outputs failed validation: CLIPTextEncode: - Required input is missing: clip CLIPTextEncode: - Required input is missing: clip IAMCCS_WanImageMotion: - Required input is missing: anchor_samples SamplerCustomAdvanced: - Required input is missing: sampler - Required input is missing: sigmas SamplerCustomAdvanced: - Required input is missing: sampler - Required input is missing: sigmas VAEDecode: - Required input is missing: vae

    So, I can make videos with your WF, but your group2 seems to be messed up... any ideas how to fix it?

    taek75799 (Author) · Jan 19, 2026

    Hello! Look behind the subgraphs; you'll find all the Get and Set nodes. Check to see if they are properly activated. To enable or disable a node, click on it and press Ctrl+B.

    zzozz · Jan 19, 2026

    I tried a variety of prompts with Grok, but I couldn't suppress the freedom of breast

    GlowingGuardianGirl · Jan 19, 2026

    Hello 🙌. Is there a place like a Huggingface to download more GGUF versions of the model? I'm looking for Q2_K for tests and Q5_K_M. Thank you

    taek75799 (Author) · Jan 19, 2026

    Hello! I haven't made those two versions, sorry. Maybe when I have some time, but it would be best if you could tell me which specific one you want. I won't be able to redo all the models, lol.

    GlowingGuardianGirl · Jan 19, 2026 · 1 reaction

    @taek75799 Thank you for your answer. The Q6 model doesn't fit in my VRAMlet computer. I would only need the Q2K (High Noise only) and Q5K_M (H&L) of "NSFW Fast Move V2". I don't know how hard it is for you to make those models, so thank you very much if you ever consider taking the time to do it.

    taek75799 (Author) · Jan 19, 2026 · 1 reaction

    @GlowingGuardianGirl Try to remind me if I forget. Let's say in about a month, I might be able to

    Soulcoding · Jan 19, 2026

    q8 v2 model works with svi workflow?

    taek75799 (Author) · Jan 19, 2026

    It will work, but it has quality loss issues.

    lolbleach001584 · Jan 19, 2026

    Hi, I have a very noticeable color change in clip 2, is this normal? https://files.fm/u/6zggj5z6n3#/view/mg642vm387

    I've also attached my workflow in case anyone can help me solve the color problem: https://files.fm/u/6zggj5z6n3

    EddieMurfington · Jan 19, 2026

    Hello! Same for me, like a change in contrast.

    lolbleach001584 · Jan 19, 2026

    @EddieMurfington Hi, I set the motion amplitude to 0 and I think that fixed it.

    taek75799 (Author) · Jan 19, 2026 · 1 reaction

    Thanks for letting me know. I haven't really encountered this issue myself, but I've added a note to the workflow

    EddieMurfington · Jan 19, 2026 · 1 reaction

    I just tested the amplitude at 0 and it does indeed solve the problem, but Taek, you've done a great job, both with the models and the workflow, it's brilliant.

    taek75799 (Author) · Jan 19, 2026

    Thanks for your feedback! I’m going to repost the wf with the value set to 0. I feel it adds very little in terms of motion.

    anyezhixieJan 19, 2026· 1 reaction
    CivitAI

    When the scene shifts to a sideways intercourse position, the female genitalia being penetrated suddenly displays something resembling swollen labia or testicles, which cannot be avoided through prompts or negative prompts. Both the NSFW and NSFW FM V2 versions exhibit this same issue, which is quite amusing.

    Note: This issue specifically occurs when the characters' initial pose in the scene is not sideways intercourse, but the scene progresses into a sideways position. If the initial image already depicts sideways intercourse, this problem is largely avoided.

    11014798Jan 19, 2026
    CivitAI

    Hey I can't seem to find the download link for a specific diffusion model. When trying to generate in comfyui, I get this message: Model in folder 'diffusion_models' with filename 'wan2.2\Wan2_2-I2V-A14B-HIGH_SVI_fast_move_nsfw_fp8.safetensors' not found.

    Do you know where i can get it?

    taek75799
    Author
    Jan 19, 2026

    Hello! Please tell me the name listed in the description instead. Unfortunately, Civitai renames the files I upload, so I'm not exactly sure which one it is.

    11014798Jan 19, 2026

    @taek75799 Hey thank you for the fast reply, in the description of your new Model you say this: "There are two models available for SVI:

    Fast Move (FM) – Sexual scenes may differ from the Consistent Face model and will generally be faster."

    I would assume that what I am missing is this.
    There's a high and a low version for it. SVI Fast Move is missing.
    Sorry, I'm pretty new; I've just been doing generations on the "easy generator" frontend.

    taek75799
    Author
    Jan 19, 2026

    @AcidKing2501x No problem! A lot of people make the same mistake: they only download the 'high' version. Wan 2.2 works differently compared to other models.

    11014798Jan 19, 2026

    @taek75799 Haha, no, I meant that I am missing two diffusion models and don't know where to get them. In your workflow there are two "Load Diffusion Model (Safetensor)" nodes, High and Low.
    One of those diffusion models is called 'wan2.2\Wan2_2-I2V-A14B-HIGH_SVI_fast_move_nsfw_fp8.safetensors'.

    I've downloaded several dependencies for your workflow; I just can't seem to find the download links for those two. Can you maybe help me?

    taek75799
    Author
    Jan 19, 2026

    @AcidKing2501x The names of my models aren't the same once I post them on Civitai; they change the names, lol. But they are all here.

    justanaxolotlJan 19, 2026· 1 reaction

    Hey @taek75799, I think I've figured it out, but I ran into a very similar problem to AcidKing: finding the SVI model needed was very unclear in your guide (not meant as an insult; I think I finally figured it out, but just mentioning it).

    Maybe a guide like this:

    1. Download this (link) and put in models/etc...
    2. Download this (link) ....

    Would be useful. I'm not a total noob at ComfyUI (though definitely still an average user at best), but I've been out of the game for a while and am getting back up the learning curve, so I imagine this is super confusing for people not deep in the weeds here.

    For Acid, I think you need:
    https://civitai.com/models/2053259?modelVersionId=2606405
    https://civitai.com/models/2053259?modelVersionId=2606408

    Thank you for sharing; hope this is helpful constructive criticism! I am just finishing up what I think is the correct setup; if it is, maybe I'll try to make an example KISS (keep it simple, stupid) setup.

    taek75799
    Author
    Jan 19, 2026· 1 reaction

    @justanaxolotl No problem! I didn't take it as criticism, but rather as advice. Don't worry about it! The most important thing is to be constructive, and you certainly were. Thanks! ;)

    9517554Jan 19, 2026· 1 reaction
    CivitAI

    Any chance you could do versions of your model (GGUF Q8, non-SVI preferably) without the lightx2v loras baked in? I'd like to try mixing in some non-lightx2v sampling, and it would be great if I could still use your (superb) checkpoint.

    I second this motion, except without GGUF; I'd rather just use the fp8. SVI doesn't really interest me much, so I would love to play around without catering to its reduced prompt adherence, but I still want full quality without Lightning!

    Thanks for all your hard work on this model! By far the GOAT

    taek75799
    Author
    Jan 20, 2026· 1 reaction

    Hello! I don't mind at all. I don't have access to the LoRAs I used to create the models right now. As soon as I'm back home, I'll be able to post them.

    taek75799
    Author
    Jan 20, 2026· 2 reactions

    @sifpjogzbihsjhbiej705 Hello, and thanks for your feedback, I appreciate it! You can try the SVI version; it will also work without the SVI LoRAs

    _Glint_Jan 20, 2026
    CivitAI

    I’m not quite sure why this is happening. I’ve already installed all the nodes you mentioned, but when I open the workflow, some nodes still report errors. ComfyUI also doesn’t seem to indicate which specific nodes are missing. Could there be any additional or special nodes that I need to install?

    taek75799
    Author
    Jan 20, 2026

    Hello! Normally, no, there are no special nodes. Still, take a look inside the subgraphs, especially the one containing the first prompt. You can also find the list of nodes used in the workflow (wf) in the description.

    Ponder_StibbonsJan 21, 2026

    When you have nodes installed and they are still red, you are missing dependencies. It's a bit like the toilets they have on display at Home Depot. Ask the LLM of your choice why that guy in the orange apron's face is so red.

    rzxunfetteredJan 20, 2026· 1 reaction
    CivitAI

    SVI without Lightning is amazing; if only it could understand prompts better. I even tried dynamic prompts from Grok...

    taek75799
    Author
    Jan 20, 2026

    Yes, unfortunately, that’s the issue with SVI.

    1720435Jan 20, 2026
    CivitAI

    If I'm using the NSFW Q8 version (the latest one), do I still need the NSFW LoRA, or is it already baked into this model? Also, do you use PainterI2V for the workflow?

    taek75799
    Author
    Jan 20, 2026

    Hi, the latest Q8 model is already NSFW. For the workflow (wf), you can use the 2.1 wf found in the description link; it uses Painter Advanced and Painter Long Video.

    dkain76Jan 20, 2026
    CivitAI

    Qwen VL Advanced is taking longer than 15 minutes; it's still stuck on that node.

    InfernusIntraMeJan 20, 2026
    CivitAI

    I feel like there was an fp8 model called "sexual pose", but it either got removed or was renamed. If I'm right and it's in fact no longer here, I'd pay good money for that specific model.

    jagat334433Jan 23, 2026

    I think you're referring to Naughty I2V?

    Ponder_StibbonsJan 21, 2026· 4 reactions
    CivitAI

    Ah, very interesting. This is my model of choice for my SVI workflow; I did notice some slo-mo going on at inappropriate times. I shall download this new one. I have the feeling that by the time it's finished, you're going to have an update posted. Btw, my D drive asked me to ask you to please slow down. Poor thing can't keep up with you.

    taek75799
    Author
    Jan 21, 2026

    Hello and thanks for your feedback! Try adding a KSampler for the SVI LoRAs, making it a total of 3 KSamplers. This helps a lot in removing the slow-motion effect, though of course, the generation will take longer.

    Ponder_StibbonsJan 21, 2026· 1 reaction

    @taek75799 The triple ksampler setup was exactly what I used to use - integrating 2.1 as a base got rid of the awful 2.2 low noise look that's so annoying. When I switched to using the wrapper, stupid me, didn't notice that VraethrDalkr's awesome node had been updated to work with wrapper nodes. So yeah, I agree 100% that that's the way to go. Hopefully the extra overhead doesn't force me to drop my 4th stage, but of course with SVI it's just a matter of starting the next run with the saved latent. Which is probably why the I is for infinite. It's either that or incarcerated.

    Ponder_StibbonsJan 22, 2026

    Confirmed. TripleWVSampler (Advanced) using a full fp8 high model as base, 8 lightning steps took care of it. I feel like a complete moron, as usual. Don't know how I missed that update. My inference subgraph is now three nodes, kind of sad looking. Haven't had much luck with the SVI specific version yet but V2 is working great with the added steps. Three thumbs up.

    Biscuit10Jan 21, 2026
    CivitAI

    I'm giving the SVI a try without the Lightning LoRA (I assume I load it like any other LoRA in your workflow; what strengths are recommended?).

    Without the Lightning LoRA the image went very grey and blurry straight away; with it, there was still a tinge of blur and grey. I took your advice from the other thread and changed the steps to 3+3. What else might be adding the grey/blurriness?

    Biscuit10Jan 21, 2026

    I literally just saw another post suggest setting the motion amp to 0; I'll try that when back at the PC. Sorry for the duplicate comment!

    taek75799
    Author
    Jan 21, 2026

    Hello! Yes, the motion seems to be causing some issues for some people. Give it a try anyway, as it probably depends on the image; I personally haven't encountered this problem. As for the Lightning strength, check the description or look inside the workflow itself, there are a few recommendations there.

    Biscuit10Jan 21, 2026

    @taek75799 Turns out I'm an idiot and kept using the other workflow that didn't have the adjustments... lol

    but now have a persisted issue with custom node installation...

    "IAMCCS_WanImageMotionin subgraph 'New Subgraph'" - have tried multiple versions with mod manager

    taek75799
    Author
    Jan 21, 2026

    @Biscuit10 Maybe try installing it manually using git clone; the link is available in the description.

    Biscuit10Jan 21, 2026

    @taek75799 It seems to pertain to the video length; they all say "Unknown" and the value is set to 81, which I think was the length value from the other workflow.

    spartanjello123Jan 21, 2026

    It's the wrong LoRA or SVI. Get the exact LoRAs shown in the workflow, or you will get these blurred, gray videos.

    taek75799
    Author
    Jan 21, 2026

    @spartanjello123 Hello! Just download any preview posted here: the metadata hasn't been stripped. Simply open them with ComfyUI, and you'll see they were made using the same LoRAs and parameters as the posted workflow. Logically, if I post a preview, it's obviously done with that same workflow; otherwise, it wouldn't make any sense.

    Biscuit10Jan 21, 2026

    Okay, I managed to get it working (I was re-installing the whole KJNodes pack rather than the individual node that was missing). But yeah, now a similar issue of the video "dying": rather than greying out, it's gone a fuzzy yellow, but there was yellow in the background, so I'm guessing it diluted into that?

    Biscuit10Jan 23, 2026

    @spartanjello123 I have downloaded the LoRAs from the previously shared workflows etc., with the correct models and model loaders, but I'm still getting that grey blur. Any suggestions?

    taek75799
    Author
    Jan 23, 2026

    @Biscuit10 Save your workflow after generating the video and send it to me so I can check it.

    https://www.swisstransfer.com/fr-fr

    spartanjello123Jan 25, 2026

    @Biscuit10 
    I use these for SVI; some others I used gave me gray videos. I don't know if it works with GGUF Wan: lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16, SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16, SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16

    spartanjello123Jan 21, 2026
    CivitAI

    Compared to some other models, this one outputs slow animations, which makes 81 frames too little to do anything. But CF does work... so for me, at the moment, the best combination is to use this low model with some other high models, which gives the best results. Could be I'm just doing something wrong...

    taek75799
    Author
    Jan 21, 2026· 1 reaction

    This is one of the known issues with SVI; check the description. To fix it, you can try the NSFW V2 version, which doesn't have these problems at all. If you want to remove the slow-motion effect with the SVI LoRAs, you need to use a 3-KSampler workflow, but that can introduce other issues.

    mrmagregus827Jan 22, 2026· 2 reactions
    CivitAI

    How to stop the hip motion?

    anyezhixieJan 23, 2026

    My personal experience: Don't let an erect penis appear in the frame. Wearing shorts or having a flaccid penis prevents the character from automatically performing the fuck action.

    playnproto266Jan 24, 2026

    Sometimes, prompting for other actions "distracts" the model enough to stop the hip action. If you're okay with the character sleeping, you can try "the subject does not react. The subject is fast asleep." That doesn't ALWAYS stop the hip action, but it sometimes stops or reduces it.

    cliang96844Jan 22, 2026
    CivitAI

    Is it possible to make a 5B FP8 model?

    taek75799
    Author
    Jan 22, 2026

    Hello! Sorry, I don't think I'll be making a 5B model. I assume you have a lower-end GPU? Have you tried models like Q4?

    cliang96844Jan 22, 2026

    @taek75799 I have a 4060 Ti, but it's only the 16GB VRAM version. I've read up on the Hi and Lo versions, and it says I need something like 24+ GB to run them, so I haven't tried at all.

    taek75799
    Author
    Jan 22, 2026· 1 reaction

    @cliang96844 I also have a computer with a 4060 Ti, 16GB of VRAM. I can run models using Q8 versions. How much VRAM do you have? If you have 16GB, you should give it a try.

    cliang96844Jan 22, 2026

    @taek75799 OK, thank you for the explanation. So I assume the "base" version is the NSFW one, or is it the nolightning version now?

    taek75799
    Author
    Jan 22, 2026· 2 reactions

    @cliang96844 There are many versions available: the most advanced SFW version is CAM V2, followed by NSFW V2 as the best option. For creating long videos with SVI LoRAs, the SVI NoLightning version will give you the best results.

    I might post an LTX NSFW version soon, but it will probably be a beta.

    cliang96844Jan 23, 2026

    @taek75799 Sorry to bother you again. I keep getting an "expected 32 channels but got 64" error with NSFW V2 for I2V using ComfyUI's default I2V workflow. The models work for T2V with no problem, so I looked it up; it's supposedly an issue with the model type.

    Is that true, or do your NSFW models work for both scenarios?

    taek75799
    Author
    Jan 23, 2026

    @cliang96844 No problem. The NSFW V2 model is an I2V (Image-to-Video) model; it's surprising that it works in T2V (Text-to-Video), lol. There is a trick to make Wan I2V models work as T2V: you just need to use a black image along with a prompt, but I haven't tried it myself.
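    The black-image trick mentioned above can be sketched with a short stdlib-only Python snippet (the filename and the 832×480 size are arbitrary illustration choices, not values from any of the models here). It writes an all-black PNG you could feed into the I2V workflow's start-image input:

    ```python
    import struct, zlib

    def make_black_png(path, width, height):
        """Write an all-black 8-bit RGB PNG using only the standard library."""
        def chunk(tag, data):
            # PNG chunk: 4-byte big-endian length, tag, data, CRC over tag+data.
            return (struct.pack(">I", len(data)) + tag + data
                    + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

        # Each scanline: one filter byte (0 = None) + width * 3 zero bytes (RGB).
        raw = b"".join(b"\x00" + b"\x00" * (width * 3) for _ in range(height))
        ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)  # 8-bit RGB
        png = (b"\x89PNG\r\n\x1a\n"
               + chunk(b"IHDR", ihdr)
               + chunk(b"IDAT", zlib.compress(raw))
               + chunk(b"IEND", b""))
        with open(path, "wb") as f:
            f.write(png)

    # Hypothetical usage: match the resolution you generate at.
    make_black_png("black_start_frame.png", 832, 480)
    ```

    You would then point the workflow's Load Image node at the generated file; whether the I2V-as-T2V trick itself works is, as the author says, untested.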

    cliang96844Jan 23, 2026

    @taek75799 Oh, OK, I see. I've no idea how it worked then, but it does explain the generated video looking overcooked...
    I just found that the Lightning LoRAs are actually different for I2V and T2V, so I'm going to try that and hopefully that'll sort it out.

    cliang96844Jan 23, 2026

    @taek75799 Guess that's not a problem with the LoRAs... I tried using the GGUF loader for the NSFW Q4 model, but it keeps giving me the same error. Do you have a workflow for the GGUF models? I'd like to see which node it uses for the GGUF checkpoints, because I'm really stuck right now.

    g1263495582Jan 23, 2026

    @cliang96844 What vae did you use?

    taek75799
    Author
    Jan 23, 2026

    @cliang96844 Start by trying the base ComfyUI WF. You can also try using the very first WF I made; it’s even more basic: https://blog.comfy.org/p/wan22-day-0-support-in-comfyui

    Just replace the Wan Load Diffusion Model LoRA with a UNET Loader GGUF if you want to use the Q4 version.

    cliang96844Jan 23, 2026

    @g1263495582 I tried with both the Wan 2.1 and 2.2 VAEs. The odd part is that the unet loader and the VAE worked in the ComfyUI website's T2V workflow, but not I2V.

    cliang96844Jan 23, 2026· 1 reaction

    Update: weird. I re-opened the i2v WF and changed the loader to the unet GGUF one like before, but I deleted the no-lightning block and now it works.

    salzheldJan 23, 2026
    CivitAI

    Is the SVI model integrated in the "nolightning SVI" version, or is it just a model optimized for SVI? There are so many versions to choose from.

    taek75799
    Author
    Jan 23, 2026· 1 reaction

    Hello! SVI is not integrated.

    salzheldJan 25, 2026· 2 reactions

    Thank you very much. I read your description, and you clearly explain that. Sorry.

    taek75799
    Author
    Jan 25, 2026

    @salzheld No problem

    K0DEXJan 24, 2026
    CivitAI

    Hey, thanks for your models, they're awesome! Is a quantized version of the models for SVI planned?

    taek75799
    Author
    Jan 24, 2026· 2 reactions

    Hello and thank you for your feedback! Yes, I'm releasing the Q4_K_M version, and then I plan to release the Q6 next weekend.

    Latent_DreamscapeJan 24, 2026
    CivitAI

    Thank you - so far your models are some of my favorite.
    However, on your newest GGUF uploads, I've noticed you list both a high and low model - but they seem to both be named identically?

    Are we meant to rename them, was it a typo/oversight, or are you supposed to use the same model for both high and low?

    taek75799
    Author
    Jan 24, 2026

    Hello, thanks for your feedback! Civitai renames the files I upload. Actually, I'm limited by the character count, so I can't add "high" or "low" at the end. Look closely: it says H or L at the end.

    You can, of course, rename them; it's more convenient.

    Latent_DreamscapeJan 24, 2026

    @taek75799 Thanks for the quick response. Though I am not seeing any L or H anywhere in the model name after clicking download (or downloading), they are both 'wan22EnhancedNSFWSVICamera_nolightningSVICfQ4KM.gguf'
    No big deal to rename them though, just wanted to give you a heads-up as it might confuse people, especially those using Civ downloaders.

    Thanks again!

    taek75799
    Author
    Jan 24, 2026· 2 reactions

    @Latent_Dreamscape Okay, I see now. I just tried it and, indeed, I might try re-uploading them to see. It's quite annoying

    Latent_DreamscapeJan 25, 2026

    @taek75799 Fixed now. Thank you, it's appreciated.
    And I agree - a little frustrating that Civitai has such a limit in the first place!

    taek75799
    Author
    Jan 25, 2026· 1 reaction

    @Latent_Dreamscape Thanks for your feedback! I re-uploaded the Low file, but it was the same thing. So I removed the letter 'g' from 'Lightning' in the title, and it added a letter to the end of the downloaded file. It turns out there's also a limit on the downloaded filename, lol!

    goodluckhavefunJan 24, 2026
    CivitAI

    Hi, I can't get any video-generation workflow to work.

    I am new to AI video generation.
    I use a macbook 32GB RAM (unified RAM/GPU).


    According to my research, I decided to use the high-noise and low-noise model files of "NSFW FAST MOVE V2 Q4KM" because they are the newest Q4_K_M GGUFs.
    Please give feedback on the model choice.

    I think the real issue is the workflow in ComfyUI.

    Where do I find a working one? You seem to only provide workflows for SVI, and the models I chose seem not to be SVI.


    First I tried the "Image to Video (New)" Wan 2.2 starter template. I had to make changes to get it to work with GGUF (errors), but the best I achieved was one fully black video. I replaced the loading and Lightning nodes with just a "UNET Loader GGUF" for high and low and connected them to both "KSampler (Advanced)" nodes. I also tried replacing the Wan 2.1 VAE with the Wan 2.2 VAE from https://huggingface.co/wangkanai/wan22-vae/tree/main/vae/wan.

    My current error is: KSamplerAdvanced 'dict' object has no attribute 'get_model_object'

    Despite the SVI label, I tried one of your workflows. It looks very complex, but it didn't work even after I downloaded the missing dependencies and replaced the models.

    taek75799
    Author
    Jan 24, 2026

    Hello! For the NSFW V2 model, first try the most basic WF here: https://blog.comfy.org/p/wan22-day-0-support-in-comfyui. For the Q4_K_M model, you need to replace the Load Diffusion Model nodes with Load UNET GGUF. Then, load the High and Low models and see if everything works correctly.

    Regarding the WFs here, you can choose between the SVI models (which also work without SVI) and others like the NSFW V2, which I think is better to start with. SVI has the advantage of creating better long videos, but you can also achieve this with the NSFW V2 model.

    Another important point: do not use the Wan 2.2 VAE, as it will only work for the 5B model. Use the 2.1 VAE instead.
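    As a rough sketch of the node swap described above, the relevant loaders in ComfyUI's API-format JSON might look like the fragment below. The node IDs and filenames are placeholders (use whatever you actually downloaded); `UnetLoaderGGUF` is the loader registered by the ComfyUI-GGUF custom node pack, and `VAELoader` is ComfyUI's built-in VAE loader:

    ```json
    {
      "1": { "class_type": "UnetLoaderGGUF",
             "inputs": { "unet_name": "Wan2.2-I2V-A14B-HIGH-Q4_K_M.gguf" } },
      "2": { "class_type": "UnetLoaderGGUF",
             "inputs": { "unet_name": "Wan2.2-I2V-A14B-LOW-Q4_K_M.gguf" } },
      "3": { "class_type": "VAELoader",
             "inputs": { "vae_name": "wan_2.1_vae.safetensors" } }
    }
    ```

    In the graph UI this amounts to: delete the two Load Diffusion Model nodes, add two Unet Loader (GGUF) nodes in their place (high and low), and keep the Wan 2.1 VAE loader as-is.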

    goodluckhavefunJan 24, 2026

    @taek75799 What about Fast Move? I thought it's just faster for less VRAM but it seems to be something different. Would you recommend that or the normal Q4_K_M to me?

    taek75799
    Author
    Jan 24, 2026

    @goodluckhavefun Let's just say that for the NSFW version, the term 'fast move' refers to the fact that the videos will generally feature more movement. As for PC resource management, the Q4 version will be the least demanding.

    Start with the simplest option: the SVI versions are more recent, but for short videos, the NSFW V2 version is ideal rather than the SVI. Just start with that one; you'll find that as you go, you'll understand more and want to push things further.

    goodluckhavefunJan 24, 2026

    EDIT: It seems to work now after I disabled all plugins, re-enabled only the GGUF plugin, restarted ComfyUI, and set up the GGUF loader nodes again.

    I was still getting the error "KSamplerAdvanced: 'dict' object has no attribute 'get_model_object'" after connecting the UNET Loader GGUF nodes directly to the KSampler (Advanced) nodes.

    If I connect the ModelSamplingSD3 nodes in the middle, the error is "ModelSamplingSD3: 'dict' object has no attribute 'clone'".

    I downloaded the normal V2 Q4KM models and followed the blog you sent, which had me use a different workflow template: the one with the white warrior for image-to-video.

    I set the upper block with Lightning to bypass, set the lower block without Lightning to always active, and adjusted that block.

    What am I still doing wrong?

    JellaiJan 24, 2026· 1 reaction
    CivitAI

    These nolightning SVI models, are they all NSFW versions? I'm just asking because those used to be marked. It seems like they are NSFW versions from the examples.

    taek75799
    Author
    Jan 24, 2026· 2 reactions

    Hello Jellai! Yes, indeed, I can't include 'NSFW' because I'm limited to a very small number of characters.

    JellaiJan 26, 2026· 1 reaction

    @taek75799 Okay. You're doing some great work. I'm really hoping for a new SVI SFW model at some point. Any plans to make a new one? The SFW models keep a lot of the Wan detail and variety that the NSFW content greatly diminishes. Maybe something laser targeted on camera work, and whatever else you've learned works best for SVI?

    taek75799
    Author
    Jan 26, 2026· 2 reactions

    @Jellai I tried making an SFW version earlier, like the Cam model, but it didn't work. SVI really struggles with understanding camera movements. Also, adding a LoRA to improve dynamics didn't work either. The only solution for now is the Triple KSampler. I will try training the LoRAs again with different parameters.

    JellaiJan 26, 2026· 1 reaction

    @taek75799 Ah, bummer. It's cool that you're trying, but this result is pretty unfortunate, considering the power of SVI.

    Checkpoint
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    3,878
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/18/2026
    Updated
    5/4/2026
    Deleted
    -