CivArchive

    ⚠️ Important information

    All models already include the Lightning LoRAs, except the SVI models, where Lightning is not included.
    Do not use additional Lightning LoRAs on models that already have Lightning integrated, or the quality will be degraded.
    For SVI models, you can use Lightning LoRAs if you want faster video generation.

    2-or-3-KSampler workflow (for SVI): https://civarchive.com/models/2079192?modelVersionId=2668801
    v2.1 (for NSFW V2): https://civarchive.com/models/2079192?modelVersionId=2562360

    v2.1 with MMAudio (for NSFW V2), by @huchukato: https://civarchive.com/models/2320999?modelVersionId=2613591

    Another workflow with a triple KSampler: https://civarchive.com/models/1866565/wan22-continuous-generation-svi2-pro-or-gguf-or-32-phase-or-upscaleinterpolate-w-subgraphs-and-bus?modelVersionId=2559451

    The triple KSampler setup allows for more motion and helps prevent slow-motion issues. In exchange, your videos will take longer to generate.

    For those having issues with my SVI workflow, you can try Kijai's workflow here: https://github.com/user-attachments/files/24364598/Wan.-.2.2.SVI.Pro.-.Loop.native.json. Alternatively, you can try the FMLF workflows (https://github.com/wallen0322/ComfyUI-Wan22FMLF/tree/main/example_workflows); they are simpler. There are others on Civitai that work very well too.

    A Qwen-VL workflow is available as an alternative to Grok for creating your dynamic NSFW prompts. Thanks to @huchukato for his work: https://civarchive.com/models/2320999?modelVersionId=2611094

    🟣 SVI Update – NSFW

    Model Presentation (SVI-compatible version)

    This update was made because the NSFW V2 models were not fully compatible with SVI LoRAs.
    This version was created to work smoothly with SVI, while still functioning without them (though the workflow must be adapted).

    Be careful: SVI LoRAs only work with a workflow specifically designed for them; otherwise they will not work.

    There are two models available for SVI:

    Fast Move (FM) – Sexual scenes may differ from the Consistent Face model and will generally be faster.
    Consistent Face (CF) – Slightly better image quality, which may be preferable for anime-style videos; sexual scenes differ from Fast Move, but the difference is minimal.

    You can also mix models between High and Low LoRAs:

    • FM (Fast Move) as High + CF (Consistent Face) as Low

    • CF (Consistent Face) as High + FM (Fast Move) as Low
      Both combinations work and give slightly different results, offering more flexibility for your videos.

    For this version, the main improvements include:

    ✔ Fully adapted for SVI LoRAs
    ✔ Greater flexibility: Lightning and SVI LoRAs must be loaded manually for custom workflows


    🟣 SVI LoRAs – Strengths & Weaknesses

    ⚡ Overview

    Strengths

    • Best solution for making long videos

    • Excellent transitions between video segments

    • Reduced degradation compared to other solutions

    • Strong character coherence: the model retains information from the previous video, helping maintain consistency

    ⚠️ Weaknesses

    • Weaker prompt understanding

    • Weaker camera understanding

    • Videos are less dynamic

    • Sometimes slow-motion effect
      (can be improved with proper Lightning LoRAs, dynamic prompts, or a triple KSampler)


    🟣 SVI LoRAs – Download Links

    ⚡ Download

    High LoRA
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16.safetensors

    Low LoRA
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Stable-Video-Infinity/v2.0/SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16.safetensors

    Note: Both LoRAs must be loaded manually in your workflow.


    🟣 Suggested Lightning LoRA Combos (Optional)

    ⚡ Overview

    You don’t have to use these Lightning LoRA combos. They are optional and allow you to fine-tune motion and degradation.
    You can also use other Lightning LoRAs or assign different combos per video for more control.


    🔥 Combo 1 – More Motion (Rapid Video Degradation)

    High LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/709844db75d2e15582cf204e9a0b5e12b23a35dd/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors
    Weight: 4

    Low LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22-Lightning/old/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

    or

    https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/blob/main/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors
    Weight: 1.4


    💜 Combo 2 – Less Image Degradation

    High LoRA →
    https://civarchive.com/models/1585622/lightning-lora-massive-speed-up-for-wan21-wan22-made-by-lightx2v-kijai

    or

    https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/blob/main/wan2.2_i2v_A14b_high_noise_lora_rank64_lightx2v_4step_1022.safetensors
    Weight: 1

    Low LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/main/LoRAs/Wan22-Lightning/old/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors

    or

    https://huggingface.co/lightx2v/Wan2.2-Distill-Loras/blob/main/wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022.safetensors
    Weight: 1


    🟢 Combo 3 – Balanced Motion / Moderate Degradation

    High LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/709844db75d2e15582cf204e9a0b5e12b23a35dd/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors
    Weight: 3

    Low LoRA →
    https://huggingface.co/Kijai/WanVideo_comfy/blob/709844db75d2e15582cf204e9a0b5e12b23a35dd/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors
    Weight: 1.5


    🧠 Advanced Usage Tip

    You can disable the global Lightning LoRAs (in my workflow) and assign different combos per video:
    Combo 1 for Video 1, Combo 3 for Video 2, Combo 2 for Video 3.

    Each combo produces different motion and degradation behavior.

    If you want to create several-minute-long videos while maintaining high quality, it is possible, but it will take a very long time. You just need to avoid using Lightning LoRAs and use the full model instead.


    🧠 Dynamic Prompts – Better Control & More Motion

    Dynamic prompts give you better control and help make the video more dynamic.
    Just give the example prompt below to an LLM like ChatGPT: describe your image and the video you want, and ask it to keep the same prompt structure as in the example.

    ⚠️ This will be an NSFW prompt; ChatGPT will not accept it.
    You can use Grok (https://grok.com), which accepts NSFW prompt modifications.

    For more examples of prompts with different poses, check the Enhanced FP8 model.
    If you want, give these prompts to GLM together with the dynamic prompt structure.


    Example 1

    (At 0 seconds: Wide shot showing a slightly overweight man casually walking down a city street, camera fixed in front, urban environment with buildings and cars.)
    (At 1 second: Suddenly, a massive shark bursts from the pavement ahead, looking terrifying at first, pavement cracking, dust and debris flying, camera from side angle.)
    (At 2 seconds: Medium shot from the side, the man stumbles backward in shock, while the shark dramatically slows down and strikes a comically exaggerated sexy pose, revealing large, exaggerated shark breasts, covered by a colorful bikini.)
    (At 3 seconds: Close-up on the man’s face, eyes wide in disbelief, as he turns to look at the shark, small cartoon-style hearts floating above his head to emphasize his amazement, camera slightly low-angle.)
    (At 4 seconds: Dynamic travelling shot showing the man frozen in the street, the shark maintaining its sexy pose, water splashes and debris still moving realistically, urban chaos around.)
    (At 5 seconds: Wide cinematic shot pulling back, showing the man standing in the street, staring at the bikini-wearing shark with hearts above his head, epic perspective highlighting absurdity and humor.)


    Example 2 – Anime NSFW

    (At 0 seconds: The couple in a cozy bedroom, anime style, soft lighting highlighting their intimate embrace, her back arched slightly as he positions himself.)
    (At 1 second: The man’s hips moving rhythmically, the head of his penis sliding effortlessly into her vagina, her body responding with a gentle, fluid motion, anime-style motion lines emphasizing the smooth penetration.)
    (At 2 seconds: Her back arching deeply against him to intensify the pleasure, hips swaying with each thrust, breasts bouncing subtly, small hearts floating around them to capture the erotic energy.)
    (At 3 seconds: Her face, eyes closed in bliss, a soft moan escaping, hands resting behind her head, anime-style blush on her cheeks, the air filled with a seductive aura.)
    (At 4 seconds: The man penetrating her deeply, her body moving in sync with his, the bed sheets slightly rumpled, the room’s warm lighting enhancing the intimate, lustful atmosphere.)
    (At 5 seconds: The couple locked in a passionate embrace, the scene exuding vibrant, seductive energy, anime style with smooth lines and soft shadows.)
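The timestamped "(At N seconds: ...)" structure used in both examples can be assembled programmatically. Below is a minimal, hypothetical Python sketch (the helper name and sample descriptions are illustrative, not part of any workflow) that builds one line per second in that format:

```python
# Hypothetical helper: builds a dynamic prompt in the "(At N seconds: ...)"
# structure shown in the examples above. Names are illustrative only.
def build_dynamic_prompt(segments):
    """segments: one scene description per second, starting at 0 seconds."""
    lines = []
    for t, desc in enumerate(segments):
        unit = "second" if t == 1 else "seconds"
        lines.append(f"(At {t} {unit}: {desc})")
    return "\n".join(lines)

print(build_dynamic_prompt([
    "Wide shot of the subject standing still, camera static",
    "Medium shot as the subject starts walking, camera tracking from the side",
    "Close-up on the face, slight low-angle, debris flying around",
]))
```

An LLM given one of the examples above as a template will produce the same shape; this sketch only illustrates the structure.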

    🟣 Lightning Edition – NSFW I2V V2

    Model Presentation (2 new versions available)

    I originally planned to release only one V2, but some people preferred the NSFW V1 over the Fast Move V1 version, so depending on what you’re looking for, one version may suit you better than the other.

    For these V2 versions, I tried a new approach:
    ✔ I made sure that most sexual poses work, while the model is also good for SFW content
    ✔ More flexible for general use


    🔥 NSFW Fast Move V2

    Improvements included in this version:

    • Better prompt understanding

    • Better camera understanding

    • Reduced unnecessary back-and-forth movements outside sexual poses
      (cannot be completely removed, but strongly reduced)

    • Improved bounce effect on the buttocks

    • If a man appears, he will no longer automatically attempt to penetrate the woman when she is nude

    This version is designed for those who want more dynamic scenes with more movement.


    💜 NSFW V2

    The difference between this version and NSFW Fast Move V2:

    • Less camera control

    • Less camera understanding

    • But body movements are less pronounced (breasts and buttocks)

    • Some preferred V1 NSFW to V1 NSFW Fast Move, and this version keeps that spirit

    For varied sexual poses, check the previews — there are many.
    You can use the shown prompts and adapt them to your images, but of course, other prompts will work as well.
    Don’t hesitate to use other LoRAs for creating specific concepts.

    • Steps: 2+2
      (Jellai recommends 2+3 for even better results, and I agree)

    • Sampler: Euler simple

    • CFG: 1

    • This model already includes the Lightning LoRAs; don't add them again, or the quality will be degraded

    • You need to download both models: H for High and L for Low.

    Here is an example made with the base workflow + the First/Last Frame workflow + upscaler (only on the starting images) + inpainting (cum).

    https://civarchive.com/images/114733916


    LoRA: Penis Insert WAN 2.2

    LoRA weight: 1

    Other examples with different LoRAs

    Recommended LoRA to obtain a very realistic vagina and anus:
    https://civarchive.com/models/2217653?modelVersionId=2496754

    💡 Tip for Anime Style:

    If you’re working in an anime style, feel free to follow the advice of @g1263495582 (thanks to him for this).
    Try these LoRAs together or separately; they help maintain face consistency:

    🔹 Note:
    For these LoRAs, use only Low Noise.
    For the examples mentioned above, use High and Low Noise as indicated.

    Many things can be done with the model, but don’t hesitate to use other LoRAs for specific purposes. And don’t hesitate to lower the LoRA strength to preserve the face as much as possible.


    📷 Dynamic prompt example with different camera angles:

    (at 0 seconds: wide shot showing the woman standing in the snowy plain, a massive giant dragon emerging behind her, snow cracking and dust rising).
    (at 1 second: the woman jumps backward onto the dragon’s back as it bursts fully from the sky, camera tracking the motion from a side angle, debris and snow flying).
    (at 2 seconds: medium shot from the side, the woman balances heroically on the dragon’s back as it begins to run forward across the snowy plain, slow-motion on her posture).
    (at 3 seconds: close-up on the woman’s determined face, camera slightly low-angle to emphasize her heroic stance, snow and debris flying around).
    (at 4 seconds: dynamic travelling shot alongside the dragon, showing the snowy plain, scattered debris and ice fragments flying everywhere as it gains speed).
    (at 5 seconds: wide cinematic shot pulling back, showing the dragon taking off with the woman riding on its back, soaring above the snowy plain, epic perspective with snow, wind, and scale emphasizing the drama).

    For available camera angles, check further down in the “cam V2” model description.


    🔞 Normal Clip vs NSFW Clip:

    You can also use the NSFW version for your clip.
    It can bring positive effects for sexual scenes, but it can also cause issues, as in this example:

    Here are the links to NSFW clips (thanks to zoot_allure855 for correcting the BF16 version):


    If you have any questions, don’t hesitate to ask!

    Many people ask me for help via private message. You can do that, no problem, but I would appreciate it if you could ask in the comments section instead; it could help other people. Thank you.

    🟣 Lightning Edition – NSFW I2V camera prompt adherence

    ⚡ Model Presentation (2 versions available)

    I decided to release 2 versions. Both can produce different results, as you can see in the previews (the seed was the same).

    • Fast Move Version: provides more movement, and the movements will be faster, with better prompt understanding and camera handling
      (you can see it in the 4th preview "fast move high" where the man slaps the woman)

    • Natural Motion Version: offers more natural breast movements depending on the situation and produces slower scenes.

    👉 Check both and choose the one that works best for you.

    This NSFW edition is, of course, focused on sexual poses.
    You should achieve very good results.
    To create specific concepts, feel free to use specific LoRAs — they work very well with this model.


    • Steps: 2+2
      (Jellai recommends 2+3 for even better results, and I agree)

    • Sampler: Euler simple

    • CFG: 1

    • This model already includes the Lightning LoRAs; don't add them again, or the quality will be degraded

    You can also use this T5, which may improve understanding:
    https://huggingface.co/NSFW-API/NSFW-Wan-UMT5-XXL/tree/main?not-for-all-audiences=true


    📝 Usage Tips

    • Start with a clean, high-resolution image for better results.

    • The model will not change the face; if it does, increase the resolution.

    • Also adjust your prompt if you are not getting what you want.

    • Wan understands some terms well, but not all.

    • For prompts, check the video previews: adapt them to your image.
      Other prompts will work too.


    🎥 Training

    I trained 2 LoRAs:

    • The first one using several videos and images of different sexual positions.

    • The second one to bring more dynamic motion.

    Respect to everyone who creates this type of LoRA — it requires a lot of work.


    🙏 Credits

    This model wouldn’t exist without the incredible work of these creators:

    A special thanks to Alcaitiff and CubeyAI, two very kind and humble people.


    🔔 Important

    Please don’t support my work with buzzes here, I don’t need it.
    If you want to support someone, support the creators listed above — they truly deserve it.


    💬 Feedback

    Feel free to give me feedback, positive or negative, to help improve future updates.

    Update WAN 2.2 V2 CAM I2V – New Feature: Camera & Prompt Improvements

    This custom version of WAN 2.2 I2V has been updated to deliver better prompt comprehension and improved handling of camera angles and cinematic movements. It provides more accurate scene interpretation, smoother transitions, and enhanced control over dynamics.

    Key Features:

    • Excellent understanding of prompts and scene composition.

    • Supports various camera angles and movements, including zoom, dolly, pan, tilt, orbit, tracking, and handheld shots.

    • Ideal for cinematic storytelling, animated sequences, and creative image-to-video projects.

    • Flexible step count: standard 4 steps (2+2), which can be increased for higher fidelity.

    • Recommended sampler: Euler simple.

    You can, of course, use your usual prompts; this is just one example among many.


    Example Prompt with Different Camera Angles

    (at 0 seconds: wide frontal shot of a man standing in front of an open fridge, cinematic lighting, subtle ambient kitchen reflections, the fridge contents visible, camera static).
    (at 1 second: medium shot from the front as he opens the fridge fully, reaches for a can, slight zoom-in to emphasize the action, cinematic framing).
    (at 2 seconds: camera shifts to a side medium shot, tracking him as he lifts the can to his mouth, fluid movement, maintaining lighting and reflections).
    (at 3 seconds: camera starts a smooth 360-degree orbit around the man, following him as he drinks from the can, motion fluid, background slightly blurred for cinematic effect).
    (at 4 seconds: close-up on his face and upper body while drinking, orbit continues subtly, fridge reflections accentuating realism, cinematic polish).
    (at 5 seconds: final wide shot as he lowers the can, camera completes orbit to original angle, showcasing the kitchen space, lighting, and dynamic movement).
    

    Available Camera Movements

    Zoom / Dolly

    • zoom in

    • zoom out

    • camera zooms in on subject

    • camera zooms out gradually

    • dolly in

    • dolly out

    • camera dollies in slowly

    • camera dollies out steadily

    • crash zoom

    Pan

    • pan left

    • pan right

    • camera pans across the scene

    • gentle pan left

    • sweeping pan right

    Tilt

    • tilt up

    • tilt down

    • camera tilts up to reveal…

    • camera tilts down from…

    Orbital / Tracking / Arc / Rotation

    • orbit around subject

    • 360° orbit

    • camera circles around

    • tracking shot

    • camera tracks alongside subject

    • arc shot

    • curved camera movement

    Other Movements & Styles

    • static camera / static shot

    • handheld shot

    • camera roll

    Note:
    LoRAs work perfectly with this model, offering full compatibility and consistent results across styles and concepts.

    # SUPPLEMENTARY ADVICE

    You can use negative prompts, but be careful: this will double the generation time.
    Only use them if you really want to prevent something from appearing in your video.
    In that case, enable the corresponding node; otherwise, keep it disabled.

    ⚠️ Important
    The model must be used with CFG set to 1, so negative prompts do not work by default.
    However, there is a simple way to enable them.

    How to enable negative prompts:

    1. Open the Manager

    2. Search for KJNodes and install it

    3. In your workflow, add the WAN Nag node

    4. To use it correctly:

      • Connect this node after the LoRA Loader

      • Feed it with the negative prompt

      • Then connect it to the first KSampler (High)

    👉 Only use this option when necessary, to avoid unnecessarily increasing generation time.

    And here is the negative prompt for unwanted movements:
    motion artifacts, animation artifacts, movement blur, motion distortion, dynamic distortion, shifting shapes, unstable render, instability, wobbling effect, jittering effect, vibrating render, inaccurate details, visual noise, distorted surfaces, rendering errors, warped shapes, exaggerated butt movement, jiggle, overanimated hips, unnatural butt motion, hyper bounce, extreme curves, distorted hips, unnatural pose, unrealistic anatomy, deformed body, disproportionate body, floating limbs, blurry textures, clipping, stretching, low detail, messy background, artifacts, butt bounce, moving hips, swinging hips, shaking butt, wiggling butt, moving lower body, moving pelvis, jiggling buttocks, bouncing butt, unstable stance, unnatural hip motion, exaggerated hip movement, hip sway, hip rotation, bottom motion, pelvis motion, wobbling hips, fidgeting lower body, dancing hips, pelvic movement, motion blur, unnatural movement

    And here is the negative prompt from the official ComfyUI workflow:
    色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走
    (English translation: overly vivid tones, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression artifacts, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, crowded background, walking backwards)

    Note: If you use the default workflow without the node, this negative prompt will not work.

    UPDATE: Lightning Edition – T2V

    I’m not really experienced with T2V myself, but a colleague who works with it a lot tested this version and confirmed that it performs very well. From the few tests I managed to do on my side, I got the same impression, although I haven’t compared it directly with the base version yet.

    The settings are the same as in the image-to-video version: 2+2 steps or more if you prefer, CFG at 1, and Euler simple (though other samplers also work great).

    I don’t plan to make an NSFW version for the T2V release,
    but for the I2V version, I’m already quite happy with the first results.

    Update WAN 2.2 V1.1 I2V

    Updated version of the original Lightning merge — same settings 2+2 steps or more, featuring more movement and smoother flow (depending on the prompt).

    The model already works very well in NSFW. Just use the right LoRAs, and the movement will improve.

    WAN 2.2 V1 I2V

    This checkpoint is based on the original WAN 2.2, with the Lightning WAN 2.2 and Lightning WAN 2.1 LoRAs already integrated. This improves image quality, makes motion smoother and more dynamic, and removes the slow-motion effect that can occur with the Lightning models.

    A common setup is to use 2 steps on the high model and 2 steps on the low model, though other settings may work as well. Do not apply the Lightning LoRAs manually — they’re already included in this checkpoint.

    My workflow

    https://civarchive.com/models/2079192/wan-22-i2v-native-enhanced-lightning-edition

    Description

    FAQ

    Comments (208)

    imafan927 · Nov 23, 2025

    Finally got around to trying your FP8 model again (after adding a real swap file). Didn't do many runs yet, but great so far! On 32 GB RAM, I saw it use 8-9 GB swap at peak, users should plan ~40 GB memory for RAM + swap combined. Haven't tried your new Q4 version yet, but I plan to! Even with swap usage, since my swap is on a fast NVMe, generation times were still reasonable for the resolution/length I tried.

    taek75799 (Author) · Nov 23, 2025

    Hello and thank you for your feedback. I’m glad you were able to get the model working. So, if I understand correctly, you are using block swaps? I assume you are using a workflow with WAN Wrapper? May I ask which GPU you have? And have you tried using a native workflow? I’m asking because, on my end, WAN Wrapper is much more resource-intensive and I can’t go very high in resolution.

    imafan927 · Nov 24, 2025

    For workflow, I took the Smooth Mix Wan 2.2 T2V workflow that had fewer custom nodes and worked for me, and modified it. I loaded your models instead, and added the Painter and FindPerfectResolution nodes from your workflow, and a Load Image node, and made it into an i2v workflow. I don't know what block swaps are, I'm on linux and just created a regular swap file on my btrfs partition on my nvme drive. I didn't have a real swap file before, just zram compression virtual swap. I don't know anything about WAN Wrapper, so I guess I'm not using that? I link from the PainterI2V node to the ksamplers. No loras. In these initial tests I've been setting the image width to 360, and this workflow has an upscale, and I have that set to 1.5x, for a roughly 528x1024 resulting video. I may try a higher resolution, as this seems to be working well. I can get length 81 (5 seconds) in under 3 minutes. My CPU is a Ryzen 5 9600x.

    taek75799 (Author) · Nov 24, 2025

    @imafan927 Alright, thanks for your response. If it’s this one, it’s good: https://civitai.com/models/1847730?modelVersionId=2091039. It uses the native nodes.

    imafan927 · Nov 24, 2025

    @taek75799 It's this one: https://civitai.com/models/1995784?modelVersionId=2323420
    And then my modifications. I think that's pretty similar? I could always try the one you linked.

    taek75799 (Author) · Nov 24, 2025

    @imafan927 I didn’t understand, the link you sent me is a model? Anyway, I think what you’re using is great since it’s the native version, and I can see that you know how to use the nodes quite well, since you picked Find Perfect Resolution, which is very useful, and Painter I2V, which is also an excellent node.

    As for the rest of my workflow, it’s mostly standard: most people use the same things in their workflows for long videos. The difference, maybe, with mine is that every 5 seconds you can choose a different prompt and different LoRAs.

    hambonebeaver · Nov 24, 2025

    can i run this on my 5070 ti with 16gb VRAM?

    taek75799 (Author) · Nov 24, 2025

    Hello, yes, I’m running it with a 4060 Ti. Your GPU is better.

    BigSad11 · Nov 24, 2025

    This model helped with my cow fetish. Thank you. 🐮

    taek75799 (Author) · Nov 24, 2025

    Lol, you’re welcome

    kostyg31337 · Nov 24, 2025

    May I ask you to share GGUF workflow?

    taek75799 (Author) · Nov 24, 2025

    Hello, you just need to download a video from the Q6K model preview, for example, and drag it into your ComfyUI interface; you will get the WF.

    happylittleteapot · Nov 24, 2025

    Also, the only difference between a GGUF workflow and a safetensors one is the node used to load the model. I don't know if Comfy ships one yet, but if you already wanted to load a GGUF you likely have that same node, so idk lol

    maxanjo512 · Nov 24, 2025

    @taek75799 
    "download a video from the Q6K model preview"
    Where is it?

    taek75799 (Author) · Nov 24, 2025

    @maxanjo512 This one, for example:
    On the page https://civitai.com/images/111331204, right-click on the video and select ‘Save video as’.
    Once you have the video, drag it into your ComfyUI interface and you will get the entire workflow that was used to create the video.

    kostyg31337 · Nov 24, 2025

    @taek75799 thanks a lot!

    radiantResistor · Nov 24, 2025

    Anyone missing Super Power Lora Loader node? I have rgthree installed and updated but can't seem to get this node.

    taek75799 (Author) · Nov 24, 2025

    Hello, download the v1 of my workflow. Inside, you will find the folder with the workflow and the rgthree folder. Go to your custom nodes folder and replace it with this one: https://civitai.com/models/2079192?modelVersionId=2353363

    wamwampampam459 · Nov 24, 2025

    I heard this model works super well. But I've never used ComfyUI or anything like it, and I am wondering how to set it up? Can someone point me in a direction so I could set this up and experiment with it?

    happylittleteapot · Nov 24, 2025

    This can seem daunting. Simply three links, one use of CMD/Git, and a bunch of downloading:

    https://docs.comfy.org/installation/comfyui_portable_windows

    This link is possibly unnecessary nowadays, but a bit of a legacy requirement. You could try some workflows without it (like the one in the third page, and see how your UI responds and if you can get the necessary plugins): https://github.com/Comfy-Org/ComfyUI-Manager

    https://comfyanonymous.github.io/ComfyUI_examples/wan22/

    Once you make it to the third link, the models on this CIVITAI page can be used to replace the base WAN diffusion models. I would still recommend downloading the base models, but the model on this page may end up being a default for you if you enjoy it.

    You'll want at least 100GB of disk space free, like probably thrice that.

    ikwerner1432 · Nov 24, 2025

    Could you release a Q8 for older GPUs?

    taek75799 (Author) · Nov 24, 2025

    Hello, the Q8 version is the one that requires the most resources, but it is also the best.

    I will publish it tomorrow.

    redlucario1735 · Nov 24, 2025

    @taek75799 is the Q8 too much for a 5080? i'm not sure what the sweet spot would be, i have 32gb of ram.

    taek75799 (Author) · Nov 24, 2025

    @redlucario1735 Hmm, it’s true that RAM can make a difference. It would be worth trying. I have a 4060 Ti with 16 GB of VRAM and I also have 64 GB of RAM, and when I upgraded from 32 to 64 GB, I really noticed a speed improvement.

    forfun5800 · Nov 25, 2025

    @taek75799 Is it good with a 3090 (24 GB VRAM) and 32 GB of RAM?

    serpentine · Nov 24, 2025

    Before I possibly waste time downloading 22Gb, is this only trained on women/hetero or can it do men as well?

    taek75799 (Author) · Nov 24, 2025

    Hello, the model is oriented toward straight content. I haven’t tried gay scenes, for example. But if you give it an image, it might be able to do it, though I’m not sure, sorry.

    serpentine · Nov 25, 2025

    @taek75799 I may give a go later on as I am forced to use hetero content anyway. Thanks for the reply though! I greatly appreciate it!

    taek75799 (Author) · Nov 25, 2025

    @serpentine I’ve thought a bit about your request. I recently came across a LoRA that might interest you:
    https://civitai.com/models/2095123/nfsw-hardcore-gay-sex?modelVersionId=2370454
    Maybe you already know it. To be honest, I think that if you use the NSFW model, it might possibly turn the penis into a vagina, but I'm not really sure — you would need to test it.
    But I think the best option would be either to use the base WAN version or the V2 Cam version here. It’s not NSFW-oriented, it will give you better camera control and more dynamics compared to the base WAN, and it has no NSFW content.
    However, regarding the LoRA, it seems to only have one pose unfortunately. But you can, I’m sure, go further by generating blowjob images for example and animating them with WAN. I’m sure it would work.

    serpentine · Nov 25, 2025

    @taek75799 I've tried that lora and it's... okay xD. I did end up downloading the Q6K models and they are doing quite well! I'm even using my own, very basic, workflow with quite good results! Only problem I've been having is keeping the guy's face in frame and in focus, though I assume that's from the training data being heavily female. Thank you so much for being so thoughtful and thorough!!!

    taek75799 (Author) · Nov 25, 2025

    @serpentine Great, then, if it works for you. For the face, try increasing the resolution if you can — it’s really the key to keeping maximum consistency.

    howard_spinks · Nov 25, 2025

    Extremely impressive! Hoping you'll release quantized versions of the non-fast movement NSFW version since the fast movement one likes to live up to the name and it's not always appropriate. Been impressed at how well it handles non-NSFW too, going to have to try your model purpose built for it

    taek75799
    Author
    Nov 25, 2025· 1 reaction

    Hello, and thank you for your feedback. To be honest, I wasn’t planning to release the GGUF versions for the NSFW model because no one had asked for them. I always take users’ requests into account.
    What I can offer is that you wait until this weekend: I’m planning to release NSFW V2, and there will be only one version.
    This model was made with a different approach. Let’s say that if this version doesn’t suit some people, and if, like you, they prefer the NSFW V1, then I will publish the GGUF versions as promised.

    SutadasutoNov 25, 2025
    CivitAI

    I'm starting to play with your models and I'm having a good time so far, but I'm wondering what are the differences between "NSFW FAST MOVE FP8" (which is more recent) and "V2 CAM I2V FP8". I could guess the one with "nsfw" works better with nsfw content, but the file names of the "V2 CAM I2V FP8" safetensors include "nsfw" too.

    taek75799
    Author
    Nov 25, 2025· 1 reaction

    Hello, and thank you for your feedback and question.
    The cam model is designed to give better camera control and more dynamism to the video.
    Regarding the claim that it is labeled NSFW, that's incorrect: I simply allowed people to post NSFW content. They can produce it using LoRAs, but the model itself cannot generate it.
    Actually, the base WAN can do it, but only on a small scale.

    hoy829269Nov 25, 2025
    CivitAI

    Its dynamic capabilities are fantastic, but I don't understand why everyone, including men, has a pair of jiggling breasts; unfortunately, it doesn't seem to fit well with the anime style.

    taek75799
    Author
    Nov 25, 2025

    Hi, lol, I haven’t noticed this issue with men, but I imagine it can happen. Let’s not forget that we are using WAN; the model is not perfect yet.

    For anime, I made quite a few previews with it, and I think it’s pretty good. Don’t forget one very important thing: the resolution does a lot of the work to maintain coherence. If possible, try to use 1024×1024 for a square image, and follow the same logic for landscape or portrait (2048 total).
    If you can't, you can try a Q4_KM model: there’s not really any loss—on the contrary, you’ll get better quality if you can increase the resolution.
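
    The "2048 total" rule above can be sketched as a small helper: split the pixel budget across the aspect ratio, then snap each side to a multiple of 16 (an assumption; Wan-family latents generally want dimensions divisible by 16). `wan_dims` is a made-up name for illustration:

```python
def wan_dims(aspect_w, aspect_h, total=2048, mult=16):
    """Split a total pixel budget (width + height) across an aspect ratio,
    snapping each side to a multiple of `mult`."""
    h = total * aspect_h / (aspect_w + aspect_h)
    w = total - h
    snap = lambda x: int(round(x / mult) * mult)
    return snap(w), snap(h)

print(wan_dims(1, 1))    # square   -> 1024 x 1024
print(wan_dims(16, 9))   # landscape, same 2048-pixel budget
```

    A 9:16 portrait would just flip the landscape result, keeping the same total budget.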

    taek75799
    Author
    Nov 25, 2025

    Thank you for your feedback.

    qekNov 26, 2025

    @taek75799 I am not sure that he can run it, I am not sure that he upgraded his PC either, he said "My graphics card only has 6G. It takes me 15 minutes to generate with pony." a year ago, and he still uses SD WEBUI v1.10.1 and the civitai generator
    @hoy829269 Actually, it suits the anime style quite well, and avoiding those weird movements isn't hard either.

    taek75799
    Author
    Nov 26, 2025

    @qek Ah yes, indeed, that might be difficult. Maybe with a Q3 model, I’m not really sure, you would have to try. On Civitai, you have the WAN base model in Q3, I think.

    ronnyboi20112Nov 25, 2025
    CivitAI

    Why does it change the face completely? I tried the Q6 model with a LoRA.

    taek75799
    Author
    Nov 25, 2025

    Hello, have you tried without a LoRA? Try increasing the resolution.

    hahahihi6969Nov 26, 2025
    CivitAI

    I run this FP8 version with the general NSFW LoRA at 0.7 strength. Why does it change the face so much? Any suggestions?

    taek75799
    Author
    Nov 26, 2025

    Hello, and thank you for your feedback. Honestly, the only thing I usually suggest is increasing the resolution; it does a lot to keep the face consistent. I should try it with the general NSFW LoRA, but do you really need it? It’s already built into the model.

    wamwampampam459Nov 26, 2025
    CivitAI

    For some reason SimpleMathInt+ does not exist for me lol. Any solutions, this is my only problem now

    wamwampampam459Nov 26, 2025

    okay after just inputting a random num just for it to run I also have no module named 'sageattention' on KSampler(advanced). I don't know what it means so if that would also be helped then that would be nice

    mrcane56Nov 26, 2025

    @wamwampampam459 Look up how to install SageAttention. It's not particularly easy and it can be finicky. You can just bypass the nodes, though generation time will go up.

    kostyg31337Nov 26, 2025

    Yes, same problem. Using another flow

    taek75799
    Author
    Nov 26, 2025

    @kostyg31337 @wamwampampam459 Hello, the SimpleMath node isn’t essential; you can remove it. It’s only there to make things easier. If you remove it, you will need to set two resolutions: width and height. You can replace it with any int node, for example the one from KJNodes called Int Constant.

    BoneMechNov 27, 2025

    @kostyg31337 @wamwampampam459 These videos worked very well for me for installing SageAttention. Both contain instructions for the portable install, and one covers the desktop install as well. It is literally step by step, and you can watch on screen as you go. I recommend watching the shorter video first, as it also goes over desktop installation and is very straightforward in approach. I found that SageAttention is worth the effort; note it only works with NVIDIA GPUs.
    https://www.youtube.com/watch?v=ex_rK4FfF0w
    https://www.youtube.com/watch?v=VswdrceLIrM

    kharadamon165Nov 27, 2025

    thanks for the sage advice had a similar problem with trying to find and install the Super Power Lora Loader from rgthree. Currently can only find the Power Lora loader?

    taek75799
    Author
    Nov 27, 2025· 1 reaction

    @kharadamon165 Hello, download the V1 of my WF: https://civitai.com/models/2079192/wan-22-i2v-native-enhanced-lightning-edition-long-video-multi-prompt-new-node-painter. Inside, there is a folder named RGTree. Go to your custom node folder and replace the RGTree folder with mine; you will get the Super Power LoRA.

    kharadamon165Nov 29, 2025· 1 reaction

    @taek75799 ty very much for the help

    Seii1Nov 26, 2025· 1 reaction
    CivitAI

    Do you have a simple workflow that I can use?

    mrcane56Nov 26, 2025· 1 reaction

    At the bottom of the model's description.

    GsssyikNov 28, 2025

    @mrcane56 They said simple, not spaghetti monster nightmare

    zereshkealtNov 26, 2025· 2 reactions
    CivitAI

    Thanks for the work, but I still have the changing-faces issue. Do you have a solution for it?

    taek75799
    Author
    Nov 26, 2025· 2 reactions

    I also have this problem at low resolution on the base WAN model, but when I increase the resolution, it doesn’t change the face. Try setting at least 1024×1024 (square image). If you can’t, try using a lower GGUF model. Honestly, the Q4_KM model works very well, and the resolution really makes a difference.

    forever9801Nov 27, 2025· 2 reactions

    When I try it with an anime image in I2V, without a LoRA the face changing (becoming realistic) is minor, but the bigger problem is that the character tends to close their eyes in the middle of the action. My prompts already contain those "face static" prompts, though.

    zereshkealtNov 27, 2025

    @forever9801 i did notice that too, the closing of eyes!! i thought it's making an angry face or something!!

    zereshkealtNov 27, 2025

    @taek75799 I've been using the Q8 with 720p-like resolutions; I'll try 1024×1024, though I don't think my 8GB GPU could manage it. I have to say, idk what happened, but yesterday I did a couple of I2V generations and the faces didn't change that much. DeepSeek suggested a couple of things and I did them, clearing caches and changing browser; I'm not sure what the actual solution was.

    forever9801Nov 27, 2025· 2 reactions

    So far I think the model is good in some common sex positions, but even the working ones I think the facial expression is moving too much :/

    g1263495582Nov 29, 2025· 1 reaction

    look into Lynx, Phantom, or Wan 2.2 Animate for face repair. I've noticed some users chaining Lynx with Wan 2.2, though I haven't tested it personally.

    zereshkealtNov 29, 2025

    @g1263495582 thx, i'll check them.

    LiqviorNov 26, 2025
    CivitAI

    Like I said, I love this model, but I have an issue with women's pubic hair: it doesn't appear, and I don't know how to prompt it D:

    taek75799
    Author
    Nov 26, 2025

    Hi, to be honest, I don’t know if it’s capable of doing that, maybe with a LoRA if one exists :)

    banlg22793Nov 27, 2025
    CivitAI

    If the seed remains the same, the results are too similar. How do I fix that?

    taek75799
    Author
    Nov 27, 2025

    Hello, I didn’t understand: is the seed the same and the prompt as well?

    TwerkmanNov 27, 2025

    change the seed

    forever9801Nov 27, 2025

    Isn't this how it suppose to work? lol.

    That's the exact point of keeping the seed the same, because with the same seed, if nothing else changes, then it's expected that the results will be exactly the same. You can use a fixed seed as a tool to accurately figure out how changing other settings affect the result. This is the whole purpose of a fixed seed.

    banlg22793Dec 8, 2025· 2 reactions

    MY FAULT. I USED I2V IN A T2V WORKFLOW.

    BiggusDingusNov 27, 2025
    CivitAI

    Would you be able to briefly explain the difference between the FP8 and Q# versions? Why would one want to pick one over the other?

    So far I've only tried the i2v Cam Q4 version, and it's one of the best models I've used. I've previously been using Q5 versions of other models (which you don't seem to have), so guess I'll try Q6 next and see how my PC handles it.

    taek75799
    Author
    Nov 27, 2025· 1 reaction

    Hello, to keep it simple: the highest precision is BF16, followed by FP16, but both of these use a lot of resources. Next come Q8, Q6, and Q5 (equivalent to FP8), then Q3 and Q2. Below Q4, the quality really starts to drop, so if possible, stick to Q8 or Q6.
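
    As a rough way to see what that precision ranking means for file size and VRAM, here is a hedged estimate. The bits-per-weight figures are approximations borrowed from llama.cpp-style quant conventions, not measurements of these specific files, and `est_gb` is a made-up helper name:

```python
# Rough GGUF/checkpoint size estimate for a ~14B-parameter Wan 2.2 model.
# Bits-per-weight values are approximate (llama.cpp-style conventions);
# real files vary with tensor layout and embedded metadata.
BITS_PER_WEIGHT = {
    "BF16": 16.0, "FP8": 8.0, "Q8_0": 8.5,
    "Q6_K": 6.6, "Q5_K_M": 5.7, "Q4_K_M": 4.9,
    "Q3_K_M": 3.9, "Q2_K": 2.6,
}

def est_gb(params_billions, quant):
    # params * bits / 8 bits-per-byte, expressed in (decimal) GB
    return params_billions * BITS_PER_WEIGHT[quant] / 8

for q in BITS_PER_WEIGHT:
    print(f"{q:7s} ~{est_gb(14, q):5.1f} GB")
```

    This is only a sizing heuristic; quality per bit is what actually drops sharply below Q4, as the comment above notes.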

    rollerinotNov 27, 2025
    CivitAI

    Did anyone have issues with the NSFW UMT5? Neither one loads: the BF16 shows an invalid tokenizer warning, and the FP8_Scaled version says 'clip missing'. Is there a fix for that?

    taek75799
    Author
    Nov 27, 2025

    Hello, for me the BF16 version doesn’t work, but the FP8 one works.
    It’s annoying.
    From what I’ve noticed, the NSFW CLIP doesn’t affect sexual positions that much most of the time, but it does improve, let's say, horrific scenes.

    Have you looked on the internet to see if someone has the same error?

    gxbsyxhNov 28, 2025
    CivitAI

    Hello dear author: when I use the wan22EnhancedNSFWCameraPrompt_nsfwFASTMOVEQ6KHigh model in a T2V workflow, the video is blurry, while it works normally in I2V. Does this GGUF not support T2V?

    taek75799
    Author
    Nov 28, 2025· 2 reactions

    Hello, you can’t use the NSFW model with a T2V workflow because it’s an I2V model.
    However, I’ve seen a trick several times to use an I2V model as T2V: just put a white or black image in Load Image and run it. Of course, you may get 2 or 3 frames with that white or black image, but apparently it works quite well.
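
    If you want to try that blank-frame trick without opening an image editor, a stdlib-only sketch like the following can produce the solid starter frame for Load Image. `solid_png` is a hypothetical helper and the output filename is arbitrary:

```python
import struct, zlib

def _chunk(tag, data):
    # PNG chunk: length, tag, payload, CRC32 over tag+payload
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def solid_png(path, width, height, rgb=(0, 0, 0)):
    """Write a single-colour 8-bit RGB PNG using only the stdlib."""
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    row = b"\x00" + bytes(rgb) * width          # filter byte 0 + pixel data
    idat = zlib.compress(row * height)
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
                + _chunk(b"IDAT", idat) + _chunk(b"IEND", b""))

solid_png("black_start.png", 1024, 1024)  # feed this to Load Image
```

    As noted above, expect the first 2 or 3 frames to show the solid colour before the motion takes over.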

    HaikoNov 28, 2025· 1 reaction
    CivitAI

    Thanks for your work so far. May I ask what node I have to install for "super power lora loader (rgthree)"? I would need something like a git link; Comfy does not show me anything helpful.

    taek75799
    Author
    Nov 28, 2025

    Hello and thanks for your feedback. Look here: https://civitai.com/models/2079192/wan-22-i2v-native-enhanced-lightning-edition. Take the V1 version; inside you’ll find a folder named RGTree. Then, go to your custom node folder and replace the RGTree folder with mine. But you don’t really need the Super Power LoRA — well, if you often use combinations of LoRAs it’s very useful; otherwise take the version without Super Power LoRA: it will just include Power LoRA.

    coachbateNov 28, 2025
    CivitAI

    You apply sageattn, fast fp16, and shift to the model in each step. Can you just pass the model after the ModelSamplingSD3 node to each step so it doesn't have to repeat that, and then just apply the different loras at each step to the model?

    taek75799
    Author
    Nov 28, 2025

    Hello, to be honest I left ModelSamplingSD3 on each step because I tend to change it depending on the LoRAs used; it sometimes gives better transitions. But yes, you can keep it only once.

    zoot_allure855Nov 29, 2025· 2 reactions
    CivitAI

    I fixed ns_fw_wan_umt5-xxl bf16.

    Check out the only model under.

    https://huggingface.co/zootkitty/

    I'm finding it fixes some issues I was facing with this and other fine tunes. I get better prompt adherence with it, and less general weird behavior.

    taek75799
    Author
    Nov 29, 2025

    Thank you for that. Do you want me to put it in the description?

    zoot_allure855Nov 29, 2025

    @taek75799 Sure if you like.

    azra1lNov 29, 2025

    @zoot_allure855 You, sir, are the hero we needed.

    And what does this version fix actually ? Any concrete example ? How was it fixed ?

    taek75799
    Author
    Nov 29, 2025

    @zoot_allure855 Thank you. I will add it in the next update, but I will also make a small comparison. There is one important thing to know about the NSFW version.

    zoot_allure855Nov 29, 2025

    @randomchatter1234776 I fixed this like 5 days ago and have been using it for a while. What was wrong? The spiece_model key was missing, which is just a hash identifying the model. I copied the one from umt5-xxl into this. What does it fix? Well, many times males will spit instead of e-jac; this helps with that. Seminal fluids look better, less opaque and more translucent. I encounter fewer situations where transitions are otherworldly. I mean, it's not perfect, just more consistent than the FP8 one.

    @zoot_allure855 Incredible. thank you very much.

    qekNov 29, 2025

    @taek75799 I have no issues with the original one (bf16)

    taek75799
    Author
    Nov 29, 2025

    @qek Thanks, that’s good to know. It’s strange that some people have issues and others don’t.

    g1263495582Nov 29, 2025

    @qek That's weird... hmm. 

    zoot_allure855Nov 29, 2025· 4 reactions

    @taek75799 Technically the original bf16 version has all the trained encoder data there. You can bring it up in python and use it as is. But when comfy (at least the latest version) brings up the model it looks for a key called spiece_model. If that is not there comfy throws an exception. You can go to the NS_FW repository link you have in your description and click on the safetensors models to see the keys. You'll notice that the FP8 one has spiece_model as the last entry, where this is missing in the BF16 one. It's possible that software other than comfy (maybe even old comfy releases) ignores spiece_model because all this does is provide information about what model it is. If you already know what the model is you can just access the encoder layers directly.
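
    Since a safetensors file is just an 8-byte little-endian length prefix followed by a JSON header, you can check for the spiece_model key yourself without loading the model. This is an illustrative stdlib-only sketch; `safetensors_keys` is a made-up name and the filename in the comment is a placeholder:

```python
import json, struct

def safetensors_keys(path):
    """List tensor keys in a .safetensors file by reading only its header
    (8-byte little-endian length prefix, then that many bytes of JSON)."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional free-form entry, not a tensor
    return [k for k in header if k != "__metadata__"]

# e.g., against a downloaded text encoder (placeholder filename):
# print("spiece_model" in safetensors_keys("ns_fw_umt5-xxl-bf16.safetensors"))
```

    This matches what the comment above describes: the FP8 file has spiece_model as a key while the original BF16 file is missing it, which is what makes recent ComfyUI versions throw an exception.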

    CemreKNov 29, 2025
    CivitAI

    Hey thanks for making the model! I just have one question: no matter what prompt I enter, a male genital organ always appears. Can we hide this? Or is there a special prompt?

    taek75799
    Author
    Nov 29, 2025

    Hello, I don't think you can, for example, add a vagina if the AI detects a man. Maybe with the CLIP encoder in NSFW BF16; I don't really know, it would need to be tried. Otherwise, use a LoRA if one exists.

    CemreKNov 29, 2025

    @taek75799 I tried BF16 but same...

    taek75799
    Author
    Nov 29, 2025· 1 reaction

    @CemreK It might be difficult, I think, without a LoRA.

    Doom666milfNov 29, 2025
    CivitAI

    Thanks a lot for such great work! Just one question: which one should I pick for a 4060 8GB?

    g1263495582Nov 29, 2025· 1 reaction

    Stick with Q4, unless you've got 32GB of RAM or more—then you can give FP8 or Q6 or other a try.

    genelec1031a915Nov 29, 2025· 1 reaction
    CivitAI

    Thanks! The ns_fw Q8 works best and doesn't change the face.

    juliusmartinNov 29, 2025

    which model ID

    genelec1031a915Nov 30, 2025

    urn:air:wanvideo-22-i2v-a14b:checkpoint:civitai:2053259@2439404

    genelec1031a915Nov 30, 2025

    urn:air:wanvideo-22-i2v-a14b:checkpoint:civitai:2053259@2439407 @juliusmartin

    cleptoesDec 3, 2025

    Hello, it changes the face every time for me... What is your workflow?

    taek75799
    Author
    Dec 3, 2025

    @cleptoes Hello, and thank you for your feedback. Would it be possible for you to send me your workflow with the settings that change the face? Ideally, you could send an image with the prompt so that I can show the result and post it here. I’m sure we can probably fix this if you want.

    cleptoesDec 3, 2025

    @taek75799 Hello, I used this workflow: WAN 2.2 GGUF I2V Simple Video Workflow - Main | Wan Video Workflows | Civitai

    The only thing I changed was the GGUF: I added your low and high models. For the LoRAs, I used lightx2v_I2V_14B_480 (the BF16 one) in both low and high, set to strength 1.00. Should I have a low and a high version? Or do I use one in both? In the KSampler, steps are 8, with the Wan 2.1 VAE encode. I see amazing results at the bottom of your page and get nowhere near that. I appreciate all the help.

    taek75799
    Author
    Dec 3, 2025

    @cleptoes Do not use the Lightning LoRAs with my model. Only use my model in high and low; using the Lightning LoRAs will degrade the video.

    cleptoesDec 4, 2025· 2 reactions

    @taek75799 Thanks! This was it. Amazing work btw and thanks again for the help!

    pixeldriverNov 29, 2025· 2 reactions
    CivitAI

    This is really great! Tip... to avoid going through adding that Nag node, just up the CFG to 1.1, then you can use the Negative prompt that's already there. It's working great this way for me. Thanks for making this! It was very much needed.

    randomchatter1234776Nov 29, 2025· 1 reaction

    Just a reminder that setting CFG to any value other than 1.0 automatically doubles generation time, because the negative prompt then requires a second model pass per step.

    pixeldriverNov 29, 2025

    @randomchatter1234776 Good to know!! Thanks much.

    pixeldriverNov 29, 2025· 1 reaction

    This is SOOOOO good. I just made an amazing video... one that I always struggled with. Lots of view changes and action happening. And my 1st attempt was a home run!! Thanks so much for sharing this amazing addition!

    pixeldriverDec 1, 2025

    This is by far the BEST addition to ComfyUI and Wan 2.2 that I've come across to date. Just wanted to add that. It gives so much control and it WORKS! Some of the things I've seen... the way my characters respond and make all sorts of totally realistic motions and silly little very believable facial movements, (with excellent camera control) is just amazing. Go here, camera go there, etc. Again... THANK YOU!!

    taek75799
    Author
    Dec 1, 2025

    @pixeldriver I’m glad you like the model, and thank you for your feedback. Which version are you using?

    g1263495582Nov 29, 2025
    CivitAI

    Is the Q8 based on FP8, FP16, or BF16? Also, can I make a LoRA request?

    taek75799
    Author
    Nov 29, 2025

    All GGUF models are based on BF16. Sorry about the LoRA, it won’t be possible. I currently have many things in preparation.

    g1263495582Nov 29, 2025· 1 reaction

    @taek75799 thanks you.

    dredfjorst119Nov 29, 2025
    CivitAI

    Is there any chance you will upload these models to huggingface?

    taek75799
    Author
    Nov 29, 2025

    Hello, other people had asked me the same thing. I tried, but I couldn't do it: I always get an error, the upload fails

    dredfjorst119Nov 29, 2025

    @taek75799 Dang, that's so sad. I use Runpod to run models, and I have an issue with downloading files from Civitai: they just don't work and throw a corrupted-file error when I try to start generating. But when I get them with wget from Hugging Face, everything works well. So now I use your workflow (which is the best workflow so far) without your models, haha; that's sad. I tried wget from Civitai, but the files don't appear in the folders.

    qekNov 30, 2025

    @taek75799 I have been posting on HuggingFace Hub

    Darju95Nov 30, 2025
    CivitAI

    Compared the standard Q8 model to your NSFW Q8, and there's a huge difference in terms of generation. It's a very good model, with satisfactory results at 2+2 steps. In my workflow with Sage Attention and Tritton, on a 4090 graphics card, I can generate a 5-second video in less than a minute. It's amazing.

    taek75799
    Author
    Nov 30, 2025

    Hello and thank you for your reply. Indeed, it’s very fast on your side. On my end, it takes a little over 2 minutes to generate with a 4060 Ti, which is really decent for a video model.

    Darju95Nov 30, 2025

    @taek75799 I'd like to ask you out of curiosity, what resolution are you using for the initial generation? And do you use the t2v model later to enhance the higher-resolution version?

    taek75799
    Author
    Nov 30, 2025

    @Darju95 I only use 4 steps and make sure to start with a very clean image. I upscale the image to at least 2K, and for the final video resolution I set the total to 2048 pixels (1024×1024 for square format). I interpolate to 64 fps. There’s no upscaling or second pass in T2V for the previews, but sometimes I do upscale with FlashVSR or Topaz. However, I prefer not to show those results here so everything stays as close as possible to what people can achieve with the model.

    taek75799
    Author
    Nov 30, 2025

    @Darju95 By the way, you just need to download the video and drop it into your ComfyUI. I leave the metadata in the file, but it seems that with recent versions of ComfyUI the metadata doesn’t appear anymore. For WAN, I’m using a slightly older version of ComfyUI.

    juliusmartinDec 1, 2025

    can you share a WF or link to attached one? so curious, that speed is insane

    taek75799
    Author
    Dec 1, 2025· 1 reaction

    @juliusmartin Here is mine: https://civitai.com/models/2079192/wan-22-i2v-native-enhanced-lightning-edition-long-video-multi-prompt-new-node-painter
    I don’t know if Darju95 is using a different one, but mine is quite basic. It’s just simple long-video generation with separate prompts and LoRAs, plus the Painter node on the latest version.

    kenpachi601674Dec 1, 2025
    CivitAI

    I am sorry, forgive my ignorance, but where do I put these GGUF files? I could not find it in the workflow. Also, should I put them in the diffusion models folder or the loras folder?

    qekDec 1, 2025

    /diffusion_models

    kenpachi601674Dec 1, 2025

    thanks. Where do i select them on the workflow?

    FantasticFeverDec 2, 2025· 1 reaction

    models/unet; I don't put GGUFs in the /diffusion_models folder, not sure if qek knows something I don't. You need a GGUF loader node instead of a diffusion model loader node in your workflow.

    qekDec 2, 2025· 1 reaction
    kenpachi601674Dec 2, 2025

    After adding the GGUF node, should I disconnect the diffusion model node and connect the GGUF nodes to the LoRA nodes, is that right?

    BrAInB0tDec 1, 2025
    CivitAI

    any chance of a fp16 download in diffusers format? preferably a non-distilled version for training loras on?

    qekDec 1, 2025

    What is it for? Anyway, I think converting it would require a lot of memory.

    BrAInB0tDec 1, 2025

    @qek training loras on a NSFW checkpoint may result in better loras with less intensive training and smaller datasets. I’m testing combined motion loras and when trained with standard wan 2.2 that means having a set of videos for each motion when I could simply use one set.

    qekDec 2, 2025

    @BrAInB0t It appears to be a merge, it's better to train on the base

    juliusmartinDec 1, 2025
    CivitAI

    Where can I find "wan22EnhancedCameraPrompt_camI2VQ6KMLOW.gguf"?

    Was it renamed to nsfw?

    TY

    taek75799
    Author
    Dec 1, 2025· 1 reaction

    Sorry, it’s complicated to keep track of everything. I don’t know why, but when I upload a file to Civitai, it automatically renames it, and it’s impossible to change it.

    I think it’s the CAM v2 version, this one: https://civitai.com/models/2053259?modelVersionId=2379693
    Be careful, it doesn’t do NSFW, but it has better camera control and offers more dynamic scenes.

    juliusmartinDec 4, 2025

    hi again and thank you for your time.

    Could you help me find an fp16 or q8 version of “wan22EnhancedNSFWCameraPrompt_nsfwFP8High.safetensors”?

    This is the only model that follows the prompt without losing too much detail:

    (At 0 second: She is still as all of her clothing suddenly disappears.)

    I have tried these so far without success:

    wan22EnhancedCameraPrompt_camI2VQ8HIGH.gguf

    wan22EnhancedCameraPrompt_camI2VQ8LOW.gguf

    wan22EnhancedNSFWCameraPrompt_nsfwFASTMOVEQ8High.gguf (outputs black videos)

    wan22EnhancedNSFWCameraPrompt_nsfwFASTMOVEQ8Low.gguf (outputs black videos)

    wan22EnhancedNSFWCameraPrompt_v11I2VQ8HIGH.gguf

    wan22EnhancedNSFWCameraPrompt_v11I2VQ8LOW.gguf

    wan22EnhancedNSFWCameraPrompt_v2CAMI2VQ8HIGH.gguf

    wan22EnhancedNSFWCameraPrompt_v2CAMI2VQ8LOW.gguf

    Also, one more question: for face detail, is it better to stick to the native 1024x1024 resolution when generating, or can WAN handle higher resolutions without losing identity? What's your maximum recommended resolution?

    taek75799
    Author
    Dec 4, 2025

    @juliusmartin It seems that some people prefer the NSFW version. Are you having a black-video issue with the GGUF models? Have you tried the Q6_K version? It’s very close to Q8.
    The GGUF versions don’t exist for the NSFW model — they are only available for the NSFW fast-move version, but I think it won’t work either if you already have an issue with the GGUF format.

    I’m planning to release the NSFW v2 this Saturday. If some people still prefer the NSFW v1, I’ll consider converting those to GGUF as well.

    For your prompt, try adapting it like this one: https://civitai.com/posts/24889694

    Regarding resolution, I can go up to 1280×1280, and WAN seems to understand that well. However, be aware that it may struggle with certain resolutions depending on the prompt. Of course, the higher the resolution, the better the video quality — but the starting image is also very important.

    juliusmartinDec 4, 2025

    @taek75799 hi thank you for your quick response I believe that Sunday will be a great day then :)

    ... btw, the NSFW version can do all that undressing without LoRAs. Another good thing is that the "At X seconds" format can be very useful for automated stuff; building on that will make the difference in the future, maybe even going so far as to change the prompt every X frames, like (At frames 5-10: prompt here), for absolute control.

    A serious question: I downloaded your suggested image/WF and was shocked, in a good way, at how many LoRAs you used; they look highly optimized. Since you seem to aim for perfection, I would like to ask about your absolute best penile insertion workflow, for all positions or a specific one.

    taek75799
    Author
    Dec 4, 2025

    @juliusmartin Don’t pay attention to the number of LoRAs in that video, it was just an example. I think the base model can probably handle it as well — the only issue is that the vagina might not be generated correctly, so adding a dedicated LoRA for that would help. Even dr4emlay NSFW could do it; it really depends on what you want to achieve.

    The ideal approach is to build your video in several steps instead of trying to do everything in the first 5 seconds. For example, with my workflow you can change the prompt every 5 seconds and add different LoRAs.

    Keep in mind that there will be degradation with every transition, so you need an excellent image resolution first, and then a high video resolution. Of course, you can use other tools like FlashVSR to upscale the video, or other methods — but that would take a long time to explain here.

    kronos1959777Dec 2, 2025
    CivitAI

    Any tips on how to get the camera to move down to a ground-level worm's-eye view? I cannot, no matter what I try; it literally always just moves the camera up higher when I prompt it.

    taek75799
    Author
    Dec 2, 2025

    Unfortunately, I don’t think that’s possible with WAN. You would need to do it with a LoRA.

    Even with Qwen-Edit, it’s very difficult—only possible in certain situations. If you manage to do it, you would then need to use WAN together with First/Last Frame.

    twittweeDec 2, 2025
    CivitAI

    Can the "NSFWCameraPrompt_nsfwFASTMOVEFP8" be used for I2V? If not, which do you recommend?

    taek75799
    Author
    Dec 2, 2025

    Hello, yes, it’s an I2V model. I recommend the Q8 version if you can get it working; otherwise, the Q6K is good. If not, the FP8 version works very well too.

    eatergod63954Dec 2, 2025· 1 reaction
    CivitAI

    Why does it generate just noise on the 2nd run? Any fixes? After some runs it then throws a memory error; I never had that issue before.

    qekDec 2, 2025· 1 reaction

    Post the workflow

    eatergod63954Dec 3, 2025· 1 reaction

    @qek Issue solved; I updated all the custom nodes and it worked.

    zereshkealtDec 2, 2025· 1 reaction
    CivitAI

    I wanted to ask, when will we get to see the V2 model? NGL, the prompt adherence is outstanding compared to other models that I've tested. Might I ask for something: could you do a slow-move model? It's a bit too much; I can't stop it going overboard!! I know there is an NSFW model; is there a GGUF version for it?

    taek75799
    Author
    Dec 2, 2025· 2 reactions

    Hello, the v2 version will be available this Saturday. Unfortunately, I wasn’t able to make the GGUF versions of the NSFW models. After releasing v2, I’ll have more time to create the GGUF versions if some people prefer the NSFW v1.
    Thank you for your feedback.

    I imagine you tried adding words like ‘slow movement’ in your prompt? To be honest, I didn’t test it for movements. I only tried it for ‘slow penetration,’ and it seemed to work a bit.

    zereshkealtDec 2, 2025· 2 reactions

    @taek75799 Great, then i'll wait for the V2. yup i tried gently, slowly, but it didn't change much. even tried to lower the frame rate, but ...

    zzozzDec 3, 2025· 1 reaction

    I agree, making slow motion is very hard. If there were another model for slow or stopped motion, users could swap as needed. I mean 'slow penetration' or 'stopped during penetration'.

    It's hard to control.

    zereshkealtDec 3, 2025· 1 reaction

    @taek75799 One other thing, I'm not sure if this is the model's fault or not, but doing expressions is a bit hard in I2V: most of the time the characters open their mouth wide and close their eyes for no reason. Maybe the "having orgasm" part makes them do this, or something like that?

    zzozzDec 3, 2025· 2 reactions

    I tried dasiwa, smooth, the original GGUF, enhanced fast NSFW (GGUF), and enhanced NSFW (FP8). Every model has a similar problem.

    Eye and mouth movement is very hard to control; it ignores the prompt in I2V.

    zereshkealtDec 3, 2025· 1 reaction

    @zzozz yup, don't have much info about these things, but it might be the lora they've used to make the model.

    taek75799
    Author
    Dec 3, 2025· 2 reactions

    @zereshkealt Yes, it is completely possible that sometimes the model opens the mouth during the sexual act. I don’t know if adding ‘closed mouth’ in the prompt could help, but placing it in the right part of the prompt might also make a difference.

    taek75799
    Author
    Dec 3, 2025· 1 reaction

    @zzozz Yes, it is completely possible that sometimes the model opens the mouth during the sexual act. I don’t know if adding ‘closed mouth’ in the prompt could help, but placing it in the right part of the prompt might also make a difference.

    cleptoesDec 3, 2025
    CivitAI

    This drastically changes the faces for me. Maybe I am getting confused, though, as I have never used a GGUF before. I do not use the regular Wan or Lightning diffusion model, correct? Does anyone have the .json of how to set this up correctly?

    taek75799
    Author
    Dec 3, 2025

    Hello, and thank you for your feedback. Would it be possible for you to send me your workflow with the settings that change the face? Ideally, you could send an image with the prompt so that I can show the result and post it here. I’m sure we can probably fix this if you want

    cleptoesDec 4, 2025

    @taek75799 Thank you, it works well now. One other question: when I use 2 steps high and 3 steps low with Euler Simple, the video isn't good at all. When I use 8 steps on both, it's a much better result, much clearer. Is there something else I am missing? This is while creating a 480p video.

    taek75799
    Author
    Dec 4, 2025

    @cleptoes Actually, I tried 3 and 3 a long time ago and it worked well too. Do you also use the Lightning LoRA?

    cleptoesDec 5, 2025

    @taek75799 No, I don't use that LoRA since you told me it could cause an issue. I don't use any LoRAs at all at this time, actually.

    Rainart1989Dec 6, 2025

    @cleptoes How did you fix the face changing? It's happening to me; I'm not adding any LoRAs and I'm using this workflow https://civitai.com/models/1852904?modelVersionId=2257723

    cleptoesDec 7, 2025· 1 reaction

    @Rainart1989 Resolution did it for me, plus the right steps: start at 0 to 2 (on high), then 2 to 4 on low, for a total of 4 steps.

    aawass0001167Dec 3, 2025· 4 reactions
    CivitAI

    Please excuse my stupid question.

    1. Where can I download the Natural Motion Version? I only see models like nsfw fast move Q8~Q4, fp8, etc., but I can't find the Natural Motion Version.

    2. Where can I see the image of the man slapping the woman?

    mr_hYd3Dec 3, 2025· 1 reaction

    I also have trouble identifying the natural-motion versions... There are so many. Could you give us a hint, please?

    taek75799
    Author
    Dec 3, 2025· 2 reactions

    Hello, there are two NSFW versions: the fast-move version and the regular NSFW version. The difference is that the fast-move version provides faster movements. I plan to release v2 this weekend.

    taek75799
    Author
    Dec 3, 2025· 11 reactions
    CivitAI

    I noticed that some people are using the Lightning LoRAs with these models. Please do not do that — they are already included in the model. If you use them, the video quality will be degraded.

    porndli212Dec 4, 2025

    Yeah, I don't know why people include Lightning in the models. I would like control of the Lightning CFG. Is there any benefit to baking them in?

    taek75799
    Author
    Dec 4, 2025

    @porndli212 I imagine you want CFG control to access the negative? I don’t like increasing the CFG with Lightning — the video will look slightly overexposed with a CFG of 1.1.
    If you need the negative, use the NAG node.
    Including the Lightning LoRAs has a clear advantage: on these models, I used a mix of several Lightning LoRAs to get the best possible result.
    Stacking multiple LoRAs will always degrade the video, even though WAN handles merging pretty well. Once they’re merged, the problem disappears — even when adding

    porndli212Dec 5, 2025

    @taek75799 Actually, it's the other way around for me personally: I decrease the Lightning LoRA strength and increase the steps a little. But some might want to play around and experiment with other ideas as well, so having the LoRA strength available is always a nice thing for testing ideas.

    zzozzDec 5, 2025

    Lowering the Lightning high-noise strength can control movement speed.

    Dropping the value from 1 to 0.5 increases the motion speed.

    The opposite works too: raising it from 0.5 back to 1 slows things down.

    This technique sometimes helps, if Lightning is not merged.

    pittmanerotica849Dec 3, 2025· 1 reaction
    CivitAI

    I'm having an issue with male-on-male: it keeps wanting to transform the other male's genitals. Is there a way to prevent that with your model?

    taek75799
    Author
    Dec 3, 2025· 1 reaction
    pittmanerotica849Dec 3, 2025· 1 reaction

    @taek75799 thanks for the suggestion but I'm working with I2V.

    taek75799
    Author
    Dec 4, 2025· 1 reaction

    @pittmanerotica849 You can try it, most T2V models work on I2V.

    StableBeepBoopDec 4, 2025· 3 reactions
    CivitAI

    Yooo amazing work! FAST MOVE Q8 is the best WAN2.2 model I've ever used 10/10. The motion is so good and easy to prompt

    NSFW FAST MOVE Q8 high

    vanguy143620Dec 5, 2025· 1 reaction
    CivitAI

    I keep running into this issue: Given groups=1, weight of size [48, 48, 1, 1, 1], expected input[1, 16, 31, 66, 36] to have 48 channels, but got 16 channels instead

    any advice? thanks

    taek75799
    Author
    Dec 5, 2025· 1 reaction

    Hello, which VAE are you using: the WAN 2.1 VAE or the WAN 2.2?
    Are you using a WAN T2V workflow with an I2V model?
    This error seems to be caused by using the wrong models.

    vanguy143620Dec 6, 2025· 3 reactions

    Sorry for the late response. I fixed it by running the WAN 2.1 VAE. Thanks!

    10221203Dec 5, 2025· 1 reaction
    CivitAI

    Hi! Which of these is the non-Fast-Move GGUF checkpoint (if there is such a thing)? Sorry, I'm dumb and cannot find it. Amazing model! I'm just trying to get the output to be a little more tame (I do have the recommended negative, although I cannot seem to use the custom node to increase it without encountering issues; totally an issue on my end).

    10221203Dec 5, 2025· 1 reaction

    It's the NAG node I tried to use, but that node pack seems to bork my setup currently.

    taek75799
    Author
    Dec 5, 2025· 2 reactions

    Hello, are you looking for an NSFW or SFW model?
    Otherwise, look at the top where the models are: on the far right, there’s a small arrow to show the other available models. You have the NSFW models, which are a bit different from the NSFW Fast Move models.
    If you’re not doing NSFW, try the CAM 2 version: it has very good prompt understanding, adds dynamism, and understands cameras quite well.

    Don’t worry about the NAG node: if it causes issues, the model will work perfectly fine without it

    10221203Dec 5, 2025· 1 reaction

    @taek75799 I was looking specifically for an NSFW model that was not Fast Move, but also a GGUF file (as opposed to safetensors). I downloaded NSFW FP8, but that appears to be safetensors... or am I putting it in the wrong folder and need to switch back to a different UNet loader? Thank you for the reply, I really appreciate it. I'll try CAM 2 for testing.

    taek75799
    Author
    Dec 5, 2025· 2 reactions

    @Utulek The folder to put it in is the same. To load the .safetensors file, you need to use the 'Load Diffusion Model' node.

    10221203Dec 5, 2025· 2 reactions

    @taek75799 That worked! I really appreciate it thank you.

    taek75799
    Author
    Dec 5, 2025· 1 reaction

    @Utulek I’m glad to hear that, happy generating!

    Fadoo2077Dec 5, 2025· 2 reactions
    CivitAI

    Banger - Best version

    cleptoesDec 5, 2025· 5 reactions
    CivitAI

    I was having trouble with this and Taek helped me out. It was related to the steps. You want 4 steps on the low side and 4 steps on the high. Then you set the high start step at 0 to end 2 and the low start step at 2 to end 4. Works amazing.
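    The step split described above corresponds to two advanced-sampler nodes sharing one 4-step schedule (each node is given the full step count but only runs its own range). A minimal sketch of that bookkeeping, assuming KSamplerAdvanced-style start/end step fields:

```python
def split_steps(total_steps: int, switch_step: int):
    """Split one shared sampling schedule between the Wan 2.2
    high-noise and low-noise models: both samplers are configured
    with steps=total_steps, and each executes only its own range."""
    high = {"start_at_step": 0, "end_at_step": switch_step}
    low = {"start_at_step": switch_step, "end_at_step": total_steps}
    return high, low

high, low = split_steps(total_steps=4, switch_step=2)
print(high)  # {'start_at_step': 0, 'end_at_step': 2}
print(low)   # {'start_at_step': 2, 'end_at_step': 4}
```

    The key point is that the switch step is a handoff, not a restart: the low-noise sampler continues the same schedule where the high-noise one stopped.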

    captain19910324695Dec 5, 2025· 1 reaction
    CivitAI

    Your model is excellent! Could you please recommend the appropriate resolution settings?

    taek75799
    Author
    Dec 5, 2025· 1 reaction

    Hello, and thank you for your feedback. I would say there's no single recommendation, lol.
    To be honest, there are some resolutions where WAN understands the prompt better, but I tend to use the images I like regardless of their resolution.
    I've already had worse results with square images at 1024 × 1024, while at 720 × 720, WAN understood much better.

    Wan, like any other diffusion model, will work better with a resolution whose dimensions are divisible by 16. Check my workflow: there is a node called Find Perfect Resolution, which automatically gives you the correct values.
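    The divisible-by-16 rule is easy to apply by hand too; a small sketch of what a node like Find Perfect Resolution presumably does (the node's exact logic is an assumption, this just rounds each side to the nearest multiple of 16):

```python
def snap_to_multiple(width: int, height: int, multiple: int = 16):
    """Round each dimension to the nearest multiple of `multiple`,
    never dropping below one multiple."""
    def snap(v: int) -> int:
        return max(multiple, int(v / multiple + 0.5) * multiple)
    return snap(width), snap(height)

print(snap_to_multiple(1000, 562))  # (1008, 560)
print(snap_to_multiple(720, 720))   # (720, 720) -- already valid
```

    The aspect ratio shifts only slightly, since each side moves by at most 8 pixels.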

    captain19910324695Dec 7, 2025· 1 reaction

    @taek75799 Thank you for your reply

    706784Dec 5, 2025· 2 reactions
    CivitAI

    The videos I generate are too fast and rushed, as if running at 2x. I am using length 153, fps 30; the workflow seems okay, 2 x 2 steps, and I am not using any Lightning LoRA. I also used positive and negative prompts asking for the video to be slow. Has anyone else faced this, or does anyone know how to resolve it?

    taek75799
    Author
    Dec 5, 2025· 6 reactions

    Hello. WAN was trained on videos of 81 frames, so about 5 seconds. Beyond that, there can be looping issues. To fix this, try my workflow (check at the very bottom of the description).
    For the sped-up videos, I think it's because you set 30 fps. WAN works at 16 fps. You can increase the frame count, but you then need to interpolate the output video (which will be 16 fps) to 30 fps or more.
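    The arithmetic behind this advice can be sketched as follows, assuming WAN's native 16 fps output:

```python
NATIVE_FPS = 16  # Wan 2.2 generates frames meant for 16 fps playback

def duration_seconds(frames: int, fps: int) -> float:
    """Playback length of a clip at a given frame rate."""
    return frames / fps

# 153 frames played back at 30 fps runs roughly 2x too fast:
print(duration_seconds(153, 30))         # 5.1 s
print(duration_seconds(153, NATIVE_FPS)) # 9.5625 s -- the intended pacing

# To get smooth 32 fps output, interpolate the 16 fps frames 2x
# (e.g. with a frame interpolator) rather than raising the fps setting:
print(32 / NATIVE_FPS)  # 2.0
```

    In other words, raising the fps setting only speeds up playback of the same frames; interpolation adds the in-between frames needed to keep the original pacing.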

    706784Dec 6, 2025· 4 reactions

    @taek75799 Thank you so much. I had been struggling with this since I started using WAN 2.2; there is no explanation for it anywhere at all, and this one really helped. It works perfectly for almost anything. I'm just not able to get good penis shapes during sex or blowjob scenes, but maybe it's because I don't know how to prompt it yet. Best NSFW WAN 2.2 model, though.

    slowmoDec 5, 2025· 1 reaction
    CivitAI

    This model gives everyone a Chinese face. Wth, dude.

    taek75799
    Author
    Dec 5, 2025· 1 reaction

    Hello, I don’t have this issue at all. Look at my previews and the ones from other users — do you see any distortions?

    emmie991Dec 6, 2025· 2 reactions
    CivitAI

    Seems it has a thing for turning 2d into 3d... Am I doing something wrong or is that really on the model?

    taek75799
    Author
    Dec 6, 2025· 1 reaction

    Hello, do you want to transform 2D into 3D? I don’t think Wan can do that natively — you would need to create a LoRA for this.

    schschDec 6, 2025· 1 reaction

    I guess the problem is using 2D anime/cartoon images with WAN, which tends to transform them into 3D or realistic ones. This generally occurs with WAN LoRAs trained for realistic videos. Perhaps you can add an 'anime LoRA' that cartoonizes/maintains the cartoon nature during the image-to-video process. I generate realistic videos anyway, so it's no problem for me.

    taek75799
    Author
    Dec 6, 2025· 2 reactions

    @schsch I understand better now. Try adding the SmoothAnimation Style LoRA — it seems to have been trained with 2D images. I also tend to add anime style to my prompts.
    Important: increase the resolution, it’s crucial.

    g1263495582Dec 7, 2025

    That’s actually normal behavior. Video generation models are trained overwhelmingly on real-world footage compared to 2D art, so they have a strong bias towards realism. When you feed them a 2D image, they instinctively try to force a 3D perspective. If you stick with 2D, make sure the image has clear and distinct lighting. Otherwise, the model will try to interpret outlines or flat details as lighting cues (like shadows or contours) to force that 3D look.

    g1263495582Dec 7, 2025· 1 reaction

    @taek75799  I think he is using 2D images, but the resulting video looks like 3D.

    In my opinion, training a LoRA on pure frame-by-frame 2D animation is extremely difficult due to consistency issues. The most effective fix is to train on 3D anime with flat colors or use cel-shaded/toon-shaded footage. This bridges the gap—giving you the 2D look while keeping the 3D structure the model understands.

    Checkpoint
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,909
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/23/2025
    Updated
    5/4/2026
    Deleted
    -