CivArchive
    Wan 2.2 Animate & InfiniteTalk+UniAnimate - Wan 2.2 Kijai I2V
    NSFW

    Wanvideo 2.2 Animate

    I uploaded a ComfyUI build without models (4.38 GB). It includes all the nodes and workflows needed to run without errors. If you already have your own ComfyUI, you can skip this part.

    https://drive.google.com/file/d/1lcK9QX6FmLO6rQNP6200rbLiqnHpt0uK/view?usp=drive_link

    1. Models

    ComfyUI\models\diffusion_models

    Wan22Animate-Kijai

    ComfyUI\models\vae

    Wan2_1_VAE_fp32.safetensors

    ComfyUI\models\text_encoders

    umt5-xxl-enc-fp8_e4m3fn.safetensors

    umt5-xxl-enc-bf16.safetensors

    ComfyUI\models\clip_vision

    clip_vision_h.safetensors

    ComfyUI\models\Loras

    WanAnimate_relight_lora_fp16

    lightx2v_I2V_14B_480p_cfg_step_distill_rank256_bf16.safetensors

    Kijai Lightx2v

    ComfyUI\models\detection

    vitpose-l-wholebody

    yolov10m

    ComfyUI\models\sams\

    Sec-4B
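
    Before launching ComfyUI, the model layout above can be sanity-checked with a short script. This is a minimal sketch: the subfolders and names are taken from the list above, COMFYUI_ROOT is an assumed location you should adjust, and actual filenames may differ between releases.

    ```python
    from pathlib import Path

    # Hypothetical ComfyUI root; adjust to your own install location.
    COMFYUI_ROOT = Path("ComfyUI")

    # Expected subfolder -> entries, copied from the list above.
    # Some entries are folders (model repos), some are single files.
    EXPECTED = {
        "models/diffusion_models": ["Wan22Animate-Kijai"],
        "models/vae": ["Wan2_1_VAE_fp32.safetensors"],
        "models/text_encoders": [
            "umt5-xxl-enc-fp8_e4m3fn.safetensors",
            "umt5-xxl-enc-bf16.safetensors",
        ],
        "models/clip_vision": ["clip_vision_h.safetensors"],
        "models/detection": ["vitpose-l-wholebody", "yolov10m"],
    }

    def missing_models(root: Path) -> list[str]:
        """Return the expected paths that are not present under root."""
        missing = []
        for subdir, names in EXPECTED.items():
            for name in names:
                if not (root / subdir / name).exists():
                    missing.append(f"{subdir}/{name}")
        return missing

    if __name__ == "__main__":
        for path in missing_models(COMFYUI_ROOT):
            print("missing:", path)
    ```

    Anything the script prints still needs to be downloaded into the matching subfolder.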

    ---

    ComfyUI\models\custom_nodes

    auto_wan2.2animate_freamtowindow_server

    ComfyUI-WanAnimatePreprocess

    Download the ZIP files and extract them into custom_nodes.
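
    Assuming the node packs are downloaded as ZIP files, the extraction step can be sketched with Python's zipfile module (the paths in the usage comment are hypothetical; point them at your own download and install):

    ```python
    import zipfile
    from pathlib import Path

    def install_node_pack(zip_path, custom_nodes) -> Path:
        """Extract a downloaded node-pack ZIP into ComfyUI's custom_nodes folder."""
        custom_nodes = Path(custom_nodes)
        custom_nodes.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(custom_nodes)  # creates the node folder inside custom_nodes
        return custom_nodes

    # Example (hypothetical paths):
    # install_node_pack("ComfyUI-WanAnimatePreprocess.zip", "ComfyUI/custom_nodes")
    ```

    Restart ComfyUI after extracting so the new nodes are registered.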

    ---

    Wan2.2 Kijai I2V

    I uploaded a ComfyUI build without models (4.38 GB). It includes all the nodes and workflows needed to run without errors. If you already have your own ComfyUI, you can skip this part.

    https://drive.google.com/file/d/1lcK9QX6FmLO6rQNP6200rbLiqnHpt0uK/view?usp=drive_link

    1. Models

    ComfyUI\models\diffusion_models

    Kijai bf16

    Wan2_2-I2V-A14B-HIGH_bf16.safetensors

    Wan2_2-I2V-A14B-LOW_bf16.safetensors

    Kijai fp8

    WanVideo_comfy_fp8_scaled

    lightx2v Distill

    lightx2v/Wan2.2-Distill-Models

    (In testing, it greatly improves video dynamics.)

    ---

    ComfyUI\models\vae

    Wan2_1_VAE_fp32.safetensors

    ComfyUI\models\text_encoders

    umt5-xxl-enc-fp8_e4m3fn.safetensors

    umt5-xxl-enc-bf16.safetensors

    ComfyUI\models\clip_vision

    clip_vision_h.safetensors

    ComfyUI\models\Loras

    lightx2v_I2V_14B_480p_cfg_step_distill_rank256_bf16.safetensors

    Wan2.2-Fun-A14B-InP-high-noise-MPS.safetensors

    Wan2.2-Fun-A14B-InP-low-noise-HPS2.1.safetensors

    Kijai Lightx2v

    ---

    InfiniteTalk + UniAnimate

    I uploaded a ComfyUI build without models (4.38 GB). It includes all the nodes and workflows needed to run without errors. If you already have your own ComfyUI, you can skip this part.

    https://drive.google.com/file/d/1lcK9QX6FmLO6rQNP6200rbLiqnHpt0uK/view?usp=drive_link

    1. Models

    ComfyUI\models\diffusion_models

    Wan2.1-I2V-14B-480P

    Wan2_1-InfiniTetalk-Single_fp16

    ComfyUI\models\unet

    InfiniteTalk_GGUF

    Wan2.1-I2V-14B-480P-gguf

    ComfyUI\models\vae

    Wan2_1_VAE_fp32

    ComfyUI\models\text_encoders

    umt5-xxl-enc-fp8_e4m3fn.safetensors

    umt5-xxl-enc-bf16.safetensors

    ComfyUI\models\clip_vision

    clip_vision_h.safetensors

    ComfyUI\models\Loras

    UniAnimate-Wan2.1-14B-Lora-12000-fp16

    lightx2v_I2V_14B_480p_cfg_step_distill_rank256_bf16.safetensors

    Kijai Lightx2v

    ---

    InstantID

    InstantID does not support the latest version of ComfyUI, so please download the old version. I removed the models and kept the nodes to ensure it runs without errors (5.62 GB).

    https://drive.google.com/file/d/1_HPGG2iMAyovS3jO8ubhti4YIcOhOLUj/view?usp=drive_link

    1. Models

    ComfyUI\models\checkpoints

    leosamsHelloworldXL_helloworldXL50GPT4V

    lustifySDXLNSFWSFW_v20

    majicmixRealistic_v7

    SUPIR-v0Q

    ComfyUI\models\loras

    POVMissionary

    pov-squatting-cowgirl-lora-1-mb

    PornMaster-Amateur&DPO

    Hand v3 SD1.5

    ComfyUI\models\unet

    FLUX.1 [dev]

    FLUX.1 [dev] fp8

    ComfyUI\models\clip\

    t5xxl_fp16.safetensors

    t5xxl_fp8_e4m3fn.safetensors

    clip_l.safetensors

    CLIP-ViT-bigG-14-laion2B-39B-b160k

    Rename 「open_clip_model.safetensors」 to 「CLIP-ViT-bigG-14-laion2B-39B-b160k」

    clip-vit-large-patch14

    Rename 「model.safetensors」 to 「clip-vit-large-patch14」
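
    The two renames above can be scripted. This is a minimal sketch that applies the renames exactly as listed (target names without an extension, as in the notes); clip_dir is an assumed location, adjust it to your install:

    ```python
    from pathlib import Path

    # Hypothetical download location; adjust to your ComfyUI install.
    clip_dir = Path("ComfyUI/models/clip")

    # Downloaded filename -> name the workflow expects (from the notes above).
    RENAMES = {
        "open_clip_model.safetensors": "CLIP-ViT-bigG-14-laion2B-39B-b160k",
        "model.safetensors": "clip-vit-large-patch14",
    }

    def apply_renames(folder: Path, renames: dict) -> list:
        """Rename the files that are present; return the renames that were applied."""
        applied = []
        for src, dst in renames.items():
            src_path = folder / src
            if src_path.exists():
                src_path.rename(folder / dst)
                applied.append(f"{src} -> {dst}")
        return applied

    if __name__ == "__main__":
        print(apply_renames(clip_dir, RENAMES))
    ```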

    ComfyUI\models\vae

    ae.safetensors

    vae-ft-mse-840000-ema-pruned.ckpt

    ComfyUI\models\controlnet

    ttplanetSDXLControlnet_v20Fp16.safetensors

    diffusion_pytorch_model

    ComfyUI\models\instantid

    ip-adapter.bin

    Description

    October 23, 2025

    • 「Wan2.2-I2V-A14B-Distill-Lightx2v」

    https://huggingface.co/lightx2v/Wan2.2-Distill-Models

    Distillation-accelerated version of Wan2.2 - Dramatically faster speed with excellent quality

    (In testing, using the Wan2.1 Lightx2v LoRA & Wan2.2-Fun-Reward-LoRAs on the high-noise model restores dynamics to the level of the original model. You can refer to my Civitai instructions.)

    October 8, 2025

    • Replace Model 「Kijai Wan2.2 I2V A14 BF16」

    October 3, 2025

    • Fix Video Seams (WanVideoContextOptions)

    FAQ

    Comments (12)

    psspsspsspssspss · Oct 2, 2025

    Is there a resource explaining what the "Insight" model is? Is it just a drop-in replacement for I2V? What is its purpose? Is there a LoRA for Insight also?

    gsk80276 (Author) · Oct 2, 2025

    The model comes from a Chinese modification of Wan2.2; it is not an official release. It integrates an acceleration model, so instead of a high step count it only needs 1 to 4 steps without using Lightx2v. In tests by Chinese users, its I2V results are not much different from the official version, and its T2V results are better.

    psspsspsspssspss · Oct 2, 2025

    @gsk80276 Yes, I have already seen this description, but it doesn't tell me anything. Is Insight just a different acceleration method than Lightx2v? Or does the name have something to do with consistency, like the "insightface" models for image gen?

    gsk80276 (Author) · Oct 2, 2025

    No, it's a Wan 2.2 model. Like the various SDXL and FLUX models on Civitai, these are all base models that have been fine-tuned to generate particular kinds of images.

    In addition to fusing in the acceleration model, this model has enhanced text-input handling and is another option besides Wan 2.2 FP16 and GGUF. After all, there aren't many versions of Wan 2.2.

    kudon44 · Oct 7, 2025

    @gsk80276 When you say "generate specific images", does that mean it has certain biases, e.g. toward Asian people specifically, which could cause undesirable results for non-Asian outputs? Asking because your examples are all Asian, and I know some image-generation models are really biased.

    gsk80276 (Author) · Oct 7, 2025

    @kudon44 According to tests by Chinese users, the author mentioned integrating a lot of American movies into the training data, so with T2V the generated characters differ from Asians. This has little effect on I2V, though. This model mainly adds more dynamic motion.

    Eth4534 · Oct 6, 2025

    Thank you very much for your workflow! (Wan 2.2 Insight)
    The result is pretty good, no more slow motion :)

    Still experimenting

    Joean · Oct 6, 2025

    Thank you, looks amazing. I will try it out tmr

    zono50 · Oct 6, 2025

    How do you actually do long context? I keep getting errors

    gsk80276 (Author) · Oct 7, 2025

    https://drive.google.com/file/d/18LEV9dnymyZAhzLEH__oXDU7sSDFO2qG/view?usp=drive_link
    https://drive.google.com/file/d/1heih4dXFpp8be8oV0oGTZNcccxmID0VS/view?usp=drive_link

    After turning on WanVideoContextOptions, it generates frames based on the total number of frames you input; 5 seconds = 81 frames.

    Magna401 · Oct 7, 2025

    @gsk80276 Okay so I can just put 162 in num_frames and keep 81 in context ? I hope my 5090 can handle 720p on 162 frames ^^

    gsk80276 (Author) · Oct 7, 2025

    Yes, you can use 162. It generates extra frames between each 81-frame window for seam processing, and there is no need to change the value of 81 in the context; you can use these reference values: 81, 4, 16~48. The length in seconds of the generated video depends on the output frame rate: at fps 30, 162 frames is not 10 seconds but about 5 seconds.
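
    The frame-to-seconds conversion described above is a simple division. Wan models typically output at 16 fps, which is why 81 frames comes out to roughly 5 seconds; the same formula shows why 162 frames at 30 fps is about 5 seconds, not 10:

    ```python
    def video_seconds(num_frames: int, fps: float) -> float:
        """Duration in seconds of num_frames at a given output frame rate."""
        return num_frames / fps

    # Wan's usual output rate is 16 fps, so 81 frames is about 5 seconds:
    print(video_seconds(81, 16))   # ~5.06
    # At 30 fps, 162 frames is roughly 5 seconds, not 10:
    print(video_seconds(162, 30))  # 5.4
    ```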

    Workflows
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    6,435
    Platform
    CivitAI
    Platform Status
    Available
    Created
    10/1/2025
    Updated
    4/28/2026
    Deleted
    -

    Files

    wan22Animate_wan22KijaiI2V.zip

    Mirrors

    wan22AnimateInsight_wan22InsightI2V.zip

    Mirrors