CivArchive
    Wan Finger Licking and Sucking - v1.0-i2v-A14B-Lo
    NSFW

    Wan doesn't seem to understand the concept of licking or sucking fingers, so I decided to remedy that!

    Wan 2.2 version: trained locally on my RTX 5090 with Musubi Tuner on 20 videos of various porn stars, at 384x384 resolution and 60 frames per clip. It uses a different training set from the Wan 2.1 version, with less exaggerated, more natural mouth movements. Hi noise: 40 epochs / 800 steps (2h45m); Lo noise: 65 epochs / 1300 steps (5h30m).

    Wan 2.1 versions (T2V/I2V): trained locally on my RTX 5090 with Musubi Tuner on 34 videos of various porn stars, at 256x256 resolution, 3400 steps, 100 epochs, about 5 hours total training time.
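
    For reference, a Musubi Tuner dataset config matching the Wan 2.2 numbers above might look roughly like this. It is only a sketch, modelled on the toml I posted in the comments below; the directory paths and caption extension are placeholders rather than my actual setup, and num_repeats = 1 is inferred from 20 videos giving 800 steps over 40 epochs at batch size 1.

    [[datasets]]
    video_directory = "data/videos_finger_licking/"  # placeholder path to the 20 training clips
    cache_directory = "data/cache/videos"            # where cached latents are written
    caption_extension = ".txt"                       # one caption file per video
    enable_bucket = true
    resolution = [384, 384]                          # 384x384 as described above
    batch_size = 1
    num_repeats = 1                                  # 20 videos x 1 repeat = 20 steps per epoch
    target_frames = [60]                             # 60 frames per clip
    frame_extraction = "full"
    frame_sample = 1
    frame_stride = 1
    max_frames = 60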

    My full Wan motion LoRA training process is covered in my new tutorial.

    I have also written a series of articles here on Civitai on the subject of Video Motion Training.

    Please consider supporting me on RiotModels to help me continue making new LoRAs. Your reward is exclusive content of the Seven Sisters in the naughtiest of animations.

    A trigger keyword shouldn't be necessary, but for Wan 2.2 it's "fingers_in_mouth".

    Prompting is simple. Use any combination of:

    • licking fingers

    • fingers in mouth

    • sucking fingers

    • she licks her fingers

    For one person licking another person's fingers, the starting image really needs the other person's fingers close to the mouth; otherwise the character tends to suck their own fingers.

    See sample videos for full prompts.
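
    As an illustration only (this is not one of my sample prompts, just an example of combining the phrases above), an i2v prompt along these lines works:

    The woman raises her hand to her face, fingers in mouth. She slowly licks her fingers and sucks her fingers, sliding them in and out between her lips.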

    Versions:

    • Wan 2.2 i2v v1.0 - good natural motion with guaranteed prompt adherence

    • Wan 2.1 480 i2v v1.0 - very nice, detailed licking and sucking action with lips, fingers and tongue

    • Wan 2.1 14B t2v v1.0 - good motion but IMO not as good as i2v. May require more training.

    Description

    Low Noise version for Wan 2.2 A14B i2v

    FAQ

    Comments (6)

    GlowingGuardianGirl · Mar 18, 2026
    CivitAI

    Hello there. First of all, congratulations, and thank you for publishing a proper Musubi guide for motion; you should also write an article here on Civit for more visibility. A lot of users still wonder how it's done, and (as I'm watching the guide at this very moment) it seems really solid. The only thing I could think of: would you mind sharing the text files + toml for the training commands as examples here, or linked somewhere on YouTube? It would be a pain to copy them from a screenshot if we want to learn the process from your guide.
    Keep it up, thank you very much, and your redhead character looks awesome 🙌 Cheers

    EDIT: PS, what are the values you should have put in your low noise settings that you said were wrong at 1:02:05 in the YouTube video? Thank you

    Generation3dX
    Author
    Mar 18, 2026 · 2 reactions

    Thanks very much for your comments and suggestions. OK, I'll look at doing a text guide here on Civitai.

    The mistake was the min/max timesteps params for hi and lo noise. The correct values are shown below.
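
    In short, extracted from the full training commands further down:

    Hi noise: --min_timestep 900 --max_timestep 1000
    Lo noise: --min_timestep 0 --max_timestep 900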

    Glad you like Joni (the redhead) - she's my #1 model!

    - - -

    Here is the data you requested:

    toml:

    [[datasets]]
    video_directory = "data/videos_pole_dancing/"
    cache_directory = "data/cache/videos"
    caption_extension = ".txt"
    enable_bucket = true
    resolution = [256, 256]
    batch_size = 1
    num_repeats = 8
    target_frames = [120]
    frame_extraction = "full"
    frame_sample = 1
    frame_stride = 1
    max_frames = 120


    Commands:

    Cache Text Encoder (WAN 2.2):

    python wan_cache_text_encoder_outputs.py --dataset_config config_pole_dancing.toml --t5 D:/ComfyUI/models/text_encoders/models_t5_umt5-xxl-enc-bf16.pth --batch_size 16

    Cache Latents (WAN 2.2):

    python wan_cache_latents.py --dataset_config config_pole_dancing.toml --vae D:/ComfyUI/models/vae/wan_2.1_vae.safetensors --i2v

    Main Training Commands:

    v0.1 i2v2.2hi

    accelerate launch --num_cpu_threads_per_process 1 --mixed_precision fp16 wan_train_network.py --task i2v-A14B --dit D:/ComfyUI/models/diffusion_models/wan2.2_i2v_high_noise_14B_fp16.safetensors --dataset_config D:/development/musubi-tuner/config_pole_dancing.toml --sdpa --mixed_precision fp16 --optimizer_type adamw8bit --optimizer_args weight_decay=0.01 --learning_rate 1e-4 --gradient_checkpointing --max_data_loader_n_workers 1 --persistent_data_loader_workers --network_module networks.lora_wan --network_dim 32 --timestep_sampling shift --discrete_flow_shift 5.0 --min_timestep 900 --max_timestep 1000 --preserve_distribution_shape --max_train_epochs 20 --save_every_n_epochs 1 --seed 42 --output_dir D:/development/musubi-tuner/output --output_name wan_pole_dancing_i2v2.2hi_v0.1 --logging_dir D:/development/musubi-tuner/logs --log_prefix wan_pole_dancing_i2v2.2hi_v0.1 --fp8_base

    v0.1 i2v2.2lo

    accelerate launch --num_cpu_threads_per_process 1 --mixed_precision fp16 wan_train_network.py --task i2v-A14B --dit D:/ComfyUI/models/diffusion_models/wan2.2_i2v_low_noise_14B_fp16.safetensors --dataset_config D:/development/musubi-tuner/config_pole_dancing.toml --sdpa --mixed_precision fp16 --optimizer_type adamw8bit --optimizer_args weight_decay=0.01 --learning_rate 5e-5 --gradient_checkpointing --max_data_loader_n_workers 1 --persistent_data_loader_workers --network_module networks.lora_wan --network_dim 32 --timestep_sampling shift --discrete_flow_shift 5.0 --min_timestep 0 --max_timestep 900 --preserve_distribution_shape --max_train_epochs 40 --save_every_n_epochs 1 --seed 42 --output_dir D:/development/musubi-tuner/output --output_name wan_pole_dancing_i2v2.2lo_v0.2 --logging_dir D:/development/musubi-tuner/logs --log_prefix wan_pole_dancing_i2v2.2lo_v0.2 --fp8_base

    skullzy77 · Mar 19, 2026 · 1 reaction
    CivitAI

    Works really well, thanks. If you mix it with the ThroatV3 lora from Civarchive you can get the character to finger fuck their mouths and gag and drool etc. Fun stuff lol

    Generation3dX
    Author
    Mar 19, 2026 · 2 reactions

    You're welcome. Oh sounds very naughty. I will check this out!

    flex25tb · Mar 29, 2026
    CivitAI

    awesome, thanks. Any chance of a Wan 2.2 T2V update?

    Generation3dX
    Author
    Mar 31, 2026

    Thank you! To be brutally honest, I never use T2V in my workflows, and when I've tried T2V training in the past it has been much harder than I2V, perhaps because I lack experience using it for inference. With all the other projects I have going on, it tends to get a lower priority. I do realise there is demand for it though, so I'll try to give it another go. By the way, you can still use Wan 2.1 LoRAs in 2.2 - just add the LoRA to both the Hi and Lo noise models.

    LORA
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    1,472
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/18/2026
    Updated
    5/1/2026
    Deleted
    -
    Trigger Words:
    fingers_in_mouth

    Files

    wan_finger_licking_i2vA14B_LOWNOISE_v01-000065.safetensors

    Mirrors