
    The goal of this LoRA is to reproduce a video style similar to live wallpapers. If you play League of Legends, remember the launcher opening videos? That's the goal. But you can also use it to create your lofi videos :D enjoy.

    [Wan2.2 TI2V 5B - Motion Optimized Edition] Trained on 51 curated videos (24fps, 96 frames) for 5,000 steps across 100 epochs with rank 48. Optimized specifically for Wan2.2's unified TI2V 5B dense model and high-compression VAE.

    My Workflow (It's not organized, the important thing is that it works hahaha): 🎮 Live Wallpaper LoRA - Wan2.2 5B (Workflow) | Patreon


    Loop Workflow: WAN 2.2 5b WhiteRabbit InterpLoop - v1.0 - Hardline | Wan Video Workflows | Civitai

    Trigger word: l1v3w4llp4p3r


    [Wan2.2 I2V A14B - Full Timestep Edition]

    Trained on 301 curated videos (256px, 16fps, 49 frames) for 24 hours using Diffusion Pipe with the Automagic optimizer at rank 64. It uses an extended timestep range (0-1) instead of the standard (0-0.875), enabling compatibility with both the Low and High models despite being trained only on the Low model.
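
    To make the timestep idea concrete, here is a minimal, hedged sketch (illustrative Python, not the actual Diffusion Pipe training code); the two ranges are the ones described above:

    import random

    # Hedged illustration of the two training windows described above; not the
    # actual Diffusion Pipe code, just the sampling logic it implies.
    STANDARD_LOW_NOISE_RANGE = (0.0, 0.875)  # default window for Wan2.2 Low-model training
    EXTENDED_FULL_RANGE = (0.0, 1.0)         # full range this LoRA was trained on

    def sample_training_timestep(window):
        """Uniformly sample a normalized diffusion timestep from the given window."""
        low, high = window
        return random.uniform(low, high)

    # Because the extended window also covers t > 0.875 (the phase the High model
    # handles at inference), a LoRA trained this way has seen both phases, which is
    # why it loads usefully on both the High and Low models.
    print(sample_training_timestep(EXTENDED_FULL_RANGE))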

    Trigger word: l1v3w4llp4p3r

    Works excellently with LightX2V v2 (256 rank) for faster inference

    [Wan I2V 720P Fast Fusion - 4 (or more) steps]

    Wan I2V 720P Fast Fusion combines 2 Live Wallpaper LoRAs (1 exclusive) with the Lightx2v, AccVid, MoviiGen, and Pusa LoRAs for ultra-fast 4+ step generation while maintaining cinematic quality. (A rough sketch of how such a LoRA stack is applied is shown after the list below.)

    🚀 Lightx2v LoRA – accelerates generation by 20x through 4-step distillation, enabling sub-2-minute videos on an RTX 4090 with only 8GB of VRAM required.
    🎬 AccVid LoRA – improves motion accuracy and dynamics for expressive sequences.
    🌌 MoviiGen LoRA – adds cinematic depth and flow to animation, enhancing visual storytelling.
    🧠 Pusa LoRA – provides fine-grained temporal control with zero-shot multi-task capabilities (start-end frames, video extension) while achieving an 87.32% VBench score.
    🧠 Wan I2V 720p (14B) base model – provides strong temporal consistency and high-resolution outputs for expressive video scenes.
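
    A rough sketch of what stacking several LoRAs onto one base weight looks like under the hood; the names and strengths below are placeholders for illustration, not the values used in the actual Fast Fusion merge:

    import torch

    def apply_lora_stack(base_weight: torch.Tensor, loras: list) -> torch.Tensor:
        """Add every LoRA delta (strength * up @ down) onto a copy of the base weight."""
        merged = base_weight.clone()
        for lora in loras:
            delta = lora["up"] @ lora["down"]   # (out, rank) @ (rank, in) -> (out, in)
            merged += lora["strength"] * delta
        return merged

    out_dim, in_dim, rank = 128, 128, 16
    base = torch.randn(out_dim, in_dim)
    stack = [
        {"name": "live_wallpaper", "strength": 1.0,     # placeholder strengths
         "up": torch.randn(out_dim, rank) * 0.01, "down": torch.randn(rank, in_dim) * 0.01},
        {"name": "lightx2v", "strength": 1.0,
         "up": torch.randn(out_dim, rank) * 0.01, "down": torch.randn(rank, in_dim) * 0.01},
    ]
    print(apply_lora_stack(base, stack).shape)  # torch.Size([128, 128])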

    [Wan I2V 720P]

    The dataset consists of 149 hand-selected videos at 1280x720 with 96 frames each, but training was done at 244p and 480p with 64 frames and 64 dim (on L40S).

    A trigger word was used, so it needs to be included in the prompt: l1v3w4llp4p3r

    [Hunyuan T2V]

    The dataset consists of 529 hand-selected videos at 1280x720 with 96 frames each, but training was done at 244p with 72 frames and 64 dim (on multiple RTX 4090s).

    No captions or activation words were used; the only control you will need to adjust is the LoRA strength.

    Another important note: it was trained on full blocks. I don't know how it will behave when mixing 2 or more LoRAs; if you want to mix and are not getting a good result, try disabling the single blocks.

    I recommend using a LoRA strength between 0.2 and 1.2 at most, a resolution of 1280x720 (or generate at 512 and upscale later), and a minimum length of 3 seconds (72 frames + 1).


    [LTXV I2V 13b 0.9.7 – Experimental v1]

    The model was trained on 140 curated videos (512px, 24fps, 49 frames), using 250 epochs, 32 dim, and AdamW8bit.
    It was trained using Diffusion Pipe with support for LTXV I2V v0.9.7 (13B).
    Captions were used and generated with Qwen2.5-VL-7B via a structured prompt format.

    This is an experimental first version, so expect some variability depending on seed and prompt detail.

    Recommended:

    Scheduler: sgm_uniform

    Sampler: euler

    Steps: 30

    You can generate captions using the Ollama Describer or optionally use the official LTXV Prompt Enhancer.
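
    If you prefer to script the captioning instead of using the Describer node, a minimal sketch with the ollama Python client could look like the following (the model tag and prompt wording are assumptions; use whichever vision model you have pulled locally):

    import ollama

    # Hedged sketch: caption one extracted frame with a local vision model via Ollama,
    # prepending the trigger word so the caption matches the LoRA's training format.
    TRIGGER = "l1v3w4llp4p3r"
    PROMPT = ("Describe the character and the subtle, continuous motion in this frame "
              "(hair sway, fabric motion, blinking, particle drift) in one short paragraph.")

    def caption_frame(image_path: str, model: str = "qwen2.5vl:7b") -> str:
        response = ollama.chat(
            model=model,  # assumed model tag; substitute your own
            messages=[{"role": "user", "content": PROMPT, "images": [image_path]}],
        )
        return f"{TRIGGER} {response['message']['content'].strip()}"

    print(caption_frame("frame_0001.png"))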

    For more details, see the About this version tab.
    ------------------------------------------------------------------------------------------------------

    For more details see the version description

    Share your results.

    Description

    🎮 Live Wallpaper LoRA – Wan2.2 TI2V 5B Edition

    Live Wallpaper LoRA for Wan2.2 TI2V 5B is a specialized model trained specifically for the unified Text-to-Video and Image-to-Video dense architecture, optimized for superior motion quality and live wallpaper aesthetics.

    🔧 Training Specs:

    • 51 curated video samples at 24fps, 96 frames

    • 5,000 steps across 100 epochs with rank 48

    • Optimized for Wan2.2's high-compression VAE architecture

    • Trained on the efficient 5B dense model (non-MoE)

    Trigger word: l1v3w4llp4p3r

    🎯 Motion Excellence:

    • Superior movement quality compared to larger model variants

    • Enhanced motion fidelity leveraging TI2V 5B's unified framework

    • Consumer-GPU friendly with maintained high-quality output

    ⚡ Performance Advantage:

    • Works seamlessly with Wan2.2 TI2V 5B's fast inference

    • Compatible with high-compression VAE (64:1 ratio)

    • Excellent motion consistency on consumer hardware

    🎯 Perfect for: Live wallpaper aesthetics with exceptional movement quality on budget-friendly setups.

    Note: Optimized specifically for the 5B dense model - motion quality may vary with MoE variants.

    Comments (71)

    xdljllqs186 · Aug 4, 2025

    I noticed you trained on the Low noise model. Do you have plans to release a high noise as well? They work better in conjunction.

    NRDX
    Author
    Aug 4, 2025

    I did train one, but I didn't notice much difference. Still, yes, I will train it again and make it available. The problem with making both available here on Civitai is that a listing can normally only carry one LoRA.

    xdljllqs186 · Aug 4, 2025

    Alissonerdx Nice, I've seen some other people do both and it definitely increases quality. I think the Diffusion Pipe trainer is the one that works for this currently, from what I hear. Your 5B LoRA looks great though :)

    NRDX
    Author
    Aug 4, 2025

    xdljllqs186 Yes, I follow and help people figure out the best ways to train every day (over at Banodoco), and in my opinion training the high model doesn't have many benefits for my type of LoRA, because it's focused on the initial timesteps. But I'm going to test it more.

    GRIJAY · Aug 4, 2025

    Tried your 5B workflow, I'm getting this error, can you help?

    WanVideoSampler

    The expanded size of the tensor (80) must match the existing size (160) at non-singleton dimension 3. Target sizes: [48, 1, 44, 80]. Tensor sizes: [16, 1, 88, 160]

    NRDX
    Author
    Aug 4, 2025

    Have you updated the Kijai WanWrapper nodes?

    AINerd · Aug 5, 2025

    Loving the TI2V 5B LoRA, thanks a lot!!
    When a seamless loop workflow shows up, please let me know :D

    NRDX
    Author
    Aug 5, 2025

    That's great to hear, I'll try to adjust the loop workflows to work with 5b and let you know as soon as I do.

    yx233 · Aug 6, 2025

    Alissonerdx good job

    Fractured_Fate · Aug 14, 2025

    NRDX Looking forward to it!

    bowiba1265909 · Aug 6, 2025

    Hi there. Sorry to bother you, but I am confused. In the Wan 2.2 I2V LoRA the description says:

    "⚡ LightX2V Compatibility:

    Works great with LightX2V v2 (256 rank) for faster inference

    Recommended starting point: Strength 2.0 for both LoRAs to avoid artifacts

    Adjust as needed - sometimes higher strengths required

    Test combinations thoroughly as different LightX2V versions impact quality"

    Does this mean both LightX2V LoRAs (high and low) set to 2.0? Or does it mean LightX2V + your LoRA, both at 2.0?

    Also: the link for LightX2V v2 leads to the T2V one; is that intended, or should I use the I2V one?

    Thanks for the reply and this concept LoRA. ^_^

    NRDX
    Author
    Aug 6, 2025

    Hey, no problem. The 2.0 is for the LightX2V LoRA; I need to improve this description. Also, there are now versions of LightX2V for the I2V model, so you can try those if you want.

    mayleshop · Aug 7, 2025

    Anyone know where to get these files: umt5-xxl-enc-fp8-_ee4m3fn.safetensor, wan2_2VAE_bf16.safetensor, Wan2_2-TI2V-5B_fp8_e4mefn_scaled_kj.safetensor? Thank you :) Jaina

    AINerd · Aug 7, 2025

    the links are under the comfyui documentation https://docs.comfy.org/tutorials/video/wan/wan2_2

    ApplePir · Aug 8, 2025

    Do you think it's possible to add first and last frame support to the workflow?

    LovelaceA · Aug 9, 2025

    I've tried the first-and-last-frame method multiple times since Wan 2.1, trying to create a perfectly looping video, but the method really tends to make the video very static, losing all the subtle motion from Live2D... In Wan 2.2 there is some slight improvement, but the result is still very static much of the time.

    NRDX
    Author
    Aug 9, 2025

    LovelaceA I actually noticed that too. It makes the video more static precisely because you can't loop just any movement; it even makes the character blink more, from what I noticed.

    LovelaceA · Aug 10, 2025

    Alissonerdx Thanks for the reply. That was exactly what I saw. Although I don't know the technical reason behind it, my guess is that making the first and last frame the same forces the latent to become very static to "fit" the 2 identical images.

    ApplePir · Aug 10, 2025

    @Alissonerdx With the Ollama vision model it's getting things very wrong; it's describing Tony Tony Chopper from One Piece as a "young energetic girl" lol. I did change the seed.

    NRDX
    Author
    Aug 10, 2025

    ApplePir It's probably the prompt template; the ideal would be to change it and focus only on the movement. Here's another example of a prompt template:

    Analyze the visual content of this single frame and return a single-paragraph description of up to 120 tokens that starts with l1v3w4llp4p3r followed by a vivid description of the character’s appearance, posture, and imagined dynamic movements as if captured in an ongoing scene. Describe hair sway, fabric motion, subtle breathing, eye movement or blinking, environmental effects like particle drift, light shifts, wind, or rippling water, and any other dynamic elements that could logically be occurring. Emphasize continuous motion and energy as if the frame were part of a seamless animated loop. Avoid static details unless tied to movement. Output only one paragraph, no extra commentary, no special formatting, starting directly with the trigger.

    ApplePir · Aug 12, 2025

    Alissonerdx Sorry for the late reply, that actually created very good results, ty.

    yorgash · Aug 14, 2025

    LovelaceA In Wan 2.2 I get near-perfect results, sometimes even perfect. Though my method is multiple video extensions, with one in-between frame. My last two videos, for example, were made like that.

    ArtificialSweetener_ · Aug 16, 2025

    Pretty desperate to get looping working. All the nodes I tried for injecting last frames took issue with WAN 2.2 5b :[ I got close by injecting "WanVideo Add Extra Latent" from the WAN Wrapper node pack but even though it does seem to loop, it also adds a ton of strange noise on the last 4 frames which correspond to the last latent frame.

    If you have any advice on this I'd greatly appreciate it. Your LoRA is perfect for loops and 2.2 5b is the perfect model for WAN on consumer GPUs!

    PrettyAIGirls With 2.2 5b? Can you send me your version? When I try that it throws errors.

    WhatTheactoualluck · Aug 18, 2025

    ArtificialSweetener_ 
    Did you find a way? 🤔

    OverlayZone I actually did! I even wrote a couple of custom nodes for it that I'll release soon.

    Using WanVideoWrapper...
    1. use WanVideo Encode node to encode your first/last frame.
    2. Connect the samples output of that node to a WanVideo Empty Embeds node
    3. Additionally, connect the samples output of "Encode" to the "WanVideo Add Extra Latent" node. Also connect the image_embeds from Empty Embeds.
    --Now we have our "start" and "end" frames in place. The "start" frame comes from the "extra_latents" input on Empty Embeds. The "end" frame comes from the extra_latents input on "Add Extra Latent" node.

    4. Send your embeds into the WanVideo Sampler and then through the Decoder.
    5. Connect your Decode node to the "Wan Skip End Frame Images" node. This is from a custom node pack, "WanStartEndFramesNative". You don't strictly need this node; if you'd rather do it the complicated way, you just need to manipulate your batch of images coming out of Decode to drop the last 4 frames. These are corrupted frames.

    --Now for the most "complicated" part. In the workflow PrettyAIGirls posted, this exists as the "Magic Stuff" group.
    6. Take the first and last frame from your image batch. This is your first ever frame and then the last frame that was not corrupted; the new last frame after you chopped off the last 4 from step 5. How you do this is up to you - I coded custom nodes for this part specifically.
    7. Send the last frame and first frame through an interpolation node like the one from "comfyui-frame-interpolation". Set multiplier to 3. This will create new frames to replace the end of your loop which was corrupted.
    8. Re-assemble your frames so that your first frame comes first, then the rest of your frames sequentially, with the last frame at the end followed by the new interpolation frames.

    That's it. You're done. You can now assemble these frames into a video, and it will loop seamlessly. You can get fancier by color matching the frames before interpolation; that helps smooth out the loop transition even more. I like the one from the EastColorCorrector custom node pack because it batches them, which is faster. (A plain-Python sketch of the frame bookkeeping in steps 5-8 follows below.)

    I'll release a workflow along side my custom nodes :3
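
    A plain-Python sketch of the frame bookkeeping from steps 5-8 above (hedged: a linear crossfade stands in for the real interpolation node, and the array shapes are illustrative):

    import numpy as np

    def make_seamless_loop(frames: np.ndarray, corrupted_tail: int = 4, bridge_frames: int = 3) -> np.ndarray:
        """frames: (N, H, W, C) float array of decoded frames; returns a loop-ready batch."""
        clean = frames[:-corrupted_tail]            # step 5: drop the corrupted last frames
        first, last = clean[0], clean[-1]           # step 6: grab the loop endpoints
        bridge = [(1 - t) * last + t * first        # step 7: in-between frames, last -> first
                  for t in np.linspace(0.0, 1.0, bridge_frames + 2)[1:-1]]
        return np.concatenate([clean, np.stack(bridge)])  # step 8: reassemble; playback wraps to frame 0

    decoded = np.random.rand(97, 480, 832, 3).astype(np.float32)  # placeholder decoded batch
    print(make_seamless_loop(decoded).shape)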

    Panyx · Aug 20, 2025

    ArtificialSweetener_ What is the purpose of step 3? It seems that the loop can be achieved without it.

    @Panyx Not if the end frame your video ends on is too far away from the start frame. Step 3 exists to ensure that your end frame is close to your start frame so that interpolation can smoothly bring the two together. Interpolation should in theory loop back for you if you give it enough frames, but this gets dicey.

    Pixel_Music_Ai · Aug 24, 2025

    @ArtificialSweetener_ Looking forward to it. I'm really liking 2.2 5B but sticking with 2.1 atm, since I can't get them to loop.

    @Panyx I'd love to see examples!

    meowmeow12345 · Aug 19, 2025

    I tried this a couple of times at strength 1; it really changes the style of the eyes, which I didn't like. Perhaps I could try a lower strength. Just thought I'd mention it, but otherwise very cool!

    LovelaceA · Aug 19, 2025

    When using the LoRA in Wan 2.2, do you have a suggested value for the "Shift" parameter? It seems quite sensitive: 3-5 may lead to frequent mouth and eye movement, while too low (or disabled) may be too static or lead to other problems.

    NerdyPixel · Aug 24, 2025

    Damn dude, this is sick! Super easy to use and great outcomes, amazing job :D

    Pixel_Music_Ai · Aug 26, 2025

    First of all, thank you for the LoRA and workflow! Noob question: I noticed that if I use any other Wan 2.2 LoRAs it throws an error with your provided workflow. Do I have to use specific TI2V 5B LoRAs?

    NRDX
    Author
    Aug 26, 2025

    Yes, because if you use the 5B model you will only be able to use LoRAs made for 5B.

    civitaimaster · Aug 27, 2025

    Hello, thank you for making this public!

    May I ask what resolution the training videos for the Wan2.2 TI2V 5B training were?

    Thank you!

    ArtificialSweetener_ · Sep 5, 2025

    Looping video workflow for 5b version:
    https://civitai.com/models/1931348?modelVersionId=2185956

    I developed a ton of custom comfyUI nodes just for this!

    ArtificialSweetener_ · Sep 12, 2025

    Updated my loop workflow to 1.1. Much more consistent, fixed bug!

    https://civitai.com/models/1931348?modelVersionId=2206587

    LordGrande · Sep 26, 2025

    Sorry, I have this error: RuntimeError: shape '[3072, 3072]' is invalid for input of size 26214400 :(

    durachell · Sep 27, 2025

    Found the issue: 5B checkpoints need a 5B LoRA; I was using the 14B LoRA.

    LordGrande · Sep 27, 2025

    @durachell Thank you! I think I have the same issue here! I will try switching it!

    robbiebunny1883 · Sep 29, 2025

    I'm having issues with WAN checkpoint and VAE compatibility. I tried 2.2 5B with the Wan 2.2 VAE, and Wan 2.2 14B with the Wan 2.1 VAE, and I only get one error after another. What is the correct combination to get this to work? The most common error: WanVideoModelLoader

    Can't import SageAttention: No module named 'sageattention'

    NRDX
    Author
    Sep 30, 2025

    The error you are having is probably because you do not have Sage Attention installed; it is not a model problem. The workflow you are using probably has something in it telling it to use Sage Attention. In that case you can either install it, which is normally a little complicated but possible, or change Sage Attention to Flash Attention, SDPA, or any other attention implementation that does not give an error.
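
    For reference, a tiny sketch of the fallback idea described above (an assumption about how you might pick a node's attention_mode value, not code from any specific node pack):

    # Hedged sketch: prefer SageAttention if it is installed, otherwise fall back to
    # PyTorch's built-in scaled_dot_product_attention (SDPA) instead of erroring out.
    try:
        import sageattention  # noqa: F401
        ATTENTION_MODE = "sageattn"
    except ImportError:
        ATTENTION_MODE = "sdpa"  # available in recent PyTorch builds without extra installs

    print(f"Using attention mode: {ATTENTION_MODE}")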

    robbiebunny1883 · Sep 30, 2025

    @NRDX I tried all those other options and they produce other issues. Do I actually need this workflow to use the LoRA?

    NRDX
    Author
    Oct 1, 2025

    @robbiebunny1883 You can use whatever workflow you want with the LoRA.

    katana88 · Nov 12, 2025

    It's amazing! Works with 8gb vram and the speed isn't horrible at all. Thank you for sharing!

    Loeee · Nov 27, 2025

    Why are there no high LoRA models? I only see low LoRA models

    NRDX
    Author
    Nov 27, 2025

    Because I trained the low model with a 100% range: instead of training it from 0.875 to 0, I trained it from 1 to 0. In other words, it was trained to handle the entire process on its own, both high and low noise. The low model of WAN 2.2 is basically WAN 2.1 with some fine-tuning.

    NRDX
    Author
    Dec 3, 2025

    @LindezaBlue I don't understand; what do you disagree with in my statement? What do you need me to prove to you? Because I can prove everything I said.

    120458 · Dec 3, 2025

    @NRDX The go-to method for WAN 2.2 LoRAs is training two dedicated versions.

    High-noise LoRA: Trained specifically on the high-noise base model (timesteps ~875–1000). This handles the "creative overhaul" phase, big changes in composition, motion, and style that happen early in generation.

    Low-noise LoRA: Trained on the low-noise base (timesteps 0–875). This refines details, characters, and prompt adherence in the later "polishing" phase.

    This is because WAN 2.2's split diffusion process works better when each phase gets its own tailored LoRA, and it also leads to better motion consistency, sharper details, and fewer body artifacts, not only that the LoRAs integrate more seamlessly without fighting the base model's internals.

    Training one model on the full range (1.0 down to 0, basically combining high + low timesteps) is a clever fine-tune in the WAN 2.1 style. It definitely simplifies things if you're not deep into custom workflows. However, this is Wan 2.2, and it's not best practice, because it can introduce issues in the long run.

    - A full-range train might over-optimize for one end (e.g., details in low noise) at the expense of the other (e.g., wonky motion or composition in high noise).
    - High noise needs aggressive training for dynamic changes; cramming it into one model can lead to "slowed-down" actions or deformation (like bodies twisting unnaturally). One trainer even switched from single-model to dual after fixing deformation bugs.

    - You lose the precision of specialized models. Low-VRAM setups (like a 4060) might hit OOM errors easier without the swap options in dual workflows.

    I'm not saying it won't work; I just don't agree that it's the best practice. It's similar to using a 1.5 LoRA on SDXL: will it work? Yes. Is it a good idea? Probably not, if you want the best quality and compatibility.
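
    As a rough illustration of that split (hedged sketch; the boundary value and step count are illustrative, and real samplers handle the handoff internally):

    BOUNDARY = 0.875  # normalized timestep where Wan 2.2 hands off from High to Low

    def pick_expert(t: float) -> str:
        """Return which expert (and matching LoRA) denoises normalized timestep t."""
        return "high_noise_model + high_lora" if t > BOUNDARY else "low_noise_model + low_lora"

    steps = 8
    for i in range(steps):
        t = 1.0 - i / steps  # timesteps run from 1.0 (pure noise) toward 0.0
        print(f"step {i}: t={t:.3f} -> {pick_expert(t)}")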

    NRDX
    Author
    Dec 4, 2025

    @LindezaBlue I know everything you said; search for my name on Banodoco and you'll see that I've been talking about WAN training since the launch of WAN 2.1. Try it yourself: if you generate a video with WAN 2.1 and then take the WAN 2.2 low model and use only that to generate a video, you'll notice that the result is almost the same. The WAN 2.2 low model is WAN 2.1 with some fine-tuning (or do you think they like throwing money away?). It's almost the same model. What the people maintaining One Trainer must have realized is that training both models in a unified way doesn't yield good results, and I've never advocated for that. This is different from what I said about training on the full range. What I'm arguing is that you can train the low model of 2.2 on the full range, since it's essentially Wan 2.1; you don't need the high model to create a video with Wan 2.2. I even discussed this with Kijai himself, and he confirmed it. However, if you want to improve your results in terms of movement, new concepts, etc., then training the high model is important.

    120458 · Dec 4, 2025

    @NRDX Hey, I really don’t want to turn this into a big debate. I completely agree that merging the models does simplify things and it absolutely works well in practice. I’m just sharing my personal take that it might not always be the “best practice” if someone wants the absolute maximum control, since combining them means you lose the ability to adjust the two aspects independently.

    That said, please don’t take this as criticism at all, you’ve done an incredible job with this model, it performs beautifully, and I’m genuinely impressed! Huge props and thank you for sharing it with everyone.

    Just to explain my thinking a little more gently:
    Imagine a guitar amp with two separate knobs, one for volume and one for reverb. Both are useful, and they do different things. If you soldered them together into a single knob, the amp would still sound good, but you’d no longer be able to dial in exactly how loud you want it versus how much reverb you want. It’s the same basic idea here: separate models give you two independent “knobs” to play with.

    Of course, it's not a perfect analogy (nothing ever is with these things), but I hope it gets the idea across. Merging is awesome when you want convenience and consistency, but if someone is chasing the very finest control over their generations, keeping the parameters separate usually feels more flexible.

    Anyway, that’s just my two cents.

    Your work is fantastic either way, and I really appreciate you putting it out there!

    salhashiur136 · Nov 29, 2025

    H

    2490456711680 · Dec 16, 2025

    Excuse me for bothering you, but the image style of the result generated using the workflow in the example video has changed compared to the input image. I'm using the Wan 2.2 GGUF format model. How can I resolve this?

    NRDX
    Author
    Jan 21, 2026

    Do you want this Lora version in one of the newer models, or is this one still usable?

    ElyDK · Jan 22, 2026

    I refresh daily to see if you've made one for LTX-2 haha!

    NRDX
    Author
    Jan 22, 2026

    @ElyDK I am going to do it!

    condaaaa · Jan 29, 2026

    Hi! I noticed that the LTX team should release version 2.1 within a month. It might be worth waiting a bit before training, to avoid spending compute resources on the 2.0 model now.

    Also, I'd like to make a request: would it be possible to train a LoRA focused on a perfect ("seamless") loop? The idea is to guarantee that the first and last frames are identical (or very close), making it possible to create dynamic wallpapers that run infinitely without visible cuts.

    NRDX
    Author
    Jan 29, 2026

    @condaaaa Ah yes, cool, I'll wait. Regarding the loop issue, I believe I can train something using LoRA ICs, but I'll still have to study how to do that, because there must be a reason why no one has done it yet, haha.

    condaaaa · Jan 29, 2026

    @NRDX Actually, combining Wan 2.2 with your LoRA, it's already possible to get about 95% of the way to a perfect loop, since the model natively supports generation based on first and last frames. The problem is that the last 4 frames flicker (they are "dirty frames").

    I noticed that Wan 2.2 encodes the first frame individually in latent space, while the remaining frames are encoded in units of 4 frames. I believe this partially explains why the last 4 frames flicker (they belong to the last unit), but I still haven't found the exact cause of the error.

    Also, I found that using Wan 2.1 in this First/Last frame workflow also reaches about 95% of a loop. 2.1 doesn't have the flickering problem, but the alignment between the start and the end is almost there, missing by very little. Haha, it's a bit frustrating (kind of a dead end).
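
    A small sketch of the frame-to-latent mapping being described, assuming the commonly cited 4x temporal compression of the Wan VAE (values are illustrative):

    TEMPORAL_STRIDE = 4  # assumed temporal compression of the Wan VAE

    def latent_frame_count(pixel_frames: int) -> int:
        """Pixel frame counts are expected to be 4k + 1 (e.g. 73, 97)."""
        return 1 + (pixel_frames - 1) // TEMPORAL_STRIDE

    for f in (73, 97):
        print(f"{f} pixel frames -> {latent_frame_count(f)} latent frames; "
              f"the last latent frame decodes to the final {TEMPORAL_STRIDE} pixel frames")

    # Which is why corruption in the last latent frame shows up as exactly 4 bad
    # frames at the end of the decoded video, as noted in the loop discussion above.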

    Eaglet12IQ · Feb 12, 2026

    Yes, we need a version for LTX 2.

    ElyDK · Mar 17, 2026

    Any luck training a new one?

    loneillustrator · Mar 24, 2026

    Any luck on training a new one for LTX 2.3?

    NRDX
    Author
    Mar 24, 2026

    @ElyDK @loneillustrator I haven't trained this model yet, but I will. I have some other models I'm training. Sorry for the delay.

    ElyDK · Mar 24, 2026

    @NRDX Let me know if there's any way we can support you!

    Straitjacket · Apr 20, 2026

    So the low-noise LoRA goes in both the high and low LoRA loaders, am I understanding that correctly?