🌀 Wan 2.2 T2V UNet (Low VRAM FP8 CLEAN Versions).
I’ve repackaged the official Wan 2.2 T2V UNet checkpoints into low-VRAM FP8 versions, so they’re easier to run on smaller GPUs while still working with ComfyUI video workflows.
Note: the clean versions do not contain any merged LoRAs.
✅ Available Variants
Wan2.2_T2V_High_Noise_14B_Fp8-LowVRAM
Wan2.2_T2V_Low_Noise_14B_Fp8-LowVRAM
🔗 Original Models (credit belongs to them, not me)
These are directly converted from the official checkpoints released by Comfy-Org.
I did not train or create these models — I only adapted them for lower VRAM use.
📌 Notes
Works with WAN 2.2 ComfyUI pipelines.
FP8 format helps with VRAM efficiency, which is especially useful on GPUs with 8GB of VRAM or less.
Quality is close to FP16, but some precision differences may occur.
These are UNet-only checkpoints (not full model packages).
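As a rough back-of-envelope check of the FP8 savings (assuming a 14B-parameter UNet and that every weight is quantized; real checkpoints keep some layers in higher precision, so actual file sizes differ), the weight storage works out like this:

```python
# Rough weight-size estimate for a 14B-parameter UNet.
# Illustrative only: assumes all parameters are quantized uniformly.
PARAMS = 14e9

def weight_gb(bytes_per_param: float) -> float:
    """Size of the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bytes_per_param / 1e9

fp16_gb = weight_gb(2)  # 16-bit floats: 2 bytes per parameter
fp8_gb = weight_gb(1)   # 8-bit floats: 1 byte per parameter

print(f"FP16 weights: ~{fp16_gb:.0f} GB")  # ~28 GB
print(f"FP8 weights:  ~{fp8_gb:.0f} GB")   # ~14 GB
```

Halving the bytes per parameter is why the FP8 UNet fits in VRAM budgets where the FP16 one does not, at the cost of the small precision differences noted above.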
⚠️ Disclaimer
I am not the creator of Wan 2.2 or these checkpoints. Full credit belongs to Comfy-Org and the original developers. This upload only provides low VRAM-converted versions of the original UNets.
Comments (13)
I would love to see some of the work you have created with these Wan 2.2 low-VRAM models :)
I have no idea what sorcery you've performed here, but whatever it is, it is legendary on a whole different level!
I loaded my usual WAN 2.2 workflow - usually on my RTX5070 12GB, I'd expect the following:
720x720, 81 frames - takes about 4-5 minutes with Lightx2v ... here it took about 80 seconds!
so I tried 720 x 720 121 frames, which would usually net me 99% VRAM usage and either an OOM or a silent crash... less than 3 minutes later I had a video!
Feeling daring and a bit reckless, I thought let's try 728 x 1024 at 81 frames, surely a guaranteed OOM... Well, the generation is about to complete after 4 minutes of running!
I've tried many many diffusion models, workflows and all sorts. I've struggled with GGUFs (which I'm convinced are worse than just running a huge FP8 model) and had more memory issues than I can count... but this... this is magic!
Bravo maestro!
Glad it's working like magic for you :)
Can you test this model too for me? https://civitai.com/models/1997508/wan22-ultima-14b-t2v-low-vram-fp8
I have an RTX 5090, so it's hard for me to test it out lol.
@OmegaWPN I would love to 😊 I'll get it downloaded when I'm back at my PC tonight and will let you know how it goes! 😁
Bro, I have a request. Can you make some FP8 T2V UNet models for high/low with the high/low Lightning 4-step LoRAs merged in?
All of the models that include the accelerator loras merged in also have a ton of other loras merged in too, and it's always been annoying there's not just a simple Lightning+Wan Unet file out there like they made for 2.1.
It would be so convenient to have those.
Yes, I can do that, but which LoRA are you talking about?
Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1 - HIGH_NOISE
Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1 - LOW_NOISE
@OmegaWPN yeah, those Lightning LoRAs for T2V, as in: merge the high FP8 UNet model with the high Lightning LoRA, and merge the low FP8 UNet model with the low Lightning LoRA.
Thanks a billion bro.
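For anyone curious what such a merge actually does: the core of it is folding the LoRA's low-rank update into the base weight so no separate LoRA file is needed at inference. A minimal NumPy sketch of the math (shapes, rank, and scaling values are illustrative; a real merge iterates over the safetensors keys and matches Wan's layer naming):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes; a real Wan 2.2 UNet layer is far larger.
out_dim, in_dim, rank = 8, 6, 2

W = rng.standard_normal((out_dim, in_dim))  # base UNet weight
A = rng.standard_normal((rank, in_dim))     # LoRA "down" matrix
B = rng.standard_normal((out_dim, rank))    # LoRA "up" matrix
alpha, strength = 2.0, 1.0                  # typical LoRA scaling knobs

# Merged weight: W' = W + strength * (alpha / rank) * (B @ A)
scale = strength * alpha / rank
W_merged = W + scale * (B @ A)

# The merged layer now behaves like base-plus-LoRA for any input,
# with no LoRA weights left to load at runtime.
x = rng.standard_normal(in_dim)
assert np.allclose(W_merged @ x, W @ x + scale * (B @ (A @ x)))
```

The high-noise and low-noise UNets would each get this treatment with their matching Lightning LoRA, which is why the result has to be repacked and retested rather than just zipped together.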
@Griphen116 I can do that, but it will take some time to combine, test the weights, repack everything, etc.
@OmegaWPN understood.
I would, but I barely understand how to use pipelines or merges with ComfyUI and barely have time to learn.
Thanks again.
WOW, for my first test this works really well and fast!!!
Update: this is incredible. RTX 3080 12GB + 64GB RAM: 200 frames @ 768×768, 8 steps, shift 20, in 1 min 30 sec.
Can you test the T2V version too with the same workflow you're using? I have an RTX 5090, so I can't fully test it out, sadly.
https://civitai.com/models/1997508/wan22-ultima-14b-fp8