Modified workflow based on the AI Verse model (the original AI Verse version is linked here).
I tested this workflow on an RTX 3060 with 12GB of VRAM and 34GB of system RAM, at 480x848 and 576x1024 resolutions.
I replaced the original model loader with the Load Diffusion Model node, added the Power Lora Loader (rgthree), and switched the export to Video Combine (VHS).
It works, but avoid running other tasks in the background, as the workflow uses nearly all of the system's resources.
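For reference, here is a minimal, hypothetical sketch of how those modified nodes might look in an API-format export of the workflow, written as a Python dict. The node IDs, link targets, and the Power Lora Loader input layout are assumptions; class_type strings and input names can vary with ComfyUI and node-pack versions, so check your own export rather than copying this verbatim.

# Illustrative only: rough shape of the modified nodes in an API-format
# ComfyUI export. Node IDs ("1", "2", "4") and link references are placeholders.
modified_nodes = {
    "1": {  # "Load Diffusion Model" node
        "class_type": "UNETLoader",
        "inputs": {
            "unet_name": "hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors",  # example filename
            "weight_dtype": "fp8_e4m3fn",
        },
    },
    "2": {  # Power Lora Loader (rgthree); the lora_1 layout is an assumption
        "class_type": "Power Lora Loader (rgthree)",
        "inputs": {
            "model": ["1", 0],
            "clip": ["3", 0],  # "3" = DualCLIPLoader node (not shown here)
            "lora_1": {"on": True, "lora": "your_fastvideo_lora.safetensors", "strength": 0.6},
        },
    },
    "4": {  # Video Combine (VHS) export node
        "class_type": "VHS_VideoCombine",
        "inputs": {
            "images": ["5", 0],  # "5" = VAE decode node (not shown here)
            "frame_rate": 24,
            "loop_count": 0,
            "filename_prefix": "HunyuanFastVideo",
            "format": "video/h264-mp4",
            "save_output": True,
        },
    },
}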
Base model: Hunyuan Video FastVideo 720 Fp8 e4m3fn - Place in: ComfyUI_windows_portable\ComfyUI\models\diffusion_models
Dual CLIP Loader: clip_l.safetensors and llava_llama3_fp8_scaled.safetensors - Place in: ComfyUI_windows_portable\ComfyUI\models\text_encoders
VAE: hunyuan_video_vae_bf16.safetensors - Place in: ComfyUI_windows_portable\ComfyUI\models\vae
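If you want to double-check the file placement, here is a small helper sketch (mine, not part of the workflow) that verifies the files above are in the folders ComfyUI expects. The exact filenames are assumptions based on the list above; adjust them to match what you actually downloaded.

# Checks that the listed model files exist in the expected ComfyUI folders.
import os

COMFY_ROOT = r"ComfyUI_windows_portable\ComfyUI"  # adjust to your install path

expected = {
    r"models\diffusion_models": ["hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors"],
    r"models\text_encoders": ["clip_l.safetensors", "llava_llama3_fp8_scaled.safetensors"],
    r"models\vae": ["hunyuan_video_vae_bf16.safetensors"],
}

for folder, names in expected.items():
    for name in names:
        path = os.path.join(COMFY_ROOT, folder, name)
        print("OK" if os.path.isfile(path) else "MISSING", path)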
You can leave everything else as default, or follow the notes. For low VRAM, try decreasing the tile_size and overlap values.
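As an illustration of that low-VRAM tip, this is roughly how smaller tiled-decode settings might look in an API-format export. It is a hedged sketch: the node ID and link references are placeholders, the values are just a starting point, and the available inputs depend on your ComfyUI version.

# Illustrative only: lower tile_size and overlap for low-VRAM decoding.
vae_decode_tiled = {
    "8": {  # placeholder node ID
        "class_type": "VAEDecodeTiled",
        "inputs": {
            "samples": ["6", 0],  # latent from the sampler (placeholder link)
            "vae": ["7", 0],      # hunyuan_video_vae_bf16 loader (placeholder link)
            "tile_size": 128,     # lower this first if you run out of VRAM
            "overlap": 32,        # roughly a quarter of tile_size
        },
    }
}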
Description
Original version.
Comments (9)
Hi! Your workflow ran (which is a plus, a lot of them don't even get that far), but the result was... odd. I can vaguely see the intended output, but it's covered in hundreds of tiled, staticky color blobs. Is there a setting I messed up?
Hello! I'm not sure, but it might be related to the LoRA's strength (something similar has happened before). Try reducing the strength gradually.
I usually test from 0.6 and increase or decrease it according to the result.
Also, check whether the LoRA's page has instructions about the recommended strength, steps, or CFG.
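If it helps, here is a rough sketch of how you could sweep the strength automatically instead of changing it by hand. It assumes ComfyUI is running locally on the default port 8188 and that the workflow was exported with "Save (API Format)" as workflow_api.json; the node ID "2" and the lora_1 input layout are placeholders you would need to read from your own export.

# Queues one run per LoRA strength value through ComfyUI's HTTP API.
import copy
import json
import urllib.request

with open("workflow_api.json", "r", encoding="utf-8") as f:
    base_prompt = json.load(f)

for strength in (0.4, 0.6, 0.8, 1.0):
    prompt = copy.deepcopy(base_prompt)
    # Placeholder node ID / input layout for the Power Lora Loader (rgthree).
    prompt["2"]["inputs"]["lora_1"]["strength"] = strength
    data = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(strength, resp.read().decode("utf-8"))  # prints the queued prompt_id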
Video tutorial: Fast Hunyuan GGUF
Maybe I'm just being dumb, but I downloaded the FastVideo base model, the LoRA, and the dual CLIP loader models, and used your workflow. When I check, I don't see any difference; it seems as slow as regular Hunyuan Video. With the same options the output looks the same, and when I change the model the generation time doesn't change.
Hello.
That's strange. How many steps are you using?
Also check which base model version you're using; I tested here with hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors.
If the problem persists, let me know here.
This works so well on my RTX 4070 and it's nice and easy to understand - would you consider making a Vid2Vid version?
Thanks for the tip!
Yes, I plan to do a vid2vid. My first attempt wasn't 100%. As soon as I get a good result I'll post it.
@alceman Great news, all the best
This one worked on an Alienware with an RTX 4080 GPU (12GB), and it runs very fast! Congrats. My respects.
