A workflow that uses the latest Wan2.2 Animate model to swap faces.
Free try-out link of the workflow: https://www.runninghub.ai/post/1970320770746490881
Please watch the video for detailed instructions before you start.
If you find this helpful, please like the video and subscribe for more.
Video generation is resource-heavy. If your local machine cannot run this workflow, or you just want to see how good the model is and check all the workflow parameter settings before downloading, open the link above to run it online for free on an RTX 4090. Just click and run; no brain-racking local setup.
You get 1000 credits upon signing in through the link above (one generation only takes about 10-30 credits), plus an extra 100 credits on each daily login.
Comments (10)
An effective workflow; the only disadvantage is that it takes too long... one output needs 22 minutes on a 4090. I don't know why.
The WanVideo sampler node #63 takes 1040.561s:
Frames 0-54:    100%|██████████| 4/4 [03:57<00:00, 59.31s/it]
Frames 54-108:  100%|██████████| 4/4 [03:36<00:00, 54.03s/it]
Frames 108-162: 100%|██████████| 4/4 [03:38<00:00, 54.67s/it]
WanAnimate: Padding pose latents from torch.Size([1, 16, 7, 104, 60]) to length 14
Frames 162-216: 100%|██████████| 4/4 [03:29<00:00, 52.33s/it]
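The per-window times in the log roughly account for the reported node total. A minimal sanity check, assuming the four window durations above and attributing the remainder to overhead (model loading, VAE decode, padding):

```python
# Per-window sampling times from the log above (mm:ss -> seconds)
windows = ["03:57", "03:36", "03:38", "03:29"]

def to_seconds(t):
    m, s = t.split(":")
    return int(m) * 60 + int(s)

sampling = sum(to_seconds(t) for t in windows)
print(sampling)                    # 880 s of pure sampling

total = 1040.561                   # reported time for sampler node #63
print(round(total - sampling, 1))  # ~160.6 s of non-sampling overhead
```

So roughly 85% of the node's time is the sampling loop itself; the rest is fixed overhead per run.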
+1, same issue for me
Yes, it is weird; on a 4090 it should take less than 300 seconds. What is your output resolution? Try changing the frame window size to 77, since you have a 4090.
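To see why the window size matters: the log above samples 216 frames in windows of 54, so the sampler loop runs four times. A hypothetical sketch (illustrative names, not the actual node code; real windowed samplers also overlap adjacent windows, which this ignores):

```python
import math

def num_windows(total_frames, window_size):
    """Minimum number of sampling passes to cover all frames."""
    return math.ceil(total_frames / window_size)

print(num_windows(216, 54))  # 4 windows, as in the log: 0-54, 54-108, ...
print(num_windows(216, 77))  # 3 windows -> one fewer sampler pass
```

A larger window means fewer passes and less per-window overhead, at the cost of more VRAM per pass.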
@FaboroHacks I use the default parameter settings: downloaded the workflow, just changed the LoRA and model paths, uploaded the input, and generated.
@sekaiwlc07860 Did only the first run take that long, or all of them? The first run can take longer.
@FaboroHacks I ran it twice and both took the same time....
Hi :) I would like to try it, but where can I find some sample videos of dancing people? ^^"
For some reason this gives OOM when other workflows do not. It also seems inefficient that it goes back to sampling 2-4 times instead of doing everything at once.
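The multiple sampling passes are likely the memory trade-off, not waste: self-attention memory grows with the square of the sequence length, so splitting the clip into windows sharply lowers peak VRAM. A rough illustrative sketch (not the actual node code):

```python
def attn_score_elems(seq_len):
    """Elements in one self-attention score matrix over seq_len positions."""
    return seq_len * seq_len

full = attn_score_elems(216)    # sampling all 216 frames at once
window = attn_score_elems(54)   # one 54-frame window, as in the log
print(full // window)           # 16: each window needs ~1/16 the score memory
```

That factor of 16 per window is why the windowed run fits where a single-pass run OOMs, at the cost of repeating the sampling loop.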