This is a high-noise LoRA I trained on Wan Video 2.2 I2V-A14B. You can connect it to both the high-noise and low-noise pipelines simultaneously, with the LoRA weight set to 1.0.
Its purpose is to reproduce one specific dance move. You can use it purely for entertainment or in your film and television work.
I trained this model on a single RTX 4090 (24 GB) for approximately 6 hours; this release is the checkpoint from training step 2400. The training set consisted of eight vertical-resolution clips, so results are better when you generate at a vertical resolution. In my tests it performs well with full-body shots and clothing such as short-sleeved tops and shorts.
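For context, the numbers above imply a rough per-step training time. This is a sketch of the arithmetic, assuming the full 6 hours covered all 2400 steps (my reading of the card, not something the author states explicitly):

```python
# Rough per-step training time implied by the card:
# ~6 hours of training for the step-2400 checkpoint on one RTX 4090.
total_seconds = 6 * 3600   # approximate total training time in seconds
steps = 2400               # checkpoint used for this release

seconds_per_step = total_seconds / steps
print(seconds_per_step)    # -> 9.0 seconds per step
```

So each optimization step took roughly nine seconds on this hardware, which is plausible for a 24 GB consumer card training an A14B-scale LoRA at modest resolution.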
The video shown on this page was generated with this LoRA on top of the Q8-quantized base model at 8 sampling steps. With a higher-precision base model and more steps, you should be able to produce higher-quality videos.
Note that the model is not very stable yet, so you may need several attempts (re-rolls) to get a good result. If people like it, I will keep optimizing it and release an improved version.
You can use the following text as the trigger prompt (the Chinese part reads roughly: "the woman in the frame stands steadily on one foot, raises the other leg straight up, and leans her body backward in a dynamic, high-tension pose"): (HUPOYZM,画面中的女性,以单脚稳稳站立,另一条腿向上抬起并伸直,身体向后倾斜,呈现出富有张力的动态姿势。)
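LoRA trigger phrases like this are normally just prepended to the rest of the prompt. A minimal sketch of assembling the final prompt (the helper function and the simple-concatenation approach are my assumptions, not part of the author's instructions; the trigger string is copied verbatim from the card):

```python
# Hypothetical helper: prepend the LoRA trigger phrase to a scene prompt.
# The trigger string is copied verbatim from the model card above.
TRIGGER = ("HUPOYZM,画面中的女性,以单脚稳稳站立,另一条腿向上抬起并伸直,"
           "身体向后倾斜,呈现出富有张力的动态姿势。")

def build_prompt(scene: str) -> str:
    """Combine the trigger phrase with scene-specific prompt text."""
    return f"{TRIGGER} {scene}".strip()

print(build_prompt("full-body shot, short-sleeved top and shorts, vertical"))
```

Per the notes above, a full-body vertical framing with simple clothing descriptions tends to work best with this LoRA.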
Running video models locally requires a powerful machine. If you want to try it anyway, you can click the link below: I have turned this LoRA into a hosted AI application. Registering through my link gives you 1,000 points, plus 100 points each day you log in. With those points you can generate plenty of images and videos for free!
https://www.runninghub.cn/ai-detail/1960617834315472898/?inviteCode=rh-v1158
Comments (10)
A bit worried that not one of the examples is good quality, especially what's happening to their hands.
It's just user error. I've used it, and it works extremely well.
@fox23vang226 ironic when the author can't make it work well
@duAIguy I think our workflows just suck, its very hard to find decent workflows, everyone gate keeps them. Even my own samples and workflow sucks. Im doing something wrong in my workflow because my models faces are always blurry.
@fox23vang226 that's not true at all. People just don't know how to look for them. I've been using the Fusion-X Ingredients workflows, which are available here, and they work great for both Wan2.1 and Wan2.2.
The number 1 tool developer on this site (Umeairt) gives away not just a whole gaggle of workflows but setup scripts that install sage/triton/pytorch, while people tell themselves and each other that sage and triton somehow don't work well on Windows. To say nothing of everything folks like kijai give away. No, it's that this stuff is too hard for them; they can't follow instructions and/or don't know how to help themselves.
I get it, there are a lot of posers on YouTube and elsewhere just regurgitating the same errors and the same bad workflows (e.g. benji), but nothing being gatekept behind a Patreon is unique. Nothing.
@fox23vang226 to your point about it working, yes, I see people already using it looking at recent video uploads. It does in fact appear to work well.
It works amazingly! It works in any position: side, back, and front. I'm amazed at how well it understands what it needs to do; even when the heels come back down and slap the ground, it's realistic. Posted some sample vids; they should show up in the feed in an hour because civitai flagged them for some reason.
Your sample vids look a thousand times better than the original samples. I was going to skip it because it looked so much worse than the Wan 2.1 lora out there, but your videos changed my mind. It looks far better than the 2.1 lora.
Why is everything so slow?
Hey, read this article: https://www.reddit.com/r/StableDiffusion/comments/1mr0sep/how_to_fix_slow_motion_in_wan_22/ — that helps. Happy New Year.