SMOOTHMIX WAN 2.2 T2V v3.0 UPDATE! - 03/14/2026
Just tweaked the effects of the prompts "smoothmixanime" and "smoothmixrealism", as well as realism in general.
All videos on the High and Low Showcases were made using "WAN 2.2 Smooth Workflow v4.0" with these settings: 900x600 resolution / 8 steps / Euler sampler / simple scheduler.
Just like T2V v2.0, it has lightx2v baked in.
The effects of the prompts "smoothmixanime" and "smoothmixrealism" were a little too strong - now you need to complement them with more prompts related to the visual style for the full effect. Adding "Realistic Style" or "Anime Style" prompts should be enough. ^^
By popular demand (lol) you can make more normal-sized breasts now - no flat chests though, sorry flat chest lovers.
More detailed skin if you go for more realistic-style videos - as long as you don't use the "smoothmixrealism" prompt. In that case the skin will automatically be very smooth.
Added some Abstract concepts to it! It adds more variety and colors to the results.
GGUF MODELS For I2V v2.0 and T2V v2.0!
Great news for those that need GGUF versions!
The user @BigDannyPt managed to convert SmoothMix WAN 2.2 Img2Vid v2.0 and SmoothMix WAN 2.2 Txt2Vid v2.0!!
Be sure to thank him for his efforts! =D
GGUF - SmoothMix WAN 2.2 Img2Vid v2.0
GGUF - SmoothMix WAN 2.2 Txt2Vid v2.0
SMOOTHMIX WAN 2.2 I2V v2.0 UPDATE!
For more info about the update and differences between versions check out this article.
All videos on the High and Low Showcases were made using "WAN 2.2 S. Workflow v2.0" with default settings except the resolution - they all used 900x600 on the workflow.
The Lightx2v LoRA is NOT merged this time, so be sure to pick whichever LoRA you prefer to accelerate generation, and choose how much weight to give it - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set with weight 3.0 on High and 1.5 on Low.
To render futanari characters and male figures correctly, LoRAs remain essential. Try using mine or any of your favorites.
Be aware that hyper-realistic content may suffer some morphing, since the model will gravitate towards the style of "SmoothMix Animations". That effect can be mitigated a bit by using LoRAs trained only on realistic content and prompts that push towards realism.
Sorry for the irregular posts and updates. I’m currently pretty busy and need to reorganize a lot of things, so free time has been scarce. If everything goes smoothly, I expect to have considerably more free time starting in February. Yay ^^
Have fun!
SMOOTHMIX WAN 2.2 T2V v2.0 UPDATE!
IT'S FINALLY DONE! T_T
SmoothMix WAN 2.2 Txt2Vid v2.0 is what the model should have been in the first place - now it can show what can really be done!
Merged with Loras made with only images and videos generated from the SmoothMix Checkpoints!
Very high quality images and smooth animations! Use it with the updated version of the Smooth txt2vid Workflow if you haven't downloaded it yet!
Much - MUCH - more variety of clothes, hairstyles, poses, body types and skin colors.
You can use captions or prompts! Both will work! Use both to ensure what you want is generated!
Fox girls, cat girls, demon girls, oni girls - all the girls (and MILFs) are here. ;)
It responds to the prompts 'SmoothMixAnime' and 'SmoothMixRealism'! All LoRAs merged into it had those key prompts from the SmoothMix Animations Style, and they have the same effect here! Check the SmoothMix Animations Style page for details!
It's completely uncensored, so it should also work MUCH better with NSFW LoRAs. Give it a try. ;)
IT CAN'T generate male anatomy reliably! You are going to need Loras for that! SmoothMix's priority is the ladies.
Smooth Mix Wan 2.2
A Smooth Mix version of the Wan 2.2 A14B!
I tried to make it as versatile as I could - I hope you guys like it!
Every video on the showcase used an image from my Gallery! All of them have a comment with a link to the source image used.
Key Points:
Every video on the showcase was made using my new Wan 2.2 Workflow v2.0/Txt2Video Workflow v2.0 on its default settings. Make sure to use it!
No Loras were used to make the videos on the showcase. Try to make a video without using Loras first.
When using Loras, start by setting their weight between 0.3~0.5 and increase it if necessary.
Recommended Settings
Steps: 4 or 6
CFG: 1
Sampler/Scheduler: Euler a/Normal or UniPC/Simple
Resolutions:
Use the resolution that your setup can handle. As a start I recommend these:
For HighEnd Spec PCs: 560 x 940 - 940 x 560
For Mid Spec PCs: 480 x 720 - 720 x 480
After testing these resolutions adjust as needed.
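For reference, the recommended settings above can be summarized as a small config sketch (Python; field names are illustrative, not actual ComfyUI node parameters):

```python
# Illustrative summary of the recommended settings above.
# These are NOT real ComfyUI node fields - just a readable sketch.
recommended = {
    "steps": 6,                    # 4 or 6; 6 gives a bit more fine detail
    "cfg": 1.0,                    # required when an accelerator LoRA is active
    "sampler": "euler_ancestral",  # "Euler a" in the UI, paired with "normal"
    "scheduler": "normal",         # alternatively UniPC paired with "simple"
}

def pick_resolution(tier: str) -> tuple[int, int]:
    """Return a starting (width, height) by hardware tier; adjust after testing."""
    presets = {"high": (560, 940), "mid": (480, 720)}
    return presets[tier]

print(pick_resolution("mid"))  # (480, 720)
```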
Have fun! =)
Comments (304)
why am i getting this plastic skin always? unlike the og wan 2.2 t2v, this smoothmix always gives plastic skin even when i put it in the negative prompt
You can try SmoothMix as high noise and the regular wan2.2 checkpoint as low noise to get better results, though the movement will be impacted.
Also try SmoothMixRealism in the positive prompt. The smoothmix animations lora was merged into this one and it will respond to it.
smooth mix is not good for realism videos, thats when dasiwa model is so much better, u need to work on ur merge plz
black output
Will there be an fp16 version?
Only a black screen is displayed!
Where?
also have this issue
me too
@lightandmild594
@jshnr006
What? What are the errors? Are you sure you're not using an outdated ComfyUI? Also, I recommend installing Comfy Kitchen.
I don't know what happened, but it worked fine when I used someone else's workflow.
same issue, did you solve it?
THX a lot, dude!! This is amazing. I just downloaded the things exactly as you explained, nothing more. I can make very good stuff, like 5s videos at 1020x1440 generated in 3min. I have a 5090 24GB laptop.
How good is this for cartoon/anime videos?
dasiwawan tends to make my anime/cartoon videos more realistic.
How does i2V 2.0 pair up with T2V 2.0? I've started doing extended videos and noticed when I switched to T2V 2.0 from I2V 1.0 for VACE it made pretty dramatic changes to the output, so I had to revert back to T2V 1.0
Been messing about with diff lightx2v and lightning LoRAs.
Working nice. Still testing some more stuff.
I'll post some now.
Great work. And thanks for the opportunity to do some testing and being part of the process.
I enjoyed listening to you talk about it on Discord. I could tell you were passionate about creating something awesome for the community. ;)
Any lightx2v/lighting loras you recommend?
I downloaded the v2.0 High noise model and any video I try to make produces a black screen. Tried with and without any LoRA. I get unexpected unet block spam errors in my comfyUI terminal, all of which appear to end in '.quant'. I thought maybe I needed to load it as GGUF but it doesn't show up in the GGUF loader. Maybe there is an issue with some merged elements. I will keep trying and perhaps rename it to gguf and try that way as well, but something isn't loading correctly for me.
Piping latent from HIGH Noise to LOW Noise doesn't seem to be producing refinement. It seems to produce the same output as the output from High Noise.
No matter what settings I use, I only get a half-black screen. I downloaded the workflow from this page and also got a black video. I thought it was corrupted and downloaded it again, but I'm still having the same problem.
try updating comfyUI, it fixed the same problem for me
Yes, I updated to the latest version and it worked, thanks.
Produces great results (I didn't use the workflow provided), I tried it without lightning loras and the result is great! I appreciate you posting it without them so I can see the "best" possible result, it's night and day difference.
How many steps and what CFG did you use without loras?
@kliz71 10 steps, 5-6 CFG for SVI at least - 6 steps works too but less fine detail. Low model I use lightx2v rank 256 at strength 1, with 4 steps (start step 4, end step 8), 1 CFG. Simple scheduler, shift 7 all around.
To everyone just getting "black videos":
I run like 3 different local ComfyUI-portable installs. Two of them I don't update, as they use older workflows.
One of them I have updated.
I get black outputs on my two older installs. The updated one works fine.
I suggest updating your comfyui. Or maybe do another install. Don't blame me if you brick a working install though. ;)
Hope this helps someone.
I think it's fp8 mixed
Yes, as you said, Comfyui ≥ 0.4.0 seems required; my output was also black. Since I only use the portable version, I always reinstall new versions from scratch.
@jorihalgo Yep. Not worth bricking already working installs. ;)
Can you please release a I2V v2.0 version with w/e accelerator you use built in? I tried using the lora you recommended, but can't get it to work outside comfy, which I don't use because WanGP runs much better on my 3080 potato.
I want this too - the version with acceleration built in is easier to use.
I was originally using an older version of Comfy, and I found that the output video was black. Then I updated to a newer version of Comfy, and the video appeared, but the movement of the characters in the video was very small, almost static. (I 2 V)
My settings:
resolution 1280*720, 81 generated frames, 6+6 sampling steps, Shift 5, and I added the author-recommended accelerated LoRAs with a high noise weight of 3 and a low noise weight of 1.5. Other than that, I did not add any other LoRAs.
My PC environment:
Ryzen 7 9700X
RTX 5090 32GB
64GB RAM
I suspect it might be due to room for improvement in support for the 50 series graphics cards. But I have no evidence.
I hope someone can help me solve this problem.
I use a 50 series card. While there is definitely less motion overall than Smoothmix I2V v1, there should not be close to none. Try changing shift to 8, and using only 4+4 sampling steps.
If that does not work, try increasing high noise CFG to 2.0 and reducing the accelerated LoRA for high noise to 2.
I am wondering could I2V v2.0 work with original SMOOTH XXX Animations lora? I tried but the results were bad.
Same here, maybe we need to make the weight a lot higher than before.
At least to my understanding reading the description, haven't had the time to try high weight Lora's yet though.
I played around with it for a little while. I like it. Thanks for your work!
My graphics card is a 5090, and I also encountered a black screen issue. After updating to the latest ComfyUI, the generated video is no longer black but appears as a blurry mess.
I've tested numerous CFG and step configurations. Still can't achieve satisfactory results. The resulting visuals are blurry and chaotic.
yep, all results are pure blur after 2 frames of animation. i use 1.5 cfg and 6 steps for each pass, high and low, for a total of 12 steps.
Same, there are no issues with other mods
Why is the generated video blurry when I don't use lightx2v_I2V_14B_480p_cfg_step_distill_rank16_bf16.safetensors for acceleration? The model I'm using is SmoothMix_I2V_v2_High-Q8_0.gguf.
I don't know why my videos always turn out an extremely blurry mess; i tried no loras or anything. It's just cooked for some reason. I'm on a 4090 with 32gb ram, so specs aren't the issue.
same for me. came here to ask that too. maybe we need to use a gguf loader instead of the diff model loader
no, the gguf loader only loads gguf files. doesn't help
@bluenightlagoon I did some testing - adding a dreamer nsfw lora helped. Plus, the prompt has to be really descriptive, so try asking an uncensored AI to formulate one based on the image. But it's still not as clean as I'd like.
@KoujiAI Adding dr34ml4y introduces lots of transformation on anime styles with the new smoothmix version even at lower strengths, really not ideal. I have yet to find a way to not make v2 look like a blurry mess that doesn't transform the character :(
Edit: Using 2.1 lightning LoRAs at 3high and 1.5low seems to be the only way, too bad.
Requesting a version with lightx2v_I2V_14B_480p_cfg_step_distill merged in! The merged version is easier to use!
May I ask why the screen gradually becomes extremely blurry? I have no problem using other models, except for this one
RTX 4090 24GB, RAM 64GB
You're probably missing the "Lightx2v Lora", it isn't merged so u must add it manually. Check out the model description.
how do i slow them down? i specify "slowly" before every verb in the prompt, say all the movement is slow and smooth, put "rapid movement" and "fast movement" in the negative and they STILL move really fast like its a cum speedrun
Are you using NAG? Negatives only work that way. You could also try changing the fps.
If you're using CFG 1, negative prompts have practically zero effect. So as the post before said, either use NAG with CFG 1, or raise the CFG.
@Yc3k what is NAG and raise the CFG to what?
@wannakm It depends on your setup but most of the mixes will have recommendations included in the notes for what CFG is required. Sometimes I'll bump CFG up from 1.0 to like 1.3 just to see if it helps it get past something.
Google says "NAG (Normalized Attention Guidance) is a technique used in ComfyUI to enable negative prompting for video models like Wan2.1, especially when running at high speeds."
Find a workflow that includes it to see how it is used with lightning mixes.
@wannakm Even though the Ghost already replied, let me try to explain it in a bit more detail. CFG is in essence a way to tell the model how much it should follow your prompts vs how much "creativity" is allowed. The higher the CFG number, the more the model will follow your prompts and the less it will tend to be creative with the video. It works basically the same in image generation as well as video. Normally CFG is set to 3.5 or higher. The problem arises when you are using the Lightx2v LoRA. That LoRA enables WAN to make videos in very few steps, like 4-6, making generation much faster than it normally would be - but with that LoRA the recommended CFG setting is 1.0, as that is what it works best with. And if you set CFG to 1.0, negative prompts have no effect on the generation; they are basically ignored by the model completely. To overcome this - since negative prompts can be very important sometimes - there are 2 ways to "include" negative prompts in your generation:
1. Raise your CFG to at least 1.5. This will make the negative prompts work again, but in return the video will take considerably more time to generate due to the higher CFG.
2. Use the NAG nodes explained by the Ghost, which let you keep the CFG at 1.0, keeping generations fast while still taking the negative prompts into consideration and letting them impact the end result.
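The reason CFG 1.0 disables negative prompts falls directly out of the classifier-free guidance formula. A toy NumPy sketch (values are illustrative):

```python
import numpy as np

def cfg_combine(cond, uncond, cfg):
    """Classifier-free guidance: blend the conditional (positive-prompt)
    and unconditional/negative-prompt noise predictions."""
    return uncond + cfg * (cond - uncond)

cond = np.array([0.8, -0.2])    # toy "positive prompt" prediction
uncond = np.array([0.1, 0.5])   # toy "negative prompt" prediction

# At CFG 1.0 the negative term cancels out completely:
assert np.allclose(cfg_combine(cond, uncond, 1.0), cond)

# At CFG 1.5 the negative prediction starts to push the result away:
print(cfg_combine(cond, uncond, 1.5))  # [ 1.15 -0.55]
```

This is also why raising CFG costs time: above 1.0 the sampler must run both the positive and negative predictions every step.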
blurry output as others mention here... why ?
You're probably missing the "Lightx2v Lora", it isn't merged so u must add it manually. Check out the model description.
@boulbi78 you are correct, Lightx2v fixed blurry output :-)
@boulbi78 Man, I can't believe I missed this step! Thank you so much—the videos come out so clear now.
v2 not working? just blurry near-noise video?
UP: i see, lightx2v is mandatory now
thanks, this fixed my issue
how to use that? high pass? low pass? cfg1? steps 1?
Didn't make a difference for some of us unfortunately. Did you just use the lora model and weights DigitalPastel recommends?
@bluenightlagoon sorry it took some time. lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors into both high and low lora, strength 3 for high and 1.5 for low.
@crocusflowerparadigm lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors into both high and low lora, strength 3 for high and 1.5 for low.
Veni vidi vici. - appreciate leaving the light loras out. Relax a bit, skip LTX2 and go for LTX2.1 instead which is supposed to have improvements in i2v. Just mentioning it, in case of a Smooth LTX... :P - Other than that - nice model!
It is recommended to integrate lightx2v and not separate it. Most users are too lazy to test various lora combinations by themselves.
it gets updated all the time so he'd have to update his too. Sometimes you don't want to run it on the high noise model, and there are all sorts of variants like 1130 and 1022 combination that I thought was the standard but now there is 1217 too. Best to leave it off, or use v1 if too lazy
Any tips? Tried doing the same img2video I made with the previous version; of course I used the lightx2v lora so no blurry issues. But I had much better outputs using the previous version. So I'm wondering what we have to change from the previous version in our workflows - and I'm not talking about the lightx2v lora. Used the exact same settings with the lora and got really bad results.
Thanks for your hard work !
For people with blurry images:
I am not sure if this will help but I had to mess about with the lightx2v LoRAs a bit to get my videos working:
Using the Smoothmix wan2.2 workflow, I leave the shift at 8.
I run 4/4 (8steps) but I have changed the CFG from 1 to 1.2 for both ksamplers.
1st KSampler: start at 0, end at 4.
2nd Ksampler: start at 4, end at 10000.
I use the recommended lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16 LoRA in high noise at strength 1.0 but then wan2.2_i2v_A14b_low_noise_lora_rank64_lightx2v_4step_1022 in the low noise LoRA, also at strength 1.0.
Setting strengths of 3.0 and 1.5 etc gave me terrible results. (Although I need to test this more)
I bounce my videos at 720x1024.
Lastly, I also use the Dr34ml4y LoRA quite a lot, even at low strength in high and low LoRA (e.g. 0.25), or something like a bouncing boobs LoRA; it seems to give me a lot of stability.
I suggest testing the above and seeing if it helps anyone?
Cheers.
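To clarify the KSampler start/end values mentioned above, here is an illustrative sketch of the two-stage high/low-noise split (hypothetical helper, not actual ComfyUI code):

```python
def split_steps(total_steps: int, high_noise_steps: int):
    """Illustrative two-stage KSampler split for WAN 2.2 high/low noise models.
    The first sampler handles steps [0, high_noise_steps); the second picks up
    at high_noise_steps and runs through the last step (ComfyUI's 'end at
    10000' simply means 'to the end' - it is clamped to total_steps)."""
    first = (0, high_noise_steps)            # high-noise KSampler range
    second = (high_noise_steps, total_steps)  # low-noise KSampler range
    return first, second

print(split_steps(8, 4))  # ((0, 4), (4, 8))
```

The key point is that the second sampler must start exactly where the first one ends, otherwise steps are skipped or repeated.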
Please: a version with the Lightx2v LoRA merged in.
First off, I think it's a great idea to keep the acceleration loras separate. What makes the best acceleration lora is changing all the time, so it will keep your checkpoint more relevant and useful without you having to always change it.
One issue with the latest version is it doesn't really work with SVI Pro, which is effectively the new "meta" for Wan 2.2, so we can't get the benefits of longer generations with your smoother motion.
Outputs don't seem to respect the reference latents that SVI uses. If I swap this checkpoint into an existing workflow I get really bad blurs and cosmic horror, lol.
Hmm, using v2.0 i2V every video I make has an insane amount of blur, using the recommended workflow and settings too :(
yeah u need the light lora he recommends, otherwise it will always look like shit
high lightx str=3, low=1.5 i missed that first too
When using the V2 model, you have to add the lightx2v acceleration LoRA model yourself; otherwise, the result will be blurry. This is because this acceleration model was not merged into this version.
For those whose hardware isn't that powerful, you can refer to my combination:
GGUF - SmoothMix WAN 2.2 Img2Vid v2.0 Q6K + lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16 gives better results than GGUF - SmoothMix WAN 2.2 Img2Vid v2.0 Q8 + lightx2v_I2V_14B_480p_cfg_step_distill_rank64_bf16.
This is about the i2v v2 version published Jan 10, 2026.
Pros:
- keeps good character adherence. Details like eyes and facial features are generally well preserved.
- good scene adherence. It is common for things like the penis or other details to disappear during a foreplay scene, whereas Smoothmix does a good job of preserving unprompted details.
- good micromovements. Hard to say exactly what this is, but in general feels less 'cookie cutter' and has more life to them.
- separating out the lighting LORAs is a great move since SVI2 Pro has some slowdown issues which can be resolved through tweaking the LORA strength.
Con:
- male genitalia warping: I know it has been mentioned already that the model does not support male genitalia. However, this has caused issues beyond what is probably intended. To give some examples, sometimes a nude male can just walk across the scene and the penis will randomly stick onto a surface. Other times a surface will liquify and merge with the penis for no apparent reason, just because the penis is nearby. For more hardcore scenes, it makes insertion scenes much more difficult to create, often needing the most literal base image for a chance of success. More dynamic scenes are very difficult to do.
EDIT: Smooth's 'Futa and male genital' LORA solves a bit of the main penis issue, though there are still some issues with the penis 'glooping' in some generations.
how exactly do you deal with svi2 pro slowing down? I'm having slowdown only on smoothmix I2V V2, while V1 works fine
@vamorand according to the official SVI page, reducing the lighting LORA strength can help. Apparently removing the Inspector Blend node from the official SVI workflow helps too. I am currently using AIstudynow's SVI2 Pro workflow with Smoothmix and it is currently fine (google AIstudynow SVI2 Pro workflow), works with other models as well.
Lastly for the male genitalia issue from my first post; I downloaded and used the Smooth's 'futanari and male genital' LORA and the issue is now not as extreme. There is some occasional sticking issues such as when the tip of the penis blends with the tongue in some i2v generations, but I've seen this in other models as well so I won't really count it against this model.
@McClippy , thanks for reply. I currently tweaked my workflow a bit, and now it seems fine for my purposes (removed shift from SD3 node, downloaded exact lightx2v model with 128 rank instead of 256)
The slow-mo speed-up works well on this model, but I don't know why my characters do some weird stuff with it. When I switch to the standard wan 2.2 models, same wf and same seed, none of those weird things happen - but then I suffer the slow-mo effect.
So, it seems we still can't win at the moment.
I used your new model with your recommended light lora and weights etc and I'm just getting a black screen.... any ideas?
Update your comfyUI then it should work.
I2V v2.0 is not working properly - the output videos are over-blurred.
Why does V2 perform worse than V1 in terms of motion dynamics and prompt following/accuracy?
If you're experiencing blurriness issues like I was, check your workflow. In my case, simply restarting ComfyUI reset the resolution to 0x368 for no reason.
0x368 where? I've tried different workflows and in all of them the output is totally blurred =/
@rodszera Same here. I have the blurry problem
Reading the instructions more carefully, it's indicated that you need the corresponding lightx2v in the high/low lora fields. The link is provided in the instructions. That solved the blurriness for me.
Hi, if I have an RTX 5080, should I use this version of GGUF Q8?
No - in fact, GGUF is mainly suitable for 30xx-series or earlier graphics cards.
@AI_Master_Workflow works fine with a 4080 super. No version of WAN gives problems as long as it fits.
Wow, I truly love this. Excellent job!
I used the normal workflow and the results came out blurry - why?
The Lightx2v LoRA was removed in this round of training, so everything generated comes out blurry. To fix the blur you have to add that LoRA manually, but in my testing it still isn't as good as v1.
Well Done!
This new version understands prompts a lot better, thank you🙏
Yes it's good idea
@wealthyturkeyvvkj533 Thanks, if you can create one, that would be great. Because the ones I found here on civitai take 40 minutes to make a 5-second clip :(
@wealthyturkeyvvkj533 @ApexStorm_Ai Did you guys try opening DigitalPastel's profile? The owner of this smooth mix checkpoint? Hahaha come on guys, you're not even trying!
Seems he has a workflow for this ckpt: https://civitai.com/models/1847730/smooth-workflow-wan-22-i2vt2vfirst2last-framemmaudio - also look at more civitai suggestions, this creator has a lot of content!
@wealthyturkeyvvkj533 @ApexStorm_Ai If you are using it, please let me know if it is good for making facial expressions in 16:9 portraits (SFW faces :)
I just tested this version and the video rendering is really awful. I don't know if it's because you shouldn't use the high-noise and low-noise models together, but rather separately. Well, yes, I can confirm that. Check out my videos posted here.
I use SMOOTHMIX WAN 2.2 I2V v2.0 with my special model FEMBOY WAN2.2-REALFEMBOYMix_LOW_00002_ to get a good result.
Instead of https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors on low noise, use this one: https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Lightx2v/lightx2v_T2V_14B_cfg_step_distill_v2_lora_rank128_bf16.safetensors - it gives much better results at 1.5 weight.
On high use whatever u like.
Thx @Alberist, I just saw that he was using this version and gave it a try.
Sorry, where did you see this? Could you possibly share the workflow?
yo bro could you post the comparison
what about high noise? do you put the same one on both?
"For HighEnd Spec PCs: 560 x 940 - 940 x 560
For Mid Spec PCs: 480 x 720 - 720 x 480"
What do you consider HighEnd Spec and Mid Spec?
Is 16 GB VRAM even Mid Spec? (48 GB RAM)
i have an rtx 5060ti 16gb vram and 32gb ram; at 480 x 720 it took me 120 secs for a 5 sec I2V
The faster motion indeed does work! But I find that smoothmix has slightly lower quality and the color is slightly washed out compared to standard wan 2.2 model. Anyone else notice this?
It's only subtle, not a huge difference, but I can notice it.
Yes, I noticed that too.
I found it drops to low quality from the 2nd frame.
@1589673625848 yes, smoothmix degrades faster over time it seems.
hey @DigitalPastel Does this version ONLY lack the Lighting Loras, unlike the previous version? Is everything else the same?
It's a pretty big departure. For all the changes check this article.
@DigitalPastel so if I'm understanding this correctly, it merges all the loras under https://civitai.com/models/2040641 except for the newest "Futanaris and Males" so we don't need to load those loras anymore?
Took my time updating from the previous generation of I2V but the new version is really worth it overall. Only thing I'm having trouble with is generating small breasts (it seems to really want to default to big ol knockers) but otherwise really happy with this
I mean who doesn't like them big ol' tiddies... :)
variety
@seaborgiumthemad I'm messing. I hadn't noticed this but then I don't think I've tested for it.
@NyxxiNyx To each their own - I like big ones if they are well-formed, but once we get into the 'huge' department they are a huge turn-off for me.
Same with badonkies.
I'm just getting a black output with the 2.2?
Do you have the right files? The GGUF files are larger than these?
same, i could only get the gguf files to work in any respect. wanted to use the safetensors, and used the given workflow - i only get blank/black output as well
@Fleshcrafter ok, you need to use the smooth xxx animations high and lows in the loaders, or something like that - you can't pipe em through like older versions, i guess
same still don't know how to fix this
The same
how can I speed up the motions? they come out a bit slow, although i have "fast motions" in my prompt
rapid "action", rapidly, swift "action", swiftly, fast "action", her breasts are bouncing on impact, her body shakes on impacts, rough "action", violently, violent "action", the shaft goes in and out on full length, the penis is inserted up to the base, rough sex, hard sex, looping motions
Use these in your prompts and it will improve the motion.
Increase the CFG value.
@fdefreeza109 Well, if you increase CFG your gen speed slows down like 1.5-1.7x with not that big a difference in the output video. It is better to use WanNag, which lets you provide a negative prompt without the time loss, and it works well.
well, i think i figured it out - it was wrong lora weights, had to decrease them like 2 times
Anyone care to share how to get this to work correctly without it becoming a blurry mess? Does it specifically need rank128 lightning lora? A very specific workflow?
No matter what I generate, everything is blurry, but it works fine when I switch back to the old version.
mine is blurry as well - the first i2v worked well, and i use the older workflow with a few things deleted, like saving images
It works if you follow exactly what is explained in the guide about adding the lighting loras.
Great work. Which tool are you using for merging?
I don't know if it's just me, but it seems as if there is a lora and diffusion model mismatch. I keep getting a "lora key not loaded" error and a HUGE block of output essentially saying that the lora wasn't used... I got early access to this, so I hope I can get some help getting this resolved... thanks
The lightx2v lora introduced here is for WAN 2.1, so mismatched keys will be listed in the console, but it actually works fine - you can ignore that output.
Is it normal for I2V V2 to give really bad results when NOT using any lightning loras?
i can get it to work, but the prompt part is not working - from the Qwen VL prompt enhancer i'm getting "Prompt execution failed":
Prompt outputs failed validation: AILab_QwenVL_PromptEnhancer: - Value not in list: model_name: 'Qwen3-VL-4B-Instruct-Abliterated' not in (list of length 21)
But i can pick any of the other ones in the dropdown and it works, though it won't output nsfw
Does this model support the realistic style worse?
Any style will shift towards the SmoothMix realism/animation style (face only). If you're generating videos where characters are very emotional (screaming, moaning or crying), their faces will change over time and will not match the ref image, because this checkpoint has baked-in style loras.
If your chars are calm and emotionless then the shift is very small and unnoticeable
PSA: It does not need specifically rank128 lightning lora.
I'm running it with a 4step Lightning lora: wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise and wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.
You can find them both on Hugging Face. Works with 4, 6 and 8 steps... everything above is a waste of time.
This is only for I2V.
For T2V you'll need wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise and wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise
You can also find them both on Hugging Face. I have not tried them yet, so I don't know how well they work.
You can use the workflow that is provided in the description -> enter hash C6E44FA361 in the search bar.
I guess it would also work with the new workflow v3.0 (hash: EDF72F373D) but I did not try it.
Can confirm this works perfectly and simply, if you're coming from v1 of this checkpoint just add these 2 lighting loras and you'll be ready to go.
Wait... speed loras are not included in this one by default?
@CrystalVisage Yepp...not included in this version.
@Herastura but he recommends 4-6 steps? 4 for each, he means?
@CrystalVisage 4-6 in total. i recommend 6 steps. 4 is a bit low. I always use 6 and it looks good.
@Herastura bit difficult on 12gb vram 🤣🤣 Thanks for your reply.
@CrystalVisage Then you should try the GGUF version. It need less VRAM
Hash -> 5A4B1B9C62
This version is also without lightning Lora
@Herastura I mean I'm generally fine with 4 steps. Sometimes if I don't get what I want from 4 I do 6, but with 6 it's usually around 12-14 minutes or so. Too long.. 4 steps is 6-8 minutes. I'm now more curious about the best possible lightning lora. I'm trying different combos to find the ideal mix. wan 2.2 lightx2v 1022 seems like a good candidate (for 4 steps).
@Herastura what weight are you using those loras with? the combination op wrote works for me, but I tried other combinations and nothing works, always blurry
@Derpxten You replying to me? If so, I did try that. Still worse visuals than the model's previous version. I tried both at 1, and then high at 3 and low at 1.5 (what model's author had recommended but with other loras, not these) both had artifacts.
@Derpxten For the lightning LoRA I use strength/weight 1, for both low and high noise.
@Herastura Thanks! I just tested it and got pretty good results. There's also another comment that recommended using a low T2V lora that also worked well, not sure yet which of them was better but both are awesome.
UNETLoader
'NoneType' object has no attribute 'Params'
please help
update your comfyui
@Frank22 thank you friend
2.0 I2V always makes the breasts huge.
Impossible to make small breasts.
It's also way worse at following prompts.
The only improvement I see from my testing is that it keeps the face resemblance more consistent, but I'd rather stick with the old version + ReActor until the issues get fixed.
exactly, no matter what prompt you type the breasts are always the same
For I2V 2.0 I'm getting a black output. I'm using lightning LoRAs and tried multiple. Any tips on how to fix this?
The same
Try updating your ComfyUI to the latest version.
4.0 steps, euler a, 1.0 cfg, 8.00 shift. Aside from that it could be anything: LoRA weights too high, too many LoRAs?
me too
Update comfy
"Lightx2v Lora is NOT merged this time so be sure to use pick any LoRA you prefer to accelerate generation as well as how much weight you use on them - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set with weight 3.0 on High and 1.5 on Low."
@discreet001 Actually, not using the Lightning LoRA doesn't result in a black video; it just makes the output blurry.
@g1263495582 I finally had a chance to test it, and I found better results with both high and low loras set to 1.0 strength.
If you are not using lightx2v, how many steps and what cfg are you using on your ksamplers?
Pardon a n00b question but if this is the HIGH one, where is the LOW one?
when you scroll up. it's named "I2V v2.0 LOW"
What speed LoRA combinations have you found to work nicely, guys? I've been mixing and matching different LoRAs and strengths, but I either get blurry messes or weird abnormal bodies, and the settings recommended by the author don't really seem to work that cleanly for me.
See Herastura's comment from January 25th, it's working pretty well for me
getting a blurry mess with the recommended settings
I can confirm - recommended settings are absurd and generate noise blobs.
@yeahrightiwill confirmation seconded. i can't get anything but smeary noise blobs.
@yeahrightiwill You need to use lightning Loras. They are not integrated in this version.
@zombycowvonzombe7433 You need to use lightning Loras. They are not integrated in this version.
"Lightx2v Lora is NOT merged this time so be sure to use pick any LoRA you prefer to accelerate generation as well as how much weight you use on them - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set with weight 3.0 on High and 1.5 on Low."
@Herastura ah, thank you.
@zombycowvonzombe7433 I finally had a chance to test it, and I found better results with both high and low loras set to 1.0 strength.
I am getting a black output. I am using a really basic setup that works well for the A14B I2V base model. No LoRAs or anything fancy.
Is there something I need to add that isn't required by the base model? I think I saw CLIP Vision in the linked workflow, which is not something I had been using up until now.
"Lightx2v Lora is NOT merged this time so be sure to use pick any LoRA you prefer to accelerate generation as well as how much weight you use on them - all videos on the showcases used "lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16" set with weight 3.0 on High and 1.5 on Low."
update comfy
@g1263495582 This is what fixed it for me. I had my ComfyUI version locked because updates were blowing up my workflows. Latest build is working, thanks!
@g1263495582 updating it worked! tysm.
Is it correct that adding the trigger word 'SmoothMixRealism' doesn't do anything?
It all depends on what workflow you're using and what type of WanImageToVideo sampler you're using. For RemixWan2.2 I use the PainterI2V node; for SmoothMix I'm using the regular WanImageToVideo node.
It changes the face beyond recognition. I was looking forward to this model. Shame.
I have tested SmoothMix with RemixWan2.2.
I think SmoothMix has been able to keep the face consistent (if you are using SVI).
RemixWan2.2 is the one that loses face identity if expressions weren't neutral throughout.
@NOTORIOUS_EDDY What is this guy on about? He's talking about something completely different.
is there an option to not save the input image?
preview image/video node.
@NOTORIOUS_EDDY you mean the node where i load the image?
if you're using the videohelpersuite nodes to save vids with vhscombine, you can connect the "filenames" output from that to a "prune outputs" node.
@bpbp2 I don't know what any of that is; I just want to eliminate clutter in the output folder by not having the input image unnecessarily saved. I only want the last frame and the mp4 saved.
OOM with 16GB VRAM. Has this happened to anyone else in their local ComfyUI environment? I used Smooth v2.0 I2V + lightning LoRA rank128.
A better practice for testing any new model is to start with the following settings:
res: 400x600 or 450x450
frames: 25-29
If you get output, keep increasing resolution and frames and see where you find its peak point. I have 16GB VRAM too, and I always do this.
Make sure you also use a tiled VAE decoder; the normal VAE is often the root cause of OOM for me.
Good luck.
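That ramp-up procedure can be sketched as a tiny helper (a hypothetical illustration, not an official tool): build a ladder of increasing (width, height, frames) settings, try each in order, and keep the last one that doesn't OOM. Dimensions stay at multiples of 8 and frame counts at 4n+1, which WAN-style models expect.

```python
def test_ladder(base_w=400, base_h=600, base_frames=25,
                levels=4, res_factor=1.25, frame_step=16):
    """Build increasing (width, height, frames) settings to probe VRAM limits."""
    ladder = []
    for i in range(levels):
        scale = res_factor ** i
        # Round dimensions down to multiples of 8 (typical model requirement)
        w = int(base_w * scale) // 8 * 8
        h = int(base_h * scale) // 8 * 8
        # frame_step of 16 keeps frame counts at 4n+1 (25, 41, 57, ...)
        frames = base_frames + i * frame_step
        ladder.append((w, h, frames))
    return ladder

# Try each setting in order; stop at the first OOM and keep the previous one.
for w, h, frames in test_ladder():
    print(f"try {w}x{h} at {frames} frames")
```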
modelsamplingSD3 'NoneType' object has no attribute 'clone' error. help??
I mainly use it to create Japanese-animation-style videos (I2V), but I feel that Smooth v2 causes facial distortion more easily than Smooth v1. I'm wondering if other creators with similar needs have encountered this problem?
I use this checkpoint for I2V. By default, the workflow includes something like "stylized, artwork, painting, illustration" in the negative prompt. Remove that and replace it with something like "realistic, photograph". With the right approach, I've quickly solved these kinds of problems. Perhaps you could also achieve significantly better results with a style Lora like "Anime Style" by user RelativlyObjectv.
Yeah same, I can't find any workflow that keeps style consistent.
@Fuzetsu I configured it according to the author's parameters, but the facial deformation and limb distortion are still extremely absurd. Furthermore, I noticed that most users of v2 don't seem to get any significant advantages over v1. Therefore, I've abandoned v2 and switched my daily model back to v1.
@mmrabati2154 I can only continue using version V1.
I have been having issues with v2 also. I just posted a new video showing v1 vs v2, and version one still does a lot better. Is there any other creator who can chime in on what v2 actually improves?
@charlenebelmont I also end up with more stable and better outputs using the old version. I won't change since it actually works properly.
@boulbi78 Same here. I always thought the new one might be better since version one was already so good, but I will keep both versions and keep testing more.
@charlenebelmont I think with AI we can't all use the same workflow or all get the same results, just like with some software updates. Back up the more stable one and stay on it; try updates, but don't force yourself to change to something that doesn't work just because it's new. Good luck!
@boulbi78 Speaking of workflows, is there any T2V or I2V one you recommend?
v2 is so much worse than v1.
I2V v2.0 destroys penises with awful-quality renders. What happened?
Honestly, it is worse than v1. Boobs are always disproportionally large compared to the subject. This is mostly a downgrade. Also, why would you remove the Lightning LoRA when you built your whole model around it? You essentially must use the exact one you recommend, otherwise the model produces awful videos.
true
That's 100% true. I tried other lightx2v LoRAs and only the one the author wrote about works. I tried lightx2v 1022 and lightx2v 1030. 1022 seems to work, but the results are still meh... 1030 doesn't work at all.
Wan2GP users: I've been messing with some settings and got some helpful info. First off, I'm using anime I2V, so I have no idea if these settings work the same for realistic.
This is at 480p. I have my own custom resolution of 512x696 or 696x512; this probably doesn't matter, but I needed to give my settings for best help. Just try to match the input image shape as best you can, horizontal or vertical.
Model / Guidance Switch Threshold: 950-970 (I mostly use 970)
Number of Inference Steps: 4-6
Guidance (CFG): both at 1
Sampler Solver / Scheduler: unipc (I haven't tried the others)
Shift Scale: 5 (I'm not sure if there's a best or worst for this; use whatever)
LoRA: Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1_high_noise_model set around 0.5 for both high and low phases. You can go higher or lower, for example High 0.6 and Low 0.4 for more movement. Use whatever lightning LoRA; this is just the one I had. The results change based on the input image style, so adjust in that range until you find something that works.
Extra LoRA: DR34ML4Y set at 0.6 high and 0.4 low. I got this from one of the images below and it works well for better movement results. I believe you can use whatever movement-enhancing LoRA here; just keep the strength around the middle and adjust until it works.
Skip Steps Cache Type: none
Temporal Upsampling: Disabled
Spatial Upsampling: Disabled
Those can be applied to the output video afterward if it's good.
All other settings are default; I may have changed some of the Performance settings like Text Encoder Precision: 16-bit (more RAM, better quality).
I'm using a 10GB 3080.
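For reference, those settings collected into a single config sketch (the key names here are illustrative, not actual Wan2GP fields):

```python
# Hypothetical config dict mirroring the Wan2GP anime I2V settings above.
# Key names are illustrative only, not real Wan2GP API fields.
wan2gp_anime_i2v = {
    "resolution": (512, 696),          # or (696, 512); match the input image shape
    "guidance_switch_threshold": 970,  # 950-970 range
    "inference_steps": 6,              # 4-6
    "cfg": 1.0,                        # both phases
    "sampler": "unipc",
    "shift_scale": 5,
    "loras": {
        # lightning LoRA, ~0.5 both phases; raise high / lower low for more motion
        "Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1_high_noise_model": {
            "high": 0.5, "low": 0.5},
        # movement-enhancing LoRA; keep strength around the middle
        "DR34ML4Y": {"high": 0.6, "low": 0.4},
    },
    "skip_steps_cache": None,
    "temporal_upsampling": False,
    "spatial_upsampling": False,
}
```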
My gens keep getting brighter and brighter.
Should I use euler a instead of unipc? Or is there some other setting I need to change?
Having the same issue, yeah. I use normal Euler, but the vids keep getting brighter.
@evilstormhot109 Do you use the smoothmix animations loras with this model?
@Scringleblip I've only started using the SmoothAnimation LoRAs recently, like a week or so ago. Previously I barely used any LoRAs with SmoothMix v1.
@evilstormhot109 Try genning a video with and without the LoRA, and see if the brightening happens either way.
Does anyone else have anime eyes flickering issues with i2v v2.0 (high/low noise) + lightx2v? Is there any way to fix that?
v2.0 is super blurry and does a lot of morphing compared to v1.
Same for me. Workflow gives me only blurry results.
Yep, even in higher resolutions output is bad
Yeah, I also noticed it and was wondering if it was only me, or the prompt.. But I started to see the blur effect everywhere on my vids
Apparently this is indeed the case. I wish I had known that, as I've been trying EVERYTHING for two weeks now to get it running without blur (mostly noise), especially around the eyes. What a nightmare and wasted time. Downloading v1.
Tip for Lightx2v's slow-motion:
Set High to 4.0 or higher if 3.0 isn't cutting it. You'll be at risk of deformed motion, but that can be mitigated by detailing the prompt more and/or LoRAs. This is based on the SmoothMix workflows.
Workflows involving three-chain-Ksamplers allegedly help, though I haven't tested them enough yet.
Is it just me, or is v2.0 actually worse than the previous version? I’m noticing odd blur artifacts and strange noise that simply weren’t present in v1 🤷♂️. I’m using the same prompts as before, and v1 never had these issues. I’m also running Q8 quantization, which should be essentially original quality.
On top of that, using the recommended lightx2v LoRA makes the motion either too slow or not dynamic enough.
PS: I’ve now made several direct comparison videos, and unfortunately v2.0 shows a lot more flaws compared to v1.
My conclusion:
With v2.0, not only do you have to use the lightx2v LoRA, but you also need additional LoRAs just to get acceptable results 😞. There’s noticeable noise, a brighter image overall, and much slower motion.
With v1, no LoRAs are required at all—the output is easily 2× better using just a prompt. Same seed, same prompt, same i2v flow.
I’m not sure what changed under the hood, but for now I strongly recommend sticking with v1, at least for i2v (I haven’t tested t2v).
Indeed. I'm using the old version. It works perfectly. This is blurry and horrible.
Even when I got it working, I was not satisfied with the results which were either weird or not as nearly as good as what I got with the original Smoothmix.
Yep, v1 is great, v2 is terrible. Not sure what's the point of this new version actually, and it's not very clear from the author's article either
I keep getting this error - Error while deserializing header: header too small
Could you fix the linear weight-scale bug? I merge a few models together and I can't with this one.
The older version has better prompting and movement/motion, and the chance of deforming the body is pretty low compared to 2.0.
'NoneType' object has no attribute 'Params'
I've got a couple of diffusion models giving me this error...
Am I missing something, or do I need to roll back ComfyUI? Wan2.2-remix still works, but not this one.
Everything should work, but I would always use the latest stable release; never use the nightly unless you are testing. I had this issue earlier. Check requirements.txt and make sure your venv has the same package versions, at least for stability. I also have two venv builds: one is a stable environment with a specific torch plus CUDA wheel I use for isolating ComfyUI, and the other has the latest torch and CUDA build, to test whether an update is worth it and whether it would break anything.
I'm getting a lot of noise and I'm stuck. Can someone help?
What workflow / settings are you using?
@DarkEngine2024 Don't use a Lightning LoRA. It's already included.
@DarkEngine2024 Why are you speaking Japanese when you're not even Japanese? XD
@DarkEngine2024 I was wrong. v2.0 needs lightning for faster generations
I always have a problem: I read reviews and trust the negative ones. I postponed testing of this version for later... And how wrong I was! I literally just struggled with v.1, trying to make a complex prompt work, got tired, downloaded v.2... and everything is just perfect! I should have said this a month ago: "Thanks for the new version of the great model, buddy!" A new toy, hooray!
Hi, I'm having a problem with this model v2. Everything wobbles and moves randomly and without my input; the movements are almost like electric shocks. Is there any way to prevent this? Thanks for sharing; your effort is appreciated.
How do I fix the SUPER SUN LIGHT effect when using a lightning LoRA? It's super bright...
you have to lower the cfg of the high noise model a bit
Love the checkpoint in general; it's really good otherwise. But I have an issue: when I try to make a video showcasing one of my OCs, the video randomly balloons the breasts of one of my characters, which is completely inaccurate to my character's look. I can't for the life of me figure out how to stop it doing that, and it ruins the video generation for me when it happens. If there's anything I can do to prevent it, I'm more than happy to try.
You may be able to prevent it with a small breast lora https://civitai.com/models/1983079?modelVersionId=2244856
Or maybe try using NAG to negatively prompt large breasts?
Increase the lightx2v LoRA to 1.23 for high noise. If there isn't any hardcore porn, for low noise you can use WAN 2.1 or the full fp16 (the official models are not trained on genitalia, so everything else works just fine). The low noise model works as a refiner.
Hi, thanks for the model!
I found that without the lightning LoRA, the model by itself generates a very fuzzy image.
Does anyone know where I can learn to use that? I don't have the slightest idea how it works. I use ComfyUI; I don't even know where to place it. I have looked a bit at the information provided, but it's very cryptic.
Read the description carefully! There is a workflow, it is basically plug and play. Just unzip, drag and drop the json file into your comfyui. Install any missing nodes with the comfyui manager. download all the models (high and low), vae, and clip files and put them in the relevant folders in your comfyui/models. You will also need the lightx2v lora (read the description carefully and follow all recommendations.) You should be able to generate videos like in the examples if you followed everything TO THE LETTER (don't deviate until you know what you're doing.) After that, start looking for loras to finetune your output. Smooth Collection (to the right of the example videos above) is a great place to start. Hope this helps, happy gooning!
Step 1: Download the files (2 models, High and Low).
Step 2: Save them in <your ComfyUI folder>/models/diffusion_models/
Step 3: Download the VAE and save it in the 'vae' folder (https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors).
Step 4: Download the text encoder and save it in the 'text_encoders' folder (https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors).
Step 5: Download the lightX2V LoRAs and save them in the 'loras' folder (2 files:
- High noise: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors
- Low noise: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors)
Step 6: Load a Wan2.2 workflow. Open the ComfyUI templates panel, search for "Wan2.2 14B Image to Video", and click it (https://imgur.com/a/WHFcbqb).
Step 7: Hit the "R" key on your keyboard and select the downloaded files for each field (second image: https://imgur.com/a/WHFcbqb).
Experiment,
Good luck!
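The steps above can be sketched as a small Python helper (a hypothetical script, not from the author; the SmoothMix checkpoint files from this page still have to be placed in models/diffusion_models/ by hand, and COMFY_DIR is an assumed default install path):

```python
import os
import urllib.request

# Assumption: default ComfyUI install location; adjust to yours
COMFY_DIR = os.path.expanduser("~/ComfyUI")

HF_BASE = ("https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/"
           "resolve/main/split_files")

# Steps 3-5: map each download to its target subfolder under models/.
# Steps 1-2 (the SmoothMix High/Low checkpoints from this page) go into
# models/diffusion_models/ manually, since filenames vary per version.
DOWNLOADS = {
    "vae": [f"{HF_BASE}/vae/wan_2.1_vae.safetensors"],
    "text_encoders": [f"{HF_BASE}/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"],
    "loras": [
        f"{HF_BASE}/loras/wan2.2_i2v_lightx2v_4steps_lora_v1_high_noise.safetensors",
        f"{HF_BASE}/loras/wan2.2_i2v_lightx2v_4steps_lora_v1_low_noise.safetensors",
    ],
}

def plan():
    """Return (url, destination_path) pairs without downloading anything."""
    pairs = []
    for folder, urls in DOWNLOADS.items():
        dest_dir = os.path.join(COMFY_DIR, "models", folder)
        for url in urls:
            pairs.append((url, os.path.join(dest_dir, url.rsplit("/", 1)[-1])))
    return pairs

if __name__ == "__main__":
    for url, dest in plan():
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        print(f"{url}\n  -> {dest}")
        # Uncomment to actually download (files are several GB each):
        # urllib.request.urlretrieve(url, dest)
```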
Here's the tutorial series I followed to get started ;) It goes above and beyond basic video generation, but he walks you through downloading files to the appropriate folders, etc., in a straightforward way. The whole series is incredibly useful. Just replace downloading the WAN 2.2 diffusion models with the SmoothMix versions. The rest is the same: VAE, CLIP models, etc.
ComfyUI Video Models: InfiniteTalk + Wan 2.2 + SCAIL + LTX-2 (Ep06)
Thank you, I looked at the tutorial, but eventually I stopped understanding anything, so I'm dropping it. No need to answer me; I deleted the model. Thank you for your attention.
I have a 9060 XT with 16GB VRAM and 16GB DDR5 RAM. Can I run this I2V?
Explain to me what the point is of making a model without the LightX LoRAs if it can’t generate without those LoRAs anyway? Moreover, this model works normally only with some lightX loras. What’s the problem with integrating them if you can’t build models properly?
Some people like to use whatever LoRA they want for this purpose. And they were using a WAN 2.1 lightx2v at 3.0 strength for the merge. I don't use WAN 2.1 lightx2v with WAN 2.2 because it reintroduces issues from WAN 2.1 that got fixed in WAN 2.2, and WAN 2.2 converges in a fundamentally different way from WAN 2.1. It works fine with the Lightning LoRA in my experience. Leaving it up to personal choice instead of just baking it in and saying 'deal with it' is the right choice in my opinion.
1. There are many different lightx2v loras and this gives you control over which you want to use
2. You CAN generate without lightx2v, you just need a LOT more steps and time, but it actually should generate better results...
Nice model overall. One thing I noticed is that it always adds makeup to female faces, and there seems to be no easy way to avoid it. I give a starting image without any makeup as input, and one millisecond later the lips turn pink. What?
I'm going to explain but, simply put, for I2V I'm recommending a 2-model combo approach: High Noise: SmoothMix v2 + lightning (as described); Low Noise: DaSiWa Lightspeed SynthSeduction v9 (https://civitai.com/models/1981116?modelVersionId=2555652)
Explanation:
Over the past months I've been using Smoothmix v1 and v2, and DaSiWa v8 and v9 for many different I2V generations, dialing each of those in with prompting / LoRAs, etc. Admittedly, DaSiWa v9 has been my go-to lately, but I'll get into that next.
I decided to take a moment to go through all my outputs and try to judge how I'll continue moving forward.
I realized the pros and cons for each model. For Smoothmix, I really liked the overall movement but there's always something off in the details, textures seem to smooth out a bit, faces lose similarity, etc. On the other hand, DaSiWa has been fantastic with preserving original details and the quality of the movement seems much more natural - BUT overall movement is much less desirable and is much more dependent on LoRAs and very careful prompting.
With that, I decided to try this mix - allowing Smoothmix to set the overall movement and composition, and DaSiWa to refine that. This is the winning combination, I am getting much better results than I ever did from any one model High/Low package deal.
Params for most of my generations:
Shift: 5.0
High noise steps: 3
Low noise steps: 4
cfg: 1.0
Sampler: lcm
Scheduler: beta
Agreed. Smooth is better for motion but DaSiWa is better for details and detail preservation
Your link is already dead.
What are the weights for the two Light LoRAs?
@jesper123160 I don't think so? I only included 1 link and it is going to the DaSiWa Lightspeed model page on my end.
@KuKu12345 DaSiWa Lightspeed already has baked in LoRA so you do not need to apply any lightning LoRA when using this as the Low Noise model. SmoothMix v2 does not have baked in LoRA so you just use lightning as described in the model page ("lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors" at strength 3.0 for high noise)
@altoiddealer I assume then you are taking whatever your standard workflow is and you using smooth for high noise then and dasiwa for low noise? Any chance you can share the workflow?
@CallMeMaybe I would have preferred to just send a quick pastebin link but dear lord is it finicky about NSFW language. I've uploaded my workflow just now, here. https://civitai.com/models/2453238
@altoiddealer I didn't realize it scanned the content for nsfw language! Thanks for posting <3
Thanks a lot man... It indeed works very well & very fast !
I hadn't even dared to mix my GGUF high noise with a non-distilled low noise model... (I'm low on RAM, 12GB, with 16GB of VRAM.)
Strangely, it works well at 512x768, but it gives me very strange, out-of-control outputs at 640x960...
@BretChampagne I'm getting good results at the resolutions you stated, and up to 1280x720.
@Polkster Oh boy... I've tried so many mixes since I discovered I could use this (great) DaSiWa model for low noise... Tonight I'm playing with this great "enhanced camera prompt" model GGUF4 for high noise, and it works beautifully too...
@altoiddealer With CivitAI messed up these days, I just stopped posting and took the time to improve my workflow... I finally managed to get a cool SVI one that suits my needs. I guess I won't go above 640x960. But this DaSiWa model is indeed a good pick for fast low-noise refining.
SmoothMix is a very popular one, but I'm not so sure I'll keep it as my first choice for high noise...
@BretChampagne Glad to hear you've tested and verified what I've shared here, and a part of me was expecting my comment to be met with criticism, turn out to be some hallucination, or something. If you determine a more superior High Noise option or some other combo, would love to know - I'll be coming back to add to my comment if I find further improvement.
I would actually in many cases recommend using the vanilla Wan 2.2 checkpoint for the low noise. Why? It preserves faces the best and LoRAs actually work and don't destroy the quality.
I took a shot at this and it actually generated really good results, really good idea here. Trying to consider other "combos" of models I can try now :)
@aurelius Thanks for the advice. You're right, it works very well too ;-)
@aurelius I tried using vanilla wan but it added a lot of weird fog and stuff, not really sure why, any ideas?
@Derpxten If you have an NVIDIA RTX, make sure to use the FP8 version (the Light one, of course)... and don't forget the lightning LoRA.
@BretChampagne I tried, weird results. Which LightX2V lora do you use with vanilla wan?
@Derpxten Any of them work, including ones meant for WAN 2.1. The results vary, and it also matters how much strength you use. I don't know which is best, honestly. I use 1.0, but they also work at other values, maybe better in some cases.
And yes, your problem is almost certainly with lightx2v. The fog is what you get when you gen without it at CFG 1.0.
@aurelius hmm odd... I use a pretty complex workflow with 3 KSamplers + SVI, shift 8, and for some reason, only when I use vanilla wan 2.2 on the low sampler, I get those gooey / water splashes artifacts. If I use in the exact same workflow the Smoothmix low - it works. if I turn off lightx2v + use DaSiWa - it works. I've been messing with it for the past 2-3 hours and nothing seems to work well, some were slightly better but I still get weird artifacts
@altoiddealer OMG! I'm really glad I came across this post! I checked out your profile, found the combo workflow, and decided to try it myself, and I can confirm it delivers much better results. You should seriously consider writing an article about it <3
I was getting super frustrated with SmoothMix constantly messing up some of the details like nipples: they would come out blurry, distorted, or just plain ugly. But DaSiWa fixed it! In about 90-95% of generations, it preserves all the details, making the whole video look much cleaner with far fewer artifacts. Thanks so much for sharing this <3
PS: Though I am using SmoothMix v1 for high (I don't like v2, it has too many bugs) and DaSiWa v9 for low. This combo is perfect if you want to get the best out of the two :)
@evilstormhot109 Glad my comment here was so well received :) I agree that SmoothMix v1 has better consistency in the high noise, but I personally prefer using v2 because it tends to be a bit more creative with movement/composition. If a result comes out wonky, switching to v1 usually fixes it. Looking forward to seeing what v3 will be like, if the I2V model ever releases.
HOLY $%#t
This works amazingly! Especially when using the 480p lightning I2V meant for WAN 2.1 at strength 3 in conjunction with Smooth v1 or v2. The action and motion it adds is, honestly, wow. But the standard lightning I2V WAN 2.2 high at strength 1 also does fantastic work when you want more subtle movements.
DaSiWa does an excellent job of preserving the details. I've even tried it with the PainterI2VAdvance node, which adds more believable action, and the results are great. Gonna have to test more model combos. Getting good results is totally worth it.
"I’ve tested a lot of parameters and videos, and honestly, for NSFW content, I feel like the results from V2 aren't as good as V1. The thrusting motions in V2 look very unnatural, even though the generation speed is much faster than V1. Is anyone else getting the same results?"
Having a tough time getting pov bj to work. The head just sort of wiggles around. Any suggestions?
Use something like https://civitai.com/models/1395313?modelVersionId=2235288 with it at like 0.5 weight, or any of the other DJ/deepthroat ones.
@NyxxiNyx Hey thanks! That helped! I'm still learning how all this stuff works.