Since someone asked me how to generate a video, I'm sharing my ComfyUI workflow. Compared to other authors' workflows, this one is very concise.
It must be admitted that tuning the parameters of a video-generation workflow is time-consuming, especially for someone like me with a low-end hardware configuration. If I find better parameters, I'll be happy to share them with everyone.
Release Note:
V2.0: Adjusted parameters; the workflow itself is unchanged.
Features:
ComfyUI workflow (this is for ComfyUI, not the Stable Diffusion WebUI; you need to install ComfyUI first)
SD 1.5 model (SDXL should also be possible, but I don't recommend it because video generation becomes very slow)
LCM (improves video generation speed; 5 steps per frame by default; generating a 10-second video takes about 700 s on a laptop 3060)
How to use:
You can change the model and the prompt text, just like when generating images. There are three parameters that need attention:
ImpactInt & batch_size & frame_rate: this is a set of interrelated parameters. frame_rate is your video frame rate, 30 by default. ImpactInt and batch_size are the total frame count of your input video. If the frame rate is 30 and the duration is 10 seconds, then both parameters should be 30 × 10 = 300 (see the sketch after this list).
The LCM LoRA: LCM-LoRA Weights - Stable Diffusion Acceleration Module - LCM for SDXL | Stable Diffusion LoRA | Civitai
Comments:
Where do I find LCM_LoRA-Weights_SD15.safetensors? It appears in the Lora Name line of the Efficient Loader node.
How do you make it match the aspect ratio of the video though?
The LCM LoRA is downloaded from here.
You need to edit the input video to match the aspect ratio, or change the resolution parameter in the workflow to match the input video.
@AlexLai Thank you for the link. Yeah, I changed the aspect ratio, but it is coming out blurry, more cartoonish, and smooth, without a lot of detail. Am I missing a setting to tweak? Thank you for taking the time to respond too!
@Nibot HighRes-Fix Script node -> denoise: I set it to 0.4 by default; you can try 0.2. The smaller it is, the more the result looks like the original video.
@AlexLai Coming back many months later lol. I keep jumping in and out of workflows and taking breaks :( but I'm jumping back in. I get a nice look at 0.5, but it is too inconsistent; lowering denoise on the HighRes fix keeps it closer to the video and mostly consistent, but it loses the style. Is there a way to lock in the style of 0.5 while staying as close to the video as 0.2? I was tempted to try a depth map to lock the characters but change the style, but I don't yet have the knowledge of what works / what's needed. Thank you if you get a chance to answer this!
May I ask whether this workflow can achieve a clothes-changing effect?
I'm afraid not. In most cases it's just a change in style. But you can try raising the denoise parameter to see if it can be done.
I'm confused. I have it rigged up and it does run, but it seems to just output the exact same video; it doesn't look changed by my prompt.
HighRes-Fix Script -> denoise: you can try raising this parameter to 0.5.
Hi, thank you for the great workflow.
1) Why does changing the denoise level change the angle of the video and make the camera become still?
2) Why are the model's legs always inside the floor?
3) Why does the prompt have no effect?
4) Is there a way to run XL checkpoints with this workflow?
1-3: The denoise parameter needs to be tuned repeatedly to achieve good results. If precise control is required, OpenPose is needed (see the sketch below).
4: Yes, you can use SDXL checkpoints to run this workflow.
This works incredibly well!
I have a question: how could I lower the fidelity to the input video to get (or at least try to get) a different character in the final result?
You can try increasing denoise in the HighRes-Fix script (say, 0.6 or more), or add a ControlNet (for example, extract the video to ControlNet Depth); then the resulting latent image can be re-generated. I haven't tried that here because I use other workflows that rely on DMD2 from Adobe, and others that implement T2I adapters and other creative solutions.
Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'ModelPatcher' object has no attribute 'model_keys'
How should I solve it?
I solved it by uninstalling and reinstalling the AnimateDiff Evolved custom node
Thank you so much!
@PressWagon Hello brother, could you please tell me why the video I generate is blurry and not very clear?
Nice workflow, unlike others that are a mess and hard to understand.
I've run some tests, for example, a video of a girl dancing in a living room that I wanted to transform into Lara Croft in an Egyptian environment. By setting STEPS to 5 in KSampler Adv (Efficient), while setting upscale_by to 1.00 and denoise to 0.80 in the HighRes-Fix script, it's faster because it does not upsample (about 1 hour instead of almost 4 hours on my 6 GB VRAM system), BUT the end result is too blurred and full of noise; you only know it's Lara by the silhouette. Perhaps improving this workflow to extract a ControlNet Depth map from the source video could help, or increasing STEPS to, say, 10 or more (I use the Paseer LCM 2 MB LoRA).
*Solved it. I set the seed to false and made it 0.
I get this error from the HighRes-Fix Script.
Prompt outputs failed validation HighRes-Fix Script: - Value -1 smaller than min of 0: seed
What do I do to fix this?
I had to delete the node and re-add it (which sets ControlNet to false, an option that was not visible), set the seed to false, and then add a random one. I'm running a job now, and then I'll test with ControlNet set to true and see what happens.
Can this be updated for SDXL / PONY ?
I'm trying to generate a video, but the output is just a blurry version of the input; changing the denoise scaling just changes how blurry the output is. I've tried SDXL and noobAIXL, but neither seems to work with some simple prompts. I also changed the AnimateDiff model to the SDXL one. Do you know what the issue might be?
How do I solve the error 'ModuleList' object has no attribute '1'?
