Since someone asked me how to generate a video, I'm sharing my ComfyUI workflow. Compared to other authors' workflows, this one is very concise.
It must be admitted that tuning the parameters of a video-generation workflow is time-consuming, especially for someone like me with a low-end hardware configuration. If I find better parameters, I'll be happy to share them with everyone.
Release Note:
V2.0: Adjusted parameters; the workflow itself is unchanged
Features:
ComfyUI workflow (not Stable Diffusion WebUI; you need to install ComfyUI first)
SD 1.5 model (SDXL should also work, but I don't recommend it because video generation becomes very slow)
LCM (improves video generation speed; 5 steps per frame by default; generating a 10-second video takes about 700 s on a 3060 laptop GPU)
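The timing above implies a rough per-frame cost. As a minimal sketch (assuming render time scales linearly with frame count, which is only an approximation; the function name is illustrative):

```python
# Rough render-time estimate for this workflow on a 3060 laptop GPU.
# Assumption: time scales linearly with frame count; the 700 s figure
# comes from the 10-second, 30 fps (300-frame) example above.
SECONDS_PER_FRAME = 700 / 300  # ~2.33 s per frame at 5 LCM steps

def estimated_render_seconds(duration_s: float, fps: int = 30) -> float:
    """Estimate total render time for a video of the given length."""
    return duration_s * fps * SECONDS_PER_FRAME

print(round(estimated_render_seconds(10)))  # 700
```

Your actual times will vary with resolution, model, and hardware; treat this as a planning aid, not a guarantee.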
How to use:
You can change the model and the prompt text, just as you would when generating images. There are three parameters that need attention:
ImpactInt, batch_size & frame_rate: these are a set of interrelated parameters. frame_rate is your video's frame rate, 30 by default. ImpactInt and batch_size are the total frame count of your input video. If the frame rate is 30 and the duration is 10 seconds, then both parameters should be 30 × 10 = 300.
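The relationship above can be sketched as a quick calculation (the variable names here are illustrative, not node names from the workflow):

```python
# Compute the value to enter into both ImpactInt and batch_size.
frame_rate = 30   # frames per second (workflow default)
duration_s = 10   # length of the input video in seconds

total_frames = frame_rate * duration_s
print(total_frames)  # 300 -> set ImpactInt and batch_size to this value
```

If the three parameters disagree (for example, batch_size smaller than frame_rate × duration), the output video will be shorter than you expect.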
The LCM LoRA: LCM-LoRA Weights - Stable Diffusion Acceleration Module - LCM for SDXL | Stable Diffusion LoRA | Civitai
FAQ:
Error occurred when executing ADE_AnimateDiffLoaderWithContext:
PytorchStreamReader failed reading zip archive: failed finding central directory
File "C:\Users\tdami\pinokio\api\comfyui.git\app\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\Users\tdami\pinokio\api\comfyui.git\app\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\Users\tdami\pinokio\api\comfyui.git\app\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\Users\tdami\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen1.py", line 131, in load_mm_and_inject_params
    motion_model = load_motion_module_gen1(model_name, model, motion_lora=motion_lora, motion_model_settings=motion_model_settings)
File "C:\Users\tdami\pinokio\api\comfyui.git\app\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 353, in load_motion_module_gen1
    mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
File "C:\Users\tdami\pinokio\api\comfyui.git\app\comfy\utils.py", line 20, in load_torch_file
    pl_sd = torch.load(ckpt, map_location=device, weights_only=True)
File "C:\Users\tdami\pinokio\api\comfyui.git\app\env\lib\site-packages\torch\serialization.py", line 1005, in load
    with _open_zipfile_reader(opened_file) as opened_zipfile:
File "C:\Users\tdami\pinokio\api\comfyui.git\app\env\lib\site-packages\torch\serialization.py", line 457, in __init__
    super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
What is this error?
"PytorchStreamReader failed reading zip archive": extract the zip, then drag the JSON file onto the ComfyUI interface.
