CivArchive
    A Simple vid2vid AnimateDiff ComfyUI workflow - v2.0
    NSFW

    Since someone asked me how to generate a video, I am sharing my ComfyUI workflow. Compared to other authors' workflows, this is a very concise one.

    It must be admitted that tuning the parameters of a video-generation workflow is time-consuming, especially for someone like me with a low-end hardware configuration. If I find better parameters, I will be happy to share them with everyone.

    Release Note:

    V2.0: Adjusted parameters; the workflow remains unchanged

    Features:

    • ComfyUI workflow (not for the Stable Diffusion WebUI; you need to install ComfyUI first)

    • SD 1.5 model (SDXL should also work, but I don't recommend it because video generation becomes very slow)

    • LCM (improves video generation speed; 5 steps per frame by default. Generating a 10-second video takes about 700 s on a 3060 laptop GPU)

    How to use:

    • You can change the model and the prompt text, just as you would when generating images. There are three parameters that need attention:

    • ImpactInt, batch_size & frame_rate: a set of interrelated parameters. frame_rate is your video's frame rate, which is 30 by default. ImpactInt and batch_size are the total frame count of your input video. If the frame rate is 30 and the duration is 10 seconds, then these two parameters should be 30 × 10 = 300.
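    The relationship above can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not part of the workflow itself:

    ```python
    def total_frames(frame_rate: int, duration_s: float) -> int:
        """Total frame count of the input video; use this value for both
        the ImpactInt and batch_size parameters in the workflow."""
        return int(frame_rate * duration_s)

    # A 10-second clip at the default 30 fps needs 300 frames.
    print(total_frames(30, 10))  # 300
    ```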

    • The LCM LoRA: LCM-LoRA Weights - Stable Diffusion Acceleration Module - LCM for SDXL | Stable Diffusion LoRA | Civitai


    Comments (26)

    VectorVandal · Mar 13, 2024

    Where do I find LCM_LoRA-Weights_SD15.Safetensors? It is in the Lora Name field of the Efficient Loader node.

    How do you make it match the aspect ratio of the video though?

    AlexLai
    Author
    Mar 13, 2024

    LCM-LoRA Weights - Stable Diffusion Acceleration Module - LCM for SDXL | Stable Diffusion LoRA | Civitai

    The LCM LoRA can be downloaded from here.

    You need to edit the input video to match the aspect ratio, or change the resolution parameter in the workflow to match the input video.
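    A minimal sketch of the second option (a hypothetical helper; the function and its parameters are illustrative, not taken from the workflow): scale the input video's dimensions down to an SD 1.5-friendly size while keeping the aspect ratio, rounding to multiples of 8 as SD latents commonly require.

    ```python
    def match_resolution(src_w: int, src_h: int, target_short: int = 512) -> tuple[int, int]:
        """Scale the input video's dimensions so the short side is about
        target_short, rounding each side to a multiple of 8."""
        scale = target_short / min(src_w, src_h)
        w = int(round(src_w * scale / 8)) * 8
        h = int(round(src_h * scale / 8)) * 8
        return w, h

    # A 1920x1080 input keeps its 16:9 shape at roughly 512 on the short side.
    print(match_resolution(1920, 1080))  # (912, 512)
    ```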

    VectorVandal · Mar 14, 2024

    @AlexLai Thank you for the link. Yeah, I changed the aspect ratio, but the result is coming out blurry, more cartoonish, and smooth, without a lot of detail. Am I missing a setting to tweak? Thank you for taking the time to respond, too!

    AlexLai
    Author
    Mar 15, 2024

    @Nibot Node: HighRes-Fix Script -> denoise: I set it to 0.4 by default; you can try 0.2. The smaller it is, the more the result looks like the original video.

    VectorVandal · Sep 22, 2024

    @AlexLai Coming back many months later, lol. I keep jumping in and out of workflows and taking breaks, but I'm jumping back in. I get a nice look at 0.5, but it is too inconsistent; lowering the denoise on the HighRes fix makes it closer to the video but loses the style, though it does better at keeping consistency for the most part. Is there a way to lock in the style of 0.5 while keeping it as close to the video as 0.2? I was tempted to try a depth map to lock the characters but change the style, but I don't have the knowledge yet on what works / what I need. Thank you if you get a chance to answer this!

    59321715349 · Apr 6, 2024

    May I ask if this workflow can achieve the effect of changing clothes?

    AlexLai
    Author
    Apr 27, 2024

    I'm afraid not. In most cases it only changes the style. But you can try raising the denoise parameter to see whether it works.

    LocalYokel · Apr 27, 2024

    I'm confused. I have it rigged up and it does run, but it seems to just output the exact same video; it doesn't look changed by my prompt.

    AlexLai
    Author
    Apr 27, 2024

    HighRes-Fix Script -> denoise: you can try raising this parameter to 0.5.

    Anikushkush · Jun 2, 2024

    Hi, thank you for the great workflow.

    1) Why does changing the denoise level change the angle of the video and make the camera become still?

    2) Why are the model's legs always sunk into the floor?

    3) Why does the prompt have no effect?

    4) Is there a way to run XL checkpoints with this workflow?

    AlexLai
    Author
    Aug 4, 2024

    1-3: The denoise parameter needs to be tuned repeatedly to achieve good results. If precise control is required, OpenPose is needed.

    4: You can use SDXL checkpoints to run this workflow

    typiak · Jun 5, 2024

    This works incredibly well!

    I have a question: how could I lower the fidelity to the input video to get (or at least try to get) a different character in the final result?

    schsch · Jul 21, 2024

    You can try increasing the denoise in the HighRes-Fix script (say, 0.6 or more), or add a ControlNet (for example, extracting depth maps from the video for a ControlNet Depth model) so the resulting latent image can be re-generated. I haven't tried that, because I use other workflows that use DMD2 from Adobe, and others that implement T2I-Adapters and other creative solutions.

    1426700087 · Jun 16, 2024

    Error occurred when executing ADE_AnimateDiffLoaderWithContext: 'ModelPatcher' object has no attribute 'model_keys'

    How should I solve it?

    PressWagon · Jun 18, 2024

    I solved it by uninstalling and reinstalling the AnimateDiff Evolved custom node

    1426700087 · Jun 18, 2024

    Thank you so much!

    1426700087 · Jun 20, 2024

    @PressWagon Hello brother, could I ask why the video I generate is blurry and not very clear?

    bobvr2 · Jun 27, 2024

    Nice workflow, unlike others that are a mess and hard to understand.

    schsch · Jul 22, 2024

    I've made some tests, for example with a video of a girl dancing in a living room, which I wanted to transform into Lara Croft in an Egyptian environment. By setting steps to 5 in the KSampler Adv. (Efficient) node, with upscale_by 1.00 and denoise 0.80 in the HighRes-Fix script, it's faster because it does not upsample (up to 1 hour instead of almost 4 hours on my 6 GB VRAM system), BUT the end result is too blurred out and full of noise; you only know it's Lara by the silhouette. Perhaps improving this workflow to extract a depth ControlNet from the source video could help, or increasing steps to, say, 10 or more (I use the Paseer LCM 2 MB LoRA).

    AlexLai
    Author
    Aug 4, 2024

    Sorry for taking so long to reply to you.

    Your requirements are beyond the capability of this workflow; you need to use a workflow with ControlNet-OpenPose.

    schsch · Aug 5, 2024

    Great, it worked! Yes, low-end systems end up taking 3 hours or more, but it works! Thank you!

    VectorVandal · Sep 19, 2024

    *Solved it. I set the seed to false and made it 0.

    I get this error from the HighRes-Fix Script.

    Prompt outputs failed validation HighRes-Fix Script: - Value -1 smaller than min of 0: seed

    What do I do to fix this?

    TheDude363 · Dec 27, 2024

    I had to delete the node and re-add it (this sets ControlNet to false, which was not visible), set the seed to false, and then add a random one. I'm running a job now, and then I'll set ControlNet to true and see what happens.

    1573380 · Sep 30, 2024

    Can this be updated for SDXL / PONY ?

    goobnoob · Feb 2, 2025

    I'm trying to generate a video, but the output is just a blurry version of the input. Changing the denoise scaling just changes how blurry the output is. I've tried SDXL and noobAIXL, but neither seems to work with some simple prompts. I also changed the AnimateDiff model to the SDXL one. Do you know what the issue might be?

    youngjoo282874 · Oct 22, 2025

    How do I resolve the error: 'ModuleList' object has no attribute '1'?

    Workflows
    SD 1.5

    Details

    Downloads
    7,340
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/20/2024
    Updated
    4/30/2026
    Deleted
    -

    Files

    aSimpleVid2vid_v20.zip

    Mirrors

    CivitAI (1 mirror)