CivArchive
    AnimateLCM Image to Video Workflow (I2V) - v1.0

    Introduction

    This is a demonstration of what is possible with AnimateLCM Image to Video. After the AnimateLCM I2V pass for the initial video generation, I do a second pass using AnimateLCM T2V with Cseti's general motion LoRA to clean up the details and get consistency across the 32 frames. IPAdapter is used to keep the generation more faithful to the original image.

    Please note that this workflow requires you to tweak the settings for each input image to get a decent video output. Also, the output does not animate as much as the image-to-video services available online, and the video is limited to about 2 seconds at 30 FPS.
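    For reference, the two-pass structure can be sketched in Python-style pseudocode. The helper names below (animatelcm_i2v, animatelcm_t2v, interpolate_frames) and the 2x interpolation factor are assumptions standing in for the corresponding ComfyUI node groups, not a real API:

    # Conceptual sketch only: the helpers are hypothetical stand-ins for the
    # node groups in the workflow.
    def generate_video(input_image, prompt):
        # Pass 1: AnimateLCM I2V generates the initial 32-frame video.
        frames = animatelcm_i2v(image=input_image, prompt=prompt, num_frames=32)

        # Pass 2: AnimateLCM T2V with Cseti's general motion LoRA cleans up
        # details and keeps the frames consistent; IPAdapter conditions on the
        # original image so the result stays faithful to it.
        frames = animatelcm_t2v(frames, prompt=prompt,
                                motion_lora="cseti_general_motion",
                                ipadapter_image=input_image)

        # Frame interpolation (assumed 2x) turns 32 frames into 64, which at
        # 30 FPS is roughly 2 seconds: 64 / 30 ≈ 2.1 s.
        return interpolate_frames(frames, factor=2)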

    CogVideoX (for 16GB VRAM)

    Only use this workflow if you do not have the VRAM to run CogVideoX. If you have the VRAM (16GB), please install kijai's CogVideoX wrapper node (https://github.com/kijai/ComfyUI-CogVideoXWrapper/) and load the I2V workflow from its examples folder. To lower VRAM usage, enable the fp8 transformer option in the (Down)load CogVideo Model node and enable_vae_tiling in the CogVideo Decode node.
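    As an alternative to installing through ComfyUI Manager, here is a minimal sketch of cloning the wrapper into ComfyUI's custom_nodes folder. The ComfyUI path below is an assumption; adjust it to your install and restart ComfyUI afterwards:

    import subprocess
    from pathlib import Path

    # Assumed ComfyUI install location; change this to match your setup.
    custom_nodes = Path.home() / "ComfyUI" / "custom_nodes"

    # Clone kijai's CogVideoX wrapper so ComfyUI picks up the new nodes on
    # the next launch.
    subprocess.run(
        ["git", "clone", "https://github.com/kijai/ComfyUI-CogVideoXWrapper"],
        cwd=custom_nodes,
        check=True,
    )

    # If the repository ships a requirements.txt, install it into the same
    # Python environment that runs ComfyUI.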

    Nodes

    Red: Requires the user to download and/or select a model

    Blue: Requires user input to change settings or provide an input image

    Brown: Notes for the workflow; please read them carefully while using the workflow

    Custom nodes (install with ComfyUI Manager)

    • ComfyUI Frame Interpolation

    • ComfyUI_IPAdapter_plus

    • AnimateDiff Evolved

    • ComfyUI-VideoHelperSuite

    • ComfyUI Essentials

    • KJNodes for ComfyUI

    • ReActor Node for ComfyUI (optional)

    Models Needed


    Type: Workflows
    Base Model: SD 1.5

    Details

    Downloads: 889
    Platform: CivitAI
    Platform Status: Available
    Created: 9/19/2024
    Updated: 9/30/2025
    Deleted: -

    Files

    animatelcmImageToVideo_v10.zip
