Smittie's SVI2 1080p 60 FPS Workflow
Description
With this workflow you can generate a longer video in 1080p and 60 FPS without color or motion mismatches between the video segments (or choose a cross-fade or jump cut as the transition) and inject anchor images (not end images) as additional guidance. You can also continue videos of ANY length (series, movies, etc.) without running into RAM issues.
If you just want to use I2V or continue videos that are only a few seconds long, you can still use workflow v4. That one doesn't require the ComfyUI-Terminal node.
At least 16 GB of VRAM is recommended, but with more heavily quantized versions of the Wan 2.2 I2V diffusion models you can go even lower. This workflow was tested with 24 GB of VRAM and 64 GB of RAM.
Install Instructions
These are specific to the workflow version. Download a workflow and follow what the "Initial Setup" note says.
Video Instructions
If you don't know what to do, you can watch the Stable Video Infinity Tutorial by AI Search. He works with a different workflow, which I have built upon.
Acknowledgement
Most credit goes to the Stable Video Infinity, Lightning, Wan, ffmpeg, ComfyUI and CivitAI teams, as well as kijai, AI Search, darksidewalker, Joviex, GACLove, jeankassio and Firetheft. Thank you, guys!
Description
added continue-video functionality in addition to I2V
increased the number of video generation segments from 3 to 10, but only 3 are enabled by default (you can easily enable more without having to copy-paste them anymore)
changed the saved video crf to 15, which results in a ~10 Mbit/s bitrate and is more reasonable for 1080p videos (see the encoding sketch below)
updated notes
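
For reference, a minimal sketch of what a crf 15 encode looks like when done directly with ffmpeg from Python. The workflow's save node handles this internally, so the file names and codec choice here are just assumptions for illustration:

# Hypothetical standalone re-encode at CRF 15 (roughly what a save node does via ffmpeg).
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "segment_1080p.mp4",        # placeholder input file
    "-c:v", "libx264",
    "-crf", "15",                     # lower CRF = higher quality and bitrate
    "-pix_fmt", "yuv420p",
    "segment_1080p_crf15.mp4",        # placeholder output file
], check=True)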
Comments (7)
Thanks for the update. It's a great idea to make a sequel to the video :) I haven't come across any schemes that actually work yet, but yours might do the trick. There's just one small problem:
ImageBatchExtendWithOverlap
Source and new images must have the same shape: torch.Size([416, 752]) vs torch.Size([624, 1128])
How can this error be fixed?
Thank you.
Seems like you tried to continue a video with a different resolution than the new generation has. My workflow takes the previous video as is and doesn't upscale it or anything, so if the new generation is a different size, this error occurs. You can do one of the following:
1. Upscale your source video to match the new generation (see the sketch below).
2. Downscale/Resize your generation to match the source video.
3. Generate a new video with the size of the source video.
I have another one, which should be the easiest of all:
4. Bypass the "Upscale image by" node in step 6
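
For option 1, a minimal sketch (using ffmpeg from Python; file names are placeholders) that upscales the source video to the generation resolution from the error message above:

# Option 1 sketch: upscale the source video so its frames match the new
# generation's 1128x624 resolution (taken from the error message).
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "source_video.mp4",               # placeholder: the video to continue
    "-vf", "scale=1128:624:flags=lanczos",  # match width:height of the new generation
    "-c:v", "libx264", "-crf", "15",
    "source_video_upscaled.mp4",
], check=True)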
Hi
Which custom node controls your RIFE interpolation?
Unfortunately, without ComfyUI Manager it's quite difficult. On the newer ComfyUI versions it doesn't work yet / no longer works (again), and the built-in manager solution is completely useless.
These are the custom nodes I've updated or added (as the mixlab nodes suggested):
Frame-Interpolation
RifeInterpolation
KJNodes - Update
RifeInterpolation
StableVideoInfinity
Thank you for the feedback.
It's the "RIFE Frame Interpolation" from VFI.
images -> images
source_fps = 16
target_fps = 48
scale = 1.0
model_name = flownet.pkl
batch_size = 16
use_fp16 = true
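
Assuming the node simply inserts evenly spaced intermediate frames between each original pair, the 16 -> 48 fps setting above triples the frame count; a rough sketch of that arithmetic:

# Rough frame-count arithmetic for 16 -> 48 fps interpolation, assuming
# (target/source - 1) intermediate frames are inserted between each frame pair.
source_fps, target_fps = 16, 48
multiplier = target_fps // source_fps      # 3x

input_frames = 81                          # hypothetical segment length
output_frames = (input_frames - 1) * multiplier + 1
print(output_frames)                       # 241 frames, played back at 48 fps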
This is the best SVI workflow I have found so far (and I tried many); it really keeps the character's face stable for long enough.
I am using it with Wan22 Remix and it can decently follow the prompts.
I wonder if this could be adapted to include I2V steps for even better control
Thank you for the positive feedback!
The next update will introduce anchor images per video segment that can be used as further guidance. However, the video will not look 100% like the given in-between anchor image, as they do not work like end images.
