Smittie's SVI2 1080p 60 FPS Workflow
Description
With this workflow you can generate a longer video in 1080p and 60 FPS without color or motion mismatch between the video segments (or choose cross-fade or jump cut as the transition), and inject anchor images (not end images) as additional guidance. You can also continue videos of ANY length (series, movies, etc.) without running into RAM issues.
If you just want to use I2V or continue videos that are only a few seconds long, you can still use workflow v4, which doesn't require the ComfyUI-Terminal node.
At least 16 GB of VRAM is recommended, but with more heavily quantized versions of the Wan 2.2 I2V diffusion models you can go even lower. This workflow was tested with 24 GB of VRAM and 64 GB of RAM.
Install Instructions
These are specific to the workflow version. Download a workflow and follow the "Initial Setup" note.
Video Instructions
If you don't know where to start, you can watch the Stable Video Infinity tutorial by AI Search. He works with a different workflow, which this one builds upon.
Acknowledgement
Most credit goes to the Stable Video Infinity, Lightning, Wan, ffmpeg, ComfyUI and CivitAI teams, as well as kijai, AI Search, darksidewalker, Joviex, GACLove, jeankassio and Firetheft. Thank you guys!
Description
streamlined the workflow; every node that can be hidden is now part of a subgraph
changed all LoRA loader nodes to the LoRA gallery, which lets you choose multiple LoRAs at once, see a preview of each, set the model weight and, if provided by the LoRA creator, automatically get the trigger words
the text prompt for each segment can now be written immediately, and the general text prompt and the LoRA trigger words are automatically appended to it; re-linking is no longer necessary
added an anchor image node to the first video generation segment, which can be useful for continuing a video
updated notes
FAQ
Comments (13)
0.0 seconds (IMPORT FAILED): D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\Jovimetrix ((
The fix in the ComfyUI Manager does not help. What could be the problem, and is there any way to continue the video without this node?
It seems like you have not installed the Jovimetrix custom nodes correctly. Try installing the latest version again via the ComfyUI Manager.
If you skip this node, continue video will not choose the correct frame count. You can bypass it, but then you have to set a fixed value (81 frames) in the "Load Video" subgraph yourself.
Thank you. Reinstalling the node helped. I would like to be able to set a fixed seed value for each segment. It helps a lot to be able to redo the last segment without re-rendering the whole process.
@dirtysem For a seed per segment: copy the noise part in "2. Settings" with the 42 value, but without the "Set_..." node. Then link each copy to the noise input of each segment. That gives you a separate seed for each segment.
Appreciate your effort with this workflow! I'm just a little confused / frustrated with the length of video you get with each segment. It seems that with default settings, 3 separate generations equate to roughly 5 seconds of video? Is that right? So you need to select the same LoRAs and copy/paste prompts multiple times to get the equivalent length of a typical single generation.
Is this an SVI quirk? A side effect of speeding up the video for a more natural-looking output? Would you recommend increasing the length from 49 to 81?
That is just my recommendation, since for example 3x a length of 33 is faster to generate than 1x a length of 97. But of course you can go up with the length to about 8 x 16 + 1 = 129; it just takes longer, though it is of course less repetitive work. 81 is the default.
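The segment lengths mentioned above (33, 49, 81, 97, 129) all follow the same pattern: a multiple of 4 plus 1, which is the frame-count form Wan-style video models typically expect. A minimal sketch of that arithmetic (the `is_valid_length` helper is illustrative, not part of the workflow):

```python
# Assumption: segment frame counts must be of the form 4*n + 1,
# as suggested by the lengths discussed above (33, 49, 81, 97, 129).
def is_valid_length(frames: int) -> bool:
    """Check whether a frame count fits the 4*n + 1 pattern."""
    return frames >= 1 and (frames - 1) % 4 == 0

# All the lengths from the comment fit the pattern:
for length in (33, 49, 81, 97, 129):
    assert is_valid_length(length)

# The upper bound mentioned above: 8 * 16 + 1
print(8 * 16 + 1)  # -> 129
```

Picking several shorter segments (e.g. 3x 33) trades some repetitive prompt setup for faster individual generations, while one long segment (e.g. 1x 97) is slower per run but less manual work.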
Hi, qq.
Is 3b "Continue Video" mandatory? In other words, do I need to load a video before processing?
With other authors' workflows (e.g., wan22_SVI_Pro_native_10-segments.json), I didn't need to specify any video before processing, so I'm a bit confused.
Why do we need to specify a video?
3B "Continue Video" is not mandatory, but unfortunately the "lazy" switches still load those nodes even though the bool "is_continue_video" is set to false. Therefore, if you haven't chosen a video, it throws an error, even though the video will not be used.
I built it that way because I thought it would make bypassing those nodes unnecessary. But since I already had a valid video set, I didn't notice this weird behavior.
So, easy fix: just choose any video and you're good to go; it will not be used if you set the bool to false.
@Smittie Thank you for the clarification. That totally makes sense.
The workflow boolean is not well set up. Kinda garbage the way you handle this...
Thanks for the feedback. Do you know a better way to use the bool and switches?
I would need a way to completely stop the processing when the switch doesn't lead to the previous nodes anyway. But the switches still process everything and then just pass through the true or false option.
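The behavior described above (switches evaluating both branches) can be avoided with ComfyUI's lazy-input mechanism: a node can mark inputs as `lazy` and implement `check_lazy_status` to request only the branch it will actually use. A hedged sketch, with illustrative names not taken from this workflow:

```python
# Sketch of a switch node using ComfyUI's lazy evaluation.
# Node and input names here are hypothetical examples.
class LazySwitch:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "use_continue_video": ("BOOLEAN", {"default": False}),
                # Marking both branches lazy lets ComfyUI skip evaluating
                # whichever one check_lazy_status() does not ask for.
                "on_true": ("IMAGE", {"lazy": True}),
                "on_false": ("IMAGE", {"lazy": True}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "switch"

    def check_lazy_status(self, use_continue_video, on_true=None, on_false=None):
        # Return the names of the inputs that still need to be evaluated.
        return ["on_true"] if use_continue_video else ["on_false"]

    def switch(self, use_continue_video, on_true=None, on_false=None):
        return (on_true if use_continue_video else on_false,)
```

With this pattern, the upstream "Continue Video" nodes behind the unused branch would never run, so no video file would need to be selected when the bool is false.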
I would potentially offer 2 different workflows to keep things cleaner and less confusing for the end user. It took me a while to figure out how to make this work in its current state, despite the instructions.
A boolean is potentially possible, but the workflow is already quite large on its own, so maybe 2 workflows make more sense.
Good luck and apologies if I sounded blunt on my initial comment.
@maximethurston910 That's feedback I can work with. Thanks for the clarification.
My goal was essentially to start with I2V, save the result once you run out of RAM, and then use the same settings (no need to set things up again, since it is the same workflow) to continue working on your previously created generation after the RAM has been freed.
