EDIT: a tutorial video has been added to the examples of the 1.0 version, take a look.
This is an i2v test workflow to generate seamless (or close to seamless) video using the Flow2 ComfyUI-WanStartEndFramesNative node.
It's not perfect; if you have a better solution for a seamless loop, let me know.
Details are in the workflow.
Help: RuntimeError: output with shape [1, 14850, 5120] doesn't match the broadcast shape [2, 14850, 5120]
It's pretty rare, but this error can sometimes pop up at random; I haven't found out why.
Solution: increase or reduce the video length to the next or previous step and re-run.
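If it helps to picture what a "step" is, here is a minimal sketch, assuming the usual Wan 2.1 constraint that frame counts follow the pattern 4*k + 1 because the VAE compresses time 4x (the function name is illustrative, not a node in the workflow):

```python
# Hedged sketch: assuming Wan 2.1 frame counts must be of the form 4*k + 1
# (17, 21, ..., 77, 81, 85, ...), "next or previous step" means +/- 4 frames.
def neighbor_lengths(length: int, step: int = 4) -> tuple[int, int]:
    """Return the valid frame counts one step below and one step above `length`."""
    k = (length - 1) // step                   # index of the current step
    below = max(step * (k - 1) + 1, step + 1)  # one step down, floored at step + 1
    above = step * (k + 1) + 1                 # one step up
    return below, above

print(neighbor_lengths(81))  # -> (77, 85)
```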
Description
- Added sliders to control the output video fps and interpolation, allowing better control of speed and fluidity, plus a note on how to use them (see the sketch after this list).
- Fixed an issue I noticed after the final interpolation: some frames were missing between the last and the first frame. It's invisible at low interpolation, but the gap grows with the final interpolation number and can create a jump. It's now fixed; the output video loops perfectly.
- The clean VRAM node was not connected in 1.1; corrected.
- Be careful: the "save output" button is now turned off by default in the final group, to let you tune the output settings without saving every iteration, and the output video framerate is set to 60 fps.
- Minor improvements.
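For what the fps and interpolation sliders mean for clip speed, here is a minimal sketch of the arithmetic, assuming a RIFE-style interpolator that multiplies the frame count (the function and parameter names are illustrative, not the actual node settings):

```python
# Hedged sketch of the speed/fluidity math behind the sliders (illustrative
# names, not the actual node parameters): interpolation multiplies the frame
# count, and the output fps decides how fast those frames play back.
def output_duration(src_frames: int, interp_factor: int, out_fps: float) -> float:
    """Seconds of video after interpolation, played back at out_fps."""
    total_frames = (src_frames - 1) * interp_factor + 1  # RIFE-style in-betweens
    return total_frames / out_fps

# 81 source frames, 4x interpolation, played at 60 fps -> about 5.35 s
print(round(output_duration(81, 4, 60), 2))
```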
FAQ
Comments (92)
Anyone try this with Blackwell (50 series)? With my card, it'll work maybe once, then go increasingly crazy: either ignoring the prompt and rendering a series of identical images, or rendering garbage images very quickly. This is with and without teacache and sageattention, with different GGUF loaders, and every amount of virtual VRAM. I can only conclude the frame-to-frame node (at the heart of this workflow) is broken on Blackwell.
Sorry, I'm not rich enough to use a Blackwell GPU. :D Did you try another version of the workflow to see if the issue persists, especially version 1.0? It uses a standard KSampler in place of the custom sampler. With 1500 downloads, I hope someone else using a Blackwell will give feedback soon if it's a general problem. Also, be sure to use version 1.0.5 of the start-end node.
I have been using this with an RTX 5090 and I haven't had any issues so far. It could be the model you are trying to use. I am using the 14B 720p Q6_K GGUF model. The results I've gotten so far are really good compared to other workflows I have used.
@thecoco254 OK, cool. If your issue is solved, that's the most important thing. Thanks for your feedback, have fun.
Really great!! Did you consider making one for T2V?
Thanks. I don't think so: first, because I'm trying a lot of things in the hope of fully automating this workflow, and it takes a lot of time; second, because I don't think it's easily doable. Doing this for T2V would require first generating a full T2V video with the T2V model, then switching to the I2V model and using the last and first frames of the T2V video to close the loop. That means double the interpolation work to join the two videos properly. The best option may be to train a seamless-loop LoRA on videos created with this workflow; I've been thinking about that for some time, and it would be great for both this workflow and T2V models.
You might be interested to know that Wan released their own 720p Start-End frame model, and Kijai made a workflow for it. I imagine 480p and GGUF versions will be coming soon. Could make loop creation easier, who knows.
Sure, I'm interested! I've done some quick searches and can't find anything interesting. Do you have some links, please?
@ekafalain Probably this, maybe: https://huggingface.co/Wan-AI/Wan2.1-FLF2V-14B-720P
@blo01 Oh thanks, I had looked at the Hugging Face page and missed the latest news :/
I'm currently running tests on the official FLF2V model and the Comfy core start-end node, and the results are really bad. I mean, if the first and last frames are the same, just nothing happens in between. I'm really disappointed :/
Yup.. tried other workflows too; it could be that the default way is not how it should be used with FLF2V models.
@ekafalain I also tried the GGUF and it doesn't work with 2 identical images; perhaps it's a matter of time before it's functional for loops. In any case, it's a big step, and it already gives me very good results when used normally.
So, I've tried a lot of different things with the FLF2V model, but nothing good is happening on loops for now :/ But the Comfy core start-end node seems to do a full-auto seamless loop on its own, awesome! However, the last 3 or 4 frames are burned and unusable, what a shame!! I've tried a lot of things but I can't save these last frames. It doesn't come from my workflow; I've seen this in every other workflow using this node. Maybe we have to wait a little longer for perfect full auto.. 😒
Where do I get the resources for the "UnetLoaderGGUFDisTorchMultiGPU" and the "CLIPLoaderMultiGPU"? I have scoured the internet and this post and cannot find any info anywhere.
@ekafalain Thanks a bunch, I am up and running now, but I'm getting this error at the "SamplerCustomAdvanced" and hoping you could help: "Sizes of tensors must match except in dimension 1. Expected size 13 but got size 12 for tensor number 1 in the list."
@anthonybyrne6890566 Just to know: did you change any settings? And did you try with another image?
@ekafalain I did not modify any settings, and I have tried multiple different images of various sizes.
@anthonybyrne6890566 Hum... I've had issues with the KJNodes recently; try bypassing the "sage attention" node and the "teacache" node before running and tell me. Or, if you just want to try, you can use v1.0 of the workflow; it uses a standard KSampler in place of the custom sampler. Some features are missing, but it's a good start.
I keep getting this:
"Missing Node Types When loading the graph, the following node types were not found UnetLoaderGGUFDisTorchMultiGPU No selected item"
I have also tried a fresh install and it says the node is installed, but I still can't make it work. Any help here? I'd love to try this.
Hello there, I've never seen this error before. The first thing to do, I suppose, is to remove the MultiGPU unet loader node and replace it with the standard "Unet Loader (GGUF)" to test. If the issue persists with the clip loader or the VAE loader, maybe try to manually download the MultiGPU files from GitHub and replace the ones in the "comfyui/custom_nodes" folder. Copy/pasting the full console log can help me find where the problem comes from. You can also replace all the MultiGPU nodes (unet loader, clip loader and vae loader) with standard ones if you don't want to overthink it, but this will have an impact on the generation time.
@ekafalain Wow, that's great feedback, I'll try it :)
ReferenceError: clamp is not defined
I'm ready to help, but please give a few more details... I don't even know which node causes the issue...
How do you use this in wan 2.1 for pinokio?
I don't know, I don't use Pinokio. It's a ComfyUI workflow; I don't think you can use it with anything else.
Hi, I might be wrong, but I think I saw an installation .bat file for setting up the stuff you need for ComfyUI with this. Can that be correct? If so, I'd love the link, I can't find it anywhere.
No, sorry, no .bat here, only a workflow and the old-school way of installing the custom nodes.
The workflow doesn't seem to follow my prompts properly. Any tips? I am only trying to animate foreplay scenes instead of any sex scenes.
The base model is not made to do this kind of thing, so if you don't use a LoRA, the result will be terrible; but if you use a LoRA not trained for what you want, the result will be terrible too. The best way is to use a LoRA trained for your specific need.
Hey again! v1.2 has been my go-to recently, really love it!
Sometimes the interpolation at the end makes it a bit blurry when it loops back, and I think, like you mentioned, it's related to the number of frames the LoRA was trained on.
It seems like the blurry effect could potentially be fixed if the video was passed through a final sampler, but then it would need to feed into the last part of the workflow, the interpolation, so it might be harder on the hardware to process. I'm not sure if you have a solution for it? Some loops work perfectly with specific LoRAs, but not all.
I think this final touch would help a lot, not sure if you have an idea for it.
Hey, what's up! Sorry for taking so long to answer. I haven't noticed any blur issue, but sometimes the final RIFE VFI module creates really annoying white blinking, maybe it's related. I've done tests with the "FILM VFI" node and that fixes the issue, so maybe try changing this final node. There are plenty of interpolation nodes in the interpolation suite; I need to do some tests to find the best one and swap out the RIFE VFI in a potential future update. A blur issue can also come from the fact that the final interpolation interpolates fake frames already generated by a previous interpolation... that's a lot of fake frames. But as a first step, try a different interpolation node. I don't think a final sampling pass is really a good idea: resampling something like 100/150 frames would be soooooooo long, and I'm not sure it would be very good even with a low denoise :/ with a standard/custom sampler, I mean; I'm pretty sure some nodes specialize in that, but none come to mind for now. I tried fastblend (https://github.com/AInseven/ComfyUI-fastblend) a long time ago, but it's not really good: slow and heavy. I'm glad you like the workflow. I'm still running tests on a clean upscale; the best solution, I think, would be TensorRT, but you scared me in PM about the hard installation, and I can't integrate a hard-to-install node into the workflow when a lot of people already have issues with basic nodes.
Edit: maybe you can try to generate a blurry vid on purpose (one you can share) and send it to me; this will help identify the issue.
@ekafalain Hey! Forgot to reply after reading your message. I realized that this issue only happens for illustrated/2.5D style. Realistic seems to not have this issue. Seems the interpolation node is indeed the issue, so I'll have to replace that one. I haven't tried further with tensorRT as it seems to not have specific upscale model available for 2.5D, but I think it works well for realistic. It's 2-3x faster for sure - just not sure if it keeps full detail from reference.
How do I extend the video length, to make it more than 1 second?
Just change the length parameter in the WanVideo node, 1st row, 4th column.
I get: "PathchSageAttentionKJ
No module named 'sageattention'" on a fresh install with all the modules installed.
Do you have any advice on how to get around this? I have looked into how to do it for 2 days and failed to fix it.
When [Don't touch me daddy!] in the last step is switched on, the whole screen becomes dark and the bright areas flash with a flickering effect. Is this only in my environment?
If [Don't touch me daddy!] is bypassed, this problem does not occur.
What does the node behind [Don't touch me daddy!] actually do?
Hi, thanks for your feedback. This node just removes the last frame. It's tricky to explain why I need this; basically, it's to avoid a micro freeze at the loop end. I've already seen the white flickering issue and plan to correct it in a future update by replacing the "RIFE VFI" node with the "FILM VFI". But I've never seen a screen-going-dark problem; can you take some screenshots of the issue, post them somewhere and send me a link, please? This node is not supposed to create this kind of issue, bypassed or not. (Note: the latest ComfyUI frontend creates a lot of issues in a lot of workflows; can you please check which version of the frontend you are currently using? It's written in the console, thx.)
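In case "removes the last frame" is unclear, here is a minimal sketch of the idea, assuming the clip's last frame duplicates its first (the function name is illustrative, not the actual node code):

```python
# Hedged sketch: in a loop whose last frame equals its first, keeping both
# plays the same image twice at the wrap point and causes a micro freeze.
# Dropping the final frame lets frames[-1] -> frames[0] play through smoothly.
def trim_for_loop(frames: list) -> list:
    """Drop the duplicated final frame before saving the looping video."""
    return frames[:-1]
```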
First I spent a bunch of nerves figuring out how it worked, then picking the right opening shot. And it was well worth it! Thanks for posting this WF!
Thanks for your feedback. I'm trying a lot of things in the hope of fully automating the loop process, to spare people this kind of nervous breakdown. Have fun with this WF!
Any advice you can offer, Gudvin? I can't get it to run on my end due to a failure in the workflow.
Hi, I really love this workflow, but since yesterday it has started giving me black video. I didn't update ComfyUI. How do I fix this?
Great workflow, thanks!
I have an image of a woman with her left hand at her chest, but the prompt isn't followed when I say anything about moving her hand to a different body part. Only at 41 frames does it move, but it doesn't look natural. At >81 frames, the arms barely move.
There is a node package that never installs: ComfyUI-WanStartEndFrames :C
After a few hours I figured out how to fix this: the manager is set to install version 1.0.5, and you need to click on the version and select "latest"... Can't believe it took me so long and that this was the fix.
I tried using the workflow and got an error from the SamplerCustomAdvanced stage: "not enough values to unpack (expected 2, got 1)"
Is this a known issue or does anyone know what I may have done wrong?
Did anyone get CausVid lora to work on this workflow?
Is this a LoRA or....??????
A workflow using a start-end node for looping back to the first frame when the video ends.
Bro, a 45 KB LoRA???
@reuteradrian84618 It's a workflow lol, no model file in this
Bro, this is basically rocket science—I stared at it for an hour and still couldn't figure out how it works...
And... I'm missing the WanVideoEnhanceAVideoKJ node. Do you know which version of KJNodes I should be using, or if there's an alternative node I can use instead?
https://civitai.com/models/1416594/wan-aio-vace-seamless-motion-extension-loop?modelVersionId=2009031
If you never figured this out, it's probably because you needed to update ComfyUI and the nodes and fully restart.
Firstly, I wanna say thanks! This is an awesome workflow which I've used a lot for animations/projects. Just had a quick question: is it possible to alter this so that instead of interpolating from one image back to itself to create a loop, you could interpolate between a start image and a different end image? So basically filling in the blanks between two scenes.
That is the standard way the node is used. You define the first and last frame. The prompt is used to fill in the middle. This just tries to make it loop with identical images.
Works pretty well. I'm trying to include a half-ESRGAN, half-Wan-based resampling upscale phase in between (at the end of the workflow), but obviously it's too technical for me to achieve. Any chance you could provide a version with upscaling included?
Works really well, but I have an issue with my anime characters when I run full interpolation: it introduces some flickering. How can I fix it?
I cannot get this to work because "ComfyUI-WanStartEndFrames" has missing parts regardless of version. Does anybody have the same problem?
Does this work the same as first-and-last-frame workflows, where you can just use the same frame for start and end?
The workflow isn't following my input image. It's making its own video instead...
Use specific LoRAs + trigger words or you won't get decent results.
The workflow worked well, I did some tests and tweaks.
But I couldn’t make a video longer than 2 seconds.
Is there anywhere to change this?
I adjusted the FPS and interpolation but still couldn’t go beyond 2 seconds.
I don't exactly remember how this workflow works, but nowadays people have succeeded with infinite length using either a chaining pattern with VACE or some custom nodes that I can never install lmfao. Just looking through the more recent Wan workflow posts with the filter should find you everything.
Like this for example: https://civitai.com/models/1416594/wan-aio-vace-seamless-motion-extension-loop?modelVersionId=2009031
Incredible post too, because it gives a full tutorial.
Please update model to Wan I2V so it's shown when filtering ;P
"Triton only support CUDA 10.0 or higher, but got CUDA version: 12.9". I wonder which of us has worse math, me or Triton.
How do I change the size of the output video and the upscale factor? I will send Buzz to anyone who helps.
error "No module named 'sageattention'". how to solve?
Encountered the same problem :(
Install sageattention
Panthy, how?
Install ComfyUI Easy Install; it has a shortcut for installing SageAttention easily.
Just in case someone else hits this: I got the error "No module named 'sageattention'". The fix was pretty simple. Go to your ComfyUI embedded Python folder (for me it's C:\AI\Comfy-UIPrtbl\python_embeded), open a terminal there (in Explorer you can just type cmd in the address bar) and then run:
python.exe -m pip install sageattention
If that doesn't work, install it directly from GitHub with:
python.exe -m pip install git+https://github.com/thu-ml/SageAttention.git
After that you can test it with:
python.exe -c "import sageattention; print('SageAttention OK')"
and if you see SageAttention OK, it's all good.
I don't quite understand why the process lists each frame. Does this mean I can make minor edits? I'm not sure about the arrow in the note.
Workflow is too monstrous. I'm afraid to even look at it, let alone touch it.
I tried as many combinations as I could but cannot overcome this issue: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)... Is this able to run on multigpu or nah?
Pls Wan 2.2
I already followed the config instructions, but why is it still too fast? Interpolation 4, fps 60.
Looks like it still needs to cut out the redundant frame(s) between loops to prevent those dupe-frame pauses
Hello, would you like to create an updated version for wan 2.2? I really like your workflow!
Has anyone tried updating this workflow to work with wan 2.2? I really like it, but I'm new to this and can't update it myself. It seems like the author has abandoned this project :(
The ColorMatchImage node is red; I can't find any info on how to install it, and no solution for "'WanVAE' object has no attribute 'clear_cache'".
install ComfyUI-Image-Filters
WanImageToVideo (Flow2) breaks the generation saying: WanVAE object has no attribute "clear_cache"