CivArchive
    Wan2.2 V2V VACE One-Click 'Seamless' Workflow Loop, Preserving Subject - v1.0
    NSFW

    "The power of the jiggle physics in the palm of my hands"

    -Doc Ock or something like that

    This is a VACE V2V workflow in Wan2.2 designed to take the subject from your provided reference image and swap it in for the subject in your provided reference video. In other words, the subject from the image does whatever action the subject in the video does.

    V2: NOTES - IMPORTANT!!

    I couldn't find a non-conflicting node that generates some sort of silence. Given the output limitations of what I could find/use, I ended up having to write my own custom node for the situation. The custom node is provided within the download for this workflow. Simply take the folder "Silence Generator" and move that folder and its contents into your custom_nodes folder in ComfyUI. Restart ComfyUI and you should be good to go.

    If you know of a node that can generate one second of silence, you can simply replace the custom node with that (and then tell me about it!)
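    For the curious, an AUDIO-emitting node doesn't take much code. Here's a minimal sketch of the idea (the actual file shipped in the download may differ; ComfyUI's AUDIO type is a dict holding a [batch, channels, samples] waveform tensor plus a sample rate):

```python
# silence_generator.py - minimal sketch, not necessarily the shipped code
import torch

class SilenceGenerator:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "duration_seconds": ("FLOAT", {"default": 1.0, "min": 0.1, "max": 60.0}),
            "sample_rate": ("INT", {"default": 48000, "min": 8000, "max": 192000}),
        }}

    RETURN_TYPES = ("AUDIO",)
    FUNCTION = "generate"
    CATEGORY = "audio"

    def generate(self, duration_seconds, sample_rate):
        samples = int(duration_seconds * sample_rate)
        # ComfyUI AUDIO format: {"waveform": [batch, channels, samples], "sample_rate": int}
        waveform = torch.zeros((1, 2, samples), dtype=torch.float32)
        return ({"waveform": waveform, "sample_rate": sample_rate},)

NODE_CLASS_MAPPINGS = {"SilenceGenerator": SilenceGenerator}
```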

    I'll keep updating here with quirks I've been seeing along the way. Not really bugs per se, but things to help resolve issues:

    • Problem: "LayerUtility: Purge VRAM V2 is not being found even though LayerStyle nodepack is up to date"
      Solution: clone it directly into your custom_nodes directory from here: https://github.com/chflame163/ComfyUI_LayerStyle
      Root Cause: ComfyUI registry possibly caching wrong version

    • Problem: "My VRAM is bad, and the models are too large for my machine. Slow/OOM"
      Solution: Use GGUF versions of the models. You may need different loaders. If you connect each loader's model output to the Set node input behind it, it should work fine. If you're still confused and Google isn't helping, let me know and I can give some guidance.
      Root Cause: I created this on an H100 VM.

    • Problem: "I'm getting a numpy error upon running the workflow from InspyrenetRembgAdvanced."
      Solution: I've been seeing this from time to time. I'm really not liking that node and may find a way to replace it soon. For now, if you run the workflow again, it ignores the error.
      Root Cause: Node be jank.

    Here's how it works in a nutshell:

    1. You input your reference image, reference video, size, VACE models, iteration count, frames to process per iteration, overlap frames, and whatever other params.

    2. It runs edge detection, pose, plus some fancy masking (optional) on your video's iteration slice. It also processes your image, padding it if it doesn't match your video's aspect ratio.

    3. It runs VACE processing on your video/ref image/etc.

    4. It replaces the user-specified overlap frames with grey frames in your VACE output (see the sketch after this list).

    IF IT'S THE 0th INDEX (first iteration), IT BATCHES AND GOES TO NEXT ITERATION, ELSE...

    5. It processes the transition frames in a separate VACE pass, using the black/white masking trick, but still within the same workflow (not a separate one).

    6. The grey frames inserted in step 4 get replaced with your processed frames from step 5.

    7. The last frame of the iteration gets sent back to the beginning of the workflow, the subject in it is masked out, and your original ref image character is overlaid on top. (This is important: it stops the typical 'cooked' look each iteration, and it's how this differs from the usual 'send the last frame over for reference' approach in other workflows.)

    8. Steps 2 through 6 repeat until your iteration total is hit.

    9. It chops off the very first grey frames from your overall video.
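    In node form this is a pile of math and conditional nodes, but conceptually the grey-frame trick from steps 4 and 6 boils down to something like this (illustrative Python sketch only, treating a video batch the way ComfyUI does, as a [frames, height, width, channels] float tensor in 0..1):

```python
import torch

def blank_overlap(frames: torch.Tensor, overlap: int) -> torch.Tensor:
    """Step 4: overwrite the first `overlap` frames with neutral grey placeholders."""
    out = frames.clone()
    out[:overlap] = 0.5  # mid-grey
    return out

def splice_transition(frames: torch.Tensor, transition: torch.Tensor, overlap: int) -> torch.Tensor:
    """Step 6: drop the separately processed transition frames back into the grey slot."""
    out = frames.clone()
    out[:overlap] = transition[:overlap]
    return out
```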

    Result?

    You should now have a near-seamless video with processed transitions from VACE. Your subject should not get messed up. The background may get a bit cooked at high iteration counts.

    My intention with this workflow is for it to be easy to operate, even though the math and conditional nodes everywhere may seem kinda crazy.

    Disclaimer: I'm running this on an H100. Unless your GPU is genuine 100% angus beef power and not a potato PC, you will almost certainly want to swap out the diffusion models/text encoders/etc. to make things run faster. I tend to place quality first and speed second.

    Future places this can go:

    • Canny incorporation...

    • ...and/or maintaining the mouth shape while removing the mask better - medium priority

    • Understanding the pose blend better so I can incorporate more pose without it coming up in the final video as an object. (maybe it's the color? blend type? need research) - medium priority

    • Audio - highest priority, should be upcoming soon, figuring out some syncing stuff - done

    • Background options: letting the user pick the reference video's background, an image background, or a new T2V-style generated background - low priority

    • Keeping background consistency better from initial image to first generation (possibly need some masking on the control video) - medium priority

    • Interpolation step, maybe upscale - medium priority but easy to do, I just want to figure out the ideal way to do it for quality/speed - done

    • Adding optional use of different image references on user-specified iterations. Could add some cool possibilities. - low priority

    • Changing out the deprecated resize image v1 for v2 - medium priority, in progress

    • Bug fixes - medium priority depending on the bug

    Personal notes:

    This started out with a "there's a source video I like, but I hate how renaissance masks look and want to replace the person." Then I decided to loop the process. Then I thought "but what if I make it seamless" from the overlap, so I built a full FL2V step into it, but that wasn't seamless. It had this coloration difference and jumps. Then I saw some "seamless" workflows on CivitAI. Those were neat! ...but they used filepaths and stuff in essentially a separate workflow. I wanted to click one button and process a full video, so I continued building this out. It's still definitely not perfect. It doesn't give an exact 1-for-1 replacement like I want, but I think it's pretty cool for what it does so far at least. From here on, it's mainly fine-tuning everything and then fixing edge cases + adding some more features.

    Description

    Initial commit. See description for things possibly upcoming.

    FAQ

    Comments (56)

    Jezz · Aug 31, 2025
    CivitAI

    Hi, I can't get this workflow to load. I get this error:

    Loading aborted due to error reloading workflow data TypeError: Cannot read properties of undefined (reading 'type')

    Can you help please?

    TIA.

    ralphtandy · Aug 31, 2025

    I am also getting this error - I have never seen this before in my time using ComfyUI. What do we do?

    gumpbubba721291
    Author
    Aug 31, 2025

    Hey @Jezz and @ralphtandy! I can try and help, but what does it show when you are getting this error? What node is highlighting red? Or does it give a number for which node is acting funky? The terminal may give more clues.

    randomsmasher266 · Aug 31, 2025

    Here's the log:

    # ComfyUI Error Report

    ## Error Details
    - **Node ID:** N/A
    - **Node Type:** N/A
    - **Exception Type:** Loading aborted due to error reloading workflow data
    - **Exception Message:** TypeError: Cannot read properties of undefined (reading 'type')

    ## Stack Trace
    TypeError: Cannot read properties of undefined (reading 'type')
        at nodeType.onConnectionsChange (http://127.0.0.1:8188/extensions/ComfyUI-Impact-Pack/impact-pack.js:309:54)
        at ComfyNode.configure (http://127.0.0.1:8188/assets/index-aiK1t-_Q.js:216702:33)
        at ComfyNode.configure (http://127.0.0.1:8188/assets/index-aiK1t-_Q.js:295890:15)
        at LGraph.configure (http://127.0.0.1:8188/assets/index-aiK1t-_Q.js:228834:34)
        at LGraph.configure (http://127.0.0.1:8188/assets/index-aiK1t-_Q.js:302073:26)
        at LGraph.configure (http://127.0.0.1:8188/extensions/comfyui-custom-scripts/js/reroutePrimitive.js:14:29)
        at ComfyApp.loadGraphData (http://127.0.0.1:8188/assets/index-aiK1t-_Q.js:302370:18)
        at async app.loadGraphData (http://127.0.0.1:8188/extensions/comfyui-manager/components-manager.js:774:9)

    gumpbubba721291
    Author
    Aug 31, 2025

    @randomsmasher266 I notice it doesn't say the specific node failing, but it does mention ComfyUI-Impact-Pack, which is something I used. Can you make sure the ComfyUI-Impact-Pack node pack is installed, and if it is, whether it's still returning the same error?

    Jezz · Aug 31, 2025

    I have about a dozen unnamed nodes, and the workflow looks like it has exploded, with nodes all over the place piled on top of each other. I've tried to update ComfyUI but it's still not working.

    gumpbubba721291
    Author
    Aug 31, 2025

    @Jezz I'm working on a version that removes ComfyUI-Impact-Pack dependencies. That may resolve it. Uploading shortly.

    gumpbubba721291
    Author
    Aug 31, 2025

    @Jezz @ralphtandy @randomsmasher266 - published v2. Can anyone tell me if that resolved your issue?

    Jezz · Aug 31, 2025

    @gumpbubba721291 Hi, thank you! I'm just downloading it now; I'll try it and let you know how I get on.

    Jezz · Aug 31, 2025

    Problem getting the LayerUtility: PurgeVRAM V2 and SilenceGenerator nodes to install. All other nodes look OK, so I'm almost there.

    gumpbubba721291
    Author
    Aug 31, 2025

    @Jezz For the silence generator, use the node included in the download.
    Basically just take the whole folder Silence_Generator and put it and its contents in your custom_nodes folder.
    I couldn't find a custom node for initiating audio that outputs in AUDIO format, so I had to write one myself.

    For PurgeVRAM V2, it's part of LayerStyle.
    Granted, if you don't have it, it's not the end of the world. You'll likely get an OOM error if you try to upscale, but if you then immediately restart the workflow, your video will be cached and it will work, since ComfyUI automatically unloads your models and other cache. PurgeVRAM just lets you go directly into the upscale without the OOM and restart.

    Jezz · Sep 1, 2025

    @gumpbubba721291 Thanks for the advice. Also, how do I use GGUF instead of safetensors models? Do I just replace the loaders? I only have a 12 GB 4070. Thanks.

    gumpbubba721291
    Author
    Sep 1, 2025

    @Jezz For a 4070, I would most definitely not use the models I have in the workflow right now. Those are like 30 GB per model and you will be stuck waiting forever for your video to process.

    I would change to GGUF if I were you. I'm pretty sure you need to change the loader. Behind the loader, you'll see nodes like "Set_vaceLowNoiseModel" or "Set_vaceHighNoiseModel". If you change out the loaders, just make sure the model output is connected to its respective Set node input. The same goes for the clip, VAE, or whatever other loader you change out. Also make sure the quant levels you are using match: if you use Q4 on your clip, use a Q4 on your diffusion model.

    For the vace models: https://huggingface.co/lym00/Wan2.2_T2V_A14B_VACE-test/tree/main

    For text encoder, aka your clip: https://huggingface.co/city96/umt5-xxl-encoder-gguf

    For the VAE, it's... probably(?) fine, but otherwise I would use https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors

    If your output is a bunch of garbage, chances are some things are not compatible with each other (if you tweaked nothing else). If you get a tensor mismatch error once it hits the sampler, chances are your text encoder and diffusion model aren't compatible either.

    gumpbubba721291
    Author
    Sep 1, 2025

    @Jezz I went to a different VM I had running to try and set up my workflow from scratch. Ran into Purge_VRAM_v2 not being found. Fixed it.

    Remove the current LayerStyle folder in custom_nodes, then run this command from inside your custom_nodes directory ('/ComfyUI/custom_nodes/') to clone it directly:
    git clone https://github.com/chflame163/ComfyUI_LayerStyle.git

    For some reason ComfyUI's 'latest' is not really the latest. I confirmed by comparing the actual repo code against my local copy. Maybe a ComfyUI registry caching issue.

    Jezz · Sep 1, 2025 · 2 reactions

    @gumpbubba721291 Hi, thank you so much for the work you are doing to help me. I will try your suggestions when I finish work later and I'll let you know how I get on.

    I have been using another workflow that works great and is very fast with nice results, but the girls always seem to change by the end of the video, and even body shape can change, plus there's a loss of detail.

    Thanks again and will message you later.👍

    Jezz · Sep 1, 2025

    Hello my friend, I have the workflow loading now with all nodes thanks to your help. After sorting a few errors I now have it running with GGUF models. It goes quite a way through the workflow until it gets to the sampling section, where it gives me this error: ClownsharKSampler_Beta mat1 and mat2 shapes cannot be multiplied (37x3584 and 4096x5120). I do not have a clue why I get the error, and I have searched online without any luck finding the answer. Have you seen this error before?

    Thanks.

    gumpbubba721291
    Author
    Sep 1, 2025

    @Jezz So there are a couple possibilities on this.

    First thing I would check - are your diffusion models & text encoder compatible? Like, if your model has somethingsomething_Q8.gguf on it, make sure your text encoder (clip) also has a Q8 on it. This is the most likely culprit if I had to guess.

    Second thing I would check - are your images sized correctly? I account for some of this with the workflow (still working on improvements though). Check the previews to the right of processing depth. Are you seeing your video in a sort of white ghost format? If you're seeing what looks like a big black rectangle over your preview, check your terminal; you're likely getting mismatch errors. That could cause issues going into the blender and possibly the sampler.

    Jezz · Sep 1, 2025

    Hi, I have tried changing a few things. When I run it, I select the image to use and the video to use; both look fine. That produces 4 more images - a cutout, a grey silhouette, plus 2 more. It then goes to the sample and decode section and gives me a

    ClownsharKSampler_Beta

    shape '[1, 16, 45, 2, 45, 2]' is invalid for input of size 2851200 error now.

    I'm not sure that I will be able to get this to work, it's a bit beyond my knowledge, but if you have any more suggestions I'll try them out. I don't want to waste your time though.

    gumpbubba721291
    Author
    Sep 1, 2025

    @Jezz Hmm, it's tough for me to troubleshoot just off text.

    If you list the models/clip/vae names (basically whatever you changed),

    and also list what you put for "largest dimension" and the width/height of the video,

    I can likely tell you where things went wrong from there, because then I'll be able to recreate it.

    Jezz · Sep 3, 2025

    @gumpbubba721291 Hi, sorry for the delay, I've been a bit busy.

    I am using the following: Wan2.2_T2V_High_Noise_14B_VACE-Q4_K_S.gguf + the low noise model, umt5_xxl_fp16.safetensors, Wan2_1_VAE_fp32.safetensors; image size is 896 x 1088, video size is 640 x 480. Hope this info helps. Also, you are using Diffusion Model Loader KJ - which node would I use for GGUF models? Would it be UnetLoaderGGUFDisTorchMultiGPU?

    Thanks.

    gumpbubba721291
    Author
    Sep 5, 2025

    @Jezz What is the number you put for "largest dimension"? (the parameter on the left side in the list of stuff)

    I have a feeling it's an issue with umt5_xxl_fp16.safetensors not being compatible with the vace gguf you are using (although not sure yet, still need to check)

    Jezz · Sep 5, 2025

    @gumpbubba721291 Hi, I have it working now. Not exactly sure what I did because I changed a few things, but it seems to be running now. Thanks for all the help; I'll keep an eye open for any future workflows from you. Thanks!

    gumpbubba721291
    Author
    Sep 5, 2025

    @Jezz Awesome glad to hear it!

    Pgnee · Aug 31, 2025
    CivitAI

    Well, with a 5090 I can't find a good combo of even GGUFs to get this to run. You must have a beast computer! :)

    gumpbubba721291
    Author
    Aug 31, 2025

    So the timeline 1.mp4 is my reference video. I'm not sure how familiar you are with VACE, but basically this is a subject replacement workflow.

    You put in your reference video - let's say, in my case, a woman with a renaissance mask fondling her breasts.

    You put in your reference image.

    It recreates the video, but with the background and subject of your reference image instead of the reference video.

    If you have any more questions lemme know.

    gumpbubba721291
    Author
    Sep 1, 2025

    Oh, did this comment get edited or did I respond to the wrong one? Oops lol.
    But yeah, I'm running this on a VM through Hyperstack with H100s on it, so it can handle just about anything. It's pretty easy to set up, as long as you've got the funds for it and a little bit of terminal know-how.

    To bring over a part of a comment I put elsewhere:

    Those are like 30 GB per model and you will be stuck waiting forever for your video to process.

    I would change to GGUF if I were you. I'm pretty sure you need to change the loader. Behind the loader, you'll see nodes like "Set_vaceLowNoiseModel" or "Set_vaceHighNoiseModel". If you change out the loaders, just make sure the model output is connected to its respective Set node input. The same goes for the clip, VAE, or whatever other loader you change out. Also make sure the quant levels you are using match: if you use Q4 on your clip, use a Q4 on your diffusion model.

    For the vace models: https://huggingface.co/lym00/Wan2.2_T2V_A14B_VACE-test/tree/main

    For text encoder, aka your clip: https://huggingface.co/city96/umt5-xxl-encoder-gguf

    For the VAE, it's... probably(?) fine, but otherwise I would use https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/vae/wan_2.1_vae.safetensors

    If your output is a bunch of garbage, chances are some things are not compatible with each other (if you tweaked nothing else). If you get a tensor mismatch error once it hits the sampler, chances are your text encoder and diffusion model aren't compatible either.

    Hopefully that helps!

    Pgnee · Sep 1, 2025

    @gumpbubba721291 Super helpful, will try this tonight. What do you run this on? Seems Wan2.2 14B is too much for a 5090 :(

    Edit:
    Set everything up and testing different unet combos now. Thought I had a good rig! I am humbled. Which loras did you use, out of curiosity? I'm learning all this and part of it is just recreating what others do. My first submission got denied... they didn't think it was AI, so I have to include all my stuff now. But still learning how to submit. Hoping to submit one for this once I figure it out! If I can get it rolling on the rig.

    Ty btw.

    gumpbubba721291
    Author
    Sep 2, 2025

    @Pgnee For loras with VACE, it's a bit different than your typical I2V or T2V workflow. Most things are taken care of by your video reference. So far, all I've needed was the light2x lora linked in the note near the sampler in the workflow, and if I'm doing anything that has a penis, I throw in the mystic xxx lora here https://civitai.com/models/1295758/nsfw-fluxorwan-22-mystic-xxx?modelVersionId=2149217 since wan is really bad at dicks. I haven't had use for anything else (at least yet).

    Also, you could probably get away with changing the step count to 6 and having decent enough quality, if you want some better speed that way. I would definitely recommend checking out the vace models linked earlier. Typically speaking, the fewer GB, the faster things will run, but it's a tradeoff on quality. Use the GGUF models instead of the beefy 30 GB safetensors models I was using in the workflow.

    Oh and for submitting stuff on here, make sure you put the prompt and details! I've learned things can get rejected otherwise.

    Pgnee · Sep 2, 2025

    @gumpbubba721291 I've been able to get past everything so far with the Q5_K_S set, but couldn't get the clip GGUFs to work, so I used umt5-xxl-enc-bf16. The clip GGUF loader and the appropriate Q5_K_S umt5 file didn't work.

    Now I'm stuck at the mat1 and mat2 shapes cannot be multiplied (77x768 and 4096x5120) - let's see if Grok can help!

    Working on it! Cool setup. Have a bunch of 3070s lying around but don't wanna daisy-chain them. Too loud and too much power. That 6000 looks so tempting!

    Thanks for the advice also, I know you don't have to help a newbie, so it's appreciated. Really neat workflow though. I want to make something intravascular, maybe a blood cell moving through a vessel in a loop or something would be pretty cool once I figure it out.

    gumpbubba721291
    Author
    Sep 2, 2025

    @Pgnee Could you list the models, clips, and other stuff you are using that changed from the original? Then I can try and replicate it on my end. I'm guessing it's probably a conflict between your model and text encoder, but I would have to check.

    Pgnee · Sep 2, 2025

    @gumpbubba721291 That problem was simple - I am so new I forgot to change stable diffusion to wan in the loader. Now I'm stuck with OOM again at the ClownsharKSampler node 178.

    Tried Q4_K_M Low/High and umt5 with the regular wan_2.1_vae.safetensors.

    I kept mixing and matching for a while to finally get to the ClownsharKSampler node, but just keep running OOM now. If it weren't for the memory issue I'd have it! I'm sure.

    gumpbubba721291
    Author
    Sep 2, 2025

    @Pgnee In that case, I would try lowering the resolution you are running the workflow at, and then upscale at the end. Though you'll want to change the number of frames the upscale processes at one time, because what I have now is going to be way too high, I would guess. You can also change the upscaler's model if it gets OOM.

    Pgnee · Sep 2, 2025

    @gumpbubba721291 Yeah, I think this workflow needs more VRAM than I have. It's for you big boys! :) Wife won't let me blow money on a new rig for AI vid processing (and while I could just build one, it's just a big cost for not much gain since it's a hobby).

    gumpbubba721291
    Author
    Sep 2, 2025

    @Pgnee Have you considered spinning up a VM? Super easy to configure on Hyperstack (if you need a quick walk-through guide I can send my notes), and much, much cheaper than buying a new GPU and rig. There is also Vast AI as an alternative. Then you can just pick your GPU, install ComfyUI on there, and run it through your browser but with that GPU.

    Gooodis · Sep 2, 2025

    Care to share any working workflow for home setups?

    gumpbubba721291
    Author
    Sep 2, 2025 · 4 reactions

    @Gooodis Maybe later I can try and optimize it for home setups and create a lower-VRAM alt version, though no guarantees on when I'll be able to complete it.

    bowiba1265909 · Aug 31, 2025 · 3 reactions
    CivitAI

    Hey there. Thanks for your hard work, this looks promising. And the "rage note" gave me a good laugh.

    I just cannot find the VACE checkpoints for HIGH and LOW you use. Or did you rename them?

    gumpbubba721291
    Author
    Aug 31, 2025 · 3 reactions

    Here is a snippet from my notes so you can find the stuff easily!

    Heads up though - these are the large 30 GB models, so they're pretty chunky!

    ##############################DIFFUSION MODELS##########################################################

    #Diffusion Model for Wan High Vace

    wget -P ~/ComfyUI/models/diffusion_models https://huggingface.co/lym00/Wan2.2_T2V_A14B_VACE-test/resolve/main/Wan2.2_T2V_High_Noise_14B_VACE_fp16.safetensors

    #Diffusion Model for Wan Low Vace

    wget -P ~/ComfyUI/models/diffusion_models https://huggingface.co/lym00/Wan2.2_T2V_A14B_VACE-test/resolve/main/Wan2.2_T2V_Low_Noise_14B_VACE_fp16.safetensors

    ##############################CLIP##########################################################

    #Clip

    wget -P ~/ComfyUI/models/clip_vision https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors

    ##############################VAE##########################################################

    wget -P ~/ComfyUI/models/vae https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1_VAE_fp32.safetensors

    ##############################LORA##########################################################

    #LORA STUFF

    #Light2X Stuff

    wget -P ~/ComfyUI/models/loras https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/high_noise_model.safetensors

    wget -P ~/ComfyUI/models/loras https://huggingface.co/lightx2v/Wan2.2-Lightning/resolve/main/Wan2.2-T2V-A14B-4steps-lora-rank64-Seko-V1.1/low_noise_model.safetensors

    ##############################TEXT ENCODER##########################################################

    #Text Encoder

    wget -P ~/ComfyUI/models/text_encoders https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors

    gumpbubba721291
    Author
    Aug 31, 2025

    Also just noticed, I have a couple I2V models still loading but not being used from when I was working on the FL2V side of things. You can delete those along with the setters. I'll clean it up for version 1.1 or v2.

    bowiba1265909 · Aug 31, 2025 · 1 reaction

    @gumpbubba721291 Thanks for that. Will tinker around and try it out later tonight.

    bowiba1265909 · Aug 31, 2025

    @gumpbubba721291 I had to download a bunch of nodes and solve some errors, but now I got it to work. Except for one thing, and I hope you can help me out: where do I find that node "AudioTrim"? I thought it was from the "Ryanontheinside" pack, but I tried to install manually and via the manager and it is not there. Google does not find it for me, nor does the manager.

    I tried to just disable audio for now, even tried to unplug a few noodles, but then I always get an error "no loop input" or something like that after it finishes with part "E" and comes back to "For Loop...". Until that happens everything seems to work fine.

    gumpbubba721291
    Author
    Aug 31, 2025

    @bowiba1265909 This is the audio trim node I'm using, also from the RyanOnTheInside pack: https://comfyai.run/documentation/AudioTrim

    I wonder if there is a conflict going on so it didn't download it. If you check your terminal when you restart ComfyUI, it may state if there is an error. Unfortunately, there didn't seem to be a ton of nodes out there for audio manipulation, let alone ones that won't cause conflicts with everything (while working on this, I had one completely crash my ComfyUI, bleh).
    Hmm, maybe I'll make another custom node for a future version if it's causing issues.

    gumpbubba721291
    Author
    Aug 31, 2025

    Also, if there is anything you would suggest to make it more user-friendly, let me know! I've stared at this workflow for hours, so I've gotten used to where everything is, and it's tough for me to tell where the problems are.

    bowiba1265909 · Aug 31, 2025

    @gumpbubba721291 Yes, it does indeed seem to be that node pack giving the issue. Everything else seems fine, but I cannot fix the Ryan node pack no matter what I try. Older versions do not work for me either. So sadly I'm missing that one node, AudioTrim, and am stuck because I have no idea what else to try to fix it.

    gumpbubba721291
    Author
    Aug 31, 2025 · 1 reaction

    @bowiba1265909 Ok, I'll see if I code something for it. Audio manipulation in comfyui is cursed, I'm telling ya 😂

    gumpbubba721291
    Author
    Aug 31, 2025

    Can you try v2.1 out?
    Just like silence generator, simply take the folder for "ComfyUI-gb-vace-custom" and put it into your custom_nodes folder & restart ComfyUI. The Audio Trim node that was having issues should be replaced with "GB Audio Trim" now. 👍
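    If anyone's curious what the replacement node boils down to: trimming is just slicing the sample axis of the AUDIO dict. A rough sketch of the idea (the actual node in the pack may differ; the parameter names here are illustrative):

```python
# Sketch of a trim node over ComfyUI's AUDIO dict - not necessarily the shipped code.
class GBAudioTrim:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "audio": ("AUDIO",),
            "start_seconds": ("FLOAT", {"default": 0.0, "min": 0.0}),
            "duration_seconds": ("FLOAT", {"default": 1.0, "min": 0.0}),
        }}

    RETURN_TYPES = ("AUDIO",)
    FUNCTION = "trim"
    CATEGORY = "audio"

    def trim(self, audio, start_seconds, duration_seconds):
        sr = audio["sample_rate"]
        start = int(start_seconds * sr)
        end = start + int(duration_seconds * sr)
        # waveform is [batch, channels, samples]; slice along the sample axis
        return ({"waveform": audio["waveform"][:, :, start:end], "sample_rate": sr},)
```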

    bowiba1265909 · Aug 31, 2025

    @gumpbubba721291 Yes thank you, I will try it out.

    I tried the 1.0 version in the meantime and it works for me without problems.

    What makes me a bit worried is the error I got in 2.0, "easy forLoopEnd No frames generated", even after deleting all the audio stuff. It does not happen in 1.0, but maybe I was not deleting everything correctly. Anyway, I will let you know once I've tried 2.1.

    gumpbubba721291
    Author
    Aug 31, 2025

    @bowiba1265909 Huh, weird. Like it just never outputs anything? I wonder if you accidentally disconnected the 'value 3' on the forloop while removing stuff, because if you do that anywhere, it will disrupt the video batching and flow.

    bowiba1265909 · Aug 31, 2025

    @gumpbubba721291 I have to admit I am not as experienced a user as you seem to be, but I double-checked and also pulled up v1.0 for a side-by-side view to see if I messed something up or if it is different at all. I do not think so.
    Tried the 2.1 now. The first time, my input had no audio, which resulted in an error. ^_^
    The second time I used a video input with sound, but got this error: "Sample rates of the two audios do not match".

    gumpbubba721291
    Author
    Aug 31, 2025

    @bowiba1265909
    "First time my input had no audio which resulted in an error" - ok, I didn't test with videos without audio. You may have caught me there 😅. Will need to test that.
    "Sample rates of the two audios do not match" - if this is coming from the silence generator, there is a sample rate option on there. If it's not 48000 Hz, chances are it's 44.1 kHz (44100), though 48 kHz is generally pretty standard for video.

    bowiba1265909 · Aug 31, 2025

    @gumpbubba721291 Haha, it was not on purpose. Obviously it makes no sense to use a no-audio clip to test this, but I was working without sound. It might be an idea to build in a switch to disable audio completely in the WF. I will give it another try with a different sample rate.

    gumpbubba721291
    Author
    Aug 31, 2025

    @bowiba1265909 Ahh yeah, I originally had it under a bypass switch, but the problem is that bypassing the nodes still led to issues from the connections. That's why I went with a switch for a true/false path instead. The downside is that ComfyUI isn't smart enough to not run a path that is always false. I'll have to think up some ways to go about it, and maybe automate the sample rate while I'm at it, if possible.
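    If I do automate it, it would probably just be a resample step before the concat - something like this (a sketch using torchaudio, not in the node yet):

```python
import torchaudio.functional as F

def match_sample_rate(audio: dict, target_sr: int) -> dict:
    """Resample a ComfyUI AUDIO dict to target_sr so two clips can be concatenated."""
    if audio["sample_rate"] == target_sr:
        return audio
    wav = F.resample(audio["waveform"], orig_freq=audio["sample_rate"], new_freq=target_sr)
    return {"waveform": wav, "sample_rate": target_sr}
```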

    bowiba1265909 · Aug 31, 2025

    @gumpbubba721291 Oh I did not know that. Well if you can automate that would be best of course.

    The input I used is said to have 22050 Hz, but the node will not allow that. The node only changes in steps of 1000, so 44100 is not even possible either. I will try with 44k now and a 44100 input video. There is another option on the node called "direction". It is set to "right". Is that right? ^_^

    EDIT: Tried with "left"; it does not change anything, it seems. Do I need to change anything else? The duration seconds in the silence generator is set to "1". I really never did audio work, so I have no clue what I am doing here.

    gumpbubba721291
    Author
    Aug 31, 2025 · 1 reaction

    @bowiba1265909 Oh damn, 22050. OK, I'll have to check out the node and update it. I'll probably have a v2.2 in some hours, after some errands.

    gumpbubba721291
    Author
    Aug 31, 2025 · 1 reaction

    Direction wouldn't have any relevance here - that is where to place the incoming new audio: to the right or left of the main audio you are trying to concat it with.

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    518
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/30/2025
    Updated
    4/28/2026
    Deleted
    -

    Files

    wan22V2VVACEOneClick_v10.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)