
    Yet Another Workflow: easy t2v + i2v

    I've aimed at a user-friendly UI for ComfyUI. There's a balance between complexity and ease of use, and this workflow aims to give you useful controls with clear guidance on what you need to care about. I hope it will be helpful to anyone struggling with quality and the general UI-isms of ComfyUI. I've taken the time to color code and add lots of notes. Please read the notes; I've tried to make them useful!

    This is the workflow I use; it's not aimed at any particular skill level. It's designed to be easy to use and adjust, with some UI concessions and labeling so you can pilot it with less experience, while still being more sophisticated than the official example workflows, which can be easy to break.

    The primary goal with this workflow is to give you a strong foundation for generating either text-to-video (T2V) or image-to-video (I2V) outputs without having to fuss too much. Lightx2\ning is on by default. (It's an accelerator that trades variety for generation speed.)

    The green controls are the stuff you generally want to mess with.

    The secondary goal here is to provide a consistent interface to interact with different samplers.

    Versions

    The "main" workflows (the one's without parenthetical version labels) support the basic ksampler node, but also includes a toggle to enable the ClownsharKSampler sampler and the TripleKSampler once you have some experience and want to mess around.

    I generally recommend the main workflow. It's my daily driver. It offers the most control with the least fuss. Each version has its place, though!

    If you are extremely new to Comfy and Wan, consider using the MoE version. It removes a few nodes and options while providing mostly the same interface with slightly less visual complexity, to help you get acclimated. Once you get comfy with it, step up to the main version for more options.

    Want better edge-case prompt adherence? I've created a version of the workflow that supports the WanVideo nodes. I don't recommend using this one until you're more comfortable with the standard version, as it has increased visual complexity. These nodes work completely differently from the other systems, and I hope to make them more accessible by providing you with the same interface to engage with them. WanVideo tends to produce completely different results, so it can be another interesting thing to explore.

    Want more fluid motion and jiggle? I've also created a Smooth Mix version to support the Smooth Mix checkpoint. What is it? Like Stable Diffusion checkpoints, it merges many LoRAs into the base Wan 2.2 model to create a more opinionated model for making videos. This version follows the recommendations in their official workflow, while offering you the improved YAW UI experience. I like this checkpoint for its detail and motion, but it is also more prone to motion artifacts. It also has some built-in support for anime styles. A self-forcing LoRA (Lightx\ning) is built in, so the sampler options are kept simple for this one. Please note that, due to the additional 80 GB of size, my RunPod template will only include this as an optional download. Also check out the LoRA version, which I find much more useful since you can adjust the strength of the effect.

    Expect an update to the RunPod template to include the new workflows soon after.

    As of version v0.38, I'm doing a revision of this article, so patch notes will be removed for clarity. The changes are noted in the file details section (and in the templates themselves).

    Like it?

    Give it a like! Tag it as a Resource when you use it! Support on Patreon or a tip on Ko-fi are also welcome. Yellow Buzz will go towards promoting awareness here on Civit.

    Need help?

    I like helping people get going with this stuff, so if you want help, message me. If you want extended one-on-one help, there's an option on the Patreon. I'm happy to walk you through the details, answer your questions, and give you some extra tips, tricks, and scripts. I've done this for a few folks; I'll save you money and headaches.

    I've also written an article here on getting it going with my RunPod template. The template will vastly expedite and simplify getting things up and running.

    General Advice

    • Make lots of videos! Post your videos! Don't fuss with the tech! Be smart about how you spend your time with this stuff. It's easy to burn out if you spend more time trying to get things to work than making videos you like. That's really why I'm posting this.

    • Use RunPod. Use the RTX 5090 or the H100 SXM. Use my Wan 2.2 template. If you've not used RunPod before, sign up with my link; we'll both get some free credit. See the article for more.

    • If you use a service like RunPod and you're doing I2V, it can be smart to have your images ready in advance to make sure the server stays busy while you are using it.

    • If you run this outside of RunPod, you'll need to install some custom nodes. To do that, click the "Manager" button at the top of the Comfy interface, then click "Install Missing Custom Nodes". Click "Install" on each one (I recommend going in order); you'll need to wait until each has installed. Don't bother restarting ComfyUI until they are all installed. The RunPod template has them preinstalled. (There's a manual patch for the LTXVFilmGrain node here.) If you'd rather install node packs by hand, there's a small sketch after this list.

    • If the wires bother you, there's a button in the bottom right on the floating UI that will hide them.

    • What is Lightx2\ning? That's just my shorthand for referring to the Lightx2v and Lightning (which is just the Wan 2.2 version) self-forcing LoRAs.

    • I've made it easy to turn off Lightx2\ning as well, if you want to try without, but note that it's much slower! I really only recommend this with the H100 SXM. Do try though, especially with text-to-video! The full Wan 2.2 has some amazing capability.

    • This workflow is set up for .safetensors models, but you can use GGUF if you want to make the node changes.

    • If having the Clownshark/TripleK sampler in the UI is distracting, you can delete the group with no negative consequence. (You could delete the purple mute node for the sampler selection as well.)
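
    As mentioned in the custom-nodes bullet above, the manual route for installing a node pack is just cloning it into custom_nodes and installing its requirements. Here's a minimal sketch, assuming a default ComfyUI folder layout; rgthree-comfy is a real pack this workflow relies on, while any extra URLs you add are placeholders for whatever Comfy reports as missing.

        import subprocess
        from pathlib import Path

        COMFY_ROOT = Path("~/ComfyUI").expanduser()    # adjust to wherever ComfyUI lives
        CUSTOM_NODES = COMFY_ROOT / "custom_nodes"

        repos = [
            "https://github.com/rgthree/rgthree-comfy",   # used by this workflow (switches, muters)
            # "https://github.com/<author>/<node-pack>",  # hypothetical: add whatever Comfy reports missing
        ]

        for url in repos:
            target = CUSTOM_NODES / url.rstrip("/").split("/")[-1]
            if not target.exists():
                subprocess.run(["git", "clone", url, str(target)], check=True)
            requirements = target / "requirements.txt"
            if requirements.exists():
                # install the pack's Python dependencies into the same environment ComfyUI runs in
                subprocess.run(["pip", "install", "-r", str(requirements)], check=True)

        # Restart ComfyUI once everything above has finished, same as with the Manager route.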

    Costs?

    I'm updating the data here to reflect additional testing. In case you are curious, the example videos take around 4.5 / 3 minutes (720x1280). (I don't normally use that resolution when I'm just making stuff and experimenting.) I can generally make nice-looking videos in 1-2 minutes. I'm generally running at either $0.93 or $2.69 per hour with the RTX 5090 or the faster but more expensive H100 SXM; in general, I tend to see between 15 and 68 high-quality videos per hour, so about $0.02 - $0.13 per video (rounding up), with the session startup cost of loading the pod probably adding a cent or so to that. 1 to 2 minutes is probably my sweet spot for generation time, so depending on resolution/scene complexity it's either great or a bit over my ideal, but that's a cost consideration.
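
    If you want to sanity-check the per-video math, it's just the hourly rate divided by videos per hour. A quick sketch using the same figures as above (the 15-68 videos/hour range presumably spans both cards and a mix of resolutions):

        # Rough cost-per-video check using the figures quoted above.
        rates = {"RTX 5090": 0.93, "H100 SXM": 2.69}        # USD per hour on RunPod
        videos_per_hour_low, videos_per_hour_high = 15, 68  # rough observed range

        for card, hourly in rates.items():
            best = hourly / videos_per_hour_high   # cheapest case: many quick videos
            worst = hourly / videos_per_hour_low   # priciest case: slow, complex scenes
            print(f"{card}: ${best:.2f} - ${worst:.2f} per video")

        # Prints roughly $0.01-$0.06 (5090) and $0.04-$0.18 (H100), which brackets the
        # quoted $0.02-$0.13 once pod startup time is folded in and you round up.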

    Troubleshooting

    If a node is missing (bright, thick red outline with a warning when you open the workflow), you can install it by going to Manager > Install Missing Custom Nodes and pressing Install on any of the nodes that show up there.

    If you are getting any errors related to a custom node, it's possible something has changed recently in the software. In these situations, it can be useful to roll that node pack back to the last "stable" build.

    For example, the nightly build of WanVideoWrapper might introduce an error that wasn't there last time. With a workflow open, you can go to Manager > Custom Nodes in Workflow. This will show you all of the custom nodes. If you click Switch Ver, you can see all of the releases. Consider trying the first numbered one at the top of the list.

    If that doesn't work, or there seem to be more significant problems and you are using RunPod, you may have forgotten to select CUDA 12.8. Try restarting the server. If that doesn't work, terminate the pod, and make a new one. This will fix a surprising number of possible issues.

    Longer video generation support?

    One day. Probably.

    I'm always looking for a good solution to this. I've not found one yet that isn't very complex. To talk through the options a bit:

    There are some specialized solutions like Wan Animate and Infinite Talk that achieve longer videos by utilizing other technology to specific ends (remapping motion / making a talking-head video), and while VACE is promising, it's very complex to set up and use and requires multiple steps. There are also techniques that involve making keyframes for your scene and using first/last frame to fill in the actual animations, and you can use interpolation as a post-processing step to blend those clips in a way that can hide seams. Most of this also requires color correction or IPAdapter to keep faces consistent.

    The SVI LoRA is a newer technique. It stabilizes consistency across videos, but lowers the base quality (everything gets less sharp), and scenes become volatile, prone to big changes, even as overall consistency across multiple videos improves. It's not perfect, and it cannot go infinite, but if you're dead set on longer videos, it's a decent technique. It doesn't meet my quality bar; I find the overall drop in fidelity disappointing.

    At the end of the day, it's either a ton of work to make a still-short video, or you've introduced a ton of compromise on what's already a compromise. That's not what I'm selling here.

    I see this as the biggest problem in the AI video space, whether you do this as a hobby, like most of us, or you're a company trying to figure out how to seriously use this stuff commercially. These problems are also not unique to Wan, though they vary from company to company. Extending video is fundamentally a technology problem, so I suspect there's a lot of economic pressure and research effort that will eventually lead to better solutions than "more VRAM", which doesn't scale well.

    To be clear: you can do this now by using the last frame as the first frame; v0.38 adds that capability. You'll generally get 2 or 3 decent extensions, but you take a quality hit each time, and any camera movement or motion may not look consistent between clips. (Using the same seed, sadly, does not ensure consistency.)
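
    If you'd rather chain clips by hand outside the built-in option, the mechanical step is just pulling the final frame of the previous render and feeding it to the I2V image loader. A minimal sketch, assuming the imageio package with its ffmpeg plugin (pip install imageio imageio-ffmpeg); the filenames are placeholders:

        import imageio  # pip install imageio imageio-ffmpeg

        reader = imageio.get_reader("clip_001.mp4")   # hypothetical filename: the clip you just rendered
        last_frame = None
        for frame in reader:                          # stream frames so long clips stay cheap on memory
            last_frame = frame
        reader.close()

        imageio.imwrite("clip_001_last_frame.png", last_frame)
        # Point the I2V image loader at the PNG for the next clip; expect some quality
        # drift with each extension, as noted above.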

    Sound?

    Once it gets much better. Sora 2 and the other private models can do amazing sound, but the available public models create audio that I really dislike. You can certainly add it yourself, if you like it, but I won't officially support it until it improves. LTX-2 can do decent sound and lipsync, but has a lot of issues which I'll cover elsewhere.

    Description

    A minor update that adds support for the WanVideo version of the Triple sampler custom node found in the standard workflow. In addition, there are some minor UI adjustments.


    Comments (49)

    ByteCrafter · Nov 18, 2025

    What VRAM and RAM setup is required for this? It seems a 3090 says it's running out of memory when using it.

    boobkake22
    Author
    Nov 18, 2025

    It's not optimized for low memory, but I am going to work on a version for that. The WanVideo node has block swapping, so it might work for you now? Otherwise, you can try adding a blockswap node to the standard version yourself in the meantime (WanVideoBlockSwap, set to 40, use_non_blocking = true). Failing that, you'll want to add nodes for using quantized GGUF models.

    ByteCrafter · Nov 18, 2025

    @boobkake22 Oh, so this is more of a non-consumer GPU workflow then, as a 3090 has 24GB? Thank you for the quick reply. I shall look into those changes.

    boobkake22
    Author
    Nov 18, 2025

    @ByteCrafter It's easy to add, but if you follow along, I'll post as those updates are made. I rent, so that's my primary build target.

    ByteCrafter · Nov 18, 2025

    @boobkake22 I was able to get it running on the 3090 with very minimal changes, and I have to say it's produced a better video than the one I was currently using. This is far improved; incredible work.

    boobkake22
    Author
    Nov 18, 2025

    @ByteCrafter Nice! Appreciate the kind words!

    ByteCrafter · Nov 23, 2025

    Hey man, would this workflow be able to incorporate S2V, do you think? I only ask because whatever you're doing behind the settings, it's getting even better quality than my 3x sampler workflow was getting.

    boobkake22
    Author
    Nov 23, 2025

    @ByteCrafter It could be added, for sure. I really dislike all of the S2V stuff I've heard so far, so I'll be waiting till the tech improves further before I would add it officially.

    ByteCrafter · Nov 23, 2025

    @boobkake22 I used to use HuMo, but that was for 2.1, and personally I think 2.1 is lacking now compared to 2.2. But 2.2 has native support for S2V without the need for extras like Infinite Talk or HuMo, so the HuMo creators won't support 2.2, from what I have researched.

    ByteCrafter · Nov 28, 2025

    I'm not sure what has broken, but I did an update to Comfy and now I'm getting memory errors again when running this workflow. Do you know if Comfy's most recent update broke stuff again?

    ByteCrafter · Dec 2, 2025

    Can you walk me through what needs to change for GGUF support, please? The most recent ComfyUI update has broken this on the 3090's VRAM, even though it worked before. And you can't just plug in the GGUF, as it seems there is a lot more to it than that.

    boobkake22
    Author
    Dec 4, 2025

    @ByteCrafter Sorry for the delay, I was away for the holiday. At the moment, I don't have an official low-memory support version. I don't use bleeding-edge ComfyUI at the moment, so I'm not able to address this concern immediately. If you want to commission me to work on this, we can figure something out, but I cannot promise a specific timeline otherwise at this moment. Blockswap would be the first thing you want to add, if you want to mess with it yourself.

    ByteCrafter · Dec 25, 2025

    @boobkake22 OK, so I got this working again after an update, but the exact same inputs, put back in and re-run, now do not produce the same quality. I'm stumped about what the issue is, as it's not the workflow. If you have done any Comfy updates yourself, have you noticed any quality changes?

    boobkake22
    Author
    Dec 26, 2025

    @ByteCrafter I'm still using the same pre-Nodes 2.0 version currently, so I cannot report any changes; but... the same workflow with the same nodes and the same models/LoRAs should give the same results.

    ByteCrafter · Dec 27, 2025

    @boobkake22 Unfortunately, for some reason, when running the same file, I'm getting very different results from the original. It's very strange.

    boobkake22
    Author
    Dec 28, 2025

    @ByteCrafter There are a few possibilities there, given how sensitive generation is to change (complete speculation):
    - a node implementation has changed in a way that alters the underlying variables for creating the latent / running the scheduler.
    - a node implementation has changed such that the default arguments have been reset in some way, causing the loading to use new default values (this can be kind of subtle, but usually something is weirdly misconnected when this happens).

    If you can give me the version you're using, I can try to find a template with that version to do a quick test.

    ByteCrafter · Dec 29, 2025

    @boobkake22 Version of what, sorry? I'll get that information when I know which version you want.

    boobkake22
    Author
    Dec 30, 2025

    @ByteCrafter Sorry: Which version numbers of ComfyUI?

    ByteCrafter · Dec 31, 2025

    @boobkake22 I have tried both -
    ComfyUI version: 0.6.0, ComfyUI frontend version: 1.35.9, PyTorch version: 2.9.0+cu130, Python version: 3.13.6.
    and
    ComfyUI version: 0.5.1, ComfyUI frontend version: 1.34.9, PyTorch version: 2.9.1+cu128, Python version: 3.12.10.

    But neither of them now performs the same as they did previously.

    boobkake22
    Author
    Jan 1, 2026

    @ByteCrafter Dunno if this is you, or just a similar situation:
    https://www.reddit.com/r/comfyui/comments/1q111xo/comfyui_update_v060_has_anyone_noticed_slower/

    I'll report the version in my image during the next boot-up (I forget offhand), so you have something to test against.

    ByteCrafter · Jan 1, 2026

    @boobkake22 No, that’s not me. I haven’t noticed much of a change in generation speed, but the output quality has definitely changed. I may need to set up a separate workspace using an older version to see if that fixes it.

    It seems that T2V still works perfectly, but I2V is showing degraded results.

    david469 · Nov 23, 2025

    Longer videos - what if there was a way to define each character and it generates the character. If you like what you get, then it auto-creates a LoRA of that character.

    Then you use those auto-created LoRAs as needed for each scene to define each character in the scene.

    Would that approach work?

    ps - thanks for the work you've done.

    boobkake22
    Author
    Nov 23, 2025

    Hey, David. There are workflows that try to do this for SDXL that I've seen. They are fairly complex multi-stage processes that focus on just creating a basic set of images for a character while aiming at consistency. (I don't have the link handy, but this is a fairly common problem.)

    As far as multiple characters go, this is a problem with character LoRAs in general. They tend to influence ALL characters in a scene when there are multiple. At the cost of complexity, it is possible to do video-to-video processing where you mask the characters and try to convert them using a character LoRA. This is a crude and labor-intensive solution, but it does work.

    Because AI tends to be very random, I don't have a ton of interest in really complex processes. It's rare that I want to take a video and do multiple passes.

    At the end of the day, longer videos are a challenge, though if I read you correctly, you could create a process that works this way. The bigger problem at the moment is that each "step" in the video process, even with a LoRA, often loses fidelity. The last frame of a given sequence is lower quality, and the artifacts of that process continue to compound. While a LoRA for character consistency would help keep the characters looking consistent across videos, there are no great solutions to avoiding the "baked"/"fried" effect that becomes quite noticeable across multiple videos. (Everything will get flatter, texture will disappear, and colors will oversaturate.)

    This will probably get solved, but I haven't seen a solution for it yet.

    david469 · Nov 23, 2025

    @boobkake22 I'm guessing there's some serious effort going on about this. It's a tremendous market opportunity and that means the first to solve it will have a very profitable product.

    That's the lifelong entrepreneur in me - if I had not sworn off starting another company I'd look at this. Do the equivalent of what CURSOR has done for programming AI.

    Anyways, thanks for your answer. I'll stick with short clips for now until someone solves this.

    boobkake22
    Author
    Nov 23, 2025

    @david469 I hear ya. It's a competitive space. There are a lot of avenues to explore there, all of them extremely expensive and time consuming. The core problem with... all of this... ::gestures at AI video:: is this implicit desire for control over what is inherently very chaotic. You see it manifest in a lot of ways: prompt keywords being ignored or confused, even when weighted, or swinging the other way and being entirely over-represented with something like a LoRA.

    There's a lot of opportunity there for sure, AND it's an interesting problem, but there are some fundamentals that are really big technical challenges.

    To get a little philosophical: that's also why AI is a bit of a trap, and I think we see that in the over-valuation in the market. It works well enough that it's easy to extrapolate a very big possibility space for what it could maybe do based on what it can kind of do now, without a more complex appreciation of how it works and the very real challenges in getting it to actually do the imagined things.

    Sora 2 is very state of the art, but even it has some massive coherency problems and, no doubt, costs both arms and both legs to use. Which is to say, even the folks with all the money can only kind-of solve some of the problems.

    It's an interesting space though, and I'm really curious about creatives using these tools to supplement their capabilities rather than the kind of "look what anyone can do"/"slop" approach we're seeing now.

    Pedja · Nov 28, 2025

    I just wanted to mention, in case anyone wants to try it: simple first frame/last frame I2V works great with almost no changes to the normal workflow. I added a new group just duplicating everything from the I2V group (there is just the perfect gap to the left of the I2V group) with the same connections, added another image loader and resizer nodes, and connected the output to the end_frame input in "WanVideo ImageToVideo". Then I just connected the outputs to input 3 in their respective switches and made the new group green so it was automatically added to the fast groups muter, and that's it, it just works!

    boobkake22
    Author
    Dec 4, 2025

    Good tip. I should add support for it officially at some point.

    Valorizando · Dec 6, 2025

    Your work looks amazing, but I'm having a lot of trouble getting it to work. I just installed ComfyUI, installed WAN 2.2 14B, and then installed your nodes through the manager, but there's one that even when I click “update,” it doesn't fix. When I click on another version, there are no options. I even tried installing it manually through GitHub, but nothing works. Missing node: LTXVFilmGrain.

    qek · Dec 7, 2025

    Remove the node, it adds terrible noise

    boobkake22
    Author
    Dec 9, 2025

    It's an easy fix, but the node is not necessary. As I noted in another comment:

    "There's an issue currently with the LTXVideo nodes that add film grain. It's an easy manual fix:

    https://github.com/Lightricks/ComfyUI-LTXVideo/issues/283#issuecomment-3496676441

    Just comment out the one import specified in Jupyter Notebook, and it will behave."

    I obviously disagree with qek, or the node would not be there. The purpose of the node is to add a subtle amount of film grain noise. This can help add a bit of texture to AI videos, which tend to end up overly smooth. I like the effect, but it's QOL, not core to the process.

    10689358 · Dec 20, 2025

    with i2v in the wan workflow, I get:

    AttributeError: 'dict' object has no attribute 'clone'

    in the log, just after model.clone()

    Any suggestions about troubleshooting that?

    boobkake22
    Author
    Dec 20, 2025

    Hmm. That's a new one. I assume you're running locally?

    10689358 · Dec 20, 2025

    @boobkake22 Thanks for the reply. No, on RunPod.

    boobkake22
    Author
    Dec 21, 2025

    @aiim00 To clarify, is this preventing the workflow from running? On reading your question, it's not 100% clear whether you're experiencing something breaking or only observing an error in the log.

    Can you specify which of the template versions you saw the error with? As long as you select a CUDA 12.8 card with my RunPod template, the workflows should just work.

    10689358 · Dec 24, 2025

    @boobkake22 The workflow is failing at node 259 (WanVideo Set LORAs) with that error. I'm using your workflow version 0.37 for Wan and running with a CUDA 12.8 card on Runpod, but I'm not using your runpod template. FWIW I might be misunderstanding, but for doing I2V I'm assuming I disable the T2V folder of nodes and enable the I2V ones. Maybe I have that wrong?

    boobkake22
    Author
    Dec 24, 2025

    @aiim00 The I2V and T2V toggle should make the correct changes for you. That is correct. If you do use my RunPod template, it will reduce the room for error, though it's obviously not required. I can only say it definitely SHOULD work on my template, as that's what I test on. I'd suggest verifying the behavior there, but again, it's not a requirement; it just puts us on the same page.

    One other thought: are you actually using any LoRAs? One difference between the non-WanVideo and WanVideo nodes is that the LoRA loader can be fussier than the standard rgthree equivalent. If you have either missing LoRAs or no LoRAs, you need to clear them or bypass the loader, insofar as I can recall at the moment.

    10689358 · Dec 24, 2025

    @boobkake22 Thanks, I was about to reply and say that I had worked it out. After a lot of trial and error, I discovered that (I think) the initial import of missing nodes caused some rgthree nodes to be replaced with a different package (https://github.com/aining2022/ComfyUI_Swwan). That made the I2V/T2V toggle non-functional. The error I was getting was because I was manually bypassing instead of muting T2V.

    Thanks for the workflow! Would you consider posting the source code for your runpod container on github or elsewhere?

    ChickenNoob · Dec 26, 2025

    Great work. I've been struggling with hand movements lately (maybe because I just use 4 steps in total, lol), but I tested your workflow and it's very good with small movements; I never thought a high-rank lightx2v LoRA with 10 steps would be that good. Will post results soon.

    boobkake22
    Author
    Dec 26, 2025

    Thanks for the kind words. And yeah, I know it's weird to do so many steps with Lightx\ning, but it does work well and is still much faster than without. (Which is also good, but very painful if you're not using a very hefty GPU.) By all means, experiment with turning it down to 8 or 4 as well, but I've preferred the results at more steps, which is why I've left that as the default.

    ChickenNoob · Dec 27, 2025

    @boobkake22 Yes, it took a little longer than usual, but the result is stunning, better than I expected. Thank you, and keep up the great work.

    boobkake22
    Author
    Dec 27, 2025

    @ChickenNoob I've been testing an update for a bit. Still working some details out, but a new version will come before too long with a few extra options.

    fancypants2789872 · Dec 31, 2025

    Your workflow has been great and easy to use as a first-time user! I'm trying to use T2V with your v0.37 as is. I'm noticing that for some scenes that are supposed to be dim (like candlelit), there is a spotlight shone at them. I tried a lot of negative and positive prompts, but none seemed to work. Is there any way to circumvent this lighting? Thanks!

    boobkake22
    Author
    Dec 31, 2025

    A few possibilities. Some LoRAs force that effect because it's trained into the data. So I suspect you're using a LoRA that's encouraging that behavior? Can you confirm either way? (And thanks for the kind words!)

    fancypants2789872 · Dec 31, 2025

    @boobkake22 I ended up figuring it out. A little silly. I think it had to be me being over-descriptive with my positive prompt; even when I added negatives at the end of my positive and negative prompts, it didn't have a big effect. It could also be that certain scenes or words trigger it, but I can get around it for now. Thanks!

    RandomA1 · Jan 1, 2026

    Whenever I install the "missing nodes", it keeps asking me to install them again, even though I already applied the install. I'm trying the MoE version. Is this a known issue?

    boobkake22
    Author
    Jan 2, 2026

    Hmm. Missing a lot of information here. If you're using my Runpod template, you should have no issues. If you're running local, there are myriad issues with your installation that could cause problems, depending on the specifics.

    farfromaway · Jan 2, 2026

    Hi, I cannot get the workflow to run. I keep getting: "Cannot execute because a node is missing the class_type property.: Node ID '#193'"


    :(

    boobkake22
    Author
    Jan 2, 2026

    Sorry to hear. I can try and help. Which workflow are you using? Are you running locally or in the RunPod template?

    Noldor130884 · Jan 13, 2026

    Same here but the node is 139. Running locally.

    After a bit of tinkering, I understood that the installation on Windows does not really include all the stuff that needs to be there. I used an LLM to guide me through the process and had to install Python and a few more dependencies to get it working properly. I also had to create a .bat file so Python would work with the installed Windows version...

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    1,587
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/17/2025
    Updated
    5/4/2026
    Deleted
    -

    Files

    yetAnotherWorkflowEasyT2v_v037Wanvideo.zip

    yetAnotherWorkflowEasyT2vI2v_v037Wanvideo.zip