    LTX-2.3 All-In-One workflow for RTX 3060 with 12 GB VRAM + 32 GB RAM - v1.0

    [edit:

    24.04.2026: Update version 4.3 (see version description).

    Minor update and bug fix.

    Thanks to all users for the many inputs over the last days and weeks 🙂

    Attention:

    If you struggle with node conflicts or get errors while running the workflow, please have a look at my short Troubleshooting Guide note in the workflow first. Most important is to update all components successfully! ]

    Special thanks to:

    @ArcleinSK for investigating and solving the FLF issue, as well as pushing the First/Mid/Last Frame option and, last but not least, for sharing fantastic knowledge.

    @boinobin730 for initiating, driving and supporting this project in all kinds of matters, like providing links, running tests, sharing knowledge and inspiring discussions.

    @Urabewe for publishing the original, perfectly running 12 GB VRAM LTX-2.3 workflows on which this workflow is mainly based.

    Features:

    Simple-to-use all-in-one LTX-2 workflow with options for:

    • Text to Video

    • Image to Video

    • First/Last Frame to Video

    • First/Mid/Last Frame to Video

    • Video to Video

    • Text + Audio to Video

    • Image + Audio to Video

    • First/Last Frame + Audio to Video

    • First/Mid/Last Frame + Audio to Video

    • easy switching between all options,

    • all steps highly automated: no manual frame or width/height calculations necessary,

    • inputs easy to set via predefined sliders and aspect ratio inputs (no risk of setting wrong frame counts or wrong width/height values),

    • completely automated resizing and cropping (if necessary) of your input images/videos,

    • brilliant audio generation (speech/sound) with LTX-2.3.

    LTX-2.3 specifications:

    Workflow version v4.3 consistently follows the LTX-2.3 specifications for 16:9/9:16 aspect ratios, including automatic width/height calculations as well as automatic input image/video resizing/cropping.

    In addition, you can now simply choose any other aspect ratio according to your needs while still getting the right width/height values calculated, plus the automatic image/video resize/crop.
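    For anyone curious what such an automatic calculation can look like, here is a minimal sketch in Python. The divisible-by-32 dimensions and the 8n+1 frame rule are assumptions borrowed from common LTX-Video conventions, not values read out of this workflow:

        def snap_dimensions(base_width, aspect_w, aspect_h, multiple=32):
            # Snap width/height for a given aspect ratio to a safe multiple.
            width = round(base_width / multiple) * multiple
            height = round(width * aspect_h / aspect_w / multiple) * multiple
            return width, height

        def snap_frames(seconds, fps=24):
            # Snap a duration to the nearest valid frame count of the form 8*n + 1.
            return 8 * round((seconds * fps - 1) / 8) + 1

        print(snap_dimensions(1280, 16, 9))  # -> (1280, 704) under these assumptions
        print(snap_frames(5))                # -> 121 frames at an assumed 24 fps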

    Requirements:

    • GPU with 12 GB VRAM (some users reported getting it running with 8 GB too),

    • 32 GB RAM,

    • Swap file size: 64 - 128 GB.

    Speed and video length:

    Runs very fast: a 5-second video (1280 x 864) takes less than 10 minutes.

    Generation of long, high-quality videos in one run is possible: 10 - 20 seconds without any issues.

    Test run: a 30-second video (1024 x 704) took around 40 minutes without any OOM errors. Longer videos might be possible, but have not been tested yet.

    Important:

    This workflow is intended for advanced ComfyUI users who know how to install and operate the system and are able to resolve basic system errors themselves, such as node conflicts or general system issues.

    About this workflow:

    This workflow is mainly based on the fantastic LTX-2.3 workflows of @Urabewe.

    As far as I know, those were the first workflows running LTX-2 with 12 GB VRAM. All credit goes to the original creator.

    My job was only to combine and organise the different workflows into a simple-to-use all-in-one design.

    Description

    First "beta" version - should run with all options:

    Text 2 Video

    Image 2 Video

    Video 2 Video

    Image + Audio 2 Video


    Comments (151)

    ashen7106224 · Feb 1, 2026

    I'm buying an RTX 3080. Can anyone please tell me whether you can generate videos and images comfortably with it?

    arkinson
    Author
    Feb 1, 2026

    As described, I use an RTX 3060 and published the results of some very basic tests for a 5-second and a 30-second video. Yes, with 12 GB VRAM (or even less) you can generate videos, but of course this is all close to the limit. You will generate much more "comfortably" with 24, 48 or 128 GB VRAM 😉 So the short answer is: check the prices, check the specifications/benchmarks and decide what you are able or willing to pay.

    1fdp · Feb 1, 2026

    You can, but it will be really slow. I have a 4080S w/ 16gb vram & 128gb of ram and it takes quite some time to generate a few seconds at a good framerate & resolution (~20-30 minutes)

    mmohrandroid954 · Feb 1, 2026

    That's an awesome workflow, thank you! Ubuntu 25.10, Nvidia 590, RTX 5060 Ti 16 GB, 32 GB RAM

    arkinson
    Author
    Feb 1, 2026

    Thank you so much for your feedback 🙂

    Urabewe · Feb 1, 2026

    Awesome! I'll have to check this out. Thanks for the mention!

    arkinson
    Author
    Feb 2, 2026

    @Urabewe Hi - thank you so much for stopping by here. Your workflows are awesome 🙂

    I had never thought that LTX-2 would run with 12 GB VRAM. I had tried the template workflows some time ago and had to give up. I didn't even get the basics running...

    @boinobin730 mentioned your workflows and sent me the link. Incredible - your model combinations and all workflows worked out of the box. We just had to add the Image2Vid-Adapter LoRA to bypass the no-motion issue. LTX-2 is really a lot of fun, and I hope more users will give it a try 🙂

    arkinson
    Author
    Feb 2, 2026

    @DerDaAgropesca Hi - thank you for your feedback 🙂

    jefharris · Feb 3, 2026

    Great-looking workflow, nice and clean. I can't seem to get the taeltx_2.safetensors VAE to load; I put it in the vae_approx folder but the node doesn't see it.

    arkinson
    Author
    Feb 3, 2026

    @jefharris Are you using your own model folder via extra_model_paths.yaml? If so, please move the vae_approx folder to the standard ComfyUI path as a test.

    jefharris · Feb 3, 2026

    @arkinson Ended up just using whatever VAE. Great workflow, works great!

    Docuei · Feb 4, 2026

    It also works if you create a symlink of the vae folder and name it vae_approx.
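    For reference, a minimal sketch of that symlink workaround in Python (the paths are placeholders for your own install; on Linux/macOS the shell equivalent is ln -s):

        import os

        src = "/path/to/ComfyUI/models/vae"          # folder that actually holds taeltx_2.safetensors
        dst = "/path/to/ComfyUI/models/vae_approx"   # folder name the loader node looks for

        if not os.path.exists(dst):  # only create the link if the name is still free
            os.symlink(src, dst, target_is_directory=True)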

    Docuei · Feb 4, 2026

    I'm now generating thanks to it, after getting the same error.

    arkinson
    Author
    Feb 4, 2026

    @jefharris What does "using whatever VAE" mean??? Please give an understandable description of what you have done. If there is a real issue, I will try to find a solution or publish a FAQ.

    arkinson
    Author
    Feb 4, 2026

    @Docuei Thank you for the hint with the symlink 👍 Was there no way to configure the path in the extra_model_paths.yaml file? I haven't had time to test it yet.

    derrickfanpov165 · Feb 4, 2026

    Nice workflow~ I am using this on my 4070 with 12 GB VRAM and 32 GB RAM and the results are very good. Thank you so much.

    arkinson
    Author
    Feb 4, 2026

    @derrickfanpov165 Hi - thank you so much for your feedback 🙂

    HadoukenTS · Feb 5, 2026

    Very good! :-)

    arkinson
    Author
    Feb 5, 2026

    Thank you very much! 🙂

    NRubric · Feb 6, 2026

    Most of the videos produce loud "music" but I only want sound effects.
    How can I fix that?

    arkinson
    Author
    Feb 6, 2026

    Which subflow/option??? Prompt?

    assmet · Feb 7, 2026

    Please do A+V2V which replaces lipsync only.

    arkinson
    Author
    Feb 7, 2026

    @assmet Could you provide a working example workflow?

    fakolonya · Feb 7, 2026

    I never thought I could make a video using LTX-2 with my 12 GB VRAM and 32 GB RAM. Works fine, decently fast. But most of the time I get a still image and the speech I asked for, with no movement in the image. Maybe my prompts suck, not sure.

    arkinson
    Author
    Feb 7, 2026

    @fakolonya Which workflow option do you use? Is the Image2Vid-Adapter LoRA activated? Search for LTX-2 prompting -> always use "he walks" instead of "he is walking", for example, and/or try to force movements with weights in your prompt (see the example below).

    The LoRA and the right prompting (in combination with the "right" start images) are most important. The problem you describe only occurs very rarely for me.
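    As an illustration only - a hypothetical prompt fragment, assuming the text encode node in use accepts ComfyUI's usual (text:weight) emphasis syntax:

        a man walks through the rain, (long fast strides:1.3), the camera tracks him from the side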

    10709959 · Feb 8, 2026

    Quite a nice workflow; it definitely works well for generating LTX-2 vids in a variety of ways... I am encountering some strange audio distortion when I generate, though. The audio comes out warped, as if it were recorded through a distortion filter. Is there a good way of getting cleaner audio gens with this, or would I need to run the video through a dedicated audio generator instead?

    arkinson
    Author
    Feb 8, 2026

    @Skunkylicious This is strange; I never had such issues. Do you use the right audio VAE?

    10709959 · Feb 8, 2026

    @arkinson Yeah, it's the same one as in the workflow. I did notice that for some reason the actual audio quality was surprisingly dependent on the length of the video clip being generated...it seemed like the quality of the clip improved if it was shorter. Also found that I could work around the distortion issue by essentially running the generated video through an audio foley gen in ComfyUI (also using LTX-2 interestingly), though it still sounds a bit odd. Could just be the limitations of running LTX-2 on consumer hardware too...hmmm.

    arkinson
    Author
    Feb 8, 2026

    @Skunkylicious In my experience, I get pretty good audio results with LTX-2 out of the box. All my examples here are "raw" outputs without any postprocessing.

    Without any detailed information about your settings, hardware, possible RAM/VRAM issues and some examples, it's all just speculation.

    Just a hint: if you like to tweak the audio output, you can try playing with the audio enhancements in the subgraphs.

    10709959 · Feb 9, 2026

    @arkinson Hmmm, will need to give that a try. While I'm at it though, I noticed that your workflow has no spot for a negative prompt. This is a bit of a problem, as I find that I do get anatomical anomalies depending on how I prompt things...would it be at all possible to pop in a negative prompt area somehow? It all gets crushed down to a "Conditioning" Set Node before going to the vid gen subgraph, which then splits into positive and negative...how does that work anyway?

    arkinson
    Author
    Feb 9, 2026

    @Skunkylicious As far as I know, LTX-2 doesn't need a negative prompt, and I have never used one myself. But if you like to test it, you can simply add the negative part.

    SheyMo · Feb 16, 2026

    @Skunkylicious @arkinson I can second that. Sometimes, mostly when I prompt for sound only (like a video in a techno club), the sound is very distorted and doesn't even fit, and it tends to be like that with videos longer than 4 s.

    I don't know if I am allowed to post the sample, because of persons of interest and suggested drug usage :D

    arkinson
    Author
    Feb 16, 2026

    @SheyMo In my limited experience, some "concepts" work very well out of the box with LTX-2 (especially with t2v), while other ideas don't work at all - as usual with any kind of AI, I would say. In my opinion these issues are LTX-2 problems and not workflow related.

    What you could try is: 1. better/special LTX-2 prompting and 2. experimenting with different LoRAs, steps, samplers, etc. But believe me, the last point is a lot of work and, of course, needs much computing resource and time. Together with @boinobin730 I ran a lot of pre-tests "to get the pictures moving" and found some of the most "common" settings.

    Mostly it is much easier to bury your concept and try something completely different.... 😉

    Nevertheless, if someone finds better solutions and is capable of running "serious" side-by-side tests, I would be happy to adopt new ideas here.

    10709959 · Feb 16, 2026

    @arkinson From research I've done, it seems that the duration of video you generate has a direct impact on how distorted audio is, so it seems to be as you say - some things don't work as well out of the box with LTX-2 as we'd like. Can definitely rework things to be a bit more sensible on that score anyway, and as you say, sometimes you just have to bury a concept because for whatever reason the generator just won't cooperate. I've had to axe a few specific I2V gens for that exact reason...

    yajukun · Feb 8, 2026

    Works well, thanks for making this. It's cool to produce videos with integrated audio - neat. I would like to see a first frame/last frame feature in the next version, like the Wan WF (please!).

    arkinson
    Author
    Feb 8, 2026

    @yajukun Thank you for your feedback and buzzing 😋 Yes, audio is a lot of fun - and watching older videos without it is already becoming boring 🙄

    Integrating first/last frame is on my mind and should be realistic/possible. If I remember right, I already saw some workflows with special nodes for the last frame....

    arkinson
    Author
    Feb 9, 2026

    @blargg Hi - thank you for buzzing 😋

    SheyMo · Feb 9, 2026

    Hi,

    I got: Node 'ID #201:183' has no class_type. The workflow may be corrupted or a custom node is missing.: Node ID '#201:183'

    There are no missing nodes or conflicts regarding Comfy.

    Node 201 is T2V, but inside there is no 183, nor a node with a red border.

    The only thing I see is that the following nodes have a red cross:

    - Audio Enhance

    - AudioNormalize

    - LTX Sampling Preview

    Which node is 201:183?

    arkinson
    Author
    Feb 9, 2026

    @SheyMo Do you get the message when you open the workflow or when you run it??? What does "a red cross" mean?

    SheyMo · Feb 9, 2026

    @arkinson I got it when I ran it. But I finally fixed it with ChatGPT. A red cross means a node was not correctly installed, even though I git pulled it several times.

    - Audio Enhance

    - AudioNormalize


    were part of AudioTools, which worked after I installed the missing dependencies (soundfile, resampy, librosa).

    - LTX Sampling Preview

    was part of KJNodes, which was very outdated :D I had 1.2.1 installed and got this fixed after updating to 1.2.9.

    After this I was still not able to generate, as the GGUF loader crashed with an "Unpin Memory" error message. So I git pulled ComfyUI-GGUF as well and, tada, I was able to generate a video :)

    But video quality is not really good; maybe I'm missing some settings here, at least something is not right.

    arkinson
    Author
    Feb 9, 2026

    @SheyMo Thank you for posting your solution. It is strange that you had to install the nodes manually. Which ComfyUI do you use? Was there no way to fix it with the manager???

    Poor video quality? There is really nothing to set other than the highest resolution.

    SheyMo · Feb 10, 2026

    @arkinson I'm using ComfyUI Desktop, updated to the latest version. I first installed all missing nodes via the "Install missing nodes" function, and everything was green with no missing nodes displayed. No missing nodes were displayed in the Manager anymore either. The only thing I noticed was that after every restart it recognized them as missing again, but seconds later switched to green and showed no missing nodes anymore.

    For the next video the quality was better; I don't know what happened before. But it's nice to see that LTX is capable of generating German speech :)

    arkinson
    Author
    Feb 10, 2026

    @SheyMo Thank you so much for your information, especially about ComfyUI Desktop 👍 I have seen a lot of ComfyUI issues from different users (mostly related to my other workflows) since the last major ComfyUI updates.

    For myself, I switched from ComfyUI Desktop to ComfyUI-Easy-Install because of the many problems (even like you described) with the new manager in the Desktop version. Even the latest Easy-Install versions still provide the "classic" manager. In my experience, this is much more useful for handling node conflicts and installation issues, and there are some other advantages to using it, especially for video generation.

    Yes, speech generation is a lot of fun. German works perfectly, even "all kinds of most exotic" 🙄 languages. I did a small test here, just for fun 🤣

    Vixxii2780 · Feb 11, 2026

    This thing is amazing in how fast it makes the video. One question though: is there any way to disable sound generation? I like having the option of using the sound, or possibly getting a faster gen by not having sound. I tried to simply bypass the audio VAE, but it told me that wouldn't work.

    arkinson
    Author
    Feb 11, 2026

    Oh my - we just got sound ON and you want to switch it OFF again 🤣🙂

    Short answer: No and never! 😉😂

    Long answer: I would say LTX-2 is made to generate synchronised speech and video, and some kinds of sound, out of the box. You might have a look into the subgraphs and try to disable/remove all sound-related parts, but I'm not sure if that will work, nor do I know if it will run faster that way - but you might try it 🙂

    RBo7X_472 · Feb 12, 2026

    Had to fight with some extension conflicts, but that was because of things I already had installed, not conflicts with the extensions this workflow uses. After that, this workflow is really nice. I've only been using t2v and i2v, but they're the best results I've gotten so far. I have a 4070 12 GB for now, and I did get an out-of-VRAM crash when trying to do a 30-second video, though 20 seconds worked. Maybe some day a bigger card will cost less than a kidney :)
    You wouldn't happen to have one of these for a Wan2.2 workflow, would you? :)

    RBo7X_472 · Feb 12, 2026

    Why yes, yes they do have one for Wan.

    arkinson
    Author
    Feb 12, 2026

    Hi, thank you for your feedback 🙂 For longer clips, reduce the resolution. Have a look at my 30-second test in the description here as an example.

    saptansumitra2001784 · Feb 13, 2026

    Hiii... I want to use Video to Video, and my specs are an RTX 3060 12 GB and 16 GB RAM. Watching your sample video, I noticed some wobbling in the background and some other noticeable artifacts. I don't want my video to be as long as 20 or 30 s; a max of 6-8 will do. But all I want is consistent and detailed output. Is that possible???

    arkinson
    Author
    Feb 13, 2026

    @saptansumitra2001784 Hi - this one is v2v, for example. All generation options work very well. Short clips are no problem, even at higher resolution.

    Uhh I see, you have 16 GB RAM only. But I would bet it will work too. Use a large swap file as described. Start with short t2v clips first and see what you can get.

    v2v: Keep in mind you need a start video with audio which you can expand. This leads to slightly longer videos in the end.

    Video quality: with 12 GB VRAM we are at the lowest limit of what is feasible. Don't expect Hollywood quality. It mostly depends on your start image/video and your prompt, and of course on good luck. Or in other words: you often have to run a couple of generations to get one good shot. And yes, with low-end hardware all this needs a lot of time.

    153628 · Feb 15, 2026

    Some nodes can't be found.

    boinobin730 · Feb 16, 2026

    Which nodes are not working?

    arkinson
    Author
    Feb 17, 2026

    @boinobin730 I would like to continue our mostly LTX-2 related discussion here on the right model page. (For all others, you might have a look at the previous "chat" here - but please have mercy, it's not all LTX-2 related 🙄)

    boinobin730: "Must be early for you." No - late 😉🙂

    Enhancing output quality: If you remember, I already tried upscaling and framerate multiplying during the pre-tests. The result was: generating at lower resolution often ruins the quality, and "expanding" bad quality makes no sense. So the best way is to generate at the highest resolution possible with 12 GB VRAM. Upscaling/multiplying this "high" resolution output makes no sense either, because it needs endless time and endless RAM (not VRAM). If someone has a better GPU/more VRAM: just increase the generation resolution and, if possible, double the framerate.

    Because there are often questions here about LTX-2 quality, according to our own tests/previous discussions and your previously linked article, I would generally recommend the following to enhance quality:

    - generate at the highest resolution possible (with a high-performance GPU you might try to double the framerate too),

    - most important: use LTX-2-conform prompting!!!,

    - play with video length (longer clips may run into quality issues),

    - don't try to force the model too much - if you can't get your idea (prompt/start image/video) to work, try something completely different,

    - try t2v: with the right prompting this seems the easiest way to get good results in a short time,

    - try landscape format (not tested myself yet),

    - if you have sufficient computing power and time for larger test runs: try other LoRAs (camera-guiding LoRAs), steps, samplers, etc.....

    Uhh, long story, but I hope this will help some others too 🙂

    boinobin730 · Feb 18, 2026

    @arkinson My bad, I didn't read this thread. I don't know why it didn't come up as a message. Never mind.

    Yes, I agree, in regard to my testing: it won't really upscale, even when I tried to put it through SeedVR2. I did feed the animation into your Wan workflow and, similarly, it took a long time and the result was marginal at best. Yes, prompting makes a world of difference. The Civitai link I sent you regarding the prompt is very useful; outputs seem a lot richer. He has now made a version 1.5, which I am testing now.

    arkinson
    Author
    Feb 18, 2026

    @boinobin730 ".....I just installed this" 🤣 https://civitai.com/models/2400306/ltx-2-easy-prompt-by-lora-daddy

    Looks very interesting. I just tried to get it running, but my ComfyUI is not able to download the Hugging Face model automatically. I get the error: "We couldn't connect to 'https://huggingface.co'....". Did you install the model manually??

    Btw. yesterday, after several weeks of struggling, I managed to solve my horrible issue of not being able to drag and drop workflows/images into ComfyUI anymore. Out of sheer desperation, I simply deactivated all custom nodes - and bingo, it worked again. Some hours later, after systematically enabling/disabling all nodes, I found the suspect: the "Slider Sidebar" node is no longer maintained and caused the strange behaviour after the major ComfyUI updates. I would never have thought that a node could cause such strange behaviour.... 🙄

    arkinson
    Author
    Feb 18, 2026

    @boinobin730 Uhh - our posts just overlapped... Yes, I have the model installation issue with version 1.5.

    boinobin730 · Feb 18, 2026

    @arkinson For some reason, his GitHub repo is not being recognized as a proper ComfyUI node. I don't know the ins and outs of how you submit, but yes, I installed it manually. I suppose the caveat is install at your own risk, etc. It has definitely livened up the outputs when I used it for SFW t2v. I'm going to try his new i2v update with some NSFW stuff and see what happens. The LoRA side of LTX-2 is a little sparse atm because it's too early, but I guess in time it will grow.

    Yeah, I know how frustrating ComfyUI can be. I updated ComfyUI a few days ago and it stuffed it up; nothing worked. Same story as you: disable all nodes, then slowly add them back in till you find the culprit. At least I am getting better with ComfyUI....

    boinobin730 · Feb 18, 2026

    @arkinson His 1.5 seems buggy on my end. I will wait till he sorts it out. The 1.0 model worked fine; it didn't touch on the Qwen reference. As I said before, I just grabbed the nodes and put them into your LTX-2 workflow for t2v, and the outputs were better than a basic description. LTX-2 seems to hallucinate a lot (see the girl entering the car video clip); I guess it's because we are using the low-VRAM model.

    arkinson
    Author
    Feb 18, 2026

    @boinobin730 I just forgot to run the requirements after git clone. Now it is working and has loaded the model automatically.

    boinobin730 · Feb 19, 2026

    @arkinson It still didn't work for me. I ended up using ChatGPT to work through his code, and it found a type mismatch in the file LTX2VisionEasyPromptLD.py:

    def describe(self, image, model, offline_mode, local_path):
        hf_id = MODEL_OPTIONS[model]

    Now it downloads the Qwen models.

    Not sure if you got that error; he might fix it for good now.

    arkinson
    Author
    Feb 19, 2026

    @boinobin730 OK, just to specify what I have done:

    I tested the LTX2EasyPrompt-LD node from the v1.5 t2v workflow according to your link, and I only used this single node, without the additional Vision Describe node, in my t2v workflow:

    - there was no manager installation available for the node, so I did the manual install:

    - git clone, ComfyUI restart and first online-mode run -> Hugging Face model download error,

    - running requirements.txt -> restart, first run -> the selected low-VRAM 3B Llama model loaded now,

    - setting up the local path to the loaded 3B Llama model -> running offline mode -> OK, it is working now.

    I did a couple of test runs overnight (all with the same simple prompt). The results are very mixed - from useless to amazing/unexpected. NSFW prompting seems to work well. The prompts generated are certainly very interesting.

    Unfortunately, I have found no solution for quick prompt-generation testing yet. If I use a simple test workflow (LTX2EasyPrompt-LD + text/prompt output node), nothing happens after clicking Run (no error message - nothing). This is very strange 🙄

    boinobin730 · Feb 19, 2026

    @arkinson Thanks for detailing your method. I did very similar steps to you. 1.0 worked fairly well straight out of the box, but 1.5 i2v seems buggy; in particular, it is the Vision node. After mucking around with the code, I got the vision model to work, but only the 3B version, since the 7B gives me OOM. The 3B is SFW, unfortunately. I am not really convinced I need to use the vision model anyway, as all it does is describe what I created for the i2v. I also have a suspicion it makes the output worse.

    In the end I just took the 1.5 prompt node and put it into your LTX-2 workflow, and I get some interesting outputs. I think it's a nice addition to a workflow, but the results are variable, real-life physics is never respected, and prompts are not followed either. Some outputs are crazy fever dreams, even though I was fairly specific and it was SFW. I am going to keep testing to see what is consistent.

    Edit: this morning I noticed that LORADADDY has redone the nodes again. So I am going to test the new changes and see; hopefully it works better now.

    arkinson
    Author
    Feb 19, 2026

    @boinobin730 I'm a little bit confused. When I check the Civitai website, the only difference between v1.0 and v1.5 seems to be that v1.5 also contains an i2v node - right? But if I understand you correctly, the t2v node was also updated with v1.5???

    There was just an update to the py files on GitHub an hour ago - do you mean that this is an updated t2v node too??

    It is a pity that there is no manager installation available yet.

    As mentioned, I have only used the LTX2EasyPrompt-LD node so far, and it worked without any issues for me. When you mention some "buggy" behaviour, do you mean the optional "Vision Describe" node???

    Can you please briefly explain what the "Vision Describe" node actually does? Is it for i2v??? Sorry, I have only had a look at the t2v workflow yet.

    boinobin730 · Feb 19, 2026

    @arkinson No problemo. As far as I understand, the Vision Describe node will describe the scene using 1 of 2 LLM models. It is part of his i2v workflow. Previous iterations were buggy; as of this morning, the py files seem to have been corrected, both for the Vision Describe and the prompt node. It would make sense that the Vision Describe node is only useful in an i2v case, as t2v wouldn't need it at all. So yes, for your purposes, if you enjoy t2v a lot more, then just utilize the Prompt Lora in your t2v workflow. I am not using his workflow at all now. I only used it so that I could work out how it wires up, and then just popped it into your workflow - for t2v, and using both nodes (image vision and prompt) for i2v. (I haven't tried it with image + audio or v2v; that may not work.)

    After some more testing, I am working out the best use cases for the prompt node. It is a very good addition to any workflow IMHO. I made some very quick NSFW videos with a few images I had. I am surprised at how decent some of the quality is, especially getting the general vibe and the NSFW language coming out. It's obviously not top-tier porn level, but it generates fast!! and is versatile, if that's what floats your boat.

    I will post my examples to the gallery again.

    Posted examples now. No cherry picks, 4 different runs. I turned off the oral lora on 1 generation because I just wanted her to speak while holding the dick. Sometimes she is not speaking; I am thinking it's because the prompt doesn't actually say she says "...........". It would be cool to run the prompt, then allow for adjustments in the prompt, then hit continue, so that the excellent prompt can be tweaked.

    boinobin730 · Feb 19, 2026

    @arkinson To add: yes, 1.5 also updated the prompt node. It is better in different ways.

    arkinson
    Author
    Feb 19, 2026

    @boinobin730 OK, the "Vision Describe" node is only needed for i2v, right? I'm not on i2v yet.

    Still, I feel like we're talking about completely different things. Maybe there's something I'm not understanding.

    I copied only the LTX2EasyPrompt-LD node from the v1.5 workflow and inserted it into my workflow. It works perfectly as described, with no bugs.

    When you talk about the prompt node, I assume you mean the LTX2EasyPrompt-LD node? What I don't understand at all, however, is that you talk about a "Prompt Lora". Have I overlooked something???

    As you can see, I'm still completely at the beginning 🙂

    boinobin730 · Feb 20, 2026

    @arkinson Prompt node is exactly what it is. Sorry, that was a slip of the tongue; my brain was firing too quickly, I think. Yes, the prompt node has changed over time. There is no LoRA involved here. Yes, the prompt node is the star performer; I am not sure if the vision node does anything substantial.

    arkinson
    Author
    Feb 20, 2026

    @boinobin730 Hi - thank you. Yesterday I thought I was sailing on another boat 🙄

    I dived a little bit deeper now and had a look at the i2v workflow too.

    OK, as far as I get it now, the LTX2EasyPrompt-LD node generates the final prompt, while the "Vision Describe" node just grabs the information from the provided image in an i2v workflow.

    After some struggling around and editing LTX2VisionEasyPromptLD.py, the node loaded the following model into the Hugging Face path: "models--huihui-ai--Qwen2.5-VL-3B-Instruct-abliterated". Unfortunately, I get an OOM error even with this small model, so at the moment I have no chance to test i2v. Please, could you check if you use the same model?

    Btw. your idea to change the model instructions in LTX2EasyPromptLD.py to use it for t2i is not bad. Have you done any tests yet?

    boinobin730 · Feb 20, 2026

    @arkinson Are you using his workflow or your workflow when you are doing i2v? I found his workflow doesn't handle the VRAM demands as well as your workflow, so I have been just using the new nodes in your workflow. Ignore my comments before about fixing the code: the OP changed the code in the repo again just 7 hours before I typed this, so I need to test again myself.

    I had the craziest experience with the old nodes yesterday for i2v. When using just the easy prompt node in a modified Arkinson workflow, I was getting speech but her mouth wasn't moving, almost like she was just thinking out loud. I took the output directly from the easy prompt node and fed it straight into your original Arkinson workflow, and she actually spoke the words. Same seed, same resolution, same generation length. I then wired up his nodes again and got the same problem: voice speaking but no vocalisation. I then cleared out the nodes again, expecting to see her speak normally in your workflow, but for some strange reason the problem was still occurring, even with a base Arkinson workflow. I was hearing the words but her mouth wasn't moving. I gave up then and went to bed.

    I will try his new updated nodes again, but if I hit more stumbling blocks I might just keep the workflow separate and use the easy node only to generate a prompt.

    arkinson
    Author
    Feb 20, 2026

    @boinobin730 I always use my workflow. I just connect the prompt output (LTX2EasyPrompt-LD) to the text input (the existing CLIP Text Encode node). I have installed the latest git files (some hours ago).

    I would like to ask you again to check whether your "Vision Describe" node downloaded the same Qwen model as mine: "models--huihui-ai--Qwen2.5-VL-3B-Instruct-abliterated". Because it is strange that you did not get the OOM error. I get the error right at the start, as soon as the Qwen model loads.

    Btw. I had to change the code even with the latest release.

    boinobin730 · Feb 20, 2026

    @arkinson Sorry, I didn't reply to your question. No, I didn't get OOM errors using "models--huihui-ai--Qwen2.5-VL-3B-Instruct-abliterated". I do have 64 GB RAM, though, so that might have been it.

    boinobin730 · Feb 20, 2026

    @arkinson I can test your version of the workflow if you like, as an experiment. If it works on mine, then it is probably a RAM issue.

    arkinson
    Author
    Feb 20, 2026

    @boinobin730 Thank you for confirming the model. No, it is a VRAM OOM. The strange thing is, ComfyUI seems not to manage the RAM/VRAM here, because there is no swap memory in use.

    If I remember right, the first online run with model downloading was successful, but I skipped the generation process and repeated it in offline mode. Since then I get the OOM error. OK, I will delete the model path and start a completely new run....

    I still use my published workflow, with just the two LTX nodes added, so this should be no workflow issue. I haven't changed anything in the start bat files either.

    arkinson
    Author
    Feb 20, 2026

    @boinobin730 Nope. I get the same OOM even after reinstalling the model files. Maybe it is caused by the latest git version? It would be interesting to know if you get the error too after updating.

    boinobin730 · Feb 20, 2026

    @arkinson Yes, I can confirm I am getting an error in offline mode for the Vision node. In online mode it works OK. The error was not an OOM error, though. I haven't tried the new updated files yet. I am still working out why I only get speech without lipsync.

    arkinson
    Author
    Feb 20, 2026

    @boinobin730 I sometimes have the lipsync issue with i2v too (even without the easy nodes). I would say it depends mostly on the start image and the prompt. Did you try very different images?

    boinobin730 · Feb 21, 2026

    @arkinson No, I didn't, initially. I kept trying the same picture, but I definitely had the character speaking in the basic workflow. I ended up changing the picture, stripping out any details regarding movement, and used an image of a woman standing up. She did not lip sync. I got so frustrated that I deleted the workflow, unzipped a fresh copy, fixed up the file locations, and it worked. I am gobsmacked at how crazy that exercise became. I have now put the new easy prompt and vision nodes into your workflow, and surprisingly it is behaving itself: the girl is talking and moving at the same time. The only difference is that I now have only 1 workflow tab open, not the plethora of workflow tabs I usually have open.

    In terms of the error message, I definitely get this:

    LTX2VisionDescribe

    expected str, bytes or os.PathLike object, not NoneType

    when trying to run the vision node in offline mode. So I switched it to offline_mode False and it is processing. Here is an example of the vision it saw:

    Style: photorealistic. This image features a white woman with light skin, short blonde hair styled in a sleek, straight cut. She is lying on her side, with her hand resting on her cheek, showcasing a smooth, well-defined neck and shoulders. Her large, full breasts are prominently visible, and she has a slender, curvy body. The background includes a soft, warm-toned lamp and a blurred interior setting, suggesting a cozy, indoor environment. The lighting is soft and even, highlighting the natural beauty and texture of her skin and hair.

    I need to keep testing, as I think that using the vision node may disrupt the prompt node's ability to let the character speak.

    Hope this helps. I am going to keep testing. I know the easy prompt node is good. I also wired up a save-the-prompt-to-a-file node so that I can capture some great prompts and tweak them a bit for added variety. The dialogue is terrific and definitely NSFW, if your use case needs that.

    arkinson
    Author
    Feb 21, 2026

    @boinobin730 Thank you for the explanations.

    Your error "expected str, bytes or os.PathLike object, not NoneType" points to an incorrect model path being entered. NoneType just means it got no path information.
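    A minimal sketch of what is probably happening under the hood (the names are illustrative; local_path stands for the node's offline path field):

        import os

        local_path = None      # an empty path widget typically arrives as None
        os.fspath(local_path)  # TypeError: expected str, bytes or os.PathLike object, not NoneType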

    [edit: see my last comment first]

    Regarding my VRAM problem: I assume you are still not on the latest GitHub version? Or do you use any additional commands in the start bat file, or did you change any VRAM management settings in ComfyUI?

    Please could you check whether the Vision node uses any swap file memory when you start the workflow? Because, as mentioned, in my case VRAM usage quickly runs up to 12 GB and stops with an OOM, without any memory management and without any swap file memory used. Very strange 🙄

    arkinson
    Author
    Feb 21, 2026

    @boinobin730 Oh my, sorry for asking so much about the OOM errors. I just realised that my complete ComfyUI system seems to be corrupted. At the moment I can't even run a simple t2v generation without OOM errors.

    I don't know the reason why yet. Maybe the latest ComfyUI update (I am on v0.12.3)? But I did a lot of custom node installations/deinstallations over the last few days while testing different prompt/workflow/LoRA managers.

    Please be patient; I will be back with the easy nodes once I have solved my issue.

    boinobin730 · Feb 21, 2026

    @arkinson No worries, sorry to hear about the Comfy problems. It always cracks itself at the most inopportune moment. Thanks for the hint about the path issue for the vision node; I will check it out. I was testing more, and I think the vision node is messing up the execution of the prompt. As soon as I remove the vision node, I get a much better result of speech and lip sync, as opposed to voiceover-type output. So I am not going to use the vision node. Good luck with the Comfy fix.

    arkinson
    Author
    Feb 21, 2026

    @boinobin730 Argh, this was cruel! After the OOM errors I redownloaded my published workflow and got more and more errors about missing nodes, installed-but-not-working nodes and strange error messages.... 🙄

    Finally, I did a completely fresh installation of ComfyUI-Easy-Install and I'm back again - LTX is working as it should now 🙂 Thank God, the installation process with Easy-Install is fast and really easy!

    I will install and test the latest EasyPrompt version now. Then I will reconfigure my whole node system - hopefully.

    boinobin730 · Feb 21, 2026

    @arkinson ComfyUI troubleshooting rabbit holes... Glad you got out of it. I definitely agree with you now on Easy-Install. I will never go back to the Stability Matrix ComfyUI nightmare.

    boinobin730 · Feb 21, 2026

    @arkinson I have a question. Is there a node that allows you to save, say, the last 5 screenshots of the LTX-2 video? I am finding that the very last shot is almost always too motion-blurred to be usable. So, with the ability to continue the video, we could select a better constructed shot, which we could upscale and fix and then feed back in for another LTX-2 video.

    arkinson
    Author
    Feb 21, 2026

    @boinobin730 Sorry, I don't have all my nodes running yet, so I can't give you a specific answer. But use the search in the manager for "frame selector" and try "ComfyUI-LNL", or google for comfyui frame selector. Maybe you will need some simple math too, like: total frames - 4 frames = frame to select (see the sketch below).
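    To illustrate that math, a minimal sketch (the names are hypothetical; images stands for the [frames, height, width, channels] batch a ComfyUI video node outputs):

        def select_near_last(images, offset=4):
            # Pick the frame `offset` steps before the last one, clamped for short clips.
            total = images.shape[0]
            index = max(total - 1 - offset, 0)
            return images[index:index + 1]  # keep the batch dimension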

    I have a new issue with LTX2VisionDescribe: "[VisionDescribe] Missing: qwen-vl-utils. Fix: pip install qwen-vl-utils then restart ComfyUI". The funny thing is, qwen-vl-utils is already installed. Sometimes I really hate all this wild stuff 🤣

    boinobin730 · Feb 21, 2026

    @arkinson Thank you. https://github.com/asteriafilmco/ComfyUI-LNL is exactly what I need. Yeah, that vision node is a pain at times.

    arkinson
    Author
    Feb 21, 2026

    @boinobin730 I tested i2v with the EasyPrompt node only, too. What worked very well in a few quick tests is the following prompt style:

    1. one short sentence with the action, and a second short sentence with

    2. "the video starts with" + a brief description of the start image (look here; see the example below).

    And probably it would be better to write the second sentence first and add the first one at the end 🙄 I will test this tomorrow.
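    A hypothetical example following that pattern (my own illustration, not one of the tested prompts):

        The video starts with a woman standing at a kitchen counter in warm morning light. She picks up a red mug and takes a slow sip.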

    boinobin730 · Feb 22, 2026

    @arkinson Gotcha. I will try different prompt styles. The easy node by itself seems to present no problems. Possibly the vision model is putting too much load on the model and it reverts to something safe, such as producing the voice but not having enough VRAM to do the lip sync. I don't know; I am just guessing. I have been away from the PC the whole day, and I think I will be short on time over the next weeks to months.

    arkinson
    Author
    Feb 23, 2026

    @boinobin730 I was too quick to rejoice about the new ComfyUI installation. After experimenting with EasyPrompt and installing some other simple custom nodes, my system seemed to be corrupted again. Got lots of curious error messages again when running the LTX workflow 🙄

    So I took a deep breath and did a fresh ComfyUI installation again, just to see that I get a fu**ing "Cannot read properties of undefined (reading 'output')" when I try to start my last Wan22 v4.0 workflow (fresh ComfyUI and no other nodes installed) 🤬🥵

    The really bad thing is that the workflow doesn't even start, and there is no log entry and no indication that any node is not working. I'm really not sure whether this is actually a ComfyUI/Python/PyTorch version issue or something else 🙄

    Oh my - you might be lucky to be away from any available computer for some weeks or months 🤣🙂

    boinobin730 · Feb 23, 2026

    @arkinson That's crazy. If you rebuilt from scratch, it should be OK? Can you make another new version of your ComfyUI setup and rebuild from there with basic workflows? How is the rest of your PC running? I did a full reinstall a few weeks ago. It certainly helped, as my C: was maxed. When C: is maxed, all sorts of shit falls down. I presume you have lots of drive space.

    I'm kinda semi-retired, so it's more about priorities as such. Real life gets in the way of fun (AI generations and workflows). I will be at the PC, just not doing so many generations for a while.

    arkinson
    Author
    Feb 23, 2026

    @boinobin730 When real life gets in the way of fun 🤣 Yeah, that's horrible 😅

    Actually, I have 3 ComfyUI installations in parallel 🙄 No, it is not a computer problem in general. It seems more a problem of experimenting with huge amounts of nodes on the one hand, and all the frequent ComfyUI updates and outdated nodes on the other.

    Wan22 v4.0 workflow: I just saw that the workflow works on my "oldest" system (ComfyUI v0.12.x). So it is definitely an issue with the latest ComfyUI v0.14.1 and probably one of the nodes. One user reported the same error message some days ago with a Gitpod installation, but was obviously able to fix it after an update (whichever release). I will just wait a little bit before investing too much work in fixing the workflow.

    LTX-2 issues: Emotionally, I would say that this always occurred after attempts to install the "Vision Describe" node, but that is pure speculation yet.

    Lora Manager: this has become one of my most important visual model management tools. The best thing is: with the right settings, the whole database is usable with every ComfyUI installation. Unfortunately, it doesn't support saving images + prompts with wildcards yet - otherwise it would be the perfect prompt manager too.

    boinobin730 · Feb 24, 2026

    @arkinson OK, that's good that it is isolated to a particular ComfyUI environment. I have 2, but 1 is outdated. I should clear it and clone the current one.

    I haven't really looked further into the Lora Manager. I will when I get a free chance. Just busy now.

    I have been playing a little more with LTX-2 i2v. I maxed the resolution of the output and increased the FPS to 30. It took longer to create, but the output looks a lot better for fast movement. I will test some more, but have a look at the bouncing woman in your workflow gallery: her face keeps its shape, her hands are not too distorted, and the movement looks a lot better.

    arkinson
    Author
    Feb 24, 2026

    @boinobin730 Please, could you tell me the ComfyUI, Python and PyTorch versions of your running installation?

    I'm on the latest ComfyUI v0.14.1 now, Python 3.12.10 and PyTorch 2.9.1+cu130. None of my video workflows is running at the moment, and I get more and more reports about issues...

    Lora Manager: To be honest, it is very useful if you play with hundreds or > 1000 LoRAs and large amounts of custom LoRAs. It automatically loads all the sample images, descriptions, metadata, trigger words, etc. from Civitai and displays them in a clean and easy-to-use graphic manner with professional search/filter functions. You can add your own/custom LoRAs, even with all data like sample images. Importing LoRAs into your workflow, including trigger words, is just a mouse click. Creating automations with random LoRAs or LoRA collections is very easy too. And with the image save node you are able to publish all metadata, automatically displayed on Civitai. That all works really great and is mainly bug-free. The only thing I am currently missing is saving sample images with simple-to-reuse wildcard prompts and LoRA data inside Lora Manager...

    arkinson
    Author
    Feb 24, 2026

    @boinobin730 Yes, I saw your 30 fps clip. Hard to say if it is really better. Did you try running higher resolutions instead, too?

    boinobin730 · Feb 24, 2026

    @arkinson No problem. Python version: 3.12.10, PyTorch version: 2.9.1+cu130, ComfyUI version: 0.14.1, ComfyUI frontend version: 1.38.14. Maybe it is a bad node? Think back: what were the last new nodes you installed before the problem started to occur? I hope you sort it out soon.

    Yeah, I'm pretty sure it is better: by comparison, my girl doing her silly Joker dance near the steps gave a worse output with hands and face. Possibly the jumping girl remains really coherent because there is only a simple background to contend with. I will test a little more. It definitely takes longer, though, and it increases the possibility of OOM. Sometimes I need to close Comfy after 2 video generations, as it will get stuck on the 3rd and try to use the CPU.

    boinobin730 · Feb 24, 2026

    @arkinson I had an old version of Lora Manager in my last ComfyUI, before the Easy-Install Comfy. I just installed Lora Manager now, as I was running without it. It looks a lot better than before. We are up to ComfyUI 0.15? I'm too scared to change.....

    arkinson
    Author
    Feb 24, 2026

    @boinobin730 Yes, v0.15.0 works for me so far.

    The LTX issue was solved by the original workflow creator. He changed the simple GGUF loader to a more complex loader by KJ. I really don't understand the difference, nor have I seen an issue with the original loader, but it works now. I have published an updated version of my workflow too. Strangely enough, you didn't have this problem with the same ComfyUI version 🙄 Sometimes this stuff is all a miracle.... OK, the next thing is to get the Wan workflows running again.

    boinobin730 · Feb 24, 2026

    @arkinson Ohhh, I think I know why I didn't have problems. I follow a lot of these creators; I saw the new update, downloaded it and updated immediately - probably almost immediately, before I generated again - so I never saw the error. I think it's to do with the embedding of ltx-2-19b-embeddings_connector_dev_bf16.safetensors into the GGUF loader model. I don't know how it helps. Don't you just love ComfyUI errors and workflow problems....Not...

    Good luck with the WAN fixes.

    arkinson
    Author
    Feb 24, 2026

    @boinobin730 "Don't you just love Comfyui errors and workflow problems....Not..." Nope 😂

    arkinson
    Author
    Feb 26, 2026

    @boinobin730 Hurray! My Wan workflow 5.0 is out and running with the latest ComfyUI 🙂🙃 It was tricky this time and cost me a couple of days and some brainpower 🙄 There was no node conflict, as assumed in the beginning; all components worked properly in separate workflows. Ultimately, it seemed likely that my somewhat convoluted multi-layered switch logic could be the trigger. After various tests and failures, I realised that my combination of bypass and mute switching was obviously preventing the workflow from loading. That's why only the cryptic error window popped up at start-up, without any entries in the console. I then rebuilt the entire switch structure and design. I hope it runs bug-free now. And God knows why it worked with the previous versions of ComfyUI 🤣

    OK, I will be back at LTX now - I just had a short look at your latest clips 😅

    boinobin730 · Feb 26, 2026

    @arkinson The master at work! Good on you, Arkinson. How did you learn ComfyUI? Was it all self-taught? Wan is still pretty good despite the slow processing speed; LTX-2 can't replace it yet. I like LTX-2 because of the fast payoff on generations, especially when we are talking about a nice video length of 15 s vs 5-8 s on Wan. But LTX-2 hallucinates a lot and has the memory of a goldfish in regard to images out of frame. Pros and cons, I guess.

    arkinson
    Author
    Feb 26, 2026

    @boinobin730 Yes indeed, you are right. And maybe it is a good idea to use both. I will try generating first with Wan with simple sound, and then using that in LTX v2v to add some funny talking...

    How did I learn ComfyUI? I came from NMKD -> Automatic1111 -> ComfyUI. OK, I started when SD1.5 was hip and ComfyUI was quite simple (just a handful of custom nodes). I used the very simple SD1.5 template workflow and tried to improve it (very simple steps like adding LoRAs, upscaling, etc.). Because there was (and is) mostly no useful documentation, I just downloaded several workflows, even more complex ones, and simply tried to strip them down to visually more understandable "flow" charts. I believe that's the main part. It is sometimes a lot of stupid work, of course, to move a bunch of nodes just to get a structured flow chart, but through this process you quickly get an overview of the logic behind it, even if you do not understand what every single node means and does - not to mention the settings. And over time you gain more experience, of course.

    boinobin730 · Feb 26, 2026

    @arkinson Thanks for detailing your knowledge path. That makes sense, and it feels like the natural progression for most of us. I think ComfyUI's rapid development and change make it so much harder to learn, especially by any formal means. What was current last year is outdated by the next. Even best practice regarding hand inpainting in Automatic1111 is now totally useless and a waste of time, especially with new model edits for Flux and possibly Qwen. I will go back and test version 5.0 of Wan... I have to bite the bullet and upgrade ComfyUI, probably after I get an RTX 5060 Ti with 16 GB VRAM next week. I can afford that upgrade - not an RTX 5090; prices are beyond insane.

    arkinson
    Author
    Feb 27, 2026

    @boinobin730 Good luck with your new GPU. I assume the RTX 5060 Ti is much faster? Upgrading ComfyUI should be no problem, because you are already on v0.14.1 (I say this without any guarantee or warranty 😅).

    LTX-2 workflow v1.3 is out. My older versions had a bug in the automatic frame calculation for v2v in some special cases. To solve it, I had to dive into ComfyUI's simple if-else logic for the first time.

    In my opinion, the hardest point for AI/ComfyUI beginners is that they have to start in a quite complex environment now. And many of them would like to start with the fancy stuff, like video generation, of course 🙄

    boinobin730 · Feb 28, 2026

    @arkinson Thanks for the updated workflow. I will test it soon. You always amaze me at how fast you can punch out these complicated workflows; I'm still trying to wrap my head around just adding basic nodes. I agree: there is a ton of people who want the payoff of beautiful outputs without truly understanding that there is a lot of knowledge put into the crafting of said outputs, as well as the constant tweaking that goes on to finesse the output, plus all the trial and error involved.

    The RTX 5060 Ti having 16 GB VRAM will help a bit for workflows that are pushing the boundaries; theoretically it should be 50% to 100% faster than the 3060. There are kind of 2 purposes to the GPU upgrade. My son was complaining about his video card but doesn't really want to buy one, so he will get the old card, as he is running an old GTX 1050; a 3060 with 12 GB VRAM is good enough for his gaming needs. I get the benefit of a moderately better card and can look forward to the better technology that ComfyUI is taking advantage of with the 5000 series. Plus, it's not crazy money, only $750 AUD. I will let you know what I think of it.

    arkinson
    Author
    Feb 28, 2026

    @boinobin730 "My son was complaining about his video card." I love the reason why - yeah, the kids make the world go round, especially in the "gaming" age 😂🙂

    boinobin730 · Mar 1, 2026

    @arkinson Fortunately, his tastes in gaming are strategy based, not FPS. And the only strategy game I am against is "League of Legends". I hate that game - too addictive for kids.

    arkinson
    Author
    Mar 1, 2026

    @boinobin730 Uhh - I don't know much about gaming. I just saw a lot of computer upgrading over the years, just to satisfy the "kids" 😂 I remember a story from years ago with friends of mine: they did not understand why the hell their 15-year-old "Personal Computer" did not run the "little" game CD which the grandparents gave their daughter as a birthday present 🤣 There was no point in arguing, because: "It's just a children's game. It has to work!" 🤣🙂

    boinobin730 · Mar 5, 2026

    @arkinson Are you excited for LTX 2.3? It's coming soon!!! Better prompt adherence, better sound, less scratchy noise. It's unknown whether it will use LTX v2 LoRAs or not.

    I've been working on a music video for fun with my new RTX 5060 Ti. I haven't finished yet, as it's so hard to get 3 different people to do different things in LTX v2, so I have to shoot each person separately and combine them in DaVinci Resolve. I will eventually post it.

    The new graphics card is a little faster - maybe 30% faster all up. I haven't tried bigger workflows yet. It's probably not worth the money for the speed alone; it is probably worth it if you have workflows that are just on the cusp of needing 16 GB VRAM. I just need a windfall to afford an RTX 6000.

    Anything interesting on your end?

    arkinson
    Author
    Mar 5, 2026

    @boinobin730 Anything interesting??? Yeah - look here 🤣 I need some sleep, but I will be back soon.

    boinobin730 · Mar 5, 2026

    @arkinson Lol!!! People are already hitting you up. 'Cause you're the man! Get your sleep. LTX 2.3 will still be there.

    boinobin730 · Mar 6, 2026

    @arkinson The fast LTX GGUF workflow guy has made a new workflow for 2.3: https://civitai.com/models/2443867?modelVersionId=2747788 It works OK, but I always prefer your workflow. I got errors initially with his workflow, so I had to upgrade ComfyUI; it is now 0.16.3. I played with his workflow a little bit; it's OK. I think in time it will get better.

    arkinson
    Author
    Mar 6, 2026

    @boinobin730 "I just need to get a windfall to afford an rtx 6000" - yeah, maybe in the next life 😂 but then, I am affraid, we will need an RTX 12000 🙄 Anyway, it sounds good with your new card. And yes, 12 gb is very limited, even for image generation. Did you allready tested Qwen or Flux 2?

    Actually I did a lot of tests with Illustrious, Pony and Z-Image for the NSFW part again, but this is all miles away from Flux.1 dev quality.

    Btw. after several weeks of struggling with a f**king error without any support across the whole internet, I got my Flux LoRA trainer working again. In the end I just spent some hours with Bing AI, and after a lot of "discussions" it was actually able to narrow down the error: some corrupted files in the Hugging Face cache directory, which lives in the Windows user path and not inside ComfyUI 🙄 Absolutely awful. That also explains why it didn't work in any of my freshly set up ComfyUI systems....
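
    For anyone hitting something similar: the huggingface_hub library ships a cache scanner, so you can at least see what is sitting in that hidden directory before deleting anything. A minimal sketch, assuming huggingface_hub is installed (fields as documented for scan_cache_dir):

    ```python
    # Minimal sketch: list what lives in the Hugging Face cache
    # (by default under the user profile, e.g. ~/.cache/huggingface/hub,
    # NOT inside the ComfyUI folder).
    from huggingface_hub import scan_cache_dir

    cache = scan_cache_dir()
    print(f"Total cache size: {cache.size_on_disk / 1e9:.1f} GB")
    for repo in sorted(cache.repos, key=lambda r: r.size_on_disk, reverse=True):
        print(f"{repo.size_on_disk / 1e9:6.2f} GB  {repo.repo_id}")
    # Deleting a suspect repo folder from that directory forces a clean re-download.
    ```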

    Thank you for the fresh LTX-2.3 link. Yesterday I already had a look at his page myself, but probably some minutes/hours too early. This guy is really amazing, and as I saw, there is much more to do there than I tried in my first quick attempts. I will dive in now....

    boinobin730Mar 6, 2026

    @arkinson The card is faster for normal Pony generations, at least double the speed even with an upscale. Flux and Qwen still struggle; only a slight speed improvement of around 30%. With Qwen I still struggle with masked inpainting. It just can't handle it very well. I haven't tried Wan yet.

    Damn, your AI problems sounded like a real pain in the ass. Glad you got it sorted. AI troubleshooting can be very useful. I use ChatGPT for a lot of AI-related things; it enabled me to create reliable character LoRAs. But I wouldn't ever rely solely on it. It can be stupid at times too, and its language and the way it speaks to you is laughable. People make memes out of ChatGPT speech.

    Late last night I tried to just substitute the new models into your old workflow. Somebody said they did that in their particular workflow and it worked, but that didn't work out for me. Something is definitely going on.

    I am eagerly but patiently awaiting your version.

    boinobin730Mar 7, 2026

    @arkinson I tried the 5060 Ti on the newer Wan workflow. A 10-second video only took 15 minutes at 720 resolution, and that included upscaling as well. I went as far as 15 seconds, but that took well over 30 minutes, so it's probably at the limit for this card. 12 seconds took about 20 minutes and 14 seconds about 25 minutes. Very solid performance, as 10 seconds used to take me 30 minutes before. Those extra few seconds can really make a video clip. Wan has become useful again.

    If you are stuck with LTXV 2.3, check out this reddit post: https://www.reddit.com/r/StableDiffusion/comments/1rmhznx/ltx_23_first_impressions_the_good_the_bad_the/

    There is interesting information about the preview now using a new file, taeltx2_3.pth:

    https://github.com/madebyollin/taehv/blob/main/taeltx2_3.pth
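
    If anyone wants to try it before a workflow update lands, something like this should fetch the file into the folder where ComfyUI usually looks for TAE-style preview decoders. An untested sketch; the path assumes a standard install, so adjust as needed:

    ```python
    # Untested sketch: download the new TAE preview model for LTX-2.3 into
    # ComfyUI's vae_approx folder (used when the preview method is set to TAESD).
    import urllib.request
    from pathlib import Path

    url = "https://github.com/madebyollin/taehv/raw/main/taeltx2_3.pth"
    dest = Path("ComfyUI/models/vae_approx/taeltx2_3.pth")  # adjust to your install
    dest.parent.mkdir(parents=True, exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    print(f"Saved {dest} ({dest.stat().st_size / 1e6:.1f} MB)")
    ```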

    arkinson
    Author
    Mar 7, 2026

    @boinobin730 Thank you so much for all the information. It's a pity that Qwen needs so many resources. I haven't tried it again.

    Flux.1 D: I can run 1024 x 720 + many LoRAs, 1 pass and 2x upscaling without any VRAM issues. 25 steps takes up to 3 minutes (<- this is the main problem), and of course I can not run other heavy tasks in parallel on the same machine.

    AI error killing: I love the way AI often makes completely false statements with full conviction. But it is also open to "discussion". And sometimes it's even better than in real life: if you say its claim is completely wrong, at least the AI starts to think, apologises and looks for another solution. An achievement that, unfortunately, cannot be expected from most of our fellow human beings 😆🤣🙂 On the other hand, I am always astonished how well most AI models can handle Python code and error messages. Some time ago I spent a few weeks learning very basic Python programming with the help of AI. This worked much better than I expected.

    LTX-2.3 workflow: Yesterday I was almost done implementing all the new parts, just to see that my workflow had become corrupted. I spent the whole night with cryptic error messages and tasks that wouldn't start. Today I found the reason why: the subgraphs seem to be very sensitive to overlapping Set/Get names inside/outside the subgraphs. Unfortunately you don't get a useful error message if something goes wrong there... I rebuilt the subgraphs and am currently running tests for every part, hopefully catching all the bugs.
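
    To illustrate what seems to bite here (a conceptual sketch only, not ComfyUI's actual implementation): Set/Get nodes behave like a single flat key/value store, so reusing a name inside a subgraph silently clobbers the value from the outer graph instead of raising an error:

    ```python
    # Conceptual sketch: why overlapping Set/Get names across subgraph
    # boundaries fail silently - one flat namespace, no collision check.
    registry = {}

    def set_node(name, value):
        registry[name] = value          # happily overwrites an existing key

    def get_node(name):
        return registry[name]

    set_node("latent", "outer graph latent")
    set_node("latent", "subgraph latent")   # clobbers the outer value, no warning
    print(get_node("latent"))               # -> "subgraph latent"
    ```

    Which is why renaming every Set/Get pair uniquely across the graph and its subgraphs (as done in the rebuild) avoids the problem.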

    Thank you for the link for the new preview model too. Maybe I will implement it in a later update.

    boinobin730Mar 7, 2026

    @arkinson I find Qwen very useful for key frames and for keeping the likeness of characters, but its slowness is a big downside. I actually haven't tried Flux in a big way, nor Zit. Not yet, anyway. I think just having the extra 4 GB of VRAM really opens up more possibilities. I am so jealous of RTX 6000 people now. I am going to sell a kidney!

    Take your time getting ready with LTX; 2.3 is still so new. I think a lot of people are just finding out the new advantages of 2.3 over 2.0. Prompting still plays an important part in good outputs as well.

    arkinson
    Author
    Mar 7, 2026· 1 reaction

    @boinobin730 I'm just done with the LTX-2.3 functionality tests and will publish it now as an "alpha". Curiously, I can't see much difference/advantage over LTX-2 yet, but it should be a starting base for many more tests. I believe the LTX-2 LoRAs are actually the key problem; I did some quick tests with the image2vid adapter LoRA, but all I got was a blurrier output.

    boinobin730Mar 7, 2026

    @arkinson I was using the other guy's workflow yesterday, just i2v, and it is a bit cleaner, in the sense that the output feels a bit sharper and looks more coherent, with fewer artifacts. It will listen to your prompting a bit more, but it still runs off the rails if you don't put a negative prompt in. The negative prompt was important, especially to get rid of things like gibberish subtitles, so you will probably need it somewhere in your workflow as well. I have seen some T2V and there is less of that annoying hiss in the audio background.

    Ohh, a thing I want to mention: both models (2.0 and 2.3) have a really hard time getting a woman to sing a song in a man's voice. There are definite gender preferences built in. I was trying to get the character to sing a popular rock song (male vocals) and it wouldn't even mime it. I replaced the image with that of an AI man and it did it no problem; used another image of an AI woman and it was back to the same problem. This was without any LoRA.

    Some LoRAs just don't play well with action and speaking. I feel that complex actions plus voice especially don't go well together in an I2V sense. I tried a woman singing and playing the guitar and it couldn't really do it well, no LoRAs either; T2V would probably be fine. Regarding the LoRAs, I think some are redundant, but some of the more popular NSFW ones have been working for me (boobs bouncing etc.).

    boinobin730Mar 7, 2026

    @arkinson ok. Thanks for the new model. I'm going to test it now.

    arkinson
    Author
    Mar 7, 2026

    @boinobin730 Please let us continious here.

    boinobin730Mar 7, 2026

    @arkinson 

    I am still getting blurry output, even after using ltx-2.3-22b-distilled-lora-dynamic_fro09_avg_rank_105_bf16.safetensors.

    The sound quality improved dramatically though, so I am going to use it instead of the old distilled one.

    Is your workflow working for you? I must be doing something wrong.

    arkinson
    Author
    Mar 7, 2026

    @boinobin730 Argh - sorry, I meant let us continue here of course.

    creatorjulie743Feb 17, 2026· 1 reaction
    CivitAI

    Man, none of the samples has the workflow in it. WTF.

    SheyMoFeb 18, 2026· 1 reaction
    CivitAI

    Is there a way to time specific LoRAs in this workflow?

    I'm playing with the LTX face_punch LoRA, but this LoRA "overwrites" everything: everything that happened before without the LoRA, and everything that should happen before the punch, is gone. It's only the face punch, with the location and character information from the prompt.

    arkinson
    Author
    Feb 18, 2026

    @SheyMo I assume you're using v2v and trying to extend your existing video??? Generally there is no way to tell a LoRA to act only in a specific time range.

    But as a workaround: I would save the last frame of your existing video and use it as the start frame in i2v.
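
    Grabbing that last frame doesn't even need ComfyUI; a quick sketch with OpenCV (filenames are placeholders, and exact-frame seeking can be off by a frame or two with some codecs):

    ```python
    # Hedged sketch: extract the last frame of a clip so it can be fed back
    # in as the i2v start image (requires opencv-python).
    import cv2

    cap = cv2.VideoCapture("my_clip.mp4")          # placeholder filename
    last = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) - 1
    cap.set(cv2.CAP_PROP_POS_FRAMES, last)          # seek to the final frame
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite("start_frame.png", frame)       # use this as the i2v input
    else:
        print("Could not read the last frame - try re-encoding the clip first.")
    ```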

    SheyMoFeb 19, 2026

    @arkinson it was T2V, to be honest :D But I'm trying V2V at the moment, after shortening the first clip to a frame that fits as a start. The first results were not what I had expected, but at least they were funny :)

    But according to ChatGPT it should be possible via Checkpoint/Model → Set Model Hooks (hooks = Create Hook LoRA (MO)) → Sampler

    or

    Create Hook LoRA (MO) → Set Hook Keyframes → (optional: Create Hook Keyframes Interp.) → Set Model Hooks → Sampler

    I tried it with:

    - a Create Hook LoRA (MO) node for the LoRA
    - a Timestep Range node for the timing
    - a Cond Set Props node to connect everything

    But I was running into:

    - "AttributeError: 'Linear' object has no attribute 'temp'"

    arkinson
    Author
    Feb 19, 2026

    @SheyMo Ahh - sorry, I have no experience with this stuff. I just did a quick "research": the headline seems to be "masking and scheduling". Sounds interesting. Unfortunately I didn't find any understandable description of scheduling at first quick glance, except some mostly "chaotic" tutorials about masking.

    Unfortunately, I can't be of any help to you at the moment, nor have I played around much with LTX-2 + LoRAs. If you make any progress, please let me know.

    drfaker911219Feb 19, 2026
    CivitAI

    Why do I get this error?
    MathExpression|pysssss

    'Constant' object has no attribute 'n'

    purplerude643Feb 20, 2026
    CivitAI

    I have an error:

    Cannot read properties of undefined (reading 'output')

    Rifler1Feb 21, 2026· 3 reactions
    CivitAI

    I just want to say thank you. Your workflow works great.

    A little tip for beginners: If your video quality is poor, try Euler Ancestral. It worked for me.

    arkinson
    Author
    Feb 21, 2026· 1 reaction

    Thank you so much 😋 Yes, tweaking and playing around with samplers/schedulers can improve quality or the "general" output in many cases. I chose Euler as the "standard" setting because in my experience it works as an all-rounder in most use cases - though not always with the best quality.

    AIden_AIzawaFeb 21, 2026· 2 reactions
    CivitAI

    The best WF I've tested for low-VRAM GPUs! 15 secs at 1280x720 resolution in less than 4 minutes on a 5070 Ti 16 GB!

    arkinson
    Author
    Feb 21, 2026· 1 reaction

    Hi - thank you so much and thank you for buzzing too 😋 I'm really glad this stuff is useful. Happy generating 🙂

    PopHorn1956Feb 21, 2026· 2 reactions
    CivitAI

    This WF is so simple and easy to use! Thank you very much, dude! You are the best!

    arkinson
    Author
    Feb 21, 2026

    Hi - thank you so much 😅 and happy generating 🙂

    StableVibrationsFeb 24, 2026· 1 reaction
    CivitAI

    Edit: Issue seems to have been caused by the latest comfyui update

    arkinson
    Author
    Feb 24, 2026

    @StableVibrations If I remember right, you had the same error? Which ComfyUI version and release are you running?

    StableVibrationsFeb 24, 2026

    @arkinson Yeah, issue seems to be specifically with 14.2

    Currently using the following without issues:
    ComfyUI 0.14.1
    ComfyUI_frontend v1.39.14

    edit: never mind, I saw your other comment now. If you have the issue with 14.1 too, then I have no idea 😭

    arkinson
    Author
    Feb 24, 2026

    @StableVibrations Thank you. Uhh, there is already a v0.14.2 out? I have to check this.

    I just uploaded workflow v1.2 with the KJ GGUF loader. Please give it a try and let me know if it works for you, or post the error message.

    arkinson
    Author
    Feb 24, 2026

    @StableVibrations Oh my devil - we are on v0.15.0 now. Please update your ComfyUI, use my latest workflow and give me a hint whether it works for you too.

    leed831100200Feb 24, 2026· 1 reaction
    CivitAI

    I have updated ComfyUI to the latest version, but the workflow still warns that an "object object" node is missing, and the clip length node doesn't seem to work. How can I fix this?

    arkinson
    Author
    Feb 24, 2026

    Not your error message, but please look here. Which ComfyUI version and release are you running?

    leed831100200Feb 25, 2026

    @arkinson Hi, thanks for replying. I have tried your v1.0 and v1.2 workflows with ComfyUI 0.12, 0.14 and 0.15, but none of them worked for me. The error message says: SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)

    arkinson
    Author
    Feb 25, 2026

    @leed831100200 Ok, the latest release is 0.15.0; that should work with my latest workflow v1.2.

    Which ComfyUI version exactly? Which node throws the error message, or when do you get it? Did you check all models in every loader node? Did you google it, like this result here? Are all nodes up to date?
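
    That particular message usually means the workflow .json itself is broken (truncated download, or an HTML error page saved as .json). A quick way to check before blaming ComfyUI - a minimal sketch, with a placeholder filename:

    ```python
    # Minimal check: does the downloaded workflow file actually contain
    # valid JSON? If not, re-download it - ComfyUI can't import it anyway.
    import json

    path = "ltx2_all_in_one_v1.2.json"  # placeholder filename
    with open(path, encoding="utf-8") as f:
        text = f.read()
    try:
        json.loads(text)
        print("Workflow JSON parses fine - the problem is elsewhere.")
    except json.JSONDecodeError as e:
        print(f"Broken JSON at line {e.lineno}, column {e.colno}: {e.msg}")
        print("First characters:", repr(text[:40]))  # often shows '<!DOCTYPE html>'
    ```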

    arkinson
    Author
    Feb 24, 2026· 3 reactions
    CivitAI

    Attention: Issues with the latest ComfyUI release (v0.14.x).

    I just tested a freshly installed system (ComfyUI-Easy-Install, ComfyUI v0.14.1) and get the following error with this LTX-2 workflow: "SamplerCustomAdvanced mat1 and mat2 shapes cannot be multiplied (1024x3840 and 15360x3840)". Older ComfyUI (v0.12.x) still seems to work.
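
    For the curious: that message is just PyTorch refusing a matrix multiply whose inner dimensions don't line up, which is what happens when a node feeds tensors packed for one model layout into code expecting another. A two-line repro of the exact error text (illustrative only; the real mismatch happens deep inside the sampler):

    ```python
    # Reproduce the error text: torch.mm requires the inner dimensions to
    # match, i.e. (1024 x 3840) @ (3840 x N) - not (15360 x 3840).
    import torch

    a = torch.randn(1024, 3840)    # activations from one node
    b = torch.randn(15360, 3840)   # weights packed for a different layout
    try:
        torch.mm(a, b)
    except RuntimeError as e:
        print(e)  # mat1 and mat2 shapes cannot be multiplied (1024x3840 and 15360x3840)
    ```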

    As far as I can see, it happens after the latest ComfyUI updates, and my other Wan 2.2 workflows are also affected, but with different error messages.

    The problem seems to be that specific nodes are conflicting with the ComfyUI version.

    I hope the node creators will update their packages soon. Meanwhile I will try to identify the conflicting nodes, and maybe we will find some quick workarounds.

    If you have any hints, ideas or solutions, please post them here. If we get it running, I would like to publish updates of my workflows.


    arkinson
    Author
    Feb 24, 2026

    OK - the LTX-2 workflow is fixed now. Please use the new version v1.2.

    [edit: Please update your ComfyUI, we are currently on v0.15.0.]

    [edit: don't forget to update all custom nodes too]

    girlswithafrosFeb 24, 2026
    CivitAI

    SamplerCustomAdvanced

    not enough values to unpack (expected 4, got 3)

    Is this a custom node conflict? Thanks!

    arkinson
    Author
    Feb 24, 2026

    @girlswithafros Not sure about your error, but I just published a fixed version which works with the latest ComfyUI. Please try v1.1.

    girlswithafrosFeb 24, 2026

    @arkinson Thank you !

    girlswithafrosFeb 24, 2026· 1 reaction

    @arkinson It works, thanks a lot.

    arkinson
    Author
    Feb 24, 2026

    @girlswithafros Thank you for your feedback and good luck with generation.

    Workflows: LTXV2

    Details

    Downloads: 2,332
    Platform: CivitAI
    Platform Status: Available
    Created: 1/31/2026
    Updated: 4/30/2026
    Deleted: -