Update (April 14th, 2026): Lightricks has updated their LTX 2.3 distilled model to 1.1 (plus matching LoRA):
Model (1.1, fp8_scaled by Kijai): https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/diffusion_models
Distilled LoRA 1.1: https://huggingface.co/Lightricks/LTX-2.3/tree/main
V2.5 LTX-2.3 DEV & Distilled Video with Audio
Image to Video and a Text to Video workflow; both can use your own prompts or Ollama generated/enhanced prompts.
Works with the latest LTX 2.3 Distilled model (8 steps, CFG=1) or the Dev model (20 steps, CFG=3).
Updated the processing for the DISTILLED and DEV models: select the DIST or DEV model in the loader node and switch to the dedicated DIST or DEV processing pipeline, so each model has its own processing path.
DIST model pipeline: Standard Guider and Basic Scheduler, following the manual sigmas issued by Lightricks
DEV model pipeline: MultiModal Guider and LTX Scheduler, plus the Distilled LoRA on the latent upscaler
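If you drive the workflow from a script via the ComfyUI API, the DIST/DEV split above can be mirrored in a small lookup. A sketch — the dictionary shape and the filename heuristic are my own; only the step counts, CFG values, and node names come from the notes above:

```python
# Sampler settings per model variant, as described in the workflow notes.
PIPELINES = {
    "distilled": {"steps": 8, "cfg": 1.0,
                  "guider": "Standard Guider",
                  "scheduler": "Basic Scheduler (manual Lightricks sigmas)"},
    "dev":       {"steps": 20, "cfg": 3.0,
                  "guider": "MultiModal Guider",
                  "scheduler": "LTX Scheduler (+ Distilled LoRA on the latent upscaler)"},
}

def settings_for(model_filename):
    """Pick the pipeline config from the loaded model's filename."""
    key = "distilled" if "distilled" in model_filename.lower() else "dev"
    return PIPELINES[key]
```

This keeps the two pipelines from drifting apart when you batch-queue renders with either model.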
Included a workflow version with the "RTX Video Super Resolution" node, which upscales videos at high speed.
Tip: With the latest Comfy and LTX updates, processing got faster for me, so I can increase scale_by in the sampler node from 0.5 to 0.6 or higher for crisper videos with only a minor impact on render time.
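To put a rough number on that tip: the first pass renders at scale_by times the target resolution, so its pixel count (and thus much of its cost) grows quadratically with scale_by. A back-of-the-envelope sketch, assuming a 1920x1080 target:

```python
def first_pass_pixels(width, height, scale_by):
    # Pixel count of the low-resolution first pass; the second pass then
    # applies the x2 spatial latent upscale toward the final resolution.
    return int(width * scale_by) * int(height * scale_by)

# Relative first-pass cost of scale_by 0.6 vs. 0.5 at a 1920x1080 target:
ratio = first_pass_pixels(1920, 1080, 0.6) / first_pass_pixels(1920, 1080, 0.5)
print(round(ratio, 2))  # 1.44: ~44% more first-pass pixels
```

So 0.5 to 0.6 is roughly 1.44x the first-pass work, which the recent speedups can absorb; total render time also includes the upscale pass, so the overall impact is smaller than that ratio suggests.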
V2.3 LTX-2.3 DEV & Distilled Video with Audio
Downloads for LTX 2.3:
LTX-2.3 Distilled & Dev Models (fp8_scaled): https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/diffusion_models
Text encoder 1 (fp8_e4m3fn, same as LTX-2): https://huggingface.co/GitMylo/LTX-2-comfy_gemma_fp8_e4m3fn/tree/main
Text encoder 2 (projection_bf16): https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/text_encoders
Video & Audio Vae: https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/vae
Loras:
Spatial upscaler (x2-1.1): https://huggingface.co/Lightricks/LTX-2.3/tree/main
Distilled LoRA for the upscaler (lora-384): https://huggingface.co/Lightricks/LTX-2.3/tree/main
Smaller, alternative Distilled LoRA by Kijai: https://huggingface.co/Kijai/LTX2.3_comfy/tree/main/loras
Detailer Lora (same as LTX-2): https://huggingface.co/Lightricks/LTX-2-19b-IC-LoRA-Detailer/tree/main
Ollama Model (prompt only, fast): https://ollama.com/mirage335/Llama-3-NeuralDaredevil-8B-abliterated-virtuoso
alternative model with Vision (reads input image+prompt, slower): https://ollama.com/huihui_ai/qwen3-vl-abliterated
other model with Vision (great for I2V): https://ollama.com/huihui_ai/qwen3.5-abliterated
Smaller LTX 2.3 GGUF Dev or Distilled models work as well (replace the Checkpoint Loader node with the Unet Loader node from this custom node pack: https://github.com/city96/ComfyUI-GGUF ):
models: https://huggingface.co/unsloth/LTX-2.3-GGUF/tree/main
save to models/unet/
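If you prefer scripting the download instead of clicking through the browser, Hugging Face serves repository files at a predictable `resolve` URL. A small sketch — the example filename below is a placeholder, so pick a real quantization from the repo listing:

```python
def hf_file_url(repo_id, filename, revision="main"):
    # Direct-download URL pattern used by the Hugging Face Hub.
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Placeholder filename; check the repo for the actual GGUF quant you want,
# then save the downloaded file to ComfyUI/models/unet/.
url = hf_file_url("unsloth/LTX-2.3-GGUF", "some-quant.gguf")
print(url)
```

You can feed that URL to wget/curl, or use the official `huggingface_hub` library's download helpers instead.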
V1.5 LTX-2 DEV Video with Audio, including the latest Multimodal Guider
Image to Video and a Text to Video workflow; both can use your own prompts or Ollama generated/enhanced prompts.
Replaced the Guider node with the latest Multimodal Guider node; see more details in the WF notes or here: https://ltx.io/model/model-blog/ltx-2-better-control-for-real-workflows Previously we had a single CFG parameter for both audio and video. With the Multimodal Guider, we can now tweak audio and video separately, with even more parameters.
Added a Power Lora Loader node to inject further LoRAs.
Use the Image to Video Adapter LoRA to improve motion for I2V: https://huggingface.co/MachineDelusions/LTX-2_Image2Video_Adapter_LoRa/tree/main
Replaced a node so the workflow no longer requires the comfymath custom nodes.
V1.0 LTX-2 DEV Video with Audio:
Image to Video and a Text to Video workflow with your own prompts or Ollama generated/enhanced prompts.
Setup for the LTX-2 Dev model.
Uses the Detailer LoRA for better quality and the LTX tiled VAE to avoid OOM errors and visible grid artifacts.
2-pass rendering (motion + upscale); the upscale pass uses the distilled and spatial-upscaler LoRAs.
Set up with the latest LTXVNormalizingSampler to increase video & audio quality.
Text to Video can use dynamic prompts with wildcards.
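For reference, dynamic-prompt wildcards are typically written as brace groups like `{red|blue|green}`, from which one option is picked per generation (the exact syntax depends on the wildcard node you use). A minimal sketch of that expansion, assuming the common `{a|b|c}` form:

```python
import random
import re

CHOICE = re.compile(r"\{([^{}]*)\}")  # matches an innermost {a|b|c} group

def expand(prompt, rng=None):
    """Resolve {a|b|c} wildcards, innermost first, until none remain."""
    rng = rng or random.Random()
    while True:
        match = CHOICE.search(prompt)
        if match is None:
            return prompt
        pick = rng.choice(match.group(1).split("|"))
        prompt = prompt[:match.start()] + pick + prompt[match.end():]

print(expand("a {red|blue|green} car at {dawn|{noon|dusk}}"))
```

Nested groups work because the regex only ever matches a group with no braces inside it, so inner choices are resolved before outer ones.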
Download LTX-2 Files: (Workflow V1.0 and V1.5 only)
Find Model/Lora Loader nodes within Sampler Subgraph node.
- LTX2 Dev Model (dev_Fp8): https://huggingface.co/Lightricks/LTX-2/tree/main
- Detailer Lora: https://huggingface.co/Lightricks/LTX-2-19b-IC-LoRA-Detailer/tree/main
- Distilled (lora-384) & Spatial upscaler Lora: https://huggingface.co/Lightricks/LTX-2/tree/main
- VAE (already included in above dev_FP8 model, but needed if you go for GGUF models): https://huggingface.co/Lightricks/LTX-2/tree/main/vae
- Textencoder (fp8_e4m3fn): https://huggingface.co/GitMylo/LTX-2-comfy_gemma_fp8_e4m3fn/tree/main
- Image to Video Adapter Lora (more motion with I2V): https://huggingface.co/MachineDelusions/LTX-2_Image2Video_Adapter_LoRa/tree/main
Save Location:
📁 ComfyUI/
└── 📁 models/
    ├── 📁 checkpoints/
    │   └── ltx-2-19b-dev-fp8.safetensors
    ├── 📁 text_encoders/
    │   └── gemma_3_12B_it_fp8_e4m3fn.safetensors
    ├── 📁 loras/
    │   └── ltx-2-19b-distilled-lora-384.safetensors
    ├── 📁 latent_upscale_models/
    │   └── ltx-2-spatial-upscaler-x2-1.0.safetensors
    └── 📁 clip/
        └── ltx-2.3_text_projection_bf16.safetensors
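The layout above can be pre-created so the downloads have somewhere to land. A small sketch (folder names taken from the layout; the root path is whatever your ComfyUI install is):

```python
from pathlib import Path

# Subfolders of ComfyUI/models/ used by this workflow, per the layout above.
FOLDERS = ["checkpoints", "text_encoders", "loras",
           "latent_upscale_models", "clip"]

def make_model_dirs(root="ComfyUI"):
    """Create any missing model folders; existing ones are left untouched."""
    for name in FOLDERS:
        Path(root, "models", name).mkdir(parents=True, exist_ok=True)
```

Call `make_model_dirs("/path/to/ComfyUI")` once, then drop each downloaded file into its matching folder.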
Custom Nodes used:
https://github.com/Comfy-Org/Nvidia_RTX_Nodes_ComfyUI (RTX VSR Version)
Text 2 Video only:
Ollama help:
Install Ollama from https://ollama.com/
download a model: Go to a model page, chose a model , then hit the copy button, i.e. https://ollama.com/huihui_ai/qwen3-vl-abliterated
open terminal and paste the model name, i.e.: ollama run huihui_ai/qwen3-vl-abliterated
model will be downloaded and can be selected in green comfy node "Ollama Connectivity". Hit "Reconnect" to refresh.
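If you want to run the same prompt-enhancement step outside ComfyUI, Ollama exposes a local REST endpoint (`/api/generate` on port 11434 by default). A minimal sketch — the instruction wording is my own, and the model name is one of the models linked above:

```python
import json
from urllib.request import Request, urlopen

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model, user_prompt):
    # Ask the LLM to rewrite a short idea into a detailed video prompt.
    instruction = ("Rewrite the following idea as a detailed, cinematic "
                   "video generation prompt: ")
    return {"model": model, "prompt": instruction + user_prompt, "stream": False}

def enhance(model, user_prompt):
    """Send the request to the local Ollama server and return its answer."""
    data = json.dumps(build_payload(model, user_prompt)).encode()
    req = Request(OLLAMA_URL, data=data,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream` set to False the server returns one JSON object whose `response` field holds the enhanced prompt, which is what the green Ollama node feeds into the sampler.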
Example longer Video
Description
LTX2 DEV Image to Video and Text to Video
Comments (13)
I like it. Issue is that all my gens are mostly static. Can you please add the "anti static" lora? I'm unsure where to connect as there is no room on the sampler node to add third lora
You can chain another LoRA loader node onto the existing one for the detailer. If you copy the existing LoRA node and paste with Ctrl+Shift+V, I think it will connect it correctly. Presumably your additional LoRA is only required on the first pass, not on the upscaler.
@tremolo28 Ah, inside the sampler I2V you mean, ok
@drfaker911219 Nah, outside the sampler, above the Ollama group on the left, the detailer LoRA group
Thank you for posting all the models and nodes needed, but I am having trouble figuring out where to find LTXVTiledVAEDecode and LTXVNormalizingSampler to make this workflow work. Could you also post or lead me to them? Thank you!
Hi, those nodes are from lightricks: https://github.com/Lightricks/ComfyUI-LTXVideo
@tremolo28 Thanks! It wasn't properly installed. Now I'm having a problem with LTX-2-19b-IC-LoRA-Detailer not loading even though it's in the loras folder.
Failed to validate prompt for output 75:
* LoraLoaderModelOnly 360:343:
- Value not in list: lora_name: '94_LTX2\ltx-2-19b-distilled-lora-384.safetensors' not in ['ltx-2-19b-distilled-lora-384.safetensors', 'ltx-2-19b-ic-lora-detailer.safetensors', 'ltx-2-19b-lora-camera-control-dolly-in.safetensors', 'ltx-2-19b-lora-camera-control-dolly-left.safetensors', 'ltx-2-19b-lora-camera-control-jib-down.safetensors', 'ltx-2-19b-lora-camera-control-jib-up.safetensors']
* LoraLoaderModelOnly 314
@iimacgyverii221 You need to select your local LoRA in the LoRA loader nodes in the workflow. Comfy can't find your LoRA; that seems to be the issue. "94_LTX2..." is the location where I saved it locally; your LoRA location is likely different.
@tremolo28 Thanks again! I understand now, but after changing the two in the detailer it would still fail on the distilled 384, and I couldn't find where that one was being set, so I made a 94_LTX2 folder in my loras folder and copied all of the LoRAs into it. It's running now. I'll report back after I use this workflow for a while. Thanks again!
@iimacgyverii221 Hi, "distilled 384": that LoRA loader is in the subgraph of the sampler, where the model loader is. Click the symbol in the upper right corner of the sampler node to enlarge it.
@tremolo28 Awesome! Thanks for being so helpful. One more question: if I were to add another LoRA, where would be the best location?
@iimacgyverii221 I suggest adding it after the "Detailer Lora Main" node, above the Ollama group in the WF. In most cases it is enough to have the extra LoRA on that main pass only, where the images are generated. The upscaler pass does not require an additional LoRA, unless it was created specifically for that.
@tremolo28 Thanks, I will try that!
