Source: https://huggingface.co/unsloth/LTX-2.3-GGUF/tree/main
Workflow: https://huggingface.co/RuneXX/LTX-2.3-Workflows/blob/main/LTX-2.3_-_I2V_T2V_Basic_GGUF.json
Distilled version: https://civarchive.com/models/2484503/ltx-23-distilled-gguf-unsloth
💪Train your own model: https://runpod.io?ref=gased9mt
🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb
Comments (11)
Thank you very much! It is great! Very helpful!
Also the workflow is very nice! I love it!
(A note on what I changed in the workflow to make it work for me: since I have 8GB of VRAM, the VAE decode gave me an OOM error, so I added a RAM and VRAM cleaning node right before it, just ahead of the LTXVSeparateAVLatent node. Now it works great, thank you very much!)
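For anyone curious what such a cleanup node actually does: it typically just forces Python garbage collection and asks PyTorch to release its cached VRAM before the next heavy step. A minimal sketch, assuming PyTorch; `free_memory` here is a hypothetical helper, not the actual node's source:

```python
import gc

def free_memory():
    """Release cached allocations before a heavy step like VAE decode."""
    gc.collect()  # drop unreferenced Python objects (frees system RAM)
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached VRAM to the driver
            torch.cuda.ipc_collect()  # clean up CUDA IPC handles
    except ImportError:
        pass  # torch not installed; RAM-only cleanup

free_memory()
```

Note this only releases *cached* memory; it cannot shrink tensors that are still referenced by the running workflow, which is why it helps most when inserted between large pipeline stages.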
Which model did you use? Q4, Q5, Q3?
I feel like I'm cursed to never be able to use things like this. I just get errors all over the place. When I fix one, I get "AttributeError: 'VAE' object has no attribute 'latent_frequency_bins'" on the LTXV Empty Latent Audio node.
Just throw those errors at Grok or Claude; they usually point you in the right direction. Register with a free email account so you don't get limited to 5 questions. In my months of experience they are better than ChatGPT for this.
@pink0909 good point. I am using https://chat.qwen.ai/ which is basically free since I signed up.
@RalFinger ah ok, I only know Qwen from the local uncensored (abliterated) LLMs I download from Hugging Face. I will check out how the online version works.
It isn't a curse, it's taking the time to learn
I have a 16GB 4060 Ti and 32GB of RAM, but the workflow keeps crashing at the VAE decode node because my RAM fills up. It also takes about 20 minutes to generate; I'm wondering what the heck I'm doing wrong.
Assuming the VAE OOMs before the video is saved, it's because system RAM is full. The only real fixes are upgrading to 64GB of RAM, or lowering the resolution to 512x512 and the length to 3 seconds; experiment with those two settings. Even with a 5090 and 64GB of RAM I still run out of RAM with the fp8 base model. It happens, just less often.

A few other ways to claw back memory: lower your monitor's resolution in Windows to free about 500MB of VRAM (be careful, your desktop icons will get rearranged); Windows uses about 1.2GB of VRAM on a 4K monitor. Switching to Linux can save another ~2GB of system RAM. Steam uses about 500MB of RAM in the background, and RGB lighting software can use another 250MB depending on what you run.

At 32GB you basically have to run the AI program and nothing else. Anything that opens or runs in the background can make the difference between working and not working, and that includes Windows deciding it wants to update. The only real solution is to upgrade to 64GB of RAM. For me LTX2 is more memory-friendly than wan2.2, but at the end of the day I still OOM with any video model.
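The resolution/length advice above works because decode memory scales roughly with width x height x frames. A rough back-of-the-envelope sketch; the frame counts and the 8x spatial / 8x temporal VAE compression factors below are illustrative assumptions, not measured LTX-2 numbers:

```python
def latent_elements(w, h, frames, spatial=8, temporal=8):
    """Approximate latent grid size: pixels and frames are both
    downsampled by the VAE's compression factors (assumed here)."""
    return (w // spatial) * (h // spatial) * (frames // temporal + 1)

hi = latent_elements(1280, 720, 121)  # e.g. 720p, ~5 s at 24 fps
lo = latent_elements(512, 512, 73)    # 512x512, ~3 s
print(f"approx. memory ratio: {hi / lo:.1f}x")
```

So the suggested drop to 512x512 and 3 seconds cuts the working set by several times, which is often the difference between an OOM and a finished decode on 32GB systems.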
Try cachedit ltx-2 and ltx2 attention tuner
Love Unsloth! Thanks
