LoRAs used: beeg (and my sheer LoRA).
No real LoRAs were used here for female dominance; this is all from prompting.
Add more LoRAs to get more out of it. ;)
New style: Femdom



This has become such a big project that I am struggling to find every flaw, so expect some.
It will be updated every two days until I feel I can't fix any more. I won't be adding more features, I think, just tweaks.
I will update over the next few days, but I need a break, lol. I'm tired :D
The old LoRA Daddy Easy Prompt was 2,000 lines of code.
This one plus the library is 14,700 lines, with 107,346 words between your prompt and the output.
DELETE YOUR ENTIRE ComfyUI\custom_nodes\LTX2EasyPrompt-LD FOLDER AND RE-CLONE IT FROM GitHub.
You will also need the LoRA loader.
WORKFLOW
If the tool is great and you love it, and only then, consider buying me a coffee <3
Keep in mind I only do text-to-video; this stuff would probably work better image-to-video, as the LoRAs associated with it suggest.
Sex Position Triggers (I HAVE ONLY TRIED cowgirl and dildo so far; this is a placeholder for future LoRAs, as it has full instructions on how to do it, plus dialogue):
- missionary: missionary, face to face, on her back
- cowgirl: cowgirl, on top, riding him, rides him, sitting on him
- reverse cowgirl: reverse cowgirl, facing away on top
- doggy: doggy, doggy style, from behind, on all fours, bent over
- blowjob: blowjob, blow job, going down on him, fellatio
- 69 / sixty-nine: 69, sixty nine, mutual oral
- riding (solo toy): riding a dildo, riding a toy, riding a vibrator, dildo riding, solo riding
So this has been a fun little project for me. This is nothing like the previous prompt tools.
It has an entire dialogue library: each possible action has 30 x 4 selectable dialogues that SHOULD match the scene,
plus there are other things it can add, like swearing and other context (this assumes you don't use your own dialogue or give it less prompt to work with).
Now I've added a music genre preset selector:
44 music genres, each mapped to its own lyric register and vocal style: Jazz · Blues · Classical / Orchestral · Opera · Soul / Motown · Gospel · R&B / RnB · Neo-soul · Hip-hop / Rap · Trap · Drill / UK Drill · Afrobeats · Dancehall / Reggaeton · Reggae / Ska · Cumbia / Salsa / Latin · Bollywood / Bhangra · K-pop · J-pop / City pop · Bossa nova / Samba · Folk / Americana · Country · Rock · Metal / Heavy metal · Punk / Pop-punk · Indie rock / Shoegaze · Lo-fi hip-hop · Pop · House music · Techno · Drum and Bass · Ambient / Atmospheric · Electronic / Synth-pop · EDM / Big room · Dance pop · Emo / Post-hardcore · Chillwave / Dream pop · Baroque / Harpsichord · Flamenco / Fado · Smooth jazz · Synthwave / Retrowave · Funk / Disco · Afro-jazz · Celtic / Folk-rock · City pop / Vaporwave
And on top of that, predefined scenes that are always similar (seed varied) for more precise control.
**57 environment presets (every scene has a world):**
Iconic Real-World Locations
Big Ben – Westminster at night · Times Square – peak night · Eiffel Tower – sparkling midnight · Golden Gate – fog morning
Angkor Wat – golden hour · Versailles – Hall of Mirrors · Tokyo Shibuya crossing – night · Santorini – caldera dawn
Iceland – black sand beach · Seoul – Han River bridge night · Hollywood Walk of Fame · Amalfi Coast – cliff road
Japanese shrine – early morning · San Francisco – Lombard Street night
Performance & Event Spaces
K-pop arena – full concert · K-pop stage – rehearsal · Vienna opera house – empty stage · Coachella – sunset set
Empty stadium – floodlit night · Jazz club – late night · Speakeasy – basement jazz club
Natural & Remote
Beach – golden hour · Mountain peak – dawn · Dense forest – diffused green · Underwater – shallow reef
Desert – midday heat · Night sky – open field · Snowfield – high altitude · Amazon – jungle interior
Maldives overwater bungalow · Japanese onsen – mountain hot spring
Urban & Interior
Grand library – vaulted reading room · Train – moving through night · Plane cockpit – cruising · NYC subway – 3am
Tokyo convenience store – 3am · Rain-soaked city street – night · Rooftop – city at night · Ice hotel – Lapland
Underground club – strobes · Bedroom – warm evening · Penthouse – floor-to-ceiling glass · Car – moving at night
Office – after hours · Hotel room – anonymous · Private gym – mirrored walls
Adults-only
Casting couch · Private dungeon – red light · Penthouse suite – mirrored ceiling · Private pool – after midnight
Adult film set · Back seat – parked at night · Voyeur – lit window · Rooftop pool – Las Vegas strip
Secluded forest clearing · Rooftop – Tokyo neon rain
There's way too much to explain.
Complete Beginner's Guide: How to Set Up and Use the LTX Video Workflow
(Made with help from Grok – blame El00n if the PC explodes)
@LoRa_Daddy – Please feel free to delete or edit if anything is incorrect.
### What This Workflow Does
This tool inside ComfyUI lets you create short cinematic videos. You give it a photo of a person and a few words. It automatically adds music, dialogue, camera movements, and special effects, like having a real film director inside your computer. Perfect for funny comedy videos or over-the-top action-comedy scenes (including Spooderman spoofs).
### Step 1: Install ComfyUI (The Main Program)
1. Go to this website: https://github.com/comfyanonymous/ComfyUI/releases
2. Download the latest portable version for Windows.
3. Extract the zip file to a new folder (example: C:\ComfyUI).
4. Double-click run_nvidia_gpu.bat (or run_cpu.bat if you don't have a good graphics card).
A black window will open. Wait until it finishes.
### Step 2: Install ComfyUI Manager
1. Open the folder: C:\ComfyUI\ComfyUI\custom_nodes
2. Download this file: https://github.com/ltdrdata/ComfyUI-Manager/archive/refs/heads/main.zip
3. Extract it and rename the folder to ComfyUI-Manager.
4. Close and reopen ComfyUI.
5. You will now see a Manager button at the bottom right.
### Step 3: Install the Required Custom Nodes
- Click the Manager button.
- Search and install: KJNodes, VideoHelperSuite, and ComfyUI-LTXVideo.
For the two special LTX nodes:
1. Open a terminal window inside the custom_nodes folder.
2. Copy and paste these two commands one by one:
```
git clone https://github.com/seanhan19911990-source/LTX2EasyPrompt-LD.git
git clone https://github.com/seanhan19911990-source/LTX2-Master-Loader.git
```
3. Restart ComfyUI.
### Step 4: Download and Place the Required Models
Most models are downloaded automatically by the workflow, so you don't need to hunt for them manually. However, placing these files manually gives the best results:
- Main Video Model → C:\ComfyUI\ComfyUI\models\diffusion_models\
- Preview VAE (note: this is not the main VAE used by the workflow) → C:\ComfyUI\ComfyUI\models\vae\
  Download: https://huggingface.co/Kijai/LTX2.3_comfy/resolve/main/vae/taeltx2_3.safetensors
- Spatial & Temporal Upscalers → C:\ComfyUI\ComfyUI\models\latent_upscale_models\
  - Spatial Upscaler: https://huggingface.co/Lightricks/LTX-2.3/resolve/main/ltx-2.3-spatial-upscaler-x2-1.1.safetensors
  - Bonus (recommended by the author): also download the Temporal Upscaler and place it in the same latent_upscale_models folder. This helps the workflow output double-FPS videos.
Restart ComfyUI after placing the files.
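If you prefer scripting the manual placement, here is a minimal sketch using `huggingface_hub` (which ComfyUI installs alongside its other dependencies). The repo and file names come from the links above; the folder layout assumes the portable install path used in this guide.

```python
# Sketch: fetch the preview VAE and copy it into a ComfyUI install.
# Assumes huggingface_hub is installed and the portable folder layout
# (C:\ComfyUI\ComfyUI\models\...) used elsewhere in this guide.
import shutil
from pathlib import Path

def vae_target_path(comfy_root: str) -> Path:
    """Destination for taeltx2_3.safetensors inside a ComfyUI install."""
    return Path(comfy_root) / "ComfyUI" / "models" / "vae" / "taeltx2_3.safetensors"

def download_vae(comfy_root: str) -> Path:
    """Download the preview VAE from Hugging Face and place it in models/vae."""
    from huggingface_hub import hf_hub_download  # lazy import; needs network
    cached = hf_hub_download(
        repo_id="Kijai/LTX2.3_comfy",
        filename="vae/taeltx2_3.safetensors",
    )
    target = vae_target_path(comfy_root)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(cached, target)  # copy out of the HF cache into models/vae
    return target
```

Call `download_vae(r"C:\ComfyUI")` once, then restart ComfyUI as described above.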
### Step 5: Install RTX Super Resolution (For Sharper Videos)
1. Go to your ComfyUI folder and open the python_embeded folder.
2. Right-click inside it and select "Open Terminal here".
3. Copy and paste this exact command, then press Enter:
```
.\python.exe -m pip install -U --no-build-isolation nvidia-vfx --index-url https://pypi.nvidia.com
```
4. Restart ComfyUI.
### Step 6: Set Up the AI Models (So They Never Download Again)
The workflow uses two different AI models that work together:
- prithivMLmods/Qwen2.5-VL-7B-Abliterated-Caption-it: the image-reading model (shown as LTX-2 Vision Describe By LoRa-Daddy). It reads your reference photo and understands the character, clothing, pose, and context.
- huihui-ai/Huihui-Qwen3.5-9B-abliterated: the prompt-generation model (shown as LTX-2.3 Easy Prompt Qwen By LoRa-Daddy). It takes your short text prompt plus the image information and creates a detailed cinematic prompt and dialogue.
To stop them from re-downloading every time:
1. The paths will look similar to these (the long number at the end may be different on your PC):
- Prompt model: C:\Users\USER\.cache\huggingface\hub\models--huihui-ai--Huihui-Qwen3.5-9B-abliterated\snapshots\05b9e7c9b978ba29bdb8f50a49c30e4b91183339
2. Make sure each snapshot folder contains the four large model-0000x-of-00004.safetensors files.
3. In the LTX-2.3 Easy Prompt Qwen By LoRa-Daddy node, set offline_mode to True.
If models are missing the first time, set it to False temporarily, run once, then set it back to True.
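A quick way to check a snapshot folder before flipping offline_mode to True is to verify all four shards are present. This is a small sketch; the shard filename pattern is an assumption based on the standard Transformers sharded-checkpoint naming.

```python
# Sketch: verify a Hugging Face snapshot folder holds all four model shards.
# The model-0000x-of-0000N.safetensors naming is the usual Transformers
# sharded-checkpoint convention (an assumption, not taken from the node).
from pathlib import Path

def has_all_shards(snapshot_dir: str, total: int = 4) -> bool:
    """True if every expected safetensors shard exists in snapshot_dir."""
    folder = Path(snapshot_dir)
    expected = {
        f"model-{i:05d}-of-{total:05d}.safetensors" for i in range(1, total + 1)
    }
    present = {p.name for p in folder.glob("model-*-of-*.safetensors")}
    return expected <= present
```

Point it at the snapshot path from step 1; if it returns False, run once with offline_mode set to False so the missing shards download.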
### Step 7: Load Your Workflow
1. In ComfyUI, click Load and select your workflow JSON file.
2. If any nodes are red, click Manager โ Install Missing Custom Nodes.
3. Restart ComfyUI.
### Step 8: Custom Audio โ Use Your Own Song or Let It Generate
There is no switch named "Custom Audio". Look for the group called:
> Audio - Trimmed to length of video
Inside this group, find the small round switch button (labelled switch).
- To use your own audio file: Set the switch to True and load it in the Load Audio node.
- To let the AI create its own music and dialogue (recommended): Set the switch to False.
You will also see the MelBandRoFormer Model Loader (model: MelBandRoFormer_fp32.safetensors).
### Step 9: Best Settings for Spooderman Video
In the node LTX-2.3 Easy Prompt Qwen By LoRa-Daddy:
- Use image information? โ True
- Let the LLM create dialogue? โ True
- Creativity โ 0.85
- Shot angle โ High angle โ vulnerable
- Camera movement โ Slow push in
- Music Genre โ Comedy or Pop
- Emotional state โ Joyful or Playful
- Reference Image Strength โ 0.92
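For reference, the Step 9 settings can be captured as a plain Python dict. The key names here are illustrative stand-ins for the node's widgets, not its actual parameter names.

```python
# Step 9 settings as a dict; keys are illustrative names for the widgets
# on the LTX-2.3 Easy Prompt Qwen By LoRa-Daddy node.
SPOODERMAN_SETTINGS = {
    "use_image_information": True,
    "llm_creates_dialogue": True,
    "creativity": 0.85,
    "shot_angle": "High angle - vulnerable",
    "camera_movement": "Slow push in",
    "music_genre": "Pop",          # or "Comedy"
    "emotional_state": "Joyful",   # or "Playful"
    "reference_image_strength": 0.92,
}
```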
### Step 10: Make Your First Spooderman Video
1. Load your Spooderman reference image in the LoadImage node.
2. Paste this prompt:
```
over-the-top hilarious Spooderman spoof action scene, ridiculous parody of Spider-Man, clumsy hero named Spooderman wearing a cheap red-and-blue suit with visible zippers and loose threads, oversized web shooters that spray silly string instead of webs, terrible aim, constantly tripping over his own feet, accidentally swinging into lampposts and trash cans, comically failing at heroic poses, dramatic slow-motion falls, exaggerated facial expressions, goofy sound effects, chaotic city street at night, cars honking, pedestrians laughing and filming him, buildings covered in silly string, villain is just a confused robber in a cheap costume, pure slapstick comedy, silly action parody, very funny, light-hearted, cartoonish chaos
```
3. Use the Spooderman settings from Step 9.
4. Click Queue Prompt.
### Step 11: Quick Troubleshooting
- Red nodes โ Install missing custom nodes again.
- No sound โ Set the switch in Audio - Trimmed to length of video to False.
- Model not loading โ Check offline_mode and restart.
- Blurry video โ Enable RTX upscaler + use both Spatial and Temporal upscalers in latent_upscale_models.
You now have everything you need! The workflow itself doesn't change much anymore; only capabilities and presets may update in the future.
Take your time and enjoy making funny Spooderman videos!
That kinda-Scottish accent on the preview vid makes it so funny.
the face isn't the same as i uploaded. Spooderman lost his mask and now all his friends bully him at school! Not fair! I want justice for Spooderman! Or a way to control denoise strength, steps and cfg. and yes, I have a backup keyboard. for times when I need to fight for my heroes! for Spooderman!
could you help me with this problem?
ValueError: The checkpoint you are trying to load has model type qwen3_5 but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command pip install --upgrade transformers. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command pip install git+https://github.com/huggingface/transformers.git
```
File "C:\Users\artur\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 524, in execute
    output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
File "C:\Users\artur\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 333, in get_output_data
    return_values = await asyncmap_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
File "C:\Users\artur\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 307, in asyncmap_node_over_list
    await process_inputs(input_dict, i)
File "C:\Users\artur\AppData\Local\Programs\ComfyUI\resources\ComfyUI\execution.py", line 295, in process_inputs
    result = f(**inputs)
File "C:\Users\artur\Documents\ComfyUI\custom_nodes\LTX2EasyPrompt-LD-Pre-Extra-feature-Main\LTX2EasyPromptQwen.py", line 2747, in generate
    self.load_model(offline_mode=offline_mode, local_path=local_path, model_id=selected_model)
File "C:\Users\artur\Documents\ComfyUI\custom_nodes\LTX2EasyPrompt-LD-Pre-Extra-feature-Main\LTX2EasyPromptQwen.py", line 2372, in load_model
    self.model = AutoModelForCausalLM.from_pretrained(
File "C:\Users\artur\Documents\ComfyUI\.venv\Lib\site-packages\transformers\models\auto\auto_factory.py", line 549, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
File "C:\Users\artur\Documents\ComfyUI\.venv\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 1362, in from_pretrained
    raise ValueError(
```
When y'all download this, make sure you delete the pycache files. Just best security practice.
How do I disable the music genre? If I generate txt2video + audio, why do I need to load an mp3 and an image?
Can GGUF be used for EasyPrompt offline?
Awesome work. I'm a huge fan. However, with this version, after it produces the prompt, it ends and doesn't try to create the video.
Hi, Sean! I still can't make it work correctly :(
I tried different "width" and "height" values and also a "frame count" of more than 300, as you advised me on Reddit (my name is Constantine).
The Transformers version is also the latest (5.3.0).
Can you share the arguments for running ComfyUI and also a list of pip packages with versions?
Custom audio + "Use audio for the LLM" (on or off) is not working. Can you check this for me?