CivArchive
    Cameraman IC-LoRA for LTX2.3 22B - v1.0
    NSFW

    Let me share this IC-LoRA I just cooked, which is intended to replicate camera movements from reference videos.

    You can find more training details and download the LoRA here: https://huggingface.co/Cseti/LTX2.3-22B_IC-LoRA-Cameraman_v1

    You can download an example workflow from here: https://huggingface.co/datasets/Cseti/ComfyUI-Workflows/blob/main/ltx/2.3/ic-lora-cameraman/README.md

    As I mentioned in the HF repo, the model has its limitations, especially with more dynamic movements. In some cases the reference can also turn out to be "too strong", and in those cases it sadly ends up just generating the reference video. I believe these issues are due to the dataset size. If I find the time, I'll try retraining it with a more diverse dataset.

    Comments (11)

    Mikeflower · Apr 7, 2026 · 7 reactions

    👍🏻👍🏻👍🏻

    olivetty · Apr 9, 2026 · 2 reactions

    Woah! More IC LoRAs! Nice one Cseti!

    bennyboy_77 · Apr 9, 2026

    Thanks for posting this - a really useful lora and workflow.

    I've been experimenting with mixed success. At first, all my videos seemed to successfully depict the person from the input image but would merge the location from the motion video into the output video, creating some very odd results. If, for instance, the motion video was outside and the starting image for the video was inside, you would end up with a hybrid inside/outside video!

    Somewhat counter-intuitively, I found that dropping the "image strength" from 1.0 to (for example) 0.5 seemed to solve this issue, so the person and the environment are now exactly as in the input image and motion is taken from the motion video. Does this imply that "image strength" relates to the input video rather than the input image?

    For full disclosure, I have slightly tweaked the workflow, e.g. I'm running the dev model with the distilled LoRA at 0.6 strength (just by adding a simple LoRA node) and have added a multi-LoRA loader with one active LoRA, which seems to work within your workflow. I've also bypassed the Sage and Triton nodes as I don't have them running on my PC.
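    For anyone wanting to reproduce that tweak, here's a minimal sketch of what "adding a simple lora node" can look like in ComfyUI's API-format workflow JSON, using the stock LoraLoaderModelOnly node. The node ids and the LoRA filename are hypothetical placeholders for illustration, not values from the actual workflow.

    ```python
    # Minimal sketch: splicing a LoraLoaderModelOnly node between the model
    # loader and the sampler in ComfyUI's API-format workflow JSON.
    # Node ids ("10", "20") and the filename are hypothetical placeholders.
    import json

    workflow_fragment = {
        # ... node "10" is assumed to be the existing model loader ...
        "20": {
            "class_type": "LoraLoaderModelOnly",
            "inputs": {
                "model": ["10", 0],                        # MODEL output of node "10"
                "lora_name": "ltx-distilled.safetensors",  # hypothetical filename
                "strength_model": 0.6,                     # the 0.6 strength mentioned above
            },
        },
        # downstream nodes that took ["10", 0] should now take ["20", 0]
    }

    print(json.dumps(workflow_fragment, indent=2))
    ```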

    If you've got time to answer a couple of other questions, firstly: I noticed that it completes 8 steps then runs the result through the VAE to produce the final video. Is this completely skipping the extra 3 steps I'm used to seeing when producing most LTX2.3 videos (an upscale stage?)? Do you have any more information on this change to the usual process, e.g. the second stage isn't necessary, it causes OOM errors, etc.?

    Finally, a bit of a noob question. I don't yet have NVIDIA RTX VSR running on my PC, so I've switched the upscale process to "nearest exact" for now. If I do get VSR running, will that automatically boost the output resolution beyond the stated width and height values I've input into the workflow? At the moment, I'm getting the exact same pixel count on the output video as I've input in the workflow parameters.

    Thanks again for all your work and, if you get a moment to reply to any/all of these questions, that would be amazing.

    Cseti (Author) · Apr 10, 2026

    Hey, thanks for the kind words, and for testing the LoRA and sharing your experience with it!

    - Sometimes parts of the reference video merge/bleed into the output. It happens, unfortunately, mostly when there are people or objects in the foreground. I also found that some videos have a stronger effect and so bleed more; in some cases you even get the input video back as the output completely. I believe these problems happen due to the small dataset I used.

    - You mention that lowering the strength of the input image might help. That's interesting; I haven't noticed that.

    - The upscale stage is completely unrelated to this workflow. If you use the upscale pass, you send the already-generated latents to the sampler used with the upscaler model. In that case I recommend running the upscale pass without this IC-LoRA.

    - Regarding VSR, I don't think it really matters which method you use here, as your input video/image will probably be larger than the desired output, so it will downscale your input video/image either way.

    bennyboy_77 · Apr 10, 2026

    @Cseti Thanks for the reply. Yeah, weird about lowering the strength, but it seemed to completely resolve the problem for whatever reason (may be worth knowing, in case anyone else gets this issue).

    I managed to get what I was after, which is a smooth orbit of the subject so I can create a 3D figurine effect for viewing on my Quest VR headset (extracting separate frames from the video and offsetting them by a few frames for side-by-side stereoscopic frames, stitched back together via Stereo Photo Maker). I've used various WAN rotating/orbiting LoRAs before with minimal success, but your IC-LoRA works really well.
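    As an aside, for anyone who wants to try the same trick without Stereo Photo Maker, here is a minimal Python/OpenCV sketch of the frame-offset idea: pair each frame of the orbit with a frame a few steps later and stack them side by side as a stereo pair. The filenames, FPS fallback, and offset of 3 are illustrative assumptions, not the values bennyboy_77 used.

    ```python
    # Minimal sketch: fake a stereo pair from an orbiting video by pairing
    # frame i with frame i+OFFSET and writing the pair out side by side.
    import cv2
    import numpy as np

    OFFSET = 3  # "a few frames"; tune for comfortable parallax

    cap = cv2.VideoCapture("orbit.mp4")      # hypothetical input path
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0  # fall back if FPS is unreadable
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()

    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter("sbs.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w * 2, h))
    for left, right in zip(frames, frames[OFFSET:]):
        out.write(np.hstack([left, right]))  # left eye | right eye
    out.release()
    ```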

    I noticed in your Hugging Face notes that some of the data you trained on involved orbiting the subject, so that might account for why it works so well. Thanks again.

    isit · Apr 10, 2026 · 1 reaction

    I don't see your workflow on your Hugging Face? It's just a README with the previews again?

    sy0ww4bb1984 · Apr 13, 2026

    Amazing... wow... Combine this at 0.6-1.0 strength with the union control LoRA at 0.3 strength and do video-to-video without using a ControlNet. Just send the video directly into the IC-LoRA node. This LoRA makes that work.

    Cseti (Author) · Apr 13, 2026

    Quite an interesting use case. I'm not sure I understand exactly what you do:

    - So you have a reference video that provides the camera motion

    - You have an input video that you'll manipulate. Do you connect this to the image input of the IC node, or do you encode it into latents with the VAE encode node?

    - You also connect the union LoRA into the model pipe.

    Do you have any example outputs to share?

    sy0ww4bb1984 · Apr 13, 2026

    @Cseti Hi, my video uploads are my LTX tests. The latest ones use video motion transfer as I describe here. I'm cleaning up the workflow and will put it up for others to look at.

    The concept is actually quite simple but hard to explain.

    The IC-LoRA union control is supposed to use ControlNets to do video-to-video via the 🅛🅣🅧 Add Video IC-LoRA Guide Advanced node, but if you instead send the original RGB video (resized) to the node, it accepts it. Using the add-guide node you can add start/end images to the sequence for your reference subject. By adding the union control at 0.3-0.8, this setup attempts to change the video into your image. It mostly fails. But add the Cameraman LoRA at 0.6 to 1.0 and it suddenly works.
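    For the "(resized)" step, a minimal sketch of what that preprocessing might look like, assuming OpenCV; the file paths and the 768x512 target are illustrative assumptions, and the real target must match the workflow's width/height.

    ```python
    # Minimal sketch: resize the raw RGB video to the generation resolution
    # before feeding it into the guide node. Paths and resolution are
    # hypothetical; match TARGET_W/TARGET_H to the workflow's width/height.
    import cv2

    TARGET_W, TARGET_H = 768, 512

    cap = cv2.VideoCapture("input_rgb.mp4")  # hypothetical source video
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
    out = cv2.VideoWriter("input_resized.mp4",
                          cv2.VideoWriter_fourcc(*"mp4v"), fps,
                          (TARGET_W, TARGET_H))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(cv2.resize(frame, (TARGET_W, TARGET_H),
                             interpolation=cv2.INTER_AREA))
    cap.release()
    out.release()
    ```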

    sy0ww4bb1984 · Apr 20, 2026

    https://civitai.red/models/2550125/ltx-23-simple is an example flow using this LoRA for video-to-video. It mostly doesn't work without this LoRA :P

    LORA
    LTXV 2.3

    Details

    Downloads: 908
    Platform: CivitAI
    Platform Status: Available
    Created: 4/7/2026
    Updated: 5/5/2026
    Deleted: -

    Files

    LTX2.3-22B_IC-LoRA-Cameraman_v1_10500.safetensors