How to Use
This is a style LoRA, designed to be the base layer of your LoRA stack. It creates the foundational aesthetic of realism, upon which you can add character or concept LoRAs.
Note: The included ZIP archive contains both the high-noise and low-noise LoRA variants, along with our recommended ComfyUI workflows.
Trigger Word: Instacam
Recommended Strength: 1.0. Start here and adjust in small increments.
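If you are installing the archive by hand, the layout is simple: both .safetensors variants from the ZIP go into ComfyUI's loras folder. A minimal shell sketch of that step — the archive name, file names, and paths below are placeholders standing in for a real install, not the actual names in the download:

```shell
# Simulate the download: build a scratch ZIP holding the two
# LoRA variants (placeholder names; yours will differ).
workdir=$(mktemp -d)
mkdir -p "$workdir/payload"
touch "$workdir/payload/Instacam_high_noise.safetensors" \
      "$workdir/payload/Instacam_low_noise.safetensors"
(cd "$workdir/payload" && python3 -m zipfile -c ../instacam.zip \
      Instacam_high_noise.safetensors Instacam_low_noise.safetensors)

# The actual install step: extract both variants into
# ComfyUI's models/loras folder.
loras_dir="$workdir/ComfyUI/models/loras"
mkdir -p "$loras_dir"
python3 -m zipfile -e "$workdir/instacam.zip" "$loras_dir"
ls "$loras_dir"
```

After this, both the high-noise and low-noise files should be selectable in ComfyUI's LoRA loader nodes.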
I would also like to thank Danrisi, who originally taught us how to train LoRAs and helped make our work possible.
Description
What's New in v2.3: This is a focused update aimed at enhancing prompt control and aesthetic consistency. All existing features of v2 remain, now with greater precision.
Superior Prompt Adherence: The entire dataset has been meticulously re-captioned using a more natural and detailed language structure. The LoRA now understands your prompts with higher accuracy.
Refined Aesthetics: We've curated the dataset to reduce the frequency of unwanted elements like excessive finger rings and belly piercings.
Enhanced Cohesion: Further dataset cleanup has improved overall image quality and consistency, reinforcing the hyper-realistic style.
FAQ
Comments (45)
Did you include non-white people in the dataset and test the outputs? (Genuine question, not trying to start an argument about identity here.)
Yes
Anyone else having the low-noise LoRA not show up in Comfy? Only the high-noise one appears, despite both files being in the folder.
Same issue here. No idea why.
Figured out why: there's a typo in the filename. Search for "Insat" instead of "Insta".
fonso2 omg, thank you, lmao.
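For anyone else hitting this, a quick sketch of finding and renaming the misspelled file from a shell — the filename below is a made-up stand-in for whatever your download is actually called:

```shell
# Scratch loras folder containing a file with the "Insat" typo
# (placeholder name).
loras=$(mktemp -d)
touch "$loras/Insatcam_low_noise.safetensors"

# Find any file containing the typo and rename "Insat" -> "Insta".
for f in "$loras"/*Insat*; do
  new=$(printf '%s' "$f" | sed 's/Insat/Insta/')
  mv "$f" "$new"
done
ls "$loras"
```

After the rename, searching for "Insta" in ComfyUI's LoRA loader should show both variants.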
Why is the included workflow only for t2i? Where's the t2v workflow?
Join our discord and find it in #resources: https://discord.gg/instara
Sorry, we forgot to include it!
VLRevolution Should be easy enough to correct and add now.
galaxytimemachine They're trying to have people add into their discord and start scamming them to pay money for crap
Miramir Which is exactly why I'm not joining it.
The base model naming is odd for v2.3 (an extra "-A14B"), so I can't attribute offsite-generated videos to this LoRA unless I make a post here.
I attributed my latest gen to v2.0 instead.
Good LoRA, but you are using browsewrap agreements, which are not enforceable under US law. A 'TERMS OF USE.txt' buried in a zip, or terms just pasted into the Civitai description, without a click-through or an explicit 'by downloading you agree' checkbox at the download button, is what's called browsewrap.
I would suggest hosting the model with a click-through if you want a binding contract.
yup.
On the other hand, the court in Nguyen v. Barnes & Noble held that Barnes & Noble’s browsewrap was unenforceable despite the fact that the hyperlink was prominently placed next to the buttons users must click in order to complete online purchases.
All in all, today’s courts are highly unlikely to find browsewrap agreements enforceable unless the parties can establish actual or inquiry notice. But establishing such notice often requires more than just a simple implementation of the browsewrap agreement. Instead, the website owner must show that it did something additional to provide the user with notice of the online agreement and collected their affirmative assent. As a result, the likelihood of a court enforcing a browsewrap is tenuous at best.
does that work in i2v?
What's with the garbage license?
> Restricted Commercial Use
lol yeah pretty lame imo, made me not want to download it 😂
Can't even use our generated content?? wth xD... I could kind of see everything else, but not even the content our own prompts make? Gonna be a pass because I'm trying to monetize some AI influencers.
@Yetiterror Same, it's pretty pointless to use if we can't even monetize AI influencers with it, which is the whole point.
Does it work with Qwen?
good
Give her a cellphone that can take at least 2-megapixel photos.
The Seed Generator node is missing. How can I fix it? Is this a bug?
Just use another node for that, or disconnect the seed generator; it only gives you more control over the seed. If you want to replace it with something else, I use the "easy seed" node from: https://github.com/yolain/ComfyUI-Easy-Use
How is everyone using this? I cannot get Triton to work with Python 3.12, and it hits a runtime error when it reaches TorchCompileModelWanVideoV2. Any help would be appreciated.
Triton isn't necessary to use this. Just disable the node and it will work fine. Disable the sageattention node too if you have it.
I was having the same problem. I think the latest ComfyUI uses Python 3.13, and Triton doesn't work with it. I had to use this installer, which conveniently installs Triton and SageAttention as well as the ComfyUI Manager and some other things: ComfyUI auto installer WAN | SageAttention | 50XX cards compatible | - v3.2 | Wan Video Other | Civitai
I'm not advocating this, but I'm wondering how AI model creators would KNOW if you did use some of it commercially? Genuinely curious.
@clevnumb Unless they get their hands on the metadata they've got no way of knowing.
I ran into the same issue — Triton doesn’t yet support Python 3.12 properly, which is why you’re seeing runtime errors. The fix for me was to set up a proper Python 3.12.7 venv inside ComfyUI, then manually copy over the Include headers and python312.lib into the portable embed, so Torch and Triton could compile correctly. After that, I reinstalled dependencies (torch==2.8.0+cu129, triton-windows, etc.) directly inside the venv, then ran pip install -r requirements.txt to catch missing modules like scipy, psutil, and einops.
Once I did that, everything — including TorchCompileModelWanVideoV2 — ran fine.
@Malikona Yeah, I ran into that too — Triton and SageAttention gave me headaches on 3.13. I ended up setting things up with Python 3.12.7 and it’s running solid now. Did you have any issues with the sampler and scheduler nodes though? 🤔 I had to swap in different ones to get everything working smoothly.
Wasted several hours of my life on this, so I'll save the time for others:
It's mentioned in the documentation: GitHub - woct0rdho/triton-windows (fork of the Triton language and compiler for Windows support and easy installation).
Just unpack these two folders into your Python folder (python_embeded in the case of the portable version).
That's all ¯\_(ツ)_/¯
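In other words, per the triton-windows instructions referenced above, you copy the `include` and `libs` folders into the embedded Python directory so Triton can find the headers and import library when compiling. A sketch of that copy, with scratch directories standing in for the real paths and placeholder files standing in for the real folder contents:

```shell
# Simulate the two folders shipped for Triton on Windows
# (contents are placeholders, not the real files).
src=$(mktemp -d)
mkdir -p "$src/include" "$src/libs"
touch "$src/include/Python.h" "$src/libs/python312.lib"

# The actual step: copy both folders into ComfyUI's embedded
# Python directory (python_embeded for the portable version).
embed=$(mktemp -d)/python_embeded
mkdir -p "$embed"
cp -r "$src/include" "$src/libs" "$embed/"
ls "$embed"
```

With `include` and `libs` in place next to the embedded interpreter, Torch compile nodes that invoke Triton should stop failing on missing headers.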
Good
Why did you enable "add_noise" for the KSampler of the second model (low noise) ?
The official workflow doesn't do that
This LoRA is botted and astroturfed here and on Reddit: mass bot-downvoting campaigns against anyone who criticizes them. Sleazy as hell. Blocked. Do not trust.
Also the proprietary license. And it doesn't even look very good judging from the videos they posted.
@gxdcqagntkwniobpfq416 That license bollox is absolutely fkn hillarious, like anyone gives a fk and as if they would be able to prove shit or do anything about it... fkn clowns aahahah
Could you please make this for the wan2.2 ti2v 5b model?
Came to ask the same thing. Nobody has a 5b Insta-model-type lora
This model is insane! 🥵🤩
Finding it difficult to create Asian, Indian, and Black women with dark skin. They all look a bit "whitewashed". Amazing LoRA though; hoping someone will give me tips on how to improve my generations. Thanks!
Yup, this is why we switched mostly to QWEN-IMAGE. WAN has trouble with diversity; our QWEN model is posted on our profile.