This model is specifically designed to simulate the real world. The training set was carefully curated from smartphone photos, giving the model a more realistic, true-to-life texture. The LoRA weight for Qwen-Image can be pushed as high as 2, but it is recommended not to exceed 1.8. Start from an initial value of 0.6; values between 0.9 and 1.2 generally work well. Below is a set of image comparisons.



Parameter Recommendations
Recommended Weight: 0.6 - 1.5 (default 0.9)
Recommended Size: 768/1024/1536 (default 1024)
CFG Scale: 3.0 - 3.5 (default 3.5)
Steps: 20 - 40 (with an acceleration model, this can be reduced to 4 - 8 steps)
Sampler: euler/heun
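If you script generation with 🤗 diffusers instead of a node-based UI, the recommendations above can be wired in roughly like this. This is a minimal sketch: the `Qwen/Qwen-Image` model id, the `iphone` adapter name, the local LoRA filename, and the `pick_weight` helper are my own assumptions, not part of the original post.

```python
def pick_weight(w: float, lo: float = 0.6, hi: float = 1.5) -> float:
    """Clamp a requested LoRA weight into the recommended 0.6-1.5 range."""
    return min(max(w, lo), hi)

def build_pipeline(lora_path: str = "./iphone.safetensors", weight: float = 0.9):
    """Sketch only: requires a GPU and a model download, so it is not run here."""
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
    pipe.load_lora_weights(lora_path, adapter_name="iphone")  # adapter name is arbitrary
    # Apply the LoRA at the (clamped) recommended strength.
    pipe.set_adapters(["iphone"], adapter_weights=[pick_weight(weight)])
    return pipe
```

`pick_weight` simply encodes the 0.6-1.5 recommendation: requesting a weight of 2.0 would be clamped down to 1.5.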
Cover Prompt Examples:
Shot from the side, an Asian woman sits in a red cinema seat, turning her head to smile at the camera. She wears a black leather jacket, her long, slightly curly hair draped over her shoulders. In her right hand she holds a large paper cup printed with a red-and-white pattern, a straw in it, and she is sipping the drink through the straw. In her left hand she holds a piece of food, which looks like a sandwich or burger wrapped in a napkin, about to take a bite.
The background is the dim interior of a movie theater. The huge screen shows the title "Reality Simulator"; faint lights glow above, and there are spots of light on the ceiling, possibly reflections or lighting effects from the projection. The surrounding seats are all dark red and arranged in arcs, creating a quiet, intimate viewing atmosphere.
🙈🙉🙊🐵OK! That's all
If you like my work, please give it a 👍.
Thank you for reading this far.
English is not my native language; the above was translated by AI. If anything reads oddly, feel free to correct me. Finally, I hope you have fun!
👉👉👉My CivitAI homepage👈👈👈
FAQ
Comments (23)
the samples look great, but the comparison photos look backwards - the non-LoRA images look more realistic when compared side-by-side
You're right. Those are from the previous version. I've updated to the new version. You can choose whichever you prefer.
Needed some adjustments before this would work with Hugging Face: we need to replace the "diffusion_model" prefix with "transformer" in the LoRA file:

```python
import safetensors.torch

state_dict = safetensors.torch.load_file("./models/lora/iphone.safetensors", device="cpu")
state_dict = {"transformer" + k.removeprefix("diffusion_model"): v for k, v in state_dict.items()}
safetensors.torch.save_file(state_dict, "./models/lora/iphone_hf.safetensors")
```

I don't understand what you mean.
I think he's saying that Hugging Face only allows loras to run on-site if they're prefixed with "transformer" instead of "diffusion_model". I don't know what prefix means in this context but I'm pretty sure that's the gist of it
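To make the "prefix" idea concrete: every tensor in the LoRA file is stored under a dotted key, and the renaming just swaps the first segment of each key. A minimal illustration (no safetensors needed; the keys are representative examples I made up, and the values are stand-in floats):

```python
# Keys as they might appear in the original LoRA file.
state_dict = {
    "diffusion_model.blocks.0.attn.lora_A.weight": 0.1,
    "diffusion_model.blocks.0.attn.lora_B.weight": 0.2,
}

# Swap the leading "diffusion_model" segment for "transformer".
renamed = {
    "transformer" + k.removeprefix("diffusion_model"): v
    for k, v in state_dict.items()
}

print(sorted(renamed))
# → ['transformer.blocks.0.attn.lora_A.weight', 'transformer.blocks.0.attn.lora_B.weight']
```

The tensor values are untouched; only the key names change, which is all the Hugging Face loader needs.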
big bro is working overtime, respect o7
I don't understand what you mean.
nice work! How many images did you use as your dataset for this LoRA?
40-90, I can't remember exactly. Thanks for the like!
Bro, thanks for sharing your masterpiece. I just realized all your models are that good, especially this Qwen one. But the colors in the showcase images are a bit too vivid; toning them down slightly would look more natural.
There does seem to be that issue. I'll check the dataset. Thanks, buddy!
any possibility to bring this one to Chroma?
If I remember correctly, Chroma uses the SDXL core. If that's the case, even if it were trained, the result wouldn't be this good.
@vjleoliu but it's pretty much Flux Schnell based, won't work anyway?
@alternative_Universe Yes, this LoRA is only effective for Qwen-image. In fact, Qwen-image's performance in terms of facial and finger rendering is comprehensively superior to other models. Therefore, I personally believe that the future development trend will move towards Qwen-image. That is to say, you will see more and more LoRAs and workflows in Qwen-image versions.
@vjleoliu hahah Yeah come on step up! I hate how realistic Qwen is
any chance of an SD1.5 & Dall E 3 version?
But seriously Yeah 100% Qwen Image and Wan T2I are untouchable at this point, -For photo realistic images they are already better than real photos.
Chroma is a huge downgrade
For the time being, if you are looking for a one-and-done model, then qwen and wan are undoubtedly the best choices.
Top quality Lora! - Plays lovely in a stack! Thanks for bringing this to the community!
thx bro! If you like it, please give it a like; it means a lot to me.
Come back and 2512 this! Please.