These LoRAs were extracted from three sources:
- Official = the original SRPO (Flux.1-Dev): tencent/SRPO
- RockerBOO = community checkpoint: rockerBOO/flux.1-dev-SRPO (for now these LoRAs have not been published here on Civitai, but they are on Hugging Face)
- R&Q = community checkpoint (quantized/refined): wikeeyang/SRPO-Refine-Quantized-v1.0
They are designed as modular, lightweight adaptations you can mix with other LoRAs, reducing storage and enabling fast experimentation. Each is available in multiple ranks: 8, 16, 32, 64, or 128. Lower ranks are lighter and faster to use; higher ranks preserve more detail.
⚠️ Note: Depending on the quantized model you choose as a base, you may need to adjust the LoRA strength. I personally had very good results with flux1-dev-SRPO-R&Q r128. Sometimes it may be necessary to raise the strength above 1.0, to 1.1 or 1.2 for example.
Keep in mind that the required strength can vary with the quantized model you use: my tests were done with a GGUF Q8 build, and other quantized Flux Dev builds may need different adjustments.
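Mechanically, the strength slider just scales the low-rank update that gets added to each frozen base weight, which is why a value like 1.1 or 1.2 can compensate for a quantized base that mutes the effect. A minimal numpy sketch (all names, shapes, and the alpha convention here are illustrative assumptions, not read from the actual files):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 8           # hypothetical dims; the real ranks here are 8-128
W = rng.standard_normal((d_out, d_in))  # frozen base weight (possibly quantized)
A = rng.standard_normal((rank, d_in))   # LoRA down-projection
B = rng.standard_normal((d_out, rank))  # LoRA up-projection
alpha = rank                            # common convention: alpha == rank

def apply_lora(W, A, B, alpha, strength):
    # strength is the slider you set in the UI (e.g. 1.0, 1.1, 1.2);
    # it linearly scales the low-rank delta added to the base weight
    return W + strength * (alpha / A.shape[0]) * (B @ A)

W10 = apply_lora(W, A, B, alpha, 1.0)
W12 = apply_lora(W, A, B, alpha, 1.2)
# the delta grows linearly with strength
assert np.allclose(W12 - W, 1.2 * (W10 - W))
```

So raising the strength from 1.0 to 1.2 simply adds 20% more of the same SRPO delta on top of the base weights.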
The recommended config for evaluating differences between models is:
Sampler: Euler
Scheduler: Beta
Steps: 50
CFG: 1.0
This setup makes it easier to notice the differences across models.
If you want results that look closer between them, you can instead try:
Sampler: Euler
Scheduler: Beta
Steps: 25
CFG: 1.0
These settings still need further testing, but so far they’ve shown promising consistency.
These LoRAs are fully modular — you can mix them with other LoRAs, adjust their strength as you wish, or even merge them into other models.
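Because each LoRA contributes an additive low-rank delta, mixing several at different strengths is just a sum of those deltas, which is also why the load order doesn't matter. A hypothetical numpy sketch (dims, ranks, and the helper are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r1, r2 = 32, 8, 16                     # hypothetical dims and ranks
W = rng.standard_normal((d, d))           # frozen base weight
A1, B1 = rng.standard_normal((r1, d)), rng.standard_normal((d, r1))
A2, B2 = rng.standard_normal((r2, d)), rng.standard_normal((d, r2))

def mix(W, loras):
    # each entry is (A, B, strength); deltas simply add, so LoRAs stack
    out = W.copy()
    for A, B, s in loras:
        out = out + s * (B @ A)
    return out

W_mixed = mix(W, [(A1, B1, 0.8), (A2, B2, 1.1)])
# order doesn't matter: the combination is a plain sum of deltas
W_swapped = mix(W, [(A2, B2, 1.1), (A1, B1, 0.8)])
assert np.allclose(W_mixed, W_swapped)
```

Merging a LoRA permanently into a checkpoint is the same operation, just baked into the saved weights instead of applied at load time.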
Other models not posted here can be found on Hugging Face: HERE
🙏 Credits & License
SRPO by Tencent → tencent/SRPO
Flux.1-Dev by Black Forest Labs (licensed under the FLUX.1 [dev] Non-Commercial License)
⚠️ Important Notice
These LoRAs are provided for research and personal non-commercial use only, in compliance with the licenses of SRPO and Flux.1-Dev.
This project is an independent extraction and adjustment of LoRAs — it is not affiliated with or endorsed by Tencent or Black Forest Labs.
Comments (28)
If you don't mind, can you please make it compatible with the Nunchaku version? It gives an "incompatible keys detected" error. Normal Flux works great, though. And thanks for the LoRA!
Yes, we need this for Nunchaku.
I'll try to apply the patch for Nunchaku; I thought ComfyUI was already doing this conversion automatically when loading the model.
I converted some if you want to test whether they work; then I added them here on Civitai.
Alissonerdx/flux.1-dev-SRPO-LoRas at main
@NRDX Yup it’s working now thank you very much!
@NRDX after testing, it works sometimes, and other times it throws this error
@alex9692 I don't see the error; what you sent is just HTML.
This is the error. The syntax and formatting got messed up, so here is the pastebin link.
@alex9692 The LoRA file you're using is the one I converted to Nunchaku, right?
@NRDX Yes, specifically the LoRAs in the nunchaku directory on Hugging Face.
The SRPO LoRAs from rockerBOO and the ones named "official" work fine, though, so I'm using those.
The Nunchaku ones mostly work, but they rarely throw an error.
Maybe the error is on my side, but removing the LoRA fixes the issue when it happens.
@alex9692 I'll investigate the problem. I used the converter from their documentation, but it may be outdated. I'll look for a solution, though I don't know if another one exists.
It's a weird one. "Don't rescale CFG" worked for me. I still get the same error, but I just hit run again and it works the second time. Strange. Nunchaku Turbo + Nunchaku SRPO as the only LoRAs.
I converted some of the models to be compatible with the Nunchaku but I haven't tested it, feel free to try it out.
Alissonerdx/flux.1-dev-SRPO-LoRas at main
It's not working.
@jaryxx6092915
The main base version (not in the nunchaku folder) can work with Nunchaku, but the R&Q version fails.
@jaryxx6092915 Yes, I noticed that when I convert the R&Q version to Nunchaku, an error occurs due to extra layers. I'll investigate why. It could be something related to Refine, or maybe they added another layer to the model; I don't know.
@NRDX Any chance you will try again with R&Q version for Nunchaku?
@flo11ok874 Yes, I intend to fix this, but I need to see how others do it correctly.
The Nunchaku LoRA needs to work alone; there will be issues with other LoRAs. However, the non-Nunchaku LoRAs can also work in a Nunchaku workflow, but they need to be paired with a 4-step Hyper LoRA for decent results.
So, if I'm understanding correctly, we would use these with Flux 1 D to achieve similar results to SRPO?
To get a result closer to SRPO, use the official one. If you look at the results in their repository, the official one sometimes has a slightly strange effect, and a lot of people preferred the result of R&Q (Refine and Quantize); that's why both exist. There is also another repository I used to extract a LoRA, but I didn't post those models here on Civitai; they are only on Hugging Face.
@NRDX Just thought it was interesting: this was the first time I'd heard of this, and when I compared the results my preference was R&Q, having seen the images first and then read which is which. I always wonder what sort of hidden psychology there is behind preference, lol.
@FemBro For me the best is the RockerBOO version
Great work.
The SRPO 256 LoRA can also be used with Flux-KREA without any problems. Strength 1.0 / Sampler dpm++ 2m / Scheduler beta57 / 25 steps / CFG 3.0 / Denoise 1.0 / Flux Guidance 5.0.
Damn good realistic images. Thanks for your work.
Thanks!
Nunchaku r16 working fine for me!
I really liked the LoRA. I made some images, I hope you like them.