CivArchive

    These LoRAs were extracted from three sources:

    - Official = the original SRPO checkpoint (Flux.1-Dev): tencent/SRPO

    - RockerBOO = community checkpoint: rockerBOO/flux.1-dev-SRPO (these LoRAs have not yet been published here on CivitAI, but they are available on Hugging Face.)

    - R&Q = community checkpoint (Refined and Quantized): wikeeyang/SRPO-Refine-Quantized-v1.0

    They are designed as modular, lightweight adaptations you can mix with other LoRAs, reducing storage needs and enabling fast experimentation.

    Each extraction is available in multiple ranks: 8, 16, 32, 64, or 128. Lower ranks are lighter and faster to use; higher ranks preserve more of the original checkpoint's detail.
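    To give a rough sense of the size tradeoff, the parameter count of a LoRA pair grows linearly with rank. A minimal sketch (the 3072 width is Flux's transformer hidden size; `lora_param_count` is an illustrative helper, not part of any repository here):

    ```python
    def lora_param_count(in_features: int, out_features: int, rank: int) -> int:
        """Parameters in one LoRA pair: down-projection A (rank x in_features)
        plus up-projection B (out_features x rank)."""
        return rank * in_features + out_features * rank

    # One 3072x3072 linear layer at each offered rank, as a fraction of
    # the full layer's parameters:
    full = 3072 * 3072
    for r in (8, 16, 32, 64, 128):
        pct = 100 * lora_param_count(3072, 3072, r) / full
        print(f"rank {r:3d}: {pct:4.1f}% of the full layer")
    ```

    Rank 128 stores 16x the parameters of rank 8 for the same layer, which is why the higher-rank files are noticeably larger on disk.
    
    
    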


    ⚠️ Note: Depending on the quantized model you choose as a base, you may need to adjust the LoRA strength. I personally had very good results with flux1-dev-SRPO-R&Q at rank 128. Sometimes it may be necessary to increase the strength above 1.0, for example to 1.1 or 1.2.

    Keep in mind that the required strength can vary depending on the quantized model you use. For example, my tests were done with a GGUF Q8 build, but other Flux Dev quantized versions may need different adjustments.
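    Mechanically, LoRA strength is just a scalar applied to the low-rank delta before it is added to the base weights, which is why raising it above 1.0 can compensate for a quantized base. A minimal numpy sketch (the alpha/rank scaling shown is the common convention, but exact conventions vary between trainers and loaders):

    ```python
    import numpy as np

    def apply_lora(W, A, B, strength=1.0, alpha=None):
        """Return W + strength * (alpha / rank) * (B @ A)."""
        rank = A.shape[0]
        scale = (alpha if alpha is not None else rank) / rank
        return W + strength * scale * (B @ A)

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 64))   # base layer weights (toy size)
    A = rng.normal(size=(8, 64))    # down-projection, rank 8
    B = rng.normal(size=(64, 8))    # up-projection

    W_10 = apply_lora(W, A, B, strength=1.0)
    W_12 = apply_lora(W, A, B, strength=1.2)  # stronger effect, e.g. over a quantized base
    ```

    Because the delta enters linearly, strength 1.2 simply scales the same adjustment by 20%; it does not change which directions in weight space the LoRA touches.
    
    
    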

    The recommended config for evaluating differences between models is:

    • Sampler: Euler

    • Scheduler: Beta

    • Steps: 50

    • CFG: 1.0

    This setup makes it easier to notice the differences across models.
    If you want results that look closer between them, you can instead try:

    • Sampler: Euler

    • Scheduler: Beta

    • Steps: 25

    • CFG: 1.0

    These settings still need further testing, but so far they’ve shown promising consistency.


    These LoRAs are fully modular — you can mix them with other LoRAs, adjust their strength as you wish, or even merge them into other models.
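    Mixing and merging are both linear operations: each LoRA contributes its own scaled delta to the same base weights. A sketch of the idea, assuming all LoRAs target layers of the same shape (the helper `merge_loras` is illustrative, not an actual tool from this repository):

    ```python
    import numpy as np

    def merge_loras(W, loras):
        """Fold several (A, B, strength) LoRA deltas into base weights W."""
        out = W.copy()
        for A, B, strength in loras:
            out += strength * (B @ A)
        return out

    rng = np.random.default_rng(1)
    W = rng.normal(size=(32, 32))
    # Two hypothetical LoRAs at different ranks and strengths:
    srpo  = (rng.normal(size=(8, 32)), rng.normal(size=(32, 8)), 1.0)
    style = (rng.normal(size=(4, 32)), rng.normal(size=(32, 4)), 0.6)

    W_merged = merge_loras(W, [srpo, style])
    ```

    Since addition commutes, the order in which LoRAs are applied does not matter when merging into weights; interactions only arise because both deltas act on the same layers.
    
    
    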

    Other models not posted here can be found on Hugging Face: HERE


    🙏 Credits & License

    • SRPO by Tencent → tencent/SRPO

    • Flux.1-Dev by Black Forest Labs (licensed under the FLUX.1 [dev] Non-Commercial License)

    ⚠️ Important Notice
    These LoRAs are provided for research and personal non-commercial use only, in compliance with the licenses of SRPO and Flux.1-Dev.
    This project is an independent extraction and adjustment of LoRAs — it is not affiliated with or endorsed by Tencent or Black Forest Labs.


    Comments (28)

    lowkeylayers · Sep 16, 2025 · 5 reactions

    If you don't mind, can you please make it compatible with the Nunchaku version? It is giving an "incompatible keys detected" error. Normal Flux works great, though. And thanks for the LoRA!

    jaryxx6092915 · Sep 16, 2025

    Yes, we need this for Nunchaku.

    NRDX (Author) · Sep 16, 2025 · 1 reaction

    I'll try to apply the patch for Nunchaku; I thought ComfyUI was already doing this conversion automatically when loading the model.

    NRDX (Author) · Sep 16, 2025 · 1 reaction

    I converted some, if you want to test whether they work; I've added them here on CivitAI.
    Alissonerdx/flux.1-dev-SRPO-LoRas at main

    lowkeylayers · Sep 16, 2025

    @NRDX Yup, it's working now, thank you very much!

    lowkeylayers · Sep 16, 2025

    @NRDX After testing: it works sometimes, and other times it throws this error.

    NRDX (Author) · Sep 16, 2025

    @alex9692 I don't see the error; what you sent is just HTML.

    lowkeylayers · Sep 17, 2025

    !!! Exception during processing !!! Traceback (most recent call last):
      File "...\ComfyUI\execution.py", line 496, in execute
      File "...\ComfyUI\execution.py", line 315, in get_output_data
      File "C:\ComfyUI\custom_nodes\comfyui-lora-manager\py\metadata_collector\metadata_hook.py", line 165, in async_map_node_over_list_with_metadata
      File "...\ComfyUI\execution.py", line 289, in _async_map_node_over_list
      File "...\ComfyUI\execution.py", line 277, in process_inputs
      File "...\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 835, in sample
      File "...\ComfyUI\comfy\samplers.py", line 1036, in sample
      File "...\ComfyUI\comfy\samplers.py", line 1004, in outer_sample
      File "...\ComfyUI\comfy\samplers.py", line 987, in inner_sample
      File "C:\ComfyUI\custom_nodes\ComfyUI-TiledDiffusion\utils.py", line 34, in KSAMPLER_sample
      File "...\ComfyUI\comfy\samplers.py", line 759, in sample
      File "...\ComfyUI\comfy\k_diffusion\sampling.py", line 200, in sample_euler
      File "...\ComfyUI\comfy\samplers.py", line 408, in __call__
      File "...\ComfyUI\comfy\samplers.py", line 960, in __call__
      File "...\ComfyUI\comfy\samplers.py", line 970, in predict_noise
      File "...\ComfyUI\comfy\samplers.py", line 388, in sampling_function
      File "...\ComfyUI\comfy\samplers.py", line 333, in _calc_cond_batch
      File "...\ComfyUI\comfy\model_base.py", line 197, in _apply_model
      File "C:\ComfyUI\custom_nodes\ComfyUI-nunchaku\wrappers\flux.py", line 221, in forward
        composed_lora = compose_lora(lora_to_be_composed)
      File "C:\ComfyUI\.venv\Lib\site-packages\nunchaku\lora\flux\compose.py", line 89, in compose_lora
        assert not is_nunchaku_format(lora)
    AssertionError

    this is the error
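    For context on that final assertion: nunchaku's `compose_lora` rejects a LoRA state dict that is already in its own converted format, so mixing converted and unconverted files can fail intermittently depending on which one reaches it. A hedged sketch of the kind of key-layout heuristic involved (the key names and the helper `looks_like_standard_lora` are illustrative assumptions, not nunchaku's actual implementation):

    ```python
    def looks_like_standard_lora(state_dict_keys):
        """Heuristic: a standard Flux LoRA exposes paired lora_A/lora_B
        (or lora_down/lora_up) tensors per targeted layer.
        Key names here are illustrative; nunchaku's real check differs."""
        markers = ("lora_A", "lora_B", "lora_down", "lora_up")
        return any(m in k for k in state_dict_keys for m in markers)

    # A standard LoRA key layout passes the check...
    assert looks_like_standard_lora(
        ["transformer.blocks.0.attn.to_q.lora_A.weight"])
    # ...while an already-fused/quantized layout does not.
    assert not looks_like_standard_lora(
        ["transformer.blocks.0.attn.qweight"])
    ```
    
    
    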

    lowkeylayers · Sep 17, 2025

    The syntax and formatting got messed up;
    here's the pastebin link

    NRDX (Author) · Sep 17, 2025

    @alex9692 The LoRA file you are using is the one I converted to Nunchaku, right?

    lowkeylayers · Sep 17, 2025

    @NRDX Yes, specifically the LoRAs in the nunchaku directory on Hugging Face.
    The SRPO LoRAs from rockerBOO and the official ones work fine, so I'm using those.
    The nunchaku ones work most of the time but occasionally throw an error.
    Maybe the error is on my side, but removing the LoRA fixes the issue when it happens.

    NRDX (Author) · Sep 17, 2025

    @alex9692 I will investigate the problem. I used the converter from their documentation, but it may be outdated; I'll look for a solution, though I don't know if another one exists.

    subotaplaya565 · Oct 19, 2025

    It's a weird one. Don't rescale CFG worked for me. I still get the same error, but I just hit run again and it works the second time. Strange. Nunchaku Turbo + Nunchaku SRPO as the only loras.

    NRDX (Author) · Sep 16, 2025 · 8 reactions

    I converted some of the models to be compatible with Nunchaku, but I haven't tested them; feel free to try them out.

    Alissonerdx/flux.1-dev-SRPO-LoRas at main

    jaryxx6092915 · Sep 17, 2025 · 2 reactions

    It's not working.

    NRDX (Author) · Sep 17, 2025

    @jaryxx6092915 

    jaryxx6092915 · Sep 18, 2025

    The main base version (not in the nunchaku folder) works with Nunchaku, but the R&Q version fails.

    NRDX (Author) · Sep 18, 2025

    @jaryxx6092915 Yes, I noticed that when I convert the R&Q version to Nunchaku, an error occurs due to extra layers. I'll investigate why; it could be related to the Refine step, or the model may have an extra layer, I don't know yet.

    flo11ok874 · Sep 23, 2025

    @NRDX Any chance you will try again with R&Q version for Nunchaku?

    NRDX (Author) · Sep 23, 2025 · 1 reaction

    @flo11ok874 Yes, I intend to fix this, but first I need to see how others do it correctly.

    catmaxzj · Sep 25, 2025

    The Nunchaku LoRA needs to be used alone; it has issues when combined with other LoRAs. However, the non-Nunchaku LoRAs also work in a Nunchaku workflow, but they need to be paired with a 4-step Hyper LoRA for decent results.

    ArchAngelAries · Sep 16, 2025

    So, if I'm understanding correctly, we would use these with Flux 1 D to achieve similar results to SRPO?

    NRDX (Author) · Sep 16, 2025

    To get results closer to SRPO, use the official one. If you look at the results in their repository, the official one sometimes has a slightly strange effect, so a lot of people preferred the result of R&Q (Refine and Quantize); that's why both exist. There is also another repository I used to extract a LoRA, but I didn't post those models here on CivitAI; they are only on Hugging Face.

    FemBro · Oct 14, 2025

    @NRDX Just thought it was interesting: this was the first time I'd heard of this, and when I compared the results my preference was R&Q, having seen the images first and only then read which was which. I always wonder what sort of hidden psychology there is behind preference, lol.

    NRDX (Author) · Oct 14, 2025

    @FemBro For me the best is the RockerBOO version

    zeus_onl · Sep 29, 2025 · 3 reactions

    Great work.

    The SRPO 256 LoRA can also be used with Flux-KREA without any problems. Strength 1.0. Sampler dpm++ 2m / Scheduler beta57 / 25 steps / CFG 3.0 / Denoise 1.0 / Flux Guidance 5.0.

    Damn good realistic images. Thanks for your work.

    nokoda123 · Oct 6, 2025 · 1 reaction

    Thanks!
    Nunchaku r16 working fine for me!

    Alan_Turing_HD · Nov 23, 2025 · 1 reaction

    I really liked the LoRA. I made some images, I hope you like them.

    LORA
    Flux.1 D
    by NRDX

    Details

    Downloads
    170
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/16/2025
    Updated
    4/28/2026
    Deleted
    -

    Files

    srpo_128_base_oficial_model_fp16.safetensors

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.