Fixed the issue — it now works with GGUF models too.
Image Repair Flux.2-Klein9B
Hey everyone! This is my second LoRA. I took the original idea from Link
How it works:
I trained this using a LoRA subtraction method (clean originals vs. degraded/pixelated versions). Because it extracts the exact "delta" of the degradation, it only targets the artifacts and noise without altering the core of the image. It may not work on all images: on very complex, detailed patterns it can hallucinate just like the original Flux.2, so try a different seed. Also, as you can see from the examples, the original Flux.2's saturation is poorly controlled.
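The linked write-up has the full details of the subtraction method; conceptually, the "delta" can be thought of as a low-rank decomposition of the difference between the clean and degraded weights. A minimal NumPy sketch of that idea — the function name, SVD approach, and rank value are illustrative assumptions, not the author's actual script:

```python
import numpy as np

def extract_lora_delta(w_clean, w_degraded, rank=16):
    """Capture a weight difference as a rank-limited LoRA pair (sketch)."""
    delta = w_clean - w_degraded                 # the "delta" of the degradation
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    lora_B = u[:, :rank] * s[:rank]              # shape (out_dim, rank)
    lora_A = vt[:rank, :]                        # shape (rank, in_dim)
    return lora_B, lora_A                        # lora_B @ lora_A ≈ delta
```

If the true difference really is low-rank, the truncated SVD recovers it almost exactly; for real layer weights the truncation is lossy, which is the usual LoRA trade-off.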
Training details:
Latent: 512x512 (the LoRA also works at higher resolutions and other aspect ratios)
I tried a 1024 latent size for training, but OOM errors forced me to optimize the settings.
LoRA rank: 16
Steps: 2000
Grad_accum: 4
Source images: 227 (all created in 2k/4k with NanoBananaPro + some Flux.2-Klein9B)
These were the settings before the LoRA was weakened: it came out overtrained at first, so the released version is the original training scaled to 0.3 strength.
As with the author of the original idea, all settings were generated by a neural network (not ChatGPT), and Python scripts were used for configuration.
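Baking a 0.3 strength into the released file (rather than asking users to lower the loader weight) can be done by scaling one factor of each LoRA pair. A hedged sketch — the function name and key conventions are assumptions, not the actual release script:

```python
def rescale_lora(state_dict, strength=0.3):
    """Bake a global strength into a LoRA by scaling the 'up'/'B' factors."""
    scaled = {}
    for key, value in state_dict.items():
        if ".lora_up." in key or ".lora_B." in key:
            scaled[key] = value * strength   # scales the effective delta
        else:
            scaled[key] = value              # 'down'/'A' factors untouched
    return scaled
```

Scaling only one factor of each pair multiplies the product `B @ A` — and hence the applied delta — by exactly `strength`.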
Recommended Settings:
LoRA Weight: 1.0
Trigger Words: make image high quality
CFG: 1
Steps: 4-6
Feedback is welcome, and I hope you find this version useful.
I also think it can be improved further; more updates soon.
Do not use this LoRA with the ImageScaleToTotalPixels node. Some images come out completely broken with it.
Description
Added the training config for the rank-32 LoRA:
CFG = {
    "lora_rank": 32,
    "lora_alpha": 32,
    "learning_rate": 1e-4,
    "num_steps": 3000,
    "image_size": 512,
    "grad_accum": 4,
    "save_every": 500,
    "log_every": 25,
    "warmup_steps": 200,
    "max_grad_norm": 1.0,
}
Comments (5)
I looked into the "key not loaded" issue that causes this LoRA not to work properly on various versions of Klein 9B.
It's just a matter of the keys being named in a way that some LoRA loaders do not recognize.
Basically, you have stuff like: single_blocks.10.linear1.lora_up.weight when the lora loader expects something like diffusion_model.single_blocks.10.linear1.lora_B.weight.
You can simply rename the keys with a python script and it will work. If the author of this model needs some help doing this, I can show how.
fixed the problem, thanks for the help)
Thanks! I've got a few other LoRAs in my collection that do this, and I never realized how easy it was to fix.
Yes, the LoRA now shows no errors in the console when used :)
Unfortunately, this LoRA has virtually no effect on the image. The difference between "with LoRA" and "without LoRA" is negligible.
I'm pretty sure the point of this LoRA is to prevent iterative degradation?
When you edit an image with Flux 2 Klein 9B, there is a sliiiight color change even in areas of the image you don't touch. Then if you take that output and edit something else in it, those changes get introduced again.
After 1 iteration the yellowing/discoloration/depixelation is minimal, less than 1%, thanks to the alpha editing built into the model — which is already much better than most edit models out there — but it's still there.
After 3 edits, you're looking at 2023's piss-filter DALL-E on your hands xD
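If you want to put a number on that per-iteration drift, one quick way is to compare the untouched region before and after each edit. A sketch — the `pixel_drift` helper is hypothetical, not from any tool mentioned here:

```python
import numpy as np

def pixel_drift(img_a, img_b):
    """Mean absolute per-pixel difference as a fraction of the 8-bit range."""
    a = np.asarray(img_a, dtype=np.float32) / 255.0
    b = np.asarray(img_b, dtype=np.float32) / 255.0
    return float(np.mean(np.abs(a - b)))
```

A value that creeps upward across edit iterations is exactly the accumulating color shift described above.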