Flux 2 was not what we wanted or needed, but Klein appears to be what we deserved!
Let's unlock the full NSFW potential of this model!
The FP8 version of 1.0 got messed up in the upload, TWICE! I've re-uploaded, please re-download.
The images lack metadata; it seems CivitAI can't handle image metadata for Flux 2 Klein yet :(
Description
Additional models added:
Alcaitiff's [KLEIN 9b] Unchained XXX
razzz's Realism Engine Klein v2
l226's Klein Anatomy / Quality Fixer
Lorian's NSFW - Flux Klein (no face change)
You guys are legends!
FAQ
Comments (56)
Hey, what is it I am not seeing?
While the FP4 version is working OK in ComfyUI with the workflow from your images, the FP8 version produces only garbage. Latest ComfyUI on an NVIDIA RTX 3090 with 24 GB VRAM.
Hmm, both work for me. I'll try downloading from CivitAI and see if there's some file corruption on the upload.
You're right, the uploaded version produces garbage, I don't understand how... it still outputs something, but it is crap.
I'll try re-uploading! Thanks for the heads-up, @kamiwa
The FP8 version of 1.0 got messed up in the upload. I've re-uploaded, please re-download if you got it early!
editing examples?
Didn't attach any, but it works fine for whatever you want to do; editing seems like a strength of this model :)
It is good.
Looking at different prompts and scenes I can say that in my eyes, this is a step up from many other models. Plenty of texture without getting gritty.
It does well in low-light scenarios, and it doesn't clip in bright compositions. I even pulled out histograms to confirm how things looked. I think my previous go-to checkpoint has been replaced.
Well done.
is this klein base or the fast one?
Not sure what you mean.
@6tZ I think he meant: is this the base version of the Klein 9B variant?
Do you still have BF16/FP16 in order to create an mxfp8? Quality and speed are really awesome with this new quant.
Yeah, that would be awesome!
No BF16 here, I merged on top of the fp8. Will try with BF16 for a new version.
@6tZ have a look at the mxfp8, about the same size as fp8, very fast on RTX 50xx, and quality is really close to bf16. That's really awesome.
@ikrall001893 Can you guide me to "mxfp8"? Never heard about it and having a hard time finding out what you mean :)
Nvidia docs:
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/examples/fp8_primer.html
How to make your own quant for ComfyUI (I can help further with this part; making a clean venv is a good start):
https://github.com/silveroxides/convert_to_quant
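For the "clean venv" step mentioned above, a minimal setup sketch (the requirements file name is an assumption; check the repo's README for the actual install and conversion commands):

```shell
# Sketch: isolate the conversion tool in a fresh virtual environment.
python -m venv quant-venv
. quant-venv/bin/activate          # on Windows: quant-venv\Scripts\activate
python -m pip install --upgrade pip

# Grab the conversion tool linked above.
git clone https://github.com/silveroxides/convert_to_quant
cd convert_to_quant
pip install -r requirements.txt    # assumption: the repo ships a requirements file
```

From there, follow the repo's README for the actual conversion invocation; the flags and entry point vary between versions, so I won't guess them here.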
It seems to work well with character LoRAs and doesn’t change the faces too much. Can you upload FP16 version?
None available for 1.0. Working on it for next version.
Thank you for including NVF4 model for Blackwell users 👍
You're welcome! I prefer NVF4.
You should have a look at mxfp8. It's our future.
@ikrall001893 That sounds interesting! Twice the throughput sounds pretty bonkers if you ask me
A Miracle indeed. I like how well it performs in img2img.
Thanks for sharing. ♥
Try it in edit mode too.
Are there any recommended steps? 4–40? Thanks ♪(・ω・)ノ
Nothing specific for my model, same as with any Klein I guess.
Extremely useful. Thanks a lot
Hiya, could you elaborate on the model?
Is it the loras you mentioned in the description merged into klein or did you also do additional training?
I assume you used Flux.2 Klein 9B-base + Turbo lora? Or did you merge directly with Flux 2 Klein 9B
Hi.
I didn't train on the core model, I just merged LoRAs. A couple of them I trained specifically for this model to work on the NSFW.
No turbo LoRA included, just the core model, Flux 2 Klein 9B.
@6tZ ah i understand, are you possibly going to release those loras or provide extract lora?
@Kenzato I could, at least some. Many of them are kind of bad on their own, but they add understanding when you blend them in at a very small %. I'll try to share some of them.
OMG this is so good. SOO GOOD!
Hi, I'm rather new to this and I'm having issues getting the checkpoint to work in ComfyUI. I would like to edit images, but the workflow does not seem to be correct. The editor says the VAE is invalid.
What I do is:
- Load checkpoint (with this model)
- Use TextEncodeAdvancedEdit, linked to the checkpoint and a reference image
- linked to a ksampler node
- linked to a vae decode node
I get issues from the vae decode node, which says the VAE is invalid. What would a simple workflow for this goal be with this checkpoint? Any direction to a working solution would be much appreciated
I used this workflow as an example: https://www.reddit.com/r/StableDiffusion/comments/1qeam1k/i_made_a_simplified_workflow_for_flux_klein_9b/#lightbox
I would recommend using ComfyUI's default workflows, start from there.
Ok, this is extremely good especially in edit mode. Getting super good results that don't change the face at all, even with some loras.
Current top of the line IMO.
@6tZ Crazy how the best stuff is often slept on around here. Thank you for sharing your work, keep it up dude!
I'm a complete beginner. Using AI, I was able to run your flux 1 model with the attached instructions, but there is no workflow or information about which text encoders to use. Can you provide guidance?
For flux 1 or flux klein?
Just use whatever others are using. Nothing special required for this.
The included images should contain workflows for both models.
With Klein I use: qwen_3_8b_fp8mixed
From the normal ComfyUI Templates, go to "Images" tab, search for flux, select "Flux.2 [Klein] 9B: Image Edit". Select that workflow, download everything and place it in the correct folder as described. Instead of "flux-2-klein-base-9b-fp8.safetensors" in diffusion_models, use this miraclein one. Works like a charm.
Is this the Flux Klein 9b base or distilled variant? Does it work well with other LORAs too? Great work as always.
It works well with the handful of LoRAs I have tried yeah.
It's trained on distilled.
@6tZ Would you say Flux Klein is better with LoRAs than Z Image Turbo? What I've noticed with ZIT is that when you use a LoRA, and especially multiple ones, it really degrades the image quality big time. There is also a great node for Comfy called https://github.com/ethanfel/ComfyUI-LoRA-Optimizer
It works great because it allows you to use multiple LoRAs, including character ones, and even merge them automatically with a node called auto tuner. Definitely worth checking out.
ZiT is not Flux Klein.
Can anyone convert to gguf Q8_0? pls :C
How many images do you guys typically have to generate before you get something useful? 5? 20? 100,000? I get a bunch of extra arms, disembodied heads, six fingered hands, the model completely ignoring the prompt, and on and on. There is something wrong with literally every image I generate, even if one in eight or so get pretty close. There are no instructions with this model. Am I using the correct text encoder? How many steps? CFG? What sampler? Which scheduler? Who knows???
Each attached image includes its workflow for ComfyUI.
The metadata is not included because Civit doesn't import it properly.
So there are actually hundreds of examples here.
In terms of how many for a useful image, it depends on how picky you are I guess? What's a few extra fingers and a fourth leg anyway? Who would be bothered by such small details. Double pussy = twice as fun, no?
I've generated 998 images with the nvfp4, and I put 419 into a folder for good images that could be shared. So if you think that the sample images above are okay quality, I guess ~1/2 is the answer to your question.
@6tZ Last night I couldn't even download an image. It just sat there and sat there. CivitAI glitch, I guess.
Today I get this:
4 ERRORS
Missing Node Packs
Missing Models (2)
Node 'ID #128' has no class_type. The workflow may be corrupted or a custom node is missing.
But I got the information I needed anyway and am getting somewhat better results. I guess Flux 2 Klein just likes to do weird things, especially when there is more than one person in the image.
@potatometer350 Multiple people is always harder for it. Trained on more solo images no doubt :)
Don't specify any unusual pose for people. Just have them stand facing the camera. Then you can avoid monstrosities. Flux is very safe, thanks for being so safe BFL. This is fun.
Is this possible to run on a 4070 Ti with the FP8? Or have I reached my max? Love the prompt adherence and realism with unlocked content.
I think you can, but not sure.
I can't download fp8, it says File not found
Same here, let's give it a bit and see if Civit sorts out their servers.
How does this work with Flux.2 9B 8bit S on CivitAI?

