Z-Image is here, and it's pretty damn great. Let's make it more NSFW!
Still a work in progress. It can output good genitals, but not always; about 3 in 5 images are good. I'm working on penis stability specifically for future versions.
ComfyUI: Place models in the diffusion_models folder and use the official workflow with the Load Diffusion Model loader, loading the VAE/CLIP models separately. Models named "_AIO" go in the checkpoints folder and use the normal checkpoint loaders.
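As a sketch of the placement rule above (folder paths assume a default ComfyUI install; the example file names are made up):

```python
from pathlib import Path

def target_folder(model_file: str, comfy_root: str = "ComfyUI/models") -> Path:
    """Route a downloaded Z-Image file to the right ComfyUI models folder.

    "_AIO" checkpoints bundle the VAE/CLIP and use the normal checkpoint
    loaders; everything else loads via Load Diffusion Model.
    """
    name = Path(model_file).stem
    sub = "checkpoints" if "_AIO" in name else "diffusion_models"
    return Path(comfy_root) / sub / Path(model_file).name

print(target_folder("zImage_v22_AIO.safetensors"))   # lands in checkpoints/
print(target_folder("zImage_v22_BF16.safetensors"))  # lands in diffusion_models/
```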
Recommended Settings
Steps: 12
CFG: 1.0
Sampler: dpmpp_sde
Scheduler: simple
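For reference, the same settings as a plain Python dict, roughly how they'd be set on a ComfyUI KSampler node (the denoise value is the standard KSampler default for txt2img, not from the author's list):

```python
# Recommended KSampler settings for this Turbo checkpoint (from the list above).
RECOMMENDED = {
    "steps": 12,
    "cfg": 1.0,                  # Turbo/distilled models want CFG ~1, not 5-8
    "sampler_name": "dpmpp_sde",
    "scheduler": "simple",
    "denoise": 1.0,              # standard txt2img default, assumed here
}
print(RECOMMENDED)
```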
Comments (52)
@6tZ Hey, looks exciting! Can I also use this checkpoint to train Loras with AI-ToolKit? Or do I need the "full" diffusion file set? Thanks! - Geno
I haven't tried it myself yet; I just train my LoRAs on the base model. I don't know whether this whole Turbo version ruins future training.
@6tZ Yeah, I've been playing around with several of the Z-Turbo checkpoints. I really like yours!! What's new in the latest version? I might have to buy it!! :-D Thanks for the good work!
@GenoMachino 2.2 added additional penis stability.
It's like, 15% good now... not enough by far but it's the best one yet IMO.
@6tZ Penis stability? Like it's firmer and girthier?? :-P
On different versions, you say that you trained for 246,000 steps. Does this mean that each new version, and even each sub-version, is trained from scratch?
Good question: no, it means I'm still using those same models. The core trained NSFW models are the ones trained to those step counts; they're the largest trained components, so the most worth mentioning.
@6tZ Ah, so 2.1, 2.2, etc. have some newly trained LoRAs merged in.
@Jellai Correct! And some re-balanced older ones too.
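For anyone curious what "merging a LoRA in" means mechanically, a minimal sketch with toy shapes in plain NumPy (the real tooling, layer targeting, and alpha scaling are of course more involved):

```python
import numpy as np

def merge_lora(W, A, B, scale=1.0):
    """Bake a LoRA into a base weight matrix: W' = W + scale * (B @ A).

    A is (rank, in_features), B is (out_features, rank); the low-rank
    product B @ A has the same shape as W.
    """
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)).astype(np.float32)  # toy base weight
A = rng.standard_normal((2, 8)).astype(np.float32)  # rank-2 LoRA "down"
B = rng.standard_normal((8, 2)).astype(np.float32)  # rank-2 LoRA "up"
W_merged = merge_lora(W, A, B, scale=0.5)
print(W_merged.shape)  # (8, 8)
```

Re-balancing an older LoRA is then just merging it again with a different scale.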
Version 2.0 BF16 is amazing. Thanks! Can't wait for next versions, keep up the great work.
Have you finally managed to conquer the "v*g*nas"?))
REVIEW Jan 14, 2025
I tried the 2.0_BF16_diffusion; unfortunately the colors are a little overexposed and the details get blended together. For example, hair strands are washed into each other, losing that realism. When not using LoRAs, I exclusively use this author's version 1. Somehow, on my end, version 1 is better than 2.0.
Edit: I have tried copy/pasting the OP's prompts as well, and they do work great, but the feel/look isn't the realism style I'm going for.
Been playing with the checkpoint for realism too, not so much for NSFW stuff. I've found thus far that v2.0 likes:
2:3 aspect ratio (832 x 1216)
no clip skip or shift
DPM++2M_SDE + BETA
8-9 steps
Qwen3_4b - Lumina2 (Not tried qwen/wan yet)
the Flux VAE (ae), not any other "detail version" of it.
Loras between 0.1-0.4 especially if stacking.
I feel what you mean, there is a little bit of "styling/softening" going on, but that can be beneficial if you want to upscale/detail/2nd pass.
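The 2:3 resolution in the list above falls out of the usual "target ~1 megapixel, snap to multiples of 64" arithmetic. A quick sketch (the 64-pixel snap is a common convention for latent models, assumed to apply here too):

```python
import math

def size_for_ratio(ratio_w: int, ratio_h: int, target_px: int = 1024 * 1024,
                   multiple: int = 64) -> tuple[int, int]:
    """Pick a width/height near target_px with the given aspect ratio,
    flooring both sides to a multiple (64 here, a common latent constraint)."""
    ar = ratio_w / ratio_h
    w = int(math.sqrt(target_px * ar) // multiple) * multiple
    h = int(w / ar // multiple) * multiple
    return w, h

print(size_for_ratio(2, 3))  # (832, 1216)
print(size_for_ratio(1, 1))  # (1024, 1024)
```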
She described everything very accurately, nothing to add. A worthless model, made just for show, to rake in Buzz...
please try my zit model. see if u like it https://civitai.com/models/2340701?modelVersionId=2632847
Thank you for the AIO model. I cannot get any other ZIT model to work within Stability Matrix (Comfy) local app. It's always some sort of Clip Error but I have all the clips/vae in their folders. Your AIO simply works. Can't get Chroma to work either. Sucks to be a newb who's late to the party.
Maybe Stability Matrix doesn't support loading parts of models separately and requires checkpoints, which usually come with all the parts required.
@6tZ I copy all the ZIT files to their folders just like SDXL, FLUX, ILL, PONY, but ZIT and Chroma fail to load, throwing Clip errors. Apparently Stability Matrix won't allow upgrading Python past 3.10. I read somewhere that mismatched py files could appear as clip errors. Supposedly they're working on it but you'd think they'd have done it by now. Either way, this model is one of my new favs. I just wish ZIT images weren't so similar to each other. Can't wait to try v2.2.
@InsidiousOne Make sure to use varied prompts. I'm not sure if Stability Matrix supports wildcards, but I recommend them.
Otherwise, and in general, I recommend moving over to ComfyUI.
It gets new models earliest, and it generally works. Except for when updates break things, but overall it's reliable.
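If your frontend has no wildcard support, a tiny {a|b|c}-style expander shows the idea of varying prompts per seed (the brace syntax is just an illustration, not any specific extension's format):

```python
import random
import re

def expand(prompt: str, seed: int) -> str:
    """Replace each {a|b|c} group with one random option, deterministic per seed."""
    rng = random.Random(seed)
    pattern = re.compile(r"\{([^{}]+)\}")
    while pattern.search(prompt):
        # Resolve one group at a time, left to right.
        prompt = pattern.sub(lambda m: rng.choice(m.group(1).split("|")),
                             prompt, count=1)
    return prompt

template = "photo of a woman, {red|blonde|black} hair, {beach|forest|city} background"
for seed in range(3):
    print(expand(template, seed))
```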
@6tZ It's not a prompt error, the models simply crash on execution. It works differently than the web interface. Stability Matrix makes all the "sloppy" connections behind the scenes with the use of dropdown menus. It loads the model, loras, all your settings, your prompts and pops out an image. It's VERY easy... When it works. I know zero about python so my troubleshooting goes very slowly. I see other posts on Git that timed out with no answers. So it's not just me. But since your model works with baked in VAE/Clip, I can only assume, there is something wrong with my VAE/clip files or folders. I've gone as far as copying the files in every folder I think it might search, with zero luck. I probably have a TB of duplicate files by now. LOL
@InsidiousOne Maybe you'd better think about switching to ComfyUI.
@velanteg I'm wetting my feet running ComfyUI through Stability Matrix. I'm sure using SM is going to limit what I can do, but it makes things so simple. And apparently it's a good way to keep files organized if you want to run multiple apps.
In any event, I just got my first ZIT model working. Apparently SM wants the encoder type set to SD3, even though all my searches said to set it as Flux. SM hasn't added QWEN to the list yet, just Flux, SD3, HiDream. Until ZIT, everything had worked under the Flux type as long as the model, VAE, and encoder files were in the correct folders. So now I'll start moving ZIT models back to see if they'll all work. Maybe this might help someone else. Thanks again to 6tZ for this model.
@InsidiousOne With Comfy, it would actually be easier, because there are official templates for everything, and everyone can help you basically :)
But whatever gets you going!
I'd still suggest investigating when you have a few hours.
It may look daunting, but once you follow the flow of some official templates, you'll see that it's rather simple.
Ignore the workflows people post online. There are so many useless templates tailored to just one person's setup, and if you try to adopt one, you'll spend more time than if you made it from scratch. You'll also end up with so many node packs you'll go insane and get crazy loading times. I was at 300+ node packs until I removed them all and started over.
Build your own from scratch and expand on the basics as you need.
@6tZ lol, I only have 31 mods for ComfyUI
@qek I'm not sure what you mean by "mods". Do you mean when I said SM only lists 3 encoder types? I'm sure there's a way to add all kinds of things in SM that I don't even know I need. Did I mention I haven't read a single document on how to use any of this? If it wasn't for SM, I'd have taken one look at Comfy Web and said, No Thanks. Learning DFL was easier than this. As of now, all model types are working including Chroma and ZIT. Last night I finally fixed the onnxruntime error with the help of GoogleAI. Turns out SM made it a LOT easier to install/uninstall pips than posts on reddit. I have no idea what nodes or wildcards are, LOL. I'm the epitome of who Jeff Goldblum in Jurassic Park was talking about. I'm "standing on the shoulders of geniuses to accomplish something" without understanding it. I have a feeling that if I want to do voice cloning and face swapping I'm gonna have to dig into ComfyWeb sooner or later.
@InsidiousOne Only do it if you think it'll be fun or worthwhile. Don't stress about anything and use whatever works.
I'm just mentioning that ComfyUI is usually the first to get the new stuff, and it's not as intimidating as it seems if you have some good video tutorial to follow.
Enjoy the ride regardless.
Version 2.2 is very good, but is there any way to prompt it for nice, small firm pert boobs? Even with a detailed description and negative prompts, it always generates larger tits.
@Iggort Please remove your comment and link
I haven't tried it but you can try this lora...
2.2 is useless for me, as the AIO is bigger than my VRAM. I would prefer having the text encoder separate so it can be offloaded to the CPU.
AIO models can still offload to CPU/RAM; Comfy should be doing it automatically with default settings. I'm using the 20 GB BF16 model with 16 GB VRAM and 32 GB RAM.
@Drakeni Yes, but is it smart enough to know that it's the text encoder that should be offloaded? Text is processed much faster, so it's the better part to offload.
@batart I'm generating 1024x1024 images in 11 seconds, so I don't care much about "faster". It's fast.
I would assume that you can just load only the MODEL part from this model, and use the text encoders with separate loaders? Maybe that doesn't work though, not sure.
I'll upload separate ones when I have some spare time :)
@Drakeni At least with ROCm, even the 11 GB AIO models thrash VRAM on a 16 GB card. Not to mention the extra 7 GB of disk space each AIO model takes up duplicating the VAE/text encoder.
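A back-of-the-envelope on why dtype matters for fitting weights in VRAM (the parameter count is a hypothetical round number; bytes per parameter are the standard dtype sizes, with GGUF Q8_0 at ~8.5 bits/weight including block scales):

```python
# Rough weight-memory estimate per dtype; activations and VAE overhead excluded.
BYTES_PER_PARAM = {"bf16": 2.0, "fp8": 1.0, "gguf_q8": 1.0625}

def weight_gb(n_params: float, dtype: str) -> float:
    """GiB needed just to hold the weights at the given precision."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

params = 10e9  # hypothetical ~10B-parameter AIO bundle
for dtype in BYTES_PER_PARAM:
    print(f"{dtype}: {weight_gb(params, dtype):.1f} GB")
```

Halving bytes per weight roughly halves the part that has to fit (or be offloaded), which is why people ask for FP8/Q8 builds.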
Smaller, lighter GGUF Q8 or FP8 version for v2.2?
Does it work well with LORAs?
No/Maybe
I had some trouble using a character lora with it.
I tried several LoRAs. In most of the images they work fine. I'm not sure whether the broken ones are due to the LoRA or this model.
Sliders work well even when using several of them at once.
Working for me.
@pipes_46 What values are you applying? In terms of LoRA strength, and what KSampler steps, CFG, and sampler are you using? I'm also having trouble getting them to work.
2.2 gives me exactly the same output as 2.1?
https://civitai.com/posts/26169433
Here's a comparison.
It's not exactly the same, but they are similar.
2.2 is more stable in general, especially with genitals, but look at the hands, fingers and noise in background.
I made these with the models downloaded from the site, so it should be exactly what you'd download as well.
Model can really do numbers, thx for your work
All the way up to 69, I didn't teach it any further.