Z-Image is here, and it's pretty damn great. Let's make it more NSFW!
Still a work in progress. It can output good genitals, but not consistently: about 3 in 5 images are good. Improving penis stability specifically is the focus for future versions.
ComfyUI: Place the standard models in diffusion_models and use the official workflow with the Load Diffusion Model loader, loading the VAE/CLIP models separately. Models named "_AIO" go in the checkpoints folder and use the normal checkpoint loaders.
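To make the placement rule concrete, here's a minimal sketch of the sorting logic. The ComfyUI path and the model filenames are assumptions, so adjust them to your install and whatever files you actually downloaded.

```python
from pathlib import Path

# Assumed ComfyUI install dir; model filenames below are hypothetical.
MODELS = Path("ComfyUI/models")
(MODELS / "checkpoints").mkdir(parents=True, exist_ok=True)
(MODELS / "diffusion_models").mkdir(parents=True, exist_ok=True)

def place(model: Path) -> Path:
    """AIO builds (VAE/CLIP baked in) go to checkpoints;
    bare diffusion builds go to diffusion_models."""
    sub = "checkpoints" if "_AIO" in model.stem else "diffusion_models"
    target = MODELS / sub / model.name
    model.replace(target)  # move the file into place
    return target

# e.g. place(Path("ZImage_V3_AIO.safetensors"))
```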
Recommended Settings
Steps: 12
CFG: 1.0
Sampler: dpmpp_sde
Scheduler: simple
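As a sketch, these settings map onto a KSampler node in ComfyUI's API-format prompt roughly like this (the model, conditioning, and latent wiring is omitted, and the seed is an arbitrary placeholder):

```python
# Partial KSampler node in ComfyUI API-prompt style; only the
# sampling settings listed above are filled in, node links omitted.
ksampler = {
    "class_type": "KSampler",
    "inputs": {
        "seed": 0,           # placeholder, randomize per generation
        "steps": 12,
        "cfg": 1.0,
        "sampler_name": "dpmpp_sde",
        "scheduler": "simple",
        "denoise": 1.0,
    },
}
```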
Description
Trained for 246,000 steps on a huge dataset. Special thanks to Alcaitiff for their awesome work!
FAQ
Comments (24)
plz try to make for qwen
Yeah maybe
@6tZ yeeee
It already exists with the phr00t AIO
@dubardo his models r bad lol
This might be an obvious question, but what is the difference between V3 BFAIO and the V3 Diffusion model? I think the AIO is "all in one"? But what about the diffusion version? Thanks!! Love your work!
I think it means CLIP and VAE are not included. (I didn't try it.)
@hawaiiguitarplayer250
Correct!
The AIO means All In One, and it has the VAE and CLIP/text model included.
You can use them externally as well, all up to you. I prefer to have it all in one model for simplicity. Then I know I'm using the "right" version etc.
Diffusion is the same "model", you just have to provide your own CLIP/VAE nodes.
Great work! I've tried V3. It can generate d**k and vir***na properly now. However, it absolutely doesn't work when I name a sex pose between two people. I guess the model was originally trained without any NSFW images, so it will be very hard to make it understand these movements.
Correct, it needs a lot more training.
But the focus will be on the non-turbo model now, to see if it's easier there.
How diverse are the outputs between seeds? Anyone know?
Extracted the UNet from the AIO file and converted it to Q8 GGUF format to save disk space. Since some have asked: you can use the "UNetSave" node from RES4LYF to extract the UNet. The model safetensors file will be saved to the output folder (just link the Load Checkpoint node into the UNetSave node and run the workflow).
Convert using this guide I made for Qwen; it works for Z-Image as well.
https://drive.google.com/file/d/1QdmtXijMROZC8nTpKPJGQ_Xlt8OIJK6U/view?usp=sharing
Thank you sir! 😘
When separating the diffuser, I get the error: "AttributeError: 'NoneType' object has no attribute 'keys'".
What the hell does that mean? Why does it work for you but not for me?
@heinerheinervonvielen417 Just use the "Save Model"-node in ComfyUI.
Your checkpoints keep looking better and better! However, prompt compliance doesn't seem to work that well, judging by your example prompts in the gallery?
PS: the female genitals in your ZIT are among the top 3!
I noticed that with the prompts as well. I also noticed the negative prompt section being used, which Z-Image Turbo doesn't adhere to?
slow clap
It won't be long now until something comes along to replace Illustrious / NoobAI. Most likely that'll be the new Illustrious and NoobAI models down the road. But ZImage might be able to do it before then.
So far I've been disappointed in all of the new models.
I've got to say, I've personally tried several checkpoints in the last few weeks, testing them thoroughly. Yours definitely stands out above the rest, with about one in three images coming out exactly how I want them, with no artifacts. Running an Nvidia 5060 Ti 16 GB, around 45 seconds on average in generation speed. Can't wait to see how good this thing gets as you implement your new improvements. Maybe speed will be one of them?
I seem to have come to the same conclusion. This model does much better with composition. Other models get the photorealism aspects, the lighting and details, but at this point I find myself wanting to img2img literally everything I've ever made through this model first, and then use other models to add anything else.
I won't be working on speed; it's already a turbo model. I don't even like the results at the model's recommended step count. I personally use 12 steps for what I consider higher quality.
Great model, but character LoRAs don't work well with it; they break anatomy and deform the body. Recommended if you don't want to use LoRAs.
Same problem here. I trained a character LoRA for Z-Image Turbo and it works beautifully, but with this NSFW model everything breaks; it does not work.