Inspired by a discussion with u/ArcherJones1985 (Check out their version here)
This tool is a modular multi-ksampler workflow for testing samplers and schedulers.
All steps, CFG values, and seeds are unified, and each ksampler subgraph changes only the sampler and the scheduler. Each ksampler also has a labeling node built in (though I didn't bother to break the color and position settings out into their own inputs), and the workflow includes a 10-image stitching stage and a built-in lora loader (with more of course being possible).
This is made for Z Image Turbo but I don't see why it wouldn't work for any other model.
Comments (12)
So what is your conclusion?
Which sampler and scheduler are best for ZIT and ZIB respectively? (Generation time must also be considered.) Some samplers are too slow.
I'm not the one to say definitively. I'm mostly just sharing this because I had fun building the workflow. You should try it for yourself, though, as what you need might be different from what I need. But personally, I think res multi does the best.
yeah i think res_multi is a really good option if speed is crucial, but dpmpp_2m with the beta scheduler is great too if you can afford a little more time. i do a two-stage process: dpmpp_2m/beta for 10 steps at 1.0 denoise, then upscale the latent by 1.3x and pass it to another dpmpp_2m/beta sampler for 6 steps at 0.70 denoise. the results are really good and, for me at least, a little more consistent than res_multi.
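To make that two-pass flow concrete, here's a minimal sketch in plain Python. The `sample()` and `upscale_latent()` functions are hypothetical stand-ins for ComfyUI's KSampler and "Upscale Latent By" nodes (not real API calls), and a latent is modeled as just a (width, height) pair so the flow can be followed without a real backend.

```python
def upscale_latent(latent, factor):
    """Stand-in for the 'Upscale Latent By' node: scales both dimensions."""
    w, h = latent
    return (round(w * factor), round(h * factor))

def two_stage(latent, sample, base_steps=10, refine_steps=6,
              upscale_by=1.3, refine_denoise=0.70):
    """The dpmpp_2m/beta two-pass schedule described in the comment above."""
    # Pass 1: full denoise on the original latent.
    latent = sample(latent, sampler="dpmpp_2m", scheduler="beta",
                    steps=base_steps, denoise=1.0)
    # Upscale the latent itself (not the decoded image) between passes.
    latent = upscale_latent(latent, upscale_by)
    # Pass 2: partial denoise, so composition is kept but detail is added.
    return sample(latent, sampler="dpmpp_2m", scheduler="beta",
                  steps=refine_steps, denoise=refine_denoise)
```

In the real graph, `sample` corresponds to a KSampler node and the second call's lower denoise is what keeps the refinement pass from overwriting the first pass's composition.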
@SaoirseTeagan The dpmpps all veer away from the prompt too much for my taste.
Like, you look at the black-haired woman's shirt in all of them: the dpmpp does ok but the other two are nonsensical, and dpmpp usually ignores the prompt more than any of the others.
@AirbagGuy Okay, thank you
@SaoirseTeagan Are you referring to a second sampling pass? What does upscaling the first stage's latent by 1.3x accomplish, and which nodes should be used? I'm a beginner, thank you for your reply (a workflow would be great).
@wyxzddsjj919 Not to speak for Saoirse, but it's a fairly common practice, particularly with ZIT/ZIB, to use the stock latent upscale node and a lower denoise setting (say about 0.4-0.7; any lower and you're adding artifacts, any higher and you're changing the image too much) to add more detail and fidelity to an image while maintaining composition. It won't get you to super huge resolutions, but it'll still get you pretty damn good results for not that much more effort.
@AirbagGuy i use the flux guidance node and set it to 7 and that seems to help, especially with detailed prompts.
@wyxzddsjj919 it gives you a little more control and you can mix/match samplers for different results. the big effect for me is that it produces better results in less time.

if you do a high number of steps (for zit, like 10-12) on a smaller latent, most of the work gets done there, but since it's a smaller latent those 10-12 steps don't take as long. then you upscale the latent (not the image, don't decode yet, just use the latent upscale node) maybe 1.3x or 1.5x and run another sampler at a smaller number of steps, like 4-6. this refines the original image and adds detail you couldn't get at the smaller size, but since you're using fewer steps it doesn't take as long as another 10-12 would.

the other thing i do is use controlnet to get my character pose, but only on the first pass (the smaller 10-12 steps), and let the second pass (bigger 4-6 steps) just work on the latent - i pass the model directly into that sampler without going through the controlnet setup. controlnet tends to make images muddy and plasticky, but since you're only using it on the first pass, the second pass can clean up what the controlnet leaves behind.
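A rough way to see why the two-pass split saves time, assuming (my assumption, not from the comment) that sampler cost scales roughly linearly with steps × latent area and ignoring per-step overhead:

```python
def pixel_steps(width, height, steps):
    """Rough work estimate for one sampling pass: steps times latent area."""
    return width * height * steps

# Single pass: 12 steps directly at 1024x1024.
single = pixel_steps(1024, 1024, 12)

# Two passes: 12 steps at 640x640, then a 1.3x upscale (832x832) for 5 steps.
two_pass = pixel_steps(640, 640, 12) + pixel_steps(832, 832, 5)

print(f"two-pass / single-pass work: {two_pass / single:.2f}")  # about 0.67
```

The exact numbers here (12 and 5 steps, 640 base) are just illustrative picks from the ranges mentioned above; the point is that most steps happen on the small latent, so the total work is noticeably below a single full-resolution run.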
@wyxzddsjj919 as soon as i overcome my worry that my workflow is stupid i will be posting it :)

but adding a secondary sampler is super easy. literally all you have to do is connect the latent output from your first ksampler (the one you already have) to the "upscale latent by" node, then connect the latent output from that to another ksampler. make sure the second one is the regular ksampler, not the advanced one, so you have easy access to the denoise parameter, and set it like @AirbagGuy says, around 0.4 to 0.7 or so. too much higher and the second sampler ignores the first one's results; too much lower and it doesn't do enough extra detailing. then the second ksampler's latent output can go to the vae decode or wherever you send the latent normally.

once you have this set up, you can play around with different sampler combos between the two samplers, and different step amounts. high steps in sampler 1 and low steps in sampler 2 will be fairly quick (not much longer than the one-sampler method, especially if you give the first sampler a smaller empty latent than you otherwise would - like 640x640 instead of 1024x1024). but you can also do low steps in sampler 1 to get a very rough sketch and then use high steps in sampler 2 for a deep refinement. because sampler 2 is on a bigger latent it'll take longer, but you might like the results.
@SaoirseTeagan Thank you for your reply.