CivArchive
    ComfyUI Pony/SDXL/Illustrious Easy-Use Detailer Workflow - v1.5
    NSFW

    If you are getting errors about the weights_only setting, add the filenames of your detailer .pt files, one per line, to the following file:

    ComfyUI\user\default\ComfyUI-Impact-Subpack\model-whitelist.txt
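    For example, if you use detector models named face_yolov8m.pt and hand_yolov8s.pt (hypothetical filenames - substitute the actual names of the models you installed), the whitelist file would just contain:

```
face_yolov8m.pt
hand_yolov8s.pt
```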


    This is a workflow designed to generate portrait-style images using the standard Stable Diffusion tools (ControlNet and Lora/LYCORIS) and add detail to them, while using only a minimal set of different custom nodes and keeping the workflow simple and straightforward enough that it's easy to add and remove nodes based on your needs.

    The idea is to generate a base image you like, and then let it run to improve it. That way you can abort images at the first step until you get one you want to move forward with, then walk away, confident that the end result will be a better version of that first step every time.

    v5

    This is my current working version of the workflow. It's not as polished as previous versions, but I figured I should post one that isn't broken by the updates to Ultimate Upscale for people who want to keep using it.

    The major changes, at least as far as I can remember:

    • Removed the video workflow. The video generation has grown so much in the last few months that it no longer works to do image generation in this way and then also have video gen in the same workflow.

    • Baked in Lora Manager support. This is truly the only way you should be using Lora/Lycoris models at this point, as you can take the complex input triggers and automatically add them to your workflow with no copying or pasting. It also lets you easily turn individual triggers on and off within a lora.

    • Separated out all of the variables from the samplers and detailers, so you can set them across the whole workflow with one node each. This is especially handy for the seed: if you leave it set to increment, you can roll back to the last seed when you get an image you like, then fix it there so you can manipulate that image further.

    -
    V1+

    I designed this workflow to not hide anything in the pathing - the connections between nodes are straight with no extra blocks, and I used as few connections as I could get away with while keeping them as consistent as possible across the workflow, with only 2-3 node links needed between steps.

    I have focused on removing variables that make large changes to the original generated image, so you don't end up generating a base image then coming back several minutes later to find a result that doesn't look anything like it. There are also "save points" (image previews) along each step that allow you to pull a workable result from your work if one of the detailer steps goes awry.

    The workflow as delivered is aimed at NSFW Pony/Pony Realism models, but it will also work fairly well with SDXL models without changes.

    If you want to use it for SFW images or one of the detailers is consistently failing to give you the results you want, you can change the bbox detector to a different one, or simply bypass the detailers you don't need by re-routing the 2-3 link lines that come from the previous step past it.

    Additional detailers can be added to the workflow by simply unpinning and cloning one of the existing detailer node trees and connecting it the same way the original was, taking care to place it in the appropriate section of the workflow: full-body items like clothing detailers should be placed before the first upscale, things the size of a head or face should be added before the second upscale, and small details like individual nipples should be done after the second upscale. Separating your ultralytics detectors in this way gives each detailer pass the most data to work with while keeping unwanted results to a minimum.

    Note that this workflow is not a fast one; average complete generation time for a single image on my system (R7950X/64GB DDR5/RTX4090) is between 6 and 10 minutes. It's designed for high-VRAM systems (16GB or more, though it may work with 12GB GPUs); with less than that, your system will page into system RAM and run considerably slower, or may not work at all. You can reduce the VRAM needed by lowering the base image size, but if you do, make sure you also change the upscale tile sizes (the first upscale pass tile sizes are equal to your base image size, the second is 1.5x the base image size) and the resize width field in the Upscaled Original portion of the workflow to equal the width of your final output image. You may also want to set the guide_size and max_size of each detailer to something smaller, like 1024.
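    The sizing relationships above can be sketched as a small helper. This is just an illustration of the arithmetic, not part of the workflow: derive_sizes is a made-up name, and total_upscale (the overall base-to-final width ratio) is something you'd read off your own workflow settings rather than a value fixed by it.

```python
# Illustrative sketch of the sizing rules: first upscale pass tiles equal the
# base size, second pass tiles are 1.5x the base size, and the Upscaled
# Original resize width should equal the final output width.
def derive_sizes(base_width: int, base_height: int, total_upscale: float):
    first_tile = (base_width, base_height)          # first upscale pass tile size
    second_tile = (round(base_width * 1.5),         # second upscale pass tile size
                   round(base_height * 1.5))
    resize_width = round(base_width * total_upscale)  # Upscaled Original resize width
    return first_tile, second_tile, resize_width

# e.g. a 1024x1024 base image with a 3x overall upscale
print(derive_sizes(1024, 1024, 3.0))
```

So if you drop the base size to save VRAM, every one of these derived values needs to change with it.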

    I haven't tested it, but it should also be able to be tweaked to work with SD1.5 by adjusting the parameters in the loader to be appropriate for SD1.5, changing the upscale tile sizes and resize width as above and setting the guide_size and max_size of each detailer to 512.

    Changing the sampler you use with this workflow from DDIM to something else will work, but results will vary based on the one you choose - some blend better with the detailers than others, and some will cause the image to drift as it passes through the workflow. The choice of DDIM was deliberate: of all the samplers I have tested, it causes the least variation, and it generates good output with realistic, semi-realistic, cartoon and other subject matter. If you decide to use something else, I recommend avoiding ancestral samplers; when run through a multi-generation workflow like this one, they tend to give results that vary strongly from pass to pass. Whatever sampler you choose, keep it the same across the entire workflow to make sure the results remain consistent. The same goes for the number of steps and the seed: if you change one, change it everywhere. Keep the CFG at or near 10 - lowering it will cause inconsistencies to develop through the workflow.

    Massive props to yolain for the incredible work being done on their easy-use nodes (https://github.com/yolain/ComfyUI-Easy-Use) that are the basis of this workflow. I've consistently been able to solve various difficulties in other ultralytics detailer nodes and level out any unwanted tone changes to the image using just these nodes - it's truly amazing stuff and more is being added every day at this point.

    Other resources used:

    ssitu's ComfyUI_UltimateSDUpscale (https://github.com/ssitu/ComfyUI_UltimateSDUpscale)

    uwg's SwinIR upscale models (https://huggingface.co/uwg/upscaler/tree/main/SwinIR) - seriously, if you aren't using these, you should be.

    Ultralytics detector models for use with this workflow can be found here on Civitai, on huggingface.co and elsewhere. Since most of them are .pt files, they fall under the "use at your own risk" category and I won't link to them directly.

    Description

    • Added penis detailer stage.

    • Changed seam fix mode to speed up upscale passes and unchecked tiled decode, which suddenly started throwing errors after updating my node list.

    • Added missing preview before final color correction pass.


    Workflows
    Pony

    Details

    Downloads
    314
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/8/2024
    Updated
    5/4/2026
    Deleted
    -

    Files

    comfyuiPonySDXL_v15.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)