CivArchive
    ComfyUI Pony/SDXL/Illustrious Easy-Use Detailer Workflow - v2.1 - Silent Night ed.
    NSFW

    If you are getting errors about the weights_only setting, add the filenames of your detailer .pt files to the following file, one per line:

    ComfyUI\user\default\ComfyUI-Impact-Subpack\model-whitelist.txt
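
    For reference, the whitelist is just a plain text file listing the model filenames you trust. A minimal example (these filenames are hypothetical placeholders; substitute the actual names of the detailer models you use):

    ```text
    face_yolov8m.pt
    hand_yolov8s.pt
    person_yolov8m-seg.pt
    ```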


    This workflow is designed to generate portrait-style images using the standard Stable Diffusion tools (ControlNet and Lora/LYCORIS) and add detail to them, while using only a minimal set of custom nodes and keeping the workflow simple and straightforward enough that it's easy to add and remove nodes based on your needs.

    The idea is to generate a base image you like, and then let it run to improve it. That way you can abort images at the first step until you get one you want to move forward with, then walk away, confident that the end result will be a better version of that first step every time.

    v5

    This is my current working version of the workflow. It's not as polished as previous versions, but I figured I should post one that isn't broken by the updates to Ultimate Upscale for people who want to keep using it.

    The major changes, at least as far as I can remember:

    • Removed the video workflow. The video generation has grown so much in the last few months that it no longer works to do image generation in this way and then also have video gen in the same workflow.

    • Baked in Lora Manager support. This is truly the only way you should be using Lora/Lycoris models at this point, as you can take the complex input triggers and automatically add them to your workflow with no copying or pasting. It also lets you easily turn individual triggers on and off within a lora.

    • Separated out all of the variables from the samplers and detailers, so you can set them across the whole workflow with one node each. This is especially handy for the seed: if you leave it on increment, you can roll back to the last seed when you like an image, then fix it so you can manipulate that image further.

    -
    V1+

    I designed this workflow to not hide anything in the pathing - the connections between nodes are straight with no extra blocks, and there are as few connections as I could get away with while maintaining consistency across them, with only 2-3 node links needed between steps.

    I have focused on removing variables that make large changes to the original generated image, so you don't end up generating a base image then coming back several minutes later to find a result that doesn't look anything like it. There are also "save points" (image previews) along each step that allow you to pull a workable result from your work if one of the detailer steps goes awry.

    The workflow as delivered is aimed at NSFW Pony/Pony Realism models, but will work fairly well with SDXL models as well without changes.

    If you want to use it for SFW images, or one of the detailers is consistently failing to give you the results you want, you can change the bbox detector to a different one, or simply bypass the detailers you don't need by re-routing the 2-3 link lines that come from the previous step around them.

    Additional detailers can be added to the workflow by simply unpinning and cloning one of the existing detailer node trees and connecting it the same way the original was, taking care to place it in the appropriate section of the workflow. Full-body items like clothing detailers should be placed before the first upscale, things the size of a head or face should be added before the second upscale, and small details like individual nipples should be handled after the second upscale. Separating your ultralytics detectors in this way gives each detailer pass the most data to work with while keeping unwanted results to a minimum.

    Note that this workflow is not a fast one; average complete generation time for a single image on my system (R7950X/64GB DDR5/RTX 4090) is between 6 and 10 minutes. It's designed for high-VRAM systems (16GB or more, though it may work with 12GB GPUs); with less than that, your system will page into system RAM and run considerably slower, or may not work at all. You can reduce the VRAM needed by changing the base image size, but if you do, make sure you also change the upscale tile sizes (the first upscale pass's tile size is equal to your base image size, the second is 1.5x the base image size) and the resize width field in the Upscaled Original portion of the workflow to equal the width of your final output image. You may also wish to set the guide_size and max_size of each detailer to something smaller, like 1024.
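
    The relationships above can be sketched out as a few lines of Python (the function name and the example resolution are just illustrations, not anything built into the workflow):

    ```python
    def upscale_settings(base_w, base_h, final_w):
        """Derive the dependent workflow settings from the base image size.

        base_w, base_h: the base generation resolution you chose
        final_w: the width of your final output image
        """
        return {
            # First upscale pass: tile size equals the base image size.
            "pass1_tile": (base_w, base_h),
            # Second upscale pass: tile size is 1.5x the base image size.
            "pass2_tile": (int(base_w * 1.5), int(base_h * 1.5)),
            # Resize width in the Upscaled Original section must match
            # the width of the final output image.
            "resize_width": final_w,
        }

    print(upscale_settings(1024, 1024, 2048))
    ```

    So if you drop the base size to save VRAM, every one of these values needs to shrink with it.
    
    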

    I haven't tested it, but it should also be possible to tweak this to work with SD1.5 by adjusting the parameters in the loader to values appropriate for SD1.5, changing the upscale tile sizes and resize width as above, and setting the guide_size and max_size of each detailer to 512.

    Changing the sampler you use with this workflow from DDIM to something else will work, but results will vary based on the one you choose - some blend better with the detailer than others, and some will cause changes to develop as the image passes through the workflow. The choice of DDIM was deliberate: it causes the least variation of all the samplers I have tested and generates good output with realistic, semi-realistic, cartoon and other subject matter. If you decide to use something else, I recommend avoiding ancestral samplers; when run through a multi-generation workflow like this one, they tend to give results that vary strongly over each pass. Whatever sampler you choose, keep it the same across the entire workflow so that the results remain consistent. The same goes for the number of steps and the seed: if you change one, change it everywhere. Keep the CFG at or near 10 - lowering it will cause inconsistencies to develop through the workflow.

    Massive props to yolain for the incredible work being done on their easy-use nodes (https://github.com/yolain/ComfyUI-Easy-Use) that are the basis of this workflow. I've consistently been able to solve various difficulties in other ultralytics detailer nodes and level out any unwanted tone changes to the image using just these nodes - it's truly amazing stuff and more is being added every day at this point.

    Other resources used:

    ssitu's ComfyUI_UltimateSDUpscale (https://github.com/ssitu/ComfyUI_UltimateSDUpscale)

    uwg's SwinIR upscale models (https://huggingface.co/uwg/upscaler/tree/main/SwinIR) - seriously, if you aren't using these, you should be.

    Ultralytics detector models for use with this workflow can be found here on Civitai, on huggingface.co, and elsewhere. Since most of them are .pt files, they fall under the "use at your own risk" category and I won't directly link to them.

    Description

    Soooo, that last upscaler didn't work as well as I thought it would; I can't figure out why it decides to make changes to the previous step when there's no change in the pipe and it's working directly from the output of it. About par for the course with Ultimate Upscale at this point. Oh well.

    Replaced it with a highres fix node that does mostly the same job without changing the work.

    FAQ

    Comments (5)

    illrigger
    Author
    Dec 28, 2024
    CivitAI

    Decided while making the sample images for this version to tell a little... cautionary tale.

    OtakuFra
    Jan 21, 2025 · 2 reactions
    CivitAI

    Very efficient workflow, I was looking for a good detailer for a long time and I was never satisfied, but mostly because of my lack of knowledge about detailers and how to use them. Anyway, now I can use it and it works really well. But I wonder what the purpose of the "deepfashion_v2" fixer is, exactly? I can't find a real explanation of it. Is it about clothes or body parts?

    Thank you so much btw, I'll post my gens in a few, I'm working on a large set of characters

    illrigger
    Author
    Jan 22, 2025· 1 reaction

    Yep, it's for clothing. It will often still trigger when used on nudes, but in general it won't mess things up (it will basically act the same as the body detector) so it's OK to let it run most of the time. If it gives you trouble, you can just route around it.

    OtakuFra
    Jan 23, 2025

    @illrigger Regarding routing, I tried to bypass the one I didn't want but it makes the workflow fail. I first tried to isolate the whole tree then module by module (pipe edit/ultralytics detector/SAMLoader etc etc), but there too the workflow fails. Any idea why and maybe a way to have a feature like this?

    Thanks for your work and the time you took to answer me.

    illrigger
    Author
    Jan 27, 2025· 2 reactions

    @OtakuFra Bypassing (via the right-click bypass option) doesn't work because there are multiple inputs and they don't auto-route correctly. You can bypass a section by manually re-routing the 3 inputs in the pipe edit node around the stage you don't want. I've been trying to find nodes that let you choose a route, but haven't figured out a good way to do it yet. I haven't given up, but I don't think there's a node with the functionality I need out there, and I haven't dug into how to make my own yet.

    I'm working on a minor update for the workflow, but in the meantime you may want to move the lip and eye sections to before the genital ones if you plan on using any loras on the genitals. Seems like most of the genital-specific loras are over-trained and once they're in the pipe will almost always mess with your lip and eye outputs. Putting them last fixes this.

    Workflows
    Pony

    Details

    Downloads
    404
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/28/2024
    Updated
    5/4/2026
    Deleted
    -

    Files

    comfyuiPonySDXL_v21SilentNightEd.zip

    Mirrors