CivArchive
    Fdetailer - Adetailer for Furries - v1.0
    NSFW

    Version Differences

    v1 and v1.1 are very similar. It's hard to say which is better, so try them both and decide for yourself. v1.1 was trained in an attempt to correct some minor issues, and while it has shown improvements over v1, it has also shown some regression. I think I slightly prefer the accuracy of v1.1, so if you only wanted to try one, I'd recommend that one.

    What is this?

    Fdetailer is an instance segmentation model trained on furry anatomy using yolo as a base model. The dataset is ~20k images with a variety of species and art styles, including both anthro and feral characters. The model is capable of detecting and masking faces, penises, pussies, anuses, sheaths and pawpads. It may struggle a bit with uncommon species or strange anatomy, but overall the accuracy has impressed me for being my first attempt at training a segmentation model.

    Fdetailer is an all-in-one multi-class model. You can choose to detect only faces and pawpads, only anuses, or any combination of the available classes. Some interfaces may not support this, however; multi-class models are relatively uncommon.

    When given an image, Fdetailer attempts to find and mask objects it has been trained to detect. Instance segmentation is a bit different from the bbox (bounding box) models that have been popular in the past. Bbox models find an object and draw a box around it. This can be problematic for inpainting since the area around the detected object is also subject to inpainting, which can create ugly seams and strange details outside of the intended object.

    Instance segmentation attempts to solve this by creating point-to-point masks around the objects it detects. These masks are similar to ones you might create by manually painting over a character's face. This solves the issue of ugly seams or inpainting parts of the image outside of the face.
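
    If you're curious what that difference looks like outside of a UI, here's a rough sketch of running the model directly with the ultralytics Python package (the file and image names are just placeholders):

        from ultralytics import YOLO

        # Load Fdetailer like any other yolo segmentation model
        # (file name here is just an example).
        model = YOLO("fdetailer_seg_v11.pt")

        # Run detection on one image with a 0.4 confidence threshold.
        results = model("example_image.png", conf=0.4)

        for r in results:
            # Bounding boxes: the rectangles a bbox model would give you.
            print(r.boxes.xyxy)   # [x1, y1, x2, y2] per detection
            print(r.boxes.cls)    # class id per detection (0 = face, 3 = anus, ...)
            # Segmentation masks: point-to-point polygons that hug each object.
            if r.masks is not None:
                print(r.masks.xy)  # one array of polygon points per detection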

    Who is this for?

    Me. I created Fdetailer because inpainting is annoying. I'm a ComfyUI simp and I wanted to get more detail out of my base images before I use other software to edit them. I've found that every image I generate could use inpainting on faces and genitals, but I can't be arsed to manually inpaint every image before I decide if I even like it or not.

    If you're the type of person who queues up a bunch of generations and then goes through them later to cherry-pick your favorites, you'll find this useful. You can automate the inpainting of most of the common (in my opinion) parts of your character(s) that you'd want to inpaint.

    If you're the type of person to watch each image generate step by step, you may still find this useful. Why interrupt your gooning sesh to inpaint a butthole when tools exist to do it for you? I got you, homie.

    Does it actually work?

    Mostly. I'm just some dumbass pretending to be a pink cat on the internet. This is my first attempt at creating a segmentation model, and plenty of problems have come up during the process. I've attempted to fix most issues as they pop up, but my knowledge and time are limited, so this is an ongoing project.

    Accuracy is okay. Pawpads are the worst offender due to how different they can appear depending on art style and species. Anuses can also be an issue, but in most scenarios accuracy is fine. Sheaths can be hit or miss due to lack of data, but I haven't had too many issues. Everything else is pretty damn good. I'm overall impressed with the current state of the model.

    Uncommon species may confuse the model depending on how niche they are, but in most cases it works well enough.

    How do I use it?

    Refer to your preferred UI's documentation for exact instructions. Your UI might give you the option to filter detections based on class id or class name. This is useful if you only want to detect faces, for example. The following list contains the class ids and names the model has been trained on:

    0: face
    1: penis
    2: pussy
    3: anus
    4: sheath
    5: pawpads

    For the confidence threshold, I recommend a value between 0.35 and 0.5.
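
    If you're scripting against the model directly (or your UI exposes raw ultralytics options), class filtering and the confidence threshold map to the classes and conf arguments. A quick sketch, with placeholder file names:

        from ultralytics import YOLO

        model = YOLO("fdetailer_seg_v11.pt")  # example file name

        # Detect only faces (0) and pawpads (5) at the low end of the
        # recommended confidence range.
        results = model.predict("example_image.png", classes=[0, 5], conf=0.35)

        for r in results:
            for cls_id, score in zip(r.boxes.cls.tolist(), r.boxes.conf.tolist()):
                print(model.names[int(cls_id)], round(score, 3))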

    ComfyUI: I recommend using the comfyui-impact-pack set of custom nodes (you'll also need comfyui-impact-subpack for ultralytics support). Place the model in models/ultralytics/segm. Load the model using UltralyticsDetectorProvider, then use SEGM Detector (SEGS) to run inference, filter detections by class name, modify the confidence threshold and change mask settings. Pipe the SEGS into Detailer (SEGS) to inpaint. You can use Impact's built-in wildcard processing to modify the prompt based on what is detected. For example, you can add detailed eyes to all face detections. Refer to the Impact wildcard documentation for all the things you can do.

    SwarmUI: Refer to the SwarmUI documentation for full details. Place the model in (swarm)/Models/yolov8. Add <segment:fdetailer_seg_v11.pt,denoise,threshold> to your prompt. Change denoise to your desired denoising strength and threshold to your desired confidence threshold. This will inpaint all objects detected in the image.

    You can filter what you want detected by adding the class ids or names in between colons after the model name. For example, if you only want to detect faces and pawpads, you can add <segment:fdetailer_seg_v11.pt:face,pawpads:,denoise,threshold> to your prompt.
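
    As a concrete example (the numbers are just placeholders you'd tune yourself), inpainting only faces and pawpads at 0.4 denoise with a 0.35 confidence threshold would look something like:

        <segment:fdetailer_seg_v11.pt:face,pawpads:,0.4,0.35>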

    A1111 and Forge variants: You'll need the Adetailer extension. Place the model in models/adetailer. The Adetailer extension does not support class filtering. All classes detected will be inpainted. This cannot be changed until the Adetailer extension has been updated or forked with support for class filtering added. Additionally, the default settings are geared more for bbox masks. Since this model produces segmentation masks, I highly recommend increasing "inpaint only masked padding pixels" to 64 or higher.

    Refer to the Adetailer documentation for further instructions because neither I nor my friends use a1111 or its variants.

    Future Plans

    Accuracy: I'm mostly happy with the current state of the model, but it could be better. I'm planning on doing another tuning pass over the current dataset to improve the accuracy of the masks and retraining the model to use a larger parameter version of yolo.

    More Classes: With a larger parameter model, I'm more comfortable adding additional classes without the fear of it diluting the accuracy of other classes. I'm open to suggestions, but my current plan includes adding eyes, mouths, balls and nipples. I'm also considering adding a penetration class to mask both the object and orifice that's being penetrated, but I'm just dreamin' here.

    HANDS?: Yes, but not anytime soon unfortunately. Hands are complicated and I'd need to do a bit of research on how best to tackle them.


    Comments (54)

    8747928hpFeb 9, 2025· 3 reactions
    CivitAI

    Bless you on a gargantuan amount of levels for making this. I wish I could give you 10,000 Buzz but for now, I can only provide 1,000.

    eihpirtsa
    Author
    Feb 10, 2025· 1 reaction

    thank you <3 glad you find it useful. x3

    furpicFeb 10, 2025· 2 reactions
    CivitAI

    Thanks, it works well. And finally on SwarmUI they added a way to choose what you need without Comfy, so using your adetailer is more comfortable now.

    eihpirtsa
    Author
    Feb 10, 2025· 1 reaction

    Glad you're enjoying it! My friend mentioned his PR was merged but I forgot to update the description. Thanks for the reminder. It's been updated now. I believe confidence threshold still isn't supported yet, however. Unsure if a PR is active or if support is planned, but I'll leave my recommendation to set an appropriate threshold in case support is added.

    furpicFeb 11, 2025

    @eihpirtsa Anyway, it works and works well.

    neilarmstron12Feb 15, 2025· 1 reaction
    CivitAI

    I use A1111, and have no idea how to use the other modes. Luckily I just wanted a face aDetailer, and my extension defaults to the face mode.

    binsoo0920178Feb 24, 2025· 1 reaction
    CivitAI

    Can you share a ComfyUI workflow? I don't know how to use it.

    DraconicDragonFeb 25, 2025· 3 reactions

    It's in the description in the "How do I use it?" section...

    Anyway, since I was playing around with this in a separate workflow, I made one here. You should be able to just drag the image into Comfy and get the workflow, or download it and drag it into Comfy:
    https://civitai.com/images/59961416

    eihpirtsa
    Author
    Feb 25, 2025· 2 reactions

    @DraconicDragon Yep, this workflow looks good to me and would be what I would recommend starting with. Small tip, though. Impact has a node labelled "DetailerDebug(Segs)" which includes additional image outputs for "cropped" and "cropped_refined." These outputs do the same thing as the mask nodes you have at the bottom of the workflow, providing previews of which objects are detected, and can be used as a "before and after" preview. Cleans up the workflow a bit :)

    I also have an example workflow uploaded in the relevant thread on the Furry Diffusion Discord. I'll get around to uploading it to CivitAI tomorrow, just saw this comment as I was getting ready to go to bed x3

    binsoo0920178Feb 25, 2025· 1 reaction

    @DraconicDragon thank you so much!

    GoblintideMar 17, 2025
    CivitAI

    Is there any way to help it get the correct coloring on an inpainted area? Even when reducing the denoising strength, sometimes the color will be slightly different from the rest of the anatomy. I'm using a1111/reforge and I haven't had this issue with adetailer before. Is there a simple setting I've missed?

    besides me being overly autistic over that, this works really well, great job.

    ditaMar 20, 2025· 1 reaction

    Random commenter here. I haven't tried this model yet, but here are a few tips to try.

    - In the aDetailer settings, under Inpainting, ensure that 'Use separate VAE' is not turned on and that all the other 'Use separate' settings are also off. If you change them, it's possible you'll get something that doesn't match the rest of the image, either in terms of the color, the level of detail, etc. For example, I often end up with hands that look old and wrinkled because they are too detailed for the rest of the image.

    - In your reForge settings, in the textbox just below the 'Apply Settings' button, type match and you'll see an option come up for 'Apply color correction to img2img results to match original colors.' If it is checked, try it unchecked. If it is unchecked, try it checked.

    - If you've tried both of those and are still having mismatched colors, you might want to consider changing your VAE—not the 'use separate VAE' setting, but the overall VAE for that particular image or set of images. Some VAEs can be slightly inconsistent like this, according to something I read just a couple days ago. I thought it was on the XL_VAE_C page, but it wasn't that, so I can't remember where it was now. I have the same problem sometimes when I use a tiled upscale like Ultimate Upscale. Some squares will come out with a somewhat different color appearance.

    Those are the only ideas I have for you, so I hope at least one of them is helpful!

    eihpirtsa
    Author
    Mar 20, 2025· 1 reaction

    What's likely happening is the inpainted area doesn't have enough "context" of the rest of the image, meaning the mask is too "zoomed in" on a particular area and it can't match the colors/lighting to the rest of the character. I don't use a1111, but looking at the settings available, I believe you'll want to play with "inpaint only masked" and "inpaint only masked padding pixels." If those do what I think they do, it'll increase the context and "zoom out" further to allow the denoising process to blend better. Increase the padding in increments of 32 pixels until you're happy with the results. I think a1111 also has a feature called soft inpainting that I've heard recommended before, which is supposed to help with blending, but I'm not 100% sure how that works.

    ditaMar 21, 2025· 1 reaction

    @eihpirtsa Yeah, those do exactly what you think. If that's the culprit, then increasing the 'padding pixels' will help. I usually use anywhere from 64 to 256 for manual inpaints, depending on the size of the area and how much context it has. And sometimes, even with 256px of context (padding), it still isn't enough, so when that happens I have to switch to 'inpaint whole image' instead.

    A1111 does have soft inpainting, but afaik, it's not available to aDetailer. Only for manual inpainting.

    GoblintideMar 21, 2025· 1 reaction

    Thank you both so much for the detailed replies! Increasing 'Inpaint Only Masked' padding and 'Inpaint Mask Blur' seems to solve the issue!

    Curvy_BeastiesMar 24, 2025
    CivitAI

    In a1111 there is no way to specify classes for detection (yolo classes don't work if the model is renamed to ..-world).

    Is there any way to cut out everything except face and pawpads from this .pt? My attempts to cut out the unnecessary classes with a Python script (written by Grok 3) were not successful... The face is detected, but the pawpads are not.

    eihpirtsa
    Author
    Mar 24, 2025· 3 reactions

    This is a problem exclusive to a1111 and its forge variants. The best course of action would be to ask the developer of Adetailer to update the extension to allow class specification, or if they're unwilling, someone could fork the extension and add the functionality themselves. I'm a ComfyUI user and I made this model with ComfyUI users in mind, so I can't really do much from my end. If someone does decide to fork the extension and add the functionality, I would be happy to link it on the model page.

    As far as modifying the model file to cut out other classes, I honestly have no idea. I've never heard of this being done before. I suppose it's possible since the weights are open source and you could likely filter them in some way? But I'm not sure how well that would work.

    Sorry I can't be of more help. SwarmUI fully supports the model with class specification if you're looking for a more user-friendly gui to use it with.

    Curvy_BeastiesMar 25, 2025

    @eihpirtsa Solved with ChatGPT o3. The Python script works well for truncating classes. I can post it here, but the formatting got deleted :(

    Curvy_BeastiesMar 25, 2025

    @eihpirtsa Anyway, thanks for the perfect model.
    P.S. I created a ticket in the adetailer (a1111) repo. Hope they fix it soon... I'll use the converted model for now.

    applewubsMar 30, 2025· 2 reactions

    @YakudzaKY Sadly, people have pointed this out in the past and the dev ends up closing such tickets as "Not planned", so...

    justAlizardMar 27, 2025
    CivitAI

    Insanely good detection. Pretty much eliminates the issue with seams occasionally found in other auto-inpainters.

    I'm on reForge so only the face function works but personally that's all I was looking for. Edit: False alarm, I was being too hasty!

    Works just as well on non-furry stuff as far as I could tell, but with such a tight and accurate mask the default 32 pixel padding was definitely not enough. 64-128 seems more ideal.

    eihpirtsa
    Author
    Mar 28, 2025

    Glad you're enjoying it. Not sure why only the face function would work. It's the first class in the model (index 0), so there's probably something funky going on with a1111's adetailer and multiple classes. Does reforge have its own implementation of adetailer or do you still use the version by bing-su? The default behavior should be to inpaint all classes detected, but unfortunately adetailer doesn't support filtering by class. I've had reports both from users who can only detect faces and from users who have no issues detecting all the available classes. No clue if it's something to do with a1111/forge/reforge or something else.

    I was surprised at how well it worked on non-furry characters, considering there are 0 humans/human-likes in the training data. I think just due to the volume of data it has a pretty good idea of facial anatomy regardless of species. But yeah, the default settings in adetailer are geared more toward bbox detection, so increasing the mask padding is required for segmentation models. I'll probably add a blurb about that to the description since multiple people have been tripped up by the default settings.

    justAlizardMar 28, 2025

    Yeah, I spoke too soon... I just know I had some problems detecting pawpads yesterday, so I mistakenly figured only the face class would work in reForge since this is "technically" optimized for ComfyUI.

    But like you said, it does indeed seem to detect those other classes perfectly as well, since I tried it on pussy/anus prompts today without issues. I guess pawpads are a special case like you already mentioned, certain art styles being harder to detect and all that.

    And yes, it is the bing-su version... but I can definitely see the detection threshold becoming an issue on a1111 variants when you can't separate those classes. All the more reason to switch to ComfyUI, I suppose!

    Edit:

    I was having some issues last night when several classes were detected, namely genitals being generated ad infinitum like a house of mirrors, the usual nasty stuff... BUT luckily I was able to find the fix.

    Just had to change Mask Merge Mode to "Merge" under Mask Preprocessing in the adetailer settings.

    eihpirtsa
    Author
    Mar 31, 2025

    @justAlizard The multiple detections of the same object is something I've experienced and has been reported to me before. I think I have a good idea of why it's happening, and it should be fixed in the next version. It's not intended behavior, but I'm glad you found a workaround for the issue.

    Creating a model that mostly works is pretty easy, but finetuning it to be better has proven to be difficult. Fortunately I think I've finally solidified my plans on how to move forward, so hopefully that pans out and I can focus on expanding the functionality of the model soon.

    plsknotthisagainApr 6, 2025
    CivitAI

    Thanks it's awesome! I switched to ComfyUI just for this lol.

    Ball and Bulge detection would be amazing

    eihpirtsa
    Author
    Apr 12, 2025· 1 reaction

    Glad you're enjoying it. Once you get used to ComfyUI, it's hard to use anything else. :)

    Balls are actually planned for version 1.3. Currently working on 1.2, but that's primarily to transition to a higher parameter model and focuses on increasing accuracy rather than adding new classes. When I start working on 1.3, I'll see how viable it is to detect bulges and consider adding it to the model. Thank you for the suggestion.

    KD67Apr 15, 2025
    CivitAI

    Thanks so much for making this! On SwarmUI detection works perfectly but for whatever reason during inpainting it all produces a fully black image where the masks are, I've fiddled with the denoising and threshold as well as switching between all classes and specific classes and it always produces the same result. Any help troubleshooting would be much appreciated :)

    eihpirtsa
    Author
    Apr 16, 2025

    I'm not super familiar with SwarmUI since I don't use it, but it sounds like the inpainting process is failing for some reason. Are there any errors in the console output that could suggest why it's failing? If you have Discord, feel free to DM me and we can go back and forth trying a few things. My Discord name is "astriphie".

    synekzuk885Apr 18, 2025· 2 reactions
    CivitAI

    I was so satisfied with the accuracy that I sent a 2,000 Buzz tip.

    I’d like to request that when PENIS is detected, it also detects the testicles along with it—or at least allows for detecting the testicles on their own—as that would be more useful for my purposes.

    If possible, I would appreciate your consideration.

    eihpirtsa
    Author
    Apr 18, 2025· 3 reactions

    I appreciate the tip and I'm glad you're enjoying it. In future versions, I am planning on adding balls/testicles as additional classes to be detected. I won't add it to the penis class since there are plenty of scenarios when a character's balls are visible but not their penis, and I'd like the model to be as versatile as possible. I'm currently working on version 1.2 which focuses on improving the accuracy of what is currently detected, but version 1.3 and onward will be focused purely on adding additional classes.

    synekzuk885Apr 18, 2025

    Thank you!
    I'm really looking forward to the update!

    AFocApr 25, 2025
    CivitAI

    Index out of range when using 'pawpads' as a filter. I wonder why?

    AFocApr 25, 2025

    Ok, lowering threshold helps.

    seakerApr 27, 2025· 1 reaction
    CivitAI

    It works well, but when it comes to the anus, it doesn't. It tends to put something else there, like adding different things from the prompt. I would have separated the anus from this and made it a standalone thing.

    eihpirtsa
    Author
    Apr 27, 2025

    Fdetailer doesn't add, remove or change anything about the image. All it does is create masks for inpainting. It works the same way as if you manually drew a mask around an anus and then inpainted that area, except Fdetailer creates the masks for you. The reason it appears to be adding different things from the prompt is that your denoising value is too high, or you haven't added additional context padding to the masks. I recommend separating the classes into different detailers so you can have separate settings for smaller objects (anuses and pawpads in the current version). I posted a documented example workflow under suggested resources that shows how you would do that if you're not familiar with the process.

    seakerApr 28, 2025

    @eihpirtsa I know how it works, and no, "ADetailer is an extension for the Stable Diffusion WebUI that does automatic masking and inpainting." Removing, adding and changing is basically what inpainting is; you cannot have any kind of improvement without a change, and that's what ADetailer does. Higher denoising strength looks better, but not always, and separating things like pawpads and anus into a second and third ADetailer would make the image way better. I like higher denoising strength, and lowering it past 0.4 seems useless as well, for only slightly better details. If some of these things, like paws and anus, could be separated into their own ADetailer prompts, it would make image generations way better overall. I think it would be pretty cool if these were separated into their own thing so we could have more customization, but I get that might take a while. More options wouldn't hurt, and I don't feel like gathering 20,000 images to do it myself either. If only they added a simple fix like a filter, that would be nice :3

    eihpirtsa
    Author
    Apr 28, 2025· 1 reaction

    You are correct in what the Adetailer extension does. I was just clarifying what the model itself does (mask creation), and why it seems like it sometimes adds other things from the prompt when inpainting smaller objects.

    ComfyUI and SwarmUI both support class filtering. A1111's Adetailer extension does not. It's up to either the developer of the extension to add class filtering, or someone else to fork the extension and add the functionality themselves. I trained the model with ComfyUI usage in mind since it's the cutting edge UI. I apologize if I come off as rude, it's just frustrating to hear about users having problems that are out of my control.

    If you're looking for an immediate solution, ComfyUI can separate each class into its own detailer with separate prompt, denoising and mask padding.

    There was someone in the comments who claimed they were able to separate the classes into multiple models, but it's not something I've looked into myself. I don't use A1111 or any of its forge variants. My focus is on creating models with the latest features, and for that reason I've chosen to train with ComfyUI usage in mind. Training separate models for each class would take a colossal amount of time, far more than it would take to just add class filtering to the Adetailer extension.

    I know this wasn't the answer you were looking for. I just have limited time to work on this stuff and creating workarounds for a single UI isn't really a good usage of time. I hope you can find a solution that works for you. :3 ComfyUI isn't as scary as everyone makes it out to be if you decide to go that route.

    taithrahMay 15, 2025

    @isaac04murray930 These models are object-detection models that tell you where objects are by drawing boxes or masks; they do not actually change any pixels. Inpainting is still handled by the ksampler etc. in the extension.

    The issue you described varies by model because of how their bounding boxes etc. differ. For me, this is usually fixed by adjusting dilation or crop factor; a lower denoise is not always the fix on its own.

    MadMewMay 27, 2025· 1 reaction
    CivitAI

    This is awesome. It would be great if there were just an eye class, without the entire face, in the next version.

    Another bit of feedback: the model has trouble detecting faces that aren't upright, even with a very low threshold. It gets them sometimes, but misses more often than not, for me at least.

    chizhiJun 8, 2025
    CivitAI

    Would you be interested in creating a model specifically for the chastity cage

    JackouslaJun 15, 2025· 5 reactions
    CivitAI

    Can you make a version without the anus? I use it in Forge on the penis, face and similar parts and it works perfectly, but not the anus part; it ends up ruining it.

    KajokenJun 16, 2025· 1 reaction

    Seconding that. The anus part gets ruined 8 out of 10 times.

    shotgundirectoryJun 29, 2025· 10 reactions
    CivitAI

    This works great for the most part, but it insists on turning pawpads into sheaths when using fdetailerAdetailerFor_v11 on reForge. Compare pawpads here.

    kimahriJul 24, 2025· 1 reaction
    CivitAI

    This can no longer be loaded in newer ComfyUI versions due to security updates. It wants a weights_only version. I tried to convert it to safetensors myself, but the loader can't load it then.

    I also tried creating a whitelist as mentioned here with no luck: https://github.com/ltdrdata/ComfyUI-Impact-Pack/issues/931

    Hmm.

    eihpirtsa
    Author
    Jul 26, 2025

    Admittedly, I'm a bit lazy about updating ComfyUI, but I've just updated and I'm not getting any errors when executing my workflow. The model still loads as normal and performs detections as expected. For reference, I'm running a standalone install in a Docker container. On stable branch, not nightly. Here are my versions:

    Comfyui = 0.3.45

    Frontend = 1.23.4

    Impact Pack = 8.21.1

    Impact Subpack = 1.3.5

    What versions are you running?

    I don't think yolo supports the safetensors format, at least not to my knowledge. I couldn't find any official documentation from Ultralytics about it or any successful conversions by third parties. From my understanding, Ultralytics isn't really a great company when it comes to the open source crowd xD

    3421297Aug 11, 2025

    I'll just chime in and say you can work around this by using the ultralytics model loader for ComfyUI, which still uses pickletester: https://github.com/ltdrdata/ComfyUI-Impact-Subpack

    smaugJul 31, 2025· 2 reactions
    CivitAI

    If you are making a new version, could you create, for us poor sods stuck on webUI (I am using this for a game mod that uses webUI API calls), a version with only face detection? If that's not too much of a bother.

    For some reason Adetailer has some kind of memory leak where over multiple generations with this model the detection slows to a crawl.

    Thanks anyways!

    draco18sAug 20, 2025· 1 reaction
    CivitAI

    For what I'm looking to do, this is really close, but doesn't offer me the detection feature I want. Is there a way I could go about retraining it?

    eihpirtsa
    Author
    Aug 29, 2025

    You'd probably have better results training over the base yolo model if this model isn't capable of detecting a feature you need. You can retrain this model (the ultralytics docs have plenty of information on how to train/retrain a model), but I'd recommend using the base yolo model instead. Start with either yolov8 (better compatibility) or yolov11 (smaller and more efficient). There are probably more resources/guides for yolov8 since it's still commonly used today despite being "outdated." From my understanding, Ultralytics' focus has been on improving model efficiency rather than performance, so it's perfectly fine to go with yolov8. Accuracy/performance is going to depend far more on your dataset than on the version.
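
    For reference, the basic retraining flow with the ultralytics package looks roughly like this (the dataset YAML and hyperparameters are placeholders you'd fill in for your own data):

        from ultralytics import YOLO

        # Start from a base segmentation checkpoint rather than Fdetailer itself.
        model = YOLO("yolov8m-seg.pt")

        # "my_dataset.yaml" is a placeholder pointing at your images and
        # segmentation labels in the ultralytics dataset format.
        model.train(data="my_dataset.yaml", epochs=100, imgsz=640)

        # Check the metrics and try the result on an image.
        metrics = model.val()
        results = model("test_image.png")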

    draco18sAug 29, 2025

    @eihpirtsa That is very helpful information, thanks!

    RulingGallantScytheSep 9, 2025· 1 reaction
    CivitAI

    God I hope Webui allows us to detect things by your tag system... I'd use comfyui but then i'd have to use... ComfyUi. It strains my eyes lol

    LovebytesSep 16, 2025· 1 reaction
    CivitAI

    Thank you for this, it's fantastic and has become a staple in my single character workflows.

    Although you've already done more than enough, I would definitely like to add that it would be great to have one of these that also understands the difference between multiple furry species for pics with multiple characters, but don't feel like you have to. You've already done magic with this adetailer.

    CaptainFurryTrashDec 4, 2025· 4 reactions
    CivitAI

    This has been so helpful! Civit is lacking in detection models, and we're lucky one of the most useful ones is for furries!

    Darko19Dec 7, 2025· 2 reactions
    CivitAI

    Best furry detection model. It can even detect a face with a gag and blindfold. Beyond that, the mask areas are perfect.

    Detection
    Other

    Details

    Downloads
    199
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/7/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    fdetailerAdetailerFor_v10.pt

    Mirrors

    CivitAI (1 mirror)