This is a ControlNet model! It requires the ControlNet extension to be installed!
Model Details
This model aims to let users modify what a subject is wearing in a given image while keeping the subject, background, and pose consistent.
I've produced good results in txt2img, img2img, and inpainting.
I've produced good results both with images generated by Stable Diffusion and with pictures I've taken myself.
Installation
Place the .safetensors file into ControlNet's 'models' directory. To use the model, select 'outfitToOutfit' under ControlNet Model and set Preprocessor to 'none'.
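Assuming an AUTOMATIC1111 WebUI install with the sd-webui-controlnet extension (all paths below are illustrative guesses; adjust them to your setup), installation is just a file copy:

```shell
# Illustrative paths for an AUTOMATIC1111 + sd-webui-controlnet setup;
# change WEBUI_DIR to wherever your WebUI actually lives.
WEBUI_DIR="$HOME/stable-diffusion-webui"
MODEL_DIR="$WEBUI_DIR/extensions/sd-webui-controlnet/models"
SRC="$HOME/Downloads/outfitToOutfit_v20.safetensors"

mkdir -p "$MODEL_DIR"
if [ -f "$SRC" ]; then
  cp "$SRC" "$MODEL_DIR/"
else
  echo "Model file not found at $SRC" >&2
fi
```

After copying, refresh the model list in the ControlNet panel (or restart the WebUI) so 'outfitToOutfit' appears in the dropdown.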
Tips for use
Images with a clearly defined subject tend to work better.
This model tends to work best at lower resolutions (close to 512px).
If you run into trouble at higher resolutions, try running a first pass at a lower resolution, then use img2img (or txt2img with Hires. fix) at a lower denoising strength to upscale, while continuing to feed your original input image to this ControlNet unit.
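The two-pass workflow above can be sketched as a small helper that picks a first-pass size near 512px while preserving aspect ratio (the 512 target and the rounding to multiples of 8 are my assumptions, not something specified by the model):

```python
def first_pass_size(target_w, target_h, base=512):
    """Scale the target resolution down so the shorter side is about
    `base` px, rounding each dimension to a multiple of 8 (a common
    Stable Diffusion requirement)."""
    scale = base / min(target_w, target_h)
    round8 = lambda v: max(8, int(round(v * scale / 8)) * 8)
    return round8(target_w), round8(target_h)

# Generate at the low resolution first, then upscale with img2img
# (or txt2img + Hires. fix) at a lower denoising strength.
print(first_pass_size(1024, 1536))  # (512, 768)
```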
I recommend starting with CFG 2 or 3 when using ControlNet weight 1.
Higher CFG values combined with a high ControlNet weight can lead to burnt-looking images.
Experiment with ControlNet Control Weights of 0.4, 0.45, 0.5, 0.6, 0.8, and 1.
A lower weight allows for more changes; a higher weight keeps the output closer to the input.
Anything below 0.5 seems to rely more on the Stable Diffusion model, whereas anything 0.5 and up seems to weight the ControlNet model more heavily.
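To make the weight/CFG experimentation above systematic, here is a tiny grid helper (the specific values come straight from the tips; the function itself is just an illustrative sketch, not part of any real API):

```python
from itertools import product

def settings_grid(weights=(0.4, 0.45, 0.5, 0.6, 0.8, 1.0), cfgs=(2, 3)):
    """Pair each suggested ControlNet Control Weight with a low CFG value.
    Low CFG (2-3) is recommended at weight 1 to avoid burnt-looking output."""
    return [{"control_weight": w, "cfg_scale": c} for w, c in product(weights, cfgs)]

for s in settings_grid():
    print(s)  # feed each combo into an X/Y/Z plot or a batch run
```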
When using img2img or inpainting, I recommend starting with a denoising strength of 1.
Experiment with a denoising strength of 0.75.
When inpainting, I recommend trying "latent nothing" under Masked content.
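The inpainting advice above collapses to a small set of starting values (the keys are my own illustrative labels, not real API parameters; map them onto the corresponding WebUI controls):

```python
# Suggested starting point for img2img / inpainting with this model.
# Key names are illustrative labels, not real WebUI or API parameters.
INPAINT_START = {
    "denoising_strength": 1.0,    # start at 1, then try 0.75
    "masked_content": "latent nothing",
    "cfg_scale": 2,               # low CFG when ControlNet weight is 1
    "control_weight": 1.0,
}
print(INPAINT_START)
```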
Consider lowering the model's weight when generating higher-resolution images.
The higher the resolution of the output image, the harder it tends to be to alter its content relative to the input image.
If the output isn't changing enough from the input, try increasing the weight of the prompts or decreasing the Control Weight of the ControlNet unit.
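One way to act on the resolution tip above is a simple heuristic that eases the Control Weight down as the output resolution grows (the interpolation endpoints are my own guesses, not measured values):

```python
def suggested_weight(short_side, lo=512, hi=1024, w_lo=1.0, w_hi=0.6):
    """Linearly interpolate the ControlNet Control Weight from w_lo at
    `lo` px down to w_hi at `hi` px, clamping outside that range."""
    if short_side <= lo:
        return w_lo
    if short_side >= hi:
        return w_hi
    t = (short_side - lo) / (hi - lo)
    return w_lo + t * (w_hi - w_lo)

print(suggested_weight(512))   # 1.0
print(suggested_weight(768))   # 0.8
```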
This model can work well in combination with other ControlNet models, such as OpenPose.
Description
This version was trained on a completely new dataset that uses more real data.
Main changes:
Overall improvements in character consistency and quality.
Discontinued the use of "the same man" / "the same woman" in prompts.
Reduced NSFW generations.
Known issues:
Color consistency could still be improved, as could clothing generation overall. Some of this may just be a matter of curating more, higher-quality data; other improvements may require tweaking the training script.
Files
outfitToOutfit_v20.yaml