Ladies and gentlemen, we understand that you have come tonight to get a supa-dupa fast XL render of the most kinky NSFW AI arts... We regret to announce that this is not the case. Instead, we come tonight to bring you... the traditional release of a classic, slow but creative SDXL photorealistic model, with natural skin textures, natural lighting and rich composition. Ladies and gentlemen, prepare to wait forever for your renders...
Okay, fck it, I lied. It's a D'M'D-derivative PONY-SDXL model... What are you gonna do?
Version details
R8 A4 (CLIP Skip 2)
Based on R8 Alpha2, with more training on the same dataset. Very creative and surreal. Better surrealistic aesthetic than R8 A2, and much richer composition and details than R8 A3 DMD. Use a two-pass workflow: T2I: Euler A + Beta, 30 steps, CFG 2.5; I2I HiRes 2 MPx: Euler A + Exponential, 35 steps, CFG 2.7, denoise 0.7. LCM might also work for the T2I pass; it will give a smoother result. In that case, increase the contrast on the I2I pass by using a DPM++-based sampler with the Simple scheduler, or just increase CFG with Euler A.
R8 A3 DMD (CLIP Skip 2)
Base settings: Pass 1 (T2I) => LCM+Simple, CFG 1.1, 6 steps; Pass 2 (I2I HiRes) => Euler A+Exponential, CFG 1.0, 8 steps, denoise 0.5..0.65.
Alt settings (more texture): Pass 1 => LCM+Simple, CFG 1.3, 7 steps; Pass 2 => DPM++ 2S A+Simple, CFG 0.5, 4 steps, denoise 0.3.
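For quick reference, the two presets above can be jotted down as plain Python dictionaries (the key names are my own shorthand, not actual ComfyUI or A1111 field names):

```python
# R8 A3 DMD two-pass presets, transcribed from the notes above.
# Key names are informal shorthand, not real UI/node field names.
BASE = {
    "pass1_t2i":   {"sampler": "LCM", "scheduler": "Simple", "cfg": 1.1, "steps": 6},
    "pass2_hires": {"sampler": "Euler A", "scheduler": "Exponential", "cfg": 1.0,
                    "steps": 8, "denoise": (0.5, 0.65)},  # recommended denoise range
}
ALT = {  # "more texture" variant
    "pass1_t2i":   {"sampler": "LCM", "scheduler": "Simple", "cfg": 1.3, "steps": 7},
    "pass2_hires": {"sampler": "DPM++ 2S A", "scheduler": "Simple", "cfg": 0.5,
                    "steps": 4, "denoise": 0.3},
}
```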
In comparison with R8 A2, this version gives stronger separation of characters (stronger bokeh, smoother background), and you can use 6 steps in the T2I pass + 8 steps in the I2I pass. In I2I you can try the DPM++ 2S A sampler; it might work with low denoise and lower CFG.
Based on R8 Alpha2. A bit more training with the same dataset as for A2 (read below). It has DMD 4-step added (weight 0.94 for the model and 0.77 for CLIP). Anatomy (limbs, fingers, eyes) and textures are a little better than in A2. Same aesthetics, colors, clarity and realism.
R8 A2 (CLIP Skip 2)
Based on R8 Alpha1. More training with a new dataset (realistic, creative, dark sci-fi and horror, nature, male and female characters, some animals and anthro). Anatomy (limbs, fingers, eyes) and textures are still cooking. Aesthetics, colors, clarity and realism show noticeable improvement over R8 A1. Please help me test it and give feedback on where it is going.
R8 A1 (CLIP Skip 1)
Pre-alpha test version released specially for @Aestherotic
You probably want to skip it. It is sharper than R7 Final, but far from release stability (just ~30 epochs of split fine-tuning). Anyway, feedback is always welcome.
R7 Final XL (CLIP Skip 1)
[FULL RELEASE - FP8 + FP16 (will be uploaded at 4096 BUZZ) + FP32 (at 16384 BUZZ)]
I worked very hard to deliver the look of real-life photographs made with regular-grade digital cameras and mid-grade attachable lenses. Big attention was paid to anatomy quality (anatomically correct limbs and natural poses), eyes (e.g., fixing the uneven-eyes issue), fingers (fixing the wrong-finger-count issue), and background consistency (basically, fixing the issue where straight lines in the background don't match up). I can't say this version fixes it all, but I tried to reduce the chance of such issues to a low level. I will keep fighting these issues while trying to increase creativity, realism and texture detail.
R7 smol soft
[EXPERIMENTAL RELEASE VERSION]
It was trained in full fine-tuning mode with BF16 precision. CLIP Skip 2 was used during training, but CLIP Skip 1 also seems to work and gives sharper renders. The trained version was quantized to FP8 for lower-grade computers and lower VRAM requirements.
A low-weight mixture of R7j RC2 was applied for better stability and style inheritance.
Training goals: more detailed and complex backgrounds; improvements to eyes, fingers and structural consistency in backgrounds (straight lines matching); contrast and color control via prompts. Quick tests show improvements in eyes, fingers and textures. R7j had overly sharp, unnatural textures in some cases; this version has a lower chance of unnatural textures.
R7j RC2
!!! IMPORTANT !!! For single-pass Text-to-Image use DDPM (better textures) or Euler A (smoother). 8 steps give a smooth draft with simplified composition but more stable anatomy and environment structures. 60 steps with DDPM give good texture and mostly stable anatomy. The T2I single pass works well at high resolutions like 1440x720 or 1440x1024 (I didn't test the single pass at higher resolutions, but it may work as well).
Other single-pass settings may lead to an overbaked or overly smooth picture lacking texture. Please give feedback if you experiment with settings. A double pass (1st with Euler A ==> 2nd with DPM++ 2S A) gives much more room for experimentation.
HiRes Fix works well up to 3.5 MPx with denoise 0.5+.
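As a rough aid for picking second-pass resolutions, here is a small sketch (my own helper, not part of any UI) that scales a first-pass render to a target megapixel count while preserving aspect ratio and rounding each side to a safe multiple:

```python
import math

def hires_target(base_w: int, base_h: int, target_mpx: float, multiple: int = 8):
    """Scale (base_w, base_h) to roughly target_mpx megapixels,
    keeping the aspect ratio and rounding each side to `multiple`."""
    scale = math.sqrt(target_mpx * 1_000_000 / (base_w * base_h))
    w = round(base_w * scale / multiple) * multiple
    h = round(base_h * scale / multiple) * multiple
    return w, h

# E.g. a 1440x832 first pass pushed to the ~3.5 MPx ceiling mentioned above:
print(hires_target(1440, 832, 3.5))  # (2464, 1424)
```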
Still in the Release Candidate phase because of sampler-settings limitations for the single-pass workflow and some unpleasant textures on nature greenery, among other things (possibly too-strong blemishes and freckles). In general realism it should give better results than R7i.
PS: I'm dropping the "LowDMD" suffix because my training doesn't wash DMD out completely; it's still there (for example, composition complexity, smoothness and contrast depend on steps). This version can still give OK-ish pictures at 8 steps, but I personally don't like their smoothness and lack of complexity (very good as a draft, though).
R7i RC LowDMD
Most stuff works pretty well. The single-pass workflow gives noticeably better quality than R7f, but deeper testing is needed. You can start with Text-to-Image single-pass mode with DPM++ 2S A + Karras (CFG 2.0...2.5, 8-16 steps [lower CFG with 16], 1280x1024 or 1440x832). A second pass (HiRes Fix) improves quality and works well even with denoise 0.5+. Didn't test it with LCM, but it should work (expect very smooth results).
Guys and gals, could you please test and compare this version? I really need that feedback with pros and cons. I would very much appreciate some important details about the performance and creativity of this RC.
PS: trying hard to upload FP32, but the S3 upload hangs at the last 10% of the payload; let me know if you really want FP32 and I will try even harder ;)
R7f Beta LowDMD
Improved performance even in the single-pass workflow with Euler A + Karras, CFG 3.0+, 30+ steps, and high resolutions like 1440x832 or 1280x1024... BLA-BLA-YADA-YADA FURTHER MARKETING BS...
Consider it a Beta release because I'm not fully satisfied with eye and finger stability. Also, in some cases I'm not fully satisfied with textures and the behavior of some samplers.
But in comparison with the Alpha* releases of R7 it shows noticeably better performance in anatomy, lighting, textures, etc.
R7e Alpha3 LowDMD
0.03 of the 8-step DMD2 CFG 1.5 LORA mixed in (just to improve finger and eye quality).
It is a bit less sharp than Alpha 2 (R7d), but eyes and fingers are better. Textures and lighting should also be improved.
R7d Alpha 2 NoDMD
Based on R7d Alpha with additional training and reduced DMD2 LORA influence. Use Euler A in the cases where you would use LCM on DMD2-based checkpoints. Start with DPM++ 2S A or other DPM++-based samplers for a sharper image. Fewer steps => smoother image; 20+ steps for a sharp image. Beyond 20 steps, sharpness and contrast stay the same (as I observed during my quick tests). A 2-pass workflow is strongly recommended, but you can try a single pass with 30 steps and DPM++ 2S A.
NB: this version works OK at extreme aspect ratios like 1440x832 (w x h); I'm waiting for 1440x720 images (will post soon if any good ones are produced).
R7d Alpha
It is basically a PoC with lower DMD2 influence, made for @AFD_0, who has trouble with DMD2-based checkpoints in Forge. Try this version with higher CFG and different samplers (tested quickly with Euler A and DDPM at CFG 3.0..4.0). It should work, but no deep testing was done.
UPD: as reported by @AFD_0, prompt adherence might be unsatisfying... So whether it works for you or not, feedback is still welcome, but you probably want to skip this alpha.
R7c (E72+)
Still not ideal. Eyes are still unstable, but textures and lighting are more realistic and detailed. Read the Version Info for more details.
2Pass workflow => https://civarchive.com/posts/24204987
T2I: LCM + Karras; 14 steps, CFG 2.1
HiRes Fix: 2.0Mpx; Euler A + Beta; 12 steps; CFG 3.0; Denoise [0.27..0.29].
It produces images with moderate or low contrast but good sharpness. You can probably replace the HiRes Fix sampler with something more contrasty or give it more steps.
2Pass workflow => https://civarchive.com/posts/24205203
T2I: same as above.
HiRes Fix: DPM++ 3M SDE Heun + Beta; CFG 1.4; Denoise [0.27..0.29].
R7-12011-E37
This is not the final version of R7. Local training still in progress... Some issues might appear.
Recommended settings:
Single pass
Don't use LCM in the single-pass workflow! No matter what CFG you set, it will give a smooth picture.
Use Euler A with the Exponential scheduler (for a smooth render) or Beta, SGM_Uniform, DDIM_Uniform, Karras (for an extra-sharp render).
12 steps is optimal (stability / contrast / speed). 6 steps: a rough draft, with possible color artifacts. 8 steps: low detail, with some chance of anatomical issues. 16+ steps: a much more contrasty and sharp render. 60 steps: a very sharp image with simpler forms and bigger details (smooth textures).
At 12 steps you can use CFG 3.0 or more with Euler A. The more steps you use, the lower the CFG must be set. It is not recommended to set CFG lower than 0.9, but you can experiment with high step counts (like 48-60) and contrasty samplers such as UniPC or DPM++-based samplers.
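One illustrative way to encode this steps-vs-CFG rule of thumb is a tiny helper. The 3.0 anchor at 12 steps and the 0.9 floor come from the text above; the linear slope is purely my own guess:

```python
def suggest_cfg(steps: int, cfg_at_12: float = 3.0,
                slope: float = 0.04, floor: float = 0.9) -> float:
    """Lower CFG as the step count rises, never going below the floor.
    The linear slope is an assumption for illustration only."""
    return max(floor, cfg_at_12 - slope * (steps - 12))
```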
Schedulers: KL_Optimal and Exponential usually give a smoother picture, but KL_Optimal + UniPC gives a very sharp, overburned result. Karras is smoother than SGM_Uniform.
Two-pass / HiRes Fix / ADetailer
For the first pass you can use Euler A with 6..14 steps. 12-14 steps give more small details and textures (detail simplification starts closer to 20 steps).
For the 2nd pass or HiRes Fix I recommend contrasty samplers with a low denoise value and high CFG. I tested HiRes Fix at resolutions up to 6 MPx. The lower the denoise, the safer extreme resolutions will be.
DPM++ 3M SDE Heun: I like this sampler with denoise lower than 0.4 for the second pass; it brings out textures. DPM++ 2S A and Heun also work for texture rendering.
The number of steps for the second pass depends on the contrast and sharpness you need. 12-16 steps give good contrast with a good amount of micro-detail. 20 steps and more minimize texture noise and give a picture closer to digital art and CG.
Fix bad images
If you get a picture with noise and/or anatomical issues, I recommend a 2-pass render with the LCM sampler in the 1st pass.
LCM + CFG 2.0...3.0 will fix anatomy and remove noise.
A 2nd pass with Euler A or another contrasty sampler will restore precision and texture.
Make sure the 2nd pass uses denoise higher than 0.5.
General description
I'm still fighting the smooth textures, poor composition and over-simplified backgrounds of most DMD2-based checkpoints. This is yet another round in the war against very smooth LCM renders.
I've made a separate branch for this NoiceAI Beta release as a request for extensive testing. I want to make sure this is a really viable direction. It works for me, but might not work for you.
Highlight:
It supports the Euler A sampler with CFG 2.3 or slightly more.
It works from 6 steps, should be stable at 8 steps, and becomes more contrasty at 16 steps (lowering CFG helps a bit).
The Exponential scheduler is a friend. Use SGM_Uniform or Karras if you need more texture noise right in the 1st pass.
Gives realistic skin texture in a double-pass workflow with 8 + 8 steps (or 6 then 8 steps). Use a contrasty sampler for the 2nd pass (DPM++ 2S A + Karras or SGM_Uniform).
As always, drop my samples into ComfyUI to get the render parameters.
And of course, show me what you get with this model. Write comments with pros and cons.
Description
Improved realism (lighting, colors, skin texture) and sharpness. Decreased chance of duplication at high resolution (initial T2I rendering at 1440x1024 or 1440x720; HiRes Fix at 4 MPx with denoise 0.5 and more). Best in a 2-pass workflow (drop the OP samples into ComfyUI, or carefully read the instructions about sampler params in the Model Description). Still a Release Candidate because the 1-pass workflow has some limitations and flaws, and nature textures have some flaws as well.
Comments (9)
This has some pretty good outputs.. gives a very instagram, candid vibe, which is great! Unfortunately it seems to fall short with nsfw prompts, while it CAN do them, it starts to lose the aesthetic of candidness... maybe overtrained on nsfw images? Honestly I'm not sure. Like the thighs get super thick, as does the rear, especially spread pussy tag, also lips more full, transforms the people in the photo. Really though I've seen this in most models, but what I've also noticed is how bad it is at cunnilingus, and unfortunately this one does not fare any better.
Prompting it can sometimes get the exact opposite effect and have fellatio instead. I imagine this has to do with the tagging, as oral sex can be seen as both fellatio and cunnilingus, and if only tagged as that, it's going to confuse the ai. Or it could even be it wasn't even trained in it at all, lol.
I remember asking the author of the 2gig super lora they made of all-in-one NSFW about if it's good at cunnilingus and they were like 'oh.. I forgot to add that'. So yeah. I'm only typing all this cause it seems you like constructive criticism (really more should embrace this) and ask for what the model falls short in.
Good points. NSFW images in the datasets aren't very diverse indeed. I'll think about it in the future. Need to do something about textures and probably fingers (still not ideal). Background consistency and body symmetry can also be improved. As for NSFW diversity and adherence, it requires more thought about proper dataset preparation. I don't want to drift back into a porn-only checkpoint.
Thanks for the feedback... very useful.
How much of it, percentage-wise, is the Pony part? It should be enough for porn on its own. No need for a specific dataset.
@johannalmir657 TL;DR: I have no idea in precise percentages... I have no reliable benchmark to measure it.
I haven't mixed in the pure PONY stuff for a long time already. The dataset has captions containing about 30% Booru tags from the Waifu Diffusion 1.4 tagging model (EVA Large something), but these are not precisely the PONY tags. Almost no tags like score_7_up and lower, no "pos_safe", "source_photo", etc. Almost no specific characters (except for Cammy White). So I can expect the Pony part is noticeably weakened. This is why I decided to stop playing with the PNY/ILL suffix in model version names. This strand is a mix of small bits from everywhere, heavily trained on a dataset with WD1.4 + long natural-language captions.
Look at the metadata of my samples. If you see a very long mixed prompt, that is the original prompt from the dataset without any modifications. You can evaluate for yourself how Pony-oriented the dataset is.
@homoludens OK, then Waifu Diffusion is enough for porn, I think. But why do you use an anime model?
@johannalmir657 WD 1.4 is not an image-generation checkpoint. There is a custom node for ComfyUI called WD1.4 Tags. It converts an image into Booru tags. For this node there is a bunch of different image-to-text models; EVA Large *** is one of them.
I use this node and captioning models to add quasi-PONY tags to captions. Hope this makes sense to you.
See my latest samples. I created a workflow with NoiceAI and SAT Magic Realism as a texture refiner. I like the result so much! ^__^
NoiceAI is creative, with more stable fingers. Magic Realism has very contrasty, sharp textures. Very good together!