ULtraReal (12GB) ft. Simv4 CLIP-G
Full checkpoint: do not load an additional CLIP or VAE
This model uses Simulacrum CLIP at a custom weight
This model is slightly less realistic than the SuperModel Edition but is more flexible with Anime and Manga
This model is excellent for anime-to-realistic image-to-image.
ULtraReal 2 (SuperModel Edition)
This model uses FP32 timestep training
Realistic faces and characters across hundreds of PONY trainings
Full FP32 precision (UNET can be downcast with no issues)
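The downcast mentioned above can be sketched with a toy tensor in PyTorch; this is an illustrative sketch of what downcasting an FP32 weight to BF16 does, not the actual checkpoint-conversion code:

```python
import torch

# Toy stand-in for an FP32 UNET weight (name and shape are hypothetical)
w_fp32 = torch.randn(8, 8, dtype=torch.float32)

# Downcast to BF16: halves memory per value
w_bf16 = w_fp32.to(torch.bfloat16)

# BF16 keeps FP32's 8 exponent bits, so magnitudes survive the cast;
# only mantissa precision is reduced
bytes_saved = w_fp32.element_size() - w_bf16.element_size()  # 4 - 2 bytes per value
```

Because BF16 shares FP32's exponent range, the round trip back to FP32 stays within the mantissa's relative error, which is why the downcast is generally safe for UNET weights.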
ULtraReal 8GB PONY
This model was trained using FP32 precision with a focus on realism
The hybrid version is 8GB but will run as fast as base PONY
FP32 CLIP is as fast as FP16 and superior in 99% of cases (CLIP is handled on the CPU)
This model is intended for use with up-scaling (See images for workflow)
This model uses FP32 CLIP, so the following launch flags should be used; they will not slow down your it/s unless you have very low system RAM:
ComfyUI: --fp32-text-enc
Forge/Auto1111: --clip-in-fp32
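The split described above (FP32 CLIP on the CPU, downcast UNET on the GPU) can be sketched in PyTorch; the two modules here are hypothetical stand-ins, not ComfyUI internals:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two components
text_encoder = nn.Linear(16, 16)   # plays the role of CLIP
unet = nn.Linear(16, 16)           # plays the role of the UNET

# Keep the text encoder in full FP32 precision on the CPU...
text_encoder = text_encoder.float().to("cpu")

# ...and downcast the UNET, moving it to the GPU when one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
unet = unet.to(dtype=torch.bfloat16, device=device)

cond = text_encoder(torch.randn(1, 16))                  # FP32 conditioning on CPU
out = unet(cond.to(dtype=torch.bfloat16, device=device)) # BF16 denoising step
```

Because the text encoder runs once per prompt while the UNET runs every step, keeping CLIP in FP32 on the CPU costs almost nothing per iteration but frees VRAM for the UNET.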
Version 1.0 is outdated and should not be used in most cases.
All images should be repeatable when loading the source image.
Note: I normally try to credit image remixes; if you see "your" prompt, comment below and I will link your image.
Comments (30)
Garbage.
Maybe a dumb question, but will the 12GB version run on a card with only 11GB VRAM and 16GB system RAM?
The gallery results look significantly better with the 12GB version versus the Lite or Hybrid variants.
Yes, you can; they all run at the same speed unless you're using a 6GB or 4GB video card
@Felldude Appreciate the info, thanks! Wasn't too concerned about it being slower, just wasn't sure if the entire model needed to be loaded into the GPU's VRAM.
@AFD_0 You might have some block loading in Forge, but with ComfyUI even 8GB users can fully load the model: the FP32 CLIP is processed on the CPU, so the size of the UNET is around 6.9GB. With updated Torch and CPU block offloading, the need to fully load a model has been reduced.
@Felldude Just tried the 12GB version and it works just fine on my 2080 Ti with 11GB VRAM using Easy Diffusion. Very impressive results so far!
Wow, very good quality realistic PONY model. Does applying an FP16 LoRA to the workflow degrade the quality or have a negative impact?
Thank you, and no reduction in quality, as most people (including myself in the example images) are downcasting the UNET. If you're one of the few forcing full FP32, the LoRA would be upcast before projection.
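The upcast described above can be sketched as a LoRA merge in which the FP16 delta is promoted to FP32 before being added to the base weight; the shapes, alpha, and rank here are illustrative assumptions, not values from this checkpoint:

```python
import torch

# Illustrative shapes; real LoRA ranks and layer dims vary
W = torch.randn(16, 16, dtype=torch.float32)  # base weight kept in full FP32
B = torch.randn(16, 4, dtype=torch.float16)   # LoRA "up" factor, stored in FP16
A = torch.randn(4, 16, dtype=torch.float16)   # LoRA "down" factor, stored in FP16
alpha, rank = 8.0, 4

# Upcast the FP16 factors to FP32 before projecting onto the base weight,
# so the merge itself never drops below the base model's precision
delta = (B.float() @ A.float()) * (alpha / rank)
W_merged = W + delta
```

The key point is that the low-rank product and the addition both happen in FP32, so an FP16 LoRA cannot degrade an FP32 base weight during the merge.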
How does one use the command
Forge/Auto1111 --clip-in-fp32
It is part of the webui-user.bat EXAMPLE:
set COMMANDLINE_ARGS= --unet-in-bf16 --vae-in-fp32 --cuda-malloc --clip-in-fp32
This checkpoint can't run with A1111! Lucky for me I'm running Forge too! ^^
@Felldude This does mean that I have to restart forge with different commandline arguments each time I switch from your checkpoint to other checkpoints, doesn't it? It's quite a hassle, but your checkpoint is so good that I would do it. I just wish there was a more convenient way.
@niwo439 You can have a separate .bat file - For comfy I nearly always start with FP32 clip and BF16 unet as most modern models benefit from it, unless I am trying to run a video model
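The separate .bat file suggested above could look like the following; this is a sketch based on the COMMANDLINE_ARGS example earlier on this page, assuming the standard Forge/A1111 layout where webui-user.bat sets the args and calls webui.bat (the filename of the copy is up to you):

```bat
@echo off
rem Copy of webui-user.bat with the FP32-CLIP launch flags baked in
set COMMANDLINE_ARGS= --unet-in-bf16 --vae-in-fp32 --cuda-malloc --clip-in-fp32
call webui.bat
```

Launching this copy instead of the stock webui-user.bat avoids editing arguments every time you switch checkpoints.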
Thanks a lot for sharing such a treat. One of the best for generating diverse faces; it even surpasses many good SDXL models in this regard, and yes, the prompt adherence is phenomenal. Multiple girls, diverse, and it did it. From complexion to face, two thumbs up, boss.
Thank you, having a PONY model that looked more natural like SDXL was my goal, but I also did not want to break any PONY prompts
Amazing! Best realistic model, I can't believe it.
Thank you
Can you adapt your Checkpoint to Illustrious?
I love the workflow you use :) Thank you for sharing
The 12GB price tag is worth every bit. Excellent results in general; some prompt combinations result in overly bright/overexposed generations, and turning CFG down to 4 or 2 seems to help in those cases.
Thanks, I had some color washout on some generations also
@Felldude Does this model have any Illustrious models mixed in? I'm getting the same over-exposed washed-out results quite often, and it seems to happen with some Pony/SDXL LoRAs, which Illustrious doesn't seem to work with. Just wondering if that might be the cause.
@AFD_0 No, it is the result of the anime/hentai-to-photoreal training. Adding a saturation node to the workflow or injecting noise before upscaling will resolve it 99% of the time, but I provided a "simple" workflow for everyone to use.
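The saturation fix mentioned above can be sketched with Pillow as a post-processing step; this is the equivalent of a saturation node, not the author's actual workflow, and the 1.3 factor is a guess to tune per image:

```python
from PIL import Image, ImageEnhance

# Stand-in for a washed-out generation (solid color instead of a real render)
img = Image.new("RGB", (64, 64), (170, 150, 140))

# Boost saturation ~30% to counter the washed-out look;
# enhance(1.0) is a no-op, values > 1.0 increase color separation
saturated = ImageEnhance.Color(img).enhance(1.3)
```

In a node-based workflow the same adjustment would sit between the sampler and the upscaler, exactly where the comment suggests placing the saturation node.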
"priceTag" hits it very well lol, I am custom to the neat litte SD1.5 models with "just 1,99" $B .Then the SDXL models with their "6,49" are a huge step and 12,xx$ is 10x :D not there jet i will need to finish playing through all my SD models, then i will checkout the more "expensive" stuff lol. Expensive since i already bought a handful of additional SSD Drives
@pink0909 It saves about 4GB to run the BF16 UNET / FP32 CLIP version, which is the fastest for most users.



