Atomix SD3
This model is based on sd3_medium_incl_clips_t5xxlfp16.safetensors
Although Flux offers better image quality and more control, it is simply too slow to be practical. So let's try to tap into the potential of SD3.
I found that the SD3 model does not have a refiner stage, so my current workflow uses the SD3 model to generate images and an SDXL model for upscaling. The generated images are acceptable, except for hands. :D
Using my workflow, it took 90s to generate a 1440x2160 image and 180s to generate a 2160x3240 image (RTX 3060M, 12GB), which saves tons of time compared to Flux.
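A quick back-of-envelope check on those timings (the ratio arithmetic here is my own illustration, not part of the workflow): the larger image has 2.25x the pixels but only takes 2x as long, so per-pixel throughput actually improves slightly at the higher resolution.

```python
# Back-of-envelope check on the timings above (RTX 3060M, 12GB).
# Pixel counts and times come from the post; the ratio math is illustrative.
small_px = 1440 * 2160   # 3,110,400 pixels, ~90 s
large_px = 2160 * 3240   # 6,998,400 pixels, ~180 s

px_ratio = large_px / small_px   # 2.25x the pixels
time_ratio = 180 / 90            # 2.00x the time

print(f"pixel ratio: {px_ratio:.2f}x")
print(f"time ratio:  {time_ratio:.2f}x")
print(f"s per megapixel (small): {90 / (small_px / 1e6):.1f}")
print(f"s per megapixel (large): {180 / (large_px / 1e6):.1f}")
```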
Atomix Txt2Img SD3 Workflow (Stable Diffusion Workflows, Civitai)
Comments (2)
Wow, Kudos for even trying to do something with SD3 2B xD
But I already see a major problem: FP16 is 15GB. That is too much for most users to run... unless you use the Civitai service, rent a GPU, or wait for an RTX 5080/5090.
So I think it would be nice to separate out the text encoder and add pruned versions, since not everyone needs to render text.
Flux has an FP8 version that still needs around 16GB to render a 1024x1024 image.
In my experience with LLMs, I can say that FP6 brings nearly the same quality as FP8. But this is subjective, ofc.
Still, I'm tipping my hat 🎩
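The 15GB figure above is consistent with simple parameter arithmetic. A minimal sketch, assuming approximate parameter counts (my own rough figures, not official numbers): the full sd3_medium_incl_clips_t5xxlfp16 package bundles the ~2B MMDiT plus the ~4.7B T5-XXL, the CLIP encoders, and the VAE, roughly 7.5-8B weights in total.

```python
# Rough checkpoint-size arithmetic: bits per weight -> gigabytes.
# Parameter counts below are approximations for illustration only.
def checkpoint_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate on-disk size in GB for a given numeric precision."""
    return params_billion * bits_per_weight / 8

# ~2B MMDiT + ~4.7B T5-XXL + CLIP encoders + VAE (approximate total)
total_params = 7.7  # billions

for bits in (16, 8, 6):
    print(f"FP{bits}: ~{checkpoint_gb(total_params, bits):.1f} GB")
# FP16 lands near the 15GB mentioned above; FP8 would roughly halve it.
```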
Please, can people start fine-tuning the SD3 CLIP-only version without T5-XXL? It works best on low-RAM hardware; I'm using it on my phone in Termux and it's only 5GB. Please make models from the 5GB version without T5-XXL.