Looking for prompts? Find them here: https://civarchive.com/articles/1290
This is a collection of my test LoRA models trained on SDXL 0.9.
Each version is a different LoRA; there are no trigger words, as these were not trained with Dreambooth.
The article linked at the top contains all the example prompts, which were used as captions during fine-tuning.
Black Sun XL
Ancients Society XL
Atomic Society XL
Plague Doctor XL
Yokai Raiders XL
Poster Fusion XL
Arri Style XL
Silhouette XL
JohnsonFace XL
GTAVXL
OtherworldyXL
All of these are for testing only. We were testing rank size against VRAM consumption at various batch sizes. Happy to report that training on 12GB is possible at lower batch sizes, and SDXL seems easier to train than 2.1-768.
I released these as a collection since 1.0 is still weeks away. Please feel free to use these LoRAs for your SDXL 0.9 testing in the meantime ;)
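The exact training command isn't given in this post (a Colab trainer is linked in the comments below), but a rank-vs-batch-size sweep like the one described could be run with kohya-ss sd-scripts, a common SDXL LoRA trainer. Every path below is a placeholder and the flag values are assumptions, not the settings actually used here:

```shell
# Hypothetical kohya-ss sd-scripts invocation for an SDXL 0.9 LoRA.
# --network_dim is the LoRA rank; vary it together with --train_batch_size
# to map rank size against VRAM consumption.
accelerate launch sdxl_train_network.py \
  --pretrained_model_name_or_path "/path/to/sd_xl_base_0.9.safetensors" \
  --train_data_dir "/path/to/captioned_dataset" \
  --resolution "1024,1024" \
  --network_module networks.lora \
  --network_dim 32 \
  --train_batch_size 1 \
  --mixed_precision fp16 \
  --gradient_checkpointing \
  --output_dir "/path/to/output"
```

Gradient checkpointing and fp16 mixed precision are the usual first levers for fitting training into 12GB of VRAM.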
Comments (15)
cool
This site is becoming a mess. SDXL models will get lost in the 1.5 avalanche.
let's hope that 0.9 gets buried by SDXL 1.0
1.5 still works, and so does 2.1, so they're still useful, but over time people will ask why use the old stuff?
@driftjohnson Or people with weak GPUs who can't afford the latest and greatest
@baddrudge so far (without community optimisations) you can train SDXL with the same batch size as SD1.5, but at double the resolution, so it scales to your hardware. If you could train 1.5, you can train SDXL, and the images will be twice the size for free.
@driftjohnson hmm, I'm not really sure that this is true
training a LoRA at batch size 1 required 13.9 GB of VRAM
Since I have 11 GB, I won't be able to do that locally until some smart people come up with optimizations/tweaks
v1.5 Dreambooth requires 11 GB of VRAM and LoRA only 6 GB, so the statement that you can train SDXL if you could train SD is not entirely true
However, this is still in beta and the training software is still being polished, so it's all subject to change.
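One possible source of confusion in the exchange above: "twice the size" refers to each side of the image, so per image, SDXL training at 1024x1024 processes four times the pixels of SD1.5 at 512x512. A quick arithmetic check (this says nothing about actual VRAM use, which also depends on model size and the attention implementation):

```python
# Per-image pixel counts at the two training resolutions discussed above.
sd15_pixels = 512 * 512      # 262,144
sdxl_pixels = 1024 * 1024    # 1,048,576

# "Twice the size" per side means 4x the pixels per image.
scale = sdxl_pixels / sd15_pixels
print(scale)  # 4.0
```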
@driftjohnson I can perform inference on 1.5 using my 6 GB GPU with no issues. Can the same be said of SDXL?
@baddrudge during my evaluation of SDXL 0.9, I tested various low-VRAM setups. Unfortunately, even pruned, SDXL is just over 6GB, and you must load both the base and the refiner to run inference. Until some optimisations are released it cannot run in less than 6GB of VRAM, but it can with 8GB.
@malcolmrey while testing just last night, it was possible to train a LoRA at rank 32 using batch size 30! This was with a dataset of 1024x1024 images. Previously, with 512x512 and SD1.5, it was only possible to run batch size 18 with 40GB of VRAM.
I have also used the T4 and V100 extensively to show VRAM usage on smaller GPUs. We can train at batch size 8 on a T4, HOWEVER you need more than 12GB of RAM (system RAM, not VRAM)
Some training platforms (like Google Colab's free tier) don't let you choose high system RAM and limit you to 12GB, which means the job can't even start training. This is unrelated to VRAM; it's a chokepoint being leveraged to identify free users and prevent training from starting.
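Given the 12GB system-RAM chokepoint described above, it can be worth checking your runtime's total RAM before kicking off a run. A minimal stdlib sketch (POSIX-only; the 12GB threshold comes from the comment above, not from any official requirement):

```python
import os

def total_system_ram_gb():
    """Total physical RAM in GiB, via POSIX sysconf (Linux/macOS)."""
    page_size = os.sysconf("SC_PAGE_SIZE")
    n_pages = os.sysconf("SC_PHYS_PAGES")
    return page_size * n_pages / 1024 ** 3

ram = total_system_ram_gb()
print(f"System RAM: {ram:.1f} GiB")
if ram <= 12:
    print("Warning: SDXL LoRA training reportedly needs more than 12 GB of system RAM.")
```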
4060TI 16g : ......
@driftjohnson Since I only have 11 GB of VRAM locally, I always used batch size 1 and never went higher
however, while reading about SDXL training, some people mentioned they had much better results keeping it at 1 rather than setting it higher.
not sure what the consensus will be later on, but it's still interesting
which tool did you use to train this LoRA? can you give me a GitHub link?
i used ComfyUI and provided some presets here: https://civitai.com/models/106181/comfyui-presets-by-djz
@pengpengzi https://civitai.com/articles/1075/train-sdxl-lora-with-colab <- for the trainer used here :)
@driftjohnson thank you very much


