Umamusume All in One LoRA
This is an All-in-One LoRA designed to generate a wide variety of characters from the popular game Umamusume: Pretty Derby with a single model. Without the hassle of managing individual character LoRAs, you can easily bring numerous Umamusume characters to life, including Stay Gold, Almond Eye, Loves Only You, and many more, with just this one file.
Notice
This model is a re-upload. The previous model page was accidentally deleted during a version update process. My apologies for the inconvenience!
✨ Features
Extensive Character Library: Capable of generating most of the main characters from the game with high fidelity.
Versatile Costume Support: Faithfully reproduces each character's unique racing outfits, as well as school uniforms, casual wear, and other styles.
Easy to Use: Achieve your desired images simply by inputting character names or feature tags, without the need for complex prompts.
🌟 Character & Prompt Information
For detailed prompt information regarding the characters, costumes, and features included in this LoRA, please refer to the Hugging Face link. Find the precise tags for the character you wish to create and utilize them in your prompts.
License
This model is released under the "Derivative work guidelines for Umamusume" and the "Fair-AI Public License 1.0-SD".
Description
Put the LoRA file in your stable-diffusion-webui/models/Lora folder and add the LoRA notation <lora:your_lora_name:weight> to your prompt to apply it.
Recommended options
LoRA weight 0.7
Trigger words
[character name] \(umamusume\) or [character name]
For example,
manhattan cafe \(umamusume\)
hayakawa tazuna
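Combining the LoRA notation with a trigger word, a full positive prompt might look like this (the file name umamusume_all_in_one and the extra quality tags are placeholders; use your actual file name and preferred tags):

```
masterpiece, best quality, manhattan cafe \(umamusume\), racing outfit, 1girl, solo <lora:umamusume_all_in_one:0.7>
```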
Settings
For Stable Diffusion V1.5
Use a model derived from or mixed with animefull model.
DPM++ series samplers (SDE Karras, 2M Karras, etc.)
About 20 steps, CFG scale 3.5~6.5
CLIP skip = 1 or 2; use whichever you prefer.
Use Hires. fix to get higher-quality images.
Upscaler
Latent, Latent (nearest-exact), Latent (bicubic antialiased) or other Latent series.
Denoising strength 0.50~0.65
Training Info
Trained with sd-scripts by kohya_ss and LyCORIS by KohakuBlueleaf. Many thanks to them!
Base model : Animefull-pruned model
Hardware : 1x A6000 GPU
Training time : 22 hours
Dataset : ULTIMA Dataset
Resolution : 768x768 with aspect ratio bucketing
Dim, Alpha : 64
Optimizer : lion8bit
Steps : 20,210
Batch size : 16
Learning rate : warmup to 1e-4 over 1,010 steps, then kept constant
LR scheduler : constant_with_warmup
Other training settings are included in the metadata of the LoRA file.
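For reference, the warmup-then-constant schedule can be sketched as a simple function (an illustrative re-implementation using the step counts listed above, not the actual sd-scripts code):

```python
def lr_at(step: int, peak_lr: float = 1e-4, warmup_steps: int = 1010) -> float:
    """constant_with_warmup: linear warmup to peak_lr, then flat."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup phase
    return peak_lr                            # constant for the rest of training

# e.g. lr_at(0) == 0.0 and lr_at(20_210) == 1e-4
```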
All uploaded images were generated with Counterfeit V3.0 (Hugging Face, Civitai).
Comments
The Uma Diffusion page on Hugging Face is 404'd
Sorry about that. I've fixed the link.
At this moment, my expression is the same as your avatar
So, "あげませんーっ! (won't give to you!)", or?
Pretty sure that this kind sir is doing exactly opposite to what his avatar did. 😂
Amazing
thank you
Your avatar says AGEMASEN ("won't give"), but you didn't hold back your learning outcomes from us. I'll give you $10. AGEMASU ("I give")!!
I hope you will continue to create models for Umamusume for a long time to come!
Should be マジ神 (godlike) 😂
Whoa.
The master is back!!!
That's marvelous! More than 100 characters in one LoRA, and it can generate a specific character without any interference! Could you please show me how you prevent conceptual pollution?
There are two things that prevent concept pollution:
1. Image-text alignment. We manipulated all the captions in the dataset using a DreamBooth-like method. DreamBooth uses a rare, unused single token as the trigger word for each instance; instead, we used each character's name as the trigger word.
2. The base model (NAI) already knows Umamusume, whether a character is popular (e.g. Daiwa Scarlet) or not (e.g. Air Shakur). This helps text-to-image/image-to-text alignment, and with that pre-trained knowledge, the concepts can be separated.
Because NAI knows the concept of a shirt and also knows Manhattan Cafe, a prompt combining the two after fine-tuning generates her own shirt.
In short, a well-aligned dataset is the key to fine-tuning.
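The captioning scheme described above can be sketched roughly like this (build_caption is a hypothetical helper for illustration, not the actual training code):

```python
def build_caption(character: str, feature_tags: list[str]) -> str:
    # DreamBooth-style captioning, but the trigger word is the
    # character's real name rather than a rare unused token.
    # The trigger goes first so it can be protected from tag shuffling.
    return ", ".join([character] + feature_tags)

print(build_caption("manhattan cafe (umamusume)", ["1girl", "shirt", "yellow eyes"]))
# -> manhattan cafe (umamusume), 1girl, shirt, yellow eyes
```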
@mht Thank you so much for your answer! However, I still have one silly question. When using the tokenizer in the WebUI, I can see that a trigger word like "Daiwa_Scarlet" counts as 4 tokens: "13559, 2663, 318, 20389". If only 1 token is set in the "keep n tokens from being shuffled" parameter, will the training scripts ignore/reshuffle the 3 tokens left? That might weaken the effect of the trigger word, I think. This problem has been bothering me for a long time. So, with your answer, can we say that 1 token is 1 word?
@DeepDark_Fantasy514 At training time, with kohya-ss/sd-scripts, the keep_tokens option treats each comma-separated word in the prompt as one token.
Under this rule, keep_tokens=1 means that the whole word daiwa_scarlet is kept for learning, not just "dai" out of the tokenizer's "dai/wa_/scar/let".
Here is the code from sd-scripts/library/train_util:
tokens = [t.strip() for t in caption.strip().split(",")]
I think the name keep_"tokens" confuses people, but the code is clear.
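To make the behavior concrete, here is a minimal sketch of comma-based tag shuffling with keep_tokens (simplified for illustration around the split line quoted above; the real logic lives in sd-scripts' train_util):

```python
import random

def shuffle_caption(caption: str, keep_tokens: int, rng: random.Random) -> str:
    # Each comma-separated tag counts as one "token" for keep_tokens,
    # regardless of how the CLIP tokenizer would split it.
    tags = [t.strip() for t in caption.strip().split(",")]
    fixed, rest = tags[:keep_tokens], tags[keep_tokens:]
    rng.shuffle(rest)  # only the tags after the first keep_tokens are reshuffled
    return ", ".join(fixed + rest)

# "daiwa_scarlet" always stays first; the remaining tags may be reordered.
shuffle_caption("daiwa_scarlet, 1girl, twintails, smile", 1, random.Random(0))
```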
@mht That's crystal clear. Thank you again for your time and consideration🥰.
Are you planning on making Venus Park? Thanks
bro, you are truly a hero
Please, could you enable this model for online image generation?
I'm not certain, but maybe it works
please come back, we need you
you are really a hero bro