Aesthetic Quality Modifiers - Masterpiece
The training data is a subset of all my manually rated datasets with quality/aesthetic modifiers, including only the images tagged masterpiece.
ℹ️ LoRAs work best when applied to the base models on which they were trained. Please read the About This Version section of the appropriate base model for trigger usage and workflow/training information.
Version 5.0 [anima-preview-3] (Latest)
(Temporarily included here, as the "About This Version" section is having issues.)
Trained on Anima Preview-3-base
Assume that any LoRA trained on the preview version won't work well on the final version.
Recommended prompt structure:
Positive prompt (quality tags at the start of prompt):
masterpiece, best quality, very aesthetic, {{tags}}, {{natural language}}
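For example, a full positive prompt following this structure might look like the following (the subject tags and caption are illustrative, not taken from the dataset):
masterpiece, best quality, very aesthetic, 1girl, solo, flower garden, soft lighting, A girl in a white dress stands among blooming flowers at dusk.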
Updated dataset of 386 images: all masterpiece-tagged images from the Kirazuri (Anima) model version 2 dataset.
Trained at 1024 x 1024, 1280 x 1280, and 1536 x 1024 resolutions.
Previews are mostly generated at 1536 x 1024 or 1024 x 1536.
Training config:
diffusion-pipe commit b0aa4f1e03169f3280c8518d37570a448420f8be
# dataset-anima.toml
resolutions = [1024, 1280, 1536]
enable_ar_bucket = true
min_ar = 0.5
max_ar = 2.0
num_ar_buckets = 9
# Totals
# 386 images
# 15504 samples/epoch
# 153 images
# 48 samples/image - 7344 samples/epoch
[[directory]]
path = '/mnt/d/training_data/0_masterpieces_kirazuri/1536x1536'
repeats = 16
resolutions = [1024, 1280, 1536]
# 44 images
# 48 samples/image - 2112 samples/epoch
[[directory]]
path = '/mnt/d/training_data/0_masterpieces_kirazuri/1280x1280'
repeats = 24
resolutions = [1024, 1280]
# 189 images
# 32 samples/image - 6048 samples/epoch
[[directory]]
path = '/mnt/d/training_data/0_masterpieces_kirazuri/1024x1024'
repeats = 32
resolutions = [1024]
# anima-lora.toml
output_dir = '/mnt/d/anima/training_output/masterpieces-v5'
dataset = 'dataset-anima.toml'
# training settings
epochs = 5
# Per-resolution batch sizes
micro_batch_size_per_gpu = [[1024, 32], [1280, 24], [1536, 16]]
pipeline_stages = 1
gradient_accumulation_steps = 1
gradient_clipping = 1
warmup_steps = 100
lr_scheduler = 'cosine'
# misc settings
save_every_n_epochs = 1
activation_checkpointing = true
partition_method = 'parameters'
save_dtype = 'bfloat16'
caching_batch_size = 1
map_num_proc = 8
steps_per_print = 1
compile = true
[model]
type = 'anima'
transformer_path = '/mnt/c/workspace/models/diffusion_models/anima-preview3-base.safetensors'
vae_path = '/mnt/c/workspace/models/vae/qwen_image_vae.safetensors'
llm_path = '/mnt/c/workspace/models/text_encoders/qwen_3_06b_base.safetensors'
dtype = 'bfloat16'
llm_adapter_lr = 1e-6
flux_shift = true
multiscale_loss_weight = 0.5
sigmoid_scale = 1.3
[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'
[optimizer]
type = 'adamw_optimi'
lr = 4e-5
betas = [0.9, 0.99]
weight_decay = 0.01
eps = 1e-8
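For reference, the samples/epoch figures in the dataset comments above are consistent with samples/epoch = images x repeats x number of listed resolutions for each directory:
# Sanity check of the samples/epoch totals
# 1536x1536: 153 images x 16 repeats x 3 resolutions = 7344 samples/epoch
# 1280x1280:  44 images x 24 repeats x 2 resolutions = 2112 samples/epoch
# 1024x1024: 189 images x 32 repeats x 1 resolution  = 6048 samples/epoch
# Total: 7344 + 2112 + 6048 = 15504 samples/epoch across 386 images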
Comments
The god of LoRAs
imagine what Anima Preview 3 can do 👀
Quick feedback on the latest Aesthetic Quality LoRA for Anima: it seems to alter the base model's knowledge of specific characters, likely due to the overall aesthetic modifications. As a result, character appearances are changed significantly and no longer match the intended look.
Hello, thank you for the feedback!
I'm still adjusting to some changes in how the preview 2 version trains; I'd recommend using a lower weight to avoid concept bleeding in this version.
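For example (values illustrative, not tested): load the LoRA at a strength of around 0.5 to 0.7 instead of 1.0, e.g. <lora:masterpieces-v5:0.6> in WebUI-style prompt syntax, with the filename assumed from the output_dir in the training config above.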
@motimalu Did you try turning off LLM Adapter training? train_llm_adapter = false. Pretty sure that's what the model's author and some LoRA makers suggest.
@gannibal Thanks, I have tried without LLM Adapter training, yes.
Currently trying to refine settings and datasets for full finetuning, where the intention is to teach the LLM Adapter new concepts, characters, and quality/aesthetic associations.
So I've tolerated this version being overtrained in order to confirm how the model would pick up on my manually labelled quality and aesthetic tags.
@motimalu I am no expert in models that use an LLM as the text encoder, but won't the model pick up new concepts even without training the LLM Adapter, since it sort of understands the meaning of words already? The trained LoRA or "finetune" then remaps that existing understanding. Maybe the LLM Adapter shouldn't be trained, period. Just guessing, though.
@gannibal Thanks, I'm no expert either and I appreciate the feedback; it's why I share my training configs in the hope that someone can help me improve them.
The LLM being trained alongside the model does seem to have absorbed a surprising amount of knowledge though, and I think this might be why training it produces good results in learning new concepts, with the downside of quickly risking catastrophic forgetting.
The catastrophic forgetting is definitely not good for the overall model, so I agree with the general advice to avoid training the LLM for that reason. It is a shame to reduce the capabilities of this very interesting model.
I'll try another run with the LLM adapter training disabled and see how that goes.
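Assuming the flag @gannibal mentioned exists as written, the change would look something like this in the [model] section of anima-lora.toml (a sketch, not a tested config):
# anima-lora.toml
[model]
type = 'anima'
train_llm_adapter = false  # skip LLM adapter training to limit catastrophic forgetting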
Better images in a lot of ways, but also a lot of NSFW regressions
Can we use Anima 2 Lora with Anima Preview 3?
I think so; a quick test shows the effect is very similar when applied to Anima Preview 3:
https://civitai.com/posts/27883646
The flower stamens in the image are a novel expression of the dataset, so you could treat how they are rendered as a measure of how well the LoRA applies.
@motimalu nice... are you planning on training for Preview 3 though? I think the improvements might not be that noticeable, but it seems worth it anyway.
Hi @ArtificialOtaku, yes, I plan to give this both a dataset and a training configuration update for Preview 3 after completing another project.