CivArchive

    Aesthetic Quality Modifiers - Masterpiece

    Training data is a subset of all my manually rated datasets with the quality/aesthetic modifiers, including only the masterpiece tagged images.

    ℹ️ LoRAs work best when applied to the base models on which they were trained. Please read the About This Version section for the appropriate base model, trigger usage, and workflow/training information.

    Version 5.0 [anima-preview-3] (Latest)

    (Temporarily including here as the "About This Version" section is having issues)

    Trained on Anima Preview-3-base

    Assume that any LoRA trained on the preview version won't work well on the final version.

    Recommended prompt structure:

    Positive prompt (quality tags at the start of prompt):

    masterpiece, best quality, very aesthetic, {{tags}}, {{natural language}}
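    The structure above can be illustrated with a small sketch; the tags and natural-language sentence below are hypothetical examples, not from the model page.

    ```python
    # Build a positive prompt in the recommended order:
    # quality tags first, then booru-style tags, then natural language.
    quality = "masterpiece, best quality, very aesthetic"
    tags = "1girl, cherry blossoms, night sky"  # illustrative tags
    natural = "A girl stands beneath falling petals under a starry sky."  # illustrative
    prompt = f"{quality}, {tags}, {natural}"
    print(prompt)
    ```
    
    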

    Updated dataset of 386 images: all masterpiece-tagged images from the Kirazuri (Anima) model version 2 dataset.

    Trained at 1024 x 1024, 1280 x 1280, and 1536 x 1024 resolutions.

    Previews are mostly generated at 1536 x 1024 or 1024 x 1536.

    Training config:

    diffusion-pipe commit b0aa4f1e03169f3280c8518d37570a448420f8be

    # dataset-anima.toml
    resolutions = [1024, 1280, 1536]
    
    enable_ar_bucket = true
    min_ar = 0.5
    max_ar = 2.0
    num_ar_buckets = 9
    
    # Totals
    # 386 images
    # 15504 samples/epoch
    
    # 153 images
    # 48 samples/image - 7344 samples/epoch
    [[directory]]
    path = '/mnt/d/training_data/0_masterpieces_kirazuri/1536x1536'
    repeats = 16
    resolutions = [1024, 1280, 1536]
    
    # 44 images
    # 48 samples/image - 2112 samples/epoch
    [[directory]]
    path = '/mnt/d/training_data/0_masterpieces_kirazuri/1280x1280'
    repeats = 24
    resolutions = [1024, 1280]
    
    # 189 images
    # 32 samples/image - 6048 samples/epoch
    [[directory]]
    path = '/mnt/d/training_data/0_masterpieces_kirazuri/1024x1024'
    repeats = 32
    resolutions = [1024]
    
    # anima-lora.toml 
    output_dir = '/mnt/d/anima/training_output/masterpieces-v5'
    
    dataset = 'dataset-anima.toml'
    
    # training settings
    epochs = 5
    # Per-resolution batch sizes
    micro_batch_size_per_gpu = [[1024, 32], [1280, 24], [1536, 16]]
    pipeline_stages = 1
    gradient_accumulation_steps = 1
    gradient_clipping = 1
    warmup_steps = 100
    lr_scheduler = 'cosine'
    
    # misc settings
    save_every_n_epochs = 1
    activation_checkpointing = true
    
    partition_method = 'parameters'
    
    save_dtype = 'bfloat16'
    caching_batch_size = 1
    map_num_proc = 8
    steps_per_print = 1
    compile = true
    
    [model]
    type = 'anima'
    transformer_path = '/mnt/c/workspace/models/diffusion_models/anima-preview3-base.safetensors'
    vae_path = '/mnt/c/workspace/models/vae/qwen_image_vae.safetensors'
    llm_path = '/mnt/c/workspace/models/text_encoders/qwen_3_06b_base.safetensors'
    dtype = 'bfloat16'
    llm_adapter_lr = 1e-6
    flux_shift = true
    multiscale_loss_weight = 0.5
    sigmoid_scale = 1.3
    
    [adapter]
    type = 'lora'
    rank = 32
    dtype = 'bfloat16'
    
    [optimizer]
    type = 'adamw_optimi'
    lr = 4e-5
    betas = [0.9, 0.99]
    weight_decay = 0.01
    eps = 1e-8
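    The samples/epoch figures in the config comments above can be cross-checked. Assuming samples per image = repeats × number of resolutions (an inference that matches every per-directory comment: 16×3 = 48, 24×2 = 48, 32×1 = 32), a quick sketch:

    ```python
    # Verify the samples/epoch totals stated in dataset-anima.toml's comments.
    # Image counts and repeats are taken directly from the config above.
    directories = [
        {"images": 153, "repeats": 16, "resolutions": [1024, 1280, 1536]},  # 1536x1536 dir
        {"images": 44,  "repeats": 24, "resolutions": [1024, 1280]},        # 1280x1280 dir
        {"images": 189, "repeats": 32, "resolutions": [1024]},              # 1024x1024 dir
    ]

    total_images = sum(d["images"] for d in directories)
    per_dir = [d["images"] * d["repeats"] * len(d["resolutions"]) for d in directories]
    total_samples = sum(per_dir)

    print(total_images, per_dir, total_samples)  # 386 [7344, 2112, 6048] 15504
    ```

    The per-directory results match the comments (7344 + 2112 + 6048 = 15504 samples/epoch over 386 images).
    
    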

    Description

    Trained on NoobAI-XL (NAI-XL) V-Pred 1.0-Version

    Recommended prompt structure:

    Positive prompt (quality tags at the end of prompt):

    {{tags}}
    masterpiece, best quality, very aesthetic

    Trained with the kohya_ss dev branch, with v_parameterization and zero_terminal_snr enabled, and noise offset set to 0.

    Included some newly rated images and small updates to match the NoobAI tagging:

    • by {artist} -> artist:{artist}

    • very aesthetic -> very awa

    Previews are generated in Forge with DynamicThresholding (CFG-Fix) Integrated enabled, settings:

    dynthres_enabled: True
    dynthres_mimic_scale: 7
    dynthres_threshold_percentile: 1
    dynthres_mimic_mode: Half Cosine Down
    dynthres_mimic_scale_min: 1
    dynthres_cfg_mode: Half Cosine Down
    dynthres_cfg_scale_min: 3
    dynthres_sched_val: 1
    dynthres_separate_feature_channels: enable
    dynthres_scaling_startpoint: ZERO
    dynthres_variability_measure: STD
    dynthres_interpolate_phi: 1
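    A rough sketch of what the "Half Cosine Down" schedules do over the sampling steps, assuming the mode interpolates from a start value down to the configured minimum along a half cosine (the extension's exact math lives in its source; the CFG start value of 7 here is hypothetical, since the actual CFG scale comes from the generation settings):

    ```python
    import math

    def half_cosine_down(start, minimum, t):
        """Interpolate from `start` (t=0) to `minimum` (t=1) along a half cosine.
        Assumption: this mirrors the 'Half Cosine Down' mode named in the settings."""
        return minimum + (start - minimum) * (math.cos(math.pi * t) + 1) / 2

    steps = 28
    # Mimic scale 7 decaying to dynthres_mimic_scale_min = 1
    mimic = [half_cosine_down(7, 1, i / (steps - 1)) for i in range(steps)]
    # Hypothetical CFG 7 decaying to dynthres_cfg_scale_min = 3
    cfg = [half_cosine_down(7, 3, i / (steps - 1)) for i in range(steps)]

    print(mimic[0], mimic[-1], cfg[-1])  # 7.0 at the first step, minima at the last
    ```
    
    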

    FAQ

    Comments (6)

    Charlotte_Macbeth · Jan 11, 2025 · 10 reactions

    I see 2 different models trained by you. Masterpiece and Complete, which do you prefer or recommend to use?

    HaloSkull · Jan 11, 2025

    I think Masterpiece might be better for character-focused stuff, while Complete is a full picture, character and background included.

    motimalu (Author) · Jan 12, 2025 · 6 reactions

    Hello, I would recommend the Complete version, as it aims to combine the quality and concept knowledge of all my rated datasets.

    It has a much stronger effect due to the size and scale of the training, but has some issues to be resolved.

    (~6000 images, any of which could create problems if mis-tagged or of low quality).

    As an improvement to any given generation, I might prefer this much smaller (~300 image) Masterpieces LoRA at the moment.

    It is a dataset of exclusively images I'd manually rated as the best possible after all.

    motimalu (Author) · Jan 12, 2025 · 2 reactions

    @HaloSkull I think the masterpieces dataset would have a higher proportion of images with detailed backgrounds and compositions by comparison

    Charlotte_Macbeth · Jan 12, 2025 · 2 reactions

    Thank you for your reply and your nice work 🥰! @motimalu

    HaloSkull · Jan 13, 2025 · 2 reactions

    @motimalu I see, Thanks!

    LORA
    NoobAI

    Details

    Downloads
    8,735
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/11/2025
    Updated
    5/3/2026
    Deleted
    -
    Trigger Words:
    masterpiece, best quality, very awa, absurdres