CivArchive
    AfterDark - v2.0-Klein
    NSFW

    AfterDark for Z-Image Turbo & Flux.2 Klein

    This LoRA enhances your images with more punch, contrast, depth of field, and lighting. It's been trained on a mixture of photographic content with a focus on low-key, film noir, and fashion photography. It makes things pop without destroying image quality.

    Suggested Z-Image Turbo Settings
    Model strength: 0.3-0.8
    Samplers/schedulers:

    • seeds_3 / beta

    • ddim / kl_optimal (or beta)

    • dpm_2_ancestral / sgm_uniform (or ddim_uniform)

    Suggested Flux.2 Klein 9b Settings
    Model strength: 0.3-1.2
    Samplers:

    • res_multistep

    • sa_solver

    • seeds_3

    • er_sde

    • ddim

    • ...and so many more

    Distilled
    CFG scale: 1-1.5
    Steps: 8-10
    Base
    CFG scale: 2.5-4
    Steps: 40-55
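    The suggested settings above can be collected into one structure for reference. This is a minimal sketch: the dictionary values are taken from the listing, but the clamp helper is a hypothetical convenience, not part of any tool's API.

    ```python
    # Suggested AfterDark v2 settings from the model card, gathered in one place.
    SUGGESTED = {
        "z_image_turbo": {
            "strength": (0.3, 0.8),
            "sampler_scheduler": [
                ("seeds_3", "beta"),
                ("ddim", "kl_optimal"),              # or beta
                ("dpm_2_ancestral", "sgm_uniform"),  # or ddim_uniform
            ],
        },
        "flux2_klein_9b": {
            "strength": (0.3, 1.2),
            "samplers": ["res_multistep", "sa_solver", "seeds_3", "er_sde", "ddim"],
            "distilled": {"cfg": (1.0, 1.5), "steps": (8, 10)},
            "base": {"cfg": (2.5, 4.0), "steps": (40, 55)},
        },
    }

    def clamp_strength(model: str, value: float) -> float:
        """Clamp a requested LoRA strength into the suggested range (hypothetical helper)."""
        lo, hi = SUGGESTED[model]["strength"]
        return max(lo, min(hi, value))
    ```

    For example, `clamp_strength("flux2_klein_9b", 1.5)` returns 1.2, the top of the suggested Klein range.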

    This LoRA works with both Flux.2 Klein 9b base and distilled. I often use the distilled version because it generates images much faster and the quality is still really good.

    v2 LoRA Technical Details

    The Z-Image Turbo LoRA ended up with a final loss of about 0.336, compared to about 0.71 for v1.

    The Flux.2 Klein 9B LoRA ended up with a loss value of 0.5129 (5.129e-01). It was a "low and slow" train with a low learning rate (5.0e-05) over 6,000 steps. This was much longer than the Z-Image Turbo LoRA's training, but I think it was worthwhile. I might use a more powerful GPU next time (I used an A40 for this one).


    A lot of training time went into these models, followed by a lot of testing. I decided to keep the same model listing on Civitai simply because both versions were trained from the exact same dataset (same images and captions). The training for the Klein version in ai-toolkit started off with the same settings as the Z-Image one; I soon learned that wasn't going to work for Flux.2 Klein 9b, so I adjusted the settings.

    Version 2 is very stable for both base models. In fact, I find that I often don't like the distilled version of Klein 9b without this LoRA. Images are generally too bright for my taste, and while you could apply a LUT or do post-processing work on the images, I simply prefer to use this LoRA because it does more than just fix lighting.

    Description

    I kept the exact same dataset to train a version of this LoRA for Flux.2 Klein. The training was very different from Z-Image Turbo: a very long train with a low learning rate, cooked low and slow. Things were good at step 4,000, but I wanted to see what would happen at 6,000. The loss at step 6,000 ended up at 5.129e-01, lower than at step 4,000, after spiking to 9.074e-01 at step 5,000. So this run was jumpy, but there were some great checkpoints along the way, and I settled on 6,000.

    LoRA
    Flux.2 Klein 9B-base

    Details

    Downloads
    237
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/1/2026
    Updated
    2/6/2026
    Deleted
    -
    Trigger Words:
    4ft3rd4rk
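    A minimal sketch of using the trigger word above in a prompt. The helper function and the example prompt text are illustrative assumptions, not a documented requirement of any specific UI:

    ```python
    TRIGGER = "4ft3rd4rk"  # trigger word from the listing

    def build_prompt(subject: str,
                     extras: str = "low-key lighting, film noir, shallow depth of field") -> str:
        # Hypothetical helper: prepend the trigger word so the LoRA's
        # trained concept is activated by the prompt.
        return f"{TRIGGER}, {subject}, {extras}"

    print(build_prompt("studio fashion portrait"))
    ```

    The style keywords in `extras` simply echo the photography styles the model card says the LoRA was trained on; swap in your own.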

    Files

    afterdark2_klein.safetensors

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)