CivArchive

    Generation Guide

    Model Information

    • Model Name: {model_name} (replace with the actual filename you downloaded, e.g., gngsfimZIB.safetensors)

    • Trigger Word: {trigger_word}

    Resolution

    • 2:3 ratio: 821×1232 (portrait)

    • 3:2 ratio: 1232×821 (landscape)

    • Square: 1:1

    • Note: You can vary these resolutions with limited success

    • FT15 models: Lower max resolution at 512×768

    Generation Parameters

    • Sampler: Euler (typically)

    • CFG Scale:

      • Standard models: 3-7

      • Turbo models: 1

    • Steps:

      • Standard models: 20-50

      • Turbo models: 9

    • LoRA Strength: 0.6-1.0

      • If images look "cooked" or overprocessed, lower the strength

    Model Series Identifiers

    • FT15 - Stable Diffusion 1.5 (max resolution: 512×768)

    • XLrd - SDXL Run Diffusion X based

    • CHHD - Chroma models

    • ZIMG - Z-Image Turbo

    • ZIB - Z-Image Base

    • FKFB - Flux Klein 4B

    • QWN - Qwen

    Note: LoRA files are large and can be resized if needed

    Current Recommendation (January 2026): Use ZIB/ZIT or Chroma models for best results.

    Dataset Type Indicators

    • mx - Vastly larger datasets with less consistency, typically trained at lower learning rates for longer durations

    • lncc - Smaller, more specific aesthetic-focused datasets

    Training Data Scale: Datasets vary from 20-30 images to over 1,000,000 images. The median dataset size is closer to 10,000 images.

    Training Techniques: Models starting at SDXL use mixed resolution training, multi-subject crop, and flips for improved generalization.

    Using the Wildcard Prompt Template

    The piped string format below is designed for ImpactPack Wildcard Processor or Automatic1111 Dynamic Prompts. Copy and paste it into either extension to generate a new randomized prompt each time, built on the distribution of the training dataset.

    Prompt Format

    <lora:{model_name}:{0.6|0.7|0.8|0.9|1}> {trigger_word}, {wildcard_tags}

    Example:

    <lora:gngsfimZIB:{0.6|0.7|0.8|0.9|1}> example_triggerword, {additional|tags|here}

    Understanding the Wildcard Tags

    • More pipes (|) in a tag group = rarer tags in the training data

    • Fewer pipes or repeated options = more common tags with better model performance

    • More examples in the training data mean the model is better at that particular task or concept
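    As a rough illustration, the piped format can also be expanded outside of those extensions. The sketch below (the function name `expand` is ours, and it handles only the flat, non-nested groups shown in the template above) picks one option per `{a|b|c}` group at random; because a group like `{portrait, |||||}` splits into one real option and five empty ones, the tag survives in only about one draw in six, which is how more pipes encode rarer tags:

```python
import random

def expand(template: str, rng: random.Random) -> str:
    """Pick one option per {a|b|c} group; empty options (bare pipes) drop the tag."""
    out, i = [], 0
    while i < len(template):
        if template[i] == "{":
            j = template.index("}", i)  # flat groups only, no nesting
            out.append(rng.choice(template[i + 1:j].split("|")))
            i = j + 1
        else:
            out.append(template[i])
            i += 1
    return "".join(out)

rng = random.Random()
print(expand("<lora:gngsfimZIB:{0.6|0.7|0.8|0.9|1}> example_triggerword, {portrait, |||||}", rng))
```

    Each call produces a different randomized prompt, which is what the wildcard extensions do on every generation.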

    Manual Usage (without wildcards)

    If you're not using dynamic prompts:

    1. Load the LoRA manually in your interface

    2. Start with the trigger word {trigger_word} at the beginning of your prompt

    3. Add additional tags after the trigger word to vary the composition

    4. Tags that appear more frequently in the wildcard examples will produce more consistent results
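    The steps above amount to simple string assembly. A hypothetical sketch (the function name and the 0.8 default strength are ours, not part of the guide):

```python
def build_prompt(trigger: str, tags: list[str], lora_name: str, strength: float = 0.8) -> str:
    """Load the LoRA, lead with the trigger word, then append extra tags."""
    return f"<lora:{lora_name}:{strength}> " + ", ".join([trigger] + tags)

print(build_prompt("pmbkk", ["realistic", "portrait"], "pmbkkFT15"))
# <lora:pmbkkFT15:0.8> pmbkk, realistic, portrait
```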

    Tips

    • Always start with the trigger word (the first tag) for best results

    • Check sample images for embedded generation parameters

    • Add additional tags to vary composition and style

    • Experiment with LoRA strength if results don't match expectations

    • Tags with more training examples will be more reliable and consistent

    • Reference the sample images on this page for working parameter combinations


    FAQ: Dataset Filename & Trigger Word Conventions

    What problem does this filename format solve?

    The filename is designed to avoid collisions with generic or common names while also serving as a programmatic signal. It encodes both the trigger word and the dataset type, making it easy for scripts and training pipelines to identify and handle the dataset correctly.

    Why not use a generic filename?

    Generic filenames tend to overlap across projects and environments. This format ensures:

    • Uniqueness across datasets

    • Clear intent when parsed programmatically

    • No ambiguity about dataset content or usage

    What do the suffix codes mean?

    The suffix in the filename specifies:

    • The resolution of the dataset

    • The model architecture tier it is intended for

    This makes it immediately clear what kind of model configuration the dataset targets and helps avoid compatibility issues.
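    Because the trigger word and series code are simply concatenated in the filename, a script can split them back apart by suffix matching. A minimal sketch (the helper name and return shape are ours; the suffix list is just the series table from this guide):

```python
# Known series suffixes from the Model Series Identifiers table above.
SERIES = ("FT15", "XLrd", "CHHD", "ZIMG", "ZIB", "FKFB", "QWN")

def parse_lora_filename(filename: str):
    """Split <trigger><SERIES>.safetensors into (trigger_word, series_code)."""
    stem = filename.rsplit(".", 1)[0]
    for code in SERIES:
        if stem.endswith(code):
            return stem[:-len(code)], code
    return None  # unknown or non-conforming name

print(parse_lora_filename("pmbkkFT15.safetensors"))  # ('pmbkk', 'FT15')
```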

    What does "mx" stand for?

    mx means mix. It indicates that the dataset is diverse and vastly larger (potentially hundreds of thousands to over a million images), though less consistent than focused datasets. These models are typically trained at lower learning rates for longer durations to accommodate the dataset diversity.

    What does "lncc" stand for?

    lncc indicates smaller, more specific datasets focused on a particular aesthetic. These are more consistent but cover a narrower range of content.

    How are trigger words determined?

    Trigger words are embedded in the dataset and filename structure. They function as activation tokens that help the model recognize and generate content consistent with the training data. Always use the specified trigger word at the start of your prompt for best results.

    How large are the training datasets?

    Training datasets vary significantly:

    • Minimum: 20-30 images

    • Maximum: Over 1,000,000 images

    • Median: Approximately 10,000 images

    Larger datasets (mx) enable broader capabilities but may be less consistent. Smaller datasets (lncc) are more focused and aesthetically coherent.


    For best results, always check the sample images on this model page—generation parameters are embedded in the metadata.

    Version v1.0 (768×512 and 512×768): greatly improved, flexible, high quality; few known issues in odd poses.

    <lora:pmbkkFT15:{1|1.2|1.5}> {pmbkk, }{realistic, }{1girl, }{open mouth, |}{solo, |}{cum in mouth, |}{looking at viewer, ||}{cum, }{portrait, |||||}{tongue, ||||||||||}{close-up, ||||||||||}{facial, ||}{cum on tongue, ||}{eyelashes, ||||||||||||}{nose, ||||||||||||||}{after fellatio, |||||}{upper teeth only, |||||||||||||||}{solo focus, |||||||||||||||}{looking up, ||||||||||||||||}{hetero, |||||||||||||||||}{1boy, ||||||||||||||||||}{penis, ||||||||||||||||||}{uvula, |||||||||||||||||||}{bukkake, ||||}{erection, |||||||||||||||||||}{oral invitation, |||||||||||}{half-closed eyes, ||||||||||||||||||||}{saliva, ||||||||||||||||||||}

    Comments (8)

    wes · Dec 5, 2024 · 12 reactions

    I've said it before and I'll say it again: your LoRAs are incredibly difficult to use. The samples look great, but Forge brings the trigger words in from Civitai with all the pipes, and then you have to manually strip them out. Why don't you just do it like everyone else? I always want to use your LoRAs but then give up.

    sarahpeterson (Author) · Dec 5, 2024 · 1 reaction

    use Automatic... or just copy the prompt from an image? usually the first 1-5 terms will do it. each lora has a token too, which can work if you turn the strength up. this one works with the lora + pmbkk, bukkake, facial. Contact the Forge devs to learn/patch the import of the prompt?

    wes · Dec 6, 2024 · 2 reactions

    @sarahpeterson You're the only one who does it this way. I used to use Auto and it didn't work then either. Is there an extension that's needed? You're right that you can go back to Civitai and get the prompt from an image, but it's just a lot of work compared to all the other LoRAs out there. Sorry if I sound entitled - I've just been frustrated. Thanks for contributing.

    sarahpeterson (Author) · Dec 6, 2024

    @wes yes, dynamic prompts. The pipes help sample the training input data space; it's explained in the article I posted. For posing there are terms that help vary the pose but they are rare, so they sit behind more pipes. As for "a lot of work": the first few terms are usually enough, the rest are for advanced detailing. The full prompt with pipes can be copied and pasted once, then you can hit generate forever and produce near-infinite images with Automatic and dynamic prompts. With other models you'd need to manually copy the training tokens from the model card and order them, etc.

    Sailor_Luna · Dec 21, 2024 · 4 reactions

    @wes but you can edit the trigger words in the LoRA settings; you only need to do this once

    wes · Dec 23, 2024

    @waitran Thanks - I actually didn't know that and it's very helpful

    sarahpeterson (Author) · Dec 26, 2024 · 2 reactions

    @wes it's also CivitAI baking them into the metadata. Ask them to strip the pipes, or ask whatever UI you're using to strip the dynamic-prompts special characters

    wes · Dec 26, 2024

    @sarahpeterson But isn't that field determined by you?

    LORA
    SD 1.5

    Details

    Downloads: 1,320
    Platform: CivitAI
    Platform Status: Available
    Created: 12/4/2024
    Updated: 5/3/2026
    Deleted: -
    Trigger Words:
    <lora:pmbkkFT15:{0.6|0.7|0.8|0.9|1.0|1.3}> {pmbkk, }{realistic, }{1girl, }{open mouth, |}{solo, |}{cum in mouth, |}{looking at viewer, ||}{cum, }{portrait, |||||}{tongue, ||||||||||}{close-up, ||||||||||}{facial, ||}{cum on tongue, ||}{eyelashes, ||||||||||||}{nose, ||||||||||||||}{after fellatio, |||||}{upper teeth only, |||||||||||||||}{solo focus, |||||||||||||||}{looking up, ||||||||||||||||}{hetero, |||||||||||||||||}{1boy, ||||||||||||||||||}{penis, ||||||||||||||||||}{uvula, |||||||||||||||||||}{bukkake, ||||}{erection, |||||||||||||||||||}{oral invitation, |||||||||||}{half-closed eyes, ||||||||||||||||||||}{saliva, ||||||||||||||||||||}

    Files

    pmbkkFT15.safetensors

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)