
    A completely new approach!

    The original Snakebite was an Illustrious model injected with bigASP's compositional blocks. Snakebite 2.0, however, is primarily bigASP - but enhanced with a number of techniques to dramatically improve its textures and aesthetic capabilities.

    ❤️ If you like Snakebite, you can help offset the cost of training:

    Buy liftweights a Coffee


    ⚠️ IMPORTANT:

    This model uses Flow Matching, so you must connect it to the ModelSamplingSD3 node in ComfyUI to get correct results.

    If you're using a different UI that does not support the Flow Matching objective, you can try Snakebite v1.4 instead - that one behaves like a regular Illustrious model.
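    For the curious, here's the gist of the difference: a flow-matching model predicts a velocity (a vector field pointing from noise toward the image) rather than the noise itself, so the sampler integrates an ODE instead of running a standard epsilon-prediction schedule. Below is a minimal, illustrative Python sketch of that sampling loop (the model function is hypothetical - this is not ComfyUI's actual implementation):

    def sample_flow_matching(model, x, sigmas):
        # sigmas: descending noise levels, e.g. 1.0 -> 0.0
        for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
            v = model(x, sigma)               # predicted velocity at this noise level
            x = x + v * (sigma_next - sigma)  # Euler step along the learned flow
        return x

    ModelSamplingSD3 is what tells ComfyUI to interpret the model's output this way (and to apply the correct sigma shift); without it, the sampler misinterprets the predictions.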


    Why change the formula?

    While I'm happy with the original Snakebite, there are some "gaps" between the two architectures I haven't been able to close with merging. Over the course of 1.0 through 1.4, I did what I could to minimize weird background objects and extra limbs, but it occurred to me that the perfect solution is already right here, in the form of vanilla bigASP 2.5.

    I don't know if many people realize how good bigASP is... the prompt adherence is almost Flux-level with none of the censorship, plastic skin, steep hardware requirements, or bad licensing. It's pretty remarkable.

    I set out to solve two of its main problems:

    1. bigASP's textures are straight-up scuffed. I don't know if there was an issue with its aesthetic captioning or if it's simply "seen too much" (it was trained on 13 million images!), but no amount of (((high quality, masterpiece, so good))) is going to produce an image that looks even half as good as that of your average SDXL model.

    2. You need to prompt it for everything. This is not necessarily a bad thing. Problem is, bigASP has some very weird ideas about the stuff you fail to mention. For example, if you ask for 1girl, standing it might give you a picture of 1girl, standing, morbidly obese, upside down.

    Both of these problems have been addressed, at least to an extent. It wasn't easy! bigASP's input blocks are really delicate - if you try massaging them with aesthetic LoRAs, the model tends to fall apart completely. Compatibility with SDXL LoRAs is poor, because they were not trained with the Flow Matching technique and bigASP's CLIP is very different.

    Still, I found some blocks that responded well to my cosmetic upgrades. So I have been slowly and carefully introducing these blocks to techniques like Direct Preference Optimization, with the goal of helping bigASP know what to do when you don't provide a 500-word prompt (i.e. make every picture look decent and not insane).
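    For reference, the core DPO objective looks like this - a generic sketch, not my exact training code (for image models, the log-probabilities are typically replaced with denoising errors, as in Diffusion-DPO):

    import torch.nn.functional as F

    def dpo_loss(logp_win, logp_lose, ref_logp_win, ref_logp_lose, beta=0.1):
        # How much more the tuned model favors each sample vs. a frozen reference
        adv_win = logp_win - ref_logp_win
        adv_lose = logp_lose - ref_logp_lose
        # Reward preferring the "good" sample over the "bad" one
        return -F.logsigmoid(beta * (adv_win - adv_lose)).mean()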


    👍 Advantages over v1

    1. Prompt adherence is UH-MAZING for an SDXL model - check the demo gallery

    2. Understands more complex concepts and interactions

    3. Mangled limbs are almost nonexistent thanks to Flow Matching

    4. Very flexible with styles; more photorealistic than v1 while also more capable of generating illustrations

    5. It can spell words pretty well, provided you don't mind re-rolling a few times

    👎 Disadvantages

    1. Aesthetically, it's not as consistent as IL - but it's getting close in newer versions (v2.3)

    2. The lack of IL means booru tag knowledge is worse, but you might be surprised at how much bigASP knows... it can generate tons of mainstream characters and concepts just fine on its own


    Turbo:

    • 6-9 steps

    • LCM sampler

    • Beta, normal, or simple scheduler

    • CFG 1

    • Model shift of 3 (this is the value that bigASP was trained on) or 6 (allegedly even better according to bigASP's author)

    • Sample workflow: https://pastebin.com/Z35kNns6

    Full:

    • 25-40 steps

    • Euler ancestral sampler for speed, dpmpp_2s_ancestral for quality

    • Simple scheduler

    • CFG 4-6

    • Model shift of 3

    • Negative prompt strongly recommended (e.g. worst quality)

    • Sample workflow: https://pastebin.com/ynrJ1Nt2

    Note: increasing the model shift may improve prompt adherence at the cost of quality. This is particularly useful with character LoRAs. Try a value between 6 and 8.
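    If you're wondering what the shift actually does: as I understand it, ModelSamplingSD3 applies the timestep shift from the SD3 paper, warping the sigma schedule so the sampler spends more of its steps at high noise levels, where composition is decided. Roughly:

    def shift_sigma(sigma, shift=3.0):
        # SD3-style timestep shift: higher shift biases the schedule toward high noise
        return shift * sigma / (1 + (shift - 1) * sigma)

    print(shift_sigma(0.5))  # shift=3 turns the mid-schedule sigma 0.5 into 0.75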


    📖 Prompting Guide

    The #1 thing is, be careful with your fluff. If you ask for warm lighting, you better believe you're gonna get warm lighting. Like, a lot of it. Even adding a simple high quality to your prompt might change your image completely. So be deliberate. Start with zero fluff.

    The effect is not always intuitive. For example, as the author of bigASP has pointed out, the term masterpiece quality "causes the model to tend toward producing illustrations/drawings instead of photo."

    If it's photos you want, I've yet to find phrases that work better than onlyfans, abbywinters photo. Hey now, I'm being serious! These terms work great for innocent stuff, too. (EDIT: As of v2.2, these helper phrases are optional. Using photograph of a... is usually enough in newer versions of Snakebite.)

    Also, bigASP's training data was captioned with JoyCaption (online demo here, made by the same author as bigASP), so you should try speaking to the model in a cadence and tone similar to JoyCaption's. Booru tags work okay too, but they tend to push images in more of a CGI direction.

    Most of the time, if Snakebite is not giving you the image you want, it's a matter of finding another phrasing or adding (((emphasis))).


    🏋️‍♂️ Training LoRAs

    Option A


    There is an official LoRA training script for bigASP 2.5 available here:

    It's easy to install. I'm running it through my kohya-ss venv, as it only required a couple of extra (non-conflicting) dependencies. However, it has a limited feature set and has not been thoroughly battle-tested.

    The train-lora.py script does not target as many modules as kohya's sd-scripts. This results in much smaller LoRA filesizes, but may prove insufficient for e.g. capturing a character's likeness, even at high rank and alpha. To fix this, search for "target_modules" in the script and update accordingly:

    target_modules=["to_k", "to_q", "to_v", "to_out.0", "k_proj", "v_proj", "q_proj", "out_proj", "proj_in", "proj_out", "conv_in", "conv_out", "ff.net.0.proj", "ff.net.2"]

    That should produce a file equal in size to kohya's (at fp16 precision).
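    For context, if your copy of train-lora.py builds its adapter with peft's LoraConfig (an assumption about the script's internals - check your version), the patched configuration would look something like this:

    from peft import LoraConfig

    config = LoraConfig(
        r=32,            # lora_rank
        lora_alpha=32,
        target_modules=[
            "to_k", "to_q", "to_v", "to_out.0",          # UNet attention projections
            "k_proj", "v_proj", "q_proj", "out_proj",    # text encoder attention
            "proj_in", "proj_out", "conv_in", "conv_out",
            "ff.net.0.proj", "ff.net.2",                 # feed-forward layers
        ],
    )

    Targeting the feed-forward and conv layers in addition to attention is what makes the file bigger - and what helps capture likeness.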

    The default settings are good. You can increase lora_rank and lora_alpha if you want, but the default value of 32 is usually fine. The script buckets images automatically. Be aware that it only saves a checkpoint at the end of training.

    Don't train on turbo versions of Snakebite. Either use the full version (once I've uploaded it), or train on bigASP 2.5 vanilla.


    Option B

    There is an unofficial fork of sd-scripts that supports Flow Matching, created by @deGENERATIVE_SQUAD:

    This option takes more effort to set up, but it opens up a lot more possibilities for customization. You may need to adjust the code for compatibility with your environment. In my case, I had to remove the --loss_type="fft" parameter and swap out references to transforms.v2 in library/train_util.py with the original code from the sd3 branch.

    Pass the following arguments to sdxl_train_network.py:

    --flow_matching ^
    --flow_matching_objective="vector_field" ^
    --flow_matching_shift=3.0
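    Everything else is standard sd-scripts usage. As a rough template (the paths and hyperparameters below are placeholders; only the three flow-matching flags are specific to this fork):

    accelerate launch sdxl_train_network.py ^
    --pretrained_model_name_or_path="bigasp25.safetensors" ^
    --train_data_dir="C:\datasets\my_character" ^
    --output_dir="C:\loras" --output_name="my_character" ^
    --resolution=1024 --network_module=networks.lora ^
    --network_dim=32 --network_alpha=32 ^
    --learning_rate=1e-4 --max_train_steps=2000 ^
    --mixed_precision="bf16" --save_model_as=safetensors ^
    --flow_matching ^
    --flow_matching_objective="vector_field" ^
    --flow_matching_shift=3.0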


    Thank you. As always, I look forward to your feedback. Please share the model and upload some images to help it gain traction. It would be amazing if we could make Snakebite eligible for Civitai's onsite generator someday!

    Description

    Trained on 1400 high-quality photographs using a bunch of modern strategies (PiSSA decomposition, flow matching, the Prodigy Plus optimizer). Snakebite 2.3 sets a new standard for realistic SDXL models in terms of image quality and prompt adherence.

    Try this custom sigma curve for very natural-looking results (less cinematic):

    1, 0, 1, 0.8, 0.6, 0.45, 0.2, 0.0000
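    In ComfyUI, a hand-made curve like this can be fed to a custom sampler (SamplerCustom accepts a SIGMAS input). In plain Python terms it's just a descending tensor of noise levels - the values below are illustrative, so substitute the curve above:

    import torch

    sigmas = torch.tensor([1.0, 0.8, 0.6, 0.45, 0.2, 0.0])  # illustrative descending curve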

    I also recommend starting your prompt with <photograph of a...>

    Comments (19)

    Kitten123 · Nov 21, 2025

    When will non-DMD be available for 2.3?

    liftweights (Author) · Nov 21, 2025

    This weekend, most likely!

    JBsharp · Nov 21, 2025

    Love it! Only question is whether you have any recommendations to tone down the contrast. My images are coming out a little oversaturated.

    liftweights (Author) · Nov 21, 2025

    Try this LoRA at a negative strength of ~0.5.

    JBsharp · Nov 21, 2025

    @liftweights That did the trick and also improved prompt adherence!

    liftweights (Author) · Nov 24, 2025

    Great! I also just found this replacement VAE from @Felldude:

    - Felldude/SDXL_NaturalSkin_VAE at main

    I'm really liking it so far. Improves skin tone and makes images a bit less red overall. I'll probably bake it into future versions of Snakebite :)

    JBsharp · Nov 24, 2025

    @liftweights This definitely makes a difference. Do you by chance have a newer WF than the one you listed earlier? My workflow is still producing hair that's not as detailed.

    liftweights (Author) · Nov 24, 2025

    @JBsharp I use a massive AIO workflow (txt2img/img2img/inpainting/upscaling). It needs a lot of cleanup before it's ready for public consumption :(

    I'd need an example image to know what's going wrong with the hair... but I would suggest playing around with your model shift value or increasing your step count (sometimes I need up to 11 steps)

    JBsharp · Nov 24, 2025

    @liftweights lol, totally understand how everyone's WF might be a little messy. If you could DM me, I won't judge! :)

    I posted one image below of one of my better results. My WF is embedded as well.

    JBsharp · Nov 24, 2025

    @liftweights I finally got my WF to work and make amazing results!

    straytzenscribe · Nov 22, 2025

    Glad to see a new update. bigASP is really underrated by the community. People have this idea that the solution is always more VRAM... I remember the time when all we had was SD1.5 and excited creativity. Times have changed fast.

    liftweights (Author) · Nov 22, 2025

    Totally! SDXL hits the sweet spot for speed and quality on consumer hardware. Models like Illustrious have just about "solved" anime pics, but for some reason we still have the notion that we need giant, bloated models for realism. I don't think it's true - we just need more training on larger datasets, and bigASP is an amazing achievement in that regard.

    XpomulCivi · Nov 22, 2025

    Hi,

    I'd say 2.3 is a great improvement over 2.2 - the overall coherency and stability of the style is way better!

    Did you generate the preview images at that size or did you upscale them?

    liftweights (Author) · Nov 22, 2025

    Thank you :)

    The preview images are upscaled 2x - my go-to settings are as follows:

    - kl_optimal scheduler

    • 50-65% denoise

    - 4 steps

    - Tiled Diffusion on "Mixture of Diffusers" mode

    These settings provide a reasonable level of detail and the upscale only takes 4-5 seconds on an Nvidia 3090.

    XpomulCivi · Nov 22, 2025

    @liftweights Would you mind sharing this WF? I seem to get kinda blurred and oversaturated images when I use SB for img2img, or rather "pixelated" effects when using the same workflow as other checkpoints in Krita (there, a selection is refined with split sigmas, which gives quite different results from normal img2img refining).

    So I'm in general not sure if my Comfy setup is correct, and repurposing WFs hasn't been that useful so far either.

    liftweights (Author) · Nov 22, 2025

    @XpomulCivi To avoid pixelated results, I recommend decoding the latent image to pixels and then upscaling that with lanczos. Upscaling directly from the latent produces bad images for me too. And of course make sure you're using LCM / kl_optimal / 50% denoise / CFG 1.

    I can share a workflow later, but there's not much else to it.

    XpomulCivi · Nov 23, 2025

    @liftweights Ty, I'll try!

    amazingbeauty · Nov 23, 2025

    Is the turbo model 'turbo' or 'DMD2'? Also, will there be a new non-turbo version of 2.3?

    liftweights (Author) · Nov 23, 2025

    "Turbo" uses multiple acceleration LoRAs with a block merging strategy. It is primarily DMD2.

    The non-turbo version will be available soon.

    Checkpoint · SDXL 1.0

    Details

    Downloads: 347
    Platform: CivitAI
    Platform Status: Available
    Created: 11/21/2025
    Updated: 4/30/2026
    Deleted: -

    Files

    snakebite2_v23Turbo.safetensors

    Mirrors

    Available On (1 platform)
