
fp8-quantized Newbie-image for ComfyUI, along with fp8-quantized Gemma 3 4b.

    All credit belongs to the original model author. License is the same as the original model.

Note: Images from the bf16 and fp8 models are identical. If the image from the fp8 model changes drastically, your ComfyUI has somehow enabled fp16 mode. Newbie does not support fp16, and you will get a deformed image.
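A quick way to rule out a silent fp16 fallback is to check whether your PyTorch / GPU setup supports bf16 at all. A minimal sanity-check sketch in plain PyTorch (not a ComfyUI API):

import torch

# Newbie needs bf16 (or hardware fp8) compute; if bf16 is unavailable,
# ComfyUI may fall back to fp16 and you get deformed images.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("bf16 supported:", torch.cuda.is_bf16_supported())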


    Versions:

    Exp0.1 EP7:

    Scaled fp8 + Mixed precision.

    Exp0.1 base tcfp8:

Scaled fp8 + Mixed precision + Hardware (tensor core) fp8 support.

    Exp0.1 base:

    Scaled fp8 + Mixed precision.

    Gemma 3 4b:

    Scaled fp8 + Mixed precision.
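"Scaled fp8" means the checkpoint stores fp8 weight tensors together with scale factors used to dequantize them. If you want to see what that looks like on disk, here is a minimal inspection sketch using the safetensors library (the path is a placeholder for wherever you saved the file):

from safetensors import safe_open

# Print the first few tensor names and dtypes; expect fp8 weights plus
# small higher-precision scale tensors.
path = "newbieImageFp8_gemma34bIt.safetensors"  # adjust to your local path
with safe_open(path, framework="pt") as f:
    for name in list(f.keys())[:20]:
        print(name, f.get_tensor(name).dtype)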


    Note:

Where is Jina CLIP v2?

Jina CLIP v2 is very small (~1 GB), so quantizing it does not seem necessary.

    Hardware (tensor core) fp8 support:

TL;DR: The file contains calibration metadata. On a supported GPU with hardware fp8, ComfyUI will automatically do the calculations in FP8 instead of dequantizing to BF16 first.
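Whether a GPU counts as "supported" can be estimated from its compute capability: fp8 tensor cores arrived with Ada (SM 8.9) and Hopper (SM 9.0). A rough heuristic check (an assumption, not an official ComfyUI API):

import torch

# fp8 tensor cores: compute capability 8.9 (Ada) / 9.0 (Hopper) and newer.
# Older GPUs fall back to dequantizing the weights to BF16.
major, minor = torch.cuda.get_device_capability()
hw_fp8 = (major, minor) >= (8, 9)
print(f"Compute capability {major}.{minor} -> hardware fp8: {hw_fp8}")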

torch.compile is recommended if you can get it up and running; it might be around 80% faster.
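For reference, torch.compile just wraps a module's forward pass. A standalone sketch with a stand-in module (the actual Newbie graph is built by ComfyUI, e.g. through a torch-compile node or patch if your build has one):

import torch

# Stand-in module only; in ComfyUI the compilation would wrap the diffusion
# model's forward pass.
model = torch.nn.Linear(4096, 4096).to("cuda", torch.bfloat16)
compiled = torch.compile(model)
x = torch.randn(2, 4096, device="cuda", dtype=torch.bfloat16)
with torch.no_grad():
    y = compiled(x)  # first call compiles; later calls reuse the compiled kernels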

    More info about tcfp8: https://civarchive.com/models/2172944

If your GPU does not support fp8, this version is the same as the normal one, because the weights are identical.

    Description

    Scaled fp8 + mixed precision.

    Checkpoint
    Other

    Details

Downloads: 133
Platform: CivitAI
Platform Status: Available
Created: 12/16/2025
Updated: 1/13/2026
Deleted: -

    Files

    newbieImageFp8_gemma34bIt.safetensors

    Mirrors