CivArchive
    2127 - Z Image Asian Utopian - Turbo/Base BF16/GGUF - v2.0 - Base
    NSFW

    Recommended VAE:

    https://huggingface.co/Owen777/UltraFlux-v1/blob/main/vae/diffusion_pytorch_model.safetensors

    Online generation

    https://tensor.art/models/952032833109521879/2127-Z-Image-Asian-Utopian-Turbo-v2.0-FF


    v3.6 - Base/Turbo - FFV

    Starting from v2.0-FF, we added a brand-new training set and adjusted sheen, faces, and body shape.


    v3.0 - Turbo - FFDPO

    Built from the v2.0 fp32 version, with the Lokr and FDPO LoRA re-merged, then converted to bf16 and Q8_0 GGUF.


    v2.0 - FF / v2.0 - Base

    FF = Full Frame, Full Precision

    An fp32, dim 64 LoRA was trained on a 2.4MP high-resolution dataset, the v1.5 - ZSVD fusion weights were rebalanced, and the result was rebuilt as an fp32 full-precision version, then converted to bf16 and GGUF.
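    For reference, the bf16 conversion mentioned above is just fp32 with the low 16 mantissa bits dropped. A small numpy sketch of that truncation with round-to-nearest-even (illustrative only, not the converter actually used here):

    ```python
    import numpy as np

    def to_bf16(x):
        # bf16 keeps fp32's sign + exponent and the top 7 mantissa bits.
        # Round to nearest even by adding half an ulp (plus the kept LSB)
        # before masking off the low 16 bits.
        u = x.astype(np.float32).view(np.uint32)
        rounded = u + 0x7FFF + ((u >> 16) & 1)
        return (rounded & 0xFFFF0000).view(np.float32)

    vals = to_bf16(np.array([1.0, 3.14159], dtype=np.float32))
    # 1.0 is exactly representable; 3.14159 rounds to the nearest bf16 step
    ```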


    v1.5 - ZSVD

    An additional dim 128 LoRA was trained on 1,000 high-quality images, then SVD-fused with [Z Image Turbo] Asian Mix Lora v3.78 and the original Z Image Turbo bf16.
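    The SVD fusion step above, in general terms, means folding a LoRA into the base weights and re-extracting a low-rank adapter from the weight delta via truncated SVD. A minimal numpy sketch (an illustration of the general technique, not the author's actual pipeline; all names are made up):

    ```python
    import numpy as np

    def merge_lora(w_base, a, b, alpha=1.0):
        # LoRA adds (b @ a) to the layer output, so merging folds it into w
        return w_base + alpha * (b @ a)

    def svd_extract_lora(w_merged, w_base, rank=128):
        # Re-extract a rank-r LoRA pair from the weight delta
        delta = w_merged - w_base
        u, s, vt = np.linalg.svd(delta, full_matrices=False)
        b = u[:, :rank] * s[:rank]   # (out_dim, rank)
        a = vt[:rank, :]             # (rank, in_dim)
        return a, b

    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256))
    a0 = rng.normal(size=(16, 256))
    b0 = rng.normal(size=(256, 16))
    w_merged = merge_lora(w, a0, b0, alpha=0.5)
    a, b = svd_extract_lora(w_merged, w, rank=16)
    # a rank-16 SVD reconstructs a rank-16 delta exactly (up to float error)
    ```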


    v1.0 - Turbo

    This model combines [Z Image Turbo] Asian Mix Lora v3.78 with the original Z Image Turbo bf16, plus two extra 10k datasets (I own all usage rights; they are actually landscape photos I shot myself). An additional LoRA trained with gradient_accumulation: 4 was used to tune the weights, and, based on the AI-Toolkit training results, the DiT was fused with individually adjusted ratios for each of its 30 blocks.

    If you can, please post back the images you generate.
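    The gradient_accumulation: 4 setting mentioned above simply averages gradients over four micro-batches before taking a single optimizer step, simulating a 4x larger batch. A toy pure-Python sketch (illustrative only, not the AI-Toolkit internals):

    ```python
    # Toy gradient accumulation for fitting y = w * x with squared error.
    def grad(w, batch):
        # d/dw of mean squared error over one micro-batch
        return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

    w, lr, accum = 0.0, 0.1, 4
    micro_batches = [[(1.0, 2.0)], [(2.0, 4.0)], [(1.0, 2.0)], [(3.0, 6.0)]]

    # Accumulate (average) gradients across 4 micro-batches...
    g = sum(grad(w, mb) for mb in micro_batches) / accum
    # ...then apply one optimizer step for the whole accumulated batch
    w -= lr * g
    ```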

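    Per-block fusion, as described above for the 30 DiT blocks, amounts to a weighted state-dict merge where the interpolation ratio is keyed by block index. A minimal sketch with hypothetical key names and ratios (not the author's actual merge script):

    ```python
    import re

    def merge_per_block(sd_a, sd_b, block_ratios, default=0.5):
        # Linearly interpolate two state dicts with a separate ratio per block.
        # A key like "blocks.12.attn.weight" uses block_ratios[12].
        merged = {}
        for key, wa in sd_a.items():
            m = re.match(r"blocks\.(\d+)\.", key)
            t = block_ratios.get(int(m.group(1)), default) if m else default
            merged[key] = (1 - t) * wa + t * sd_b[key]
        return merged

    # Scalars stand in for weight tensors in this toy example
    sd_a = {"blocks.0.attn.weight": 0.0, "blocks.1.attn.weight": 0.0, "head.weight": 0.0}
    sd_b = {"blocks.0.attn.weight": 1.0, "blocks.1.attn.weight": 1.0, "head.weight": 1.0}
    out = merge_per_block(sd_a, sd_b, {0: 0.2, 1: 0.8})
    # block 0 leans toward model A, block 1 toward model B
    ```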

    Description

    The fp8 version loses too much precision; GGUF Q8_0 is the better choice.
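    One intuition for why Q8_0 can beat fp8 on small, roughly Gaussian weights: Q8_0 stores int8 values in blocks of 32 with one scale per block, so its absolute step adapts to each block's range, while fp8 e4m3 has only 3 mantissa bits of relative precision everywhere. A rough numpy sketch comparing the two (a crude simulation under stated assumptions, not the actual quantizers):

    ```python
    import numpy as np

    def quant_fp8_e4m3(x):
        # Crude fp8 e4m3 simulation: 3 mantissa bits, exponent clipped to
        # [-6, 8], tiny values flushed to zero, magnitudes capped at 448.
        sign, mag = np.sign(x), np.abs(x)
        out = np.zeros_like(x)
        nz = mag >= 2.0 ** -9
        e = np.clip(np.floor(np.log2(mag[nz])), -6, 8)
        m = np.round(mag[nz] / 2.0 ** e * 8) / 8
        out[nz] = sign[nz] * np.minimum(m * 2.0 ** e, 448.0)
        return out

    def quant_q8_0(x, block=32):
        # GGUF Q8_0 layout: int8 values in blocks of 32, one scale per block
        xb = x.reshape(-1, block)
        scale = np.abs(xb).max(axis=1, keepdims=True) / 127.0
        scale[scale == 0] = 1.0
        q = np.round(xb / scale).clip(-127, 127)
        return (q * scale).reshape(-1)

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.02, 4096)  # typical small network weights (assumed)
    mse_fp8 = np.mean((quant_fp8_e4m3(w) - w) ** 2)
    mse_q8 = np.mean((quant_q8_0(w) - w) ** 2)
    ```

    On this kind of distribution the per-block scaling of Q8_0 gives a markedly lower round-trip error than the fp8 simulation.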

    FAQ

    Comments (12)

    573148556666 · Feb 7, 2026 · 1 reaction

    I hope you can keep publishing the FP32 version.

    deepedia · Feb 7, 2026 · 1 reaction

    I got a size mismatch error when using the GGUF files. How do I fix it?

    deepedia · Feb 8, 2026

    Error(s) in loading state_dict for NextDiT: size mismatch for x_pad_token: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1, 3840]). size mismatch for cap_pad_token: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1, 3840]).

    Is it because of my text encoder? I use the Q8 Qwen3-4b gguf text encoder instead of the safetensors one, but that text encoder works fine with any non-GGUF model.

    hinablue
    Author
    Feb 8, 2026

    @deepedia That error comes from the GGUF loader applying the lumina2 arch to Z Image, so I have no idea why it works fine for others without the PR.

    deepedia · Feb 8, 2026 · 1 reaction

    @hinablue Thank you, I didn't know that existed. I used Gemini to help me modify the nodes manually to accept the size mismatch. This is the code patch that makes it work for me:

    # init model
    unet_path = folder_paths.get_full_path("unet", unet_name)
    sd, extra = gguf_sd_loader(unet_path)

    # --- START PATCH ---
    # Fix for Z-Image / NextDiT GGUF shape mismatch
    for key in ["x_pad_token", "cap_pad_token"]:
        if key in sd and len(sd[key].shape) == 1:
            logging.info(f"PATCHING: Unsqueezing {key} from {sd[key].shape}...")
            sd[key] = sd[key].unsqueeze(0)
    # --- END PATCH ---

    kwargs = {}

    twinkle99932 · Feb 24, 2026 · 2 reactions

    I just started learning ComfyUI. Is there a ComfyUI workflow? Starting from a ZIT workflow, I swapped in your 2.0 Base model and the recommended VAE, but every generated image comes out as noisy garbage T.T. I don't know which step or parameter is wrong /(ㄒoㄒ)/~~

    hinablue
    Author
    Feb 24, 2026

    You can download my images; each one has the workflow embedded.

    twinkle99932 · Feb 24, 2026 · 1 reaction

    @hinablue OK, I'll give it a try, thanks.

    Stefano_038 · Mar 1, 2026

    Do you mean artifacts/blurring? Don't use Sage Attention for the attention mechanism; switch to something else like xFormers or SDP and it should be fine.

    wang1818 · Mar 2, 2026 · 1 reaction

    3.0 is a bit overfitted.

    hinablue
    Author
    Mar 2, 2026

    Yeah, the strength is indeed a bit high.

    Checkpoint
    ZImageBase

    Details

    Downloads
    1,466
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/6/2026
    Updated
    4/30/2026
    Deleted
    -

    Files

    2127ZImageAsianUtopian_v20Base.safetensors

    Mirrors

    Other Platforms (TensorArt, SeaArt, etc.) (1 mirror)

    2127ZImageAsianUtopian_v20Base.safetensors

    2127ZImageAsianUtopian_v20Base.gguf

    Mirrors