
    NewBie image Exp0.1

    🧱 Exp0.1 Base

    • NewBie image Exp0.1 is a 3.5B parameter DiT model developed through research on the Lumina architecture.

      Building on these insights, it adopts Next-DiT as the foundation of a new NewBie architecture tailored to text-to-image generation.

      The NewBie image Exp0.1 model is trained within this newly constructed system, representing the first experimental release of the NewBie text-to-image generation framework.

    Text Encoders

    • We use Gemma3-4B-it as the primary text encoder, conditioning on its penultimate-layer token hidden states. We also extract pooled text features from Jina CLIP v2, project them, and fuse them into the time/AdaLN conditioning pathway. Together, Gemma3-4B-it and Jina CLIP v2 provide strong prompt understanding and improved instruction adherence.
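    As an illustration of this dual-conditioning scheme, here is a minimal numpy sketch. All dimensions, weight names, and the fusion rule are assumptions for illustration only; the actual NewBie implementation is not shown on this page.

    ```python
    # Sketch of dual text conditioning: per-token LLM hidden states for
    # attention, plus a pooled CLIP feature fused into the time/AdaLN path.
    # Shapes below are ASSUMED for illustration, not NewBie's real config.
    import numpy as np

    rng = np.random.default_rng(0)

    seq_len, gemma_dim = 77, 2560      # assumed Gemma3-4B hidden width
    clip_dim, model_dim = 1024, 1536   # assumed Jina CLIP v2 / DiT widths

    # Per-token conditioning: penultimate-layer hidden states from the LLM,
    # consumed by the DiT's attention over text tokens.
    token_hidden = rng.standard_normal((seq_len, gemma_dim))

    # Global conditioning: pooled CLIP text feature, projected and added to
    # the timestep embedding that drives AdaLN modulation.
    pooled = rng.standard_normal(clip_dim)
    W_proj = rng.standard_normal((model_dim, clip_dim)) * 0.02  # hypothetical projection
    t_emb = rng.standard_normal(model_dim)  # sinusoidal timestep embedding in practice

    adaln_cond = t_emb + W_proj @ pooled    # fused time/AdaLN conditioning vector

    print(token_hidden.shape, adaln_cond.shape)
    ```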

    VAE

    • We use the FLUX.1-dev 16-channel VAE to encode images into latents, delivering richer, smoother color rendering and finer texture detail, which helps preserve the striking visual quality of NewBie image Exp0.1.
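    For intuition, the FLUX.1-dev VAE maps images to 16-channel latents at 8x spatial downsampling (both standard for this VAE), so the latent geometry can be sketched as:

    ```python
    # Latent geometry for a FLUX.1-style VAE: 16 latent channels,
    # 8x spatial downsampling of the input image.
    def latent_shape(height, width, channels=16, downsample=8):
        """Return the (C, H, W) latent shape for an input image."""
        return (channels, height // downsample, width // downsample)

    print(latent_shape(1024, 1024))  # -> (16, 128, 128)
    ```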

    Prompt

    • XML-structured prompt

    • Natural-language prompt

    • Tag prompt

    🖼️ Task type

    NewBie image Exp0.1 is pretrained on a large corpus of high-quality anime data, enabling the model to generate remarkably detailed and visually striking anime-style images.

    We reformatted the dataset text into an XML structured format for our experiments. Empirically, this improved attention binding and attribute/element disentanglement, and also led to faster convergence.
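    The exact schema is not documented on this page; purely as a hypothetical illustration, an XML-structured prompt of the kind described might look like the following (all tag names here are invented for this example):

    ```xml
    <prompt>
      <character>
        <count>1girl</count>
        <hair>long silver hair</hair>
        <eyes>blue eyes</eyes>
      </character>
      <scene>moonlit rooftop, distant city lights</scene>
      <style>anime, detailed illustration</style>
    </prompt>
    ```

    The reported benefit of such structure is that explicit element boundaries make it easier for attention to bind attributes to the right subject.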

    It also supports natural-language and tag inputs.

    🧰 Model Zoo

    NewBie image Exp0.1: Hugging Face | ModelScope

    Gemma3-4B-it: Hugging Face | ModelScope

    Jina CLIP v2: Hugging Face | ModelScope

    FLUX.1-dev VAE: Hugging Face | ModelScope

    💪 Training procedure

    🔬 Participate

    Core Members

    ✨ Acknowledgments

    • Thanks to Google for open-sourcing the powerful Gemma3 LLM family.

    • Thanks to the Jina AI Org for open-sourcing the Jina family, enabling further research.

    • Thanks to Black Forest Labs for open-sourcing the FLUX VAE family; its powerful 16-channel VAE is one of the key components behind the improved image quality.

    • Thanks to Neta.art for fine-tuning and open-sourcing the Lumina-image-2.0 base model. Neta-Lumina gave us the opportunity to study the performance of Next-DiT on anime-style data.

    • Thanks to DeepGHS/narugo1992/SumomoLee for providing high-quality Anime Datasets.

    • Thanks to Nyanko for the early help and support.

    • Thanks to woctordho for helping improve NewBie’s compatibility with community tools.

    📖 Contribute

    • Neko, 衡鲍, XiaoLxl, xChenNing, Hapless, Lius

    • WindySea, 秋麒麟热茶, 古柯, Rnglg2, Ly, GHOSTLXH

    • Sarara, Seina, KKT机器人, NoirAlmondL, 天满, 暂时

    • Wenaka喵, ZhiHu, BounDless, DetaDT, 紫影のソナーニル

    • 花火流光, R3DeK, 圣人A, 王王玉, 乾坤君Sennke, 砚青

    • Heathcliff01, 无音, MonitaChan, WhyPing, TangRenLan

    • HomemDesgraca, EPIC, ARKBIRD, Talan, 448, Hugs288

    🧭 Community Guide

    Getting Started Guide

    LoRA Trainer

    💬 Communication

    📜 License

    • Model Weights: Newbie Non-Commercial Community License (Newbie-NC-1.0).

      Applies to: model weights/parameters/configs and derivatives (fine-tunes, LoRA, merges, quantized variants, etc.)

      For non-commercial use only; derivatives must be shared under the same license.

      See NewBie-image-Exp0.1 LICENSE.md

    • Code: Apache License 2.0.

      Applies to: training/inference scripts and related source code in this project.

      See Apache-2.0

    ⚠️ Disclaimer

    This model may produce unexpected or harmful outputs. Users are solely responsible for any risks and potential consequences arising from its use.

    Gemma3-4B-it AllinOne

    Comments (47)

    saltywww · Dec 20, 2025 · 11 reactions

    I've made peace with newbie.... maybe

    But I will never forgive whoever wrote the LoRA tutorial: pages and pages on environment setup, then the most important parts, tagging and parameters, skimmed over in one line? The final output and how to use it aren't covered at all. If you hadn't called it a LoRA training tutorial, I'd have thought it was a wheel-reinventing tutorial.

    Worst of all, even the wheel-reinventing isn't explained properly. Following the steps, it demands a fresh download of gemma and jina embedding v3, and you never mention that? It doesn't pick up an already-downloaded gemma either, also unmentioned. Am I really supposed to pull the models from hf at a blazing 100 kb/s?

    qek · Dec 20, 2025

    ?

    KH38MT · Dec 21, 2025

    lol

    waw1w1 · Dec 21, 2025

    qek · Dec 21, 2025

    lol

    Seii1 · Dec 20, 2025

    Is there a node to install? I'm using the ComfyUI app and updated/installed missing nodes, but the nodes still show up red for me.

    7456414 · Dec 20, 2025

    The guide to using it is here; it takes a bit of installation but is fairly simple:
    https://ai.feishu.cn/wiki/NZl9wm7V1iuNzmkRKCUcb1USnsh

    saltywww · Dec 20, 2025 · 1 reaction

    Use the newest ComfyUI dev branch, so you can use it with the original ComfyUI nodes.

    Seii1 · Dec 20, 2025

    @cococolacake I downloaded all the files from here, but do I need a folder file? I'm confused.

    7456414 · Dec 20, 2025

    @Seii1 I was unaware that base ComfyUI works with it now, but as @saltywww said, just use the latest nightly branch and install the custom nodes / follow the guide.

    edit: corrected branch, thanks qek!

    qek · Dec 20, 2025

    @cococolacake Nightly*, the branch is master (default)

    RuDDicK · Dec 20, 2025 · 18 reactions

    Mini-guide to launching "NewBie" in ComfyUI

    1. Update ComfyUI.

    2. Download Exp0.1 Base, Gemma3-4B-it, Jina CLIP v2 and VAE in SafeTensor format from this page.

    3. Place the 'Exp0.1 Base' (newbieImage_exp01Base.safetensors) in the 'diffusion_models' folder or the 'unet' folder.

    Place Gemma3-4B-it (newbieImage_gemma34BIt.safetensors) in the 'text_encoders' folder.

    Place Jina CLIP v2 (newbieImage_jinaClipV2.safetensors) in the clip or text_encoders folder.

    Place VAE (newbieImage_vae.safetensors) in the vae folder.

    4. Download the workflow from here (download the images): https://github.com/comfyanonymous/ComfyUI/pull/11415
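    Assuming a default ComfyUI installation, step 3 above corresponds to the following layout (folder names as in stock ComfyUI; file names as listed on this page):

    ```
    ComfyUI/models/
    ├── diffusion_models/              (or unet/)
    │   └── newbieImage_exp01Base.safetensors
    ├── text_encoders/                 (Jina may also go in clip/)
    │   ├── newbieImage_gemma34BIt.safetensors
    │   └── newbieImage_jinaClipV2.safetensors
    └── vae/
        └── newbieImage_vae.safetensors
    ```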

    qek · Dec 20, 2025

    The Gemma3-4B-it fp8 and GGUF versions do not work.

    E1ta · Dec 20, 2025

    Thanks for the guide.

    But I see an error that says:
    Prompt outputs failed validation: DualCLIPLoader: - Value not in list: type: 'newbie' not in ['sdxl', 'sd3', 'flux', 'hunyuan_video', 'hidream', 'hunyuan_image', 'hunyuan_video_15', 'kandinsky5', 'kandinsky5_image']
    on the DualCLIPLoader node.

    ComfyUI version is v0.5.1

    RuDDicK · Dec 20, 2025 · 2 reactions

    @E1ta I had this problem too. Updating through Manager helped.

    Manager=> Update All

    After updating, close ComfyUI and restart it. “type newbie” should appear in DualCLIPLoader.

    E1ta · Dec 20, 2025

    @RuDDicK
    Thanks for the help, but I checked ComfyUI's GitHub.

    "Implement Jina CLIP v2 and NewBie dual CLIP (#11415)" was committed 15 hours ago, which is newer than v0.5.1. Switching my ComfyUI to the master branch solved my problem.

    poisioei · Dec 20, 2025

    The DualCLIPLoader shows "invalid tokenizer", and I don't understand why……

    qek · Dec 21, 2025

    ComfyUI Nightly?

    poisioei · Dec 22, 2025

    @qek yep

    scruffynerf · Dec 20, 2025 · 8 reactions

    Got it working... and then got an uncensored Gemma3 model to work (took some tweaking, including merging in the tokenizer.model). I'll post more later... testing to see if it helps.

    scruffynerf · Dec 21, 2025

    Question: is there a reason you're not using the Anime VAE?

    qek · Dec 21, 2025

    And why pre-trained Gemma3 4b?

    RicemanT · Dec 22, 2025

    Flux1 anime VAE by anzhc? That wasn't a thing back when this model was conceived in planning (6 months ago).

    scruffynerf · Dec 22, 2025

    @RicemanT yes, but it is now, and can easily be used; it's not like the stock VAE is customized.

    RicemanT · Dec 24, 2025 · 2 reactions

    @scruffynerf I mean, you would have to retrain the model to adapt to a new VAE, y'know (this is the case for every model), and that takes time and money. Anyway, sneak peek for the future: the dev is doing another experiment right now, a whole different version of Newbie, so I'm not sure whether the anime VAE is part of his plans or not, but I'll talk to him about it.

    scruffynerf · Dec 24, 2025 · 1 reaction

    @RicemanT better to not open your mouth than show yourself to be clueless. The VAE works independently of the model, and you can swap VAEs (and I have) anytime. UltraFlux, Anime, and more. Go read some facts, and apologize.

    RicemanT · Dec 24, 2025

    @scruffynerf Uh? What I thought you meant with your question was "why did the dev not train the model with the flux1 anime vae". I didn't say anything about just using it in image generation. Whatever, man.

    qek · Dec 24, 2025 · 2 reactions

    @scruffynerf UltraFlux's AE is total trash, why use it?

    scruffynerf · Dec 24, 2025 · 1 reaction

    @RicemanT sigh, you wanna pretend that matters? It doesn't. Dig that hole deeper. I asked why they weren't using the Anime VAE with an anime model, and you blurt out nonsense. Apologize and move on.

    scruffynerf · Dec 24, 2025 · 1 reaction

    @qek The Anime VAE is different from the Ultra VAE, and Ultra is fine; it just suffers from a 1-pixel shift, which I wrote a node to correct. Another clueless comment: Anime VAE, stay on topic.

    RicemanT · Dec 24, 2025 · 1 reaction

    @scruffynerf if you want to know so bad, then contact the team through the Newbie discord.

    scruffynerf · Dec 24, 2025

    @RicemanT already in there... sigh. moving on.

    saltywww · Dec 23, 2025 · 8 reactions

    Painstakingly crafting a nailong LoKr
    (consider this a crime announcement)

    Good news: crafting complete

    [Newbie]奶龙/nailong - v1.0 | Other LyCORIS | Civitai

    Dazrock · Dec 25, 2025 · 3 reactions

    Just to let you know.
    The model isn't listed under Lumina.
    It's only listed under Other.
    Will you change the listing for this model?

    qek · Dec 25, 2025

    No, it is very different. Have you checked?

    Dazrock · Dec 26, 2025

    @qek But it's built on the Lumina architecture. Which makes it a Lumina model?

    qek · Dec 31, 2025 · 1 reaction

    @Dazrock It is Next-DiT, I forgot to say

    velascoflushie624 · Dec 25, 2025 · 2 reactions

    Can I run this with an RTX 2060 with 6 GB of VRAM?

    empek17 · Dec 26, 2025 · 1 reaction

    You could try using normal jina and getting the other models from here:
    https://civitai.com/models/2217313
    and loading the jina and gemma models on CPU in the DualCLIP loader in ComfyUI.
    It won't be the fastest, but it can maybe work, if you have enough RAM of course.

    velascoflushie624 · Dec 26, 2025 · 1 reaction

    @empek17 Thanks!

    openmn793 · Dec 26, 2025 · 5 reactions

    ComfyUI 0.6 finally officially supports newbie! I was quite shocked when I opened the template and saw the prompt, it's pretty intimidating! After reading the official spec and the example-image prompts I'm still a bit confused, but I mostly get it. I do find NewBie's prompt strategy puzzling, though: it defines a spec, yet also says you can write prompts however you like! I'll try it and see how well it works...

    gannibal · Dec 28, 2025 · 1 reaction

    Why does fp16 computeDtype in ComfyUI output just noise? My GPU doesn't support bf16, so it upcasts to fp32, making generation impossibly slow.

    reakaakasky · Dec 29, 2025 · 2 reactions

    Same problem as Neta: noise.refiner.0 exploded.

    reakaakasky · Dec 29, 2025 · 2 reactions

    The numbers go up into the billions; fp16 overflowed.
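    The overflow explanation above can be checked directly: float16 saturates at 65504, so billion-scale activations become inf, which then propagates through the network as NaN/noise. A minimal numpy demonstration:

    ```python
    # float16 has a tiny dynamic range compared to float32/bfloat16:
    # its largest finite value is 65504, so billion-scale activations
    # overflow to inf and poison everything downstream.
    import numpy as np

    print(np.finfo(np.float16).max)  # 65504.0
    print(np.float16(1e9))           # inf (overflow)
    print(np.float32(1e9))           # 1e+09 (fp32 has the range; so does bf16)
    ```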

    Chtholly_dev · Feb 11, 2026

    Same experience

    Baxter · Jan 1, 2026

    Is there a ComfyUI workflow yet that includes the ability to use LoRAs with this model? I can't find one.

    Checkpoint
    Other

    Details

    Downloads
    600
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/20/2025
    Updated
    5/2/2026
    Deleted
    -

    Files

    newbieImage_gemma34BIt.safetensors

    Mirrors

    Huggingface (1 mirror)
    Other Platforms (TensorArt, SeaArt, etc.) (1 mirror)