This is a fine-tuned model of Qwen. The alpha version was trained on a small amount of data because of Qwen's unusually large size; the second version is currently being trained at much greater expense. You can figure out the parameters yourself. I hope you like it.
If any investors or institutions are interested in creating the world's most powerful Asian-female model, please contact me. I have prepared a training set of nearly 3 million (300万) images waiting to train Qwen-Image, but the cost of training is prohibitively high.
My email: [email protected]
Comments (40)
This is the best qwen model I've ever used. Thank you so much for sharing it.
thanks
38 GB — can a 5090 run it?
Quantize it to fp8 yourself.
It runs even without quantization, just slowly: about 120 seconds per 1080p image at 40 steps.
@su400538 You probably don't have enough RAM. With my 5090 + 96 GB RAM, the first run took 108 seconds and the second 28 seconds (8-step LoRA).
Turns out the model gets loaded into RAM — by the time I noticed, my memory usage had already exploded.
A 5090 dragging a 39 GB model? You need at least 96 GB of RAM.
Dang, 38 GB. What's the minimum VRAM needed to run this model?
fp8 maybe 24gb vram
@white2023 ty, where can I get it?
24gb vram with 64gb ram can run this smoothly.
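The 24 GB fp8 figure quoted in this thread follows from simple arithmetic. A rough sketch (the 38 GB file size comes from the comments above; treating the whole file as fp16 weights is a simplification, since activations and the text encoder add overhead on top):

```python
# Back-of-envelope VRAM math for running this 38 GB fp16 checkpoint in fp8.
fp16_file_gb = 38          # checkpoint size reported in the thread
bytes_per_weight_fp16 = 2  # fp16 stores 2 bytes per weight

n_weights = fp16_file_gb / bytes_per_weight_fp16 * 1e9  # ~19 billion weights

fp8_weights_gb = n_weights * 1 / 1e9  # fp8 stores 1 byte per weight
print(fp8_weights_gb)  # 19.0 -> roughly why 24 GB cards are cited as the floor
```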
Perfectly suits Chinese aesthetic tastes! Really looking forward to the next version!
Looking forward to a GGUF version.
Upload your models to other AI websites as well. CivitAI tends to erase models without any warning. There were around 20 Qwen models a week ago; now most of them are gone.
I could share an fp8-quantized version of this model. I can also help with training; I have a 5090 and unlimited power.
I hope you get the resources you need to train the next version of this model.
Can you please upload fp8 version?
I found it on liblib but cannot download it since I don't have an account.
Can you tell me the website? I'll check it out.
@1062671009872 白转千问Qwen image FP8量化版-Checkpoint-潪AI_Binity-LiblibAI
It's gone.
@kicktest Thanks, brother. The original 38 GB version is really too much to handle.
Can you please upload fp8 version to CivitAI?
I cannot download from liblib.art because I need an account, and to register for an account you need a Chinese phone number (starting with +86).
seconded
Why not just use the fp16 version, but load it as fp8 in the Load Diffusion Model node? It produces better results than using the fp8 file while only using an fp8 amount of VRAM.
Can you upload the fp8 model here? I can't download from liblib.art as it requires a Chinese number.
For those waiting for FP8: just run the FP16 model, but in the Load Diffusion Model node change weight_dtype to fp8. It will only use an fp8 amount of VRAM, and the quality is better than using an fp8 file. (Also, fp8_e5m2 gives better details than e4m3 in my experience.)
I've uploaded the FP8 E4M3FN version on HF: jzpz/whiteqwen-alpha0.1-fp8 (main branch).
But like @luneva said, if you can, run the fp16 with fp8 weight_dtype; it does give better quality.
I just renamed my account to @luneva =) rather than the random letters I typed in during account creation years ago, haha. Btw @JzPz I really love your work, so inspiring and varied! Oh, also: I use fp8_e5m2 instead of e4m3 to get more detail.
White is a god. My god!
does this checkpoint only favor east asian women?
Pretty much. I tried other ethnicities, but it didn't work well with LoRAs like Image Analog or SamsungCam Ultra Real (it kind of works). Qwen Rebalance handles other ethnicities well and has better anatomy. I don't know why, but for me White Qwen produces slightly blurry results when using a few LoRAs; I'm not sure if that's because I'm using the fp8 version from JzPz. That said, I do like the Asian-women look White Qwen gives. In my opinion, try Qwen Rebalance once and you'll notice it's overall better and much sharper, whereas White Qwen is blurrier/softer, which does make it look more lifelike. Still, it gets noticeably blurry when using a few LoRAs.
I have created Q6 and Q8 versions of this model: https://huggingface.co/markasd/whiteQWEN_alpha01-GGUF/tree/main. Quants lower than that can reduce quality.
God bless you
Boss, how do I use this? Do I just import it directly, or do I need to build my own workflow and then select this model?
A question for the experts: dark scenes always come out with a flash-like light source. What if I don't want the flash?