Source: https://huggingface.co/Freepik/flux.1-lite-8B-alpha from Freepik
This is the alpha release of Flux.1 Lite, an 8B-parameter transformer model distilled from FLUX.1-dev. It uses 7 GB less RAM and runs 23% faster than the original model while maintaining the same precision (bfloat16). A GGUF version is also available for download.
☕ Buy me a coffee: https://ko-fi.com/ralfingerai
🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb
Description
For the best results, we strongly recommend using a guidance_scale of 3.5 and setting n_steps between 22 and 30.
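The recommended settings above can be passed directly when generating images. A minimal sketch using the Hugging Face diffusers `FluxPipeline` is shown below; the prompt and output filename are illustrative assumptions, and the heavy imports are deferred so the settings can be inspected without a GPU environment.

```python
MODEL_ID = "Freepik/flux.1-lite-8B-alpha"
GUIDANCE_SCALE = 3.5  # recommended by the model card
NUM_STEPS = 24        # the card recommends 22-30 steps

def generate(prompt: str):
    # Deferred imports: torch and diffusers are only needed at generation time.
    import torch
    from diffusers import FluxPipeline

    # Load the checkpoint in bfloat16, matching the precision of the release.
    pipe = FluxPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    pipe.to("cuda")

    # Generate one image with the recommended guidance scale and step count.
    result = pipe(
        prompt,
        guidance_scale=GUIDANCE_SCALE,
        num_inference_steps=NUM_STEPS,
    )
    return result.images[0]

# Example usage (requires a CUDA GPU and the diffusers library):
# generate("A close-up photo of a red fox in the snow").save("fox.png")
```

Keeping `guidance_scale` at 3.5 and `num_inference_steps` within the 22-30 range follows the card's recommendation; values outside that range may degrade quality or waste compute.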