FLUX.1 Kontext [dev] is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions. For more information, please read our blog post and our technical report. You can find information about the [pro] version here.
## Key Features
- Change existing images based on an edit instruction.
- Have character, style and object reference without any finetuning.
- Robust consistency allows users to refine an image through multiple successive edits with minimal visual drift.
- Trained using guidance distillation, making FLUX.1 Kontext [dev] more efficient.
- Open weights to drive new scientific research, and empower artists to develop innovative workflows.
- Generated outputs can be used for personal, scientific, and commercial purposes, as described in the FLUX.1 [dev] Non-Commercial License.
## Usage
We provide a reference implementation of FLUX.1 Kontext [dev], as well as sampling code, in a dedicated GitHub repository. Developers and creatives looking to build on top of FLUX.1 Kontext [dev] are encouraged to use this as a starting point.
FLUX.1 Kontext [dev] is also available in both ComfyUI and Diffusers.
## API Endpoints
The FLUX.1 Kontext models are also available via API from the following sources:
- DataCrunch: https://datacrunch.io/flux-kontext
- Replicate: https://replicate.com/blog/flux-kontext
- TogetherAI: https://www.together.ai/models/flux-1-kontext-dev
## Using with diffusers 🧨
```shell
# Install diffusers from the main branch until future stable release
pip install git+https://github.com/huggingface/diffusers.git
```
Image editing:
```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained("black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")

input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(
    image=input_image,
    prompt="Add a hat to the cat",
    guidance_scale=2.5,
).images[0]
```
FLUX.1 Kontext comes with an integrity checker, which should be run after the image generation step. To run the integrity checker, install the official repository from black-forest-labs/flux and add the following code:
```python
import torch
import numpy as np
from flux.content_filters import PixtralContentFilter

integrity_checker = PixtralContentFilter(torch.device("cuda"))

# Rescale pixel values from [0, 255] to [-1, 1] and convert HWC -> NCHW
image_ = np.array(image) / 255.0
image_ = 2 * image_ - 1
image_ = torch.from_numpy(image_).to("cuda", dtype=torch.float32).unsqueeze(0).permute(0, 3, 1, 2)

if integrity_checker.test_image(image_):
    raise ValueError("Your image has been flagged. Choose another prompt/image or try again.")
```
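The preprocessing above can be checked in isolation. The numpy-only sketch below reproduces the same normalization on a hypothetical 8Ă—8 all-white image (the shape and pixel values are illustrative, not from the model):

```python
import numpy as np

# Stand-in for the generated PIL image: a hypothetical 8x8 all-white RGB array
rgb = np.full((8, 8, 3), 255, dtype=np.uint8)

# Same preprocessing the checker expects: [0, 255] -> [0, 1] -> [-1, 1]
x = rgb.astype(np.float32) / 255.0
x = 2.0 * x - 1.0

# Add a batch axis and move channels first: HWC -> NCHW
x = x[np.newaxis].transpose(0, 3, 1, 2)

# x.shape is now (1, 3, 8, 8); white pixels map to +1.0
```

Getting this normalization and layout right matters: the checker's model was trained on inputs in [-1, 1] with channels-first layout, so passing raw [0, 255] arrays would silently produce unreliable results.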
For VRAM-saving measures and speed-ups, check out the diffusers docs.
## Risks
Black Forest Labs is committed to the responsible development of generative AI technology. Prior to releasing FLUX.1 Kontext, we evaluated and mitigated a number of risks in our models and services, including the generation of unlawful content. We implemented a series of pre-release mitigations to help prevent misuse by third parties, with additional post-release mitigations to help address residual risks:
- **Pre-training mitigation.** We filtered pre-training data for multiple categories of “not safe for work” (NSFW) content to help prevent a user generating unlawful content in response to text prompts or uploaded images.
- **Post-training mitigation.** We have partnered with the Internet Watch Foundation, an independent nonprofit organization dedicated to preventing online abuse, to filter known child sexual abuse material (CSAM) from post-training data. Subsequently, we undertook multiple rounds of targeted fine-tuning to provide additional mitigation against potential abuse. By inhibiting certain behaviors and concepts in the trained model, these techniques can help to prevent a user generating synthetic CSAM or nonconsensual intimate imagery (NCII) from a text prompt, or transforming an uploaded image into synthetic CSAM or NCII.
- **Pre-release evaluation.** Throughout this process, we conducted multiple internal and external third-party evaluations of model checkpoints to identify further opportunities for improvement. The third-party evaluations—which included 21 checkpoints of FLUX.1 Kontext [pro] and [dev]—focused on eliciting CSAM and NCII through adversarial testing with text-only prompts, as well as uploaded images with text prompts. Next, we conducted a final third-party evaluation of the proposed release checkpoints, focused on text-to-image and image-to-image CSAM and NCII generation. The final FLUX.1 Kontext [pro] (as offered through the FLUX API only) and FLUX.1 Kontext [dev] (released as an open-weight model) checkpoints demonstrated very high resilience against violative inputs, and FLUX.1 Kontext [dev] demonstrated higher resilience than other similar open-weight models across these risk categories. Based on these findings, we approved the release of the FLUX.1 Kontext [pro] model via API, and the release of the FLUX.1 Kontext [dev] model as openly-available weights under a non-commercial license to support third-party research and development.
- **Inference filters.** We are applying multiple filters to intercept text prompts, uploaded images, and output images on the FLUX API for FLUX.1 Kontext [pro]. Filters for CSAM and NCII are provided by Hive, a third-party provider, and cannot be adjusted or removed by developers. We provide filters for other categories of potentially harmful content, including gore, which can be adjusted by developers based on their specific risk profile. Additionally, the repository for the open FLUX.1 Kontext [dev] model includes filters for illegal or infringing content. Filters or manual review must be used with the model under the terms of the FLUX.1 [dev] Non-Commercial License. We may approach known deployers of the FLUX.1 Kontext [dev] model at random to verify that filters or manual review processes are in place.
- **Content provenance.** The FLUX API applies cryptographically-signed metadata to output content to indicate that images were produced with our model. Our API implements the Coalition for Content Provenance and Authenticity (C2PA) standard for metadata.
- **Policies.** Access to our API and use of our models are governed by our Developer Terms of Service, Usage Policy, and FLUX.1 [dev] Non-Commercial License, which prohibit the generation of unlawful content or the use of generated content for unlawful, defamatory, or abusive purposes. Developers and users must consent to these conditions to access the FLUX Kontext models.
- **Monitoring.** We are monitoring for patterns of violative use after release, and may ban developers who we detect intentionally and repeatedly violate our policies via the FLUX API. Additionally, we provide a dedicated email address ([email protected]) to solicit feedback from the community. We maintain a reporting relationship with organizations such as the Internet Watch Foundation and the National Center for Missing and Exploited Children, and we welcome ongoing engagement with authorities, developers, and researchers to share intelligence about emerging risks and develop effective mitigations.
## License
This model falls under the FLUX.1 [dev] Non-Commercial License.
## Citation
```bibtex
@misc{labs2025flux1kontextflowmatching,
  title={FLUX.1 Kontext: Flow Matching for In-Context Image Generation and Editing in Latent Space},
  author={Black Forest Labs and Stephen Batifol and Andreas Blattmann and Frederic Boesel and Saksham Consul and Cyril Diagne and Tim Dockhorn and Jack English and Zion English and Patrick Esser and Sumith Kulal and Kyle Lacey and Yam Levi and Cheng Li and Dominik Lorenz and Jonas Müller and Dustin Podell and Robin Rombach and Harry Saini and Axel Sauer and Luke Smith},
  year={2025},
  eprint={2506.15742},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2506.15742},
}
```
## Comments
What is the difference between the two models, "fp8_e4m3fn" and "fp8_scaled"?
fp8_e4m3fn stores the weights as raw FP8 values in the E4M3FN format, while fp8_scaled likely applies dynamic-range scaling to improve memory efficiency and accuracy for practical deployment, especially in ComfyUI workflows.
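The "scaled" part of that answer is speculative, but the idea behind a stored scale factor can be illustrated with a toy numpy sketch. This models only the dynamic range of E4M3FN (largest finite value 448), not mantissa rounding, and the helper names are hypothetical:

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3FN

def quantize_e4m3_raw(x):
    """Naive FP8 cast: values beyond the format's range simply saturate."""
    return np.clip(x, -E4M3_MAX, E4M3_MAX)

def quantize_e4m3_scaled(x):
    """Per-tensor scaled cast: rescale into the FP8 range, keep the scale
    in higher precision, and multiply back when dequantizing."""
    scale = np.abs(x).max() / E4M3_MAX
    q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
    return q * scale  # dequantized view

weights = np.array([0.5, -3.0, 900.0])  # 900 exceeds the raw FP8 range
raw = quantize_e4m3_raw(weights)        # the 900.0 saturates to 448.0
scaled = quantize_e4m3_scaled(weights)  # the scale preserves 900.0
```

With raw casting, any weight outside ±448 is clipped and its value is lost; with a per-tensor scale, the full dynamic range survives at the cost of storing one extra scalar (and, in a real format, of spending FP8 precision on the rescaled values).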
Too censored. As usual from Black Forest Labs.
It is just a quant of the base model; if you want NSFW content you could merge or train it.
This model doesn't support NSFW; how can I get it?
Add an NSFW LoRA, but this type of LoRA cannot be downloaded from Civitai due to policy reasons. You can find it on Hugging Face.
@bigwinboy Actually a lot of NSFW models are banned by Hugging Face because they infringe its content policy.
@Fetch267 Dark-web vendors are plan B in the future.
Fetch267 Now even AI-generated images of women wearing suspender skirts and showing their shoulders will be rated R and hidden on Civitai. Doesn't it feel like being in a Middle Eastern Islamic community? By contrast, there are no such restrictions on AI websites in communist China.
bigwinboy I think Civitai fears being banned by banks, governments, or payment services. But in China there is no rating system, so some sensitive images can be posted.
Fetch267 You are right. What I find ironic is that this kind of image rating only applies to female characters, while male characters have no problem even topless. In other words, even in the AI era, the review mechanism is still stuck in the patriarchal era. This is ridiculous.
Is this just a reupload of the same files from the comfyorg Hugging Face page of Kontext fp8?
Yep, it's just another reupload.





![FLUX.1 [dev] Grid](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev/resolve/main/teaser.png)