CivArchive

    The BFS (Best Face Swap) LoRA series was developed for Qwen Image Edit 2509 and specializes in high-fidelity face and head replacement with natural tone blending and consistent lighting.

    Each version builds upon the previous one:

    • 🧠 Focus Faces: precise face swaps, keeping the original head shape and hair while transferring facial identity and expression.

    • 🧩 Focus Head: stronger head swaps, replacing the full head (including hair and pose orientation).

    • The two versions complement each other: one focuses on face swapping, the other on head swapping.

    Share your creations, provided they do not involve public figures or individuals who have not given consent. By sharing you will earn Buzz, and your posts directly help me improve future versions by identifying potential issues.

    Important Note: If you are going to use Qwen Image Edit 2511, update your ComfyUI before anything else; without the update you may get completely distorted or ugly images.

    If this model was helpful to you in any way, please consider helping me continue creating more models, for the price of a coffee.

    Workflows:
    Head/Face Swap Workflow - Qwen-Image-Edit-2509 | Civitai

    My Custom Lightning LoRA:

    Custom Lightning - Qwen Image Edit - 2511 | Qwen LoRA | Civitai

    Alissonerdx/CustomLightning · Hugging Face

    Test V3 here:

    BFS Best Face Swap - a Hugging Face Space by Alissonerdx

    Face Swap Video Tests (V1):
    Face Swap - Qwen Image Edit 2509 (English)

    Another important thing is to update ComfyUI. Many people are having terrible results because they haven't updated ComfyUI. The 2511 model has an architecture with a few more layers, and that's why ComfyUI needs to be updated.

    About Flux 2:

    I've done my best so far, but the results aren't as good as with Qwen. The base Flux 2 model can already handle head swapping, but with some difficulty. The goal of this LoRA was to improve that a bit, but I haven't achieved very good results yet. It might be a configuration issue, so here is a beta version for you to test.

    Try with CFG: 8.0

    PERSONAL NOTES:

    The swap quality will always depend heavily on the quality of your input images. Larger, clean images with little noise or compression artifacts generally produce the best results. Keep in mind that the model always follows the quality of the body image, since it becomes the final rendered frame—so even if the face source is high-quality, a low-resolution or noisy body image will limit the outcome.

    Most of the images I generate are created without the LightX2V Lightning LoRA, since I noticed that enabling it tends to make the skin appear more plastic-like and reddish, and finding the right balance requires extra tuning that I didn't focus on. If anyone has discovered good configurations, feel free to share them in the comments.

    In short, using LightX2V makes the model less versatile because it operates with a fixed CFG value of 1.0. So before assuming it “didn’t work,” I recommend first testing the workflow I published without LightX2V to compare the results.

    If you’re getting results with too much contrast, overly strong colors, or plastic-like textures while using LightX2V’s lightning models, try reducing the number of inference steps. For example, if you’re using the Qwen Image Edit 2509 Lightning (8 steps) model, try running it with 4 steps instead. The excessive contrast often comes from running too many steps while CFG remains fixed at 1.0.

    If you encounter similar issues without using the Lightning LoRA, try lowering the steps as well (e.g., from 20 down to around 16 or fewer) and reduce CFG to values like 1.2 or 1.5, which can help produce smoother, more natural results.

    Another important detail: in images where the body is positioned farther from the camera, the face region becomes smaller, which can reduce swap accuracy and overall quality. This happens because the model has less pixel information to work with in that small facial area. To handle these cases, you can use my older workflow, which automatically crops the face region from the body image and performs an inpainting-like process to improve results in distant or small-face compositions.
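The crop-and-inpaint idea behind that older workflow can be sketched as follows. This is a minimal illustration only: the `process` stub stands in for the actual swap/inpaint model pass, and a real workflow would use proper resampling rather than nearest-neighbour scaling.

```python
import numpy as np

def process(img: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the diffusion swap/inpaint pass; identity here."""
    return img

def crop_upscale_paste(body: np.ndarray, box: tuple, scale: int = 2) -> np.ndarray:
    """Crop the small face region, enlarge it so the model has more pixels
    to work with, run the (stubbed) swap pass, then paste the result back.

    `box` is (top, left, height, width) in body-image coordinates.
    """
    top, left, h, w = box
    crop = body[top:top + h, left:left + w]

    # Nearest-neighbour upscale: each pixel becomes a scale x scale block.
    big = crop.repeat(scale, axis=0).repeat(scale, axis=1)

    # The real workflow runs the face-swap model on this enlarged crop.
    processed = process(big)

    # Downscale by striding and paste into a copy of the body image.
    small = processed[::scale, ::scale]
    out = body.copy()
    out[top:top + h, left:left + w] = small
    return out
```

Because the model sees the enlarged crop instead of the full frame, small or distant faces get the same effective resolution as close-ups.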

    Finally, if you notice loss of similarity between faces or poses—especially when the reference and target images differ significantly in aesthetics or angles—try increasing the strength of your head swap LoRA slightly (for instance, to 1.2 or 1.3) to restore consistency.


    ⚙️ BFS — “Focus Faces”

    Trained on 240 image triplets (face, body, and result),
    with a LoRA rank of 16 → later increased to 32,
    and gradient accumulation = 2, running for 5500 steps on an NVIDIA L40S GPU.

    This version produces stable and detailed face swaps, preserving expression, lighting, and gaze direction while maintaining the body’s natural look.


    🔧 Model Notes

    • You don't need to use my workflow for this LoRA to work. If you are having problems with it, use your own: the setup is simply Qwen Image Edit + LoRA, with the inputs in the right order (face = image 1, body = image 2).

    • Quantization: not guaranteed to work below FP8 (avoid GGUF Q4).

    • Face mask: optional — remove if MediaPipe or Planar Overlay cause issues.

    • Pose conditioning: use MediaPipe Face Mesh or DWPose if you need more alignment control.

    • Lightning LoRA: may produce plastic-like skin, especially when mixed with other Qwen-based LoRAs.


    Samplers:

    • er_sde + beta57 / kl_optimal / ddim_uniform (best results)

    • ddim + ddim_uniform (sometimes most realistic)

    • res_2s + beta57

    Don't get attached to one setting; if one combination doesn't work well, switch to another.

    Precision:

    • 🧠 Best: fp16

    • ⚙️ Recommended: gguf q8 or fp8

    • ⚠️ Below fp8: noticeable degradation

    Inference Tips:

    • With the Qwen Image Edit 2509 Lightning LoRA → use 4 / 8 steps for fast generation.

    • Without it → use 12–20 steps, CFG 1.0–2.5 for realism.
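These starting points, together with the troubleshooting advice in the personal notes above, can be condensed into a small helper. The function name and the exact default values are my own summary of the guidance, not an official preset:

```python
def recommended_settings(lightning_lora: bool, too_much_contrast: bool = False) -> dict:
    """Starting-point settings distilled from the notes above.

    With a Lightning LoRA, CFG is effectively pinned at 1.0 and steps stay
    low; if results look over-contrasted or plastic, halve the steps.
    Without it, use more steps (CFG 2.0 picked as a midpoint of the
    suggested 1.0-2.5 range), dropping steps/CFG if artifacts appear.
    """
    if lightning_lora:
        return {"steps": 4 if too_much_contrast else 8, "cfg": 1.0}
    if too_much_contrast:
        return {"steps": 16, "cfg": 1.5}
    return {"steps": 20, "cfg": 2.0}
```

Treat these as first guesses to tune from, not fixed rules; as noted above, different images favor different combinations.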


    🧬 BFS — “Focus Head”

    The “Focus Head” version was trained as a continuation of Focus Faces, extending the dataset and shifting focus toward full head swaps.

    It was trained on an NVIDIA RTX 6000 PRO, rank 32, for 12,000 steps, using 628 image sets (face, body, target, and sometimes pose maps generated via MediaPipe).

    🔹 Training Phases

    1. Standard Face Swap – same as Focus Faces, focusing on facial identity.

    2. Pose-Conditioned Face Swap – added pose maps to align gaze and head angle.

    3. Full Head Swap – replaced the entire head (including hair) for stronger identity control.

    After ~2000 steps, the focus moved toward head swap refinement.
    At ~4000 steps, the dataset was narrowed to perfect skin-tone matches, and by the end of training,
    the dataset evolved from 628 → 138 → 76 high-quality samples for final fine-tuning.

    ⚠️ Note:
    While Focus Head can still perform standard face swaps, it is more naturally inclined toward full head swaps due to its data balance.
    This was partly intentional, but also a side effect of dataset distribution and mixed conditioning.


    ⚠️ Important Notice

    Do not share results involving real people, celebrities, or public figures.
    Civitai’s moderation may disable posts that violate likeness or consent rules.
    This model is intended only for artistic and fictional characters, educational use, and AI experimentation.

    I take no responsibility for any misuse of this model. Please use it responsibly and respect all likeness rights.

    Description

    This release includes two variants: the 100% original, which is the direct result of training for over 5,500 steps, and a merged version, created by merging version 4 of the 2509 model with this 100% original model trained on 2511.
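A linear merge of two compatible LoRA checkpoints can be sketched like this. The exact recipe and weighting used for the released merged file are not specified, so treat this as an illustration only (numpy arrays stand in for the safetensors tensors):

```python
import numpy as np

def merge_loras(a: dict, b: dict, alpha: float = 0.5) -> dict:
    """Weighted average of two LoRA state dicts that share the same keys:
    merged = alpha * A + (1 - alpha) * B.

    This is the simplest possible merge; real tools may rescale per-layer
    or handle mismatched ranks, which this sketch does not attempt.
    """
    if a.keys() != b.keys():
        raise ValueError("LoRAs must have identical tensor keys to merge")
    return {k: alpha * a[k] + (1 - alpha) * b[k] for k in a}
```

With `alpha = 0.5` both parents contribute equally; shifting `alpha` biases the merge toward one checkpoint.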

    In my evaluation, the merged version performs better, particularly in its ability to reproduce a wider range of expressions. While it is not yet perfect, the results indicate that we are moving in the right direction.

    Training Data: a ZIP file containing the other two versions of the LoRA.


    Comments (135)

    wxcvbnw · Jan 2, 2026

    Would be interested in seeing an equivalent but for clothing swaps!

    ApexArtist1 · Jan 2, 2026

    https://youtu.be/kk4VdtF7MPE best clothing swap

    ikrall001893 · Jan 2, 2026 · 2 reactions

    ok, am uninstalling face fusion... x)
    love it :3

    Ponder_Stibbons · Jan 2, 2026

    Facefusion has a really good pipeline, and incredible memory management, last time I checked, haven't updated since 3.something. I was never able to duplicate the occlusion masking, which was the real reason for keeping it around, even though it's way too aggressive. But comping a FF mouth with a comfy swap saved many projects. Worth keeping it around for the stuff it does well, and fast. If you don't mind compositing.

    christopherlinki1408 · Jan 2, 2026 · 2 reactions

    Amazing LoRA, but it can’t swap heads that are viewed from the back or above. I tried changing the prompt, but it still didn’t work. The Qwen model can generate images showing the head from the back, so maybe the training data didn’t include those head angles. I uploaded images to the gallery to showcase the problem. There might be a way to fix it, but it didn’t work for me.

    NRDX
    Author
    Jan 2, 2026

    My dataset doesn't actually have many of those examples, but look at the example I sent. I don't know what base model or workflow you're using, but the more quantized the models you use, the worse the quality of anything you do will be.

    Imgur: The magic of the Internet

    shalegri · Jan 4, 2026 · 1 reaction

    I completely agree with NRDX. In addition, to achieve the desired result, use additional LoRAs and different prompt options.
    https://fex.net/ru/s/f27nmvn

    shalegri · Jan 2, 2026 · 1 reaction

    I confirm, this is the Best Face Swap, one that actually works. Thank you very much to the creator!

    NRDX
    Author
    Jan 2, 2026· 6 reactions

    If you want to test it, I created a custom Lightning based on an original Lightning from QIE 2511, but I first extracted a rank-4 LoRA and then merged it with the Lightning LoRA from QI 2512. With this approach, the custom version ends up slightly less “plastic” than the original when used with 2511.

    https://huggingface.co/Alissonerdx/CustomLightning
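The rank-4 extraction step described above can be sketched with a truncated SVD over a weight delta. This is a minimal illustration of the general technique, not the actual extraction script that was used:

```python
import numpy as np

def extract_low_rank(delta: np.ndarray, rank: int = 4):
    """Extract a rank-`rank` LoRA factor pair (B, A) from a full weight
    delta via truncated SVD, so that B @ A approximates delta.
    """
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]   # (out_features, rank), singular values folded in
    a = vt[:rank, :]             # (rank, in_features)
    return b, a
```

Keeping only the top singular directions is what makes the extracted LoRA small; the residual detail is discarded before the merge.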

    altoiddealer · Jan 2, 2026 · 1 reaction

    I was impressed, but not super thrilled with the results - but I wasn't using the custom lightning LoRA, so I decided to go ahead and try that just to see if it really makes the difference. I was shocked to find that your custom lightning really made a big difference in the quality of the results! Nice work on this!

    soyv4 · Jan 2, 2026 · 6 reactions

    It's just awful; starting with v4 there's no similarity at all, and 2511 / v5 doesn't work well either. I took the workflow from here: https://civitai.com/articles/20190/headface-swap-workflow-qwen-image-edit-2509 . I checked v3 and was surprised how clearly it conveys faces; you can easily replace one head with another. I don't know what you did there or how you trained v4-v5, but for some reason they don't work in your workflow. I assume they don't work with quantized versions; I'm running Q5, for context.

    NRDX
    Author
    Jan 2, 2026

    Man, how can you say it’s simply horrible? There’s no similarity at all? Do you think I made those posted images with Photoshop? Use the fp8 mixed model or at least a Q8. A large part of the problems comes from how people are using it—they try to run the models with Q4 quantization and expect the quality to be the same. On top of that, they throw in any random input image and think the model is supposed to perform miracles.

    NRDX
    Author
    Jan 2, 2026

    Post your examples here, along with your input images, so I can help you, because without showing you what's awful to you, I can't do anything.

    NRDX
    Author
    Jan 2, 2026

    It’s important to understand that 2511 does not have the same level of adherence that we achieved with the v3 trained on 2509. Just because 2511 is more capable overall does not mean it is automatically better at head swapping. These are different aspects of the model, and higher general capability does not guarantee better adherence for this specific task.

    I’ve already made this clear before, but even with that limitation, I’m genuinely very satisfied with V5 for 2511. I spent more than a week training this model to reach V5, using a dataset with over 300 head-swap images. This is the best result I’ve managed to achieve so far on 2511.

    So if you have concrete suggestions for improvement, or can clearly point out what exactly looks “horrible” to you, that kind of feedback would actually be valuable and help me continue improving the model.

    You should also note that the model you download directly as V5 is a merged version. If you want the original model that came straight out of training, you can download the file called “training images”. It’s a ZIP file that contains both the merged FP32 version and the original unmerged model.

    HMythAI · Jan 2, 2026 · 1 reaction

    I used v5 today to create a dataset and I am quite happy with the consistency of head swap, you might be using a bad workflow maybe or some wrong settings

    kronos1959777 · Jan 3, 2026

    Are you using a lightning lora? I found in wan2gp with 2509 the result was bad without one but with it was amazing.

    NRDX
    Author
    Jan 3, 2026

    @kronos1959777  Hello, if you want, try my custom Lightning; it gives better results than the original. Alissonerdx/CustomLightning · Hugging Face

    lblogan14 · Jan 3, 2026 · 1 reaction

    Appreciated the hard work here! Results are decent. I know sometimes we may need to rely on randomness and resample the final output. This is a decent LoRA. Depending on the complexity of both reference images and target images, a few parameter tweaks just need to be done. There is no one-for-all setting in any AI model. The fun is to find the balance point.

    NRDX
    Author
    Jan 3, 2026

    That's right: the quality depends much more on the base models you use, the quality of the inputs, and your technique; the LoRA only activates something the model already knows how to do. I advise testing LMS + SGM_Uniform and also using my custom Lightning LoRA.

    lblogan14 · Jan 4, 2026

    @NRDX LMS + SGM_Uniform did boost the success rate across random seeds. I also saw you now have a negative prompt in your V5 simple workflow. Any chance you know the distribution of the head dataset? There seems to be a challenge if the reference image has a higher resolution than the target image. I could sample a few more times to check if better results are generated.

    lblogan14 · Jan 4, 2026

    For example, if the reference image is a regular portrait shot and the target image is more artistic (noir or black/white), i.e., the styles between two images differ too much, it is better to turn off lightning LoRA and use regular bf16 model to do style transfer and head swap at the same time.

    NRDX
    Author
    Jan 4, 2026

    @lblogan14 The distribution you’re asking about refers to resolution, correct?
    In my case, the entire dataset is usually trained at 1024 and 256, using both resolution buckets.
    If you notice, I always keep the images at a 1024 base resolution in the V5 workflow specifically to avoid any issues related to mismatched resolutions.

    NRDX
    Author
    Jan 4, 2026

    @lblogan14 The negative prompt was added because I noticed a reduction in noise when using it. In previous versions, I don’t think it made that much of a difference.
    Another thing: the LoRA you’re using for acceleration — is it the custom one I mentioned, or the original?
    And lastly, something you could try is using latent references to see if that helps in any way.

    NRDX
    Author
    Jan 4, 2026

    === ORIGINAL RESOLUTION DISTRIBUTION ===

    1328x1328 -> 78

    968x1328 -> 37

    1024x1024 -> 31

    1056x1328 -> 26

    1280x848 -> 26

    1280x1280 -> 16

    880x1328 -> 15

    1064x1328 -> 6

    992x1328 -> 5

    1328x1320 -> 5

    NRDX
    Author
    Jan 4, 2026

    === FINAL (NORMALIZED) RESOLUTIONS ===

    1024x1024 -> 126

    736x1024 -> 44

    800x1024 -> 37

    1024x672 -> 29

    672x1024 -> 20

    864x1024 -> 10

    896x1024 -> 9

    992x1024 -> 7

    832x1024 -> 7

    1024x992 -> 5

    928x1024 -> 4

    704x1024 -> 4

    960x1024 -> 4

    640x1024 -> 2

    1024x512 -> 2
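The exact bucketing rule behind these lists is not stated, but one mapping consistent with several of the entries above (1328x1328 → 1024x1024, 968x1328 → 736x1024, 1280x848 → 1024x672) is to scale the longer side to 1024 and floor the shorter side to a multiple of 32. A hypothetical reconstruction, not the trainer's documented behavior:

```python
def normalize_resolution(w: int, h: int, target: int = 1024, step: int = 32) -> tuple:
    """Hypothetical bucketing: scale the longer side to `target` and floor
    the shorter side to a multiple of `step`. Integer math avoids float
    rounding surprises at exact multiples.
    """
    long_side = max(w, h)
    short = (min(w, h) * target // long_side) // step * step
    return (target, short) if w >= h else (short, target)
```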

    lblogan14 · Jan 4, 2026 · 1 reaction

    @NRDX Thanks for sharing! That explains a lot! I do use your custom Lightning LoRA. Another trick: I turn off the Lightning LoRA if there is a nontrivial occlusion in the reference image, so the model has more time to restore it.

    valentinkognito365 · Jan 7, 2026 · 1 reaction

    Using the custom Lightning LoRA + lms/sgm_uniform is a game changer! Thanks.

    orkomon · Jan 3, 2026 · 1 reaction

    Big fan of V4 and V5 simple workflows. You are doing God's work here, much better than other masking/controlnet based approaches. Best Open Source headswap on the internet right now IMO. Congrats! Looking forward to V6.

    PurpleCat · Jan 3, 2026 · 2 reactions

    Really, really good! Thank you!

    gummy_bearz · Jan 3, 2026 · 3 reactions

    Thank you for the great work. I am getting terrific results with v5. I had some initial issues, but updating to your latest workflow and running ComfyUI + node updates fixed everything. The QIE 2511 model with this Lora is a huge leap forward. As you have said repeatedly... this isn't magic and users need to use common sense before judging. Ensure quality input images, test with different sampler/scheduler combos, tweak steps/cfg/shift/strength levels, try different seeds, and recognize that quantization + lightning will reduce quality. With that said, I'm very impressed with my Q8/FP8 + Lightning/4step results in v5.

    Your work and hustle is appreciated by most of us. Don't let the dumb comments get to you.

    NRDX
    Author
    Jan 3, 2026· 1 reaction

    Dude, that's exactly what I've been trying to say, haha, thanks. Later, try my custom lightning, I think it might improve the results, instead of using the original lightning.

    _Jarvis_ · Jan 3, 2026 · 1 reaction

    Well, I don't know... I tested v5, and the result is very different from the examples. The face changes so much during the replacement process that it becomes barely recognizable. My settings with which I achieved the best results: lms + sgm uniform, cfg 1, steps 4, custom lighting_4or8_steps

    NRDX
    Author
    Jan 3, 2026

    Which base model are you using, FP8mixed? Q4, Q5? I recommend using at least an FP8 or Q8. Furthermore, 80% of the swap quality will depend on your inputs. Try adjusting or changing your inputs to see if any difference occurs. Try with a different face but with the same body, etc.

    NRDX
    Author
    Jan 3, 2026

    Another important thing is to update ComfyUI. Many people are having terrible results because they haven't updated ComfyUI. The 2511 model has an architecture with a few more layers, and that's why ComfyUI needs to be updated.

    NRDX
    Author
    Jan 3, 2026

    Another important detail is whether you are performing the head swap with human heads or anime characters, etc.? Because this model was only trained with images of real people. Just to clarify this point, it can work with anime images, etc., but that wasn't the main objective.

    _Jarvis_ · Jan 4, 2026

    @NRDX I first used the FP8 model, then switched to Q5_K_S. I’ve updated Comfy too. Mostly, I only swapped in real human faces—like from actual photos, not anime.

    NRDX
    Author
    Jan 4, 2026

    @_Jarvis_ Have you tested with other faces? Because these images in my examples are images generated using this LoRa, the problem might be your input images. Also check their resolution; I advise keeping 1024 as a base. Anyway, do you have any examples to show?

    _Jarvis_ · Jan 6, 2026

    @NRDX I think I've figured it all out, the quality is amazing. I would really like to see lora from you for comfortable Inpainting

    The_Last_Goblin_King · Jan 4, 2026

    Does anyone know if there is a node in V5 or V3 that helps correct too small a head scaling? Sometimes my head is perfect, but it's noticeably smaller than the one it's replacing

    NRDX
    Author
    Jan 4, 2026

    Do a test: try using, as your face input image, a head with the neck included, and with a size similar to the head size in your body image, to see if that has any effect.

    Veronika7070 · Jan 4, 2026

    Thank you for the work done. But how can I transfer the head while keeping the hair the same as in the reference image? I need to replace the head without touching the hair from the original image.

    NRDX
    Author
    Jan 4, 2026

    You can try face swap instead of head swap

    kurokuro · Jan 5, 2026 · 1 reaction

    v5 with the new workflow is great. Two quirks, though: 1) Despite the prompt, it appears to heavily favor the expression of the face reference (picture 2) - e.g. if Picture 1 is bored, Picture 2 has a big smile, output will be smiling; and 2) I'm not convinced the negative prompt does anything, e.g. putting "smile, smiling, happy" doesn't change output.

    NRDX
    Author
    Jan 5, 2026

    If you are using the Lightning LoRA with CFG 1, the negative prompt won't make any difference. You can also try slightly reducing the head swap LoRA strength, or other sampler/scheduler combinations. Another option is to use a face pose map as the 3rd available image, or, if you are passing the images through a reference latent, you can add a reference latent with a face pose image (be careful, because it can force the head shape; ideally, use a mouth-only pose). You can then control the strength of the mouth conditioning through the ConditioningSetAreaStrength node along with ConditioningSetTimestepRange.

    kurokuro · Jan 7, 2026

    @NRDX Oh, interesting. I'll give these a shot; was using lightning at CFG 1 and 4 steps, didn't realize that's why the negative prompt wasn't working. I don't think I fully follow what you mean by passing images through a reference latent + controlling conditioning strength for only one of the three reference images, but I'll take a look at the nodes and maybe it'll make sense then.

    orkomon · Jan 5, 2026

    Any tips on how we can control the head size? Different seeds will give different head sizes on the same reference body frame.

    NRDX
    Author
    Jan 5, 2026· 2 reactions

    I'm trying to find a scheduler/sampler that solves the problem. I'm running some XY plot tests to try and identify the best ones. I'll send a link here; there are already two tests done there. You can find the best ones and try testing with them.

    https://drive.google.com/drive/folders/1jgoRyGqavmJSSGsHgymB4kbnrfBZnk5Q?usp=sharing

    NRDX
    Author
    Jan 5, 2026· 2 reactions

    I'm looking at it here, but I haven't tested it much yet; deis_3m/heunpp2 + bong_tangent might be interesting.

    orkomon · Jan 5, 2026 · 1 reaction

    @NRDX thanks a lot for doing this but I think the variance of head size is due to seed fluctuations rather than sampler changes.

    NRDX
    Author
    Jan 5, 2026· 1 reaction

    @orkomon Yes, I completely agree, seed variance is the main driver here.
    That said, I still think it makes sense to prefer samplers and schedulers that, in practice, show the lowest tendency to distort or elongate the head relative to the original reference.
    Since the scheduler defines the sigma curve, it directly controls how the model transitions from high noise (global structure changes) to low noise (fine detail refinement). High noise tends to introduce larger structural variation, while low noise mainly refines details.
    My hypothesis is that certain noise decay profiles may bias how early the head shape stabilizes, which could indirectly influence head size consistency across seeds. I don’t claim this fully solves the issue, but it may reduce the likelihood or severity of the distortion.

    orkomon · Jan 6, 2026

    @NRDX got it. thanks!

    orkomon · Jan 5, 2026 · 4 reactions

    one more finding after playing with this all day...

    the lightning 4 steps is way more consistent with the input face, especially when the face is a small percentage of the image size in the body image.

    the regular 20 step seems to be higher quality, but it seems to require more closeup for the body image, otherwise it either deviates away from the reference face, or starts deforming eyes and other stuff.

    do you have the same findings as me?

    NRDX
    Author
    Jan 5, 2026· 1 reaction

    Yes, I'm starting to see something at that level, but I haven't had enough time to do that level of testing yet. I appreciate you doing this and bringing this information here. I'm currently running some tests, but only on the sampler and scheduler; later I'll try running other parameter and size tests.

    mrnolan1234 · Jan 7, 2026

    I have found this to be true as well. The regular 20-step run has more accurate and realistic skin and features, but it seems to put the face on the head instead of swapping heads, if that makes sense. The Lightning LoRA, while less realistic, seems better at likeness as far as head and face shape is concerned, which might be why it works on far-away, smaller heads. Also, I'm finding I can't just leave it at er_sde and beta57; I have to switch it up for the best result on each image.

    NRDX
    Author
    Jan 7, 2026

    @mrnolan1234 @orkomon You can then try running a test using the original LoRA file (not the merged version). It's in the training-data ZIP file, or you can download it from Hugging Face.

    valentinkognito365 · Jan 5, 2026 · 4 reactions

    I can't believe how good your 2511 LoRA is. It's incredible. You are a god

    jill_41205 · Jan 6, 2026 · 2 reactions

    I’d like to share some of my own experiences—please excuse any lack of technical rigor in my review! This LoRA works great, though it occasionally produces incorrect facial proportions (elongated faces). I’ve tried various sampler configurations with similar results. However, I discovered that when using inpainting masks, drawing a smaller mask for the face—allowing the qwen_vl model to perceive the original head proportions—helps reduce the face-stretching issue. (Using qwen_edit_aio v18 model + bfs_fp16)

    NRDX
    Author
    Jan 6, 2026

    Do you have any examples of your masks? Also, are you using a merge as a base? Isn't AIO a merge of the two versions?

    jill_41205 · Jan 6, 2026 · 1 reaction

    @NRDX Hi, I am using your model: https://huggingface.co/aiqwen/Qwen-Image-Edit-Rapid-AIO

    I would like to share some of my results here: https://drive.google.com/drive/folders/10o8zSAJO4hBU6Eutf6-NtQtpfy__5gg5?usp=sharing

    As a beginner in ComfyUI, I am currently using a workflow created by others. These samples do not include tests for face swapping at extreme angles. Finally, thank you so much for sharing this great model!

    NRDX
    Author
    Jan 6, 2026· 1 reaction

    @jill_41205 The AIO isn't my model; it's basically a merge made with different LoRAs, but it's essentially Qwen Image Edit 2509, not the newer 2511. LoRA V5 was made for 2511, not 2509. If you want to use it on 2509, you'd have to use version V4 or earlier.

    NRDX
    Author
    Jan 6, 2026· 1 reaction

    @jill_41205 Cool, I'll try that trick, but on the 2511; your images turned out very good.

    jill_41205 · Jan 6, 2026 · 1 reaction

    @NRDX I sent the wrong model URL earlier, let me correct it: https://huggingface.co/Phr00t/Qwen-Image-Edit-Rapid-AIO

    I am using the v18 version, which merges 2509 and 2511. Therefore, it is compatible with your v5 model.

    I have tested the official qwen-image-edit-2511-Bf16 on a cloud service, combined with your LoRA model bfs_head_v5_2511_merged_version_rank_32_fp32. The results are definitely better. I haven't tried extreme facial angles yet; I'll need to look into that further.

    valentinkognito365 · Jan 7, 2026 · 1 reaction

    I agree this elongated face issue is noticeable. While not being a disaster either, it's enough to give an uncanny valley effect. So what's the trick with inpainting masks ? Do we need to ask Qwen to do the swap on a cropped face without the context of the whole image ?

    NRDX
    Author
    Jan 8, 2026· 1 reaction

    @valentinkognito365 He posted the workflow embedded in the image and an image showing the mask, but I don't know if he cut out the face using the mask or something like that.

    jill_41205 · Jan 8, 2026

    @valentinkognito365 "Thanks for the explanation! My workflow uses:

    1.LoadImage node to load the image, then paint the mask in the editor

    2.TextEncodeQwenImageEditPlus node for text prompt

    3.InpaintCrop automatically crops the masked area

    4.KSampler generates new content

    5.InpaintStitch stitches it back

    Still learning the technical details, but this workflow works well for me!"

    The original author's workflow can be downloaded here:https://www.runninghub.ai/post/2007013698077986818

    valentinkognito365 · Jan 8, 2026 · 1 reaction

    @jill_41205 I see, I'm familiar with these nodes. Use SAM2 segmentation nodes to do the mask automatically for you. Also check out my new comment on the model page: using a 3rd reference pic is amazing!

    jill_41205 · Jan 8, 2026

    @valentinkognito365 Thanks for the suggestion! I actually prefer manual masking because it allows me to preserve some of the original head contour, which helps reduce that 'uncanny' stitched look

    valentinkognito365 · Jan 8, 2026 · 1 reaction

    @jill_41205 You can add nodes to extend or shrink the automatically generated mask. Aside from some rare cases where SAM2 struggles, there is no point in doing masks manually.

    valentinkognito365 · Jan 9, 2026 · 1 reaction

    @jill_41205 That's a great thing you reminded me about with masks. I've added SAM2 + Inpaint crop/stitch to the workflow, and now the 1st reference pic is focused on the whole head plus the context around it, instead of the whole picture. So, more quality.

    kishenmaan182004187 · Jan 7, 2026 · 1 reaction

    When I use it, I don't know if it's just me, but the result doesn't look like the reference image. My face ref is at the same angle and lighting as my target, but it still doesn't transfer over to the target.

    NRDX
    Author
    Jan 7, 2026

    Which base model do you use? Also, could you provide an example of the result? If you don't want to share yours, do it with another face and bring the results here for us.

    therapygirlie · Jan 7, 2026

    getting far lower quality outputs in v5 compared with the v4... not sure what is going on, as i didn't change anything. really washed out colors and over smoothed images

    NRDX
    Author
    Jan 8, 2026

    This is the first time I've heard of washed-out colors. Had you used version 2511 before? Is your ComfyUI up to date?

    soyv4Jan 8, 2026· 1 reaction
    CivitAI

    Well, it's working with v5 Lora!!! I'm using bfs_head_v5_2511_merged_version_rank_32_fp32.safetensors, and everything is working out great, thanks!

    valentinkognito365Jan 8, 2026· 4 reactions
    CivitAI

    Just found out that using a 3rd reference picture improves the quality, often dramatically.
    You just use the normal prompt, which doesn't mention the 3rd picture, but you provide it anyway.

    What I would do for best quality is feed a reference pic with the closest angle/expression possible to the original face (if you don't have one, use ChatGPT; it now beats Nano Banana Pro at this), then add the frontal one as the extra third pic.

    But just using two good pictures will beat a single one. I wonder why no one had tried this before.

    I'm also using er_sde and beta57 as I found lms / sgm_uniform to be hit and miss.


    Enjoy folks

    valentinkognito365Jan 8, 2026· 3 reactions

    Some other tips :
    - Obvious one : adding some description of the target expression helps (mouth open, mouth closed, parted lips, etc)
    - For NSFW BJ : add "keep the object inserted into her swallowing mouth, her lips are closed around the object." or "keep her tongue licking the back of the object"
    - For NSFW facials : add "keep the clear translucid white sticky fluid dripping on her [describe the zones : face, eyes, cheeks and chin]"

    This Lora often works better than an actual lora of the character. It's amazing

    Share your prompts if you have some more :-)

    NRDX
    Author
    Jan 8, 2026· 1 reaction

    Actually, I did some tests with the third reference image, including using a pose map. I created a specific one for that, but I didn't have much time to test it thoroughly; I was testing a lot of things this week. But using another image of the face that will be transferred can really help with details. I didn't get to really test that. Thank you very much for bringing this up. If you can, post some examples with the workflow embedded so people can replicate the results.

    @NRDX I find this amazing. It can actually beat the finest LoRAs I've trained. But then, I haven't trained on Qwen 2512 yet. I wonder if adding a Qwen 2512 LoRA of the character would help even further, or would it degrade the quality?

    bertos2013623Jan 8, 2026

    What trigger words do you use when connecting image 3?

    NRDX
    Author
    Jan 8, 2026

    @valentinkognito365 But adding the character's lora somewhat undermines its main use, which is precisely for creating datasets to train character loras.

    @bertos2013623 Like I wrote, I'm not changing the initial prompt, so I don't use anything for picture 3. I just add it.

    @NRDX I don't really agree. If you want to create datasets from a single pic, you're far better off using ChatGPT, which is currently the best model for that, even beating Nano Banana Pro. And/or rotating the head with Wan 2.6. Trust me, I've spent a lot of time doing that stuff. However, your 2511 LoRA, when working at its best, gives amazing results that beat what I could previously generate using inpainting with Flux/Krea LoRAs. I don't know if the same could be said about Qwen Image / Z-Image LoRAs, though. I'm often seeking ultimate likeness, so I was just wondering if a LoRA could be applied as well... Another option would be swap + inpainting with the LoRA. Anyway, it's already amazing with just the LoRA, and the workflow is simple. I hate complex, unintelligible workflows.

    @NRDX Another reason to use your LoRA is actually to not have to train a LoRA. A few pictures are enough to do a lot of stuff already, so it's good for characters where I'm not willing to spend the time required for a full dataset + training.

    NRDX
    Author
    Jan 8, 2026

    @valentinkognito365 Most people who use these LoRa files do so to produce NSFW content; they wouldn't do that with Nano Banana Pro or ChatGPT. 

    NRDX
    Author
    Jan 8, 2026

    @valentinkognito365 That's the big question: if you only have one image of a character's face, how would you maintain body consistency only in T2I? With this LoRa, you can simply take images of real bodies while maintaining consistency and apply a face swap to create consistent datasets.

    @NRDX The thing is, I never include the body in any dataset, because I figure I can either prompt it or do the face swap. I was referring to GPT/Nano Banana Pro for making variations of the face (angles/expressions); they're unbeatable for that. Then, when I've got enough for my dataset, I train the LoRA. I've sometimes done 200+ different face variations for a LoRA, all from a single starting pic: a folder for each expression, with variations of it and angles. It's a lot of work, but I've obtained some absolutely bulletproof LoRAs that way. The quality was way above just collecting different pics, because I had picked my "favorite face" and derived the whole dataset from it, so it was very consistent, and that's good for training.

    valentinkognito365Jan 11, 2026· 1 reaction

    With more practice, I think I actually get the best results using a single reference pic with the closest possible angle/expression to the target face, made with ChatGPT (prompting "give face 1 the expression and angle of face 2"). When using two reference pics, I think the model would pick up the one closest to the target. I've done comparison tests with the same seed and the same reference picture, and using it only once was better than twice. Plus, using three reference pics is much slower.

    So, as a conclusion, I would say: the top method is to use ChatGPT to prepare a suitable pic; if you can't for whatever reason, it can be useful to use two reference pics so you let the model pick the closest one.

    valentinkognito365Jan 12, 2026

    I confirm that using an additional Qwen Image 2512 LoRA of the character's face can improve the result a lot, even when using a reference face of that character that wasn't in the LoRA's dataset.

    It doesn't work all the time, though; some pictures where the expression was tricky came out horrible, because I think in those cases it was up to Qwen to make the variation and the LoRA made it impossible. So for some pictures it's probably better to lower the weight of the LoRA or even disable it... but for the majority of pics, it can help remove the uncanny-valley effect.

    permittedJan 8, 2026· 5 reactions
    CivitAI

    hahaha, I used it on real individuals!

    NerezzaJan 10, 2026· 1 reaction

    Stop! You violated the law. Pay the court a fine or serve your sentence. Your stolen goods are now forfeit.

    samo50998Jan 10, 2026
    CivitAI

    Is there an SDXL version?

    NRDX
    Author
    Jan 12, 2026

    No. This is only possible because Qwen Image Edit is a model focused on editing.

    JasonluvJan 11, 2026
    CivitAI

    Hi, thanks for sharing this awesome head swap workflow! I've been trying to use it with the Qwen Image Edit 2511 model, but I'm running into an issue where the output is just pure noise instead of an edited image based on my input photos. Here's what I did step by step:

    Loaded your workflow JSON directly into ComfyUI (latest version).

    Set the main UNet Loader to my 2511 model: qwen_image_edit_2511_fp8mixed.safetensors (FP8 mixed precision version from Comfy-Org on Hugging Face).

    Added the BFS Head Swap LoRA (bfs_head_v5_2511_merged_version_rank_16_fp16) with strength around 1.0.
    (I did NOT use the Lightning LoRA – it was bypassed by default in the workflow.)

    Input images:

    Base image (Picture 1): a full-body photo with background.

    Reference images (Picture 2 and 3): close-up face shots for the head swap.

    Positive prompt was the default one about preserving lighting/background from Picture 1 and copying head from others, high quality, sharp, etc.

    KSampler settings: tried steps 20-30, CFG 7-10, scheduler simple/karras, and denoise from 0.7 to 1.0 (mostly around default).

    No matter what I adjust (even lowering denoise to 0.7), the final output is always a full colorful noise image (like starting from a pure random latent), instead of editing the base image and swapping the head. The workflow loads fine, no red/missing nodes, the model loads quickly, and it runs without errors in the console. It seems like it's treating it as txt2img from noise instead of img2img editing.

    Thanks a lot for your help – really want to get this working because 2511 head swaps look amazing in other examples

    NRDX
    Author
    Jan 12, 2026

    Have you updated your ComfyUI to the latest version?

    JasonluvJan 12, 2026

    @NRDX Thank you for your reply, sir. A few hours ago, after reading your explanation, I updated both the frontend and backend of ComfyUI to the latest versions. Now all workflows are unable to run, so I had to revert my image to ensure I can still use BFS V3.

    NRDX
    Author
    Jan 12, 2026

    @Jasonluv That's strange, because for the people who had this problem, it was an issue with outdated ComfyUI.

    gilak7758Jan 11, 2026
    CivitAI

    Is there a working method to load reference images from a folder and batch process them automatically? I haven't got it working with the options I've tried. This workflow is AMAZING btw!

    NRDX
    Author
    Jan 12, 2026
    gilak7758Jan 12, 2026· 1 reaction

    @NRDX TYSM! I really appreciate your help, using that plus the scheduler/sampler test node is going to be epic with your workflow!
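    For anyone else looking to batch reference images from a folder: one node-free route is ComfyUI's built-in HTTP API, which accepts a workflow exported in API format as JSON posted to /prompt. A hedged Python sketch; the node id "12" and the endpoint port are placeholders that depend entirely on your exported workflow and server config:

```python
import json
import urllib.request
from pathlib import Path

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI API endpoint
REF_NODE_ID = "12"  # hypothetical: the id of your reference LoadImage node in the API JSON

def build_payload(workflow: dict, ref_image: str, node_id: str = REF_NODE_ID) -> bytes:
    """Point the reference LoadImage node at a new file and wrap the
    workflow in the JSON body ComfyUI's /prompt endpoint expects."""
    wf = json.loads(json.dumps(workflow))  # cheap deep copy, leaves the original untouched
    wf[node_id]["inputs"]["image"] = ref_image
    return json.dumps({"prompt": wf}).encode("utf-8")

def queue_folder(workflow: dict, folder: str) -> int:
    """Queue one head-swap job per image file found in `folder`."""
    count = 0
    for img in sorted(Path(folder).iterdir()):
        if img.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        req = urllib.request.Request(
            COMFY_URL,
            data=build_payload(workflow, img.name),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # each call adds one job to the queue
        count += 1
    return count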

    daversJan 12, 2026
    CivitAI

    Your lora works amazingly, especially with your workflow!
    Quick question, I'm using your Head swap V3 and I'm having trouble keeping the facial expression of the body reference image, do you know what could help? Also for the cereal eating girl image, did you use Qwen Edit later on to keep the cereal part?

    NRDX
    Author
    Jan 12, 2026

    Thank you, all my examples only used Lora once, there weren't two passes through the model, meaning the cereal was kept on the first use. As for the facial expression, I tried to improve that in v5, but it's still not perfect.

    WoBJan 12, 2026
    CivitAI

    I don't know what my issue is, but it's not even close for me. Head swap, face swap, doesn't matter. Body image is a high quality professional photo while face photo is a decent quality selfie. Tried upping the lora strength to change. I don't know.

    NRDX
    Author
    Jan 12, 2026

    Can you share an example? Have you tried changing the angle of the face photo, or using a photo taken as close as possible? If the body photo is too far from the camera, you'll need to crop the face area and apply the head swap on that crop, because this model isn't good at editing things that are too far away.

    NRDX
    Author
    Jan 12, 2026

    Even in the v5 workflow, there are 2 nodes to do this; you need to create a mask and remove the bypass from these nodes for it to work.

    ✂️ Inpaint Crop (Improved) and ✂️ Inpaint Stitch (Improved)
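    The crop-then-swap idea boils down to expanding the detected face box with some surrounding context before handing that region to the model, then stitching the result back. A minimal sketch of the box math only; the pad factor is an arbitrary illustration, not the node's actual default:

```python
def padded_crop_box(face_box, img_w, img_h, pad=0.5):
    """Expand a face bounding box (x, y, w, h) by `pad` * its size on
    each side, clamped to the image bounds, so the swap model sees the
    head plus some context instead of a tiny distant face."""
    x, y, w, h = face_box
    px, py = int(w * pad), int(h * pad)
    x0, y0 = max(0, x - px), max(0, y - py)
    x1, y1 = min(img_w, x + w + px), min(img_h, y + h + py)
    return x0, y0, x1, y1
```

The crop is then edited at a comfortable resolution and pasted back at (x0, y0), which is roughly what a crop/stitch node pair automates.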

    NRDX
    Author
    Jan 12, 2026

    Another important detail: avoid using highly quantized models because the more quantized, the worse the quality will be. Use at least a Q6; what I recommend is the FP8-mixed or Q8 if it's GGUF.

    kishenmaan182004187Jan 13, 2026· 1 reaction

    @NRDX Could I get the v5 workflow? Also, is there a tutorial on how to use this properly?

    brilliant845Jan 12, 2026· 2 reactions
    CivitAI

    Props to you, v5 is a big step up from v4. If the face in the reference body picture isn’t too far off, the results are incredibly good. I’ve found that adding something like “Rotate the head to a full frontal position facing the camera” to the prompt drastically improves similarity to the reference face. Keep up the great work!

    NRDX
    Author
    Jan 12, 2026· 1 reaction

    Thanks, let's keep going as long as we can haha. It's great that you mentioned that; someone else said something similar, and I'll test it out as soon as possible.

    brilliant845Jan 12, 2026· 1 reaction

    @NRDX I just read another comment from you here about the crop and stitch feature and decided to give it a try. I think you should enable it by default, it’s insane how much better the results are.

    Simply activating the two nodes didn’t work for me though. I had to disconnect all links from the resized body reference image and reconnect them to the inpaint crop instead. Is that normal, or did I do something wrong? I’m still kind of new to ComfyUI, so I’m not sure if I overlooked something.

    The results are impressive nonetheless. I also sent you a small tip via Buy Me a Coffee to support you. Thanks again for the great lora and workflow!

    NRDX
    Author
    Jan 12, 2026

    @brilliant845 Look, to be honest, it should work. I might have forgotten to update the workflow, but anyway, if you managed to get it working, that's what matters, and thank you very much for the tip, it'll be converted into GPU hours hahaha.

    valentinkognito365Jan 13, 2026· 1 reaction
    CivitAI

    It's really worth training a character LoRA to go along with this LoRA. The quality goes from great to spectacular/insane levels.

    gilak7758Jan 14, 2026

    @valentinkognito365 what's your preferred method to train a lora? Last time I tried, it was a bit too complicated for me with the method I looked into.

    NRDX
    Author
    Jan 16, 2026

    You can use LoRa head swapping to create a small dataset and train your LoRa, and then use both together.

    valentinkognito365Jan 16, 2026· 1 reaction

    @gilak7758 What I do is first make the best pics I can using the ChatGPT image editor. This means cropping, upscaling, removing objects, getting a grey background to get rid of unnecessary data, getting the skin right (I've done some tests, and you can get rid of the plastic skin just by asking GPT to give a "realistic skin texture" to your reference picture, simple as that), generating angles and expressions (this really helps Qwen a lot), etc.

    I currently make datasets of 30 - 40 super high quality, coherent pictures with front/three quarter/side/looking over shoulder view + open/closed/wide open mouth, etc.
    I used to do a lot more but it takes too much time to prepare and to train. Plus in a way, restraining yourself to 30 - 40 is a good thing so you are merciless with your pics.

    NB: THIS IS CRUCIAL: use ChatGPT or Nano Banana Pro for your pictures, even if you don't train a LoRA and use them just to refine your reference pic for the head swap. The quality of the reference pic you use is 50% of the result, even if your character LoRA is great. Trust me, I've done it thousands of times: unless you have access to professional high-res photography, there isn't a single case where you can't improve your pics using either GPT or NBP.

    Then for the training part I would advise you to do the same as me :
    - Create a runpod account
    - Watch a video on Ostris youtube channel about how to set up your training for Qwen Image (doesn't matter if it's not Qwen Image 2512, it's the same stuff)
    - Launch a pod with a GPU with 48 gb VRAM such as A40 (0.42 $ / hour)
    - Select Ostris AI Toolkit as the pod template
    - Import your dataset, launch the training on Qwen Image 2512 (takes me 7h - 9h)

    NB 2: I don't even caption my pictures for these LoRAs, because I'll use them only for the face swap anyway, so it doesn't change much. I trained ONE using captions, and when prompting an expression to go along with the face swapping, it didn't work... so...

    Now, this might sound daunting? I swear, except for the RunPod subscription part and learning how to use RunPod in general (there are probably videos for that), it takes literally 10-20 minutes to get set up the first time (including watching the Ostris video), without installing anything or typing a single command line. And the following times it's a breeze. AI Toolkit is really simple to use. And RunPod brings you massive GPUs for cheap, so a LoRA costs me like $3, and I could never do it on my own hardware (training on Qwen Image needs a lot of VRAM and RAM).

    You do all that correctly, you won't believe the results

    Bibou19Jan 13, 2026
    CivitAI

    I have a problem : "'NoneType' object is not subscriptable"

    tss1232333Jan 14, 2026· 2 reactions
    CivitAI

    I tried running it with an RTX 4090 and a 5090; I always run out of memory. What am I doing wrong?

    Using V5.

    SamplerCustomAdvanced

    Allocation on device This error means you ran out of memory on your GPU. TIPS: If the workflow worked before you might have accidentally set the batch_size to a large number.

    NRDX
    Author
    Jan 19, 2026

    Were you able to solve it?

    DenbuFeb 17, 2026

    Same here with a 4090. A temporary fix is restarting ComfyUI, but after 3 or 4 generations it gets stuck again.

    studiokevinabanto112Jan 15, 2026
    CivitAI

    Character swap soon? I'd pay for this.

    NRDX
    Author
    Jan 16, 2026

    I need to create a good dataset for this, but it could be an option soon.

    nicccJan 16, 2026
    CivitAI

    It makes the eyes blue, something is off with training data

    NRDX
    Author
    Jan 16, 2026

    It could just be a configuration issue or a problem with your input. Try different inputs and settings. For me, blue eyes don't appear like that; just look at my examples.

    nicccJan 19, 2026

    @NRDX When I use a blue-eyed girl in the input image and a black-eyed girl for the new face, I get the new face with blue eyes, which is a mix of the 2 girls. Your examples don't show swapping blue eyes to black eyes.

    ConejoquehaceJan 17, 2026
    CivitAI

    Hi, every time I try to install the missing node [Switch Any Crystools], the installation never happens; that node is always missing. What seems to be the problem?

    NRDX
    Author
    Jan 19, 2026

    I've never had this problem; if that's the case, try replacing it with another node that does the same thing.

    ConejoquehaceJan 20, 2026

    Sorry, excuse me, I don't know what that meant, but I tried renaming it... I tried running the flow with all the text boxes filled in, but something is still missing: "Cannot execute because a node is missing the class_type property.: Node ID '#93'"

    Ses_AIJan 18, 2026
    CivitAI

    What is this version for, particularly?

    NRDX
    Author
    Jan 18, 2026· 1 reaction

    This version is for Flux-2 Klein 4b, but I'm already training for Flux-2 Klein 9b.

    sarcastictofuJan 18, 2026· 4 reactions
    CivitAI

    Great job, man! It works extremely well even with the Q8 GGUF of Flux.2 Klein 4B! And I'm surprised how good a model Flux.2 Klein 4B is; it's way faster than Qwen or the older Flux.1... I will be sharing my workflow on CivitAI later today and linking back to this LoRA in its "Suggested Resources" section.

    hahsdakiiasdaJan 18, 2026

    following!

    forfreelsd368Jan 19, 2026

    He-he, I knew you do early access. :)

    sarcastictofuJan 19, 2026

    @hahsdakiiasda hey I just uploaded my workflow on CivitAI, link is on the other comment here.

    noyartJan 18, 2026· 1 reaction
    CivitAI

    Thank you yet again for another awesome lora! I'm wondering if it's possible to get a workflow for the new Klein 4b lora that works well with your lora. The one for Qwen worked wonders!

    LORA
    Qwen
    by NRDX

    Details

    Downloads
    8,304
    Platform
    CivitAI
    Platform Status
    Available
    Created
    1/2/2026
    Updated
    4/30/2026
    Deleted
    -
    Trigger Words:
    head_swap: start with Picture 1 as the base image, keeping its lighting, environment, and background. remove the head from Picture 1 completely and replace it with the head from Picture 2, strictly preserving the hair, eye color, nose structure of Picture 2. copy the direction of the eye, head rotation, micro expressions from Picture 1, high quality, sharp details, 4k

    Files