
    The BFS (Best Face Swap) LoRA series was developed for Qwen Image Edit 2509 and specializes in high-fidelity face and head replacement with natural tone blending and consistent lighting.

    Each version builds upon the previous one:

    • 🧠 Focus Faces: precise face swaps, keeping the original head shape and hair while transferring facial identity and expression.

    • 🧩 Focus Head: stronger head swaps, replacing the full head (including hair and pose orientation).

    • The two versions complement each other: one focuses on face swapping, the other on head swapping.

    Feel free to share your creations, as long as they do not involve public figures or individuals who have not given consent. By sharing you earn Buzz, and your posts directly help me improve future versions by identifying and correcting potential issues.

    Important note: if you are going to use Qwen Image Edit 2511, update your ComfyUI before anything else; without the update you may get completely distorted or degraded images.

    If this model was helpful to you in any way, please consider helping me continue creating more models for the price of a coffee.

    Workflows:
    Head/Face Swap Workflow - Qwen-Image-Edit-2509 | Civitai

    My Custom Lightning LoRA:

    Custom Lightning - Qwen Image Edit - 2511 | Qwen LoRA | Civitai

    Alissonerdx/CustomLightning · Hugging Face

    Test V3 here:

    BFS Best Face Swap - a Hugging Face Space by Alissonerdx

    Face Swap Video Tests (V1):
    Face Swap - Qwen Image Edit 2509 (English)

    Another important thing is to update ComfyUI. Many people are getting terrible results simply because they haven't updated it. The 2511 model's architecture has a few more layers, which is why ComfyUI needs to be updated to load it correctly.

    About Flux 2:

    I've done my best so far, but the results aren't as good as with Qwen. The base Flux 2 model can already handle head swapping, though with some difficulty. The goal of this LoRA was to try to improve that a bit, but I haven't achieved very good results. It might be a configuration issue, so here's this beta version for you to test.

    Try with CFG: 8.0

    PERSONAL NOTES:

    The swap quality will always depend heavily on the quality of your input images. Larger, clean images with little noise or compression artifacts generally produce the best results. Keep in mind that the model always follows the quality of the body image, since it becomes the final rendered frame—so even if the face source is high-quality, a low-resolution or noisy body image will limit the outcome.

    Most of the images I generate are created without the LightX2V Lightning LoRA, since I noticed that enabling it tends to make skin look more plastic and reddish, and finding the right balance requires extra tuning that I didn't focus on. If anyone has discovered good configurations, feel free to share them in the comments of this template.

    In short, using LightX2V makes the model less versatile because it operates with a fixed CFG value of 1.0. So before assuming it "didn't work," I recommend first testing the workflow I published without LightX2V to compare the results.

    If you’re getting results with too much contrast, overly strong colors, or plastic-like textures while using LightX2V’s lightning models, try reducing the number of inference steps. For example, if you’re using the Qwen Image Edit 2509 Lightning (8 steps) model, try running it with 4 steps instead. The excessive contrast often comes from running too many steps while CFG remains fixed at 1.0.

    If you encounter similar issues without the Lightning LoRA, try lowering the steps as well (e.g., from 20 down to around 16 or fewer) and reduce CFG to values like 1.2 or 1.5, which can help produce smoother, more natural results.
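
    The step and CFG advice above can be condensed into one place. The helper below is only a convenience sketch encoding the numbers quoted in this section; the function name and the "halve the steps when artifacts appear" rule of thumb are my own framing, not part of any released tool.

```python
def suggest_settings(using_lightning: bool, artifacts: bool = False):
    """Return step/CFG suggestions based on the advice in this section.

    With a Lightning LoRA, CFG is effectively fixed at 1.0 and excess steps
    cause harsh contrast, so halve the steps when artifacts appear.
    Without it, use more steps and a low, adjustable CFG.
    """
    if using_lightning:
        return {"steps": 4 if artifacts else 8, "cfg": 1.0}
    return {"steps": 16 if artifacts else 20, "cfg": 1.5 if artifacts else 2.5}
```

    For example, `suggest_settings(using_lightning=True, artifacts=True)` reproduces the "drop from 8 steps to 4" fix described above.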

    Another important detail: in images where the body is positioned farther from the camera, the face region becomes smaller, which can reduce swap accuracy and overall quality. This happens because the model has less pixel information to work with in that small facial area. To handle these cases, you can use my older workflow, which automatically crops the face region from the body image and performs an inpainting-like process to improve results in distant or small-face compositions.
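
    The older workflow mentioned above automates this crop-and-inpaint process inside ComfyUI. As a standalone illustration of the same crop → enlarge → swap → paste-back idea, here is a small Pillow sketch; the `face_box` coordinates and the `run_face_swap` callable are placeholders for whatever detector and swap model you actually use, not parts of the published workflow.

```python
from PIL import Image

def swap_small_face(body_path, face_box, run_face_swap, upscale=1024):
    """Illustrate the crop -> enlarge -> swap -> paste-back idea for small faces.

    `face_box` is a (left, top, right, bottom) box around the face in the
    body image; `run_face_swap` stands in for your actual model call.
    """
    body = Image.open(body_path).convert("RGB")
    crop = body.crop(face_box)
    # Enlarge the tiny face region so the model has enough pixels to work with.
    w, h = crop.size
    scale = upscale / max(w, h)
    enlarged = crop.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    swapped = run_face_swap(enlarged)  # placeholder: your swap model goes here
    # Shrink back to the original crop size and paste into the body image.
    restored = swapped.resize((w, h), Image.LANCZOS)
    body.paste(restored, face_box[:2])
    return body
```

    The same idea applies whether the "swap" step is this LoRA, an inpainting pass, or any other face model: work on an enlarged crop, then restore it into the full frame.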

    Finally, if you notice loss of similarity between faces or poses—especially when the reference and target images differ significantly in aesthetics or angles—try increasing the strength of your head swap LoRA slightly (for instance, to 1.2 or 1.3) to restore consistency.


    āš™ļø BFS — ā€œFocus Facesā€

    Trained on 240 image triplets (face, body, and result),
    with a LoRA rank of 16 → later increased to 32,
    and gradient accumulation = 2, running for 5500 steps on an NVIDIA L40S GPU.

    This version produces stable and detailed face swaps, preserving expression, lighting, and gaze direction while maintaining the body’s natural look.


    🔧 Model Notes

    • You don't need to use my workflow for this LoRA to work. If you're having problems with it, use your own: it's just the simple Qwen Image Edit + LoRA workflow with the inputs in the right order (face as image 1, body as image 2).

    • Quantization: not guaranteed to work below FP8 (avoid GGUF Q4).

    • Face mask: optional — remove if MediaPipe or Planar Overlay cause issues.

    • Pose conditioning: use MediaPipe Face Mesh or DWPose if you need more alignment control.

    • Lightning LoRA: may produce plastic-like skin, especially when mixed with other Qwen-based LoRAs.


    Samplers:

    • er_sde + beta57 / kl_optimal / ddim_uniform (best results)

    • ddim + ddim_uniform (sometimes most realistic)

    • res_2s + beta57

    Don't get attached to one setting; if it doesn't work well with one, switch to another.

    Precision:

    • 🧠 Best: FP16

    • ⚙️ Recommended: GGUF Q8 or FP8

    • ⚠️ Below FP8: noticeable degradation

    Inference Tips:

    • With the Qwen Image Edit 2509 Lightning LoRA → use 4 / 8 steps for fast generation.

    • Without it → use 12–20 steps, CFG 1.0–2.5 for realism.


    🧬 BFS — "Focus Head"

    The ā€œFocus Headā€ version was trained as a continuation of Focus Face, extending the dataset and shifting focus toward full head swaps.

    It was trained on an NVIDIA RTX 6000 PRO at rank 32 for 12,000 steps, using 628 image sets (face, body, target, and sometimes pose maps generated via MediaPipe).

    🔹 Training Phases

    1. Standard Face Swap – same as Focus Faces, focusing on facial identity.

    2. Pose-Conditioned Face Swap – added pose maps to align gaze and head angle.

    3. Full Head Swap – replaced the entire head (including hair) for stronger identity control.

    After ~2000 steps, the focus moved toward head swap refinement.
    At ~4000 steps, the dataset was narrowed to perfect skin-tone matches, and by the end of training,
    the dataset evolved from 628 → 138 → 76 high-quality samples for final fine-tuning.

    āš ļø Note:
    While Focus Face can still perform standard face swaps, it’s more naturally inclined toward full head swaps due to its data balance.
    This was intentional in part, but also a side-effect of dataset distribution and mixed conditioning.


    āš ļø Important Notice

    Do not share results involving real people, celebrities, or public figures.
    Civitai’s moderation may disable posts that violate likeness or consent rules.
    This model is intended only for artistic and fictional characters, educational use, and AI experimentation.

    I take no responsibility for any misuse of this model. Please use it responsibly and respect all likeness rights.

    Description

    V3 introduces a new persistent-template conditioning workflow.

    Unlike previous versions, which relied primarily on the identity being established from Frame 0 only, V3 uses a custom guide-video construction step that keeps the new face visible throughout the entire guide sequence.

    This results in a much stronger and more persistent identity signal during inference.
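
    The exact guide-video construction step ships with the published workflow, but the idea itself is simple to illustrate. The Pillow sketch below composites the identity reference into a reserved corner of every guide frame, so the identity stays visible across the whole sequence instead of only frame 0; the reserved-corner layout and sizing are illustrative assumptions, not the released node's actual behavior.

```python
from PIL import Image

def build_guide_frames(frames, identity, strip_frac=0.25):
    """Sketch of persistent-template conditioning: paste the identity
    reference into a reserved region of every guide frame, keeping the
    identity signal visible for the entire sequence."""
    out = []
    for frame in frames:
        f = frame.copy()
        w, h = f.size
        side = int(min(w, h) * strip_frac)  # size of the reserved region
        ref = identity.resize((side, side), Image.LANCZOS)
        f.paste(ref, (w - side, 0))  # top-right corner as the reserved region
        out.append(f)
    return out
```

    Contrast this with the first-frame method of earlier versions, where the identity appears once and must survive the whole denoising trajectory on its own.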

    You can choose between the two models I released and see which one you think is better.

    YouTube video (40 minutes, in Portuguese):

    LTX-2.3 Head Swap (IC LoRA)

    Models:

    Alissonerdx/BFS-Best-Face-Swap-Video at main

    Workflow:

    workflows/workflow_ltx2_head_swap_drag_and_drop_v3.0.json Ā· Alissonerdx/BFS-Best-Face-Swap-Video at main

    šŸ™ Acknowledgements

    Special thanks to facy.ai for sponsoring the GPU used to train this model.

    If you want to check their platform, you can use my referral link:

    https://facy.ai/a/headswap


    Because the identity reference stays visible during the full guide sequence, V3 gives the model a much more stable conditioning signal across time.

    In practice, this can improve:

    • Identity consistency

    • Temporal stability

    • Resistance to identity drift

    • Facial motion continuity

    • Lip sync behavior

    • Expressive facial movement preservation

    This version is especially useful for shots where the face remains visible for longer periods, or where dialogue, mouth movement, and facial acting matter more.

    V3 is not just a refinement of the first-frame method. It changes the conditioning logic by giving the model access to a persistent identity template across the entire inference sequence.

    FAQ

    Comments (116)

    skyrimer3d · Mar 19, 2026 · 1 reaction

    Checked LTX2 version, but the workflow has two missing nodes, ollama video describer and reservedregionframecomposer. Manager only finds the issue with comfyui-ollama-describer node, but installing does nothing, pip installed requirements.txt but still shows up. No idea what to do with the other. Any help?

    NRDX
    Author
    Mar 19, 2026

    Yes, I'm trying to resolve this because the ComfyUI registry isn't updating automatically. The only way to get it working for now is by manually cloning the nodes.

    macgyverjunk935 · Mar 19, 2026

    you also want to make sure you delete the node downloaded from the manager in the custom_nodes folder before you try to git clone it, then rerun the pip install and you might need to run the install.bat/py in the folder

    skyrimer3d · Mar 19, 2026

    @NRDX ok, I'll do that, thanks

    skyrimer3d · Mar 19, 2026

    @macgyverjunk935 I'll do that first, thanks for the suggestion

    honryindian · Mar 19, 2026

    Which repo to clone to get the ReservedRegionFrameComposer node?

    NRDX
    Author
    Mar 19, 2026
    dillion1920 · Mar 21, 2026

    @honryindian Look for BFS in custom nodes. It can't find it with the node name; maybe it's too long.

    AIforIR · Mar 19, 2026 · 1 reaction

    I apologize if you've answered this already, but where did you get your ltx 2.3 models and vae at?

    AIforIR · Mar 19, 2026

    @macgyverjunk935 Thank you so much bro

    macgyverjunk935 · Mar 19, 2026 · 1 reaction

    @AIforIR no problem

    mixailckopp978 · Mar 19, 2026

    Thanks for this comment, I found the workflow I needed.

    azra1l · Mar 20, 2026

    @mixailckopp978 Where have you found that? For LTX 2.3?

    AIforIR · Mar 23, 2026

    @macgyverjunk935 Hate to be a bother, but I keep getting a "header too large" error on my dual CLIP loader. Where did you get your Gemma model? Mine must not be working.

    NRDX
    Author
    Mar 23, 2026

    @AIforIR Are you using the workflow I provided? If so, it uses Kijai loaders. If you downloaded the Kijai templates and are using a native workflow, I'm not sure it will work.

    AIforIR · Mar 24, 2026

    @NRDX Yeah, I'm using the included workflow with all the same nodes. For some reason it just doesn't like the Gemma model :/

    NRDX
    Author
    Mar 24, 2026 · 1 reaction

    @AIforIR Then export your workflow, open an issue on Hugging Face, and share it there so I can see what the problem might be.

    rek409 · Mar 19, 2026

    I've tried using the linked LTX2 workflow, but I can't make it work. Is there an updated one for 2.3, or which one do I use?

    macgyverjunk935 · Mar 20, 2026 · 1 reaction

    v3 is the LTX 2.3 version. Make sure you use git clone for the Ollama node and for https://github.com/alisson-anjos/ComfyUI-BFSNodes

    and manually run "pip install -r requirements.txt".

    If you already installed those nodes through the Manager and they're still giving you problems, delete the folders for those nodes from the custom_nodes folder and then git clone them.


    The author plans on updating them so they work in the Manager, but for now they might give you problems. If you have problems with other nodes, do the same.

    _crz_ · Mar 20, 2026

    @macgyverjunk935 There's no requirements.txt in that repo.

    rek409 · Mar 20, 2026

    @macgyverjunk935 I installed everything correctly and it should be working. But it just ends after the Ollama node?

    NRDX
    Author
    Mar 20, 2026

    @rek409 Do you have the RES4LYF custom node installed, with access to bong_tangent? Also, have you installed Ollama on your machine? Are you getting an error with the custom sampler?

    rek409Mar 21, 2026

    @NRDX I have RES4LYF installed. bong_tangent is picked in the scheduler, but I don't think execution even reaches that node. Ollama is running and I don't get any errors. It says "prompt executed" and shows the control video preview as the final product of my execution.

    NRDX
    Author
    Mar 21, 2026

    @rek409 That's strange; instead of executing the workflow directly (by pressing play), try grabbing one of the nodes at the end of the workflow and queuing it to see if any visible errors appear.

    rek409Mar 21, 2026

    @NRDX Idk what happened, but after queuing everything manually it just works now somehow.

    _crz_ · Mar 20, 2026 · 1 reaction

    I feel like a rat in a maze trying to navigate your resources, find the workflows, and simply run a LoRA. It's cool that you do it and everything, but holy moley, what a mess.

    NRDX
    Author
    Mar 20, 2026

    You only need to read the minimum: the workflow is on Hugging Face, and the model is also on Hugging Face or can be downloaded here. It's simple.

    The_Last_Goblin_King · Mar 20, 2026

    Sorry to ask, but is this a video face-swap LoRA, or just a face swap that you can then i2v? If it's a video face swap, where is the workflow? I looked in the repo he mentioned, but I didn't see anything. Did I miss it?

    kronos1959777 · Mar 20, 2026

    the 2.3 version gives errors.

    LTX2 LoRA preprocessing dropped 184 unmatched keys for model 'ltx2_22B': diffusion_model.audio_embeddings_connector.transformer_1d_blocks.0.attn1.k_norm.diff, diffusion_model.audio_embeddings_connector.transformer_1d_blocks.0.attn1.q_norm.diff,

    NRDX
    Author
    Mar 20, 2026

    Are you sure you're using LTX 2.3 and not the base version of the model? You can't use a LoRA made for 2.3 on base 2.0.

    kronos1959777 · Mar 21, 2026

    @NRDX Yes, but actually maybe it was because I tried it without a control video. I was just trying to see if I could use it with i2v to keep better face consistency, like I could with image editing and your other LoRA for that.

    wxcvbnw · Mar 20, 2026

    First, thanks for your work.
    Second, I think the workflow here on this page is incomplete? The upscale model isn't doing anything? Also my output is way more low-res than your results. You hint at "generate low res" with a block, but there is no high-res pass? Where can we find your actual latest workflow?

    NRDX
    Author
    Mar 20, 2026

    Yes, the upscale model is not used, because it's not simple to use in this workflow. I think the results can be worse because the model will change a lot of pixels, but if you see a way, let me know. The model was trained on samples with a 1024 base resolution; if you want more, it's better to upscale afterwards.

    lades666528 · Mar 20, 2026 · 1 reaction

    I only have one thing to say about your work... amazing!

    I had a little trouble installing your custom nodes, but once that was done, your LTX V3 workflow works perfectly. I only had time for a quick test, but the result is really good, even though I didn't change a thing. It's not perfect but well done!

    NRDX
    Author
    Mar 20, 2026

    Thanks! Yes, it's not perfect, but it can be improved.

    q13958958 · Mar 20, 2026

    ltx-2.3-22b-dev_transformer_only_bf16.safetensors +ltx-2.3-22b-distilled-lora-384.safetensors. Is this right?

    q13958958 · Mar 20, 2026

    ltx2.3-head_swap_v3_rank_64.safetensors , ltx2.3-head_swap_v3_rank_adaptive_fro_098.safetensors , both should be used , right?

    NRDX
    Author
    Mar 20, 2026

    @q13958958 test both

    NRDX
    Author
    Mar 20, 2026

    You can use fp8 if you want, I show the models in the video

    q13958958 · Mar 20, 2026

    @NRDX Another question: in your V3 workflow, SamplerCustomAdvanced is very slow on my Mac. Why? My other LTX 2.3 workflows used the 8 custom sigmas node for the sampler, and the speed is OK.

    NRDX
    Author
    Mar 20, 2026

    @q13958958 I think it's because of the IC LoRA.

    Jankolonko · Mar 20, 2026 · 1 reaction

    Is it possible to run this using LTX 2.3 in Wan2GP (Pinokio)? Anyone have an idea how to set everything up?

    lades666528 · Mar 22, 2026

    I've done several tests with wan2gp, but without success so far... For the Klein version, with a photo, it works without any problem, but for video, wan2gp's options are too limited...

    divineblessing · Mar 21, 2026

    Hi, thanks for sharing this.

    I've tested the V3 workflow on both 24fps and 50fps videos. I followed your workflow exactly, only changing the model to fp8+distilled384 (at 0.6 strength). I tried both LoRAs, but the results aren't as good as what you showed, and I'm not sure why.

    At first, I thought it might be a prompt issue, so I manually wrote a prompt using the new format. However, the results were still not great. It seems like the faces get a bit feminized? Either way, I'm not a fan of that look.

    I also tried the V1 method—swapping the face on the first frame, upscaling it, and then running the sampling—but the results were very poor. I haven't tested videos with objects passing in front of the face yet.

    Overall, it seems to me that the V3 version does improve facial dynamics and the success rate for consistent face swapping?

    Right now, I'm using a hybrid approach: mixing the V1, V2, and V3 LoRAs, writing prompts in the new format, but using the V1 sampling method. The facial dynamics are much richer than with the original V1, and I'm seeing a lot less stiffness.

    Thanks again for sharing your work!

    NRDX
    Author
    Mar 21, 2026

    That's strange, all the videos I try work. Of course, it won't work perfectly, etc., but it always works. What resolution are you trying to generate the videos in? Could you share what a bad result would look like? You can post it on HuggingFace if you want, open an issue there and post the video there if you want.

    NRDX
    Author
    Mar 21, 2026

    I still don't know why everyone uses distilled384 instead of what I use hahaha
    Another thing we have to remember is that this model was made for LTX-2.3 and not for 2.0. If you are using V1, then you may be using the wrong base model. Are you sure you are using LTX-2.3? Because V1 and V2 were trained on LTX-2.0.

    divineblessing · Mar 22, 2026

    @NRDX Thanks for the reply. It might have been a translation issue, as English isn't my native language.

    Just to clarify, I'm not new to ComfyUI. I can get V3 to generate videos, but the face-swapping quality isn't as good as V1. The distilled384 model serves a similar purpose to the lightx2v model for Wan 2.2.

    I know the LoRA models were developed for LTX-2.3, but previous versions were also compatible with it. The video resolutions I'm using are 544x960 or 960x544.

    I've tested three camera angles: front-facing, side profile, and lying on the side. So far, the front-facing view gives the best results. I'll test the full LTX-2.3 model as well to see how it performs.

    NRDX
    Author
    Mar 22, 2026 · 2 reactions

    For anyone who wants to test it: RetroGazzaSpurs has discovered a new potential use for this LoRA. I haven't tested it yet to verify its consistency.

    Alissonerdx/BFS-Melhor-Troca-Face-Vídeo · Descoberta importante (loucura)

    NRDX
    Author
    Mar 22, 2026

    Another important note: Ablejones, one of the most collaborative users on Banodoco's Discord server, created a workflow that enables upscaling with high quality. Anyone interested can take a look; the results are very good, but remember, it's a completely customized workflow.

    https://discord.com/channels/1076117621407223829/1461011216578248907/1485133039053836572

    Arrogars · Mar 22, 2026

    I'm using your LoRA in Forge Neo with Flux Klein 9B distilled, and so far I've had promising results. I was wondering, though, if I'm missing something that is preventing me from utilizing this LoRA fully with Forge Neo. For example, I see that unless I specify the pose and emotion very well it tends to drift a lot, but that's easy to correct. Anything else I should pay attention to?

    Kaze111 · Mar 22, 2026 · 1 reaction

    I've got an issue: the node OllamaVideoDescriber is missing even after I installed ComfyUI-Ollama-Describer. I tried both ways, installing through ComfyUI Manager and manually from the GitHub link, but the node is still missing.

    NRDX
    Author
    Mar 22, 2026

    Manually, it's 100% certain to work. I still don't know why the ComfyUI registry hasn't updated my Node version; I'll contact them.

    spaz8 · Mar 23, 2026

    Did you install the app as well as the node?

    Kaze111 · Mar 23, 2026

    @spaz8 yeah, I also updated ComfyUI but it's still missing the node.

    inwathe · Mar 23, 2026

    @Kaze111 I had the same issue until just now. ...I must have accidentally clicked the Workflow Properties panel top right? I hadn't just done an update. Anyway, what I saw there were instructions to run in my env:

    pip install -U --pre comfyui-manager

    ...then relaunch with the --enable-manager arg.

    And indeed I see it pulled comfyui-manager-4.1b8 ... I didn't know that exists. I'm not surprised it does, but I am surprised it's seen that many updates without me being made aware. Out of curiosity I checked: it's not a requirement of the ComfyUI-Manager node pack, and it's not in the app's requirements.txt, but there is a manager-requirements.txt now ... except it requires (exact specifier) 4.1b6, so idk.

    spaz8 · Mar 23, 2026

    Thanks for this. V3 seems to work better out of the box than V2, but there are pros and cons to both. In V2 I had to train an LTX2 character LoRA to keep the character from losing likeness halfway through. In V3 I can now have longer hair etc., but there seems to be more movement in the model: hair disappears when it is occluded by the head or moves around. Anyway, I've just started to play with this. Gonna train an LTX 2.3 character LoRA and see if that helps. I'm trying to do static object orbits.

    spaz8 · Mar 23, 2026

    Does v3 still work best with landscape (1280x720) footage, and the face large in frame?

    gorodork · Mar 23, 2026

    Is it possible to use it in a video that has no audio?

    NRDX
    Author
    Mar 23, 2026

    yes

    gorodork · Mar 23, 2026

    @NRDX When I use a video that doesn't have audio, I get an error saying:

    [out#0/f32le @ 00000220427191c0] Output file does not contain any stream

    NRDX
    Author
    Mar 23, 2026

    @gorodork Then you can use Empty Audio Latent instead of getting the audio from the video.

    og1BFG · Mar 23, 2026 · 3 reactions

    Why not upload a Workflow in addition to the LTX v2.3 Lora?

    og1BFG · Mar 30, 2026

    Thank you @Ponder_Stibbons

    My mistake, I found out right after posting the comment.

    renderguy · Mar 23, 2026 · 1 reaction

    Has anybody tried it with Wan2GP?

    lades666528 · Mar 26, 2026 · 2 reactions

    I tried, but wan2gp isn't persuasive enough with LTX inputs to work with videos... however, it works perfectly for images (Klein).

    Vlonadio · Mar 26, 2026 · 2 reactions

    The LTX version of this lora doesn't work with WanGP currently, I guess you should ask the developer about it

    MikeyOG · Mar 25, 2026

    Hi. Great work.
    Is it possible to use anything else other than Ollama? Thanks.

    NRDX
    Author
    Mar 25, 2026 · 1 reaction

    Yes, you can use whatever you want as long as it generates a prompt in the same format, that's fine.

    MoreColors123 · Mar 25, 2026 · 1 reaction

    So I got the LTX workflow (V3) you posted here to work, though I'm still not sure I got the right LoRA, as they are named differently in your Hugging Face repo and in your workflow.

    It does work, but the likeness is still much too far from the input image, even when using a perfect head swap on the first frame and using that as the input (did that with your splendid Flux2 Klein BFS workflow, btw!).

    What could I do? I run the LTX model distilled transformer only fp3 scaled.safetensors. Is it possible that it really works with the input v3 one?

    NRDX
    Author
    Mar 26, 2026

    I honestly don't know how people are using this because I even made an uncut video showing how it works, and some people can't get it to work. Obviously, it won't work in many cases, and there are several reasons for this, such as resolution, prompts with little description of the face, quantization, and videos with very complex situations. I need to understand what "doesn't work" means; I need examples.

    MoreColors123 · Mar 26, 2026 · 1 reaction

    @NRDX I understand your frustration. Comfy has so many hurdles anyway; I don't know if I can bring up the effort to help bugfix this or spend the time finding out what the culprit is.

    I just hope some day a new workflow of a new version will work. Thanks for your work anyway!

    dkpc69 · Mar 26, 2026

    I got this working (still testing a bit, though). I had to swap the Ollama node out with QwenVL, using Qwen 3 for the image and video describing. Thanks for sharing.

    BocekAdam · Mar 26, 2026

    No matter which workflow and model I try, it only works with the flux model; it fails with qwen models. I'm looking for the reason for the head transfer issue but can't find it. Can you help me? Also, qwen models run slower than flux models.

    Ponder_Stibbons · Mar 26, 2026 · 1 reaction

    Ollama describer might be putting out nothing. Change it to https://github.com/stavsap/comfyui-ollama. That fixed it for me.

    Ponder_Stibbons · Mar 26, 2026 · 1 reaction

    LTX version is definitely working, and damn fast, as expected. I haven't gotten the likeness dialed in yet, thinking about maybe throwing in a codeformer pre-swap to help with the head shape. I've got so many face models already. ReActor isn't dead yet, at least not for me. But this is very cool. Still got tons of sampler permutations to get through. Thanks for continually expanding the scope of your swap WFs. They are all well worth checking out.

    NRDX
    Author
    Mar 26, 2026

    You can do a head swap with my BFS for Flux Klein and try using that both as the first frame of the video and as a reference image of the face.

    Ponder_Stibbons · Mar 27, 2026 · 1 reaction

    @NRDX I settled on an HQ ReActor swap on a good frame to use as my base. That way I can keep using my face models (plus, running an image batch through the composer and Qwen takes forever). Results are really promising. The technique is so bizarre, but it makes sense when you see it work. It can handle some seriously infuriating occlusion problems, as well as missing training data, exactly what ReActor is terrible at. After I've gone through all of the test permutations I'll have some people over; we're going to roast marshmallows over my GPU, maybe have a schvitz. I'm sending you the power bill.

    Ponder_Stibbons · Mar 26, 2026

    The Ollama captioner puts out nothing for me. It took me a while to realize the enhancer was getting no input, so it was just making up random crap. There is something about the output format that I could not fix. If anyone else has this issue, replace it with https://github.com/stavsap/comfyui-ollama. Three nodes: connectivity, options, and generate. Copy the two instruction sets from the old node and you're good.

    mduster303216 · Mar 28, 2026

    I tried V5 Qwen Edit, but I have a problem: my output only gives me the two original pictures, not even swapped. What am I doing wrong?

    NRDX
    Author
    Mar 29, 2026

    It would be easier if you sent a screenshot of how you are submitting your inputs, and of your models.

    mduster303216 · Mar 31, 2026

    I found the problem: it was SageAttention :-) Without it, it works perfectly.

    tupu · Apr 2, 2026

    Let me clarify one thing... if I understand you correctly, you're saying that combining both LoRAs (LTX 2.3 face swap) works better? Thanks for sharing.

    NRDX
    Author
    Apr 2, 2026 · 1 reaction

    Where did I say that?

    tupu · Apr 2, 2026

    @NRDX The 2 versions complement each other, one is focused on face swapping and the other is focused on head swapping.

    tupu · Apr 2, 2026

    @NRDX Anyway, now I know it's my bad understanding... thanks for answering... BTW, I'm not getting results as good as yours? Any advice? Thanks.

    NRDX
    Author
    Apr 2, 2026 · 1 reaction

    @tupu One of the main tips is to avoid using models with extreme quantization and to use high-quality inputs; poor-quality inputs will produce poor-quality results. In images where the face is distant, you will need to crop the face, perform a face swap, and then restore it to the original image.

    mduster303216 · Apr 3, 2026

    When I have two faces in my base image, how do I decide which head should be changed?

    NRDX
    Author
    Apr 3, 2026

    You don't decide, hahaha. It was already difficult to do for one person; for two it would be even more so. It would be easier to use a video editor, cut out the person's part, use that as input, and then restore it.

    jay_santos · Apr 6, 2026

    In a group image, I actually mask the head of the subject that will do the swapping, so only that head or face will change while the others remain the same.

    NRDX
    Author
    Apr 6, 2026

    @jay_santos Can you use this in BFS? A mask + BFS?

    jay_santos · Apr 6, 2026

    @NRDX Yes, I am using your LoRA to head swap some images, because when I generate an image with two men in one photo, they normally end up looking the same, like twins, so I use your LoRA to head swap the other man by masking. I am using it only for images; I have not tried it for video yet.

    NRDX
    Author
    Apr 6, 2026

    @jay_santos Ah, okay, that's for images, I thought it was for video hahaha

    mduster303216 · Apr 8, 2026

    @jay_santos Sometimes it works with the mask, but often it doesn't ☺️

    noyart · Apr 3, 2026 · 3 reactions

    Does anyone have a workflow for LTX 2.3? I'm clueless how exactly you would use the LoRA.

    DetroitArtDude · Apr 3, 2026 · 4 reactions

    I think this might be it. Found it from the same user as the still image workflow link in the description

    https://huggingface.co/Alissonerdx/BFS-Best-Face-Swap-Video/tree/main/workflows

    noyart · Apr 5, 2026 · 1 reaction

    @DetroitArtDude Thank you!!

    ApexArtist1 · Apr 9, 2026
    IckeBins · Apr 25, 2026

    @ApexArtist1 Self-praise stinks. Plus, there's a paywall. Nice try.

    NRDX
    Author
    Apr 26, 2026

    @IckeBins I still don't understand where the head swap is in the video above, haha.

    johnpennywise42 · Apr 6, 2026

    I am able to get the head/face change, but how do I get the curvy body as well? I don't want the skinny body from the image.

    NRDX
    Author
    Apr 6, 2026

    This is a head swap, not a full body swap, so it won't be possible.

    johnpennywise42 · Apr 6, 2026

    @NRDX I understand. Does a full body swap exist?

    NRDX
    Author
    Apr 6, 2026

    @johnpennywise42 I haven't trained for this yet. If you want to do it, you'd need to use WAN and not LTX. With WAN, you can do it using WAN SCAIL or WAN Animate.

    gumpbubba721291 · Apr 7, 2026

    What you're thinking of goes into the realm of VACE, SCAIL, or Animate under WAN. With VACE, the way it works is basically taking your original video, extracting subject-identifying info (i.e. pose frames, blending in an amount of outline/subject depth), and applying it to your . The workflows can get very complicated. I made one at one point (it's a mess) trying for a long, full video replacement, and it was a pain in the ass to make and requires a beast GPU. I've tried Animate in the past; it's much simpler and can determine physics, interestingly enough, but sometimes it ends up making everything jiggle weirdly. Haven't tried SCAIL, but I've heard it's higher quality, although it requires heavy VRAM.
    https://civitai.com/models/1913485/wan22-v2v-vace-one-click-seamless-workflow-loop-preserving-subject?modelVersionId=2169439

    7854518 · Apr 14, 2026

    One trick worked for me. I just added "slim thick alte baddie" at the end.

    DetroitArtDude · Apr 9, 2026 · 2 reactions

    After getting this to work for video using the workflow buried on the huggingface user's page, I'll say it's an awesome and impressively done project, but it's just not accurate enough to be a true face swap. Still, might be useful for some people.

    bekejej6031201 · Apr 10, 2026

    anyone got this working for NSFW?

    banger27 · Apr 16, 2026

    It gives a 404 error when trying to open a link from workflow :(

    WindGone · Apr 27, 2026

    Has anyone encountered hair-like blurring in ltx2.3?

    LORA
    LTXV 2.3
    by NRDX

    Details

    Downloads: 3,532
    Platform: CivitAI
    Platform Status: Available
    Created: 3/19/2026
    Updated: 5/3/2026
    Deleted: -
    Trigger Words: head_swap: FACE: [describe the new face here] ACTION: [describe the action from original video here]