***Flux Klein: Swap Anything (head, face, body, etc.)***
About the native workflow: FIRST OF ALL, UPDATE YOUR COMFYUI!!!
This has been the best node I've tried so far. In my tests, the 3.1 model made much better choices. In place of the person mask, I use the SAM 3.1 text option, where the prompt is entered. You can download the model from here and put it in the checkpoints folder: https://huggingface.co/Comfy-Org/sam3.1/tree/main/checkpoints
____________________________________________________________
About V2: Python 3.12 only. If anyone gets this node running on Python 3.13, please share how with me.
In place of the original SAM3 node, I use a different node that performs the same function: yolain/ComfyUI-Easy-Sam3.
Please upgrade your protobuf and mediapipe for "PersonMask" ( python_embeded\python.exe -m pip install --upgrade mediapipe protobuf google-cloud-videointelligence )
______________________________________________________________
In this workflow, you can easily swap anything you want with high CONSISTENCY* and without node complexity. The chance of facial mixing is ZERO. Although Flux Klein is not as strong as Qwen in terms of character consistency, I believe it performs well in this workflow.
Here, I utilized the advanced SAM3 model. You can easily swap faces, heads, just hair, or a part of an object. Simply type what you want to mask into the SAM3 Text Prompt panel. For example, "woman's head" or "man's hat." Everything you type in the prompt will be masked.
Just upload the relevant images to the corresponding "load image" section. You can use any checkpoint you like.
If you experience tensor issues, first update ComfyUI.
There are two files in the zip: one is for those experiencing tensor issues. Of course, first try the "normal" version; it works better for outfit swapping. If you encounter problems, install the second, "tensor fix" file.
Thanks to @nroonarij129, who found a solution to the tensor error problem.
***1: Unpack the Subgraph "Person Mask".
2: Double left click on the ComfyUI canvas to bring up the Node Search Menu. Search for and add a new node, "LayerUtility: ImageRemoveAlpha".
3: Connect a new spaghetti string from the "img" output of the LayerMask: PersonMask Ultra V2 node to the RGBA_image input of the new node you just added (LayerUtility: ImageRemoveAlpha).
4: Connect the RGB_image output from the new node, LayerUtility: ImageRemoveAlpha, to the Draw Mask on Image node.***
I'm new to this, and this is my first workflow. I hope I haven't done anything wrong.
☕ Buy Me a Coffee:
🔗 https://buymeacoffee.com/ugurdoyduk
* HEAD OR FACE SWAP
Prompt SAM3 Text 2: "The woman's head"
Prompt SAM3 Text 1: "The woman's face"
Leave CLIP Text Encode blank. No matter what prompt I wrote, it caused problems with face and body colours and produced disproportionate heads.
* OUTFIT CLOTHING
(When changing outfits, image1 is where your model goes, while image2 is where the reference outfit goes.)
Prompt Sam3 text2; "outfit"
Prompt Sam3 Text1; "clothe"
"Style the woman in image1, with every article of clothing in image2"
* FOOD, ITEMS, THINGS SWAP
(Bypass ToolYoloCropper)
Specify what will be exchanged in Sam3 text2 and text1.
Replace the masked area with the "[your object, for example chocolate]" in image2. Keep everything else the same.
Ensure the replaced object fits perfectly into the area in a balanced manner. Make sure it complies with the laws of physics.
Description
I removed unnecessary nodes and added NSFW prompts.
Comments (71)
First, thank you for sharing. Second, a question:
Why are only "nightly" installations available?
One of the node packs (ComfyUI-RvTools_v2) requires lowering the security threshold. As I understand it, "nightly" versions mean the code has not been checked, leaving a risk of exposing your system to malicious code.
EDIT: For other people with this question, direct your fave AI to the Repo for the node pack and ask it to search for malicious code. I did that and there was nothing, so I installed the workflow.
PROBLEM: Regardless of picture type, I get this error message "LayerMask: PersonMaskUltra V2
operands could not be broadcast together with shapes (1288,1024,1,4) (1288,1024,4) (1288,1024,4)"
I tried asking AI for help. It looks like the node is receiving one value too many: 4 channels when it expects 3. I unpacked the subgraph and changed to "PyMatting"; that did not help. I changed "Device" from CPU to GPU; that did not help either. The input pics may need to be converted somehow before feeding them to this node.
Any ideas?
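The broadcast error above can be reproduced and resolved in a few lines. A minimal NumPy sketch (illustrative only; the actual node does this internally): an image carrying a stray singleton axis plus an alpha channel, shape (H, W, 1, 4), cannot broadcast against an (H, W, 4) array, but squeezing the extra axis and dropping the alpha channel makes the shapes line up.

```python
import numpy as np

# Shapes from the error message, scaled down: (H, W, 1, 4) vs (H, W, 4).
rgba = np.zeros((6, 8, 1, 4))     # mask output with a stray axis and an alpha channel
rgb = np.zeros((6, 8, 4))

try:
    rgba + rgb                    # fails exactly like the node does
except ValueError as err:
    print("broadcast error:", err)

# Fix: squeeze the singleton axis so the operands line up...
fixed = np.squeeze(rgba, axis=2)  # (6, 8, 4)
# ...and drop the alpha channel, which is what "LayerUtility: ImageRemoveAlpha" does.
rgb_only = fixed[..., :3]         # (6, 8, 3)
print(fixed.shape, rgb_only.shape)
```

This is why inserting an ImageRemoveAlpha node between the mask and the downstream RGB-only nodes makes the error go away.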
Thank you for the warning. I need to give up my nightly obsession; I have a habit of always wanting the very latest. A few people have also mentioned the issue you're experiencing in the Qwen version. It seems to happen with some photos. When I uploaded my JSON file to Gemini, it mentioned an RGB issue, so I placed an RGB node before the image output of the node I call PersonMask and uploaded the new version. Please download it again and let me know whether the result is positive or negative, because I never experienced that problem myself.
@ugurdoyduk341 Thanks so much for being so responsive! It sounds like the added RGB node might fix the issue, fingers crossed and excited to try the updated version. Will check and update you.
@nroonarij129 I have the same workflow in "Qwen" too. Someone there solved the problem with the ComfyUI update.
@ugurdoyduk341 I downloaded your updated workflow and updated Comfy UI. Unfortunately, I still get the same error message. When I asked Grok about it, I was given instructions to modify the code in the node. I will try this and let you know how it goes.
It persists, and it appears to be a problem with RGB versus RGBA (where A stands for alpha). The output mask image is RGBA, but the KJ node wants RGB. It seems like your node should solve this, so I am confused. I changed the code in one of the nodes, which did solve the problem in that node, but it appears to have pushed the error further downstream to the KJ node instead...
Found a solution that works for me:
1: Unpack the Subgraph "Person Mask".
2: Double left click on the Comfy UI canvas to bring up the Node Search Menu. Search for and add a new node, "LayerUtility: ImageRemoveAlpha".
3: Connect a new spaghetti string from the "img" output of the LayerMask: PersonMask Ultra V2 node, to the RGBA_image input of the new node you just added (LayerUtility: ImageRemoveAlpha).
4: Connect the RGB_image output from the new node, LayerUtility: ImageRemoveAlpha, to the Draw Mask on Image node.
Now the workflow works here. Thanks for your workflow and assistance! Hope my description can help somebody else down the line.
@nroonarij129 Super, thanks! I'll summarize (because I'll update it): Image To RGB → ImageRemoveAlpha → LayerMask: PersonMaskUltra V2 (Advance), and that's it. Everything else stays the same. Am I right? Hmm, by the way, a small problem may arise here: the background is white from now on. In clothing swaps, could the target person accidentally be dressed in white?
@ugurdoyduk341 I have not tried clothing swaps so far, only face and hair - but you could be right. I will check. One thing: VitMatte is set as default and also appears to conflict with something in my setup. Changing to PyMatting appears to solve it.
@ugurdoyduk341 It does add white trimming to the areas masked white. When I tried to swap a leather jacket with some hair masked over it, the model added white fur trimming to the leather jacket collar. So obviously my "solution" is less than perfect. If somebody only wants to swap face and hair, it works, but not for an optimal clothes swap. EDIT: If no part of the clothes is masked out, the clothes swap works as intended.
@nroonarij129 I tried a few things and the outfit swap seems to be working fine for now. I updated the file. Thanks again.
@nroonarij129 I've just read your last message :) In that case, I'll upload two files: one "for those experiencing tensor issues," the other a normal file.
With PyMatting, you get white parts of the mask. With VitMatte, it seems to work...
badass
CLIPLoader
'nvfp4'
Error. Why? I have a 5080 and Python 3.13.
Hey, I got an error:
# ComfyUI Error Report
## Error Details
- Node ID: 123
- Node Type: CLIPLoader
- Exception Type: RuntimeError
- Exception Message: Error(s) in loading state_dict for Llama2:
size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([151936, 4096]) from checkpoint, the shape in current model is torch.Size([128256, 4096]).
What models exactly did you use?
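The size mismatch above is a vocabulary-size clue: 151936 rows is the Qwen2-family vocabulary, while 128256 is the Llama 3 vocabulary (despite the class being named Llama2 in the error), so the loaded file is almost certainly a text encoder from the wrong model family. A hedged sketch; the `KNOWN_VOCABS` table and `diagnose` helper are illustrative, not part of ComfyUI:

```python
# Hypothetical helper: map an embedding-table row count to a known model family.
# Well-known vocab sizes: Qwen2 uses 151936 rows, Llama 3 uses 128256, Llama 2 uses 32000.
KNOWN_VOCABS = {
    151936: "Qwen2 family",
    128256: "Llama 3 family",
    32000: "Llama 2 family",
}

def diagnose(checkpoint_rows: int, expected_rows: int) -> str:
    """Explain an embed_tokens size mismatch in terms of model families."""
    if checkpoint_rows == expected_rows:
        return "shapes match"
    got = KNOWN_VOCABS.get(checkpoint_rows, "unknown")
    want = KNOWN_VOCABS.get(expected_rows, "unknown")
    return f"checkpoint looks like {got}, but the loader expects {want}"

# The shapes from the error report above:
print(diagnose(151936, 128256))
# -> checkpoint looks like Qwen2 family, but the loader expects Llama 3 family
```

In practice this means double-checking that the file selected in CLIPLoader is the text encoder the workflow calls for, not one from a different model's release.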
ok it was corrected in comfyui 9.0.2. :)
Update all your nodes, then update from update.bat.
I tried klein-4b because klein-9b makes ComfyUI crash, but I got distorted/useless results. Does this work for anyone with the 4b?
You can use Klein 4b, no problem.
I'm using flux-2-klein-9b-Q8_0.gguf; it also works very well if you have a good graphics card.
Nice workflow. What does the middle lora do?
Thanks! It's currently empty. You can add any lora you want.
Thanks.
DrawMaskOnImage
The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 2
Update your ComfyUI, and update all your nodes from the Manager. Download the zip file of this workflow; inside it there is a "SwapAnything For Tensor issue fix .json" version. Try it.
Thanks, but I've already done all of this and I get the same error...
P.S. I updated all the ComfyUI nodes one more time and everything works fine now! Thanks!
@lemans99993 you're welcome
Is there a way to use fixed seed in this workflow ?
And if there is not - can you add it please ?
Thank you !
Click the icon in the upper right corner of the node labeled Ksampler. After making the desired change, tap the blue header in the upper left corner.
Works pretty well, thanks.
Hello there. Thank you for this. I'm new, so I had a lot of issues installing, but I figured out most of the solutions :)
I have two questions, though. I can't seem to adapt the negative prompt. With the outfit swap, the skin tone and body parts are often changed to match the reference photo. Is there a way to add a working negative prompt? I wasn't able to add one.
Also, when I select body, clothes, and accessories, the head and hair appear in the mask. Is that a bug or a known issue? Thank you.
Hello. When CFG is set to 1, your negative prompt inputs will not work; in other words, negative prompts are disabled in 4- or 8-step models. Instead, you should write your negative commands in the positive prompt. For example, "It is strictly forbidden for the model to look at the viewer." ...I'm afraid I didn't understand your other question.
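The reason CFG = 1 disables the negative prompt falls directly out of the classifier-free guidance formula. A minimal numeric sketch (a plain-Python stand-in for the sampler's per-step combine, not ComfyUI's actual code):

```python
def cfg_combine(uncond: float, cond: float, scale: float) -> float:
    """Classifier-free guidance: blend the negative (uncond) and positive (cond) predictions."""
    return uncond + scale * (cond - uncond)

# At scale == 1 the formula reduces to cond, so the negative prompt drops out entirely:
print(cfg_combine(uncond=1.0, cond=3.0, scale=1.0))  # 3.0 -- identical to cond alone
print(cfg_combine(uncond=1.0, cond=3.0, scale=4.0))  # 9.0 -- now the negative prompt matters
```

Algebraically, `uncond + 1.0 * (cond - uncond) == cond`, so at CFG 1 the negative-prompt branch contributes nothing, which is why distilled 4- or 8-step models (which require CFG 1) ignore it.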
Hey! Amazing workflow, pretty simple to use and not much hassle to get it working. Also, does it only target a woman's face when using head swap with hair?
Thanks! No, it's not just the woman's face. For example, you can specify "Santa Claus's face," "the face of the man on the right," etc. Just specify this in the "sam3 text" node.
Is there a way to preserve the emotion?
i got this error
LayerMask: PersonMaskUltra V2
operands could not be broadcast together with shapes (477,425,1,4) (477,425,4) (477,425,4)
First, run "Update All" from the Manager, then run the update_comfyui.bat file. There are two files in the downloaded zip; try them one by one. It will be fixed.
Hey, generally it is working for me, but the output image is kind of washed out compared to the original input image, is there any way to fix this?
Please see an example here: https://pasteboard.co/3Px2o874waxS.png
Many thanks!
The "washed out" problem is Flux Klein's biggest bug. First of all, leave the text empty; do not write anything. Secondly, try the prompt "Add shadows and contrast to image", and if you prefer, also add "smooth the skin".
@ugurdoyduk341 Thanks for the fast reply, I will test this :)
Solid workflow, works great, though is there a way I can improve the masking? It tends to leave big unmasked blotches on the cheeks, or even half the face if there's a tiny bit of shading.
The "Grow Mask" node is for this purpose; you can expand the mask. It's at the bottom of the workflow.
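"Grow Mask" is essentially a binary dilation: each pass expands the masked region outward by one pixel. A pure-NumPy sketch of the idea (the actual node's implementation may differ):

```python
import numpy as np

def grow_mask(mask: np.ndarray, pixels: int) -> np.ndarray:
    """Expand a binary mask outward by `pixels`, like the Grow Mask node."""
    grown = mask.astype(bool)
    for _ in range(pixels):
        # Union of the mask shifted one pixel in each of the four directions.
        # Note: np.roll wraps at the edges, which is fine away from the borders.
        grown = (grown
                 | np.roll(grown, 1, axis=0) | np.roll(grown, -1, axis=0)
                 | np.roll(grown, 1, axis=1) | np.roll(grown, -1, axis=1))
    return grown

mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True                  # a single masked pixel
print(grow_mask(mask, 1).sum())    # 5 -- a plus-shaped neighborhood
print(grow_mask(mask, 2).sum())    # 13 -- a diamond of radius 2
```

Increasing the grow value is what lets the mask cover those unmasked blotches on the cheeks without retyping the SAM3 prompt.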
Still working? I can't download SAM3.
Almost all the issues here have been resolved with the update. Update all from manager.
@ugurdoyduk341 No, the new SAM3 node downloads sam3.safetensors instead of sam3.pt and detects nothing. Switching to SAM3 version 1.0 nodes may resolve the issue.
UPD. The author is aware of the problem and promises to update the code soon.
https://github.com/PozzettiAndrea/ComfyUI-SAM3/issues/98
@chain2k Thank you for the warning. I think I gave an automatic response, sorry about that. The owner of the repository, PozzettiAndrea, said he would solve the problem over the weekend.
You can download sam3.pt from here 1038lab/sam3 at main
Can find it here: 1038lab/sam3 at main
Hi. Where should I put sam3.pt file? Automatic download doesn't work
@pinkflamingopain255 ComfyUI\models\sam3, and be sure to download the new v2 workflow.
LoadSAM3Model
Error while deserializing header: invalid JSON in header: unknown variant C64, expected one of BOOL, F4, F6_E2M3, F6_E3M2, U8, I8, F8_E5M2, F8_E4M3, F8_E8M0, I16, U16, F16, BF16, I32, U32, F32, F64, I64, U64 at line 1 column 201515
Update your GGUF nodes, update your ComfyUI, and update all from the Manager.
My previous answer was incorrect. Currently, sam3 is not working. They will fix it soon. https://github.com/PozzettiAndrea/ComfyUI-SAM3/issues/98
Hello. Thanks for the workflow, it works.
I've encountered a problem. When replacing a face, the output image in the area of the replaced face looks blurry, as if it was upscaled from a tiny 8x8 pixel image, while the reference image remains sharp. I uploaded different images for the face replacement, from 1024x1024 to 2048x2048, with perfect clarity and detail. Can you tell me what the problem might be? I'm using 9b_Q8 GGUF version, because my 3060 can't handle heavier models.
The photo you want to transfer the head from was taken from very far away and is a full-body shot. Just type this in the prompt section: "Make sure the final image is 4k detail."
@ugurdoyduk341 Thanks for the reply. It's quite the opposite :) The target image, where I want to transfer the head, is a full-body shot. And the donor image, from which I'm taking the head, is a close-up of excellent quality. I'm preparing a dataset for LoRa, and I don't like that the girl doesn't look very much like herself in the full-body shots. I tried to fix it using your workflow.
@WhiteSnake_Pro Hmm, I don't know. You must have a different problem.
"DrawMaskOnImage
only integer tensors of a single element can be converted to an index"
I added an RGB node and restarted, and it still doesn't work...
Help, please ;)
Have you tried the "SwapAnything For Tensor issue fix" file extracted from the zip?
What do you do about the mediapipe incompatibility with python? At node LayerMask: PersonMaskUltra V2
I updated the nodes. Using the new (down)Load sam3 model node, there is no field to select a model. Am i doing something wrong?
To be honest, I didn't understand any of that, so I went back to the old version. // For the working old version: in custom nodes/sam3/, open a terminal and type: git checkout 18efc9b
I have the same issue. The SAM3 node has only precision as an input option, and we cannot select the sam3 model from a dropdown...
You can solve the SAM3 problem by switching the node from "Sam3 Model" to Easy Sam3ModelLoader (ComfyUI-Easy-Sam3). You then need to reconnect the links and set Draw Mask to CPU. In the new node you can then simply load the "sam3.pt" file; just pay attention to your settings (mode / device / precision). My settings are "sam3.pt / Image / Cuda / bf16".
@neomcclaoud Thank you. I created a new version based on your suggestions.
cannot load sam3 model
You can download the new version
@ugurdoyduk341 new version works good, thanks for the workflow
@zylzzz66 You're welcome