Regional prompt workflow for XL Models
This is a combination of some workflows I've used in the past or recently.
Most of the regional prompt part originally came from this workflow.
Huge thx to @zml_w for all the testing and the help
I've just found that LevPat's account might have been banned or deleted.
Upscale Method for V2 by @LevPat
https://civarchive.com/models/1994653/custom-upscale-flow
USER GUIDE
https://civarchive.com/articles/18257
For V3 Merged Workflow
Merged the two methods into one workflow; you can now toggle between Dense Diffusion and Attention Couple.
Changed the detailer method; I've kept the Couple Face Detail group (for some images it's OK).
Now I'm mostly using a method from MGHerder
Found on this workflow
Added some settings to the Face Detailer. I've linked all the merged text prompts to the positive input on the first two SEGS for each of the two characters, so you can switch between characters (I couldn't find a good way to add wildcards or overwrite the prompt on the SEGS detailer).
Added SeedVR2 upscale; you will need to download its model and VAE for it to work.
Added a simpler and lighter latent upscale.
I've kept the Advanced Latent Upscale (previously called Crazy Upscale), linked all the positive and negative text prompts to it, and converted all the noodles to a subgraph.
Kept the USDU upscale; in some cases it can be good too.
Basically you now have four upscale methods to choose from; you can delete the upscale groups you don't want and it will not break the workflow.
You will need all the custom nodes listed for V2 below, plus the following:
https://github.com/numz/ComfyUI-SeedVR2_VideoUpscaler (If you want to use it)
https://github.com/mcmonkeyprojects/sd-dynamic-thresholding
https://github.com/chrisgoringe/cg-use-everywhere
https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
https://github.com/LAOGOU-666/Comfyui-Memory_Cleanup
https://github.com/Extraltodeus/Skimmed_CFG
https://github.com/mirabarukaso/ComfyUI_Mira
https://github.com/ltdrdata/was-node-suite-comfyui
FOR V2 Dense Diffusion and Attention Couple
I've made two separate workflows with different regional prompt methods; both are in the archive if you want to try them.
Dense Diffusion
The error "The size of tensor a (924) must match the size of tensor b (308) at non-singleton dimension 3"
is still here with Dense Diffusion, BUT now I know exactly why it happens, so I will detail how to avoid it.
First, you can put almost anything in every positive prompt; a lot of detail or actions for the left character and almost nothing for the second character will not trigger this error.
The only place this error can come from is the positive vs. the negative prompt.
So why does it happen? Almost exclusively with embeddings, because of the token count and the way Dense Diffusion works: the token sizes need to be almost the same, and embeddings like Lazyhand, Lazyneg, etc. use a lot of tokens (more exactly, token chunks).
So how do you avoid it? You can simply not use embeddings at all; I've added notes on the workflow with good (I think) quality prompts.
You can still use embeddings, but you will need to balance them between positive and negative. For example, if you put Lazyhand + Lazyneg in the negative, you will need at least one or more embeddings in the positive quality prompt to end up with the same number of token chunks.
Because of that, I've added a custom node made by me (and GPT) to normalize the negative quality prompt; you can turn it on or off. I've put pics for comparison in the archive.
If you're not happy with or don't like this method, I've made another workflow with a different node called Attention Couple.
Attention Couple
With Attention Couple this error will never pop up, so why not use it and get rid of Dense Diffusion?
Attention Couple works great and can give pretty good results, but in my testing I saw two major flaws with it.
First, LoRAs don't get applied the same way with it; Dense Diffusion seems to blend the model and LoRAs better.
Second, character interactions seem more difficult too. I didn't use it very extensively, but the few images I've made seem to have less prompt adherence.
Here is every node you will need for both:
For Dense Diffusion
https://github.com/Fannovel16/comfyui_controlnet_aux
https://github.com/ltdrdata/ComfyUI-Impact-Pack
https://github.com/rgthree/rgthree-comfy
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/ssitu/ComfyUI_UltimateSDUpscale
https://github.com/cubiq/ComfyUI_essentials
https://github.com/ClownsharkBatwing/RES4LYF
https://github.com/giriss/comfy-image-saver
https://github.com/shiimizu/ComfyUI_smZNodes
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
https://github.com/huchenlei/ComfyUI_densediffusion
https://github.com/edelvarden/comfyui_image_metadata_extension
https://github.com/rcsaquino/comfyui-custom-nodes
https://github.com/zml-w/ComfyUI-ZML-Image
(https://github.com/zml-w/ZZZ_ZML_English_Patch)
A small custom node for the negative normalization is in the archive; just put it in the custom_nodes folder. No requirements needed.
For Attention Couple
Everything above, plus:
https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
https://github.com/laksjdjf/cgem156-ComfyUI
If you get an IMPORT FAILED error for cgem156, you can try this fix:
https://github.com/laksjdjf/cgem156-ComfyUI/issues/17#issuecomment-2918745574
It worked for me with the ComfyUI desktop app.
Older Version
I've added a chain sampler because I love this method: the render time and the results it gives.
This is quite an advanced workflow, so you will need a lot of custom nodes; some may not work with ComfyUI Manager, so I will link them here if you want to install them manually.
Make sure you activate your venv environment before installing the requirements.txt for each custom node.
There is a simplified version in the archive if you want to try it without all the nodes for HiresFix, Face Detailer, upscale and color match.
I'm not responsible if you break your ComfyUI install; some of the nodes may use a different PyTorch version or requirements than the ones you already use.
https://github.com/ltdrdata/ComfyUI-Impact-Pack
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
https://github.com/rgthree/rgthree-comfy
https://github.com/yolain/ComfyUI-Easy-Use
https://github.com/shadowcz007/comfyui-mixlab-nodes
https://github.com/jags111/efficiency-nodes-comfyui
https://github.com/ssitu/ComfyUI_UltimateSDUpscale
https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes
https://github.com/cubiq/ComfyUI_essentials
https://github.com/ClownsharkBatwing/RES4LYF
https://github.com/ltdrdata/ComfyUI-Impact-Subpack
https://github.com/huchenlei/ComfyUI_densediffusion
https://github.com/edelvarden/comfyui_image_metadata_extension
https://github.com/Miosp/ComfyUI-FBCNN
https://github.com/rcsaquino/comfyui-custom-nodes
https://github.com/weilin9999/WeiLin-ComfyUI-prompt-all-in-one
Deprecated version; use this one instead (I've fixed the workflow with the new one):
https://github.com/weilin9999/WeiLin-Comfyui-Tools
For V1 Multimask
https://github.com/chflame163/ComfyUI_LayerStyle
https://github.com/kijai/ComfyUI-KJNodes
https://github.com/chrisgoringe/cg-use-everywhere
https://github.com/giriss/comfy-image-saver
Description
MASK
Added zml_w's multi-mask method; I didn't add his manual method for custom masks for now.
As he wrote in the comment section: "However, I need to remind you that the effect of the horizontal mask is not good, and the model cannot understand this composition. Among the automatically generated masks, I recommend the vertical one the most, followed by the diagonal one."
So be aware of that.
I will add an explanation in the related article on how to use the "switch" for masks and for his second option for conditioning.
SAMPLER
Added a few nodes for better control of the ClownShark sampler, plus an explanation of how to use the sampler for chain or standard sampling.
Removed some notes for LoRAs; after a few tests, LoRAs will not work if you put them in the prompt area. You can add them in the LoRA stacker, but the results may be very random.
Removed the save image after sampler generation and added a preview for each sampling method you use.
Added a group bypasser switch for the save with metadata.
Known bug
Sometimes, for some reason I can't really explain, if you put too little or too much conditioning in the prompt area, this error will pop up:
"The size of tensor a (924) must match the size of tensor b (308) at non-singleton dimension 3"
Apparently this is due to the attention for the mask area and the conditioning related to it (if I understood correctly).
I'm trying to find a way to get rid of this error, but that seems very hard, as I'm not an expert; I just take parts of other good workflows and put them together.
Be aware that the error can sometimes even happen on the face detailer if you add a wildcard, conditioning or a LoRA to it, or even if you leave the conditioning I've put by default ({face|face,detailed face}).
You can try removing everything to see if it works.
FAQ
Comments (22)
nice
Would love to use that, but WeiLin just doesn't work at all. If I install it, my ComfyUI is not able to start; it's just stuck at "all startup tasks have been completed".
Are you sure your node and ComfyUI are up to date? The WeiLin node was broken some weeks ago but got fixed lately.
@hmmmmmnike Yeah, everything is updated; I did that just 2 days ago. It simply won't start Comfy for me, or I have to wait, maybe hours; the longest I waited was about half an hour and nothing happened.
@Pentox What is your ComfyUI: portable, Stability Matrix, desktop app or git install?
@hmmmmmnike Portable
@Pentox OK, so you may need to install
uuid7
aiosqlite
in your Python environment; it's probably the requirements.txt file that got messed up when you installed it with ComfyUI Manager or with cmd.
For example, on my setup I needed to install those in my venv environment. I'm not familiar with the portable version, so you might need to search a bit to find the right way to install those requirements properly.
@hmmmmmnike I'll give it a try, hoping not to scramble my ComfyUI. Thanks for the advice though.
@Pentox OK, so apparently the ComfyUI portable version uses a simplified Python environment, and aiosqlite may not work with it unless you modify the way the Python environment works (if I understand correctly).
I'm pretty sure it was installed when you tried to install the WeiLin node, and that may be why it was taking forever to load.
You can try to install it manually with cmd, but you might get the same result, because those requirements use specific Python components that you may not have with the portable version.
@hmmmmmnike Ah OK, hmm, well that's a shame then, but thanks again for the research and clarification. Then I'll have to stick to my other workflow without WeiLin; it works well enough, I guess, I just have to do a lot with inpainting afterwards.
@Pentox Yeah, apparently ComfyUI portable is not really good for this type of workflow.
@hmmmmmnike Hmm, it happens, but I guess your workflow is really good, since mine does a good job even without WeiLin; yours must be magnificent, judging from the images, so a like is there nonetheless ^^
"ClownsharKSampler_Beta
The size of tensor a (462) must match the size of tensor b (231) at non-singleton dimension 3"
every time I change the left person's prompt
So yeah, it can happen when the token count is too far off between the individual prompt texts. You can either copy the problematic prompt one or more times, or delete some text from the others; I prefer copying it multiple times, as in most cases it will not break the image coherence. Also make sure your general prompts are not too long or too short.
I'm currently working on a new workflow and hopefully this error will not happen with it.
UltralyticsDetectorProvider
Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint. (1) In PyTorch 2.6, we changed the default value of the weights_only argument in torch.load from False to True. Re-running torch.load with weights_only set to False will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source. (2) Alternatively, to load with weights_only=True please check the recommended steps in the following error message. WeightsUnpickler error: Unsupported global: GLOBAL ultralytics.nn.tasks.DetectionModel was not an allowed global by default. Please use torch.serialization.add_safe_globals([ultralytics.nn.tasks.DetectionModel]) or the torch.serialization.safe_globals([ultralytics.nn.tasks.DetectionModel]) context manager to allowlist this global if you trust this class/function. Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
Sorry, I never got this issue, so I can't really tell why it happens. It sounds like maybe the Impact Pack or Impact Subpack nodes; if other workflows you tried work with the FaceDetailer, then I don't really know, sorry.
You can also install a custom node that disables the torch security check.
So far the workflow works pretty great.
I used the default settings.
Though I have to say it seems a bit complicated at first,
the masks part especially.
I would love it if you could find a way to toggle so that only the mask being used is previewed,
but that's just an idea.
Still, so far this workflow has worked the best of all I've tried. Thank you.
Thank you very much. In the future workflow all the mask processing parts will be deleted and I will use the ZML nodes; they made custom nodes for mask splitting, and you just need to toggle the direction: vertical, diagonal or horizontal.
@hmmmmmnike Nice work. I want to ask, as a newbie: what is the difference between using chain sampling and a normal single KSampler?
@redhoneyai713 Chain sampling gives better prompt adherence for me (not all the time), and with the ClownShark sampler using BongMath plus the sampler I've set by default, the difference is pretty noticeable. If you want, I can give you a pretty simple workflow with different samplers to compare; I was using something similar before, and about 80% of the time the best image came from the ClownShark sampler.
@hmmmmmnike Yeah, you're right... I tried it myself now. There are fewer fragments between the masks when using the chain. The thing is, I like the style/quality results from KSampler with dpmpp_2 + karras; the chain makes it noisier and in general somewhat worse quality-wise than a simple KSampler. So I put a dpmpp_2 KSampler with denoise 0.55 at the end as a sort of "hires fix", and that fixed a lot of things. So far it's working better and much faster for me than using the SD upscaler and the compression removal node at the end. Still working on it though.
I'm currently working on a way to automate it for batch renders.





