CharTurner
Edit: controlNet works great with this. Charturner keeps the outfit consistent, controlNet openPose keeps the turns under control.
Three versions, scroll down to pick the right one for you.
If you're unsure of what version you are running, it's probably 1.5, as it is more popular, but 2.1 is newer and gaining ground fast.
Version 2, for 2.0 and 2.1 models
Version 2, for 1.5 models
Version 1, for 1.5 models
BONUS: Experimental LORA released
Use at your own risk. :D (mixes well, tho)
Hey there! I'm a working artist, and I loathe doing character turnarounds; I find them the least fun part of character design. I've been working on an embedding that helps with this process, and though it's not where I want it to be, I was encouraged to release it under the MVP principle.
I'm also working on a few more character embeddings, including a head turnaround and an expression sheet. They're still way too raw to release tho.
Is there some type of embedding that would be useful for you? Let me know, I'm having fun making tools to fix all the stuff I hate doing by hand.
v1 is still a little bit... fiddly.
Sampler: I use DPM++ 2M Karras or DDIM most often.
Highres. fix ON for best results
landscape orientation will get you more 'turns'; square images tend toward just front and back.
I like https://civarchive.com/models/2540/elldreths-stolendreams-mix to make characters in.
I use an embedding trained on my own art (smoose) that I will release if people want it? But it's an aesthetic thing, just my own vibe.
I didn't really test this in any of the waifu/NAI type models, as I don't usually use them. Looks like it works but it probably has its own special dance.
Things I'm working on for v2: EDIT: V2 out, see below! (also v2 2.1)
It fights you on style sometimes. I'm adding a wider variety of art styles to the dataset to combat this. - V2 has much better styles
Open front coats and such tend to be open 'back' on the back view. Adding more types of clothing to the dataset to combat this. - Still has this problem
Tends toward white and 'fit' characters, which isn't useful. Adding more diversity in body and skin tone to the dataset to combat this. - v2 Much more body and racial diversity added to the set, easier to get different results.
Helps create multiple full-body views of a character. The intention is to get at least a front and back, and ideally front, 3/4, profile, 1/4, and back views, in the same outfit.
Description
I'm not great at prompting for 2.1 yet, so I'm sure your prompts will work better with it. Works fantastic with negative embeds. (still collecting links)
Comments (166)
Could you do a version that's true orthographic? Character sheets are supposed to be orthographic, but these are kinda just people standing around in different poses with perspective, usually from a slightly low camera angle. These seem more like perspective references from an orthographic turnaround sheet, like you'd see in a final concept.
I would love to do a true orthographic. Each one of the source images in the dataset IS a true orthographic. I will keep training, but it's not something that's easy to teach the AI, at this point. I'm working on it. :D
@mousewrites I hear ya there. In my orthographic experiments it was hit or miss. This is pretty awesome though. I was just hoping to have something consistent for like matching up in blender to make a character 3D (and then mapping the character as texture)
@WAS yeah, that's the eventual goal, just not there yet. XD
Damn, this really shows promise already - great job! Curious - for already generated models - I have tried inpainting an existing person next to my "target" (ex: Batman and Robin standing side by side - I mask Robin), and also trying to create from blank by inpainting an empty area with "latent noise / latent nothing". It tends to bring in random models but doesn't seem to try to capture the "existing" one. Is it just easier to try to recreate the model I want but with this embedding (I'm having a hard time getting an exact recreation)?
Not perfect, but the way I've done it is to make a turnaround of a similar character (in build), replace one of the poses with your character, and then run it through img2img with an inpainting model, with the 'target' character masked. Regular models (vs inpainting models) don't look at the character enough to make it work.
@mousewrites Hi, I also have a similar question. I've been trying out your suggestion of using an inpainting model to try to recreate the same character in a different pose without getting a one-to-one faithful recreation.
My process is to start with a txt2img prompt with no charturner embedding enabled. Once I find an image I like, I send it to img2img and outpaint the bust or portrait into a full body character. Now, I outpaint the new fullbody character until I have enough blank or empty canvas room for two characters. Eventually, I generate a second figure with the help of charturner, a style only inpainting model, and masking the blank space on the canvas.
Even with multiple runs in img2img, I can't get a faithful recreation of the original txt2img character. I am able to get a second character in a different pose, but it's not accurate. Is this expected behavior?
@kasukanra Use control net, and other than that, it's only an embedding, not an actual program, so it's something you'll just have to keep trying at. This is a helper, not a generator. :D
Hello! Check out the new controlnet thing. It has openpose model that can be used for character turning.
YUP! It's AMAZING. It will pull the pose from a good turnaround and you can use it to make the perfect poses, it's AWESOME.
@mousewrites do you have a link to this? I've not heard about it.
@orwelian84 https://github.com/Mikubill/sd-webui-controlnet
I put it in the embedding folder but the webui doesn't show it as a valid pt. Any reasons? Thanks
Downloaded this for 1.5, can't get it to load. Double checked that it was the 1.5 version but they all seem to have the same file name. pls advise. Thanks.
Hm, not sure what's going on. As long as it's in the embedding folder, and you're using the 1.5 version with a 1.5 model, it should work.
I am unable to use this model properly and help would be appreciated.
Need more info to help. What version, what model, what prompt, or is it not loading?
The console shows: "Textual inversion embeddings skipped(1): 21charturnerv2". Error?
Hm, weird, i'll look into it.
Same trouble for me..
I have tried putting this into the "hypernetworks" folder, but it does not work, even in the "aesthetic_embeddings" folder.
@Chandle It's not a hypernetwork, it's an embedding. I'm not sure what happened, it's working fine on my side. I'll reupload it.
Just to confirm, @bad_leaf and @Chandle there's a version for 1.5 and 2.1 models, and it will skip loading the one that doesn't work. IE, if you're using 21CharturnerV2 in a 1.5 model, you'll get the error. Make sure you download the one for 1.5.
@mousewrites thanks~~
@mousewrites Thank you very much, and I will try this in the 2.1 models.
@Chandle There's a 1.5 version as well, if you want to use it in those. Good luck! :D
@mousewrites thank u a lot.
Hi, I want to know how the embedding is trained and saved. Can you explain it here?
The embedding was trained on my system, from a small dataset I put together of my own character turnarounds, plus a few of character turnarounds I found online (mostly photo reference, to make sure it didn't only do illustrated turnarounds). I trained it using Auto1111's training tab, which outputs a pickletensor embed (pt file), which I uploaded here.
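For the curious, an A1111 textual-inversion .pt file is just a torch checkpoint. Below is a minimal sketch of its layout; the `string_to_param` key matches what A1111's trainer writes, but the 8x768 shape and the extra `name` key here are illustrative assumptions (vector count varies per embed).

```python
import torch

# Build a dummy embedding file in the shape A1111's trainer writes:
# a dict whose 'string_to_param' entry maps the placeholder token '*'
# to the learned vectors. The shape and extra keys are illustrative.
dummy = {
    "string_to_param": {"*": torch.zeros(8, 768)},  # 8 vectors, SD-1.5 width
    "name": "charturner_demo",
}
torch.save(dummy, "charturner_demo.pt")

# Inspecting a downloaded embed the same way tells you whether it
# targets 1.5 models (vector width 768) or 2.x models (width 1024),
# which is why the wrong version silently fails to load.
data = torch.load("charturner_demo.pt", map_location="cpu")
vectors = data["string_to_param"]["*"]
print(vectors.shape)  # torch.Size([8, 768])
```

This is also a quick sanity check when an embed is "skipped" at startup: if the vector width doesn't match your checkpoint's text encoder, the webui refuses it.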
Hi! Thanks for the amazing addon =) Is there a way to make a T-pose for pre-existing art?
I tried adding both versions 1.5 and 2.1 to embeddings folder but they don't appear in the UI
Make sure you restart. They don't show up under extra networks?
@mousewrites 1.5 version showed up after I refreshed it, thanks. I was a bit confused because I had to click on the charturnerv2 folder before it actually popped up!
@mousewrites Unfortunately I can't make it work. It adds the charturnerv2 prompt, I've tried adding the additional prompts in the guide, but it just gives regular images. The lora one works, but the images are full of artifacts and glitchy.
@JohnnyDoe303 The lora needs to be set low, such as .4 or so. Try out one of the prompts on either my example images, or some of the other ones in the reviews (check the little i with a circle around it in the lower corner of each image). It does work, even if it's a little fiddly.
I put it in the embeddings folder, and it didn't show up in the Textual Inversion window, even after I restarted, but when I put it in the Lora folder, it shows up in the Lora window.
Well, it's not a lora. Make sure you have the one loaded for the models you are using (ie, the 1.5 version won't show up if you have a 2.1 checkpoint loaded, and vice versa).
@mousewrites Not for me either; my webui is on 2.1, I restarted many times, and it didn't show up. I really need this, what should I do next?
@May2xx Download the other version, just in case, and put both in the embedding folder, restart, see if that fixes it.
How can I limit it like "only 3 views including side view and back view"?
Use ControlNet (extension), find the exact turn you want (pose libraries, photos, other reference), and then use CharTurner to make sure everybody is wearing the same outfit.
The Lora version might be better to lock it in for your needs.
Hello, can you make a six-view version? I think it's possible to create a 3D model using six views.
You can! just do a wide version, and if you want very set positions, use the extension ControlNet to control the poses, and CharTurner to make sure it's all the same character. Good luck!
Which software allows to port six views into a 3d model?
@Mifory
@MUXINGZHE Could you teach me how to do it, master?
What are the recommended width and height, and which sampler was used?
Should be in the metadata of the images, other than the grids. Wider images get more turns; square tends to front and back. Sampler used is up to you, I tend toward DDIM at 20 steps with highres fix, set to 1.3-1.5x with Latent (nearest-exact). However, it should work with most settings; it's not locked into any particular size or sampler.
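Those settings can also be expressed as a payload for A1111's `/sdapi/v1/txt2img` API (this assumes the webui was launched with `--api`; the prompt text and exact values below are illustrative, not required):

```python
import json

# Illustrative txt2img payload mirroring the recommended settings:
# wide aspect ratio, DDIM at 20 steps, highres fix at ~1.4x with the
# latent nearest-exact upscaler.
payload = {
    "prompt": ("charturnerv2, a character turnaround of a knight, "
               "multiple views of the same character in the same outfit"),
    "sampler_name": "DDIM",
    "steps": 20,
    "width": 768,    # wider than tall gets more 'turns'
    "height": 512,
    "enable_hr": True,                        # highres fix ON
    "hr_scale": 1.4,                          # 1.3-1.5x range
    "hr_upscaler": "Latent (nearest-exact)",
}
print(json.dumps(payload, indent=2))
# POST it with e.g.:
# requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```

The same values can of course just be set in the webui directly; the payload form is handy for batch experiments.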
I am kind of new to using SD. Help me understand using this as opposed to just using ControlNet's OpenPose. I get good results already. Since this is a LoRA, won't it change things to look more like the characters you trained it on?
I made this before ControlNet was a thing. OpenPose will 100% be more accurate on poses, but it also won't make up poses (ie, if you want a solid T or A pose, ControlNet is the way to go).
Using CharTurner WITH controlNet is my favorite: controlNet keeps the poses exactly where you want, CharTurner makes sure you get multiple images of the SAME character, not several characters. Both is better.
I also trained it pretty carefully to be a POSE embed, not a character embed. As you can see from the wide variety of characters people have made with it, none of which are the characters I trained it on.
There IS a lora version, if you want it. Check my profile. :D
I am sorry, this is a TEXTUAL INVERSION, not a LoRA. I was watching a youtube that went along with https://github.com/tobias17/sd-anim-utils to see how Tobias Fischer was using SD to create animations. He has a link to this. I get pretty good results with just ControlNet, though mixed: most of the time I get consistent-looking characters, but sometimes in one of the views the character is wearing a slightly different color outfit, or a hat or some other accessory. Which is great for concept art, not so great for creating 2D animations. I do like the consistency that this brings to all of the poses in one sheet. I am going to give this a try and see how it affects my images.
I have a proposal for you to collaborate with us on an interesting project.
I'm not sure what you're asking for, to be honest. You can use img2img with it, yes. If you're wanting to inpaint poses, make sure you're using an inpainting model.
@mousewrites
Sorry, I didn't express it clearly. I mean, I have an existing image; can I use this to generate multiple views of it directly?
@hq901112603 Yes. Here's how i do it: https://imagecache.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/035cbefd-7770-424a-4a63-0ffb18dae200/width=1772/5.jpeg
@mousewrites thanks
:( can't get it to work, the lora works but the embedding for v1.5 on 1.5 models just isn't working for some reason for me.
Make sure you downloaded the right one? There's a v2 for 1.5 and a v2 for 2.0 models, and a lot of people download the 2.0 one (it's the first one up). Click on the one that says for 1.5.
@mousewrites I downloaded the v2 for 1.5. Should I be messing with the weights for it? It doesn't do the turnaround effect or multiple poses like the Lora does. Only problem is the Lora has a habit of stealing the style.
@mechamuffin The embed works, but I made it before lora or controlnet were a thing; both of those are a little "easier". However, it SHOULD be working without them. Try using a prompt from one of the example images. Using the words "character turnaround" and making your image wider than it is tall both help.
@mousewrites what sub folder should this go on to get it to work? Sorry- newbie :)
@magikal stable-diffusion-webui\embeddings :)
Why won't 21charturnerv2.pt load? No matter how many times I refresh, nothing appears from stable-diffusion/embeddings.
There are two versions, one that loads for 1.5 models, and one that loads for 2.1 models. The 21CharturnerV2 is for the 2.1 models. If you're working with 1.5 models, make sure you grab the other version. (embeds don't even show up if they are for the wrong model.)
@mousewrites Not working for me either, it's skipped during launch for some reason, even if I launch with a 2.1 checkpoint. Strange. V1.5 charturner still loads with the 2.1 checkpoint..
I love the utility. How did you train the embedding and lora (beta)?
Same way most stuff is trained. Collected a bunch of turnaround (some I made, some I found), captioned by hand, trained, used that training to make some more, trained, repeat until it's useful. I think I trained around 25 versions of this before I was happy with it. Lora is made with the same dataset.
@mousewrites Salute magnificent
@mousewrites Hello, may I ask what keywords need to be added to the prompt to achieve the three-view effect?
@wsjshaoyeahnet697 Most images have the prompt included; look for the little i in the circle on the example image. Make sure you're a little wider than square, and if you really really really need it in a very particular pose, use this AND ControlNet with an image of the poses you want. While I tried to train in a keyword specifically for 3 view, 4 view, 5 view, it doesn't work consistently enough to recommend it. In general,
"Character turnaround, front back side view," and charturner should be enough to make it work.
thank you,i so love the utility
100% this is the answer to almost every "i can't find it" issue. It sucks that you have to have both the 1.5 and 2.1 versions if you use both types of models. I tried to make it clear in the text that there's a file for each 'type' of model (1.5/2.1) but it is still unclear.
if you are using 1.5 models, you need CharTurner V2 for 1.5.
If you are using 2.1 models, you need CharTurner v2 for 2.1
I am using 2.1 models, and the charturner 2.1 is not showing up; both the 1.5 and 2.1 versions are in my embeddings folder.
@FoxDude that is very weird! if you have both, and they aren't showing, I have no idea what the issue could be. Could try the CharTurner Lora in my profile.
Thanks for the guidance, I was just wondering about this.
Do I have to use a character lora to make sure it's always the same character? I'm using v2 for 1.5 with ControlNet and the CharTurner lora, but it's not the same character turning around.
The embed and lora versions won't let the system produce a character it can't already make without them; they only help you make a character turnaround (same character, different pose). So, if you can make the character without charturner (lora OR embed), then you should be able to make a turnaround with this.
If you can't make the character, you need a lora to make the character, and then you can use this to make a turnaround.
ControlNet helps make sure the pose is 100% what you want (since the embed and lora sometimes pick their own poses), but doesn't help you keep consistent characters.
Good luck! :D
@mousewrites thanks!
I like this embedding, but can you also make a safetensor version of it?
There's a lora. I don't know how, and can't find a way, to change a Textual Inversion embed into a safetensor; checkpoints and LORAs have safetensor options, but embeds do not by default. If you can point me at a resource to change it into a safetensor, I will make one.
@mousewrites There's a guide explaining how to do it in the Automatic1111 webui: https://rentry.org/safetensorsguide#a-guide-on-safetensors-and-how-to-convert-ckpt-models-to-safetensors-directly-with-voldy-automatic1111s-ui
If you don't want to go though all the trouble of installing Automatic1111 just for this, then you can use this colab notebook instead: https://colab.research.google.com/github/DiffusionDalmation/pt_to_safetensors_converter_notebook/blob/main/pt_to_safetensors_converter.ipynb
However it seems to strip the meta-data out of the file in the process, so it might not be ideal.
EDIT: I'm not the author of the rentry guide, and I hadn't actually tried it myself when I wrote this comment. It seems, at least in my version of the UI, that it doesn't actually convert the embedding and creates a tiny (~12 MB) file instead. I highly recommend using the colab notebook I linked instead.
@TheRoamingSwamp I have Auto1111 installed, so i'll check that out. I had thought that only worked for checkpoint models, and not the smaller PT embeds. Thanks for the info.
posting this here for reference
Can I convert other types of models that aren't Checkpoints to .safetensors?
Yes, you can also convert any type of model to .safetensors using this method. This means you can also convert LoRA, embeddings, hypernetworks, VAE and also special models like Pix2Pix, inpainting, ControlNet models and presumably other types of new models that might come out and that aren't already in .safetensors for whatever reason.
Just make sure their file extension is .ckpt. For example if your embedding is called embedding.pt then rename it to embedding.ckpt. They need to be renamed .ckpt so the Model Converter extension can detect them.
Then simply place the embedding, VAE, etc. you want to convert in the root of stable-diffusion-webui\models\Stable-diffusion and then the extension should detect them. Use the blue Refresh list button in the extension if needed to make them show up if they aren't, even after renaming them to .ckpt. Alternatively, restart the UI.
@TheRoamingSwamp Safetensor version uploaded (for 1.5 models, 2.1 version safetensor coming soon)
@mousewrites I really appreciate that you take feedback from the community, but the safetensors embedding you've posted is really small (way too small to be an embedding), and when I try to load it, I get an error message saying that it's the wrong type of file.
I'm guessing that you used the automatic1111 web method from the rentry guide. I hadn't tested it myself, so I shouldn't have mentioned it. My bad.
If you still want to upload a Safetensor version, I recommend using the colab notebook I linked instead: https://colab.research.google.com/github/DiffusionDalmation/pt_to_safetensors_converter_notebook/blob/main/pt_to_safetensors_converter.ipynb
I've tested it on several embeddings, including 2.1 embeddings, so I know it works.
@TheRoamingSwamp Ah, darn, I hadn't checked them. Thank you for the feedback; I will see what I need to do to fix it. Thank you for letting me know.
Is anyone else getting an error when loading 21charturnerv2.safetensors? I tried all of the stable diffusion 2 checkpoints.
Sorry about that! The .safetensor version is broken (the method I was using to change the .pt file to safetensor didn't work). I thought I had deleted all of my attempts; I missed this one. Sorry about that.
I will eventually get a working safetensor version of the embed up.
I have 2 questions. Can you make a model that shows gradual progress? Also, can you make this model/lora and the gradual-progress one use separate images for each angle of the view / step of the progress?
If you use control net to control the pose, you can get as many 'gradual turns' out of this as you want, and charTurner will make sure they're all in the same outfit/character. It won't work as separate images, for it to be all the same outfit, it needs to be all the same gen.
If you're talking about giving it the front and side and back and having it fill in the 'missing' poses, you can do that with this, plus control net, plus inpainting. There's an image in the gallery of how to do that.
With controlNet openPose now available, I see no reason for me to spend another 100-150 hours to create either of the requested models. :D
Hello, I am interested in car body design and I need to produce orthogonal views of a vehicle (front, side, rear and top). Do you know if there is any Stable Diffusion extension that allows me to generate these views/images based on a car render I already have? My idea is to use these four views as a blueprint to make the 3D CAD model in Alias/CATIA. Thank you!
Nope, none yet. I made the PlanIt embed (check my profile) trying to do exactly that. You need a large dataset that shows all the views you want (ie, a well-tagged orthographic library dataset). However, with PlanIt, you can get part of the way there. Everybody is chasing the AI-to-3D pipeline. XD
@mousewrites I will watch it soon and comment on how it goes. Thank you very much!
Hi, I am trying to do the same thing as well. What's PlanIt? Can you share a link? Thanks!
will this work with products, mech, or hard surface items?
Not very well. I have a second embed, called "PlanIt", in my profile; it does better with products, though it doesn't do as consistent a turnaround.
Sorry, where should I put it?
Add it to the embeddings folder. It's available under the same button where Loras are kept, in the "Textual Inversion" tab.
Thanks for your amazing work! I have a few questions about training:
1. How did you get the dataset you used?
2. How large is the dataset you used to train the model to achieve current performance?
I would be very grateful if you could answer my question! Thanks in advance!
1) The first tiny dataset was character turnarounds that I made over the last 20 years, plus a few I found. I made an early version with those, which I used to grow the dataset to about 35 images by the 25th-26th version of the trainer.
@mousewrites Thanks for your reply! But I'm still a little confused about what you mean by "make a dataset of 35". Does that mean you made a dataset of 35 different character turnarounds, or something else? Sorry for my poor comprehension ability!
@langhr241192 You're fine. Yes, the final dataset was 35 or so images. :D
@mousewrites OK I got it ! Thanks very much :D !
Thanks for this tool!! Do you have any advice on how to generate only the front view and back view?
I've tried enabling controlnet and drawing one pose looking forward and the second looking backwards, yet the generation brings only the front view.
Hm, front and back should be doable. I've noticed that the highres.fix pass sometimes seems to 'flip' somebody around, in some models. Try it without that, as a test? The other thing is you can try the char-turner Lora (check my profile), it might be better at 'forcing' it.
The other thing to try is take one that has the front and 'not back', mask the front, and ask it for the back view, but put "eyes, nose, mouth" in the negative.
Hello, I am trying to use your model, but cannot figure out how... E.g. I have an image of some character at 768x512; what are my steps? Should I widen the image to 1536x512, draw the silhouette, load it into inpaint, generate a ControlNet openpose with the position I want my character in, and enter the same prompt as for the original image plus "charturnerv2 a character turnaround of a" in front of it? For some reason it doesn't quite work for me... It generates only part of the face and then some noise in place of the body...
Hi there! This was created before we had ControlNet, so originally it was mostly used with txt2img to generate the whole turntable all at once. If you have one pose and want to get the rest, the easiest way is to find an image (or generate one) that has the rough character shapes you want (very rough), place your source image over one of the figures in something like Photoshop, and then in img2img, mask the source figure and tell it to inpaint not-masked. It can still be hit or miss, but I've had pretty good luck with that method. One of the images in the gallery is a little tutorial on that. :D
Hello, I wonder if your model could be used to create side or back images of the same character with the help of the front image. I'm currently trying to do so, but simply uploading the front image to img2img with prompts doesn't seem to work. Do you have any idea how this can be achieved?
Yes. One of the images in the gallery is a tutorial on how to do this. If you need more info, check the other comments; it's by far the most often asked question (and why I made a tutorial page :D).
I don't know. What model are you using? I didn't train it in NAI so i've heard it doesn't work as well in that. Are you making the image wider than it is tall?
Thanks for your reply. I use model AOM3A1B, 1024x1024 size. I guess the image is not wide enough.
@cipher_wh Orange Mix has some issues with it, for sure; AOM probably has the most problems with it of all. However! Try 512x1024; highres fix helps (I usually go 1.3x size with 'Latent (nearest-exact)').
Thanks for sharing, I would like to ask a few questions
1: Was your TEXTUAL INVERSION trained with the webui plug-in?
2: Did you deliberately adjust some parameters during training?
3: I tried training textual inversions for different orientations of top-view characters, and the results were terrible.
1) I did train it in the webui, with the included training tab.
2) Yes, it was very difficult to find settings that worked well with my dataset
3) I'm not surprised. This embed was VERY difficult to create. I trained more than 50 copies (went through the alphabet twice on testing versions). I'd highly suggest you use ControlNet OpenPose to help force the characters into the poses. ControlNet hadn't come out when I started down this path to make the embed. XD
I have made a 1024x1024 openpose map for ControlNet with 4 different views (front, back, right and left). Because the character I want to model has big wings, I kept free space on the right and left sides of these. This works fine, but there are some differences in the character; to avoid this I want to use CharTurner. The issue I encounter when using CharTurner is that it adds more poses/views. Is there a way to stop CharTurner from adding unwanted poses/views when using ControlNet?
Generally, using controlnet will override the poses in the embed, but you can try pushing up the weight to 1.2
@mousewrites Thank you for your quick reply! I have just tried that; it seems to help, but poses and headshots are still added. I also tried 1.5 and 2.0 control weight, which makes it less, but breaks the wings and tail.
@Rikuthin Right. I had 0 characters with wings or tails in the dataset; it was all humans. So the more you ask charturner to be 'strong', the more it's going to pull away from the wings and tail. You probably will have to do it as a 2-step process: charturner with ControlNet for the poses, then take them into inpaint for the wings and tail.
Great job! It's one of my favorite embeddings. Is there any chance that you are interested in the creation of age progression embedding? I saw some LoRAs up there, but it changes the picture drastically. Just want to have a tool, that doesn't affect the overall quality.
I hadn't thought to do one, specifically. I'll think about it, but mostly i've been using the other ones.
@mousewrites I was messing around with a lot of different approaches, but mostly the neural network "forgets" what it was doing midway or right away. Another problem is that loras trained on anime pictures handle age-related facial features (wrinkles, eye changes, facial contours, and so on) very badly. And mostly, as I said earlier, there is a problem with drastic changes when a lot of loras are in use (though I usually use 2 or 3, 4 max, with low weights). So I think an embedding is the best option here. Never trained one myself btw, only loras.
Hi. Are you planning to create embedding or LoRA for SDXL?
If not, could you provide the dataset to me for training for SDXL
Works great, thanks!
might be a stupid question but I'll ask anyway
If I create a picture of a character with this, can I train a lora with it, or will the AI be confused because the same character is there multiple times?
Are you trying to train a lora of the character? Or are you including it in a larger dataset but training a style or something?
@mousewrites I was asking if I could include the images I make from your embedding in a dataset to train a lora, since the images created have multiple poses/angles of the same character. Idk if the AI will get confused or not. Also, sorry if you get confused; I really don't know how to explain it better, since English is not my first language.
@lovelyzoey You can 100% include these images in a dataset to train a lora. Depending on WHAT you are trying to train your lora to do, it may or may not help. If you put several of the same person into the training set, the lora will learn that person, but just one image of a turnaround probably isn't enough to make that person show up over and over again, unless your dataset is very very small.
I want to help more, but without knowing WHAT you are trying to do I can't give more advice.
Hi!
Can you do a short youtube tutorial for it?
Looking forward!
- Nightknight
I don't make videos. If you search for charturner on youtube, somebody else made one in Spanish.
@mousewrites There's (at least) one in English that I'm just starting on. If the vid's any good, it might be worth posting on your description page, eh? Just a thought. Here's the link: https://youtu.be/-iwPVUzAWzk
Help! I can't get this embedding to trigger correctly!
I added the following prompt, but it doesn't produce the effect I want at all; the image just has two extra D.Vas instead of a three-view sheet. Please help!
a character turnaround of a (corneo_dva) waering ( blue bodysuit),charturnerv2, (multiple views of the same character in the same outfit: 1.2)
You can try moving the trigger to the start of the prompt, but the embed doesn't work perfectly with all base models. You can also find a turnaround you like and put it into ControlNet, and use it and CharTurner together. That way you should get the same outfit on all the turns. Also make sure you try making "wide" vs "square" images to increase the number of turns. Good luck!
Hey, does this exist for Mac M1 chips too?
This is an embedding; it will work on any system that can run a textual embedding. It is not a standalone bit of software; it has to run inside a stable diffusion front end. I have no idea if there are different types of embeddings for Mac. If you are using a front end like Auto1111, on Mac or PC, this embedding will work.
Does anyone recognise this art style?
https://www.deviantart.com/thaleroyjenkins/art/Adopt-Sure-Fire-Elf-OPEN-999260481
I browsed the whole civitai and couldn't find any LORAs looking like this.
Love this. Trying to get some prompts that will result in stages of undress. A strip-tease, if you will. Having fun experimenting.
A lora would be perfect for that; maybe it could pick apart 3D machine objects too, so we get the internals of a machine, say.
Hi, I have a question: when I use this embedding and run 10 pictures through SD, fewer than 2 of them show the effect. I tried increasing the embedding's weight but it still doesn't work. I'm sure I put the downloaded file in SD's embeddings folder and copied the trigger word into the positive prompt area. I don't know why; what should I do so SD shows the correct effect reliably? Forgive me, I'm from China and my English is really not good.
This is a great tool. Wanted to check if you are going to update it, or (as you imply in the text), you're going to release a Lora version. Thanks
for SDXL???
I never trained it, because with OpenPose it's so easy to get exactly the turnaround you want. If you would still like it, I will add it to the to-do list.
Does it work with Fooocus? If so, could you tell me which folder I need to add it to?
This is a textual embedding. I've never used Fooocus but this has instructions https://www.reddit.com/r/fooocus/comments/18hth1h/using_embeddings/
@mousewrites Thanks a lot
Hey brother! This sounds really useful, is it possible to make this for pony?:D
SDXL/Pony please
Which Pony version? SDXL Pony or 1.5 Pony? (I've only used Pony a little.)
@mousewrites SDXL
Does this work on SDXL and Pony, or must I use SD 1.5?
I never got around to making an SDXL version, because controlNet OpenPose became a thing, and most people swapped to using that with sdxl and Pony.
Also, the concept of 'character turnaround' is at least partially existent in the SDXL base model, so if you use openpose and good prompt you can probably do as good as the original dataset would get you if I trained it now. You could run the 1.5 version to get a set of poses you like, and use openPose with pony to guide your generation.
... and i just remembered that people can use the onsite generator. Hm. Okay, maybe I SHOULD train a pony/IL version. Nobody's got one going yet? I sorta stopped working on this because other people had it covered.
@mousewrites Please do if you have the time! I have troubles getting OpenPose to work well with SDXL.
@mousewrites Can you train a Pony version?
@nroonarij129 You have to have the SDXL OpenPose model; it's different from the 1.5 version. Let me know what kind of troubles you're having. :3
@jellycap I could, but if you use OpenPose you might be able to get the same effect without it. That doesn't help with the on-site generator, though, so I'll work on it.
@mousewrites Thanks! For anyone else looking, the OpenPose Xinsir Twin model is confirmed to work with SDXL. The others I tried prior to that did not work, or only sporadically. It appears some of them were trained with the red and blue channels flipped.
Hey there! This is incredible work.
You mention this works well with ControlNets. I clearly did something wrong, because my attempt produced utter garbage with one vs. without one. Do you load OpenPose skeletons for all the turns, or just one pose, and then this does the rotations?
Also, I can't make my character do the 360° turn. Not sure if that's because I'm using an IP-Adapter/FaceID model, which forces the character to always be more or less forward-facing.
Once again, thank you for your work!
Cheers
Hi there! Yeah, this was made long before IP-Adapter, and the way that ControlNet works messes with this for sure. I'd try doing it without the adapter, then inpainting all the faces with it as a second step.
The 360° turn is easier with a wide image vs. a square one, but yeah, the adapter is messing with that too.
Really, I meant that this works well with ControlNet if you need very specific poses but the same character and outfit.
Details
Files
21charturnerv2.pt
Mirrors
21charturnerv2.pt