My first embedding, and an attempt at recreating the NSFW artist "Calm"'s style. Trained on 40 images.
Using prompts like 1girl, 1boy, pointy ears, elf, colored skin usually gives the best results. For now it's more or less hit or miss; I intend to improve the embedding further. You will notice that you mostly get elf girls, since that is what most of the training pictures consisted of.
There really is no exact trigger word; just using the embedding works, for me at least. I have tested the embedding on anyhentai, MeinaHentai, revAnimated, Cetusmix, and AOM3, and got some pretty good results with every one of them, so I can safely say that it works on most models.
A lot of the images I used for training contained watermarks, so you should add the following to the negative prompt: watermark, signature, patreon username, patreon logo
The example images were created with very simple prompts. On some of them I used inpainting to remove logos and watermarks, and on some I fixed the faces, but overall I did very little inpainting.
Settings that worked best for me:
Sampling Method: Euler A
Sampling steps: 20-25
CFG Scale: 6-8
Hires.fix: upscale by: 1.5x -- Hires steps: 10 -- Denoising strength: 0.3
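The settings above translate directly into any Stable Diffusion UI or script. As a minimal sketch (the dictionary layout and the `hires_fix_size` helper are my own illustrations, not part of any specific tool), here is how the recommended negative prompt and the hires.fix upscale math might look in Python:

```python
# Illustrative sketch of the recommended settings; names are not from any real API.

NEGATIVE_PROMPT = "watermark, signature, patreon username, patreon logo"

SETTINGS = {
    "sampler": "Euler a",
    "steps": 22,               # anywhere in the 20-25 range
    "cfg_scale": 7,            # anywhere in the 6-8 range
    "hires_steps": 10,
    "denoising_strength": 0.3,
}

def hires_fix_size(width: int, height: int, scale: float = 1.5) -> tuple[int, int]:
    """Compute the hires.fix target resolution, snapped to multiples of 8,
    since Stable Diffusion latents work in 8-pixel blocks."""
    snap = lambda v: int(round(v * scale / 8)) * 8
    return snap(width), snap(height)

print(hires_fix_size(512, 768))  # 512x768 base upscaled by 1.5x -> (768, 1152)
```

The 0.3 denoising strength keeps the upscaled image close to the original composition while still sharpening detail.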
Would love to get some feedback and see your creations :D
Description
This is almost a complete change from the first version. I re-did the whole training, and version 2 gives much crisper images.
The main difference between the two versions is that v1 gives much more consistent images of elves, but the quality is usually very blurry. V2, on the other hand, gives far fewer images of elves, but the quality is much better. In other words, you have to use tags like elf, elf girl, pointy ears, etc. to get the same "results" as v1.
Another thing is that v2 gives a lot of results with nature/waterfalls/monsters in the background, because the training images had these. You need to write your prompts with this in mind.
Once again, there are no trigger words; just using the embedding will give results.
Comments (11)
I'll check this out and post back my findings. The images/style look amazing...
But...
It's hard to gauge the impact this embedding has when you are leveraging an unreleased model.
"DarkRevPikas" doesn't seem to be on civitai, and I don't know whether the visual effects came from the TI or the model, or both.
I can confirm the colors and effects are from the embedding :) XD XD. I used Yuzu and it gave good results.
You are right regarding the model. It is a merge I made recently of dark sushi mix + rev animated v12 + pikas animated mix. I am getting some really good results with this custom model, and I can probably upload it as well if others are interested.
Just added some pictures using the anyhentai and cetusmix models
Uploaded the DarkRevPikas model!
@BixBit11 Amazing thank you!
@DreamExplorer My pleasure :)
I wish there were a background-only version.
!!Please read this regarding the Embedding!!
Hello everyone. I've noticed more and more people are using this embedding, and quite frankly, it does not do what I originally created it for.
I made this embedding with the goal of re-creating an artist's style. This was one of my earliest models, and I was pretty inexperienced with creating models.
As I said, it was meant to re-create a style; instead, it turned out to be more of a "fantasy"-theme embedding. You can pretty much tell from the example pictures how the colors and settings are usually similar.
I've gotten comments asking about what model I used, and why they can't re-create the same images. I'll quote one of the answers I gave a user:
Sorry, this was a custom checkpoint mix I made a long time ago. I can't really remember, but I believe I used version 2 of my DarkRevPikas checkpoint. https://civitai.com/models/53215?modelVersionId=66374
Also, I did a lot of img2img and inpainting to get better quality, so you will most likely not get the same image when re-using the generation data. But you should get a similar theme, colors, etc., if you use the same checkpoint and prompts.
Double-check that the embedding name is the same. When I trained it, it was "ArtCalmConcept-2-400", but I think I renamed it after uploading the model.
I always did txt2img -> img2img (at 1.5x-2x resolution) -> inpainting the face and any deformities. This was my workflow on almost all of my images, and it is the reason the quality might be "good".
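The txt2img -> img2img -> inpaint chain above can be sketched as a simple orchestration function. Note that the stage callables here are placeholders for whatever backend or UI you actually use, not a real API:

```python
# Sketch of the three-stage workflow; the stage functions are placeholders
# you would wire up to your own txt2img/img2img/inpaint backend.

def run_workflow(txt2img, img2img, inpaint, prompt, upscale=1.5):
    """txt2img, then img2img at 1.5x-2x resolution, then inpaint the
    face and any deformities (mask drawn manually in your UI)."""
    image = txt2img(prompt)
    image = img2img(image, prompt, scale=upscale, denoising_strength=0.3)
    image = inpaint(image, prompt)
    return image
```

Because each stage adds manual choices (upscale factor, inpaint masks), re-running the same prompt and seed through only txt2img will not reproduce the final example images, which is exactly the point being made above.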
Hope this helps.