💬 New Discord, come play with us: https://discord.gg/rHCnjX9cW9
Introducing Elixir 🧪: a LoRA that revitalizes antiquated models, infusing them with renewed vitality. For newer models, it offers a gentle yet discernible enhancement.
👋 Hey there! I'm Chillpixel, a developer just diving into the vibrant CivitAI community.
Elixir was created using NinjaFix, my super cool algorithm that extracts the absolute best neurons from a bunch of neural networks, then mixes them together to create a totally fresh model.
I use Elixir by default; it plays nicely with other LoRAs and resources.
I hope you enjoy my first LoRA release, and if you do, please give Elixir a 5-star review!
HOW TO USE ELIXIR:
Add <lora:Elixir:1> to your prompt, and turn the weight up or down depending on how old the model is.
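For example (hypothetical prompts and weights, just to show the idea; tune to taste):

```
a cozy cabin in the woods, golden hour <lora:Elixir:1>      (older SD 1.x-era model: full strength)
a cozy cabin in the woods, golden hour <lora:Elixir:0.4>    (newer model: a gentle touch)
```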
WHAT'S IN IT?
With Elixir V1, we start with a collection of exceptional LoRAs: stuff that looks like an obvious enhancement.
We then use NinjaFix to dismantle these resources, extract the most prized neurons, and merge them to form a super LoRA, which I now entrust to you.
WHAT'S NEXT?
Elixir V1's effect is subtle and only scratches the surface of what's possible. We have many more resources to explore and create on the path to V2.
I hope to keep Elixir up to date with all the latest advancements, encompassing noise, lighting, styles, textures, intricate details, and other refinements.
Let's Have Fun!
I'm kicking off daily art themes and challenges now.
COMMENTS
"super cool algorithm that extracts the absolute best neurons from a bunch of neural networks"
Is this merging the models and then extracting a LoRA?
Thank you for asking! NinjaFix looks at the neurons of the resources to be merged (like checkpoints and LoRAs) and compares them. It then decides which neurons are the best and puts them into a brand-new model. That new model is then normalized and reconditioned before the LoRA is extracted from it.
NinjaFix is a way of combining models unlike any other method I know of. It merges neural networks at a very detailed level, deeper than "layers" or "blocks": it goes all the way down to the individual parts called neurons. NinjaFix doesn't just mix them all together evenly, though. It picks them very carefully, one at a time. Sometimes, when it's not sure which neuron to choose, it puts them together in a special way I call fusion.
@chillpixel I see! I've been working with different ways to merge models for a long time (since October for Stable Diffusion, but since early 2019 for other CNNs), but I don't know of a method similar to the one you describe; it sounds very interesting. The past few months I've been evaluating regular averaging, add difference, cosine merge, weighted block merge, and a couple more with their variants, and have them more or less mapped and understood.
I'm preparing to do an experiment with ~600 models, and I'm still looking for an appropriate way to compare every part of the models, identify the similarities and differences, and transplant "vectors" of concepts from one model (or a series of models) to another, so I'm very interested in learning about any additional alternative methods. Would you have any references or implementations of this method that I could read, so I can try to add it to my tests? :D
@victorc25744 Wow, you know a lot more about this than me! I'd love to nerd out on it together, but I'm working right now, so I'll give you the main things I've figured out so far.
I tried some of the methods you mentioned, but I thought there might be other ways to do it too. One important thing is to average everything together so we don't lose any knowledge. But it's hard to do that with 600 models because then the knowledge gets too smoothed out and isn't as deep anymore. Weighted block merge is another way to deal with this, but it's kind of like guessing and it's tiring to do it with so many models. Plus, the results are often not very stable.
The "secret" of NinjaFix is what stops the knowledge from getting too smoothed out. The fastest way to do it is by comparing each neuron of every model with base SD v1.5. The neuron that's the most different from base SD v1.5 gets put into a new model. There are some things we have to be careful about to make sure the model doesn't get too heavy. But in the end, we want the model to be really heavy because it has the most important knowledge. Then we'll make a second model.
The second model should be the perfect average of every model. It keeps the knowledge from all of them, but it becomes a bit mushy. To get the most important knowledge back, we put the first model on top of the second one. I'm still playing around with how to do this. Maybe you can start with a simple averaging at 15% of the first model on top and see how it goes. You can go up or down depending on what happens.
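And the second step, under the same assumptions (the 15% is just the starting point I mentioned, nothing magic):

```python
def average_and_overlay(base_sd, model_sds, alpha=0.15):
    # Second model: plain average of everything (keeps all knowledge, but mushy)
    avg = {name: torch.stack([sd[name].float() for sd in model_sds]).mean(dim=0)
           for name in base_sd}
    # First model: the most-different-from-base selection from the sketch above
    picked = select_most_different(base_sd, model_sds)
    # Put the first model on top of the second at a low weight
    return {name: (1 - alpha) * avg[name] + alpha * picked[name] for name in base_sd}
```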
I wish I could explain more, but I'm still working on this method. I can't share the code right now, but I would if I could. Maybe we can work together in the future! You can join our Discord group and share your experiments with us. That way, we can keep up with each other and learn from one another.
Here's the link: https://discord.gg/89Pu5ehUvE
@chillpixel nice! That sounds pretty interesting! For my experiment with the models I also planned a few things. One of them: after I find the proper ways to cluster the models and get the average model of a cluster, compare this average against the original SD1.5; my idea is to use this to obtain a vector that represents the direction of the models, so it can be manipulated at will (add or remove this vector with a weight and evaluate the effects). With this I also want to see what part of the model changes the most with the vector, and whether some particular layers or components mean something relevant (one of the ideas for the "vectors of concepts" I want to extract).
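In code terms, the direction idea would be roughly this (a toy sketch; cluster_avg_sd and the other names are placeholders):

```python
def concept_vector(cluster_avg_sd, base_sd):
    # Direction from plain SD1.5 toward the cluster's average model
    return {name: cluster_avg_sd[name] - base_sd[name] for name in base_sd}

def apply_concept(model_sd, vector, weight=1.0):
    # Add (weight > 0) or remove (weight < 0) the direction, then evaluate
    return {name: model_sd[name] + weight * vector[name] for name in model_sd}
```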
With the 600 models it will indeed get too smooth and probably just regress to the mean (meaning, it probably will become very similar to plain SD1.5), so I want to test with the clusters first instead.
In general I think I understand your idea and what you mean by the normalization and the model getting "heavy"; I believe it's similar to adding multiple "add difference" extracted models to a single model, in which case the range of the model goes beyond the original range and it becomes unstable and starts breaking. That low-weight merging back into the stable model is a good way to keep it from breaking while still adding as much as possible.
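For reference, the "add difference" pattern I mean, as a toy sketch (each application pushes the target further outside its trained range, which is where the breaking comes from):

```python
def add_difference(target_sd, model_sd, base_sd, w=1.0):
    # Graft a model's deviation from base onto a target; stacking many of
    # these pushes weights outside their trained range and breaks the model
    return {name: target_sd[name] + w * (model_sd[name] - base_sd[name])
            for name in target_sd}
```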
I don't use Discord much recently, but I've joined the channel; I'll look around and ping you there! Cheers! :D
I thought Cinema4Dream was done until I gave it the Elixir
adds a zest to models for sure; hope to see one for sd3.5 and/or pony