Simple Merge of Diffusion Models in Image Generation AI

Yuichiro Minato

2024/03/28 03:32

In diffusion models, the model is represented by the weights of a neural network. Merging is as simple as taking a weighted sum: apply a coefficient to each model's weights and add them together, and you can fuse two or more models into one.

This time, we'll attempt a simple merge of two different models, using the same random seed to generate portrait photographs. Hierarchical (block) merging varies the ratio for each layer of the neural network, which is an endless tuning task, so for simplicity we'll keep the merge ratio constant across all layers and perform a simple merge.
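The simple merge described above can be sketched in a few lines. This is a minimal illustration using plain dicts of floats in place of real tensors; actual checkpoints hold multi-dimensional tensors, but the arithmetic is identical, and the key names here are hypothetical:

```python
# Minimal sketch of a "simple merge": one constant ratio for every layer,
# in contrast to hierarchical merging, which tunes a ratio per layer.
# Weights are plain floats here for illustration; real state dicts hold tensors.

def simple_merge(weights1, weights2, ratio=0.5):
    """Merge two state dicts key by key with a single global ratio."""
    assert weights1.keys() == weights2.keys(), "models must share the same architecture"
    return {k: ratio * weights1[k] + (1.0 - ratio) * weights2[k] for k in weights1}

# Hypothetical two-layer "models"
model1 = {"layer1.weight": 1.0, "layer2.weight": 3.0}
model2 = {"layer1.weight": 2.0, "layer2.weight": 5.0}

merged = simple_merge(model1, model2, ratio=0.5)
# → {"layer1.weight": 1.5, "layer2.weight": 4.0}
```

Changing `ratio` to 0.75 or 0.25 gives the unbalanced blends tried later in this post.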

Model 1

Model 2

With the same portrait prompt and the same random seed, each model produced a different woman.

Both models are stored in the same safetensors format, so their tensors can be added together directly.

Setting both coefficients of the addition to 1/2 and generating an image:

0.5x Model 1 + 0.5x Model 2
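Since the checkpoints share the safetensors format, the file-level merge can be sketched like this. This is a minimal sketch assuming the `safetensors` library with PyTorch tensors; the file paths are hypothetical:

```python
def merge_state_dicts(sd1, sd2, ratio):
    """Weighted sum of two state dicts that share the same keys."""
    return {k: ratio * sd1[k] + (1.0 - ratio) * sd2[k] for k in sd1}

def merge_safetensors(path1, path2, out_path, ratio=0.5):
    """Load two safetensors checkpoints, merge them, and save the result."""
    # Imported lazily so the pure merge arithmetic above also works
    # without safetensors/PyTorch installed.
    from safetensors.torch import load_file, save_file
    merged = merge_state_dicts(load_file(path1), load_file(path2), ratio)
    save_file(merged, out_path)

# Hypothetical usage:
# merge_safetensors("model1.safetensors", "model2.safetensors",
#                   "merged.safetensors", ratio=0.5)
```

The resulting `merged.safetensors` file can then be loaded like any other checkpoint for image generation.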

Despite using the exact same seed, the atmosphere feels like it sits exactly halfway between the two models.

Next, let's merge with Model 1 at a 75% weight and Model 2 at 25%.

The character of Model 1 comes through strongly. Now let's reverse the ratio and set Model 2 to 75%.

As expected, the characteristics of Model 2 are now more pronounced. Interestingly, the hairstyle still leans a bit toward that of Model 1.

This time, we explored a method called simple merging, which amounts to nothing more than adding tensors together.

Since you can merge models of different styles as well, you can create as many new models as you like!

© 2025, blueqat Inc. All rights reserved