
The Ultimate Stable Diffusion Inpainting Guide for 2024

Inpainting is one of the most powerful features of Stable Diffusion and can help you fix images or change them completely. 

If you’re a Stable Diffusion user, you must know the feeling of crafting a perfect prompt and still not getting the image you want. 

Instead of regenerating the same prompt over and over again, it’s a better idea to turn to inpainting and fix the defects in the generated image. 

In this Stable Diffusion inpainting guide, I’ll walk you through step-by-step what inpainting is, how it works, and some of its common uses and applications. 

That being said, let’s get started. 

What Is Inpainting & How Does it Work? 

Inpainting is a method of painting over an image and regenerating the painted or masked area guided by the prompts you enter. 

It is very similar to the img2img (image-to-image) technique, where random noise is added to the entire image and the result is regenerated from your prompts. The only difference with inpainting is that the noise is added only to the masked area, i.e. the part you painted over. 

This means that you can manipulate a specific area of an image by creating a mask and changing it with prompts. 

What this implies is that you can change anything in an image by writing some prompts. You can fix defects, remove objects, add objects, and do a lot more which we’ll explore later in this guide. 
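The masked-noise idea can be sketched in a few lines of Python. This is a simplified illustration, not Automatic1111's actual code: a real pipeline noises the latent representation of the image, not raw pixels, but the mask plays the same gating role.

```python
import random

def add_masked_noise(image, mask, strength=0.6, seed=42):
    """Add noise only where the mask is set (1 = inpaint, 0 = keep).

    `image` is a flat list of pixel values in [0, 255]; a real pipeline
    would instead noise the latent representation of the masked region.
    """
    rng = random.Random(seed)
    noised = []
    for pixel, m in zip(image, mask):
        if m:  # masked pixel: blend toward random noise
            noise = rng.uniform(0, 255)
            noised.append((1 - strength) * pixel + strength * noise)
        else:  # unmasked pixel: left untouched
            noised.append(pixel)
    return noised

image = [100, 150, 200, 250]
mask = [0, 1, 1, 0]
result = add_masked_noise(image, mask)
# Unmasked pixels come back unchanged; masked ones have moved toward noise,
# which is what lets the model regenerate only that region.
```

The denoising step then reconstructs the masked region guided by your prompt, while the untouched pixels anchor the result to the original image.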

Why use Inpainting with Stable Diffusion?

Once you’ve generated an image using Stable Diffusion or any other AI tool, why should you use inpainting in Stable Diffusion to fix or change it? Couldn’t you do it in any other tool or just use Photoshop? 

Well, you can use other tools to fix or manipulate an image but there are two problems with that approach: 

  • It takes time to manually fix an AI-generated image using Photoshop or other tools. 
  • Inpainting with Stable Diffusion has the best results compared to any other alternative out there. 

To add to this, if you’ve generated an image using Stable Diffusion, using inpainting to fix it will be much better as it’ll result in more realistic and accurate outputs. 

And since inpainting is guided by prompts, you can explore different options instantly by just modifying your prompts and getting a new result. 

Overall, inpainting with Stable Diffusion is fast, powerful, and versatile allowing you to manipulate an image quickly with perfect accuracy. 

How To Do Inpainting In Stable Diffusion 

Now that you know what inpainting is and how it works, let’s inpaint an image using Stable Diffusion. 

I’ll walk you step-by-step through the process and you can follow along. 

Install Automatic1111

We’ll be using the Automatic1111 WebUI for Stable Diffusion for this inpainting guide. I’m assuming you already have it installed and know how to use it. 

If you don’t have it installed, you can download Automatic1111 from here or just follow our guide to install it quickly. 

Automatic1111 WebUI

However, you can also do inpainting in other WebUIs like ComfyUI or InvokeAI, but I won’t be covering them here. 

Download Inpainting Models 

There are special models made just for inpainting purposes and I’d recommend you use those models rather than a normal model. 

Here are some good inpainting checkpoint models you can try: 

Download any of the above models and place it in the folder where you store your checkpoint models:

stable-diffusion-webui/models/Stable-diffusion

I’ll be using the Clarity Inpainting checkpoint model for this guide. 

Once you’ve downloaded and placed the model in the directory, load it in your Automatic1111 from the checkpoint dropdown at the top. 

Automatic1111 - Select Checkpoint Model

Draw Your Mask 

As you already know by now, we draw a mask over the image for inpainting. Here’s the image I’ll be using for this guide. 

Stable Diffusion Inpainting
Original

You can download it if you want to follow along with the guide. 

In Automatic1111, go to the img2img tab and select the inpaint tab.

Automatic1111 - Inpainting Tab

Now, upload the image to the canvas.

Automatic1111 - Inpainting Upload Image

For this image, I want to make the person wear sunglasses. 

In the canvas, you’ll notice a paintbrush when you hover over the image. So, draw a mask over the eyes and cover them completely.

Automatic1111 - Inpainting Draw A Mask

You can also change the brush size and reset the canvas which clears the mask you’ve created. 

So, our mask is ready and all we have to do now is to write a prompt to add sunglasses to this image. 
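Under the hood, the mask you paint is just a black-and-white image: white pixels get regenerated, black pixels are kept. A minimal sketch of building such a mask programmatically (the rectangle coordinates for the eye region are hypothetical, purely for illustration):

```python
def make_rect_mask(width, height, x0, y0, x1, y1):
    """Build a binary mask as a 2D grid: 255 (white) inside the rectangle
    means 'inpaint this', 0 (black) elsewhere means 'keep unchanged'."""
    return [
        [255 if x0 <= x < x1 and y0 <= y < y1 else 0 for x in range(width)]
        for y in range(height)
    ]

# A 512x768 mask covering a hypothetical eye region of the portrait.
mask = make_rect_mask(512, 768, 180, 200, 340, 260)
masked_pixels = sum(row.count(255) for row in mask)
# (340 - 180) * (260 - 200) = 9600 pixels marked for inpainting
```

Drawing the mask with the brush in the canvas produces the same kind of image, just with freehand shapes instead of a rectangle.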

Prompt for Inpainting

In this example, we’re adding something new to the image. So, we’ll be writing a short prompt describing the object we’re adding. 

But if you’re using inpainting to fix smaller defects, you can just use the prompt you used to generate the original image. 

Here are the prompts I’m using for inpainting: 

Positive 

Black sunglasses, stylish

Negative

Ugly, deformed

As you can see, the prompts are very short and simple. You don’t have to use complicated or long prompts for inpainting as it’ll most likely mess up the result. 

Related: Best Stable Diffusion Negative Prompts

Inpainting Settings

When you first opened the inpaint tab in Automatic1111, you must have seen a bunch of new options that look overwhelming. 

Automatic1111 - Inpainting Settings

Let me walk you through each of these options one by one. 

Mask Blur: How much the edges of the masked area are blurred before inpainting. You should keep this at 4 in most cases and don’t need to change it. 

Mask Mode: This option defines what part of the image will be modified. When ‘Inpaint Masked’ is selected, the area that’s covered by the mask will be modified whereas ‘Inpaint Not Masked’ changes the area that’s not masked. 

We’ll be selecting the ‘Inpaint Masked’ option as we want to change the masked area. 

Masked Content: Masked Content specifies what the masked area is filled with before inpainting begins. Usually, this should be set to either ‘Original’ or ‘Fill’. 

Since we’re adding something new to our image in the masked area, we’ll set it to ‘Fill’. 

Inpaint Area: This lets you decide whether you want the inpainting to use the entire image as a reference or just the masked area. 

You should set it to ‘Whole Picture’, as the inpainted result blends better with the overall image. 

Only Masked Padding: The padding (in pixels) around the mask that’s used when the Inpaint Area is set to ‘Only Masked’. By default, it’s set to 32 pixels. If you increase this value too much, the output quality decreases, so I recommend leaving it at the default. 

Besides this, the other generation settings are the same as txt2img in Automatic1111. 

For this image, here are the image generation settings I’m using: 

  • Resize Mode: Crop & Resize
  • Sampling Method: Euler A
  • Sampling Steps: 25
  • Image Size: 512x768px
  • CFG Scale: 7
  • Denoising Strength: 0.6
  • Batch Size: 2
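If you prefer scripting over clicking through the UI, the same settings map onto Automatic1111’s `/sdapi/v1/img2img` endpoint (available when the WebUI is launched with the `--api` flag). Here’s a sketch of the request payload; the field names follow the WebUI API as I understand it, and the values mirror the settings above:

```python
def build_inpaint_payload(image_b64, mask_b64, prompt, negative_prompt=""):
    """Assemble an img2img inpainting request for Automatic1111's API.

    `image_b64` and `mask_b64` are base64-encoded PNGs of the source
    image and the painted mask.
    """
    return {
        "init_images": [image_b64],
        "mask": mask_b64,
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "mask_blur": 4,
        "inpainting_fill": 0,         # 0 = fill, 1 = original, 2 = latent noise, 3 = latent nothing
        "inpainting_mask_invert": 0,  # 0 = 'Inpaint Masked', 1 = 'Inpaint Not Masked'
        "inpaint_full_res": False,    # False = whole picture, True = only masked
        "inpaint_full_res_padding": 32,
        "resize_mode": 1,             # crop and resize
        "sampler_name": "Euler a",
        "steps": 25,
        "width": 512,
        "height": 768,
        "cfg_scale": 7,
        "denoising_strength": 0.6,
        "batch_size": 2,
    }

payload = build_inpaint_payload(
    "<image-base64>", "<mask-base64>",
    "Black sunglasses, stylish", "Ugly, deformed",
)
```

You’d then POST this as JSON to `http://127.0.0.1:7860/sdapi/v1/img2img` and decode the base64 images in the response.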

Generate Your Image

Now that we have everything set up, let’s generate our image. 

Click on the ‘Generate’ button to begin the inpainting process. Here are the inpainting results:

Automatic1111 - Inpainting Generate

As you can see, we successfully added sunglasses to our original image and it looks very realistic. 

That’s how inpainting in Stable Diffusion works. It’s simpler than it looks. 

Inpainting Uses & Applications

So, you now know how to inpaint in Stable Diffusion. 

But what can you do with it? 

I already shared a few use cases at the beginning of our Stable Diffusion inpainting guide. But let me show you some examples. 

This will help you understand the true extent of how powerful inpainting really is. 

Object Removal, Replacement & Addition

Removing or replacing objects from an image is a very common use case of inpainting. The process is very simple: you draw a mask over the object you want to remove/replace and use prompts to change the image.

Here’s an example image where I replaced the sword with a gun. 

Prompt Used: 

holding a pistol

Here’s another example of removing an object from an image (yes, you can also easily remove your ex-wife from your wedding photos or replace her with a celebrity using inpainting): 

Prompt Used (Negative): 

wearing a chain, gold chain

You can even turn your boring images into something fun by replacing your clothes using inpainting: 

Prompt Used: 

(1girl), wearing a dress made of red flowers, intricate details, highly detailed, rose petals, beautiful, elegant 

Lastly, here’s another example where I added birds flying in this image: 

Prompt Used: 

birds flying

Change Backgrounds

There are many tools on the Internet that let you change the backgrounds in your images. But I never feel comfortable uploading my images on these new and relatively unknown websites. 

And I’d rather use inpainting in Stable Diffusion to change backgrounds in images. Here’s an example: 

Prompt Used: 

a field of rose flowers, CGI, colorful, nature, sunlight, day, masterpiece

The process for this is the same as shown above but instead of masking over the entire background, you simply mask over your whole body and then select the Mask Mode to ‘Inpaint Not Masked’. 

This will change the area that’s not masked allowing you to easily change backgrounds in Stable Diffusion. 
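In API terms, switching to background replacement is a one-field change: setting `inpainting_mask_invert` to 1 corresponds to ‘Inpaint Not Masked’ in the UI (the field name follows Automatic1111’s img2img API; the other values here are illustrative choices, not fixed requirements):

```python
# Settings that differ from a normal inpaint request when replacing a
# background: the mask covering the subject is inverted, so the subject
# is kept and everything around it is regenerated.
background_settings = {
    "inpainting_mask_invert": 1,  # 1 = 'Inpaint Not Masked' (0 = 'Inpaint Masked')
    "inpainting_fill": 2,         # latent noise; an assumption that works well for brand-new backgrounds
    "denoising_strength": 0.75,   # higher strength since the old background should be discarded
    "prompt": "a field of rose flowers, CGI, colorful, nature, sunlight, day, masterpiece",
}
```

Merged into a normal img2img payload, this regenerates everything except the masked subject.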

Fixing & Refining Bad Images

Not all images generated in Stable Diffusion are perfect; some have defects or errors that ruin an otherwise great image. 

You can fix and refine such images using inpainting with simple prompts. 

Here’s an example of fixing bad eyes and making them look better using inpainting: 

Prompt Used: 

beautiful hazel eyes, highly detailed eyes, realistic eyes, thin eyebrows

You can follow this guide on fixing eyes in Stable Diffusion, which goes into detail about this process. 

Similarly, bad hands are a universal issue for every Stable Diffusion user. You can fix such bad hands using inpainting as well: 

Prompt Used: 

highly detailed hands, holding cup of coffee 

Again, I’ve written a dedicated guide to fixing hands in Stable Diffusion which you can check out. 

These are just two examples of how you can fix certain aspects of your image using inpainting. 

Stable Diffusion Inpainting Tips

Inpainting in Stable Diffusion is simple and straightforward, but if you want the best possible results, a few additional tips will help. 

Here are some inpainting tips that will help you get better results: 

Use A Small Mask Area

It’s always a best practice to keep the masked area contained to the part of the image you want to change. 

This ensures that other aspects that you don’t want changed aren’t affected in the resulting image. 

You don’t have to draw a very accurate and clean mask but you need to be mindful about not masking the content you don’t want to be changed. 

Denoising Strength

Denoising strength determines how much the resulting image differs from the original. When the denoising strength is 0, nothing changes in the image. 

The higher the denoising strength, the more the output image changes; if it’s too low, there will be negligible change in the output. 

For inpainting, a denoising strength between 0.5 and 0.75 works well, as it doesn’t drastically change the image and still produces a good output. 

Here’s an image depicting the change in output image during inpainting at different denoising strengths:

Inpainting Results At Different Denoising Strengths

You can experiment with this value but anything between 0.5 to 0.75 is good to go. 
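An easy way to find the right value is to run the same inpaint at several strengths and compare the results side by side. A quick sketch of building that sweep as a list of settings dicts (the `run_inpaint` call that would send each one to the WebUI is left out as a stub):

```python
def sweep_denoising(base_settings, strengths=(0.5, 0.6, 0.65, 0.7, 0.75)):
    """Produce one settings dict per denoising strength so the same
    prompt and mask can be compared across strengths."""
    return [{**base_settings, "denoising_strength": s} for s in strengths]

variants = sweep_denoising({"prompt": "beautiful hazel eyes", "steps": 25})
# Five requests, identical except for denoising_strength;
# send each with your run_inpaint() of choice and compare the outputs.
```

Fix the seed across the sweep if you want the only variable to be the denoising strength.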

Samplers

The Sampler you choose for inpainting greatly affects the output image. Since there are so many sampling methods, it’s difficult to choose the right one. 

In my experience, Ancestral samplers like Euler A work really well with inpainting. That’s because these samplers add fresh noise at each step, which tends to give good inpainting results. 

You can also use the DPM++ 2M SDE Karras sampler which is pretty good for inpainting. If you’re inpainting photorealistic images, then Euler A will work well at getting realistic colors.

Sampling Steps

Sampling steps don’t influence the result much during inpainting, so a value between 15 and 35 will give you good results. 

However, going below 15 can produce a bad output, and going above 35 gives diminishing returns. 

Use Inpainting Models

Using inpainting models is always a good idea when you’re doing inpainting in Stable Diffusion. That’s because these models are specialized for inpainting. 

Personally, I have tried both inpainting and normal models for inpainting and the results are a bit mixed. With normal checkpoint models, I get good results but the color accuracy is not always the best. 

If you do want to use a normal checkpoint model for inpainting, you should pick a well-trained model like Realistic Vision, epicphotogasm, or DreamShaper. 

However, I’d still recommend downloading at least one inpainting model that can be your go-to whenever you want to inpaint anything in Stable Diffusion. 

ControlNet Inpainting

Using ControlNet during inpainting can help you a lot in getting better outputs. Since this is a big topic, it needs to be covered separately. 

But to give you the gist, there are several ControlNet models for depth, pose, etc., which can help you get more detailed and accurate outputs. 

For example, in my guide on how to change clothes in Stable Diffusion, I explained how using ControlNet’s OpenPose model helps you preserve the pose of the human subject. 

If you look at the images above, you can see that inpainting without ControlNet messes up the pose in the output. However, with ControlNet’s OpenPose model, the pose is preserved thereby giving more accurate outputs. 

Ethical Considerations of Inpainting

Now that you’ve learned how to do inpainting in Stable Diffusion, I’d like to address something very important before concluding this guide. 

Inpainting is a very popular feature in Stable Diffusion and as evidenced by this guide, you can manipulate any image in a matter of clicks using it. 

This also means that inpainting can be used for harmful purposes. Since the advent of generative AI, some people have used these tools to harm others by creating inappropriate artwork from people’s images. 

While using inpainting to change your clothes or turn boring images into something creative is fun, it’s our duty to ensure we don’t use this technology to harm any individual. 

That’s why, it’s my earnest request that you make good use of inpainting and Stable Diffusion in general and avoid falling into the mindset of harming others using this technology. 

FAQs

Here are some frequently asked questions about inpainting in Stable Diffusion: 

Do I need a powerful computer to use inpainting with Stable Diffusion?

Inpainting has the same requirements as running Stable Diffusion itself. In short, you’ll need a GPU with at least 4GB of VRAM and at least 8GB of RAM on your device. 

How can I fix common issues like artifacts or blurs in inpainting?

To avoid blurring and artifacts in your images, use a very low denoising strength of 0.2 or 0.3. If that doesn’t work, consider changing the sampling method to get a better output. 

What’s the best software for Stable Diffusion inpainting? 

Automatic1111 is the easiest Stable Diffusion WebUI for inpainting as it’s simple and fast. However, you can also try ComfyUI, which is more performance-optimized and works on lower-end devices as well. 

Conclusion

So, that concludes our Stable Diffusion inpainting guide and hopefully, you will now be able to inpaint images easily. 

We’ve covered various uses and applications for inpainting so that you can make use of this feature to the full extent. 

If you have any questions about inpainting, feel free to drop your questions in the comments below.

