How To Use SDXL In ComfyUI [Complete Guide]

Using Stable Diffusion in ComfyUI is very powerful as its node-based interface gives you a lot of freedom over how you generate an image. 

But what if you want to use SDXL models in ComfyUI? 

In this ComfyUI SDXL guide, you’ll learn how to set up SDXL models in the ComfyUI interface to generate images. 

If you’ve not used ComfyUI before, make sure to check out my beginner’s guide to ComfyUI first to learn how it works. 

Setting up SDXL in ComfyUI is very simple and doesn’t take up a lot of time. We’ll be using some workflows for using SDXL in ComfyUI so that you don’t have to build a workflow from scratch. 

That being said, let’s get started. 

Why Use ComfyUI for SDXL

SDXL works with other Stable Diffusion interfaces such as Automatic1111 but the workflow for it isn’t as straightforward. 

Since SDXL requires you to use both a base and a refiner model, you’ll have to switch models during the image generation process. 

To do this in Automatic1111, you’ll first have to generate your image with the base model, then switch to the refiner model and run the generation again. This is cumbersome and requires manual work. 

Fortunately, that’s not the case with ComfyUI as you can build workflows where you can run the generation through both base and refiner models in one go. 

This saves you time from manually switching models and isn’t as confusing as Automatic1111. 
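To make the handoff concrete, here’s a small Python sketch of how a two-pass setup typically splits the sampling steps between base and refiner. The function and the 0.8 handoff fraction are illustrative assumptions, not part of any ComfyUI API:

```python
# Hypothetical illustration of the base/refiner handoff: the base model
# denoises the first fraction of the schedule, and the refiner finishes
# the remaining steps on the same latent.
def split_steps(total_steps, handoff=0.8):
    """Return (base_steps, refiner_steps) for a given handoff fraction."""
    base_steps = round(total_steps * handoff)
    return base_steps, total_steps - base_steps

print(split_steps(40))  # -> (32, 8): base runs 32 steps, refiner the last 8
```

In ComfyUI, this split is just two sampler nodes wired in sequence, so the whole thing runs in one go.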

Moreover, ComfyUI is more performance-optimized making it a better choice for running SDXL. I’ve also had memory errors when trying to use SDXL in Automatic1111. 

However, that’s not the case in ComfyUI, and my RTX 3050 card with 4GB of vRAM can generate 4K images using SDXL models. 

So, my recommendation is to always use ComfyUI when running SDXL models as it’s simple and fast. 

How To Use SDXL In ComfyUI

Using SDXL in ComfyUI isn’t all that complicated. In fact, it’s the same as using any other SD 1.5 model, except that your image goes through a second sampler pass with the refiner model. 

Here are the step-by-step instructions on how to use SDXL in ComfyUI. 

Install ComfyUI

First things first, you need to download and install ComfyUI on your device. I’ll assume you already have ComfyUI installed but if you don’t, click on the button below to download ComfyUI. 

ComfyUI - Default Workflow

I also recommend checking out my ComfyUI guide to get started and understand how it works. 

Download SDXL Models

To use SDXL, you’ll need to download the two SDXL models and place them in your ComfyUI models folder. 

Download the SDXL base and refiner models from the links given below: 

Once you’ve downloaded these models, place them in the following directory: 


Next, we’ll download the SDXL VAE, which is responsible for converting the image between latent and pixel space. 

When using SDXL models, you have to use the SDXL VAE; an SD 1.5 VAE is incompatible and will produce distorted output. 
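As a rough sketch of what “latent space” means here: SD-family VAEs compress each spatial dimension of the image by a factor of 8 into a 4-channel latent, and the sampler works entirely on that smaller tensor. The helper function below is just for illustration:

```python
def latent_shape(width, height, channels=4, factor=8):
    """Shape of the latent a SD-family VAE produces for an image:
    each spatial dimension is downscaled 8x into 4 latent channels."""
    return (channels, height // factor, width // factor)

# A 1024x1024 SDXL image is denoised as a much smaller 4x128x128 latent,
# and the VAE decodes it back to pixels at the end of the workflow.
print(latent_shape(1024, 1024))  # -> (4, 128, 128)
```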

Click on the link below to download the SDXL VAE model: 

Once downloaded, place the VAE model in the following directory: 


Besides this, you’ll also need to download an upscale model as we’ll be upscaling our image in ComfyUI. 

If you don’t have any upscale model in ComfyUI, download the 4x NMKD Superscale model from the link below: 

After downloading this model, place it in the following directory: 


Download ComfyUI SDXL Workflow

Instead of building a workflow from scratch, we’ll be using a pre-built workflow designed for running SDXL in ComfyUI. 

You can create your own workflows but it’s not necessary since there are already so many good ComfyUI workflows out there. 

We’ll be using the SDXL Config ComfyUI Fast Generation workflow which is often my go-to workflow for running SDXL in ComfyUI. 

Download this workflow and extract the .zip file. You’ll find a .json file which is the ComfyUI workflow file. 

Now, start ComfyUI by clicking on the run_nvidia_gpu.bat file which will open up ComfyUI in your browser. 

Load SDXL Workflow In ComfyUI

In ComfyUI, click on the Load button from the sidebar and select the .json workflow we just downloaded.

SDXL in ComfyUI - Load Workflow

The workflow will load in ComfyUI successfully. 

SDXL in ComfyUI - Workflow Loaded

As you can see, this ComfyUI SDXL workflow is very simple and doesn’t have the overwhelming number of nodes that some workflows do. 

We’ll be using this workflow to generate images using SDXL. 
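As a side note, you don’t have to drive workflows through the browser: ComfyUI also exposes a small HTTP API for queuing prompts. The sketch below assumes the default server address and a workflow exported via “Save (API Format)” (the regular UI .json won’t queue as-is):

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default ComfyUI address

def build_payload(workflow, client_id="sdxl-guide"):
    """Wrap an API-format workflow dict in the body /prompt expects."""
    return {"prompt": workflow, "client_id": client_id}

def queue_workflow(workflow):
    """Queue a workflow on a running ComfyUI server."""
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(f"{SERVER}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt_id
```

For this guide, though, we’ll stick with the browser interface.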

Generate Your Image

So, we’ve loaded our ComfyUI SDXL workflow successfully and now it’s time to generate an image. 

Before we do that, let me go through some of the important nodes in this workflow so you can understand how it works. 

I’ll go through each step and you can follow along and make any changes mentioned. 

Load VAE

SDXL in ComfyUI - Load VAE Node

This is the VAE loader where we load the SDXL VAE model we just downloaded in the first step. Click on the dropdown and select the sdxl_vae.safetensors file.

Load Checkpoint (Base Here) 

SDXL in ComfyUI - Load Base Checkpoint Node

This is the regular checkpoint loader node for ComfyUI but in this workflow, it’s used for loading the base SDXL checkpoint model. 

Click on the dropdown and select the sd_xl_base_1.0.safetensors model. 

Load Checkpoint (Refiner Here) 

SDXL in ComfyUI - Load Refiner Checkpoint Node

This node loads the refiner SDXL checkpoint model. Select the sd_xl_refiner_1.0.safetensors model from the dropdown. 

Positive & Negative Prompt

SDXL in ComfyUI - Positive & Negative Prompt Nodes

These two nodes are self-explanatory: this is where you enter the positive and negative prompts for the image you want to generate. 

I’ll be using the following positive and negative prompts for this tutorial. 

Positive Prompt: 

photo of a 1woman, cyberpunk, sci-fi, closeup, portrait, dystopian background

Negative Prompt: 

blurry, logo, watermark, signature, cropped, out of frame, worst quality, low quality, jpeg artifacts, poorly lit, overexposed, underexposed, glitch, error, out of focus, (semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, digital art, anime, manga:1.3), amateur, (poorly drawn hands, poorly drawn face:1.2), deformed iris, deformed pupils, morbid, duplicate, mutilated, extra fingers, mutated hands, poorly drawn eyes, mutation, deformed, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, incoherent

You can copy these prompts if you want to follow along or write your own prompts. 
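Notice the `(text:1.3)` pattern in the negative prompt: that’s the emphasis syntax ComfyUI understands, where a weight above 1.0 makes those tokens count more heavily. The toy parser below (not part of ComfyUI, just an illustration) extracts those pairs to show the syntax:

```python
import re

def extract_weights(prompt):
    """Pull out the (text:weight) emphasis pairs from a prompt string;
    a weight above 1.0 tells the model to weigh those tokens more heavily."""
    return [(m.group(1).strip(), float(m.group(2)))
            for m in re.finditer(r"\(([^():]+):([\d.]+)\)", prompt)]

print(extract_weights("blurry, (poorly drawn hands, poorly drawn face:1.2)"))
# -> [('poorly drawn hands, poorly drawn face', 1.2)]
```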

Image Size

SDXL in ComfyUI - Image Size Node

Select the width and height of the image you want to generate. There are many image resolutions you can choose for Stable Diffusion but for this guide, I’ll be using 1024×1024 resolution. 

The Image Size node also has an option to set the batch size to choose how many images you want to generate. You can set this to 1 or increase it if you want to generate more images. 
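For reference, SDXL was trained at roughly one megapixel, so resolutions whose total pixel count stays close to 1024×1024 work best. The list below covers the commonly used aspect-ratio buckets (a quick sanity check, not an exhaustive list):

```python
# Common SDXL resolution buckets -- each stays close to 1024*1024 pixels.
SDXL_RESOLUTIONS = [
    (1024, 1024),  # 1:1 square
    (1152, 896),   # ~4:3 landscape
    (896, 1152),   # ~3:4 portrait
    (1216, 832),   # ~3:2 landscape
    (832, 1216),   # ~2:3 portrait
    (1344, 768),   # ~16:9 widescreen
    (768, 1344),   # ~9:16 vertical
]

one_megapixel = 1024 * 1024
for w, h in SDXL_RESOLUTIONS:
    # every bucket is within ~4% of one megapixel
    assert abs(w * h - one_megapixel) / one_megapixel < 0.04
```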


KSampler

SDXL in ComfyUI - KSampler Node

The KSampler node is responsible for the image generation part in Stable Diffusion. This node has many options such as seed, steps, CFG, sampler, and more. 

You can leave these unchanged if you want but I’ll be making the following changes: 

  • Steps: 15
  • CFG: 5
  • Sampler Name: dpmpp_2m_sde_gpu

KSampler for Refiner

SDXL in ComfyUI - KSampler for Refiner Node

This KSampler node is for the refiner step of the image generation. This sampler will use the SDXL refiner model to add more details to our image and make the output image look better. 

It’s best to leave the settings in this node unchanged. If you increase the steps or denoising value, the image will change drastically from the base image generated. 
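To see why a small denoise value keeps the refiner close to the base image, here’s a tiny illustration. The function is hypothetical, but the proportionality matches how a KSampler treats denoise:

```python
# Rough illustration: with denoise d and n steps, a KSampler only
# re-noises and re-denoises about n * d steps' worth of the image,
# so a low denoise value changes the base image only slightly.
def effective_refiner_steps(steps, denoise):
    return round(steps * denoise)

print(effective_refiner_steps(15, 0.2))  # -> 3: only ~3 steps of change
print(effective_refiner_steps(15, 1.0))  # -> 15: a full re-generation
```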

Load Upscale Model

SDXL in ComfyUI - Load Upscale Node

This node will load the upscale model for the upscaling process in the workflow. Click on the dropdown and select the upscale model we downloaded. However, if you already have other upscale models, you can choose them as well. 
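For a sense of scale: a 4x upscale model multiplies each dimension by 4, which is how a 1024×1024 SDXL output ends up at roughly 4K (illustrative arithmetic only):

```python
def upscaled_size(width, height, factor=4):
    """Output resolution after a factor-x upscale model."""
    return width * factor, height * factor

# The 1024x1024 image from this workflow becomes 4096x4096 after
# the 4x NMKD Superscale pass.
print(upscaled_size(1024, 1024))  # -> (4096, 4096)
```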

Now that you know what each of these nodes does in our ComfyUI SDXL workflow, let’s generate our image. 

Click on the Queue Prompt button to begin the image generation process. After the process is completed, you’ll see two images displayed in the workflow. 

SDXL in ComfyUI - Output Images

One is the output image without the refiner and the other with the refiner. This way, you can see the difference between the two and see the details and improvements done by the refiner model. 

The workflow saves the images it generates to the output folder in your ComfyUI directory. 

That’s how easy it is to use SDXL in ComfyUI using this workflow. It’s fast and very simple and even if you’re a beginner, you can use it. 

Best ComfyUI SDXL Workflows

As I mentioned above, creating your own SDXL workflow for ComfyUI from scratch isn’t always the best idea. 

That’s because there are so many workflows for ComfyUI out there that you don’t need to go through the hassle of creating your own. 

There are many ComfyUI SDXL workflows and here are my top recommendations. 

1. SDXL Config ComfyUI Fast Generation

ComfyUI Workflows - SDXL Config ComfyUI Fast Generation

This is the SDXL workflow which was demonstrated in the guide above and is my favorite SDXL workflow for ComfyUI. 

That’s because it’s very simple and generates images very fast compared to other workflows. 

If you’re new to ComfyUI and want to use SDXL, this is the workflow I’d recommend you use for your generations. 

There are other versions of this workflow with LoRA support, face fix, and LCM. The LoRA support version is very good if you want to use LoRA models with SDXL. 

However, keep in mind that only SDXL LoRA models work with SDXL checkpoint models.

2. Sytan’s SDXL Workflow

ComfyUI Workflows - Sytan's SDXL Workflow

Sytan’s SDXL Workflow is another great option for using SDXL models in ComfyUI. It’s also very simple and well-optimized for low-vRAM devices. 

The workflow has nodes for the base and refiner models along with an upscale node for image upscaling. 

The only downside of this workflow is that it doesn’t let you use LoRA models. 

Still, this is a very good workflow you can try when using SDXL in ComfyUI. 


3. Searge-SDXL: EVOLVED

ComfyUI Workflows - Searge-SDXL EVOLVED

If you want the ultimate SDXL workflow for ComfyUI then look no further than Searge-SDXL: EVOLVED which is by far the most advanced workflow for ComfyUI. 

This workflow lets you do everything in ComfyUI such as txt2img, img2img, inpainting, and more. You can apply up to 5 LoRA models at once, allowing you to use various styles in Stable Diffusion. 

Being so advanced also means that this workflow is not for beginners. If you’re new to ComfyUI, you’ll find this workflow very overwhelming as it’s filled with nodes everywhere. 

Besides these three workflows, you can also use regular workflows with SDXL models that don’t require a refiner. 

I’ve reviewed SDXL models on this blog, and many of them don’t need a refiner model. These models can be used with various ComfyUI workflows, which you can find here. 


FAQs

Here are some frequently asked questions about using SDXL in ComfyUI: 

Is ComfyUI faster than Automatic1111? 

Yes. ComfyUI utilizes GPU memory much better than Automatic1111, which results in better performance, especially when using SDXL models. 

Can I use LoRA models in ComfyUI?

Yes, you can use LoRA models in ComfyUI by using the LoraLoader node which lets you load and apply LoRA models. 

What are the system requirements for running ComfyUI?

The requirements for running ComfyUI follow the general Stable Diffusion system requirements: a GPU with at least 4GB of vRAM, 8GB of RAM, and 15GB of free storage space. 


Running SDXL models in ComfyUI is very straightforward, as you’ve seen in this guide. All you need to do is download the SDXL models and use the right workflow. 

Image generation using SDXL in ComfyUI is also much faster compared to Automatic1111, making it the better option of the two. 

If you have any questions about using SDXL in ComfyUI, feel free to ask them in the comments section below.
