Stable Diffusion XL on an AMD GPU

Yuichiro Minato

2024/08/10 00:22

Hello. I thought there might be people who want to run Stable Diffusion on AMD, so I decided to give it a try.

The CPU I used is a Ryzen 9 5950X, and the GPU is a Radeon 7900XTX (with 24GB VRAM).
Since the VRAM capacity is sufficient, I wanted to check the performance and usability.

I used Stable Diffusion XL (SDXL) for this test.

You need to install a PyTorch build that supports ROCm, AMD's GPU compute platform (an example command for that step is shown below). After that, I installed the necessary libraries with pip as usual:

pip install diffusers invisible_watermark transformers accelerate safetensors
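
For the ROCm-compatible PyTorch itself, the wheels come from PyTorch's dedicated ROCm index rather than the default (CUDA) one. A minimal sketch of that install follows; the rocm6.0 tag is an assumption here, so check pytorch.org for the index URL that matches your installed ROCm version:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.0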

The usage is the same as with NVIDIA GPUs. Since I'm using the standard SDXL setup without any special libraries, I didn't encounter any issues.

from diffusers import DiffusionPipeline
import torch

# Specify the GPU (the ROCm build of PyTorch also uses the "cuda" device name)
device = "cuda"

# Set a fixed seed
generator = torch.Generator(device).manual_seed(100)

# Write the prompt
prompt = "An astronaut riding a green horse"

# Build the pipeline and set it to fp16
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe.to(device)

# Generate and retrieve the image
image = pipe(prompt=prompt, generator=generator).images[0]
image
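
Incidentally, if you want to confirm that the ROCm build of PyTorch is actually in use and that it sees the Radeon GPU, a quick check looks like this (torch.version.hip is only set on ROCm builds):

import torch

print(torch.version.hip)              # ROCm/HIP version string (None on CUDA-only builds)
print(torch.cuda.is_available())      # True if the Radeon GPU is visible
print(torch.cuda.get_device_name(0))  # reports the Radeon device name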

There was no noticeable difference from the CUDA version.

For a 1024x1024 image with 50 steps (the default number of inference steps), generation took about 16 seconds.
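
If you want to reproduce that measurement, timing the call is straightforward. This is just a sketch that reuses the pipe and prompt defined above and passes the defaults (50 steps, 1024x1024) explicitly:

import time

start = time.perf_counter()
image = pipe(prompt=prompt, num_inference_steps=50, height=1024, width=1024, generator=generator).images[0]
print(f"Generated in {time.perf_counter() - start:.1f} seconds")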

[Generated image: an astronaut riding a green horse]

While generation is somewhat slower than on a comparable NVIDIA GPU, it was still quite usable.
