The enterprise paid version of blueqat cloud runs on an NVIDIA V100 with 32 GB of VRAM, so it can run diffusion models such as Stable Diffusion without trouble.
We already have most of the necessary environment, so let's give it a try.
1. Install additional libraries
Since pytorch and other libraries are already included, the rest is
!pip install diffusers==0.13.1 transformers scipy ftfy accelerate
2. Obtain an API token
You can obtain one from your Hugging Face account settings.
3. Load the model and run it
Put the token obtained from Hugging Face into the TOKEN placeholder below.
import torch
from diffusers import StableDiffusionPipeline

model_id = "stabilityai/stable-diffusion-2"
#model_id = "CompVis/stable-diffusion-v1-4"

# Load the fp16 weights to halve memory use; replace TOKEN with your token.
pipe = StableDiffusionPipeline.from_pretrained(model_id, revision="fp16", torch_dtype=torch.float16, use_auth_token="TOKEN")
pipe = pipe.to("cuda")
This will load the required model.
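As an optional tweak (a sketch, assuming the `pipe` object loaded above): diffusers 0.13.x offers attention slicing, which lowers peak VRAM at a small speed cost. The V100's 32 GB does not need it, but it helps on smaller GPUs.

```python
# Optional: reduce peak VRAM usage (a sketch, assuming `pipe` from above).
# enable_attention_slicing() is a diffusers pipeline method in 0.13.x;
# it computes attention in slices instead of all at once.
if "pipe" in globals():
    pipe.enable_attention_slicing()
```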
4. Generate images
prompt = 'a photo of an astronaut riding a horse on mars'
image = pipe(prompt).images[0]
image
It was generated in about 8 seconds.
50/50 [00:07<00:00, 6.30it/s]
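For repeatable experiments, the common generation knobs can be wrapped in a small helper. This is a sketch of my own: the helper name `generation_kwargs` and its parameter names (`steps`, `guidance`, `seed`) are hypothetical, but the underlying pipeline arguments (`num_inference_steps`, `guidance_scale`, `generator`) are real diffusers ones.

```python
def generation_kwargs(steps=50, guidance=7.5, seed=None):
    """Build keyword arguments for pipe(...); a fixed seed makes runs reproducible."""
    kwargs = {"num_inference_steps": steps, "guidance_scale": guidance}
    if seed is not None:
        import torch  # imported lazily so the helper itself needs no GPU setup
        kwargs["generator"] = torch.Generator("cuda").manual_seed(seed)
    return kwargs

# Usage on the GPU machine (assumes `pipe` and `prompt` from the steps above):
# image = pipe(prompt, **generation_kwargs(steps=30, seed=42)).images[0]
# image.save("astronaut.png")
```

Fewer steps trades a little quality for speed; a fixed seed lets you iterate on the prompt while keeping the composition stable.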
Google Colab is excellent, but this environment can generate images around the clock, 24/365. I would like to come up with a good way to put it to use.