Try out this model on VESSL Hub.

This example runs an inference app for SSD-1B. After launching the Run, you can access a Streamlit web app to generate images from your own prompts. The Segmind Stable Diffusion Model (SSD-1B) is a distilled version of Stable Diffusion XL (SDXL) that is 50% smaller and up to 60% faster, while maintaining high-quality text-to-image generation.
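
For reference, the core text-to-image step such an app performs with diffusers looks roughly like the sketch below. This is a minimal illustration assuming the public segmind/SSD-1B weights, fp16 inference, and a CUDA GPU; it is not the exact code used in this template.

# Minimal SSD-1B generation sketch with diffusers (illustrative; the repo ID,
# fp16 settings, and prompts are assumptions, not this template's code).
import torch
from diffusers import StableDiffusionXLPipeline

# SSD-1B is loaded through the SDXL pipeline class.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe.to("cuda")

prompt = "An astronaut riding a green horse"
negative_prompt = "ugly, blurry, poor quality"

image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
image.save("ssd_1b_sample.png")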

Running the model

You can run the model with a single command:

vessl run create -f ssd-streamlit.yaml

Here’s a rundown of the ssd-streamlit.yaml file.

name: SSD-1B-streamlit
description: A template Run for inference of SSD-1B with streamlit app
resources:
  cluster: vessl-gcp-oregon
  preset: v1.l4-1.mem-42
image: quay.io/vessl-ai/hub:torch2.1.0-cuda12.2-202312070053
import:
  /code/:
    git:
      url: https://github.com/vessl-ai/hub-model
      ref: main
  /model/: hf://huggingface.co/VESSL/SSD-1B
run:
  - command: |-
      pip install --upgrade pip
      pip install -r requirements.txt
      pip install git+https://github.com/huggingface/diffusers
      streamlit run ssd_1b_streamlit.py --server.port=80
    workdir: /code/SSD-1B
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
ports:
  - name: streamlit
    type: http
    port: 80
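
Inside the container, the ssd_1b_streamlit.py script serves the pipeline through a small web UI on port 80, which matches the http port exposed in the ports section above and makes the app reachable once the Run starts. A minimal sketch of such an app is shown below; the /model/ load path, widget labels, and caching behavior are assumptions for illustration, not the repository's exact code.

# Minimal Streamlit sketch for SSD-1B inference (illustrative; the /model/
# path and widget labels are assumptions, not the repository's exact code).
import streamlit as st
import torch
from diffusers import StableDiffusionXLPipeline

@st.cache_resource
def load_pipeline():
    # Load the weights mounted at /model/ by the Run's import section.
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "/model/", torch_dtype=torch.float16, use_safetensors=True
    )
    return pipe.to("cuda")

st.title("SSD-1B text-to-image")
prompt = st.text_input("Prompt", "An astronaut riding a green horse")
negative_prompt = st.text_input("Negative prompt", "ugly, blurry, poor quality")

if st.button("Generate"):
    pipe = load_pipeline()
    with st.spinner("Generating..."):
        image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
    st.image(image, caption=prompt)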