This Docker image is pre-configured for Stable Diffusion WebUI Forge (not Auto1111 WebUI), offering a streamlined way to run the WebUI with all necessary components bundled. By using this Docker image, you gain access to the following features:
- Pre-configured WebUI Forge setup
- Optional CivitAI model downloader integration
- Optional Cloudflare Tunnel configuration
Simply clone the repository and start the service with `docker compose up -d`. Note that models are not included and must be downloaded separately using the provided options.
Ensure that the NVIDIA driver version on your host machine is compatible with the selected Docker image. Refer to the table below for details:
| Docker Image | Required NVIDIA Driver Version (Host Machine) | Forge Configuration |
|---|---|---|
| `sammrai/sd-forge-docker:12.4.0` | 550.54.14 or higher | CUDA 12.4 + PyTorch 2.4 (fastest, but MSVC issues possible and xformers may not work) |
| `sammrai/sd-forge-docker:12.1.0` | 530.30.02 or higher | CUDA 12.1 + PyTorch 2.3.1 (recommended) |
Key Notes:
- The NVIDIA driver version on the host machine must meet or exceed the required version for the selected Docker image.
- The listed driver versions are based on public documentation and do not guarantee full compatibility or official endorsement.
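To check which driver version your host is running before choosing an image tag, you can query it with `nvidia-smi`; the output values below are illustrative only:

```bash
# Print the installed NVIDIA driver version on the host
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# e.g. 550.90.07  -> both the 12.4.0 and 12.1.0 images are usable
# e.g. 535.183.01 -> use the 12.1.0 image
```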
Run the following commands to clone the repository and navigate to the project directory:
```bash
git clone https://github.com/sammrai/sd-forge-docker.git
cd sd-forge-docker
```
Use Docker Compose to start the containers:
```bash
docker compose up -d
```
Note: Models are not included by default. You can either download them manually or use the optional CivitAI integration.
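To confirm the service came up correctly, the standard Docker Compose checks are enough; the `webui` service name matches the one used by the commands later in this README:

```bash
# Check that the container is running
docker compose ps

# Follow the WebUI startup logs until it reports the listening port
docker compose logs -f webui
```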
Choose the deployment method that best suits your needs:
By default, the `docker-compose.yml` file is configured for NVIDIA GPUs. Ensure your system has:
- The appropriate NVIDIA drivers
- The NVIDIA Container Toolkit installed
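A common way to verify that the NVIDIA Container Toolkit is wired up correctly is to run `nvidia-smi` inside a throwaway CUDA container; the exact base image tag below is just an example:

```bash
# Should print the same GPU table as running nvidia-smi directly on the host
docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
```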
If GPU acceleration is not required, modify the `docker-compose.yml` file to disable GPU support and enable CPU-specific options. Update the `ARGS` environment variable and remove the GPU reservation as follows:

```diff
 environment:
-  ARGS: "--listen --enable-insecure-extension-access --port 7680 --api"
+  ARGS: "--listen --enable-insecure-extension-access --port 7680 --api --always-cpu --skip-torch-cuda-test"
-deploy:
-  resources:
-    reservations:
-      devices:
-        - driver: nvidia
-          count: 1
-          capabilities: [gpu]
```
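After applying those edits, the relevant part of the service definition would look roughly like the sketch below. This is illustrative only: the image tag is taken from the compatibility table above, and the port mapping is an assumption based on the `--port 7680` argument, so your copy of the file may differ.

```yaml
services:
  webui:
    image: sammrai/sd-forge-docker:12.1.0
    ports:
      - "7680:7680"
    environment:
      ARGS: "--listen --enable-insecure-extension-access --port 7680 --api --always-cpu --skip-torch-cuda-test"
    # The deploy/resources block reserving the NVIDIA GPU has been removed
```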
For secure external access to the WebUI, configure a Cloudflare Tunnel. Follow these steps:
1. **Set Up Environment Variables:** Add your `TUNNEL_TOKEN` to the `.env` file:

   ```
   TUNNEL_TOKEN=yourtoken...
   ```

2. **Replace `docker-compose.yml`:** Overwrite the default `docker-compose.yml` with `docker-compose-tunnel.yml`:

   ```bash
   cp docker-compose-tunnel.yml docker-compose.yml
   ```

3. **Start the Containers with the Tunnel Configuration:**

   ```bash
   docker compose up -d
   ```

4. **Complete the Cloudflare Tunnel Setup:** Refer to the Cloudflare Tunnel documentation for detailed setup instructions. This includes authenticating with Cloudflare and establishing the tunnel using your `TUNNEL_TOKEN`.
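For reference, a token-based `cloudflared` service in a Compose file typically looks something like the sketch below. The actual contents of `docker-compose-tunnel.yml` in this repository may differ, so treat this as illustrative only:

```yaml
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run
    environment:
      TUNNEL_TOKEN: ${TUNNEL_TOKEN}   # read from the .env file
    restart: unless-stopped
```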
The CivitAI model downloader automates the downloading and placement of models. While optional, it is highly recommended to add your `CIVITAI_TOKEN` to the `.env` file, as many models require authentication to download.
1. **Add Your CIVITAI Token**

   Include your `CIVITAI_TOKEN` in the `.env` file to enable seamless integration:

   ```
   CIVITAI_TOKEN=yourtoken...
   ```

2. **Supported Model Types and Example Commands**

   The following model types are supported for automatic downloads, along with their corresponding aliases to ensure models are placed in the correct directories:

   - Lora: `@lora`
   - VAE: `@vae`
   - Embed: `@embed`
   - Checkpoint: `@checkpoint`

   Here are some example commands to download and place models:

   ```bash
   docker compose exec webui civitdl 257749 439889 @checkpoint
   docker compose exec webui civitdl 332646 @embed
   docker compose exec webui civitdl 660673 @vae
   docker compose exec webui civitdl 341353 @lora
   ```
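To confirm that a download landed where the WebUI expects it, you can list the model directories inside the container. The paths below follow the typical Forge directory layout and are assumptions about this image, so adjust them if your working directory or volume mapping differs:

```bash
# List downloaded checkpoints and LoRAs (paths assumed, typical Forge layout)
docker compose exec webui ls models/Stable-diffusion
docker compose exec webui ls models/Lora
```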