# RunPod Qwen Image Edit

## CUDA compatibility

The default Docker build targets VPS hosts whose NVIDIA driver supports CUDA 12.5.

The image uses `pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel`. This is expected and correct on a CUDA 12.5 driver: NVIDIA drivers are backward-compatible with older CUDA runtimes, and PyTorch publishes no dedicated CUDA 12.5 image or wheel. Starting from the PyTorch base image also avoids re-downloading and unpacking the large PyTorch/NVIDIA wheels during the build.
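Since the build below accepts a `BASE_IMAGE` build-arg, the Dockerfile presumably parameterizes its `FROM` line along these lines (a sketch, not necessarily the exact Dockerfile):

```dockerfile
# Sketch: base image parameterized so CUDA 12.8 hosts can override it at build time.
ARG BASE_IMAGE=pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel
FROM ${BASE_IMAGE}
```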

Build for CUDA 12.5 driver:

```bash
docker build -t runpod-qwen .
```

Run:

```bash
docker run --gpus all -p 7860:7860 -v /workspace:/workspace runpod-qwen
```

If the VPS has a CUDA 12.8-capable driver, you can build with a CUDA 12.8 PyTorch image instead:

```bash
docker build \
  --build-arg BASE_IMAGE=pytorch/pytorch:2.9.1-cuda12.8-cudnn9-devel \
  -t runpod-qwen:cu128 .
```

Do not use a CUDA 12.8 image on a machine whose driver only supports CUDA 12.5. Use the default CUDA 12.4 image there.
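The base-image choice can be mechanized from the driver's reported CUDA version. A minimal Python sketch (the two image tags are the ones used in this README; `pick_base_image` is a hypothetical helper, and the version string would come from the `CUDA Version:` field of `nvidia-smi` output):

```python
# Sketch: choose a PyTorch base image the host driver can actually support.
# The driver version string would come from `nvidia-smi` (e.g. "CUDA Version: 12.5").

def pick_base_image(driver_cuda: str) -> str:
    """Return a base image whose CUDA runtime the given driver supports."""
    major, minor = (int(part) for part in driver_cuda.split("."))
    if (major, minor) >= (12, 8):
        # Driver is new enough for the CUDA 12.8 runtime.
        return "pytorch/pytorch:2.9.1-cuda12.8-cudnn9-devel"
    # Otherwise stay on the default CUDA 12.4 image.
    return "pytorch/pytorch:2.6.0-cuda12.4-cudnn9-devel"

print(pick_base_image("12.5"))  # the default CUDA 12.4 image
print(pick_base_image("12.8"))  # the CUDA 12.8 image
```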

## Startup checks

At startup the app prints `torch.__version__`, `CUDA_VISIBLE_DEVICES`, and the selected device. FlashAttention 3 is optional: if the `kernels-community/vllm-flash-attn3` kernel cannot be loaded, the app falls back to the default attention processor.
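The checks and the fallback can be sketched roughly as follows. This is an assumed structure, not the app's actual code; `get_kernel` is the loader from the Hugging Face `kernels` package, and any failure to load the kernel simply selects the default attention path:

```python
import os

def select_attention_processor() -> str:
    """Return which attention implementation to use, preferring FlashAttention 3."""
    try:
        # Hugging Face `kernels` package; raises if the package or kernel is missing.
        from kernels import get_kernel
        get_kernel("kernels-community/vllm-flash-attn3")
        return "flash-attn3"
    except Exception:
        # Kernel unavailable (package missing, incompatible GPU/driver, download
        # failure): fall back to the default attention processor.
        return "default"

# Startup diagnostics, roughly matching what the app prints.
try:
    import torch
    print("torch:", torch.__version__)
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>"))
print("device:", device)
print("attention:", select_attention_processor())
```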
