Docker Watchers
Watchers are responsible for scanning Docker containers.

The docker watcher lets you configure the Docker hosts you want to watch.
Variables
| Env var | Required | Description | Supported values | Default value when missing |
|---|---|---|---|---|
| `DD_WATCHER_{watcher_name}_CAFILE` | ⚪ | CA pem file path (only for TLS connection) | | |
| `DD_WATCHER_{watcher_name}_AUTH_BEARER` | ⚪ | Bearer token for remote Docker API auth (HTTPS only) | | |
| `DD_WATCHER_{watcher_name}_AUTH_INSECURE` | ⚪ | Allow fail-open remote auth fallback when auth is invalid or non-HTTPS | true, false | false |
| `DD_WATCHER_{watcher_name}_AUTH_PASSWORD` | ⚪ | Password for remote Docker API basic auth (HTTPS only) | | |
| `DD_WATCHER_{watcher_name}_AUTH_TYPE` | ⚪ | Auth mode for remote Docker API auth | BASIC, BEARER, OIDC | auto-detected from provided credentials |
| `DD_WATCHER_{watcher_name}_AUTH_USER` | ⚪ | Username for remote Docker API basic auth (HTTPS only) | | |
| `DD_WATCHER_{watcher_name}_CERTFILE` | ⚪ | Certificate pem file path (only for TLS connection) | | |
| `DD_WATCHER_{watcher_name}_CRON` | ⚪ | Scheduling options | Valid CRON expression | `0 * * * *` (every hour) |
| `DD_WATCHER_{watcher_name}_HOST` | ⚪ | Docker hostname or IP of the host to watch | | |
| `DD_WATCHER_{watcher_name}_JITTER` | ⚪ | Jitter in ms applied to the CRON to better distribute the load on the registries (especially the Docker Hub) | > 0 | 60000 (1 minute) |
| `DD_WATCHER_{watcher_name}_KEYFILE` | ⚪ | Key pem file path (only for TLS connection) | | |
| `DD_WATCHER_{watcher_name}_MAINTENANCE_WINDOW` | ⚪ | Allowed update schedule (checks outside this window are skipped) | Valid CRON expression | |
| `DD_WATCHER_{watcher_name}_MAINTENANCE_WINDOW_TZ` | ⚪ | Timezone used to evaluate MAINTENANCE_WINDOW | IANA timezone (e.g. UTC, Europe/Paris) | UTC |
| `DD_WATCHER_{watcher_name}_PORT` | ⚪ | Docker port of the host to watch | Valid port | 2375 |
| `DD_WATCHER_{watcher_name}_PROTOCOL` | ⚪ | Docker remote API protocol | http, https | http |
| `DD_WATCHER_{watcher_name}_SOCKET` | ⚪ | Docker socket to watch | Valid unix socket | /var/run/docker.sock |
| `DD_WATCHER_{watcher_name}_WATCHALL` | ⚪ | Include containers in every state (created, paused, exited, restarting, etc.), not just running ones | true, false | false |
| `DD_WATCHER_{watcher_name}_WATCHATSTART` (deprecated) | ⚪ | If drydock must check for image updates during startup | true, false | true if this watcher store is empty |
| `DD_WATCHER_{watcher_name}_WATCHBYDEFAULT` | ⚪ | Watch containers that don't have an explicit `dd.watch` label | true, false | true |
| `DD_WATCHER_{watcher_name}_WATCHEVENTS` | ⚪ | If drydock must monitor docker events | true, false | true |
| `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_*` | ⚪ | Shared per-image defaults (image match + include/exclude/transform/link/display/trigger/lookup) | See Image Set Presets section below | |
`WATCHALL` and `WATCHBYDEFAULT` are independent and operate at different stages. `WATCHALL` controls which container states Docker returns (running-only vs all states). `WATCHBYDEFAULT` controls whether unlabeled containers are watched; the `dd.watch` label always takes precedence when set. See the behavior matrix below for details.
`DD_WATCHER_{watcher_name}_WATCHDIGEST` is deprecated and will be removed in a future release. Use the `dd.watch.digest` container label instead.

Notes:
- If no watcher is configured, a default one named `local` will be automatically created (reading the Docker socket). To suppress this default watcher (e.g. when running as a controller-only node for remote agents), set `DD_LOCAL_WATCHER=false`.
- Multiple watchers can be configured (if you have multiple Docker hosts to watch). Just give them different names.
- Socket configuration and host/port configuration are mutually exclusive.
- When `MAINTENANCE_WINDOW` is configured and a check is skipped outside the allowed schedule, drydock queues one pending check and runs it automatically when the next maintenance window opens.
- Legacy compatibility: `wud.*` labels are still accepted as a fallback to `dd.*`. When a fallback is used, drydock emits a one-time warning for that key and increments `dd_legacy_input_total{source="label",key="<legacy_key>"}`. To rewrite configs in place, run `node dist/index.js config migrate --dry-run`, then `node dist/index.js config migrate --file <path>`.
- If watcher logger initialization fails, drydock automatically falls back to structured stderr JSON logs and increments `dd_watcher_logger_init_failures_total{type,name}`. Alert on non-zero increases to catch degraded logging early.
Before you start, some important watcher caveats:
- Socket vs remote API: when using socket configuration, mount the Docker socket on your drydock container. When using host/port configuration, enable the Docker remote API. If the remote API is secured with TLS, mount and configure the TLS certificates.
- Remote auth is fail-closed: remote watcher auth (`AUTH_*`) requires HTTPS (`PROTOCOL=https`) or TLS certificate-based connections by default. Set `AUTH_INSECURE=true` only if you intentionally need the legacy fail-open behavior.
- Digest watching and Hub quotas: watching image digests causes extensive usage of the Docker Registry Pull API, which is restricted by quotas on the Docker Hub. By default, drydock enables it only for non-semver image tags. You can tune this behavior per container using the `dd.watch.digest` label. If you face quota-related errors, consider slowing down the watcher rate by adjusting the `DD_WATCHER_{watcher_name}_CRON` variable.
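The default digest-watching decision described above (on for non-semver tags, overridable per container with `dd.watch.digest`) can be sketched as follows. This is an illustrative model, not drydock's actual code, and the loose semver regex is an assumption:

```python
import re

# Loose semver check: MAJOR.MINOR.PATCH with optional prerelease/build parts.
SEMVER_RE = re.compile(r"^v?\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?(\+[0-9A-Za-z.-]+)?$")

def watch_digest_default(tag, label=None):
    """Return whether digest watching is enabled for a tag.

    The dd.watch.digest label, when present, always wins; otherwise digest
    watching defaults to on only for non-semver tags (latest, 10, 10.6, ...).
    """
    if label is not None:
        return label.lower() == "true"
    return SEMVER_RE.match(tag) is None

print(watch_digest_default("latest"))          # non-semver tag -> digest watched
print(watch_digest_default("10.4.5"))          # semver tag -> tag tracking only
print(watch_digest_default("10.4.5", "true"))  # explicit label override wins
```

This also makes the quota caveat concrete: every tag for which the function returns `True` costs registry Pull API calls on each watcher run.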
Variable examples
Watch the local docker host every day at 1am
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_LOCAL_CRON=0 1 * * *
```

```bash
docker run \
  -e DD_WATCHER_LOCAL_CRON="0 1 * * *" \
  ...
  codeswhat/drydock
```

Watch all containers regardless of their status (created, paused, exited, restarting, running...)
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_LOCAL_WATCHALL=true
```

```bash
docker run \
  -e DD_WATCHER_LOCAL_WATCHALL="true" \
  ...
  codeswhat/drydock
```

WATCHALL / WATCHBYDEFAULT behavior matrix
These two variables are orthogonal; they filter at different stages of the container selection pipeline:
- WATCHALL (Docker API level): decides which container states are fetched from Docker.
- WATCHBYDEFAULT (per-container level): decides whether containers without a `dd.watch` label are watched. An explicit `dd.watch` label always takes precedence.
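The two-stage pipeline can be sketched as a small illustrative function (not drydock's implementation):

```python
def is_watched(state, watchall, watchbydefault, dd_watch=None):
    """Model the WATCHALL / WATCHBYDEFAULT / dd.watch decision pipeline."""
    # Stage 1 (Docker API level): WATCHALL decides which states are fetched.
    if not watchall and state != "running":
        return False  # never returned by the API, so no label can rescue it
    # Stage 2 (per-container level): an explicit dd.watch label always wins...
    if dd_watch is not None:
        return dd_watch == "true"
    # ...otherwise fall back to the watcher-wide default.
    return watchbydefault

# An exited container with dd.watch=true is still skipped unless WATCHALL=true:
print(is_watched("exited", watchall=False, watchbydefault=True, dd_watch="true"))  # False
print(is_watched("exited", watchall=True, watchbydefault=True, dd_watch="true"))   # True
```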
| WATCHALL | WATCHBYDEFAULT | Containers fetched | Unlabeled containers | dd.watch=true | dd.watch=false |
|---|---|---|---|---|---|
| false (default) | true (default) | Running only | Watched | Watched | Not watched |
| false | false | Running only | Not watched | Watched | Not watched |
| true | true | All states | Watched | Watched | Not watched |
| true | false | All states | Not watched | Watched | Not watched |
A non-running container with `dd.watch=true` will not be watched when `WATCHALL` is false, because it is never returned by the Docker API in the first place. Set `WATCHALL=true` if you need to monitor non-running containers.

Watch a remote docker host via TCP on 2375
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_MYREMOTEHOST_HOST=myremotehost
```

```bash
docker run \
  -e DD_WATCHER_MYREMOTEHOST_HOST="myremotehost" \
  ...
  codeswhat/drydock
```

Watch a remote docker host behind HTTPS with bearer auth
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_MYREMOTEHOST_HOST=myremotehost
      - DD_WATCHER_MYREMOTEHOST_PORT=443
      - DD_WATCHER_MYREMOTEHOST_PROTOCOL=https
      - DD_WATCHER_MYREMOTEHOST_AUTH_TYPE=BEARER
      - DD_WATCHER_MYREMOTEHOST_AUTH_BEARER=my-secret-token
```

```bash
docker run \
  -e DD_WATCHER_MYREMOTEHOST_HOST="myremotehost" \
  -e DD_WATCHER_MYREMOTEHOST_PORT="443" \
  -e DD_WATCHER_MYREMOTEHOST_PROTOCOL="https" \
  -e DD_WATCHER_MYREMOTEHOST_AUTH_TYPE="BEARER" \
  -e DD_WATCHER_MYREMOTEHOST_AUTH_BEARER="my-secret-token" \
  ...
  codeswhat/drydock
```

Watch a remote docker host via TCP with TLS enabled on 2376
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_MYREMOTEHOST_HOST=myremotehost
      - DD_WATCHER_MYREMOTEHOST_PORT=2376
      - DD_WATCHER_MYREMOTEHOST_CAFILE=/certs/ca.pem
      - DD_WATCHER_MYREMOTEHOST_CERTFILE=/certs/cert.pem
      - DD_WATCHER_MYREMOTEHOST_KEYFILE=/certs/key.pem
    volumes:
      - /my-host/my-certs/ca.pem:/certs/ca.pem:ro
      - /my-host/my-certs/cert.pem:/certs/cert.pem:ro
      - /my-host/my-certs/key.pem:/certs/key.pem:ro
```

```bash
docker run \
  -e DD_WATCHER_MYREMOTEHOST_HOST="myremotehost" \
  -e DD_WATCHER_MYREMOTEHOST_PORT="2376" \
  -e DD_WATCHER_MYREMOTEHOST_CAFILE="/certs/ca.pem" \
  -e DD_WATCHER_MYREMOTEHOST_CERTFILE="/certs/cert.pem" \
  -e DD_WATCHER_MYREMOTEHOST_KEYFILE="/certs/key.pem" \
  -v /my-host/my-certs/ca.pem:/certs/ca.pem:ro \
  -v /my-host/my-certs/cert.pem:/certs/cert.pem:ro \
  -v /my-host/my-certs/key.pem:/certs/key.pem:ro \
  ...
  codeswhat/drydock
```

Connecting via SSH
Drydock does not currently support the `ssh://` protocol for remote Docker connections. The `PROTOCOL` variable only accepts `http` or `https`.
If your remote Docker host is only reachable over SSH, you can use an SSH tunnel to forward the remote Docker socket to a local TCP port or Unix socket, then point Drydock at that forwarded endpoint:
```bash
# Forward the remote Docker socket to a local TCP port
ssh -nNT -L 2375:/var/run/docker.sock user@remote-host
```

Then configure Drydock to connect to the forwarded port:
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_MYREMOTEHOST_HOST=host.docker.internal
      - DD_WATCHER_MYREMOTEHOST_PORT=2375
    extra_hosts:
      - "host.docker.internal:host-gateway"
```

Docker Socket Security
Drydock needs to communicate with the Docker Engine API to monitor containers and (optionally) perform updates. By default this means mounting the Docker socket — which grants broad access to the host. This section covers all available approaches to secure that access, from the recommended socket proxy to remote TLS connections.
Security comparison
| Approach | Attack surface | Privilege level | Setup complexity | Auto-updates? |
|---|---|---|---|---|
| Socket proxy (recommended) | Filtered API only | Non-root | Low | Yes (with POST=1) |
| Remote Docker over TLS | Network + TLS | Non-root | Medium | Yes |
| Rootless Docker | Full API, unprivileged daemon | Non-root | Medium | Yes |
| Direct socket mount | Full Docker API | Root | Trivial | Yes |
| Break-glass root mode | Full Docker API + host root | Root | Trivial | Yes |
Option 1: Socket proxy (recommended)
A socket proxy runs as a separate container with access to the Docker socket and exposes only the API endpoints Drydock needs. Drydock connects to the proxy over HTTP, so no socket mount is required at all.
This is the recommended approach for all deployments. It provides a strict security boundary with minimal setup.
```yaml
services:
  drydock:
    image: codeswhat/drydock
    depends_on:
      socket-proxy:
        condition: service_healthy
    environment:
      - DD_WATCHER_LOCAL_HOST=socket-proxy
      - DD_WATCHER_LOCAL_PORT=2375
    ports:
      - 3000:3000
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - CONTAINERS=1
      - IMAGES=1
      - EVENTS=1
      - SERVICES=1
    healthcheck:
      test: wget --spider http://localhost:2375/version || exit 1
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 5s
    restart: unless-stopped
```

The `:ro` (read-only) flag on the Docker socket mount is omitted intentionally. The proxy's environment variables (`CONTAINERS`, `EVENTS`, etc.) control which API endpoints are exposed; that is the actual security boundary. The `:ro` flag on Unix sockets does not prevent API communication and can cause connection failures on some Linux kernels.
Proxy permissions by feature
| Feature | Required proxy env vars |
|---|---|
| Watch containers (default) | CONTAINERS=1, IMAGES=1, EVENTS=1, SERVICES=1 |
| Container actions (start/stop/restart) | All of the above plus POST=1 and a Docker trigger |
| Docker trigger (auto-updates) | All of the above plus POST=1, NETWORKS=1 |
To allow container actions while keeping automatic updates disabled, configure a Docker trigger with `DD_TRIGGER_DOCKER_{name}_AUTO=false`. Without a Docker trigger, the UI will show "No docker trigger found for this container" when attempting actions.

Alternative socket proxies
Tecnativa/docker-socket-proxy is the most widely used option, but any HAProxy or nginx-based Docker socket proxy that filters by HTTP method and path will work. The key requirement is that the proxy exposes the Docker Engine API endpoints listed in the permissions table above and blocks everything else (especially exec, build, and swarm endpoints).
Other compatible proxies include linuxserver/docker-socket-proxy (a fork with additional features).
Option 2: Remote Docker over TLS
Instead of mounting the socket at all, Drydock can connect to a remote Docker host over the network using mutual TLS (mTLS). This is ideal for multi-host setups or when you want to completely avoid socket mounts.
Step 1: Generate TLS certificates on the Docker host
Follow the official Docker TLS guide to generate a CA, server certificate, and client certificate.
Step 2: Configure the Docker daemon
Configure the daemon to listen on a TLS port by adding this to /etc/docker/daemon.json:
```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tls": true,
  "tlscacert": "/etc/docker/ca.pem",
  "tlscert": "/etc/docker/server-cert.pem",
  "tlskey": "/etc/docker/server-key.pem",
  "tlsverify": true
}
```

Step 3: Configure Drydock
Configure Drydock with the client certificates:
```yaml
services:
  drydock:
    image: codeswhat/drydock
    environment:
      - DD_WATCHER_REMOTE_HOST=docker-host.example.com
      - DD_WATCHER_REMOTE_PORT=2376
      - DD_WATCHER_REMOTE_CAFILE=/certs/ca.pem
      - DD_WATCHER_REMOTE_CERTFILE=/certs/client-cert.pem
      - DD_WATCHER_REMOTE_KEYFILE=/certs/client-key.pem
    volumes:
      - ./certs/ca.pem:/certs/ca.pem:ro
      - ./certs/client-cert.pem:/certs/client-cert.pem:ro
      - ./certs/client-key.pem:/certs/client-key.pem:ro
    ports:
      - 3000:3000
```

When TLS certificates are provided (`CAFILE`, `CERTFILE`, `KEYFILE`), Drydock automatically uses HTTPS regardless of the `PROTOCOL` setting.
Mount the certificate files read-only (`:ro`) and restrict file permissions to the Drydock user.

Option 3: Rootless Docker
Rootless Docker runs the Docker daemon entirely in user space without root privileges. Even full socket access does not grant host root because the daemon itself is unprivileged.
Step 1: Install rootless Docker following the official guide.
Step 2: Find your rootless socket path:
```bash
echo $XDG_RUNTIME_DIR/docker.sock
# Typically: /run/user/1000/docker.sock
```

Step 3: Mount the rootless socket:
```yaml
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - /run/user/1000/docker.sock:/var/run/docker.sock
    ports:
      - 3000:3000
```

The rootless socket path varies by user. Replace `1000` with your user's UID (`id -u`).
Rootless Docker has known limitations: `--privileged` containers, limited network configuration, and some storage drivers may not be available. Check the official limitations list for your use case.

You can combine rootless Docker with a socket proxy for defense in depth: even if the proxy is compromised, the attacker only gains unprivileged access.
Option 4: Direct socket mount (default)
The simplest approach — mount the Docker socket directly. Drydock runs as a non-root user inside the container, but the socket grants access to the full Docker API.
```yaml
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 3000:3000
```

Option 5: Break-glass root mode (least secure)
If the non-root user cannot connect to the socket (common with `:ro` mounts), you can explicitly opt into root mode. This is a break-glass path and should not be your default. Both flags are required; setting only `DD_RUN_AS_ROOT=true` fails closed at startup.
```yaml
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - DD_RUN_AS_ROOT=true
      - DD_ALLOW_INSECURE_ROOT=true
    ports:
      - 3000:3000
```

Which Docker API endpoints does Drydock use?
Understanding exactly what Drydock accesses helps you configure socket proxies and evaluate your security posture.
Read-only operations (watchers):
| Endpoint | Purpose |
|---|---|
| `GET /containers/json` | List running containers |
| `GET /containers/{id}/json` | Inspect container details and labels |
| `GET /images/json` | List images on the host |
| `GET /images/{id}/json` | Inspect image metadata (tags, digests, architecture) |
| `GET /events` | Stream real-time container lifecycle events |
| `GET /services/{id}` | Inspect Docker Swarm service labels |
Write operations (triggers; only used when performing updates):
| Endpoint | Purpose |
|---|---|
| `POST /images/create` | Pull new image versions |
| `POST /containers/create` | Create replacement container |
| `POST /containers/{id}/start` | Start the new container |
| `POST /containers/{id}/stop` | Stop the old container |
| `POST /containers/{id}/wait` | Wait for container removal |
| `DELETE /containers/{id}` | Remove the old container |
| `DELETE /images/{id}` | Prune old images (when enabled) |
| `POST /networks/{id}/connect` | Connect container to additional networks |
If you only need monitoring (no auto-updates), a read-only socket proxy configuration is sufficient.
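To make the endpoint tables concrete, here is an illustrative sketch of the kind of method-and-path allowlisting a socket proxy performs. This is not the actual HAProxy ruleset used by tecnativa/docker-socket-proxy, and real Docker API paths may carry a `/v1.xx` version prefix; the rules below are derived only from the tables above:

```python
import re

# Read-only endpoints needed for watching (see the first table above).
READ_RULES = [
    ("GET", r"^/containers/json$"),
    ("GET", r"^/containers/[^/]+/json$"),
    ("GET", r"^/images/json$"),
    ("GET", r"^/images/[^/]+/json$"),
    ("GET", r"^/events$"),
    ("GET", r"^/services/[^/]+$"),
]
# Write endpoints needed only when performing updates (second table above).
WRITE_RULES = [
    ("POST", r"^/images/create$"),
    ("POST", r"^/containers/create$"),
    ("POST", r"^/containers/[^/]+/(start|stop|wait)$"),
    ("DELETE", r"^/containers/[^/]+$"),
    ("DELETE", r"^/images/[^/]+$"),
    ("POST", r"^/networks/[^/]+/connect$"),
]

def allowed(method, path, post=False):
    """Allow read-only monitoring; allow writes only when POST is enabled."""
    rules = READ_RULES + (WRITE_RULES if post else [])
    return any(method == m and re.match(p, path) for m, p in rules)

print(allowed("GET", "/containers/json"))            # monitoring: allowed
print(allowed("POST", "/images/create"))             # write without POST=1: denied
print(allowed("POST", "/images/create", post=True))  # write with POST=1: allowed
print(allowed("POST", "/containers/abc/exec"))       # exec is never exposed here
```

Note how dangerous endpoints (`exec`, `build`, swarm management) simply never appear in the allowlist, which is what makes the proxy a security boundary.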
Podman Quick Start
Podman works with Drydock through Podman's Docker-compatible API socket.
Socket path mapping (podman.sock vs docker.sock)
Use Podman's host socket path, but mount it inside the Drydock container at /var/run/docker.sock so existing Drydock defaults still work.
| Runtime mode | Host socket path | Path inside Drydock container | Drydock watcher variable |
|---|---|---|---|
| Docker | /var/run/docker.sock | /var/run/docker.sock | DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock |
| Podman rootful | /run/podman/podman.sock | /var/run/docker.sock | DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock |
| Podman rootless | /run/user/<uid>/podman/podman.sock | /var/run/docker.sock | DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock |
Rootful vs rootless setup
- Rootful Podman socket: `sudo systemctl enable --now podman.socket`
- Rootless Podman socket: `systemctl --user enable --now podman.socket`
Choosing rootful vs rootless:
- Rootless is recommended for production because it runs the entire container stack without root privileges. Use rootful only when you need privileged ports (<1024) or specific kernel features.
- Rootless requires lingering sessions: you must run `loginctl enable-linger <user>` so the user systemd instance (and all rootless containers) survives logout. Without it, Podman containers stop when the user session ends.
Rootful socket mount:

```yaml
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - /run/podman/podman.sock:/var/run/docker.sock
    environment:
      - DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock
      - DD_TRIGGER_DOCKER_LOCAL_AUTO=false
    ports:
      - 3000:3000
```

Rootless socket mount:

```yaml
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - ${XDG_RUNTIME_DIR}/podman/podman.sock:/var/run/docker.sock
    environment:
      - DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock
      - DD_TRIGGER_DOCKER_LOCAL_AUTO=false
    ports:
      - 3000:3000
```

Socket proxy in front of the Podman socket:

```yaml
services:
  drydock:
    image: codeswhat/drydock
    depends_on:
      socket-proxy:
        condition: service_healthy
    environment:
      - DD_WATCHER_LOCAL_HOST=socket-proxy
      - DD_WATCHER_LOCAL_PORT=2375
      - DD_TRIGGER_DOCKER_LOCAL_AUTO=false
    ports:
      - 3000:3000
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - SOCKET_PATH=/run/podman/podman.sock
      - CONTAINERS=1
      - IMAGES=1
      - EVENTS=1
      - SERVICES=1
      - POST=1
      - NETWORKS=1
    volumes:
      - /run/podman/podman.sock:/run/podman/podman.sock
    healthcheck:
      test: wget --spider http://localhost:2375/version || exit 1
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 5s
```

Podman socket proxy notes:
- Rootless proxy path: for rootless Podman with socket proxy, set `SOCKET_PATH=${XDG_RUNTIME_DIR}/podman/podman.sock` and mount the same host path into the proxy container.
- 503 errors: some users have reported intermittent `503` errors when using `tecnativa/docker-socket-proxy` with Podman (tecnativa/docker-socket-proxy#66). If you experience this, prefer a direct socket mount over the proxy; the direct mount examples above are the most reliable option with Podman.
- Docker Compose trigger compatibility: the Docker Compose trigger (`DD_TRIGGER_DOCKERCOMPOSE_*`) works with Podman; it inherits the socket connection from your watcher configuration. No additional Podman-specific trigger settings are needed.
Known limitations and tested versions
| Topic | Status |
|---|---|
| Tested Podman version | 5.6.0 on Rocky Linux 9.7 (issue #152) |
| API version handling | Drydock probes daemon compatibility and pins client API behavior for Podman/Docker socket endpoints |
| CI coverage | No dedicated Podman CI matrix yet |
| Rootless networking | Can differ from Docker bridge behavior; see Podman FAQ |
| SELinux (RHEL/Rocky/Fedora) | Socket mounts need :Z flag; see SELinux FAQ |
| podman-compose networking | Pod-based networking breaks service-name DNS; use `podman compose` instead; see FAQ |
For common Podman troubleshooting, see FAQ Podman entries.
Watch 1 local Docker host and 2 remote docker hosts at the same time
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock
      - DD_WATCHER_MYREMOTEHOST1_HOST=myremotehost1
      - DD_WATCHER_MYREMOTEHOST2_HOST=myremotehost2
```

```bash
docker run \
  -e DD_WATCHER_LOCAL_SOCKET="/var/run/docker.sock" \
  -e DD_WATCHER_MYREMOTEHOST1_HOST="myremotehost1" \
  -e DD_WATCHER_MYREMOTEHOST2_HOST="myremotehost2" \
  ...
  codeswhat/drydock
```

Maintenance window
Only allow update checks between 2 AM and 4 AM in Europe/Berlin:
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_LOCAL_CRON=0 * * * *
      - DD_WATCHER_LOCAL_MAINTENANCE_WINDOW=0 2-3 * * *
      - DD_WATCHER_LOCAL_MAINTENANCE_WINDOW_TZ=Europe/Berlin
```

```bash
docker run \
  -e DD_WATCHER_LOCAL_CRON="0 * * * *" \
  -e DD_WATCHER_LOCAL_MAINTENANCE_WINDOW="0 2-3 * * *" \
  -e DD_WATCHER_LOCAL_MAINTENANCE_WINDOW_TZ="Europe/Berlin" \
  ...
  codeswhat/drydock
```

Image Set Presets
Use IMGSET to define reusable defaults by image reference. This is useful when many containers need the same tag filters, link template, icon, or trigger routing.
Looking for ready-to-copy presets for common containers? See Popular IMGSET Presets.
Supported imgset keys
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_IMAGE`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_TAG_INCLUDE`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_TAG_EXCLUDE`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_TAG_TRANSFORM`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_LINK_TEMPLATE`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_DISPLAY_NAME`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_DISPLAY_ICON`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_TRIGGER_INCLUDE`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_TAG_FAMILY`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_WATCH_DIGEST`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_INSPECT_TAG_PATH`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_TRIGGER_EXCLUDE`
- `DD_WATCHER_{watcher_name}_IMGSET_{imgset_name}_REGISTRY_LOOKUP_IMAGE`
Imgset precedence
- `dd.*` labels on the container (or swarm service/container merged labels) are highest priority.
- `IMGSET` values are defaults applied only when the corresponding label is not set.
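The precedence rule amounts to a simple merge where labels overwrite preset defaults. A minimal illustrative sketch (the key names are simplified, not drydock's internal representation):

```python
def effective_config(labels, imgset_defaults):
    """dd.* labels win; IMGSET values only fill keys the labels left unset."""
    return {**imgset_defaults, **labels}

imgset = {"tag.include": r"^\d+\.\d+\.\d+$", "display.name": "Home Assistant"}
labels = {"display.name": "HA (prod)"}  # explicit dd.display.name label

print(effective_config(labels, imgset))
# display.name comes from the label; tag.include falls back to the preset
```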
Imgset example
```yaml
services:
  drydock:
    image: codeswhat/drydock
    environment:
      - DD_WATCHER_LOCAL_IMGSET_HOMEASSISTANT_IMAGE=ghcr.io/home-assistant/home-assistant
      - DD_WATCHER_LOCAL_IMGSET_HOMEASSISTANT_TAG_INCLUDE=^\\d+\\.\\d+\\.\\d+$$
      - DD_WATCHER_LOCAL_IMGSET_HOMEASSISTANT_DISPLAY_NAME=Home Assistant
      - DD_WATCHER_LOCAL_IMGSET_HOMEASSISTANT_DISPLAY_ICON=hl-home-assistant
      - DD_WATCHER_LOCAL_IMGSET_HOMEASSISTANT_LINK_TEMPLATE=https://www.home-assistant.io/changelogs/core-$${major}$${minor}$${patch}
      - DD_WATCHER_LOCAL_IMGSET_HOMEASSISTANT_TRIGGER_INCLUDE=ntfy.default:major
```

Labels
To fine-tune the behaviour of drydock per container, you can add labels on them.
| Label | Required | Description | Supported values | Default value when missing |
|---|---|---|---|---|
| `dd.display.icon` | ⚪ | Custom display icon for the container | Valid Fontawesome Icon, Homarr Labs Icon, Selfh.st Icon, or Simple Icon (see details below). `mdi:` icons are auto-resolved but not recommended. | `fab fa-docker` |
| `dd.display.picture` | ⚪ | Custom entity picture URL for Home Assistant MQTT integration. When set to an HTTP/HTTPS URL, overrides the icon-derived entity_picture in HASS discovery payloads. | Valid HTTP or HTTPS URL | |
| `dd.display.name` | ⚪ | Custom display name for the container | Valid String | Container name |
| `dd.group` | ⚪ | Group name for stack/group views in the UI (falls back to `com.docker.compose.project` if not set) | Valid String | |
| `dd.inspect.tag.path` | ⚪ | Docker inspect path used to derive a local semver tag | Slash-separated path in docker inspect output | |
| `dd.registry.lookup.image` | ⚪ | Alternative image reference used for update lookups | Full image path (for example `library/traefik` or `ghcr.io/traefik/traefik`) | |
| `dd.link.template` | ⚪ | Browsable link associated to the container version | JS string template with vars `${raw}`, `${original}`, `${transformed}`, `${major}`, `${minor}`, `${patch}`, `${prerelease}` | |
| `dd.tag.exclude` | ⚪ | Regex to exclude specific tags | Valid JavaScript Regex | |
| `dd.tag.include` | ⚪ | Regex to include specific tags only | Valid JavaScript Regex | |
| `dd.tag.transform` | ⚪ | Transform function to apply to the tag | `$valid_regex => $valid_string_with_placeholders` (see below) | |
| `dd.tag.family` | ⚪ | Tag family policy for semver updates | `strict` (default) or `loose` | `strict` |
| `dd.action.exclude` | ⚪ | Exclude specific action triggers from automatic execution for this container | `$trigger_1_id_or_name,$trigger_2_id_or_name:$threshold` | |
| `dd.action.include` | ⚪ | Only allow specific action triggers to automatically execute for this container | `$trigger_1_id_or_name,$trigger_2_id_or_name:$threshold` | |
| `dd.notification.exclude` | ⚪ | Exclude specific notification triggers from this container | `$trigger_1_id_or_name,$trigger_2_id_or_name:$threshold` | |
| `dd.notification.include` | ⚪ | Only allow specific notification triggers for this container | `$trigger_1_id_or_name,$trigger_2_id_or_name:$threshold` | |
| `dd.trigger.exclude` | ⚪ | Deprecated; use `dd.action.exclude` or `dd.notification.exclude` instead | `$trigger_1_id_or_name,$trigger_2_id_or_name:$threshold` | |
| `dd.trigger.include` | ⚪ | Deprecated; use `dd.action.include` or `dd.notification.include` instead | `$trigger_1_id_or_name,$trigger_2_id_or_name:$threshold` | |
| `dd.watch.digest` | ⚪ | Watch this container digest | Valid Boolean | false |
| `dd.watch` | ⚪ | Explicitly include or exclude this container from monitoring (overrides WATCHBYDEFAULT) | Valid Boolean | Inherits from WATCHBYDEFAULT (true by default) |
| `dd.compose.file` | ⚪ | Path to the docker-compose.yml file for compose-native updates (also configurable via trigger COMPOSEFILELABEL) | file path | |
| `dd.source.repo` | ⚪ | Source repository override for release-notes lookup (e.g. `owner/repo`) | Valid owner/repo string | |
| `dd.webhook.enabled` | ⚪ | Allow or block webhook API calls targeting this container | true, false | true |
| `dd.hook.pre` | ⚪ | Shell command to run before a container update | Valid shell command | |
| `dd.hook.post` | ⚪ | Shell command to run after a container update | Valid shell command | |
| `dd.hook.pre.abort` | ⚪ | Abort the update if the pre-hook exits non-zero | Valid Boolean | true |
| `dd.hook.timeout` | ⚪ | Timeout in milliseconds for hook execution | integer | 60000 |
| `dd.rollback.auto` | ⚪ | Automatically rollback the container if the health check fails after an update | Valid Boolean | false |
| `dd.rollback.window` | ⚪ | Health monitoring window in milliseconds after an update (how long to watch for failures) | integer | 300000 |
| `dd.rollback.interval` | ⚪ | Health polling interval in milliseconds during the rollback window | integer | 10000 |
| `dd.runtime.entrypoint.origin` | ⚪ | Tracks whether the container's entrypoint was explicitly set or inherited from the image. Used during updates to decide whether to preserve the current entrypoint or let the new image defaults apply. | `explicit`, `inherited`, `unknown` | auto-detected |
| `dd.runtime.cmd.origin` | ⚪ | Tracks whether the container's cmd was explicitly set or inherited from the image. Used during updates to decide whether to preserve the current cmd or let the new image defaults apply. | `explicit`, `inherited`, `unknown` | auto-detected |
`dd.runtime.entrypoint.origin` and `dd.runtime.cmd.origin` are managed automatically by the Docker trigger during container updates. You only need to set them manually if drydock cannot detect the origin (shows `unknown`) and you want to explicitly pin or release a custom entrypoint/cmd.

`dd.inspect.tag.path` is optional and opt-in. Use it only when your image metadata tracks the running app version reliably; some images set unrelated values. Also note that the legacy alias `dd.registry.lookup.url` is still accepted for compatibility, but prefer `dd.registry.lookup.image`.

`dd.action.include` / `dd.notification.include` (and their `.exclude` counterparts) control automatic trigger dispatch. These labels filter which triggers fire automatically when an update is detected (e.g., route notifications to specific Slack channels or restrict which docker trigger auto-updates this container). They match against the trigger name (e.g., `alerts`) or full ID (e.g., `smtp.alerts`). Manual updates from the UI use agent-based matching instead: the update is routed to the agent that discovered the container. The legacy `dd.trigger.include` / `dd.trigger.exclude` labels still work but are deprecated; see Deprecation Schedule.

Update hooks (`dd.hook.pre` / `dd.hook.post`) let you run arbitrary shell commands before and after a container update. If `dd.hook.pre.abort` is true (the default) and the pre-hook exits non-zero, the update is cancelled. Use `dd.hook.timeout` to control how long the hook is allowed to run before being killed.

Automatic rollback (`dd.rollback.auto`) monitors the container's health check after an update. If the health check fails within the `dd.rollback.window` (default 5 minutes, polled every `dd.rollback.interval` = 10 seconds), drydock automatically rolls the container back to its previous image.

Label examples
Include specific containers to watch
Set WATCHBYDEFAULT=false so that only containers with an explicit dd.watch=true label are monitored.
```yaml
services:
  drydock:
    image: codeswhat/drydock
    ...
    environment:
      - DD_WATCHER_LOCAL_WATCHBYDEFAULT=false
```

```bash
docker run \
  -e DD_WATCHER_LOCAL_WATCHBYDEFAULT="false" \
  ...
  codeswhat/drydock
```

Then add the `dd.watch=true` label on the containers you want to watch.
```yaml
services:
  mariadb:
    image: mariadb:10.4.5
    ...
    labels:
      - dd.watch=true
```

```bash
docker run -d --name mariadb --label dd.watch=true mariadb:10.4.5
```

Exclude specific containers to watch
Ensure DD_WATCHER_{watcher_name}_WATCHBYDEFAULT is true (default value).
Then add the dd.watch=false label on the containers you want to exclude from being watched.
```yaml
services:
  mariadb:
    image: mariadb:10.4.5
    ...
    labels:
      - dd.watch=false
```

```bash
docker run -d --name mariadb --label dd.watch=false mariadb:10.4.5
```

Derive a semver from Docker inspect when image tag is latest
Use this when the running container exposes a version label in docker inspect.
```yaml
services:
  myapp:
    image: ghcr.io/example/myapp:latest
    labels:
      - dd.inspect.tag.path=Config/Labels/org.opencontainers.image.version
```

```bash
docker run -d \
  --name myapp \
  --label dd.inspect.tag.path=Config/Labels/org.opencontainers.image.version \
  ghcr.io/example/myapp:latest
```

Use an alternative image for update lookups
Use this when your runtime image is pulled from a cache/proxy registry, but you want updates checked against an upstream image.
```yaml
services:
  traefik:
    image: harbor.example.com/dockerhub-proxy/traefik:v3.5.3
    labels:
      - dd.watch=true
      - dd.registry.lookup.image=library/traefik
```

```bash
docker run -d \
  --name traefik \
  --label 'dd.watch=true' \
  --label 'dd.registry.lookup.image=library/traefik' \
  harbor.example.com/dockerhub-proxy/traefik:v3.5.3
```

Include only 3 digits semver tags
You can filter (by inclusion or exclusion) which versions can be candidates for update.
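Under the hood, this filtering amounts to regex matching over the candidate tags. A minimal illustrative sketch (not drydock's code; drydock evaluates JavaScript regexes, so this Python version is only an approximation):

```python
import re

def filter_tags(tags, include=None, exclude=None):
    """Keep tags matching the include regex (if any) and not the exclude regex."""
    result = []
    for tag in tags:
        if include and not re.search(include, tag):
            continue  # include regex set, tag does not match -> dropped
        if exclude and re.search(exclude, tag):
            continue  # exclude regex matches -> dropped
        result.append(tag)
    return result

tags = ["10.4.5", "10.4", "latest", "10.5.0-rc1", "10.5.0"]
print(filter_tags(tags, include=r"^\d+\.\d+\.\d+$"))
# only the 3-digit semver tags survive: ['10.4.5', '10.5.0']
```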
For example, you can indicate that you want to watch x.y.z versions only
```yaml
services:
  mariadb:
    image: mariadb:10.4.5
    labels:
      - dd.tag.include=^\d+\.\d+\.\d+$$
```

```bash
docker run -d --name mariadb --label 'dd.tag.include=^\d+\.\d+\.\d+$' mariadb:10.4.5
```

Transform the tags before performing the analysis
In certain cases, tag values are so badly formatted that the resolution algorithm cannot find any valid update candidates or, worse, finds false positive matches.
For example, you can encounter such an issue if you need to deal with tags looking like `1.0.0-99-7b368146`, `1.0.0-273-21d7efa6`...
By default, drydock will report false positive matches because of the sha-1 part at the end of the tag value (`-7b368146`...).
That's a shame because `1.0.0-99` and `1.0.0-273` would have been valid semver values (`$major.$minor.$patch-$prerelease`).
You can get around this issue by providing a function that keeps only the part you are interested in.
How does it work?
The transform function must follow the following syntax:

```
$valid_regex_with_capturing_groups => $valid_string_with_placeholders
```

For example:

```
^(\d+\.\d+\.\d+-\d+)-.*$ => $1
```

The capturing groups are accessible with the syntax $1, $2, $3...
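The transform semantics can be sketched as a plain regex substitution. This is a minimal illustration only: transform_tag is a hypothetical helper, not Drydock's actual implementation.

```python
import re

def transform_tag(tag: str, rule: str) -> str:
    """Apply a '$regex => $replacement' transform rule to a tag.

    Hypothetical helper: splits the rule on ' => ' and maps $1, $2...
    placeholders to the pattern's capturing groups.
    """
    pattern, replacement = rule.split(" => ", 1)
    # Convert $N placeholders to Python backreferences (\N).
    replacement = re.sub(r"\$(\d+)", r"\\\1", replacement)
    return re.sub(pattern, replacement, tag)

# Drops the trailing sha so only the semver-compatible part remains.
print(transform_tag("1.0.0-269-7b368146", r"^(\d+\.\d+\.\d+-\d+)-.*$ => $1"))
# -> 1.0.0-269
```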
When using docker-compose, don't forget to escape the $ sign (so $1 becomes $$1)!
For example, you can watch x.y.z-n-sha tags and transform them to keep only the x.y.z-n part:
```yaml
services:
  searx:
    image: searx/searx:1.0.0-269-7b368146
    labels:
      - dd.tag.include=^\d+\.\d+\.\d+-\d+-.*$$
      - dd.tag.transform=^(\d+\.\d+\.\d+-\d+)-.*$$ => $$1
```

```bash
docker run -d --name searx \
  --label 'dd.tag.include=^\d+\.\d+\.\d+-\d+-.*$' \
  --label 'dd.tag.transform=^(\d+\.\d+\.\d+-\d+)-.*$ => $1' \
  searx/searx:1.0.0-269-7b368146
```

Enable digest watching
In addition to semver tag tracking, you can also track whether the digest associated with the local tag has been updated.
This can be convenient for monitoring image tags known to be overridden (latest, 10, 10.6...).
```yaml
services:
  mariadb:
    image: mariadb:10
    labels:
      - dd.tag.include=^\d+$$
      - dd.watch.digest=true
```

```bash
docker run -d --name mariadb --label 'dd.tag.include=^\d+$' --label dd.watch.digest=true mariadb:10
```

Associate a link to the container version
You can associate a browsable link to the container version using a templated string.
For example, if you want to associate a mariadb version to a changelog (e.g. https://mariadb.com/kb/en/mariadb-1064-changelog),
you would specify a template like https://mariadb.com/kb/en/mariadb-${major}${minor}${patch}-changelog
The available variables are:
- ${original}: the original unparsed tag
- ${transformed}: the original tag transformed with the optional dd.tag.transform label
- ${major}: the major version (if tag value is semver)
- ${minor}: the minor version (if tag value is semver)
- ${patch}: the patch version (if tag value is semver)
- ${prerelease}: the prerelease version (if tag value is semver)
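The substitution can be sketched with Python's string.Template. This is an illustration of the documented ${var} placeholders only; Drydock's own templating may differ.

```python
from string import Template

# Illustrative only: assumes simple ${var} substitution as documented.
template = "https://mariadb.com/kb/en/mariadb-${major}${minor}${patch}-changelog"
link = Template(template).substitute(major="10", minor="6", patch="4")
print(link)
# -> https://mariadb.com/kb/en/mariadb-1064-changelog
```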
```yaml
services:
  mariadb:
    image: mariadb:10.6.4
    labels:
      - dd.link.template=https://mariadb.com/kb/en/mariadb-$${major}$${minor}$${patch}-changelog
```

```bash
docker run -d --name mariadb --label 'dd.link.template=https://mariadb.com/kb/en/mariadb-${major}${minor}${patch}-changelog' mariadb:10.6.4
```

Customize the name and the icon to display
You can customize the name & the icon of a container (displayed in the UI, in Home-Assistant...)
Icons must be prefixed with:
- fab: or fab- for Fontawesome brand icons (fab:github, fab-mailchimp...)
- far: or far- for Fontawesome regular icons (far:heart, far-house...)
- fas: or fas- for Fontawesome solid icons (fas:heart, fas-house...)
- hl: or hl- for Homarr Labs icons (hl:plex, hl-authelia...)
- mdi: or mdi- icons are auto-resolved to Dashboard Icons but are not recommended; prefer hl: or fa prefixes instead
- sh: or sh- for Selfh.st icons (sh:authentik, sh-authelia-light...) (only works for logos available as png)
- si: or si- for Simple icons (si:mysql, si-plex...)
```yaml
services:
  mariadb:
    image: mariadb:10.6.4
    labels:
      - dd.display.name=Maria DB
      - dd.display.icon=si:mariadb
```

```bash
docker run -d --name mariadb --label 'dd.display.name=Maria DB' --label 'dd.display.icon=si:mariadb' mariadb:10.6.4
```

Assign different triggers to containers
You can assign different triggers and thresholds on a per container basis.
Example: send a mail notification for all updates but auto-update only for minor or patch changes:
```yaml
services:
  my_important_service:
    image: my_important_service:1.0.0
    labels:
      - dd.notification.include=smtp.gmail
      - dd.action.include=dockercompose.local:minor
```

```bash
docker run -d --name my_important_service --label 'dd.notification.include=smtp.gmail' --label 'dd.action.include=dockercompose.local:minor' my_important_service:1.0.0
```

- dd.notification.include=smtp.gmail is a shorthand for dd.notification.include=smtp.gmail:all
- dd.action.include=update (or dd.action.exclude=update) targets all triggers named update, for example docker.update and dockercompose.update
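The shorthand rule can be sketched as follows. parse_include is an illustrative helper, not part of Drydock's API; it only shows that a missing threshold defaults to all.

```python
def parse_include(value: str) -> tuple[str, str]:
    """Split a 'trigger[:threshold]' include value; threshold defaults to 'all'.

    Illustrative helper, not Drydock's actual parser.
    """
    trigger, _, threshold = value.partition(":")
    return trigger, threshold or "all"

print(parse_include("smtp.gmail"))
# -> ('smtp.gmail', 'all')
print(parse_include("dockercompose.local:minor"))
# -> ('dockercompose.local', 'minor')
```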
Threshold values:
| Threshold | Behavior |
|---|---|
| all | Trigger runs regardless of the nature of the change |
| major | Trigger runs only for major, minor, or patch semver changes |
| minor | Trigger runs only for minor or patch semver changes |
| patch | Trigger runs only for patch semver changes |
| digest | Trigger runs only on digest updates |
| *-no-digest | Any threshold ending with -no-digest excludes digest updates for that threshold |
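The table above can be sketched as a predicate. This is a minimal illustration under the stated semantics; the function name and change-kind labels are assumptions, not Drydock's internal logic.

```python
def trigger_runs(threshold: str, change_kind: str) -> bool:
    """Decide whether a trigger fires, per the threshold table above.

    change_kind is one of: major, minor, patch, digest.
    Illustrative sketch only.
    """
    if threshold.endswith("-no-digest"):
        if change_kind == "digest":
            return False  # the -no-digest suffix excludes digest updates
        threshold = threshold[: -len("-no-digest")]
    if threshold == "all":
        return True
    if threshold == "digest":
        return change_kind == "digest"
    order = ["major", "minor", "patch"]
    if change_kind not in order:
        return False  # digest updates don't match semver thresholds
    # A threshold admits changes of its own level or smaller (minor -> minor, patch).
    return order.index(change_kind) >= order.index(threshold)

print(trigger_runs("minor", "patch"))
# -> True
print(trigger_runs("patch", "minor"))
# -> False
```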
Container Runtime Details
Each monitored container exposes runtime details sourced from Docker inspect and the container summary. These are visible in the API response and the UI:
| Field | Type | Description |
|---|---|---|
| details.ports | string[] | Published port mappings (e.g. 8080->80/tcp, 443/tcp) |
| details.volumes | string[] | Volume and bind mounts (e.g. myvolume:/data, /host/path:/container/path:ro) |
| details.env | { key, value }[] | Environment variables set on the container |
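For illustration, a details object might look like the following. The values are invented; only the field names and shapes follow the table above — this is not an actual Drydock API response.

```python
# Invented example values; field names and shapes follow the table above.
details = {
    "ports": ["8080->80/tcp", "443/tcp"],                            # string[]
    "volumes": ["myvolume:/data", "/host/path:/container/path:ro"],  # string[]
    "env": [{"key": "TZ", "value": "UTC"}],                          # { key, value }[]
}
```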
Container Status Fields
The container model includes additional fields useful for understanding update state:
| Field | Type | Description |
|---|---|---|
| result.noUpdateReason | string | When no update is available, explains why (e.g. tag filtering, no newer version found) |
| updateDetectedAt | string (ISO 8601) | Timestamp of when an update was first detected for this container |
Docker Event Stream
The watcher monitors Docker events (container create, destroy, start, stop, etc.) in real time. If the event stream disconnects, Drydock automatically reconnects with exponential backoff starting at 1 second and capping at 30 seconds.
When using third-party socket proxies, periodic reconnects are expected: many proxies close idle long-lived HTTP connections after their configured timeout. In logs this appears as:
```
Docker event stream error (aborted); reconnect attempt #1 in 1000ms
```
followed by:
```
Listening to docker events
```
The first reconnect attempt is logged at info level since it is expected behavior (proxy timeout or network blip). If the first attempt fails and subsequent reconnects are needed, those are logged at warn level to flag a potential real problem.
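The delay schedule can be sketched as follows. Doubling per attempt is an assumption for illustration; the docs above only state a 1-second start and a 30-second cap.

```python
def reconnect_delay_ms(attempt: int) -> int:
    """Exponential backoff: 1s initial delay, capped at 30s.

    Doubling per attempt is assumed for illustration; Drydock's exact
    growth factor is not specified here.
    """
    return min(1000 * 2 ** (attempt - 1), 30_000)

print([reconnect_delay_ms(n) for n in range(1, 8)])
# -> [1000, 2000, 4000, 8000, 16000, 30000, 30000]
```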
Event gap during reconnect
Container lifecycle events (create, start, stop, destroy) that occur during the brief reconnect window are not captured until the next scheduled poll cycle. The reconnect window is typically ~1 second under normal conditions. In practice this is rarely noticeable because the next cron poll will pick up the current container state, but it means Drydock does not guarantee zero missed events across a reconnect.
A future release will wire the reconnect path to request events from the last-seen timestamp, closing this gap entirely.
Native socket proxy (roadmap)
The root cause of proxy-timeout reconnects is that Drydock cannot control idle timeout behavior on third-party proxies. A purpose-built Drydock socket proxy is planned (see roadmap) that will exempt event stream connections from idle timeouts, eliminating the reconnect cycle entirely.
Reduce proxy-timeout reconnect churn
Until the native proxy ships, you can reduce reconnect noise:
- Increase your proxy idle timeout (for example, linuxserver socket-proxy TIMEOUT=86400s).
- Prefer a direct socket mount when your environment allows it.
- If your proxy supports endpoint-specific behavior, exempt Docker /events streams from idle timeout.
- As a workaround, run a lightweight heartbeat container (e.g. alpine:latest with command: sh -c "sleep 300" and restart: always) to keep the proxy connection path active.