# FAQ
Frequently asked questions and common troubleshooting tips for drydock.
## Socket permission errors (EACCES)
Drydock runs as a non-root user by default. The entrypoint automatically detects the Docker socket GID and adds the node user to the matching group. If the socket is owned by GID 0 (root), the entrypoint refuses to implicitly run as root.
Recommended fix: Use a Docker socket proxy so drydock never needs direct socket access.
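A minimal compose sketch of that setup follows. The service names and the `DD_WATCHER_*` variable names are illustrative assumptions, not confirmed drydock configuration — check the watcher documentation for the exact keys:

```yaml
# Sketch: drydock talking to the Docker API through a socket proxy.
# The DD_WATCHER_* variable names below are assumptions for illustration.
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      - CONTAINERS=1   # allow the container endpoints drydock needs
      - EVENTS=1       # allow the Docker event stream
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  drydock:
    image: codeswhat/drydock
    environment:
      - DD_WATCHER_LOCAL_HOST=socket-proxy  # assumed variable name
      - DD_WATCHER_LOCAL_PORT=2375          # assumed variable name
```

With this layout, only the proxy touches the socket, and the proxy's permission flags decide which API endpoints drydock can reach.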
Break-glass override: If you cannot use a socket proxy, set both environment variables:

```yaml
environment:
  - DD_RUN_AS_ROOT=true
  - DD_ALLOW_INSECURE_ROOT=true
```

Setting only `DD_RUN_AS_ROOT=true` without `DD_ALLOW_INSECURE_ROOT=true` will fail closed at startup. See Docker Socket Security for details.
## Socket proxy connection refused (ECONNREFUSED)
If you see repeated `Docker event stream connection failure (connect ECONNREFUSED ...)` messages when using a socket proxy such as `tecnativa/docker-socket-proxy`, the proxy is not accepting connections. Common causes:
- Read-only socket mount (`:ro`) — Remove the `:ro` flag from the proxy's Docker socket volume mount. The `:ro` flag can prevent the proxy from connecting to the Docker daemon on some Linux kernels. The proxy's environment variables (`CONTAINERS=1`, `EVENTS=1`, etc.) are the actual security boundary, not the mount flag.
- Startup ordering — The proxy may not be ready when drydock tries to connect. Add a health check and `depends_on` condition:
```yaml
services:
  drydock:
    depends_on:
      socket-proxy:
        condition: service_healthy
  socket-proxy:
    healthcheck:
      test: wget --spider http://localhost:2375/version || exit 1
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 5s
```

Drydock automatically retries with exponential backoff (1s → 30s max) and will recover once the proxy becomes available, but the health check prevents unnecessary retries during startup.
## CSRF validation failed (403) behind a reverse proxy
If you see 403 {"error":"CSRF validation failed"} when clicking buttons like Recheck for updates, Scan, or any action that sends a POST/PUT/DELETE request, your reverse proxy setup is missing the DD_SERVER_TRUSTPROXY configuration.
Why this happens: Drydock validates that the Origin header sent by your browser matches the server's own origin. Behind a TLS-terminating reverse proxy (Traefik, Nginx, Caddy, HAProxy, etc.), the internal connection to drydock uses plain HTTP. Without trust-proxy enabled, drydock thinks it is running on http:// while your browser sends Origin: https://... — the mismatch triggers CSRF protection.
Fix: Add DD_SERVER_TRUSTPROXY to your drydock environment:
```yaml
services:
  drydock:
    image: codeswhat/drydock
    environment:
      - DD_SERVER_TRUSTPROXY=1 # Trust one proxy hop (Traefik, Nginx, etc.)
```

This tells drydock to read the X-Forwarded-Proto header from your proxy, so it correctly sees https as the protocol.
Most reverse proxies (Traefik, Nginx, Caddy) set X-Forwarded-Proto automatically. If yours does not, add the header explicitly in your proxy configuration — for example, in Nginx: proxy_set_header X-Forwarded-Proto $scheme;
Traefik example (complete):

```yaml
services:
  drydock:
    image: codeswhat/drydock
    environment:
      - DD_SERVER_TRUSTPROXY=1
      # ... other DD_ variables ...
    labels:
      - traefik.enable=true
      - traefik.http.routers.drydock.rule=Host(`drydock.example.com`)
      - traefik.http.routers.drydock.entrypoints=websecure
      - traefik.http.services.drydock.loadbalancer.server.port=3000
```

Nginx example (proxy block):
```nginx
location / {
    proxy_pass http://drydock:3000;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
}
```

See Server configuration for additional trust-proxy options.
## OIDC callback fails or session lost after login
If your OIDC callback redirects back to the login page or loses the session, the session cookie is likely being blocked by the browser's SameSite policy during the cross-origin IdP redirect.
Fix: Set DD_SERVER_COOKIE_SAMESITE=lax (this is the default since v1.4.0). If you previously set it to strict, change it to lax:

```yaml
environment:
  - DD_SERVER_COOKIE_SAMESITE=lax
```

Use strict only when your IdP is on the same domain as drydock. Use lax (the default) for all cross-site IdP callback flows. See Authentication and Server configuration for details.
## Dex logout only clears local session
If you use Dex as OIDC provider, clicking logout may only end the drydock session and not the upstream IdP session. This is expected when no OIDC end-session URL is available.
Why: Dex discovery metadata does not advertise end_session_endpoint, so drydock cannot auto-build an IdP logout redirect.
Options:
- Accept local-session-only logout (default behavior).
- If your auth stack exposes a real logout URL in front of Dex, set `DD_AUTH_OIDC_DEX_LOGOUTURL=https://<your_logout_endpoint>`.
See the OIDC configuration guide for examples.
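In compose form, that override is a single environment entry; the placeholder stays until you know your stack's real logout URL:

```yaml
environment:
  # Replace the placeholder with the logout endpoint your auth stack exposes.
  - DD_AUTH_OIDC_DEX_LOGOUTURL=https://<your_logout_endpoint>
```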
## Registry auth failures (401/403)
Token exchange is required for most registries, even for public images. Common causes:
- GHCR/LSCR: If configured credentials are rejected with 401 or 403, drydock automatically retries the token request without credentials for public image checks. If you only pull public images, you may not need credentials at all.
- Docker Hub: Anonymous access requires a token exchange with https://auth.docker.io. Ensure outbound HTTPS is not blocked.
- Quay: Similar token exchange flow. Check that your token has `repo:read` scope.
Check the drydock logs at debug level for the exact token exchange URL and response status.
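One way to enable that debug output, assuming a compose deployment, is a single environment entry:

```yaml
environment:
  - DD_LOG_LEVEL=debug  # logs the token exchange URL and response status
```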
## DNS resolution failures (getaddrinfo EAI_AGAIN)
If drydock logs show errors like getaddrinfo EAI_AGAIN auth.docker.io or token request failed (getaddrinfo EAI_AGAIN ghcr.io), DNS resolution is failing inside the container even though the hosts are reachable via ping.
Why this happens: Drydock runs on Alpine Linux, which uses musl libc instead of glibc. Musl's DNS resolver handles dual-stack (IPv4/IPv6) networks less robustly than glibc. On some hosts — particularly NixOS, certain Kubernetes setups, or systems with IPv6 enabled but not fully routable — DNS lookups return IPv6 records first and time out before falling back to IPv4.
Fix (default since v1.4.3): Drydock now defaults to IPv4-first DNS ordering, which resolves this for most users. No action is needed unless you run in an IPv6-only environment.
If you need to change the DNS behavior, set DD_DNS_MODE:
| Value | Behavior |
|---|---|
| `ipv4first` | IPv4 addresses tried first (default) |
| `ipv6first` | IPv6 addresses tried first |
| `verbatim` | Use OS resolver order (Node.js default, may trigger the bug on Alpine) |
```yaml
environment:
  - DD_DNS_MODE=verbatim # Only set this if you need IPv6-first or OS-native ordering
```

This setting calls `dns.setDefaultResultOrder()` at startup and affects all outbound connections (registry checks, trigger webhooks, agent communication).

## Container shows "update available" but it is wrong (tag family mismatch)
This usually happens when the registry has tags from different naming families (e.g. 5.1.4 and 20.04.1 for Ubuntu, or 18-alpine vs 18.20.1). Drydock may pick a higher semver tag from a different family.
Since v1.4.0, tag family detection (dd.tag.family) defaults to strict, which infers the current tag's prefix, suffix, and segment structure and only considers candidates within the same family.
If you still see cross-family false positives, verify that your dd.tag.include regex is specific enough. If you intentionally want cross-family updates (e.g. you want to jump from one naming scheme to another), set:
```yaml
labels:
  - dd.tag.family=loose
```

See Docker Watchers for the full label reference.
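As a sketch of tightening the include regex, the pattern below is an illustrative example (not taken from the drydock docs) that would keep a container on an 18-alpine tag inside its family:

```yaml
labels:
  # Illustrative regex: matches tags like 18-alpine, 18.20-alpine, 18.20.1-alpine
  - dd.tag.include=^18(\.\d+){0,2}-alpine$
```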
## Log output format (JSON vs pretty)
Drydock log behavior is controlled by environment variables:
| Variable | Values | Default |
|---|---|---|
| `DD_LOG_LEVEL` | fatal, error, warn, info, debug, trace | info |
| `DD_LOG_FORMAT` | text, json | json |
| `DD_LOG_BUFFER_ENABLED` | true, false | true |
The official image defaults to DD_LOG_FORMAT=json for structured output suitable for log aggregators like Elasticsearch or Loki. Set DD_LOG_FORMAT=text if you prefer pretty logs.
Set DD_LOG_BUFFER_ENABLED=false to disable the in-memory application log ring buffer used by /api/log/entries and the UI log viewer.
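Combining the variables above, a compose fragment for human-readable logs without the ring buffer might look like:

```yaml
environment:
  - DD_LOG_FORMAT=text           # pretty logs instead of JSON
  - DD_LOG_LEVEL=info
  - DD_LOG_BUFFER_ENABLED=false  # disable the UI log ring buffer
```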
See Logs for examples.
## Trivy or Cosign not found errors
Since v1.4.0, drydock ships a single image that bundles both Trivy and Cosign. If you see "not found" errors, you are likely running a custom or older image that does not include them.
Verify tool availability at runtime via the API:

```shell
curl http://drydock:3000/api/server/security/runtime
```

If you build a custom image, ensure Trivy and Cosign are installed in your Dockerfile. See Security for scanning configuration.
## Self-update leaves container stopped
Drydock updates itself using a helper container pattern: it renames the old container, creates a new one with the updated image, then spawns a short-lived helper container that stops the old container, starts the new one, and removes the old one on success. If the new container fails to start, the helper restarts the old container as a fallback.
For this to work:
- The Docker socket must be bind-mounted at `/var/run/docker.sock`
- The Docker trigger must be enabled
- Drydock must be able to create temporary helper containers

If the container ends up stopped, check `docker logs drydock-old-*` for the helper container output. See Self-Update for the full sequence.
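A compose sketch of those prerequisites follows. The `DD_TRIGGER_DOCKER_*` variable name is an assumption for illustration — consult the trigger documentation for the exact key:

```yaml
# Sketch: self-update prerequisites. The trigger variable name below is
# an assumption for illustration; see the trigger docs for the exact key.
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # required socket mount
    environment:
      - DD_TRIGGER_DOCKER_LOCAL_AUTO=true  # assumed: enables the Docker trigger
```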
## Legacy WUD environment variables or labels
Drydock accepts legacy WUD_* environment variables and wud.* container labels as fallback, but emits deprecation warnings when they are used. Each legacy fallback increments the dd_legacy_input_total{source,key} Prometheus counter.
To convert your configuration in place, use the built-in migration CLI:
```shell
# Preview changes without writing
docker exec drydock node dist/index.js config migrate --dry-run

# Apply migration to a specific file
docker exec drydock node dist/index.js config migrate --file /path/to/compose.yml
```

See the Quick start for the recommended DD_* / dd.* configuration format.