FAQ
Frequently asked questions and common troubleshooting tips for drydock.
Socket permission errors (EACCES)
Drydock runs as a non-root user by default. The entrypoint automatically detects the Docker socket GID and adds the node user to the matching group. If the socket is owned by GID 0 (root), the entrypoint refuses to run as root without an explicit opt-in.
Recommended fix: Use a Docker socket proxy so drydock never needs direct socket access.
Break-glass override: If you cannot use a socket proxy, set both environment variables:
```yaml
environment:
  - DD_RUN_AS_ROOT=true
  - DD_ALLOW_INSECURE_ROOT=true
```

Setting only `DD_RUN_AS_ROOT=true` without `DD_ALLOW_INSECURE_ROOT=true` will fail closed at startup. See Docker Socket Security for details.
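The two-variable requirement amounts to a fail-closed check. A minimal sketch of that logic (illustrative only, not drydock's actual entrypoint code):

```javascript
// Fail-closed gate: root is allowed only when BOTH variables are set.
// Illustrative sketch; drydock's real startup logic may differ.
function rootAllowed(env) {
  return env.DD_RUN_AS_ROOT === 'true' && env.DD_ALLOW_INSECURE_ROOT === 'true';
}

console.log(rootAllowed({ DD_RUN_AS_ROOT: 'true' }));
// false: fails closed without the second variable
console.log(rootAllowed({ DD_RUN_AS_ROOT: 'true', DD_ALLOW_INSECURE_ROOT: 'true' }));
// true: explicit opt-in
```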
Socket proxy connection refused (ECONNREFUSED)
If you see repeated Docker event stream connection failure (connect ECONNREFUSED ...) messages when using a socket proxy like tecnativa/docker-socket-proxy, the proxy is not accepting connections. Common causes:
- **Read-only socket mount (`:ro`)** — Remove the `:ro` flag from the proxy's Docker socket volume mount. The `:ro` flag can prevent the proxy from connecting to the Docker daemon on some Linux kernels. The proxy's environment variables (`CONTAINERS=1`, `EVENTS=1`, etc.) are the actual security boundary, not the mount flag.
- **Startup ordering** — The proxy may not be ready when drydock tries to connect. Add a health check and a `depends_on` condition:
```yaml
services:
  drydock:
    depends_on:
      socket-proxy:
        condition: service_healthy
  socket-proxy:
    healthcheck:
      test: wget --spider http://localhost:2375/version || exit 1
      interval: 5s
      timeout: 3s
      retries: 3
      start_period: 5s
```

Drydock automatically retries with exponential backoff (1s → 30s max) and will recover once the proxy becomes available, but the health check prevents unnecessary retries during startup.
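The backoff schedule can be sketched as follows. The function name is hypothetical, but the 1s starting delay and 30s cap are the documented drydock behavior:

```javascript
// Exponential backoff: the delay doubles each attempt, starting at 1s
// and capped at 30s. Illustrative sketch, not drydock's actual code.
function reconnectDelayMs(attempt) {
  const base = 1000;  // 1s initial delay
  const cap = 30000;  // 30s maximum delay
  return Math.min(base * 2 ** (attempt - 1), cap);
}

console.log([1, 2, 3, 4, 5, 6].map(reconnectDelayMs));
// [1000, 2000, 4000, 8000, 16000, 30000]
```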
Docker event stream "aborted" reconnects every few minutes
If you see periodic messages like:

```
Docker event stream error (aborted); reconnect attempt #1 in 1000ms
```

followed by:

```
Listening to docker events
```

this is commonly caused by a proxy idle timeout, not a hard failure. Docker events use a long-lived stream, and third-party socket proxies often close idle streams after their configured `TIMEOUT`.
Common fixes:
- Increase the proxy timeout (for example `TIMEOUT=86400s` on linuxserver socket-proxy).
- Prefer a direct socket mount over a proxy where possible.
- Use a proxy that supports exempting `/events` from the idle timeout.
- Optional workaround: run a lightweight heartbeat container/job to reduce idle disconnects.
SELinux blocks Podman socket access (Permission denied)
On RHEL, Rocky Linux, Fedora, and CentOS with SELinux enforcing, you may see permission denied or EACCES when Drydock tries to connect to the Podman socket. This is caused by SELinux denying the container access to the host socket file.
Fix: Add the `:Z` flag to your socket volume mount so SELinux relabels the socket for the container:

```yaml
services:
  drydock:
    image: codeswhat/drydock
    volumes:
      - /run/podman/podman.sock:/var/run/docker.sock:Z
```

The `:Z` flag applies a private SELinux label, granting exclusive access to this container. If multiple containers need the same socket, use `:z` (lowercase) for a shared label instead. Check `ausearch -m avc -ts recent` to confirm SELinux denials are the cause.

podman-compose vs podman compose networking
If service-name DNS (e.g. socket-proxy) fails between containers, check which compose tool you are using:
- `podman-compose` (third-party Python tool) groups containers into a single pod where all containers share `localhost`. Service names are not resolvable — containers communicate via `localhost:<port>` instead.
- `podman compose` (Podman 4.0+ built-in) uses proper container networking with service-name DNS, matching Docker Compose behavior.
Recommendation: Use podman compose (built-in, Podman 4.0+) or Docker Compose with the Podman socket for proper service-name DNS resolution. If you must use podman-compose, replace hostname references like socket-proxy with localhost and ensure port mappings are correct.
Podman socket proxy DNS resolution fails (ENOTFOUND)
If you see getaddrinfo ENOTFOUND socket-proxy with Podman, this is usually service-name DNS, not an API compatibility problem.
Checklist:
- Run `drydock` and `socket-proxy` on the same Compose network (same compose file is simplest).
- Keep `DD_WATCHER_LOCAL_HOST=socket-proxy` only when that hostname is resolvable from the drydock container.
- If running containers separately, use a resolvable hostname or IP instead of `socket-proxy`.
See Podman Quick Start and GitHub issue #152.
Podman rootless networking limitations
Rootless Podman networking can behave differently from Docker bridge networking:
- Service-name DNS works only inside the same rootless Podman network/project.
- Port publishing uses user-mode networking, which may differ in latency and behavior from rootful bridge networking.
- Host networking and cross-project name resolution are more limited than rootful setups.
If socket-proxy hostname resolution is unstable in rootless mode, prefer:
- Running both services in one compose project/network.
- Or mounting the Podman socket directly and using `DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock`.
Reference examples: Podman Quick Start.
Podman podman.sock vs docker.sock path
Use Podman's host socket path, then map it to /var/run/docker.sock inside Drydock.
- Rootful Podman host path: `/run/podman/podman.sock`
- Rootless Podman host path: `/run/user/<uid>/podman/podman.sock`
- Inside Drydock: `/var/run/docker.sock` (the mapped target)
Then set:
```yaml
environment:
  - DD_WATCHER_LOCAL_SOCKET=/var/run/docker.sock
```

See the full mapping table in Podman Quick Start.
Podman trigger configuration differences
There are no Podman-specific trigger variables in v1.5. Trigger setup is the same as Docker:
- Add a Docker trigger (`DD_TRIGGER_DOCKER_{name}_AUTO=false` or `true`).
- If using a socket proxy, include `POST=1` and `NETWORKS=1` on the proxy.
Without a Docker trigger, UI actions show `No docker trigger found for this container`.
See Socket proxy permissions for the required proxy env vars.
CSRF validation failed (403) behind a reverse proxy
If you see 403 {"error":"CSRF validation failed"} when clicking buttons like Recheck for updates, Scan, or any action that sends a POST/PUT/DELETE request, your reverse proxy setup is missing the DD_SERVER_TRUSTPROXY configuration.
Why this happens: Drydock validates that the Origin header sent by your browser matches the server's own origin. Behind a TLS-terminating reverse proxy (Traefik, Nginx, Caddy, HAProxy, etc.), the internal connection to drydock uses plain HTTP. Without trust-proxy enabled, drydock thinks it is running on http:// while your browser sends Origin: https://... — the mismatch triggers CSRF protection.
Fix: Add DD_SERVER_TRUSTPROXY to your drydock environment:
```yaml
services:
  drydock:
    image: codeswhat/drydock
    environment:
      - DD_SERVER_TRUSTPROXY=1  # Trust one proxy hop (Traefik, Nginx, etc.)
```

This tells drydock to read the `X-Forwarded-Proto` header from your proxy, so it correctly sees `https` as the protocol.
Most reverse proxies (Traefik, Nginx, Caddy) set X-Forwarded-Proto automatically. If yours does not, add the header explicitly in your proxy configuration — for example, in Nginx: proxy_set_header X-Forwarded-Proto $scheme;
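The mismatch can be sketched roughly like this (illustrative helper and field names, not drydock's actual implementation): without trust-proxy, the server derives its origin from the plain-HTTP internal hop; with it, the proxy's `X-Forwarded-Proto` header wins.

```javascript
// Illustrative sketch of the Origin comparison described above.
// With trustProxy enabled, the protocol comes from X-Forwarded-Proto
// instead of the raw (plain HTTP) internal connection.
function effectiveOrigin(req, trustProxy) {
  const proto = trustProxy && req.headers['x-forwarded-proto']
    ? req.headers['x-forwarded-proto']
    : req.protocol;
  return `${proto}://${req.headers.host}`;
}

const req = {
  protocol: 'http', // internal hop between the proxy and drydock
  headers: {
    host: 'drydock.example.com',
    'x-forwarded-proto': 'https',
    origin: 'https://drydock.example.com', // what the browser actually sent
  },
};

console.log(effectiveOrigin(req, false) === req.headers.origin); // false: CSRF 403
console.log(effectiveOrigin(req, true) === req.headers.origin);  // true: request allowed
```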
Traefik example (complete):

```yaml
services:
  drydock:
    image: codeswhat/drydock
    environment:
      - DD_SERVER_TRUSTPROXY=1
      # ... other DD_ variables ...
    labels:
      - traefik.enable=true
      - traefik.http.routers.drydock.rule=Host(`drydock.example.com`)
      - traefik.http.routers.drydock.entrypoints=websecure
      - traefik.http.services.drydock.loadbalancer.server.port=3000
```

Nginx example (proxy block):
```nginx
location / {
    proxy_pass http://drydock:3000;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Host $http_host;
}
```

See Server configuration for additional trust-proxy options.
OIDC callback fails or session lost after login
If your OIDC callback redirects back to the login page or loses the session, the session cookie is likely being blocked by the browser's SameSite policy during the cross-origin IdP redirect.
Fix: Set DD_SERVER_COOKIE_SAMESITE=lax (this is the default since v1.4.0). If you previously set it to strict, change it to lax:
```yaml
environment:
  - DD_SERVER_COOKIE_SAMESITE=lax
```

Use `strict` only when your IdP is on the same domain as drydock. Use `lax` (the default) for all cross-site IdP callback flows. See Authentication and Server configuration for details.
Dex logout only clears local session
If you use Dex as OIDC provider, clicking logout may only end the drydock session and not the upstream IdP session. This is expected when no OIDC end-session URL is available.
Why: Dex discovery metadata does not advertise `end_session_endpoint`, so drydock cannot auto-build an IdP logout redirect.
Options:
- Accept local-session-only logout (default behavior).
- If your auth stack exposes a real logout URL in front of Dex, set `DD_AUTH_OIDC_DEX_LOGOUTURL=https://<your_logout_endpoint>`.
See the OIDC configuration guide for examples.
Registry auth failures (401/403)
Token exchange is required for most registries, even for public images. Common causes:
- GHCR/LSCR: If configured credentials are rejected with 401 or 403, drydock automatically retries the token request without credentials for public image checks. If you only pull public images, you may not need credentials at all.
- Docker Hub: Anonymous access requires a token exchange with `https://auth.docker.io`. Ensure outbound HTTPS is not blocked.
- Quay: Similar token exchange flow. Check that your token has `repo:read` scope.
Check the drydock logs at debug level for the exact token exchange URL and response status.
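For reference, the anonymous Docker Hub token exchange hits a URL of this shape. The `service` and `scope` parameters follow Docker's documented token flow; the helper name is illustrative (drydock performs this exchange internally):

```javascript
// Builds the token-request URL used for anonymous Docker Hub pulls.
// Illustrative helper, not drydock's actual code.
function dockerHubTokenUrl(repo) {
  const params = new URLSearchParams({
    service: 'registry.docker.io',
    scope: `repository:${repo}:pull`,
  });
  return `https://auth.docker.io/token?${params}`;
}

console.log(dockerHubTokenUrl('library/ubuntu'));
// https://auth.docker.io/token?service=registry.docker.io&scope=repository%3Alibrary%2Fubuntu%3Apull
```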
DNS resolution failures (getaddrinfo EAI_AGAIN)
If drydock logs show errors like getaddrinfo EAI_AGAIN auth.docker.io or token request failed (getaddrinfo EAI_AGAIN ghcr.io), DNS resolution is failing inside the container even though the hosts are reachable via ping.
Why this happens: Drydock runs on Alpine Linux, which uses musl libc instead of glibc. Musl's DNS resolver handles dual-stack (IPv4/IPv6) networks less robustly than glibc. On some hosts — particularly NixOS, certain Kubernetes setups, or systems with IPv6 enabled but not fully routable — DNS lookups return IPv6 records first and time out before falling back to IPv4.
Fix (default since v1.4.3): Drydock now defaults to IPv4-first DNS ordering, which resolves this for most users. No action is needed unless you run in an IPv6-only environment.
If you need to change the DNS behavior, set DD_DNS_MODE:
| Value | Behavior |
|---|---|
| `ipv4first` | IPv4 addresses tried first (default) |
| `ipv6first` | IPv6 addresses tried first |
| `verbatim` | Use OS resolver order (Node.js default, may trigger the bug on Alpine) |
```yaml
environment:
  - DD_DNS_MODE=verbatim  # Only set this if you need IPv6-first or OS-native ordering
```

The selected mode is applied via `dns.setDefaultResultOrder()` at startup and affects all outbound connections (registry checks, trigger webhooks, agent communication).

Container shows "update available" but it is wrong (tag family mismatch)
This usually happens when the registry has tags from different naming families (e.g. `5.1.4` and `20.04.1` for Ubuntu, or `18-alpine` vs `18.20.1`). Drydock may pick a higher semver tag from a different family.
Since v1.4.0, tag family detection (dd.tag.family) defaults to strict, which infers the current tag's prefix, suffix, and segment structure and only considers candidates within the same family.
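The inference can be pictured roughly like this. This is an illustrative sketch of the idea, not drydock's actual matcher, which may be more nuanced:

```javascript
// Split a tag into prefix, numeric version segments, and suffix, then
// treat two tags as comparable only when all three structural parts match.
function tagFamily(tag) {
  const m = tag.match(/^([a-z]*[-.]?)(\d+(?:\.\d+)*)([-.][a-z0-9.]+)?$/i);
  if (!m) return null;
  return { prefix: m[1], suffix: m[3] || '', segments: m[2].split('.').length };
}

function sameFamily(a, b) {
  const fa = tagFamily(a);
  const fb = tagFamily(b);
  return !!fa && !!fb && fa.prefix === fb.prefix &&
         fa.suffix === fb.suffix && fa.segments === fb.segments;
}

console.log(sameFamily('18.20.1', '18.20.2'));   // true: same structure
console.log(sameFamily('18-alpine', '18.20.1')); // false: different suffix and segment count
```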
If you still see cross-family false positives, verify that your dd.tag.include regex is specific enough. If you intentionally want cross-family updates (e.g. you want to jump from one naming scheme to another), set:
```yaml
labels:
  - dd.tag.family=loose
```

See Docker Watchers for the full label reference.
Log output format (JSON vs pretty)
Drydock log behavior is controlled by environment variables:
| Variable | Values | Default |
|---|---|---|
| `DD_LOG_LEVEL` | `fatal`, `error`, `warn`, `info`, `debug`, `trace` | `info` |
| `DD_LOG_FORMAT` | `text`, `json` | `text` |
| `DD_LOG_BUFFER_ENABLED` | `true`, `false` | `true` |
The official image defaults to DD_LOG_FORMAT=text for pretty, human-readable logs. Set DD_LOG_FORMAT=json if you want structured output for log aggregators like Elasticsearch or Loki.
Set DD_LOG_BUFFER_ENABLED=false to disable the in-memory application log ring buffer used by /api/v1/log/entries and the UI log viewer.
See Logs for examples.
Trivy or Cosign not found errors
Since v1.4.0, drydock ships a single image that bundles both Trivy and Cosign. If you see "not found" errors, you are likely running a custom or older image that does not include them.
Verify tool availability at runtime via the API:

```shell
curl http://drydock:3000/api/v1/server/security/runtime
```

If you build a custom image, ensure Trivy and Cosign are installed in your Dockerfile. See Security for scanning configuration.
Temporary container names during recreation (hex prefix aliases)
When Docker recreates a container — for example via Portainer, `docker compose up --force-recreate`, or a drydock Docker trigger — Docker temporarily renames the old container to a name like `fcdb966987a0_myapp` (a 12-character hex prefix matching the container ID, followed by an underscore and the original name). This alias is transient and disappears once the recreation completes.
Drydock automatically detects and filters these temporary alias names. They will never appear in notifications, MQTT topics, the UI, or any trigger output. No configuration is needed — this is handled unconditionally.
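The alias pattern is easy to recognize programmatically. A sketch of the check, with an illustrative function name:

```javascript
// Matches Docker's transient recreate alias: a 12-character hex prefix
// (from the container ID), an underscore, then the original name.
// Illustrative sketch of the filtering rule described above.
function isTransientRecreateAlias(name) {
  return /^[0-9a-f]{12}_.+$/.test(name);
}

console.log(isTransientRecreateAlias('fcdb966987a0_myapp')); // true: filtered out
console.log(isTransientRecreateAlias('myapp'));              // false: normal name
```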
Self-update leaves container stopped
Drydock updates itself using a helper container pattern: it renames the old container, creates a new one with the updated image, then spawns a short-lived helper container that stops the old container, starts the new one, and removes the old one on success. If the new container fails to start, the helper restarts the old container as a fallback.
For this to work:
- The Docker socket must be bind-mounted at `/var/run/docker.sock`
- The Docker trigger must be enabled
- Drydock must be able to create temporary helper containers
If the container ends up stopped, check docker logs drydock-old-* for the helper container output. See Self-Update for the full sequence.
Legacy WUD environment variables or labels
Drydock accepts legacy `WUD_*` environment variables and `wud.*` container labels as a fallback, but emits deprecation warnings when they are used. Each legacy fallback increments the `dd_legacy_input_total{source,key}` Prometheus counter.
To convert your configuration in place, use the built-in migration CLI:
```shell
# Preview changes without writing
docker exec drydock node dist/index.js config migrate --dry-run

# Apply migration to a specific file
docker exec drydock node dist/index.js config migrate --file /path/to/compose.yml
```

See the Quick start for the recommended `DD_*` / `dd.*` configuration format.
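As a mental model of what the migration does, the renaming rule is essentially a prefix swap. This is an assumed simplification; the CLI is the authoritative tool and may also handle keys whose names changed entirely:

```javascript
// Prefix swap for legacy keys: WUD_* env vars become DD_*, and wud.*
// labels become dd.*. Assumed simplification of the migration CLI's job.
function migrateKey(key) {
  if (key.startsWith('WUD_')) return 'DD_' + key.slice(4);
  if (key.startsWith('wud.')) return 'dd.' + key.slice(4);
  return key; // already in the new format
}

console.log(migrateKey('WUD_LOG_LEVEL'));   // DD_LOG_LEVEL
console.log(migrateKey('wud.tag.include')); // dd.tag.include
```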