consentos/docker-compose.yml
James Cottrill 84e41857c3 Bundle banner into admin-ui image and add prod docker-compose (#1)
* fix: bundle banner into admin-ui image and serve at origin root

The loader at apps/banner/src/loader.ts derives the bundle URL from
its own origin, not its directory, so ``consent-loader.js`` and
``consent-bundle.js`` must live at the web root rather than under a
sub-path. The upstream admin-ui image never bundled the banner at
all, forcing deployment overlays to paper over the gap — and those
overlays misplaced the files under ``/banner/``.

Fold the banner build into ``apps/admin-ui/Dockerfile`` as an extra
stage, move its output to ``public/`` so Vite emits it at the image
root, and add CORS + caching rules for the two scripts in
``nginx.conf`` ahead of the SPA fallback. Switch the root
``docker-compose.yml`` build context to the repo root (with the
dockerignore trimmed accordingly) so one image now covers admin + CDN.

Also drop the published sourcemap for ``consent-bundle.js`` — the
bundle is minified and served cross-origin; shipping a source map to
anyone inspecting a customer page isn't something we want.

* feat: add docker-compose.prod.yml for single-host deployment

Add a production-targeted compose file alongside the existing dev one.
Operators running ConsentOS on a single host (the OSS quick-start
path) now have a canonical compose to point ``-f`` at, instead of
hand-rolling overlays in their deployment repo.

Differences from ``docker-compose.yml`` (dev) — see the file header
for the full list, but the load-bearing ones are (sketched roughly
after the list):

* A one-shot ``consentos-bootstrap`` init container owns alembic
  migrations and the initial-admin provisioning. Every long-running
  service that touches the database waits for it via
  ``service_completed_successfully``.
* Postgres credentials and Redis password come from the ``.env``
  file rather than being hardcoded; the dev compose keeps the
  ``consentos:consentos`` defaults so ``make up`` still just works.
* All host-bound ports are scoped to ``127.0.0.1`` so a reverse
  proxy on the host (Caddy in the reference deployment) can
  terminate TLS in front of them.
* The scanner gets a scoped ``environment:`` block instead of
  ``env_file: .env``. Sharing the env file caused vars like
  ``PORT`` to leak into ``ScannerSettings`` and rebind the service
  off its default ``8001``, which silently broke
  ``SCANNER_SERVICE_URL`` for the worker.
* ``shm_size: 1gb`` on the scanner — Playwright/Chromium crashes
  under the default 64 MB ``/dev/shm`` on heavy pages.
* ``consentos-admin`` builds with the repo root as the context so
  the upstream ``apps/admin-ui/Dockerfile`` (added in the previous
  commit) can pull ``apps/banner/`` in alongside ``apps/admin-ui/``
  and bundle ``consent-loader.js`` / ``consent-bundle.js`` at the
  nginx root.
* Per-service ``mem_limit`` and dependency-aware healthchecks so
  ``docker compose up -d`` gives a consistent, observable start.
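
To make those bullets concrete, here is a trimmed, illustrative sketch
of the shape ``docker-compose.prod.yml`` takes. The shipped file is the
source of truth: the bootstrap command, host ports, and memory values
below are assumptions, and redis, the celery worker/beat, and most
healthchecks are omitted. Only the structural points from the list (the
init container gated on via ``service_completed_successfully``, the
``127.0.0.1`` port bindings, the scanner's scoped environment and
``shm_size``, the repo-root build context for ``consentos-admin``) are
meant to carry over.

services:
  postgres:
    image: postgres:16-alpine
    ports:
      - "127.0.0.1:5432:5432"              # only reachable from the host itself
    environment:
      POSTGRES_USER: ${POSTGRES_USER}      # credentials come from .env, not hardcoded
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER}"]
      interval: 5s
      timeout: 5s
      retries: 5

  # One-shot init container: owns alembic migrations and initial-admin
  # provisioning, then exits. The command shown is a placeholder; the
  # shipped file defines the real one (which also provisions the admin).
  consentos-bootstrap:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    command: ["sh", "-c", "alembic upgrade head"]   # placeholder
    env_file:
      - .env
    restart: "no"
    depends_on:
      postgres:
        condition: service_healthy

  api:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    ports:
      - "127.0.0.1:8000:8000"              # TLS is terminated by the host proxy (Caddy)
    env_file:
      - .env
    mem_limit: 512m                        # value illustrative
    depends_on:
      consentos-bootstrap:
        condition: service_completed_successfully

  scanner:
    build:
      context: ./apps/scanner
      dockerfile: Dockerfile
    # Scoped env block instead of env_file: .env, so vars like PORT can't
    # leak into ScannerSettings and move the service off its default 8001.
    environment:
      HOST: "0.0.0.0"
      PORT: "8001"
      CRAWLER_HEADLESS: "true"
    shm_size: 1gb                          # Chromium crashes under the 64 MB default /dev/shm
    mem_limit: 2g                          # value illustrative

  consentos-admin:
    build:
      context: .                           # repo root, so apps/banner/ is inside the build context
      dockerfile: apps/admin-ui/Dockerfile
    ports:
      - "127.0.0.1:5173:80"                # host port illustrative

Once ``.env`` is filled in, this is the file operators point
``docker compose -f docker-compose.prod.yml up -d`` at.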
2026-04-14 13:03:36 +01:00


services:
  api:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      DATABASE_URL: postgresql+asyncpg://consentos:consentos@postgres:5432/consentos
      REDIS_URL: redis://redis:6379/0
    volumes:
      - ./apps/api:/app
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: consentos
      POSTGRES_PASSWORD: consentos
      POSTGRES_DB: consentos
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U consentos"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  scanner:
    build:
      context: ./apps/scanner
      dockerfile: Dockerfile
    ports:
      - "8001:8001"
    environment:
      HOST: "0.0.0.0"
      PORT: "8001"
      LOG_LEVEL: INFO
      CRAWLER_HEADLESS: "true"
    depends_on:
      - api
    restart: unless-stopped

  celery-worker:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    command: >
      celery -A src.celery_app worker
      --loglevel=info
      --concurrency=2
    env_file:
      - .env
    environment:
      DATABASE_URL: postgresql+asyncpg://consentos:consentos@postgres:5432/consentos
      REDIS_URL: redis://redis:6379/0
      SCANNER_SERVICE_URL: http://scanner:8001
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      scanner:
        condition: service_started
    restart: unless-stopped

  celery-beat:
    build:
      context: ./apps/api
      dockerfile: Dockerfile
    command: >
      celery -A src.celery_app beat
      --loglevel=info
    env_file:
      - .env
    environment:
      DATABASE_URL: postgresql+asyncpg://consentos:consentos@postgres:5432/consentos
      REDIS_URL: redis://redis:6379/0
    depends_on:
      redis:
        condition: service_healthy
    restart: unless-stopped

  admin-ui:
    build:
      # Context is the repo root so the Dockerfile can pull in
      # apps/banner/ alongside apps/admin-ui/ and bake both into
      # a single nginx image.
      context: .
      dockerfile: apps/admin-ui/Dockerfile
    ports:
      - "5173:80"
    depends_on:
      - api
    restart: unless-stopped

volumes:
  pgdata:
  redisdata: