Bundle banner into admin-ui image and add prod docker-compose (#1)

* fix: bundle banner into admin-ui image and serve at origin root

The loader at apps/banner/src/loader.ts derives the bundle URL from
its own origin, not its directory, so ``consent-loader.js`` and
``consent-bundle.js`` must live at the web root rather than under a
sub-path. The upstream admin-ui image never bundled the banner at
all, forcing deployment overlays to paper over the gap — and those
overlays misplaced the files under ``/banner/``.
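
The mechanics, as a minimal TypeScript sketch (illustrative, not the
verbatim loader.ts; the CDN host is made up):

    // The loader resolves the bundle from its own ORIGIN, discarding
    // any directory it happens to be served from.
    const script = document.currentScript as HTMLScriptElement;
    const origin = new URL(script.src).origin; // e.g. "https://cdn.example.com"

    // Even if the loader were fetched from /banner/consent-loader.js,
    // this still points at the origin root:
    const bundle = document.createElement('script');
    bundle.src = `${origin}/consent-bundle.js`;
    bundle.async = true;
    document.head.appendChild(bundle);

So an overlay that parks the files under ``/banner/`` serves the
loader fine but 404s the bundle request it emits.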

Fold the banner build into ``apps/admin-ui/Dockerfile`` as an extra
stage, move its output to ``public/`` so Vite emits it at the image
root, and add CORS + caching rules for the two scripts in
``nginx.conf`` ahead of the SPA fallback. Switch the root
``docker-compose.yml`` build context to the repo root (with the
dockerignore trimmed accordingly) so one image now covers admin + CDN.

Also drop the published sourcemap for ``consent-bundle.js`` — the
bundle is minified and cross-origin, shipping a map to anyone
inspecting a customer page isn't something we want.

* feat: add docker-compose.prod.yml for single-host deployment

Add a production-targeted compose file alongside the existing dev one.
Operators running ConsentOS on a single host (the OSS quick-start
path) now have a canonical compose to point ``-f`` at, instead of
hand-rolling overlays in their deployment repo.
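
For example, from a checkout with a populated ``.env`` at the repo
root (paths assumed; adjust to taste):

    docker compose -f docker-compose.prod.yml up -d
    docker compose -f docker-compose.prod.yml ps   # watch healthchecks settle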

Differences from ``docker-compose.yml`` (dev) — see the file header
for the full list, but the load-bearing ones are:

* A one-shot ``consentos-bootstrap`` init container owns alembic
  migrations and the initial-admin provisioning. Every long-running
  service that touches the database waits for it via
  ``service_completed_successfully``.
* Postgres credentials and Redis password come from the ``.env``
  file rather than being hardcoded; the dev compose keeps the
  ``consentos:consentos`` defaults so ``make up`` still just works.
* All host-bound ports are scoped to ``127.0.0.1`` so a reverse
  proxy on the host (Caddy in the reference deployment) can
  terminate TLS in front of them.
* The scanner gets a scoped ``environment:`` block instead of
  ``env_file: .env``. Sharing the env file caused vars like
  ``PORT`` to leak into ``ScannerSettings`` and rebind the service
  off its default ``8001``, which silently broke
  ``SCANNER_SERVICE_URL`` for the worker.
* ``shm_size: 1gb`` on the scanner — Playwright/Chromium crashes
  under the default 64 MB ``/dev/shm`` on heavy pages.
* ``consentos-admin`` builds with the repo root as the context so
  the upstream ``apps/admin-ui/Dockerfile`` (added in the previous
  commit) can pull ``apps/banner/`` in alongside ``apps/admin-ui/``
  and bundle ``consent-loader.js`` / ``consent-bundle.js`` at the
  nginx root.
* Per-service memory limits (``deploy.resources.limits``) plus
  healthchecks and dependency ordering, so ``docker compose up -d``
  gives a consistent, observable start.
James Cottrill, 2026-04-14 13:03:36 +01:00 (committed by GitHub)
parent fbf26453f2 · commit 84e41857c3
6 changed files with 274 additions and 10 deletions

.dockerignore

@@ -3,6 +3,15 @@
 **/node_modules
 **/.venv
 **/*.pyc
+**/dist
+**/build
+**/.pytest_cache
+**/.ruff_cache
+**/.mypy_cache
 .env
 *.md
 docs/
+sdks/
+tests/load/
+helm/
+scripts/

apps/admin-ui/Dockerfile

@@ -1,11 +1,29 @@
-FROM node:20-slim AS builder
-WORKDIR /app
-COPY package.json package-lock.json ./
+# Build context is the repo root so we can see apps/banner/ alongside
+# apps/admin-ui/. A .dockerignore at the repo root keeps this cheap.
+
+# ── Stage 1: build the banner bundle ────────────────────────────────
+FROM node:20-slim AS banner-builder
+WORKDIR /build/banner
+COPY apps/banner/package.json apps/banner/package-lock.json ./
 RUN npm ci
-COPY . .
+COPY apps/banner/ .
+RUN npm run build
+
+# ── Stage 2: build the admin UI ─────────────────────────────────────
+FROM node:20-slim AS admin-builder
+WORKDIR /build/admin
+COPY apps/admin-ui/package.json apps/admin-ui/package-lock.json ./
+RUN npm ci
+COPY apps/admin-ui/ .
+# Drop the banner build output at the web root so it's served as
+# /consent-loader.js and /consent-bundle.js. The loader resolves the
+# bundle URL from its own origin (see apps/banner/src/loader.ts), so
+# both files must live at the origin root — not under a sub-path.
+COPY --from=banner-builder /build/banner/dist/ ./public/
 RUN npx vite build
 
+# ── Stage 3: serve with nginx ───────────────────────────────────────
 FROM nginx:alpine
-COPY --from=builder /app/dist /usr/share/nginx/html
-COPY nginx.conf /etc/nginx/conf.d/default.conf
+COPY --from=admin-builder /build/admin/dist /usr/share/nginx/html
+COPY apps/admin-ui/nginx.conf /etc/nginx/conf.d/default.conf
 EXPOSE 80

apps/admin-ui/nginx.conf

@@ -3,7 +3,27 @@ server {
     root /usr/share/nginx/html;
     index index.html;
 
-    # SPA fallback — serve index.html for all routes
+    # Banner entry points — cross-origin script loads from customer
+    # sites, so they need permissive CORS. Served from the web root
+    # because the loader derives the bundle URL from its own origin
+    # (see apps/banner/src/loader.ts). Declared before the SPA
+    # fallback so nginx doesn't rewrite them to index.html when the
+    # files aren't yet built in dev.
+    location = /consent-loader.js {
+        add_header Access-Control-Allow-Origin "*" always;
+        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
+        add_header Cache-Control "public, max-age=3600" always;
+        try_files $uri =404;
+    }
+
+    location = /consent-bundle.js {
+        add_header Access-Control-Allow-Origin "*" always;
+        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
+        add_header Cache-Control "public, max-age=3600" always;
+        try_files $uri =404;
+    }
+
+    # SPA fallback — serve index.html for all other routes
     location / {
         try_files $uri $uri/ /index.html;
     }

apps/banner rollup config

@@ -23,7 +23,11 @@ export default [
       file: 'dist/consent-bundle.js',
       format: 'iife',
       name: 'CmpBanner',
-      sourcemap: true,
+      // No sourcemap in the published bundle. The file is minified
+      // and served cross-origin from customer sites; shipping the
+      // map would publish our source tree to anyone inspecting the
+      // page. Build locally if you need to debug.
+      sourcemap: false,
     },
     plugins: [
       typescript({ tsconfig: './tsconfig.json', declaration: false }),
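
If you do need a map while debugging locally, one option is to gate it
on an environment variable in the rollup config (a sketch only;
``DEBUG_SOURCEMAP`` is a hypothetical flag, not something this commit
adds):

    // Opt back into sourcemaps for local builds without ever
    // publishing them: DEBUG_SOURCEMAP=1 npm run build
    sourcemap: process.env.DEBUG_SOURCEMAP === '1',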

docker-compose.prod.yml (new file, 210 lines)

@@ -0,0 +1,210 @@
# Single-host production deployment.
#
# Differences from ``docker-compose.yml`` (dev):
#   - Ports bound to ``127.0.0.1`` only — expects a reverse proxy on
#     the host (e.g. Caddy) to terminate TLS and forward.
#   - A one-shot ``consentos-bootstrap`` init container owns all
#     database setup (alembic + initial admin provisioning); every
#     long-running service that touches the DB waits for it via
#     ``service_completed_successfully``.
#   - Per-service resource limits, healthchecks, and dependency
#     ordering so ``docker compose up -d`` gives a consistent start.
#   - The scanner gets its own scoped ``environment:`` block rather
#     than ``env_file: .env`` so unrelated variables (``PORT``,
#     ``HOST``, …) from the shared env can't rebind its settings.
#   - ``shm_size: 1gb`` on the scanner — Playwright/Chromium crashes
#     under the default 64 MB ``/dev/shm``.

services:
  # ── Init container: migrations + initial admin bootstrap ──────────
  consentos-bootstrap:
    build:
      context: apps/api
      dockerfile: Dockerfile
    container_name: consentos-bootstrap
    env_file: .env
    working_dir: /app
    command:
      - "sh"
      - "-c"
      - "python -m alembic upgrade head && python -m src.cli.bootstrap_admin"
    restart: "no"
    depends_on:
      postgres:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 256M

  # ── API ──────────────────────────────────────────────────────────
  consentos-api:
    build:
      context: apps/api
      dockerfile: Dockerfile
    container_name: consentos-api
    env_file: .env
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      start_period: 15s
      retries: 3
    ports:
      - "127.0.0.1:11001:8000"
    deploy:
      resources:
        limits:
          memory: 512M
    depends_on:
      consentos-bootstrap:
        condition: service_completed_successfully
      redis:
        condition: service_healthy

  # ── Celery worker ────────────────────────────────────────────────
  consentos-worker:
    build:
      context: apps/api
      dockerfile: Dockerfile
    container_name: consentos-worker
    env_file: .env
    working_dir: /app
    command: >
      celery -A src.celery_app worker
      --loglevel=info --concurrency=2
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "celery -A src.celery_app inspect ping -d celery@$${HOSTNAME} || exit 1"]
      interval: 30s
      timeout: 10s
      start_period: 30s
      retries: 3
    depends_on:
      consentos-bootstrap:
        condition: service_completed_successfully
      consentos-scanner:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 512M

  # ── Celery beat ──────────────────────────────────────────────────
  consentos-beat:
    build:
      context: apps/api
      dockerfile: Dockerfile
    container_name: consentos-beat
    env_file: .env
    working_dir: /app
    command: >
      celery -A src.celery_app beat
      --loglevel=info
    restart: unless-stopped
    # Beat has no HTTP surface and no inspect endpoint — rely on the
    # container exit status rather than a fake healthcheck so it
    # doesn't permanently show as "unhealthy".
    healthcheck:
      disable: true
    depends_on:
      consentos-bootstrap:
        condition: service_completed_successfully
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 256M

  # ── Scanner (Playwright / Chromium) ──────────────────────────────
  consentos-scanner:
    build:
      context: apps/scanner
      dockerfile: Dockerfile
    container_name: consentos-scanner
    # Scoped environment — do NOT env_file the shared .env here or
    # vars like PORT bleed across and rebind the scanner off its
    # default 8001 (which is what SCANNER_SERVICE_URL expects).
    environment:
      LOG_LEVEL: ${LOG_LEVEL:-INFO}
      CRAWLER_HEADLESS: "true"
      CRAWLER_TIMEOUT_MS: "30000"
      MAX_PAGES_PER_SCAN: "50"
    restart: unless-stopped
    # Chromium crashes under /dev/shm pressure on sites with many
    # iframes or heavy DOM trees. Default is 64 MB — not enough.
    shm_size: "1gb"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
      interval: 30s
      timeout: 10s
      start_period: 30s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 1G

  # ── Admin UI + banner CDN (single nginx image) ───────────────────
  consentos-admin:
    build:
      # Context is the repo root so the Dockerfile can pull in
      # apps/banner/ alongside apps/admin-ui/ and bake the banner
      # output at the nginx root — see apps/admin-ui/Dockerfile.
      context: .
      dockerfile: apps/admin-ui/Dockerfile
    container_name: consentos-admin
    restart: unless-stopped
    ports:
      - "127.0.0.1:11002:80"
    deploy:
      resources:
        limits:
          memory: 128M

  # ── Postgres ─────────────────────────────────────────────────────
  postgres:
    image: postgres:17-alpine
    container_name: consentos-postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 5s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M

  # ── Redis ────────────────────────────────────────────────────────
  redis:
    image: redis:7-alpine
    container_name: consentos-redis
    command: redis-server --requirepass ${REDIS_PASSWORD} --appendonly yes
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD-SHELL", "redis-cli -a $$REDIS_PASSWORD ping"]
      interval: 2s
      timeout: 3s
      retries: 10
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 128M

volumes:
  pgdata:
  redisdata:

docker-compose.yml

@@ -106,8 +106,11 @@ services:
   admin-ui:
     build:
-      context: ./apps/admin-ui
-      dockerfile: Dockerfile
+      # Context is the repo root so the Dockerfile can pull in
+      # apps/banner/ alongside apps/admin-ui/ and bake both into
+      # a single nginx image.
+      context: .
+      dockerfile: apps/admin-ui/Dockerfile
     ports:
       - "5173:80"
     depends_on: