Bundle banner into admin-ui image and add prod docker-compose (#1)

* fix: bundle banner into admin-ui image and serve at origin root

The loader at apps/banner/src/loader.ts derives the bundle URL from
its own origin, not its directory, so ``consent-loader.js`` and
``consent-bundle.js`` must live at the web root rather than under a
sub-path. The upstream admin-ui image never bundled the banner at
all, forcing deployment overlays to paper over the gap — and those
overlays misplaced the files under ``/banner/``.
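
The loader itself isn't shown in this diff, but the origin-derivation it performs can be sketched as follows (a minimal sketch; `resolveBundleUrl` and its signature are assumptions for illustration, not the real loader API):

```typescript
// Sketch only: the real logic lives in apps/banner/src/loader.ts.
// The key behaviour is that the bundle URL is built from the loader
// script's *origin*, so any path component of its own URL is dropped.
function resolveBundleUrl(loaderSrc: string): string {
  const origin = new URL(loaderSrc).origin; // scheme + host + port only
  return `${origin}/consent-bundle.js`;     // always at the web root
}

// A loader misplaced under /banner/ still requests the bundle at the
// origin root, which is why the overlay layout broke:
resolveBundleUrl("https://cdn.example.com/banner/consent-loader.js");
// → "https://cdn.example.com/consent-bundle.js"
```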

Fold the banner build into ``apps/admin-ui/Dockerfile`` as an extra
stage, move its output to ``public/`` so Vite emits it at the image
root, and add CORS + caching rules for the two scripts in
``nginx.conf`` ahead of the SPA fallback. Switch the root
``docker-compose.yml`` build context to the repo root (with the
dockerignore trimmed accordingly) so one image now covers admin + CDN.
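
The context switch can be sketched in compose terms (the `consentos-admin` service name appears later in this commit; the exact stanza here is an assumption):

```yaml
services:
  consentos-admin:
    build:
      # Repo root as context so the Dockerfile can COPY apps/banner/
      # alongside apps/admin-ui/; .dockerignore keeps the context small.
      context: .
      dockerfile: apps/admin-ui/Dockerfile
```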

Also drop the published sourcemap for ``consent-bundle.js`` — the
bundle is minified and loaded cross-origin, and shipping a source map
to anyone inspecting a customer page isn't something we want.

* feat: add docker-compose.prod.yml for single-host deployment

Add a production-targeted compose file alongside the existing dev one.
Operators running ConsentOS on a single host (the OSS quick-start
path) now have a canonical compose to point ``-f`` at, instead of
hand-rolling overlays in their deployment repo.

Differences from ``docker-compose.yml`` (dev) — see the file header
for the full list, but the load-bearing ones are:

* A one-shot ``consentos-bootstrap`` init container owns alembic
  migrations and the initial-admin provisioning. Every long-running
  service that touches the database waits for it via
  ``service_completed_successfully``.
* Postgres credentials and Redis password come from the ``.env``
  file rather than being hardcoded; the dev compose keeps the
  ``consentos:consentos`` defaults so ``make up`` still just works.
* All host-bound ports are scoped to ``127.0.0.1`` so a reverse
  proxy on the host (Caddy in the reference deployment) can
  terminate TLS in front of them.
* The scanner gets a scoped ``environment:`` block instead of
  ``env_file: .env``. Sharing the env file caused vars like
  ``PORT`` to leak into ``ScannerSettings`` and rebind the service
  off its default ``8001``, which silently broke
  ``SCANNER_SERVICE_URL`` for the worker.
* ``shm_size: 1gb`` on the scanner — Playwright/Chromium crashes
  under the default 64 MB ``/dev/shm`` on heavy pages.
* ``consentos-admin`` builds with the repo root as the context so
  the upstream ``apps/admin-ui/Dockerfile`` (added in the previous
  commit) can pull ``apps/banner/`` in alongside ``apps/admin-ui/``
  and bundle ``consent-loader.js`` / ``consent-bundle.js`` at the
  nginx root.
* Per-service ``mem_limit`` and dependency-aware healthchecks so
  ``docker compose up -d`` gives a consistent, observable start.
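
Put together, the load-bearing differences read roughly like this in compose form (a sketch, not the real file: service names other than ``consentos-bootstrap``, the memory values, the variable names, and the health endpoint are all illustrative assumptions):

```yaml
services:
  consentos-bootstrap:
    restart: "no"             # one-shot: migrations + initial admin, then exit 0

  consentos-worker:           # hypothetical service name
    env_file: .env
    depends_on:
      consentos-bootstrap:
        condition: service_completed_successfully

  consentos-scanner:          # hypothetical service name
    shm_size: 1gb             # Chromium needs more than the 64 MB /dev/shm default
    mem_limit: 2g             # illustrative value
    environment:              # scoped block, so .env vars like PORT can't leak in
      SCANNER_LOG_LEVEL: info # hypothetical variable
    ports:
      - "127.0.0.1:8001:8001" # host-bound; the host proxy terminates TLS
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/health"]  # endpoint assumed
      interval: 10s
      retries: 5
```
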
Author: James Cottrill
Date:   2026-04-14 13:03:36 +01:00 (committed by GitHub)
Parent: fbf26453f2
Commit: 84e41857c3

6 changed files with 274 additions and 10 deletions


@@ -1,11 +1,29 @@
-FROM node:20-slim AS builder
-WORKDIR /app
-COPY package.json package-lock.json ./
+# Build context is the repo root so we can see apps/banner/ alongside
+# apps/admin-ui/. A .dockerignore at the repo root keeps this cheap.
+
+# ── Stage 1: build the banner bundle ────────────────────────────────
+FROM node:20-slim AS banner-builder
+WORKDIR /build/banner
+COPY apps/banner/package.json apps/banner/package-lock.json ./
 RUN npm ci
-COPY . .
+COPY apps/banner/ .
 RUN npm run build
 
+# ── Stage 2: build the admin UI ─────────────────────────────────────
+FROM node:20-slim AS admin-builder
+WORKDIR /build/admin
+COPY apps/admin-ui/package.json apps/admin-ui/package-lock.json ./
+RUN npm ci
+COPY apps/admin-ui/ .
+# Drop the banner build output at the web root so it's served as
+# /consent-loader.js and /consent-bundle.js. The loader resolves the
+# bundle URL from its own origin (see apps/banner/src/loader.ts), so
+# both files must live at the origin root — not under a sub-path.
+COPY --from=banner-builder /build/banner/dist/ ./public/
+RUN npx vite build
+
+# ── Stage 3: serve with nginx ───────────────────────────────────────
 FROM nginx:alpine
-COPY --from=builder /app/dist /usr/share/nginx/html
-COPY nginx.conf /etc/nginx/conf.d/default.conf
+COPY --from=admin-builder /build/admin/dist /usr/share/nginx/html
+COPY apps/admin-ui/nginx.conf /etc/nginx/conf.d/default.conf
 EXPOSE 80


@@ -3,7 +3,27 @@ server {
     root /usr/share/nginx/html;
     index index.html;
 
-    # SPA fallback — serve index.html for all routes
+    # Banner entry points — cross-origin script loads from customer
+    # sites, so they need permissive CORS. Served from the web root
+    # because the loader derives the bundle URL from its own origin
+    # (see apps/banner/src/loader.ts). Declared before the SPA
+    # fallback so nginx doesn't rewrite them to index.html when the
+    # files aren't yet built in dev.
+    location = /consent-loader.js {
+        add_header Access-Control-Allow-Origin "*" always;
+        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
+        add_header Cache-Control "public, max-age=3600" always;
+        try_files $uri =404;
+    }
+
+    location = /consent-bundle.js {
+        add_header Access-Control-Allow-Origin "*" always;
+        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
+        add_header Cache-Control "public, max-age=3600" always;
+        try_files $uri =404;
+    }
+
+    # SPA fallback — serve index.html for all other routes
     location / {
         try_files $uri $uri/ /index.html;
     }