consentos/apps/admin-ui/nginx.conf
James Cottrill 84e41857c3 Bundle banner into admin-ui image and add prod docker-compose (#1)
* fix: bundle banner into admin-ui image and serve at origin root

The loader at apps/banner/src/loader.ts derives the bundle URL from
its own origin, not its directory, so ``consent-loader.js`` and
``consent-bundle.js`` must live at the web root rather than under a
sub-path. The upstream admin-ui image never bundled the banner at
all, forcing deployment overlays to paper over the gap — and those
overlays misplaced the files under ``/banner/``.
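
A minimal TypeScript sketch of that pattern (illustrative only, not
the actual ``loader.ts`` source):

    // Assumes the loader runs as a classic <script> tag, so
    // document.currentScript points at the loader's own element.
    const tag = document.currentScript as HTMLScriptElement | null;
    const origin = tag ? new URL(tag.src).origin : window.location.origin;

    // Only the origin survives: a loader served from
    // https://cdn.example/banner/consent-loader.js would still request
    // https://cdn.example/consent-bundle.js from the root.
    const bundle = document.createElement("script");
    bundle.src = `${origin}/consent-bundle.js`;
    bundle.async = true;
    document.head.appendChild(bundle);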

Fold the banner build into ``apps/admin-ui/Dockerfile`` as an extra
stage, move its output to ``public/`` so Vite emits it at the image
root, and add CORS + caching rules for the two scripts in
``nginx.conf`` ahead of the SPA fallback. Switch the root
``docker-compose.yml`` build context to the repo root (with
``.dockerignore`` trimmed accordingly) so one image covers both the
admin UI and the banner CDN.
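
The shape of the multi-stage build, as a sketch; stage names, base
images, and paths are assumptions, not the actual Dockerfile:

    # Stage 1 (assumed): build the banner from the repo-root context.
    FROM node:20-alpine AS banner
    WORKDIR /banner
    COPY apps/banner/ ./
    RUN npm ci && npm run build   # emits consent-loader.js / consent-bundle.js

    # Stage 2 (assumed): build admin-ui with the banner output staged in
    # public/, which Vite copies verbatim to the build root instead of
    # hashing it into /assets/.
    FROM node:20-alpine AS admin
    WORKDIR /admin
    COPY apps/admin-ui/ ./
    COPY --from=banner /banner/dist/consent-*.js public/
    RUN npm ci && npm run build

    # Final stage: nginx serves both the SPA and the banner scripts.
    FROM nginx:alpine
    COPY --from=admin /admin/dist/ /usr/share/nginx/html/
    COPY apps/admin-ui/nginx.conf /etc/nginx/conf.d/default.conf

The ``COPY apps/banner/`` line is what forces the build context up to
the repo root.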

Also drop the published sourcemap for ``consent-bundle.js``: the
bundle is minified and loaded cross-origin, and shipping a map to
anyone inspecting a customer page isn't something we want.

* feat: add docker-compose.prod.yml for single-host deployment

Add a production-targeted compose file alongside the existing dev one.
Operators running ConsentOS on a single host (the OSS quick-start
path) now have a canonical compose to point ``-f`` at, instead of
hand-rolling overlays in their deployment repo.
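
With the file in place, the quick-start invocation is the standard
one:

    docker compose -f docker-compose.prod.yml up -d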

Differences from ``docker-compose.yml`` (dev) — see the file header
for the full list, but the load-bearing ones are:

* A one-shot ``consentos-bootstrap`` init container owns alembic
  migrations and the initial-admin provisioning. Every long-running
  service that touches the database waits for it via
  ``service_completed_successfully`` (sketched after this list).
* Postgres credentials and Redis password come from the ``.env``
  file rather than being hardcoded; the dev compose keeps the
  ``consentos:consentos`` defaults so ``make up`` still just works.
* All host-bound ports are scoped to ``127.0.0.1`` so a reverse
  proxy on the host (Caddy in the reference deployment) can
  terminate TLS in front of them.
* The scanner gets a scoped ``environment:`` block instead of
  ``env_file: .env``. Sharing the env file caused vars like
  ``PORT`` to leak into ``ScannerSettings`` and rebind the service
  off its default ``8001``, which silently broke
  ``SCANNER_SERVICE_URL`` for the worker.
* ``shm_size: 1gb`` on the scanner — Playwright/Chromium crashes
  under the default 64 MB ``/dev/shm`` on heavy pages.
* ``consentos-admin`` builds with the repo root as the context so
  the upstream ``apps/admin-ui/Dockerfile`` (added in the previous
  commit) can pull ``apps/banner/`` in alongside ``apps/admin-ui/``
  and bundle ``consent-loader.js`` / ``consent-bundle.js`` at the
  nginx root.
* Per-service ``mem_limit`` and dependency-aware healthchecks so
  ``docker compose up -d`` gives a consistent, observable start.
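
A condensed sketch of how those pieces fit together in compose
syntax; the service names follow the text above, but images, paths,
and values are illustrative:

    services:
      consentos-bootstrap:
        image: consentos/api:latest           # assumed image
        command: ["./scripts/bootstrap.sh"]   # alembic + initial admin; path assumed
        env_file: .env
        restart: "no"                         # one-shot init container

      api:
        image: consentos/api:latest           # assumed image
        env_file: .env
        depends_on:
          consentos-bootstrap:
            condition: service_completed_successfully
        ports:
          - "127.0.0.1:8000:8000"             # host-local; Caddy terminates TLS in front
        mem_limit: 512m                       # value illustrative

      scanner:
        image: consentos/scanner:latest       # assumed image
        # Scoped allow-list instead of env_file: .env, so generic vars
        # like PORT can't leak into ScannerSettings and move the
        # service off its default 8001.
        environment:
          LOG_LEVEL: info                     # illustrative entry
        shm_size: 1gb                         # Chromium needs more than the 64 MB default /dev/shm
        mem_limit: 2g                         # value illustrative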
2026-04-14 13:03:36 +01:00

Nginx Configuration File

server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Banner entry points — cross-origin script loads from customer
    # sites, so they need permissive CORS. Served from the web root
    # because the loader derives the bundle URL from its own origin
    # (see apps/banner/src/loader.ts). Declared before the SPA
    # fallback so nginx doesn't rewrite them to index.html when the
    # files aren't yet built in dev.
    location = /consent-loader.js {
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
        add_header Cache-Control "public, max-age=3600" always;
        try_files $uri =404;
    }

    location = /consent-bundle.js {
        add_header Access-Control-Allow-Origin "*" always;
        add_header Access-Control-Allow-Methods "GET, OPTIONS" always;
        add_header Cache-Control "public, max-age=3600" always;
        try_files $uri =404;
    }

    # SPA fallback — serve index.html for all other routes
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Proxy API requests to the backend.
    # Uses Docker's embedded DNS with a variable so nginx resolves at
    # request time rather than at startup — prevents a crash if api is
    # temporarily down.
    location /api/ {
        resolver 127.0.0.11 valid=10s;
        set $upstream http://api:8000;
        proxy_pass $upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Cache static assets
    location /assets/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}