Emdash source with visual editor image upload fix
Fixes:
1. media.ts: wrap placeholder generation in try-catch
2. toolbar.ts: check r.ok, display error message in popover
142
infra/perf-monitor/README.md
Normal file
@@ -0,0 +1,142 @@
# Emdash Perf Monitor

Tracks cold start / TTFB of the emdash demo sites over time from multiple regions. Two sites are measured in parallel so the effect of Astro's experimental cache provider can be compared head-to-head:

- `blog` -- `blog-demo.emdashcms.com` (baseline, catalog Astro)
- `cache` -- `cache-demo.emdashcms.com` (prerelease Astro with `cacheCloudflare()` enabled)

Each measurement row is tagged with a `site` column matching one of those ids.

## Architecture

- **Coordinator Worker** (`emdash-perf-coordinator`) owns the D1 database, cron trigger, queue consumer, HTTP API, and frontend dashboard. Served at `https://perf.emdashcms.com`.
- **4 Probe Workers** (`emdash-perf-probe-{use,euw,ape,aps}`) are placed near AWS regions via `placement.region`. They receive measurement requests from the coordinator via service bindings and run `fetch()` timing from their placed location (a sketch of the coordinator side of that call follows this list).
- **D1 database** (`emdash_perf`) stores all measurements, tagged by `source`: `deploy` (queue-triggered, has SHA + PR) or `cron` (ambient baseline, untagged).
- **Cloudflare Queue** (`emdash-perf-deploy-events`) subscribes to `cf.workersBuilds.worker.build.succeeded` events. The coordinator consumes these, filters for the baseline demo Worker, resolves the PR via the GitHub API, and runs a measurement against every registered site. This is the primary attribution path; see `src/routes.ts` for the site registry.
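
For orientation, this is a minimal sketch of what the coordinator side of the service-binding call looks like. The coordinator source is not part of this diff; the binding parameter, import path, and `warmRequests` value are illustrative assumptions.

```ts
import type { MeasureRequest, MeasureResponse } from "../probe/src/measure.js";

// Sketch only: `probe` is one of the coordinator's service bindings to a probe Worker.
async function runProbe(probe: Fetcher, region: string): Promise<MeasureResponse> {
  const body: MeasureRequest & { region: string } = {
    targetUrl: "https://blog-demo.emdashcms.com",
    routes: [{ path: "/", label: "Home" }],
    warmRequests: 5, // assumed sample count
    region,
  };
  // The probe only looks at the method and JSON body; the URL is arbitrary.
  const res = await probe.fetch("https://probe.internal/", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`probe ${region} returned ${res.status}`);
  return res.json<MeasureResponse>();
}
```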

All five Workers are built from this directory by the Cloudflare Vite plugin -- the coordinator entry is `src/index.ts` and the four probes are defined as `auxiliaryWorkers` in `vite.config.ts`.
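
The Vite config likely resembles the following sketch; the probe wrangler config filenames are assumptions, not taken from the repo.

```ts
import { defineConfig } from "vite";
import { cloudflare } from "@cloudflare/vite-plugin";

export default defineConfig({
  plugins: [
    cloudflare({
      configPath: "./wrangler.jsonc", // coordinator
      auxiliaryWorkers: [
        { configPath: "./probe/wrangler.use.jsonc" },
        { configPath: "./probe/wrangler.euw.jsonc" },
        { configPath: "./probe/wrangler.ape.jsonc" },
        { configPath: "./probe/wrangler.aps.jsonc" },
      ],
    }),
  ],
});
```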

## Measurement triggers

| Trigger                | When                               | `source` | Sites        | SHA        | PR       | On graph? | Persisted? |
| ---------------------- | ---------------------------------- | -------- | ------------ | ---------- | -------- | --------- | ---------- |
| Queue event            | Every successful `blog-demo` build | `deploy` | all          | from event | resolved | yes       | yes        |
| Cron (`*/30 * * * *`)  | Every 30 min                       | `cron`   | all          | null       | null     | yes       | yes        |
| `pnpm trigger`         | Private/quiet check (default)      | n/a      | all (or one) | n/a        | n/a      | no        | **no**     |
| `pnpm trigger --store` | Manual, persisted                  | `manual` | all (or one) | optional   | optional | **no**    | yes        |

The queue is the deploy-attribution path. The cron is a safety net that fills gaps between deploys and catches regressions the queue might miss.

`pnpm trigger` defaults to ephemeral: the probes run for real, but the coordinator skips the database insert and just returns the results to stdout. Use this for private/local checks you don't want on the dashboard.

Passing `--store` persists the run as `source=manual`. Stored manual runs land in the results table with a yellow `manual` badge but are excluded from the line chart, the summary cards, and the 7-day rolling medians so they don't skew the baseline.

## Manual triggers

```bash
# Default: run the probes, print results, record nothing.
# First invocation opens a browser for Cloudflare Access login; subsequent
# invocations reuse the token until the Access session expires.
pnpm trigger

# Persist the run as source=manual (appears in the results table)
pnpm trigger -- --store --note "pre-cold-start-fix baseline"

# Attach a SHA and/or PR number to a persisted run
pnpm trigger -- --store --sha 1a2b3c4 --pr 532 --note "PR #532 preview"
```

Auth is handled by a Cloudflare Access policy on `POST /api/trigger`.

## First-time setup

```bash
# 1. Create the D1 database and apply the initial schema
wrangler d1 create emdash_perf
# copy the database_id into wrangler.jsonc

wrangler d1 execute emdash_perf --remote --file=schema.sql
pnpm db:migrations:apply  # any incremental migrations on top

# 2. Create the deploy events queue and DLQ
wrangler queues create emdash-perf-deploy-events
wrangler queues create emdash-perf-deploy-events-dlq

# 3. Build and deploy all 5 Workers
pnpm deploy

# 4. Subscribe the queue to Workers Builds events.
# (No wrangler command for this yet -- use the CF dashboard or API:
# https://developers.cloudflare.com/queues/event-subscriptions/manage-event-subscriptions/)
# Source: Workers Builds
# Events: build.succeeded (at minimum)
# Queue: emdash-perf-deploy-events

# 5. (Optional, to enable manual triggers) Add a Cloudflare Access policy
# on POST /api/trigger. See "Manual triggers" above.
```

No secrets required. PR lookup hits the public GitHub API unauthenticated (60 req/hr limit, plenty for one lookup per deploy).
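
For context on steps 1-4, the coordinator's `wrangler.jsonc` presumably wires these pieces together roughly as below. This is a sketch only: the file is not part of this diff, and the binding and service names are assumptions.

```jsonc
{
  "name": "emdash-perf-coordinator",
  "main": "src/index.ts",
  "d1_databases": [
    {
      "binding": "DB", // assumed binding name
      "database_name": "emdash_perf",
      "database_id": "<paste from step 1>",
      "migrations_dir": "migrations"
    }
  ],
  "queues": {
    "consumers": [
      {
        "queue": "emdash-perf-deploy-events",
        "dead_letter_queue": "emdash-perf-deploy-events-dlq",
        "max_retries": 3
      }
    ]
  },
  "triggers": { "crons": ["*/30 * * * *"] },
  "services": [
    { "binding": "PROBE_USE", "service": "emdash-perf-probe-use" } // one per probe region
  ]
}
```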

## Deploy order

The coordinator's service bindings require the probes to exist first. `pnpm deploy` handles this: it builds, deploys all 4 probes, then deploys the coordinator.

## Dev

```bash
pnpm dev # Vite dev server, all 5 Workers via Miniflare
```

Open `http://localhost:5173` for the dashboard. API is at `/api/*`. Queue events can't be exercised locally without manual message publishing -- rely on the live environment or the next cron tick to verify the measurement path.

Local manual trigger (no Access locally):

```bash
curl -sS -X POST http://localhost:5173/api/trigger \
  -H 'content-type: application/json' \
  -d '{"note":"local test"}'
```

## Endpoints

| Endpoint       | Method | Auth      | Purpose                                           |
| -------------- | ------ | --------- | ------------------------------------------------- |
| `/`            | GET    | none      | Dashboard                                         |
| `/api/config`  | GET    | none      | Target URL, available routes and regions          |
| `/api/summary` | GET    | none      | Latest result per route/region + rolling medians  |
| `/api/results` | GET    | none      | Filtered historical results                       |
| `/api/chart`   | GET    | none      | Time series for charting (with PR markers)        |
| `/api/trigger` | POST   | CF Access | Run an ad-hoc measurement, tagged `source=manual` |

All GET endpoints are read-only. `POST /api/trigger` is the only state-changing endpoint and is expected to be protected by a Cloudflare Access policy at the edge.
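
For example, a results query as the dashboard issues it (the query-string parameter names below come from the dashboard code in `public/index.html`; the values are illustrative):

```bash
curl -sS 'https://perf.emdashcms.com/api/results?site=blog&region=use&since=2025-01-01T00:00:00Z&limit=50'
```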

## Schema changes

D1's native migrations are wired up (`migrations_dir` in `wrangler.jsonc`).

```bash
pnpm db:migrations:list   # show pending migrations
pnpm db:migrations:apply  # apply pending migrations
pnpm db:migrations:create # scaffold a new migration file
```

`schema.sql` is the desired end state for fresh installs only. For incremental changes on an existing database, add a file under `migrations/` and apply it -- don't rely on editing `schema.sql` to take effect.

## Types

Binding types come from `wrangler types`, which reads `wrangler.jsonc` and writes `worker-configuration.d.ts`. The generated file is committed so `tsc` doesn't need wrangler to run first.

Re-run after any binding change:

```bash
pnpm cf-typegen
```

## Operational notes

- **Trigger worker name**: `TRIGGER_WORKER_NAME` in `src/routes.ts` is the Worker whose `build.succeeded` event drives deploy-attributed runs. Events for any other Worker are discarded (the cron job still measures every site on its own schedule). Since every registered site rebuilds from the same main-branch commit, one event triggers a measurement for all of them. If the baseline demo is ever renamed, update this constant.
- **Adding a site**: add an entry to `SITES` in `src/routes.ts` with a stable `id` (stored in `perf_results.site`), `targetUrl`, and Worker name (a sketch of the assumed entry shape follows this list). Existing rows continue to use their recorded site id.
- **PR lookup**: hits the public GitHub API unauthenticated (60 req/hr per IP). One call per deploy, so rate limits are a non-issue. If deploy rate ever gets anywhere near that, add a fine-grained PAT via `wrangler secret put GITHUB_TOKEN` and pass it in `src/github.ts`.
- **DLQ**: failed messages retry 3x, then go to `emdash-perf-deploy-events-dlq`. Check this periodically if deploy-attributed results stop appearing.
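
A sketch of the assumed `src/routes.ts` registry shape referenced above. The real file is not in this diff; the constant and field names are inferred from these notes and from the `site.label` / `site.targetUrl` fields the dashboard reads from `/api/config`.

```ts
// Assumed shape only -- not copied from the actual src/routes.ts.
export const TRIGGER_WORKER_NAME = "blog-demo"; // Worker whose build.succeeded event triggers deploy runs

export interface SiteConfig {
  id: string; // stable id stored in perf_results.site
  label: string;
  targetUrl: string;
  workerName: string;
}

export const SITES: SiteConfig[] = [
  { id: "blog", label: "Baseline", targetUrl: "https://blog-demo.emdashcms.com", workerName: "blog-demo" },
  { id: "cache", label: "Cache provider", targetUrl: "https://cache-demo.emdashcms.com", workerName: "cache-demo" },
];
```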
@@ -0,0 +1,12 @@
-- Add columns for Server-Timing capture and free-form run notes.
--
-- cold_server_timings stores the parsed Server-Timing header from the cold
-- request as a JSON object keyed by timing name:
-- { "<name>": { "dur": <number>, "desc"?: <string> } }
-- Only the cold response is stored -- warm requests are aggregated into
-- medians, so keeping N server-timing blobs per route makes no sense.
--
-- note is a free-form label, primarily for manual triggers
-- (e.g. "pre-cold-start-fix baseline") but available for any source.
ALTER TABLE perf_results ADD COLUMN cold_server_timings TEXT;
ALTER TABLE perf_results ADD COLUMN note TEXT;
@@ -0,0 +1,14 @@
-- Add column for the median-per-metric warm Server-Timing snapshot.
--
-- The original capture only stored cold Server-Timing (`cold_server_timings`).
-- That's useful for cold-start investigation but useless for steady-state
-- measurements -- which is what most performance work actually moves.
--
-- `warm_server_timings` stores the median duration per metric across all
-- warm requests in a single probe, in the same JSON shape as
-- `cold_server_timings`:
-- { "<name>": { "dur": <number>, "desc"?: <string> } }
--
-- Null when the target site didn't emit Server-Timing on warm responses, or
-- when no warm requests were issued.
ALTER TABLE perf_results ADD COLUMN warm_server_timings TEXT;
7
infra/perf-monitor/migrations/0003_add_site.sql
Normal file
@@ -0,0 +1,7 @@
-- Tag each measurement with the demo site it came from. Existing rows all
-- belong to the baseline blog-demo; the cache-demo site was added later.

ALTER TABLE perf_results ADD COLUMN site TEXT NOT NULL DEFAULT 'blog';

CREATE INDEX IF NOT EXISTS idx_perf_site_ts ON perf_results(site, timestamp);
CREATE INDEX IF NOT EXISTS idx_perf_site_route_region_ts ON perf_results(site, route, region, timestamp);
24
infra/perf-monitor/package.json
Normal file
@@ -0,0 +1,24 @@
{
  "name": "@emdash-cms/perf-monitor",
  "version": "0.0.1",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "vite dev",
    "build": "vite build",
    "deploy": "pnpm build && pnpm deploy:probes && wrangler deploy",
    "deploy:probes": "for dir in dist/emdash_perf_probe_*; do wrangler deploy -c $dir/wrangler.json; done",
    "cf-typegen": "wrangler types",
    "typecheck": "tsc --noEmit",
    "db:migrations:list": "wrangler d1 migrations list emdash_perf --remote",
    "db:migrations:apply": "wrangler d1 migrations apply emdash_perf --remote",
    "db:migrations:create": "wrangler d1 migrations create emdash_perf",
    "trigger": "node scripts/trigger.mjs"
  },
  "devDependencies": {
    "@cloudflare/vite-plugin": "^1.0.0",
    "typescript": "catalog:",
    "vite": "^6.0.0",
    "wrangler": "catalog:"
  }
}
31
infra/perf-monitor/probe/src/index.ts
Normal file
@@ -0,0 +1,31 @@
/**
 * Perf probe Worker -- deployed per-region with placement hints.
 * Receives measurement requests via service binding fetch(),
 * runs the measurements from its placed location, returns results.
 */

import { measureRoutes } from "./measure.js";
import type { MeasureRequest, MeasureResponse } from "./measure.js";

export default {
  async fetch(request: Request): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }

    try {
      const body = await request.json<MeasureRequest & { region?: string }>();
      const results = await measureRoutes(body);

      const response: MeasureResponse = {
        results,
        probeRegion: body.region ?? "unknown",
      };

      return Response.json(response);
    } catch (err) {
      const message = err instanceof Error ? err.message : "Unknown error";
      return Response.json({ error: message }, { status: 500 });
    }
  },
} satisfies ExportedHandler;
212
infra/perf-monitor/probe/src/measure.ts
Normal file
@@ -0,0 +1,212 @@
|
||||
/** Measurement logic -- runs inside the placed probe Worker. */
|
||||
|
||||
export interface MeasureRequest {
|
||||
targetUrl: string;
|
||||
routes: Array<{ path: string; label: string }>;
|
||||
warmRequests: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Parsed Server-Timing header. Keyed by timing name. `desc` is optional.
|
||||
* Example: { render: { dur: 42, desc: "Page render" }, mw: { dur: 58 } }
|
||||
*/
|
||||
export type ServerTimings = Record<string, { dur: number; desc?: string }>;
|
||||
|
||||
export interface RouteResult {
|
||||
path: string;
|
||||
label: string;
|
||||
coldTtfbMs: number;
|
||||
/**
|
||||
* Median warm-request TTFB. Null if warmRequests was 0 and no warm
|
||||
* samples were taken — caller should fall back to coldTtfbMs in that case.
|
||||
*/
|
||||
warmTtfbMs: number | null;
|
||||
/** p95 warm-request TTFB. Null when no warm samples were taken. */
|
||||
p95TtfbMs: number | null;
|
||||
statusCode: number;
|
||||
cfColo: string | null;
|
||||
cfPlacement: string | null;
|
||||
/** Parsed from the cold response. Null if header absent or unparseable. */
|
||||
coldServerTimings: ServerTimings | null;
|
||||
/**
|
||||
* Median of each Server-Timing metric across all warm requests.
|
||||
* Null if no warm responses carried the header or no warm requests
|
||||
* were issued. Use this to isolate steady-state render/middleware/
|
||||
* runtime cost, independent of cold-start.
|
||||
*/
|
||||
warmServerTimings: ServerTimings | null;
|
||||
}
|
||||
|
||||
export interface MeasureResponse {
|
||||
results: RouteResult[];
|
||||
probeRegion: string;
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse the Server-Timing response header.
|
||||
*
|
||||
* Grammar (W3C Server-Timing spec):
|
||||
* Server-Timing: metric[;param]*[, metric[;param]*]*
|
||||
* param = dur=<number> | desc="<string>" | desc=<token>
|
||||
*
|
||||
* We only extract `dur` and `desc` and silently skip malformed entries.
|
||||
* Unknown params are ignored rather than rejected so future additions
|
||||
* upstream don't cause us to drop data.
|
||||
*/
|
||||
export function parseServerTiming(header: string | null): ServerTimings | null {
|
||||
if (!header) return null;
|
||||
const out: ServerTimings = {};
|
||||
for (const rawEntry of header.split(",")) {
|
||||
const parts = rawEntry.split(";").map((p) => p.trim());
|
||||
const name = parts[0];
|
||||
if (!name) continue;
|
||||
const entry: { dur: number; desc?: string } = { dur: 0 };
|
||||
let sawDur = false;
|
||||
for (const param of parts.slice(1)) {
|
||||
const eq = param.indexOf("=");
|
||||
if (eq === -1) continue;
|
||||
const key = param.slice(0, eq).trim();
|
||||
let value = param.slice(eq + 1).trim();
|
||||
// desc may be quoted
|
||||
if (value.startsWith('"') && value.endsWith('"')) {
|
||||
value = value.slice(1, -1);
|
||||
}
|
||||
if (key === "dur") {
|
||||
const n = Number(value);
|
||||
if (Number.isFinite(n)) {
|
||||
entry.dur = n;
|
||||
sawDur = true;
|
||||
}
|
||||
} else if (key === "desc") {
|
||||
entry.desc = value;
|
||||
}
|
||||
}
|
||||
if (sawDur) out[name] = entry;
|
||||
}
|
||||
return Object.keys(out).length > 0 ? out : null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Measure TTFB for a single URL.
|
||||
* Returns wall-clock time from fetch start to first byte (headers received).
|
||||
*/
|
||||
async function measureTtfb(url: string): Promise<{
|
||||
ttfbMs: number;
|
||||
statusCode: number;
|
||||
cfColo: string | null;
|
||||
cfPlacement: string | null;
|
||||
serverTimings: ServerTimings | null;
|
||||
}> {
|
||||
const start = performance.now();
|
||||
const response = await fetch(url, {
|
||||
method: "GET",
|
||||
headers: {
|
||||
"User-Agent": "emdash-perf-probe/1.0",
|
||||
// Bust any edge cache
|
||||
"Cache-Control": "no-cache",
|
||||
},
|
||||
redirect: "follow",
|
||||
});
|
||||
const ttfbMs = performance.now() - start;
|
||||
|
||||
// Consume the body so the connection is properly released
|
||||
await response.arrayBuffer();
|
||||
|
||||
// Extract cf-ray colo: format is "<ray-id>-<COLO>"
|
||||
const cfRay = response.headers.get("cf-ray");
|
||||
const cfColo = cfRay?.split("-").pop() ?? null;
|
||||
const cfPlacement = response.headers.get("cf-placement");
|
||||
const serverTimings = parseServerTiming(response.headers.get("server-timing"));
|
||||
|
||||
return { ttfbMs, statusCode: response.status, cfColo, cfPlacement, serverTimings };
|
||||
}
|
||||
|
||||
/** Compute the median of an array. */
|
||||
function median(values: number[]): number {
|
||||
const sorted = values.toSorted((a, b) => a - b);
|
||||
const mid = Math.floor(sorted.length / 2);
|
||||
if (sorted.length % 2 === 0) {
|
||||
return (sorted[mid - 1]! + sorted[mid]!) / 2;
|
||||
}
|
||||
return sorted[mid]!;
|
||||
}
|
||||
|
||||
/** Compute p95 of an array. */
|
||||
function p95(values: number[]): number {
|
||||
const sorted = values.toSorted((a, b) => a - b);
|
||||
const idx = Math.ceil(sorted.length * 0.95) - 1;
|
||||
return sorted[Math.max(0, idx)]!;
|
||||
}
|
||||
|
||||
/**
|
||||
* Run measurements for all routes.
|
||||
* For each route: 1 cold request (cache-busted with unique query param),
|
||||
* then N warm requests. Returns structured results.
|
||||
*/
|
||||
export async function measureRoutes(req: MeasureRequest): Promise<RouteResult[]> {
|
||||
const results: RouteResult[] = [];
|
||||
|
||||
for (const route of req.routes) {
|
||||
const url = `${req.targetUrl}${route.path}`;
|
||||
|
||||
// Cold request -- add a unique query param to avoid any isolate reuse
|
||||
const coldUrl = url + (url.includes("?") ? "&" : "?") + `_perf_cold=${Date.now()}`;
|
||||
const cold = await measureTtfb(coldUrl);
|
||||
|
||||
// Warm requests — keep per-metric samples so we can median each one.
|
||||
const warmTimings: number[] = [];
|
||||
const warmMetricSamples: Record<string, { durs: number[]; desc?: string }> = {};
|
||||
let lastStatusCode = cold.statusCode;
|
||||
for (let i = 0; i < req.warmRequests; i++) {
|
||||
const warm = await measureTtfb(url);
|
||||
warmTimings.push(warm.ttfbMs);
|
||||
lastStatusCode = warm.statusCode;
|
||||
if (warm.serverTimings) {
|
||||
for (const [name, entry] of Object.entries(warm.serverTimings)) {
|
||||
const acc = warmMetricSamples[name] ?? { durs: [], desc: entry.desc };
|
||||
acc.durs.push(entry.dur);
|
||||
if (!acc.desc && entry.desc) acc.desc = entry.desc;
|
||||
warmMetricSamples[name] = acc;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Collapse per-metric samples into medians so the stored shape
|
||||
// mirrors coldServerTimings.
|
||||
const warmServerTimings: ServerTimings | null = Object.keys(warmMetricSamples).length
|
||||
? Object.fromEntries(
|
||||
Object.entries(warmMetricSamples).map(([name, { durs, desc }]) => {
|
||||
const entry: { dur: number; desc?: string } = {
|
||||
dur: Math.round(median(durs) * 100) / 100,
|
||||
};
|
||||
if (desc) entry.desc = desc;
|
||||
return [name, entry];
|
||||
}),
|
||||
)
|
||||
: null;
|
||||
|
||||
// Handle the (uncommon) warmRequests=0 case: without warm samples,
|
||||
// median/p95 would compute against an empty array and produce NaN.
|
||||
// Report the cold TTFB in both slots so the row remains valid;
|
||||
// warm timings are reported as null so downstream code knows there's
|
||||
// no warm breakdown to render.
|
||||
const hasWarm = warmTimings.length > 0;
|
||||
const warmTtfbMs = hasWarm ? Math.round(median(warmTimings) * 100) / 100 : null;
|
||||
const p95TtfbMs = hasWarm ? Math.round(p95(warmTimings) * 100) / 100 : null;
|
||||
|
||||
results.push({
|
||||
path: route.path,
|
||||
label: route.label,
|
||||
coldTtfbMs: Math.round(cold.ttfbMs * 100) / 100,
|
||||
warmTtfbMs,
|
||||
p95TtfbMs,
|
||||
statusCode: lastStatusCode,
|
||||
cfColo: cold.cfColo,
|
||||
cfPlacement: cold.cfPlacement,
|
||||
coldServerTimings: cold.serverTimings,
|
||||
warmServerTimings,
|
||||
});
|
||||
}
|
||||
|
||||
return results;
|
||||
}
|
||||
991
infra/perf-monitor/public/index.html
Normal file
@@ -0,0 +1,991 @@
|
||||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="utf-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1" />
|
||||
<title>Emdash Perf Monitor</title>
|
||||
<script src="https://cdn.jsdelivr.net/npm/chart.js@4/dist/chart.umd.min.js"></script>
|
||||
<script src="https://cdn.jsdelivr.net/npm/chartjs-adapter-date-fns@3/dist/chartjs-adapter-date-fns.bundle.min.js"></script>
|
||||
<script src="https://cdn.jsdelivr.net/npm/chartjs-plugin-annotation@3/dist/chartjs-plugin-annotation.min.js"></script>
|
||||
<style>
|
||||
:root {
|
||||
--bg: #0c0c0e;
|
||||
--bg-card: #16161a;
|
||||
--bg-hover: #1e1e24;
|
||||
--border: #2a2a32;
|
||||
--text: #e0e0e6;
|
||||
--text-muted: #888894;
|
||||
--text-dim: #5c5c66;
|
||||
--accent: #4dabf7;
|
||||
--accent-dim: #2a6cb5;
|
||||
--green: #51cf66;
|
||||
--yellow: #fcc419;
|
||||
--red: #ff6b6b;
|
||||
--orange: #ff922b;
|
||||
--purple: #b197fc;
|
||||
--radius: 6px;
|
||||
--font: "SF Mono", "Cascadia Code", "Fira Code", Menlo, monospace;
|
||||
}
|
||||
|
||||
* {
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
box-sizing: border-box;
|
||||
}
|
||||
|
||||
body {
|
||||
font-family: var(--font);
|
||||
background: var(--bg);
|
||||
color: var(--text);
|
||||
font-size: 13px;
|
||||
line-height: 1.5;
|
||||
-webkit-font-smoothing: antialiased;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 1400px;
|
||||
margin: 0 auto;
|
||||
padding: 24px;
|
||||
}
|
||||
|
||||
header {
|
||||
display: flex;
|
||||
align-items: baseline;
|
||||
gap: 16px;
|
||||
margin-bottom: 32px;
|
||||
border-bottom: 1px solid var(--border);
|
||||
padding-bottom: 16px;
|
||||
}
|
||||
|
||||
header h1 {
|
||||
font-size: 16px;
|
||||
font-weight: 600;
|
||||
letter-spacing: -0.02em;
|
||||
}
|
||||
|
||||
header .target {
|
||||
color: var(--text-muted);
|
||||
font-size: 12px;
|
||||
}
|
||||
|
||||
header .last-updated {
|
||||
margin-left: auto;
|
||||
color: var(--text-dim);
|
||||
font-size: 11px;
|
||||
}
|
||||
|
||||
/* Controls */
|
||||
.controls {
|
||||
display: flex;
|
||||
gap: 12px;
|
||||
margin-bottom: 24px;
|
||||
flex-wrap: wrap;
|
||||
align-items: center;
|
||||
}
|
||||
|
||||
.controls label {
|
||||
color: var(--text-muted);
|
||||
font-size: 11px;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.05em;
|
||||
}
|
||||
|
||||
.controls select {
|
||||
background: var(--bg-card);
|
||||
color: var(--text);
|
||||
border: 1px solid var(--border);
|
||||
padding: 6px 10px;
|
||||
border-radius: var(--radius);
|
||||
font-family: var(--font);
|
||||
font-size: 12px;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
.controls select:hover {
|
||||
border-color: var(--accent-dim);
|
||||
}
|
||||
|
||||
/* Summary cards */
|
||||
.summary-grid {
|
||||
display: grid;
|
||||
grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
|
||||
gap: 12px;
|
||||
margin-bottom: 32px;
|
||||
}
|
||||
|
||||
.summary-card {
|
||||
background: var(--bg-card);
|
||||
border: 1px solid var(--border);
|
||||
border-radius: var(--radius);
|
||||
padding: 14px 16px;
|
||||
}
|
||||
|
||||
.summary-card .label {
|
||||
font-size: 11px;
|
||||
color: var(--text-muted);
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.05em;
|
||||
margin-bottom: 6px;
|
||||
}
|
||||
|
||||
.summary-card .value {
|
||||
font-size: 22px;
|
||||
font-weight: 600;
|
||||
letter-spacing: -0.03em;
|
||||
}
|
||||
|
||||
.summary-card .meta {
|
||||
font-size: 11px;
|
||||
color: var(--text-dim);
|
||||
margin-top: 4px;
|
||||
}
|
||||
|
||||
.good {
|
||||
color: var(--green);
|
||||
}
|
||||
.warn {
|
||||
color: var(--yellow);
|
||||
}
|
||||
.bad {
|
||||
color: var(--red);
|
||||
}
|
||||
|
||||
/* Chart area */
|
||||
.chart-section {
|
||||
margin-bottom: 32px;
|
||||
}
|
||||
|
||||
.chart-section h2 {
|
||||
font-size: 13px;
|
||||
font-weight: 600;
|
||||
margin-bottom: 12px;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.chart-wrapper {
|
||||
background: var(--bg-card);
|
||||
border: 1px solid var(--border);
|
||||
border-radius: var(--radius);
|
||||
padding: 16px;
|
||||
position: relative;
|
||||
}
|
||||
|
||||
.chart-wrapper canvas {
|
||||
width: 100% !important;
|
||||
}
|
||||
|
||||
/* Table */
|
||||
.results-table {
|
||||
width: 100%;
|
||||
border-collapse: collapse;
|
||||
font-size: 12px;
|
||||
}
|
||||
|
||||
.results-table th {
|
||||
text-align: left;
|
||||
font-size: 10px;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.06em;
|
||||
color: var(--text-dim);
|
||||
padding: 8px 12px;
|
||||
border-bottom: 1px solid var(--border);
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
.results-table td {
|
||||
padding: 8px 12px;
|
||||
border-bottom: 1px solid var(--border);
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.results-table tr:hover td {
|
||||
background: var(--bg-hover);
|
||||
}
|
||||
|
||||
.results-table .mono {
|
||||
font-variant-numeric: tabular-nums;
|
||||
}
|
||||
|
||||
.pr-badge {
|
||||
display: inline-block;
|
||||
background: var(--accent-dim);
|
||||
color: var(--accent);
|
||||
padding: 1px 6px;
|
||||
border-radius: 3px;
|
||||
font-size: 11px;
|
||||
font-weight: 500;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
.pr-badge:hover {
|
||||
background: #3577c5;
|
||||
}
|
||||
|
||||
.source-badge {
|
||||
display: inline-block;
|
||||
padding: 1px 6px;
|
||||
border-radius: 3px;
|
||||
font-size: 10px;
|
||||
font-weight: 500;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.04em;
|
||||
}
|
||||
.source-cron {
|
||||
background: #1a1f2a;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
.source-deploy {
|
||||
background: #1a3a2a;
|
||||
color: var(--green);
|
||||
}
|
||||
.source-manual {
|
||||
background: #3a2a1a;
|
||||
color: var(--yellow);
|
||||
}
|
||||
|
||||
.timing-tag {
|
||||
display: inline-block;
|
||||
padding: 1px 5px;
|
||||
margin-right: 4px;
|
||||
border-radius: 3px;
|
||||
font-size: 10px;
|
||||
background: #1e1e24;
|
||||
color: var(--text-muted);
|
||||
font-variant-numeric: tabular-nums;
|
||||
}
|
||||
.timing-tag strong {
|
||||
color: var(--text);
|
||||
font-weight: 500;
|
||||
margin-right: 3px;
|
||||
}
|
||||
|
||||
.note-text {
|
||||
color: var(--text-muted);
|
||||
font-size: 11px;
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
a.sha-link {
|
||||
color: var(--text-dim);
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a.sha-link:hover {
|
||||
color: var(--text-muted);
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
.region-tag {
|
||||
display: inline-block;
|
||||
padding: 1px 6px;
|
||||
border-radius: 3px;
|
||||
font-size: 10px;
|
||||
font-weight: 500;
|
||||
text-transform: uppercase;
|
||||
letter-spacing: 0.04em;
|
||||
}
|
||||
|
||||
.region-use {
|
||||
background: #1a3a2a;
|
||||
color: var(--green);
|
||||
}
|
||||
.region-euw {
|
||||
background: #1a2a3a;
|
||||
color: var(--accent);
|
||||
}
|
||||
.region-ape {
|
||||
background: #2a1a3a;
|
||||
color: var(--purple);
|
||||
}
|
||||
.region-aps {
|
||||
background: #3a2a1a;
|
||||
color: var(--orange);
|
||||
}
|
||||
|
||||
.loading {
|
||||
text-align: center;
|
||||
padding: 48px;
|
||||
color: var(--text-dim);
|
||||
}
|
||||
|
||||
.error-msg {
|
||||
background: #2a1a1a;
|
||||
border: 1px solid #3a2020;
|
||||
color: var(--red);
|
||||
padding: 12px 16px;
|
||||
border-radius: var(--radius);
|
||||
font-size: 12px;
|
||||
}
|
||||
|
||||
/* Legend for chart */
|
||||
.chart-legend {
|
||||
display: flex;
|
||||
gap: 16px;
|
||||
margin-top: 12px;
|
||||
flex-wrap: wrap;
|
||||
}
|
||||
|
||||
.chart-legend .item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 6px;
|
||||
font-size: 11px;
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
.chart-legend .swatch {
|
||||
width: 12px;
|
||||
height: 3px;
|
||||
border-radius: 1px;
|
||||
}
|
||||
|
||||
.chart-legend .swatch-marker {
|
||||
width: 8px;
|
||||
height: 8px;
|
||||
border-radius: 50%;
|
||||
border: 2px solid var(--red);
|
||||
background: transparent;
|
||||
}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
<header>
|
||||
<h1>emdash perf</h1>
|
||||
<span class="target" id="target-label"></span>
|
||||
<span class="last-updated" id="last-updated"></span>
|
||||
</header>
|
||||
|
||||
<div class="controls">
|
||||
<div>
|
||||
<label>Site</label>
|
||||
<select id="site-select"></select>
|
||||
</div>
|
||||
<div>
|
||||
<label>Route</label>
|
||||
<select id="route-select"></select>
|
||||
</div>
|
||||
<div>
|
||||
<label>Region</label>
|
||||
<select id="region-select">
|
||||
<option value="all">All Regions</option>
|
||||
</select>
|
||||
</div>
|
||||
<div>
|
||||
<label>Period</label>
|
||||
<select id="period-select">
|
||||
<option value="1h">1 hour</option>
|
||||
<option value="24h">24 hours</option>
|
||||
<option value="7d" selected>7 days</option>
|
||||
<option value="30d">30 days</option>
|
||||
<option value="90d">90 days</option>
|
||||
</select>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="summary-grid" id="summary-cards">
|
||||
<div class="loading">Loading...</div>
|
||||
</div>
|
||||
|
||||
<div class="chart-section">
|
||||
<h2>Cold Start TTFB</h2>
|
||||
<div class="chart-wrapper">
|
||||
<canvas id="cold-chart" height="300"></canvas>
|
||||
</div>
|
||||
<div class="chart-legend">
|
||||
<div class="item">
|
||||
<span class="swatch" style="background: var(--green)"></span>
|
||||
US East
|
||||
</div>
|
||||
<div class="item">
|
||||
<span class="swatch" style="background: var(--accent)"></span>
|
||||
Europe West
|
||||
</div>
|
||||
<div class="item">
|
||||
<span class="swatch" style="background: var(--purple)"></span>
|
||||
Asia Pacific East
|
||||
</div>
|
||||
<div class="item">
|
||||
<span class="swatch" style="background: var(--orange)"></span>
|
||||
Asia Pacific South
|
||||
</div>
|
||||
<div class="item">
|
||||
<span class="swatch-marker"></span>
|
||||
Deploy
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="chart-section">
|
||||
<h2>Warm TTFB (median)</h2>
|
||||
<div class="chart-wrapper">
|
||||
<canvas id="warm-chart" height="300"></canvas>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<div class="chart-section">
|
||||
<h2>Recent Results</h2>
|
||||
<div class="chart-wrapper" style="overflow-x: auto">
|
||||
<table class="results-table" id="results-table">
|
||||
<thead>
|
||||
<tr>
|
||||
<th>Time</th>
|
||||
<th>Route</th>
|
||||
<th>Region</th>
|
||||
<th>Cold TTFB</th>
|
||||
<th>Warm TTFB</th>
|
||||
<th>P95</th>
|
||||
<th>Status</th>
|
||||
<th>Colo</th>
|
||||
<th>Cold Timings</th>
|
||||
<th>Warm Timings</th>
|
||||
<th>Source</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody id="results-body">
|
||||
<tr>
|
||||
<td colspan="11" class="loading">Loading...</td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
const GITHUB_REPO = "emdash-cms/emdash";
|
||||
const GITHUB_URL = `https://github.com/${GITHUB_REPO}`;
|
||||
|
||||
const REGION_COLORS = {
|
||||
use: "#51cf66",
|
||||
euw: "#4dabf7",
|
||||
ape: "#b197fc",
|
||||
aps: "#ff922b",
|
||||
};
|
||||
|
||||
const REGION_LABELS = {
|
||||
use: "US East",
|
||||
euw: "Europe West",
|
||||
ape: "Asia Pacific East",
|
||||
aps: "Asia Pacific South",
|
||||
};
|
||||
|
||||
let coldChart = null;
|
||||
let warmChart = null;
|
||||
let configData = null;
|
||||
|
||||
async function fetchJson(url) {
|
||||
const resp = await fetch(url);
|
||||
if (!resp.ok) throw new Error(`${resp.status} ${resp.statusText}`);
|
||||
return resp.json();
|
||||
}
|
||||
|
||||
function periodToSince(period) {
|
||||
const now = new Date();
|
||||
switch (period) {
|
||||
case "1h":
|
||||
return new Date(now - 60 * 60 * 1000).toISOString();
|
||||
case "24h":
|
||||
return new Date(now - 24 * 60 * 60 * 1000).toISOString();
|
||||
case "7d":
|
||||
return new Date(now - 7 * 24 * 60 * 60 * 1000).toISOString();
|
||||
case "30d":
|
||||
return new Date(now - 30 * 24 * 60 * 60 * 1000).toISOString();
|
||||
case "90d":
|
||||
return new Date(now - 90 * 24 * 60 * 60 * 1000).toISOString();
|
||||
default:
|
||||
return new Date(now - 7 * 24 * 60 * 60 * 1000).toISOString();
|
||||
}
|
||||
}
|
||||
|
||||
// 7d/30d/90d views show per-point samples that are too spiky to read.
|
||||
// Bucket by UTC day so the trend is visible. Median, not mean, so a
|
||||
// single cron spike doesn't pull the bucket.
|
||||
const DAILY_BUCKET_PERIODS = new Set(["7d", "30d", "90d"]);
|
||||
|
||||
/**
|
||||
* Parse a D1-stored timestamp ("YYYY-MM-DD HH:MM:SS", no TZ) as UTC.
|
||||
* `new Date("YYYY-MM-DD HH:MM:SS")` is implementation-defined and most
|
||||
* browsers treat it as *local* time, which shifts samples across UTC
|
||||
* day boundaries when we bucket with `getUTC*`. Normalize first.
|
||||
*/
|
||||
function parseStoredTimestamp(ts) {
|
||||
if (!ts) return null;
|
||||
if (ts.includes("T") || ts.endsWith("Z")) return new Date(ts);
|
||||
return new Date(ts.replace(" ", "T") + "Z");
|
||||
}
|
||||
|
||||
function median(nums) {
|
||||
const sorted = nums.filter((n) => n != null).sort((a, b) => a - b);
|
||||
if (sorted.length === 0) return null;
|
||||
const mid = sorted.length >> 1;
|
||||
return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
|
||||
}
|
||||
|
||||
function bucketByUtcDay(points) {
|
||||
const byDay = new Map();
|
||||
for (const p of points) {
|
||||
const d = p.x instanceof Date ? p.x : new Date(p.x);
|
||||
const day = `${d.getUTCFullYear()}-${String(d.getUTCMonth() + 1).padStart(2, "0")}-${String(d.getUTCDate()).padStart(2, "0")}`;
|
||||
if (!byDay.has(day)) byDay.set(day, []);
|
||||
byDay.get(day).push(p.y);
|
||||
}
|
||||
return [...byDay.entries()]
|
||||
.map(([day, ys]) => ({
|
||||
x: new Date(`${day}T12:00:00Z`),
|
||||
y: median(ys),
|
||||
}))
|
||||
.filter((p) => p.y != null)
|
||||
.sort((a, b) => a.x - b.x);
|
||||
}
|
||||
|
||||
function formatMs(ms) {
|
||||
if (ms == null) return "-";
|
||||
if (ms < 1000) return Math.round(ms) + "ms";
|
||||
return (ms / 1000).toFixed(2) + "s";
|
||||
}
|
||||
|
||||
function ttfbClass(ms, threshold) {
|
||||
if (ms == null) return "";
|
||||
if (ms <= threshold * 0.5) return "good";
|
||||
if (ms <= threshold) return "warn";
|
||||
return "bad";
|
||||
}
|
||||
|
||||
function formatTime(ts) {
|
||||
const d = parseStoredTimestamp(ts);
|
||||
if (!d) return "";
|
||||
const pad = (n) => String(n).padStart(2, "0");
|
||||
return `${pad(d.getMonth() + 1)}/${pad(d.getDate())} ${pad(d.getHours())}:${pad(d.getMinutes())}`;
|
||||
}
|
||||
|
||||
const HTML_ESCAPES = {
  "&": "&amp;",
  "<": "&lt;",
  ">": "&gt;",
  '"': "&quot;",
  "'": "&#39;",
};
function escapeHtml(s) {
  if (s == null) return "";
  return String(s).replace(/[&<>"']/g, (c) => HTML_ESCAPES[c]);
}
function escapeAttr(s) {
  // attribute-safe subset of characters
  return escapeHtml(s).replace(/\//g, "&#x2F;");
}
|
||||
|
||||
/**
|
||||
* Render server timings as a row of small tagged pills.
|
||||
* Input is the JSON string as stored in D1, or null.
|
||||
* We keep the tooltip (title attr) with the full `desc` when present
|
||||
* so hovering surfaces readable names without cluttering the table.
|
||||
*/
|
||||
function renderServerTimings(raw) {
|
||||
if (!raw) return '<span style="color:var(--text-dim)">-</span>';
|
||||
let parsed;
|
||||
try {
|
||||
parsed = JSON.parse(raw);
|
||||
} catch {
|
||||
return '<span style="color:var(--text-dim)">-</span>';
|
||||
}
|
||||
if (!parsed || typeof parsed !== "object") return "";
|
||||
const entries = Object.entries(parsed);
|
||||
if (entries.length === 0) return "";
|
||||
return entries
|
||||
.map(([name, t]) => {
|
||||
const dur = Math.round(t.dur);
|
||||
const title = t.desc ? `${t.desc} (${dur}ms)` : `${name}: ${dur}ms`;
|
||||
return `<span class="timing-tag" title="${escapeAttr(title)}"><strong>${escapeHtml(name)}</strong>${dur}ms</span>`;
|
||||
})
|
||||
.join("");
|
||||
}
|
||||
|
||||
async function loadConfig() {
|
||||
configData = await fetchJson("/api/config");
|
||||
|
||||
const siteSelect = document.getElementById("site-select");
|
||||
const sites = configData.sites ?? [];
|
||||
for (const site of sites) {
|
||||
const opt = document.createElement("option");
|
||||
opt.value = site.id;
|
||||
opt.textContent = site.label ? `${site.label} (${site.id})` : site.id;
|
||||
if (site.id === configData.defaultSite) opt.selected = true;
|
||||
siteSelect.appendChild(opt);
|
||||
}
|
||||
updateTargetLabel();
|
||||
|
||||
const routeSelect = document.getElementById("route-select");
|
||||
for (const route of configData.routes) {
|
||||
const opt = document.createElement("option");
|
||||
opt.value = route.path;
|
||||
opt.textContent = route.label;
|
||||
routeSelect.appendChild(opt);
|
||||
}
|
||||
|
||||
const regionSelect = document.getElementById("region-select");
|
||||
for (const region of configData.regions) {
|
||||
const opt = document.createElement("option");
|
||||
opt.value = region.id;
|
||||
opt.textContent = region.label;
|
||||
regionSelect.appendChild(opt);
|
||||
}
|
||||
}
|
||||
|
||||
function currentSite() {
|
||||
return document.getElementById("site-select").value || configData?.defaultSite || "blog";
|
||||
}
|
||||
|
||||
function updateTargetLabel() {
|
||||
const site = (configData?.sites ?? []).find((s) => s.id === currentSite());
|
||||
const label = document.getElementById("target-label");
|
||||
if (site?.targetUrl) {
|
||||
label.textContent = site.targetUrl.replace(/^https?:\/\//, "");
|
||||
} else {
|
||||
label.textContent = "";
|
||||
}
|
||||
}
|
||||
|
||||
async function loadSummary() {
|
||||
const data = await fetchJson(`/api/summary?site=${encodeURIComponent(currentSite())}`);
|
||||
const container = document.getElementById("summary-cards");
|
||||
container.innerHTML = "";
|
||||
|
||||
if (!data.latest || data.latest.length === 0) {
|
||||
container.innerHTML =
|
||||
'<div class="summary-card"><div class="label">No data</div><div class="value">-</div></div>';
|
||||
return;
|
||||
}
|
||||
|
||||
// Group by region and show the latest cold TTFB for the selected route
|
||||
const route = document.getElementById("route-select").value || configData.routes[0]?.path;
|
||||
const routeConfig = configData.routes.find((r) => r.path === route);
|
||||
|
||||
for (const region of configData.regions) {
|
||||
const result = data.latest.find((r) => r.route === route && r.region === region.id);
|
||||
const median = data.medians.find((m) => m.route === route && m.region === region.id);
|
||||
|
||||
const card = document.createElement("div");
|
||||
card.className = "summary-card";
|
||||
|
||||
const coldMs = result?.cold_ttfb_ms;
|
||||
const threshold = routeConfig?.coldThresholdMs ?? 2000;
|
||||
const cls = ttfbClass(coldMs, threshold);
|
||||
|
||||
card.innerHTML = `
|
||||
<div class="label"><span class="region-tag region-${region.id}">${region.id}</span> ${region.label}</div>
|
||||
<div class="value ${cls}">${formatMs(coldMs)}</div>
|
||||
<div class="meta">
|
||||
warm ${formatMs(result?.warm_ttfb_ms)}
|
||||
${median ? ` · avg ${formatMs(median.median_cold)}` : ""}
|
||||
${result?.cf_colo ? ` · ${result.cf_colo}` : ""}
|
||||
</div>
|
||||
`;
|
||||
container.appendChild(card);
|
||||
}
|
||||
|
||||
// Update timestamp
|
||||
const newest = data.latest.reduce(
|
||||
(a, b) => (a.timestamp > b.timestamp ? a : b),
|
||||
data.latest[0],
|
||||
);
|
||||
if (newest) {
|
||||
document.getElementById("last-updated").textContent =
|
||||
`Updated ${formatTime(newest.timestamp)}`;
|
||||
}
|
||||
}
|
||||
|
||||
function createChart(canvasId, label) {
|
||||
const ctx = document.getElementById(canvasId).getContext("2d");
|
||||
|
||||
return new Chart(ctx, {
|
||||
type: "line",
|
||||
data: { datasets: [] },
|
||||
options: {
|
||||
responsive: true,
|
||||
maintainAspectRatio: false,
|
||||
interaction: { mode: "nearest", axis: "x", intersect: false },
|
||||
scales: {
|
||||
x: {
|
||||
type: "time",
|
||||
time: { tooltipFormat: "MMM d, HH:mm" },
|
||||
grid: { color: "#2a2a32", lineWidth: 0.5 },
|
||||
ticks: { color: "#5c5c66", font: { size: 10 } },
|
||||
},
|
||||
y: {
|
||||
title: {
|
||||
display: true,
|
||||
text: "ms",
|
||||
color: "#5c5c66",
|
||||
font: { size: 10 },
|
||||
},
|
||||
grid: { color: "#2a2a32", lineWidth: 0.5 },
|
||||
ticks: { color: "#5c5c66", font: { size: 10 } },
|
||||
beginAtZero: true,
|
||||
},
|
||||
},
|
||||
plugins: {
|
||||
legend: { display: false },
|
||||
tooltip: {
|
||||
backgroundColor: "#16161a",
|
||||
titleColor: "#e0e0e6",
|
||||
bodyColor: "#888894",
|
||||
borderColor: "#2a2a32",
|
||||
borderWidth: 1,
|
||||
titleFont: { size: 11, family: "monospace" },
|
||||
bodyFont: { size: 11, family: "monospace" },
|
||||
callbacks: {
|
||||
label: function (context) {
|
||||
const point = context.raw;
|
||||
let text = `${context.dataset.label}: ${formatMs(context.parsed.y)}`;
|
||||
if (point && point.sha) {
|
||||
text += ` [${point.sha.slice(0, 7)}]`;
|
||||
}
|
||||
if (point && point.prNumber) {
|
||||
text += ` PR #${point.prNumber}`;
|
||||
}
|
||||
return text;
|
||||
},
|
||||
},
|
||||
},
|
||||
annotation: { annotations: {} },
|
||||
},
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
async function loadCharts() {
|
||||
const route = document.getElementById("route-select").value || configData.routes[0]?.path;
|
||||
const regionFilter = document.getElementById("region-select").value;
|
||||
const period = document.getElementById("period-select").value;
|
||||
const since = periodToSince(period);
|
||||
|
||||
const regions =
|
||||
regionFilter === "all" ? configData.regions.map((r) => r.id) : [regionFilter];
|
||||
|
||||
const site = currentSite();
|
||||
const bucketDaily = DAILY_BUCKET_PERIODS.has(period);
|
||||
|
||||
// Fetch chart data for each region in parallel
|
||||
const chartDataPromises = regions.map((region) =>
|
||||
fetchJson(
|
||||
`/api/chart?site=${encodeURIComponent(site)}&route=${encodeURIComponent(route)}&region=${region}&since=${since}&limit=500`,
|
||||
),
|
||||
);
|
||||
const chartResults = await Promise.all(chartDataPromises);
|
||||
|
||||
// Build datasets for cold chart
|
||||
const coldDatasets = [];
|
||||
const warmDatasets = [];
|
||||
const annotations = {};
|
||||
|
||||
for (let i = 0; i < regions.length; i++) {
|
||||
const region = regions[i];
|
||||
const result = chartResults[i];
|
||||
const color = REGION_COLORS[region] || "#888";
|
||||
|
||||
const rawCold = result.data.map((d) => ({
|
||||
x: parseStoredTimestamp(d.timestamp),
|
||||
y: d.coldTtfbMs,
|
||||
prNumber: d.prNumber,
|
||||
sha: d.sha,
|
||||
source: d.source,
|
||||
}));
|
||||
const rawWarm = result.data.map((d) => ({
|
||||
x: parseStoredTimestamp(d.timestamp),
|
||||
y: d.warmTtfbMs,
|
||||
prNumber: d.prNumber,
|
||||
sha: d.sha,
|
||||
source: d.source,
|
||||
}));
|
||||
const coldPoints = bucketDaily ? bucketByUtcDay(rawCold) : rawCold;
|
||||
const warmPoints = bucketDaily ? bucketByUtcDay(rawWarm) : rawWarm;
|
||||
|
||||
coldDatasets.push({
|
||||
label: REGION_LABELS[region] || region,
|
||||
data: coldPoints,
|
||||
borderColor: color,
|
||||
backgroundColor: color + "20",
|
||||
borderWidth: 1.5,
|
||||
pointRadius: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? 5 : 1.5;
|
||||
},
|
||||
pointBackgroundColor: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? "#ff6b6b" : color;
|
||||
},
|
||||
pointBorderColor: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? "#ff6b6b" : color;
|
||||
},
|
||||
pointBorderWidth: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? 2 : 0;
|
||||
},
|
||||
tension: 0.3,
|
||||
fill: false,
|
||||
});
|
||||
|
||||
warmDatasets.push({
|
||||
label: REGION_LABELS[region] || region,
|
||||
data: warmPoints,
|
||||
borderColor: color,
|
||||
backgroundColor: color + "20",
|
||||
borderWidth: 1.5,
|
||||
pointRadius: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? 5 : 1.5;
|
||||
},
|
||||
pointBackgroundColor: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? "#ff6b6b" : color;
|
||||
},
|
||||
pointBorderColor: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? "#ff6b6b" : color;
|
||||
},
|
||||
pointBorderWidth: (ctx) => {
|
||||
const point = ctx.raw;
|
||||
return point && point.sha ? 2 : 0;
|
||||
},
|
||||
tension: 0.3,
|
||||
fill: false,
|
||||
});
|
||||
|
||||
// Add vertical lines for all deploys
|
||||
if (result.deployMarkers) {
|
||||
for (const marker of result.deployMarkers) {
|
||||
const sha7 = marker.sha ? marker.sha.slice(0, 7) : "?";
|
||||
const label = marker.prNumber ? `PR #${marker.prNumber}` : sha7;
|
||||
const key = `deploy-${marker.sha || marker.timestamp}-${region}`;
|
||||
annotations[key] = {
|
||||
type: "line",
|
||||
xMin: parseStoredTimestamp(marker.timestamp),
|
||||
xMax: parseStoredTimestamp(marker.timestamp),
|
||||
borderColor: "#ff6b6b40",
|
||||
borderWidth: 1,
|
||||
borderDash: [4, 4],
|
||||
label: {
|
||||
display: regions.length <= 2,
|
||||
content: label,
|
||||
position: "start",
|
||||
backgroundColor: "#2a1a1a",
|
||||
color: "#ff6b6b",
|
||||
font: { size: 10, family: "monospace" },
|
||||
padding: { top: 2, bottom: 2, left: 4, right: 4 },
|
||||
},
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Update cold chart
|
||||
if (!coldChart) {
|
||||
coldChart = createChart("cold-chart", "Cold TTFB");
|
||||
}
|
||||
coldChart.data.datasets = coldDatasets;
|
||||
coldChart.options.plugins.annotation.annotations = annotations;
|
||||
coldChart.update();
|
||||
|
||||
// Update warm chart
|
||||
if (!warmChart) {
|
||||
warmChart = createChart("warm-chart", "Warm TTFB");
|
||||
}
|
||||
warmChart.data.datasets = warmDatasets;
|
||||
warmChart.options.plugins.annotation.annotations = annotations;
|
||||
warmChart.update();
|
||||
}
|
||||
|
||||
async function loadTable() {
|
||||
const route = document.getElementById("route-select").value || configData.routes[0]?.path;
|
||||
const regionFilter = document.getElementById("region-select").value;
|
||||
const period = document.getElementById("period-select").value;
|
||||
const since = periodToSince(period);
|
||||
|
||||
const params = new URLSearchParams({
|
||||
site: currentSite(),
|
||||
since,
|
||||
limit: "50",
|
||||
});
|
||||
if (route) params.set("route", route);
|
||||
if (regionFilter !== "all") params.set("region", regionFilter);
|
||||
|
||||
const data = await fetchJson(`/api/results?${params}`);
|
||||
const tbody = document.getElementById("results-body");
|
||||
|
||||
if (!data.results || data.results.length === 0) {
|
||||
tbody.innerHTML =
|
||||
'<tr><td colspan="11" style="text-align:center;color:var(--text-dim)">No results</td></tr>';
|
||||
return;
|
||||
}
|
||||
|
||||
tbody.innerHTML = data.results
|
||||
.map((r) => {
|
||||
const routeConfig = configData.routes.find((rc) => rc.path === r.route);
|
||||
const threshold = routeConfig?.coldThresholdMs ?? 2000;
|
||||
const cls = ttfbClass(r.cold_ttfb_ms, threshold);
|
||||
|
||||
const prLink = r.pr_number
|
||||
? ` <a class="pr-badge" href="${GITHUB_URL}/pull/${r.pr_number}" target="_blank" rel="noopener">PR #${r.pr_number}</a>`
|
||||
: "";
|
||||
const shaLink = r.sha
|
||||
? ` <a class="sha-link mono" href="${GITHUB_URL}/commit/${r.sha}" target="_blank" rel="noopener">${r.sha.slice(0, 7)}</a>`
|
||||
: "";
|
||||
const sourceBadge = `<span class="source-badge source-${escapeAttr(r.source)}">${escapeHtml(r.source)}</span>`;
|
||||
const note = r.note
|
||||
? `<div class="note-text" title="${escapeAttr(r.note)}">${escapeHtml(r.note)}</div>`
|
||||
: "";
|
||||
|
||||
return `<tr>
|
||||
<td class="mono">${formatTime(r.timestamp)}</td>
|
||||
<td>${escapeHtml(r.route)}${note}</td>
|
||||
<td><span class="region-tag region-${escapeAttr(r.region)}">${escapeHtml(r.region)}</span></td>
|
||||
<td class="mono ${cls}">${formatMs(r.cold_ttfb_ms)}</td>
|
||||
<td class="mono">${formatMs(r.warm_ttfb_ms)}</td>
|
||||
<td class="mono">${formatMs(r.p95_ttfb_ms)}</td>
|
||||
<td class="mono">${r.status_code ?? "-"}</td>
|
||||
<td class="mono">${r.cf_colo ?? "-"}</td>
|
||||
<td>${renderServerTimings(r.cold_server_timings)}</td>
|
||||
<td>${renderServerTimings(r.warm_server_timings)}</td>
|
||||
<td>${sourceBadge}${prLink}${shaLink}</td>
|
||||
</tr>`;
|
||||
})
|
||||
.join("");
|
||||
}
|
||||
|
||||
async function refresh() {
|
||||
try {
|
||||
await Promise.all([loadSummary(), loadCharts(), loadTable()]);
|
||||
} catch (err) {
|
||||
console.error("Failed to load data:", err);
|
||||
}
|
||||
}
|
||||
|
||||
async function init() {
|
||||
try {
|
||||
await loadConfig();
|
||||
await refresh();
|
||||
|
||||
// Refresh on control changes
|
||||
document.getElementById("site-select").addEventListener("change", () => {
|
||||
updateTargetLabel();
|
||||
refresh();
|
||||
});
|
||||
document.getElementById("route-select").addEventListener("change", refresh);
|
||||
document.getElementById("region-select").addEventListener("change", refresh);
|
||||
document.getElementById("period-select").addEventListener("change", refresh);
|
||||
|
||||
// Auto-refresh every 5 minutes
|
||||
setInterval(refresh, 5 * 60 * 1000);
|
||||
} catch (err) {
|
||||
document.querySelector(".container").innerHTML = `
|
||||
<div class="error-msg">Failed to load: ${err.message}</div>
|
||||
`;
|
||||
}
|
||||
}
|
||||
|
||||
init();
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
26
infra/perf-monitor/schema.sql
Normal file
@@ -0,0 +1,26 @@
-- Perf monitor D1 schema

CREATE TABLE IF NOT EXISTS perf_results (
  id TEXT PRIMARY KEY,
  sha TEXT,
  pr_number INTEGER,
  route TEXT NOT NULL,
  region TEXT NOT NULL,
  cold_ttfb_ms REAL,
  warm_ttfb_ms REAL,
  p95_ttfb_ms REAL,
  status_code INTEGER,
  cf_colo TEXT,
  cf_placement TEXT,
  timestamp TEXT NOT NULL DEFAULT (datetime('now')),
  source TEXT NOT NULL, -- 'deploy' | 'cron' | 'manual'
  site TEXT NOT NULL DEFAULT 'blog'
);

CREATE INDEX IF NOT EXISTS idx_perf_route_region_ts ON perf_results(route, region, timestamp);
CREATE INDEX IF NOT EXISTS idx_perf_sha ON perf_results(sha);
CREATE INDEX IF NOT EXISTS idx_perf_pr ON perf_results(pr_number);
CREATE INDEX IF NOT EXISTS idx_perf_source_ts ON perf_results(source, timestamp);
CREATE INDEX IF NOT EXISTS idx_perf_timestamp ON perf_results(timestamp);
CREATE INDEX IF NOT EXISTS idx_perf_site_ts ON perf_results(site, timestamp);
CREATE INDEX IF NOT EXISTS idx_perf_site_route_region_ts ON perf_results(site, route, region, timestamp);
243
infra/perf-monitor/scripts/trigger.mjs
Executable file
@@ -0,0 +1,243 @@
|
||||
#!/usr/bin/env node
|
||||
/**
|
||||
* Ad-hoc perf-monitor trigger.
|
||||
*
|
||||
* Fires POST https://perf.emdashcms.com/api/trigger via Cloudflare Access.
|
||||
* The endpoint is gated by Access, so authentication is handled by
|
||||
* `cloudflared access` (first invocation opens a browser; subsequent
|
||||
* invocations reuse the token until session expiry).
|
||||
*
|
||||
* Usage:
|
||||
* pnpm trigger # default: runs probes, does NOT record
|
||||
* pnpm trigger -- --store # persist with source=manual
|
||||
* pnpm trigger -- --store --note "..." # persist with a note
|
||||
* pnpm trigger -- --sha abc1234 # attach a SHA (requires --store to persist)
|
||||
* pnpm trigger -- --pr 123 # attach a PR number (requires --store)
|
||||
* pnpm trigger -- --site cache # measure only the cache-demo site
|
||||
*
|
||||
* The default is ephemeral -- probes run for real but nothing is written
|
||||
* to the database. Pass --store to persist the run as source=manual
|
||||
* (excluded from the graph and summary cards, visible in the results
|
||||
* table). Other flags like --note/--sha/--pr only have an effect when
|
||||
* combined with --store.
|
||||
*/
|
||||
|
||||
import { spawnSync } from "node:child_process";
|
||||
import { parseArgs } from "node:util";
|
||||
|
||||
const ENDPOINT = process.env.PERF_ENDPOINT ?? "https://perf.emdashcms.com/api/trigger";
|
||||
|
||||
function die(msg, code = 1) {
|
||||
console.error(`trigger: ${msg}`);
|
||||
process.exit(code);
|
||||
}
|
||||
|
||||
// pnpm passes a literal `--` token through when users invoke `pnpm trigger -- --note foo`.
|
||||
// parseArgs treats that as a positional and throws in strict mode. Strip any `--` tokens.
|
||||
const argv = process.argv.slice(2).filter((a) => a !== "--");
|
||||
|
||||
if (argv.includes("-h") || argv.includes("--help")) {
|
||||
console.log(
|
||||
"Usage: pnpm trigger [-- --store] [--note <string>] [--sha <sha>] [--pr <number>] [--site <id>]\n" +
|
||||
"\n" +
|
||||
"Runs an ad-hoc perf measurement against every registered demo site.\n" +
|
||||
"Pass --site <id> (e.g. blog, cache) to target a single site.\n" +
|
||||
"\n" +
|
||||
"Default is ephemeral: probes run for real but nothing is written to\n" +
|
||||
"the database. Pass --store to persist the run as source=manual\n" +
|
||||
"(excluded from the graph and summary cards, visible in the results\n" +
|
||||
"table). --note/--sha/--pr only take effect together with --store.",
|
||||
);
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
const { values } = parseArgs({
|
||||
args: argv,
|
||||
options: {
|
||||
store: { type: "boolean" },
|
||||
note: { type: "string" },
|
||||
sha: { type: "string" },
|
||||
pr: { type: "string" },
|
||||
site: { type: "string" },
|
||||
},
|
||||
allowPositionals: false,
|
||||
strict: true,
|
||||
});
|
||||
|
||||
// Default is ephemeral (not persisted). --store flips that.
|
||||
const ephemeral = !values.store;
|
||||
|
||||
const body = {};
|
||||
if (ephemeral) body.ephemeral = true;
|
||||
if (values.note) body.note = values.note;
|
||||
if (values.sha) body.sha = values.sha;
|
||||
if (values.pr) {
|
||||
const n = Number.parseInt(values.pr, 10);
|
||||
if (!Number.isInteger(n) || n <= 0) die(`--pr must be a positive integer, got ${values.pr}`);
|
||||
body.prNumber = n;
|
||||
}
|
||||
if (values.site) body.site = values.site;
|
||||
|
||||
// Warn loudly if someone passed metadata flags without --store: those fields
|
||||
// only make it into the DB, and we're not writing to the DB in ephemeral mode.
|
||||
if (ephemeral && (values.note || values.sha || values.pr)) {
|
||||
console.warn(
|
||||
"trigger: warning: --note/--sha/--pr have no effect without --store (ephemeral mode discards everything)",
|
||||
);
|
||||
}
|
||||
|
||||
const label = values.note ? ` (${values.note})` : "";
|
||||
const mode = ephemeral ? " [ephemeral, not recorded]" : "";
|
||||
console.log(`trigger: firing against ${ENDPOINT}${label}${mode}`);
|
||||
console.log("trigger: this typically takes 20-40s while probes run...");
|
||||
|
||||
// `cloudflared access curl` passes everything after the URL straight to curl.
|
||||
// The URL must come first, immediately after `curl` (no `--` separator).
|
||||
const result = spawnSync(
|
||||
"cloudflared",
|
||||
[
|
||||
"access",
|
||||
"curl",
|
||||
ENDPOINT,
|
||||
"-sS",
|
||||
"-X",
|
||||
"POST",
|
||||
"-H",
|
||||
"content-type: application/json",
|
||||
"--data",
|
||||
JSON.stringify(body),
|
||||
],
|
||||
{ encoding: "utf8", stdio: ["inherit", "pipe", "inherit"] },
|
||||
);
|
||||
|
||||
if (result.error) {
|
||||
if (result.error.code === "ENOENT") {
|
||||
die(
|
||||
"cloudflared is not installed or not on PATH.\n" +
|
||||
" Install: brew install cloudflared (or see https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation/)\n" +
|
||||
" Then run this command again to complete first-time browser login.",
|
||||
);
|
||||
}
|
||||
die(`cloudflared failed: ${result.error.message}`);
|
||||
}
|
||||
if (result.status !== 0) die(`cloudflared exited ${result.status}`);
|
||||
|
||||
let parsed;
|
||||
try {
|
||||
parsed = JSON.parse(result.stdout);
|
||||
} catch {
|
||||
die(`unexpected non-JSON response:\n${result.stdout}`);
|
||||
}
|
||||
|
||||
if (parsed.error) die(`server error: ${parsed.error}`);
|
||||
|
||||
if (parsed.ephemeral) {
|
||||
console.log(
|
||||
`trigger: measured ${parsed.results?.length ?? 0} samples in ${parsed.durationMs}ms (ephemeral, nothing recorded)`,
|
||||
);
|
||||
} else {
|
||||
console.log(
|
||||
`trigger: recorded ${parsed.inserted} samples in ${parsed.durationMs}ms (source=manual)`,
|
||||
);
|
||||
}
|
||||
|
||||
// Pretty-print a per-site, per-route table. Layout per (site, route):
|
||||
//
|
||||
// [cache] /
|
||||
// REGION COLD WARM P95 COLO TIMINGS
|
||||
// use 1234ms 123ms 156ms IAD render=42ms mw=58ms
|
||||
// euw ...
|
||||
//
|
||||
// Column widths are computed from the rows we're about to print so that
|
||||
// unusually slow runs don't break alignment, and timing columns only
|
||||
// appear if at least one row has timings.
|
||||
const useColor = process.stdout.isTTY && !process.env.NO_COLOR;
|
||||
const dim = (s) => (useColor ? `\x1b[2m${s}\x1b[0m` : s);
|
||||
const bold = (s) => (useColor ? `\x1b[1m${s}\x1b[0m` : s);
|
||||
|
||||
const formatMs = (n) => (n == null ? "-" : `${Math.round(n)}ms`);
|
||||
|
||||
// Group results by site -> route, preserving insertion order. Results from
|
||||
// the server arrive interleaved per (site, region, route); a Map-of-Maps
|
||||
// keeps each site's rows together for display.
|
||||
const bySite = new Map();
|
||||
for (const r of parsed.results ?? []) {
|
||||
const siteId = r.site ?? "blog";
|
||||
if (!bySite.has(siteId)) bySite.set(siteId, new Map());
|
||||
const byRoute = bySite.get(siteId);
|
||||
if (!byRoute.has(r.route)) byRoute.set(r.route, []);
|
||||
byRoute.get(r.route).push(r);
|
||||
}
|
||||
|
||||
for (const [siteId, byRoute] of bySite) {
|
||||
for (const [route, rows] of byRoute) {
|
||||
console.log(`\n ${bold(`[${siteId}] ${route}`)}`);
|
||||
|
||||
// Collect the union of timing names present on this route across BOTH
|
||||
// cold and warm snapshots so every row gets a cell in each column,
|
||||
// even when a particular probe response lacked some entries.
|
||||
// Warm timings are prefixed with "w." in the column header to make
|
||||
// the split obvious (cold and warm snapshots share the same metric
|
||||
// names — "render", "rt", "mw" — so we'd collide otherwise).
|
||||
const coldNames = [];
|
||||
const warmNames = [];
|
||||
const seenCold = new Set();
|
||||
const seenWarm = new Set();
|
||||
for (const r of rows) {
|
||||
if (r.coldServerTimings) {
|
||||
for (const name of Object.keys(r.coldServerTimings)) {
|
||||
if (!seenCold.has(name)) {
|
||||
seenCold.add(name);
|
||||
coldNames.push(name);
|
||||
}
|
||||
}
|
||||
}
|
||||
if (r.warmServerTimings) {
|
||||
for (const name of Object.keys(r.warmServerTimings)) {
|
||||
if (!seenWarm.has(name)) {
|
||||
seenWarm.add(name);
|
||||
warmNames.push(name);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Build row cells. Column order: region, cold, warm, p95, colo,
|
||||
// then all cold timings, then warm timings.
|
||||
// Cold timings keep their bare names for backwards-compatible output;
|
||||
// warm timings get a "w." prefix.
|
||||
const warmHeaders = warmNames.map((n) => `w.${n}`);
|
||||
const header = ["region", "cold", "warm", "p95", "colo", ...coldNames, ...warmHeaders];
|
||||
const tableRows = rows.map((r) => {
|
||||
const cells = [
|
||||
r.region,
|
||||
formatMs(r.coldTtfbMs),
|
||||
formatMs(r.warmTtfbMs),
|
||||
formatMs(r.p95TtfbMs),
|
||||
r.cfColo ?? "-",
|
||||
];
|
||||
for (const name of coldNames) {
|
||||
const t = r.coldServerTimings?.[name];
|
||||
cells.push(t ? formatMs(t.dur) : "-");
|
||||
}
|
||||
for (const name of warmNames) {
|
||||
const t = r.warmServerTimings?.[name];
|
||||
cells.push(t ? formatMs(t.dur) : "-");
|
||||
}
|
||||
return cells;
|
||||
});
|
||||
|
||||
// Column widths = max(header, body) per column.
|
||||
const widths = header.map((h, col) =>
|
||||
Math.max(h.length, ...tableRows.map((cells) => cells[col].length)),
|
||||
);
|
||||
|
||||
const padCell = (s, col) => s.padEnd(widths[col]);
|
||||
const joinRow = (cells) => cells.map(padCell).join(" ");
|
||||
|
||||
console.log(` ${dim(joinRow(header))}`);
|
||||
for (const cells of tableRows) {
|
||||
console.log(` ${joinRow(cells)}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
264
infra/perf-monitor/src/api.ts
Normal file
@@ -0,0 +1,264 @@
|
||||
/** HTTP API router for the perf monitor. */
|
||||
|
||||
import { runMeasurements } from "./measure.js";
|
||||
import {
|
||||
DEFAULT_SITE_ID,
|
||||
getSite,
|
||||
REGIONS,
|
||||
REGION_LABELS,
|
||||
SITES,
|
||||
TARGET_ROUTES,
|
||||
} from "./routes.js";
|
||||
import {
|
||||
queryResults,
|
||||
getLatestResults,
|
||||
getRollingMedians,
|
||||
getDeployResults,
|
||||
insertResults,
|
||||
type Source,
|
||||
} from "./store.js";
|
||||
|
||||
/** Route the request to the correct handler. */
|
||||
export async function handleApi(request: Request, url: URL, env: Env): Promise<Response | null> {
|
||||
const path = url.pathname;
|
||||
|
||||
if (path === "/api/results" && request.method === "GET") {
|
||||
return handleResults(url, env);
|
||||
}
|
||||
if (path === "/api/summary" && request.method === "GET") {
|
||||
return handleSummary(url, env);
|
||||
}
|
||||
if (path === "/api/chart" && request.method === "GET") {
|
||||
return handleChart(url, env);
|
||||
}
|
||||
if (path === "/api/config" && request.method === "GET") {
|
||||
return handleConfig();
|
||||
}
|
||||
if (path === "/api/trigger" && request.method === "POST") {
|
||||
return handleTrigger(request, env);
|
||||
}
|
||||
|
||||
return null;
|
||||
}
|
||||
|
||||
/** Narrow a query string to the allowed source values without a cast. */
|
||||
function parseSource(raw: string | null): Source | undefined {
|
||||
if (raw === "deploy" || raw === "cron" || raw === "manual") return raw;
|
||||
return undefined;
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve the requested site param against the known SITES list. Falls back
|
||||
* to the default site when absent so existing clients (dashboard) keep
|
||||
* working unchanged.
|
||||
*/
|
||||
function parseSiteParam(raw: string | null): string {
|
||||
if (raw && getSite(raw)) return raw;
|
||||
return DEFAULT_SITE_ID;
|
||||
}
|
||||
|
||||
/** GET /api/results?route=X&region=Y&source=Z&site=W&since=ISO&limit=N */
|
||||
async function handleResults(url: URL, env: Env): Promise<Response> {
|
||||
const source = parseSource(url.searchParams.get("source"));
|
||||
const siteParam = url.searchParams.get("site");
|
||||
// Results is intentionally loose: no site param = return across all sites
|
||||
// (for raw tabular inspection). Summary/chart default to a single site.
|
||||
const site = siteParam && getSite(siteParam) ? siteParam : undefined;
|
||||
|
||||
const results = await queryResults(env.DB, {
|
||||
route: url.searchParams.get("route") ?? undefined,
|
||||
region: url.searchParams.get("region") ?? undefined,
|
||||
source,
|
||||
site,
|
||||
since: url.searchParams.get("since") ?? undefined,
|
||||
limit: url.searchParams.has("limit") ? parseInt(url.searchParams.get("limit")!, 10) : undefined,
|
||||
});
|
||||
|
||||
return Response.json({ results });
|
||||
}
|
||||
|
||||
/** GET /api/summary?site=X -- latest per route+region, rolling medians */
|
||||
async function handleSummary(url: URL, env: Env): Promise<Response> {
|
||||
const site = parseSiteParam(url.searchParams.get("site"));
|
||||
|
||||
const [latest, medians] = await Promise.all([
|
||||
getLatestResults(env.DB, site),
|
||||
getRollingMedians(env.DB, site),
|
||||
]);
|
||||
|
||||
return Response.json({
|
||||
site,
|
||||
latest,
|
||||
medians,
|
||||
config: {
|
||||
sites: SITES.map((s) => ({ id: s.id, label: s.label, targetUrl: s.targetUrl })),
|
||||
routes: TARGET_ROUTES,
|
||||
regions: REGIONS.map((r) => ({ id: r, label: REGION_LABELS[r] })),
|
||||
},
|
||||
});
|
||||
}
|
||||
|
||||
/** GET /api/chart?route=X&region=Y&site=W&since=ISO&limit=N -- time series data */
|
||||
async function handleChart(url: URL, env: Env): Promise<Response> {
|
||||
const route = url.searchParams.get("route");
|
||||
const region = url.searchParams.get("region");
|
||||
|
||||
if (!route || !region) {
|
||||
return Response.json({ error: "route and region are required" }, { status: 400 });
|
||||
}
|
||||
|
||||
const site = parseSiteParam(url.searchParams.get("site"));
|
||||
const since = url.searchParams.get("since") ?? undefined;
|
||||
const limit = url.searchParams.has("limit") ? parseInt(url.searchParams.get("limit")!, 10) : 200;
|
||||
|
||||
const [results, deployResults] = await Promise.all([
|
||||
queryResults(env.DB, { route, region, site, since, limit }),
|
||||
getDeployResults(env.DB, site, since),
|
||||
]);
|
||||
|
||||
// Query returns DESC -- reverse to chronological. Manual (ad-hoc) runs are
|
||||
// stripped from the graph so they don't create visual noise; they still
|
||||
// appear in the /api/results table.
|
||||
const graphResults = results.filter((r) => r.source !== "manual").toReversed();
|
||||
|
||||
// Deduplicate deploy results by SHA — multiple route/region combos produce
|
||||
// duplicates, but we only want one marker per deploy on the chart.
|
||||
const seenShas = new Set<string>();
|
||||
const deployMarkers = deployResults
|
||||
.filter((r) => {
|
||||
if (!r.sha) return false;
|
||||
if (r.route !== route || r.region !== region) return false;
|
||||
if (seenShas.has(r.sha)) return false;
|
||||
seenShas.add(r.sha);
|
||||
return true;
|
||||
})
|
||||
.map((r) => ({
|
||||
timestamp: r.timestamp,
|
||||
prNumber: r.pr_number,
|
||||
sha: r.sha,
|
||||
coldTtfbMs: r.cold_ttfb_ms,
|
||||
}));
|
||||
|
||||
return Response.json({
|
||||
route,
|
||||
region,
|
||||
site,
|
||||
data: graphResults.map((r) => ({
|
||||
timestamp: r.timestamp,
|
||||
coldTtfbMs: r.cold_ttfb_ms,
|
||||
warmTtfbMs: r.warm_ttfb_ms,
|
||||
p95TtfbMs: r.p95_ttfb_ms,
|
||||
source: r.source,
|
||||
sha: r.sha,
|
||||
prNumber: r.pr_number,
|
||||
})),
|
||||
deployMarkers,
|
||||
});
|
||||
}
|
||||
|
||||
/** GET /api/config -- available sites, routes, and regions */
|
||||
async function handleConfig(): Promise<Response> {
|
||||
return Response.json({
|
||||
sites: SITES.map((s) => ({ id: s.id, label: s.label, targetUrl: s.targetUrl })),
|
||||
defaultSite: DEFAULT_SITE_ID,
|
||||
routes: TARGET_ROUTES,
|
||||
regions: REGIONS.map((r) => ({ id: r, label: REGION_LABELS[r] })),
|
||||
});
|
||||
}
|
||||
|
||||
/** Accept short abbreviated or full-length hex SHAs. */
|
||||
const SHA_RE = /^[a-f0-9]{7,40}$/i;
|
||||
|
||||
/**
|
||||
* POST /api/trigger -- run an ad-hoc measurement, optionally record it.
|
||||
*
|
||||
* Body (all optional):
|
||||
* {
|
||||
* "note"?: string,
|
||||
* "sha"?: string,
|
||||
* "prNumber"?: number,
|
||||
* "ephemeral"?: boolean, // if true, run the probes but don't persist
|
||||
* "site"?: string // site id; omit to measure every site
|
||||
* }
|
||||
*
|
||||
* No auth in-Worker: this endpoint is expected to be protected by a
|
||||
* Cloudflare Access policy at the edge. If Access misroutes or is
|
||||
* misconfigured, the request will still run measurements -- keep Access
|
||||
* scoped tightly to POST /api/trigger.
|
||||
*
|
||||
* Persisted runs are tagged source=manual and are excluded from the
|
||||
* dashboard graph and summary cards but appear in the results table with
|
||||
* a "manual" badge. Ephemeral runs run the probes for real but skip the
|
||||
* insert entirely -- useful for private/local checks that shouldn't
|
||||
* appear on the dashboard at all.
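*
* Example body (ephemeral, single site -- illustrative values only):
*   { "ephemeral": true, "site": "cache", "note": "local check" }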
|
||||
*/
|
||||
async function handleTrigger(request: Request, env: Env): Promise<Response> {
|
||||
let body: {
|
||||
note?: unknown;
|
||||
sha?: unknown;
|
||||
prNumber?: unknown;
|
||||
ephemeral?: unknown;
|
||||
site?: unknown;
|
||||
} = {};
|
||||
const contentLength = request.headers.get("content-length");
|
||||
if (contentLength && contentLength !== "0") {
|
||||
try {
|
||||
body = await request.json();
|
||||
} catch {
|
||||
return Response.json({ error: "invalid JSON body" }, { status: 400 });
|
||||
}
|
||||
}
|
||||
|
||||
const note = typeof body.note === "string" && body.note.trim() !== "" ? body.note.trim() : null;
|
||||
const sha = typeof body.sha === "string" && SHA_RE.test(body.sha) ? body.sha : null;
|
||||
const prNumber =
|
||||
typeof body.prNumber === "number" && Number.isInteger(body.prNumber) && body.prNumber > 0
|
||||
? body.prNumber
|
||||
: null;
|
||||
const ephemeral = body.ephemeral === true;
|
||||
|
||||
let sites = SITES;
|
||||
if (typeof body.site === "string") {
|
||||
const match = getSite(body.site);
|
||||
if (!match) {
|
||||
return Response.json(
|
||||
{ error: `unknown site "${body.site}"; valid: ${SITES.map((s) => s.id).join(", ")}` },
|
||||
{ status: 400 },
|
||||
);
|
||||
}
|
||||
sites = [match];
|
||||
}
|
||||
|
||||
const started = Date.now();
|
||||
const results = await runMeasurements(env, { source: "manual", sha, prNumber, note, sites });
|
||||
|
||||
if (results.length === 0) {
|
||||
return Response.json({ error: "no measurements returned from probes" }, { status: 502 });
|
||||
}
|
||||
|
||||
if (!ephemeral) {
|
||||
await insertResults(env.DB, results);
|
||||
}
|
||||
|
||||
return Response.json({
|
||||
inserted: ephemeral ? 0 : results.length,
|
||||
ephemeral,
|
||||
durationMs: Date.now() - started,
|
||||
note,
|
||||
sha,
|
||||
prNumber,
|
||||
sites: sites.map((s) => s.id),
|
||||
// Echo the structured result so the CLI can print it without a follow-up query.
|
||||
results: results.map((r) => ({
|
||||
site: r.site,
|
||||
route: r.route,
|
||||
region: r.region,
|
||||
coldTtfbMs: r.coldTtfbMs,
|
||||
warmTtfbMs: r.warmTtfbMs,
|
||||
p95TtfbMs: r.p95TtfbMs,
|
||||
cfColo: r.cfColo,
|
||||
coldServerTimings: r.coldServerTimings,
|
||||
warmServerTimings: r.warmServerTimings,
|
||||
})),
|
||||
});
|
||||
}
|
||||
59
infra/perf-monitor/src/events.ts
Normal file
@@ -0,0 +1,59 @@
|
||||
/**
|
||||
* Type definitions for Cloudflare event subscription messages.
|
||||
* See: https://developers.cloudflare.com/queues/event-subscriptions/events-schemas/
|
||||
*/
|
||||
|
||||
/** Workers Builds `build.succeeded` event. */
|
||||
export interface BuildSucceededEvent {
|
||||
type: "cf.workersBuilds.worker.build.succeeded";
|
||||
source: {
|
||||
type: "workersBuilds.worker";
|
||||
workerName: string;
|
||||
};
|
||||
payload: {
|
||||
buildUuid: string;
|
||||
status: "success";
|
||||
buildOutcome: "success";
|
||||
createdAt: string;
|
||||
initializingAt: string;
|
||||
runningAt: string;
|
||||
stoppedAt: string;
|
||||
buildTriggerMetadata: {
|
||||
buildTriggerSource: string;
|
||||
branch: string;
|
||||
commitHash: string;
|
||||
commitMessage: string;
|
||||
author: string;
|
||||
buildCommand: string;
|
||||
deployCommand: string;
|
||||
rootDirectory: string;
|
||||
repoName: string;
|
||||
providerAccountName: string;
|
||||
providerType: string;
|
||||
};
|
||||
};
|
||||
metadata: {
|
||||
accountId: string;
|
||||
eventSubscriptionId: string;
|
||||
eventSchemaVersion: number;
|
||||
eventTimestamp: string;
|
||||
};
|
||||
}
|
||||
|
||||
/**
|
||||
* Other event types we may receive from the subscription but ignore.
|
||||
* Kept loose (string `type`) so we don't block on schema updates.
|
||||
*/
|
||||
export interface UnknownEvent {
|
||||
type: string;
|
||||
source?: unknown;
|
||||
payload?: unknown;
|
||||
metadata?: unknown;
|
||||
}
|
||||
|
||||
export type PerfQueueMessage = BuildSucceededEvent | UnknownEvent;
|
||||
|
||||
/** Type guard for the only event we actually act on. */
|
||||
export function isBuildSucceeded(event: PerfQueueMessage): event is BuildSucceededEvent {
|
||||
return event.type === "cf.workersBuilds.worker.build.succeeded";
|
||||
}
|
||||
77
infra/perf-monitor/src/github.ts
Normal file
@@ -0,0 +1,77 @@
|
||||
/**
|
||||
* GitHub API helpers for resolving a commit SHA to a merged PR number.
|
||||
*
|
||||
* Uses the "list pull requests associated with a commit" endpoint:
|
||||
* https://docs.github.com/en/rest/commits/commits#list-pull-requests-associated-with-a-commit
|
||||
*
|
||||
* Called unauthenticated. The public repo endpoint has a 60 req/hr limit per IP,
|
||||
* which is far more than our deploy rate. If that ever changes, add a token:
|
||||
* `headers.authorization = "Bearer " + env.GITHUB_TOKEN`.
|
||||
*/
|
||||
|
||||
import { GITHUB_REPO } from "./routes.js";
|
||||
|
||||
interface AssociatedPR {
|
||||
number: number;
|
||||
state: string;
|
||||
merged_at: string | null;
|
||||
base: { ref: string };
|
||||
}
|
||||
const PR_NUMBER_REGEX = /\(#(\d+)\)\s*$/;
|
||||
/**
|
||||
* Parse a PR number from a commit message. GitHub squash merges append the PR
|
||||
* number in parentheses, e.g. "feat: add feature (#123)".
|
||||
*/
|
||||
function parsePrFromMessage(commitMessage: string): number | null {
|
||||
const match = commitMessage.match(PR_NUMBER_REGEX);
|
||||
if (!match?.[1]) return null;
|
||||
return parseInt(match[1], 10);
|
||||
}
|
||||
|
||||
/**
|
||||
* Find the merged PR for a given commit SHA, if any.
|
||||
*
|
||||
* Strategy:
|
||||
* 1. Parse the commit message for `(#N)` — works for squash merges (the common case).
|
||||
* 2. Fall back to the GitHub "list PRs for a commit" API — works for merge commits.
|
||||
*
|
||||
* Returns null if no PR exists (e.g. direct push to main) or the lookup fails.
|
||||
*/
|
||||
export async function resolvePrForSha(sha: string, commitMessage?: string): Promise<number | null> {
|
||||
if (commitMessage) {
|
||||
const fromMessage = parsePrFromMessage(commitMessage);
|
||||
if (fromMessage) return fromMessage;
|
||||
}
|
||||
|
||||
const url = `https://api.github.com/repos/${GITHUB_REPO}/commits/${sha}/pulls`;
|
||||
|
||||
let response: Response;
|
||||
try {
|
||||
response = await fetch(url, {
|
||||
headers: {
|
||||
accept: "application/vnd.github+json",
|
||||
"user-agent": "emdash-perf-monitor",
|
||||
"x-github-api-version": "2022-11-28",
|
||||
},
|
||||
});
|
||||
} catch (err) {
|
||||
console.error("PR lookup failed:", err);
|
||||
return null;
|
||||
}
|
||||
|
||||
if (!response.ok) {
|
||||
console.warn(`PR lookup for ${sha} returned ${response.status}`);
|
||||
return null;
|
||||
}
|
||||
|
||||
const prs = await response.json<AssociatedPR[]>();
|
||||
|
||||
// Prefer a merged PR targeting main. Fall back to any merged PR.
|
||||
const mainPr = prs.find((p) => p.merged_at && p.base.ref === "main");
|
||||
if (mainPr) return mainPr.number;
|
||||
|
||||
const anyMerged = prs.find((p) => p.merged_at);
|
||||
if (anyMerged) return anyMerged.number;
|
||||
|
||||
return null;
|
||||
}
|
||||
119
infra/perf-monitor/src/index.ts
Normal file
@@ -0,0 +1,119 @@
|
||||
/**
|
||||
* Perf monitor coordinator Worker.
|
||||
*
|
||||
* Triggers:
|
||||
* - Queue consumer: fires on every `build.succeeded` event from Cloudflare's event
|
||||
* subscriptions. We filter for the demo Worker and run measurements tagged with
|
||||
* the deploy's commit SHA. This is the primary deploy-attribution path.
|
||||
* - Cron (every 30 min): ambient baseline. Runs untagged; fills gaps between deploys
|
||||
* and catches drift the queue might miss (subscription downtime, DLQ, etc).
|
||||
* - POST /api/trigger: ad-hoc manual measurement, tagged `source=manual`.
|
||||
* Expected to be protected by a Cloudflare Access policy at the edge.
|
||||
*
|
||||
* HTTP endpoints other than /api/trigger are read-only: JSON API at /api/* and
|
||||
* the static dashboard at /.
|
||||
*/
|
||||
|
||||
import { handleApi } from "./api.js";
|
||||
import type { PerfQueueMessage } from "./events.js";
|
||||
import { isBuildSucceeded } from "./events.js";
|
||||
import { resolvePrForSha } from "./github.js";
|
||||
import { runMeasurements } from "./measure.js";
|
||||
import { TRIGGER_WORKER_NAME } from "./routes.js";
|
||||
import { insertResults } from "./store.js";
|
||||
|
||||
/**
|
||||
* Handle a single build-succeeded event: filter for the demo Worker, resolve
|
||||
* the PR number via GitHub, run measurements, persist. Errors are swallowed
|
||||
* so one bad message doesn't poison the batch.
|
||||
*/
|
||||
async function handleBuildSucceeded(
|
||||
env: Env,
|
||||
event: Extract<PerfQueueMessage, { type: "cf.workersBuilds.worker.build.succeeded" }>,
|
||||
): Promise<void> {
|
||||
const workerName = event.source.workerName;
|
||||
if (workerName !== TRIGGER_WORKER_NAME) {
|
||||
// Not our trigger worker -- ignore. Both demos build from the same
|
||||
// commit, so one event covers both sites; measuring on every known
|
||||
// worker's event would double our load without adding signal.
|
||||
return;
|
||||
}
|
||||
|
||||
const meta = event.payload.buildTriggerMetadata;
|
||||
if (meta.branch !== "main") {
|
||||
// Only measure main-branch deploys.
|
||||
return;
|
||||
}
|
||||
|
||||
const sha = meta.commitHash;
|
||||
if (!sha) {
|
||||
console.warn("build.succeeded event missing commitHash; skipping");
|
||||
return;
|
||||
}
|
||||
|
||||
console.log(`Running deploy-triggered measurement for ${workerName} @ ${sha.slice(0, 7)}`);
|
||||
|
||||
const prNumber = await resolvePrForSha(sha, meta.commitMessage);
|
||||
const results = await runMeasurements(env, { source: "deploy", sha, prNumber });
|
||||
|
||||
if (results.length > 0) {
|
||||
await insertResults(env.DB, results);
|
||||
console.log(
|
||||
`Stored ${results.length} deploy measurements for ${sha.slice(0, 7)}${prNumber ? ` (PR #${prNumber})` : ""}`,
|
||||
);
|
||||
} else {
|
||||
console.warn(`No measurements returned for ${sha.slice(0, 7)}`);
|
||||
}
|
||||
}
|
||||
|
||||
export default {
|
||||
async fetch(request: Request, env: Env): Promise<Response> {
|
||||
const url = new URL(request.url);
|
||||
|
||||
const apiResponse = await handleApi(request, url, env);
|
||||
if (apiResponse) return apiResponse;
|
||||
|
||||
// Anything else falls through to Workers Assets for the dashboard.
|
||||
return new Response("Not found", { status: 404 });
|
||||
},
|
||||
|
||||
async scheduled(
|
||||
controller: ScheduledController,
|
||||
env: Env,
|
||||
_ctx: ExecutionContext,
|
||||
): Promise<void> {
|
||||
console.log(`Cron triggered at ${new Date(controller.scheduledTime).toISOString()}`);
|
||||
|
||||
const results = await runMeasurements(env, { source: "cron" });
|
||||
|
||||
if (results.length > 0) {
|
||||
await insertResults(env.DB, results);
|
||||
console.log(`Stored ${results.length} cron measurements`);
|
||||
} else {
|
||||
console.warn("No measurements returned from probes");
|
||||
}
|
||||
},
|
||||
|
||||
async queue(batch: MessageBatch<PerfQueueMessage>, env: Env): Promise<void> {
|
||||
// Messages are processed sequentially to avoid hammering the demo with
|
||||
// parallel measurement runs (each one issues N requests per region).
|
||||
// A batch of deploy events for different Workers is rare but possible.
|
||||
for (const message of batch.messages) {
|
||||
try {
|
||||
const event = message.body;
|
||||
if (!isBuildSucceeded(event)) {
|
||||
// Event type we don't care about (build.started, build.failed, etc).
|
||||
// Ack silently.
|
||||
message.ack();
|
||||
continue;
|
||||
}
|
||||
await handleBuildSucceeded(env, event);
|
||||
message.ack();
|
||||
} catch (err) {
|
||||
console.error("Failed to process queue message:", err);
|
||||
// Retry -- exhausted retries send to the DLQ configured in wrangler.jsonc.
|
||||
message.retry();
|
||||
}
|
||||
}
|
||||
},
|
||||
} satisfies ExportedHandler<Env, PerfQueueMessage>;
|
||||
101
infra/perf-monitor/src/measure.ts
Normal file
@@ -0,0 +1,101 @@
|
||||
/** Orchestrates a measurement run across all regional probes. */
|
||||
|
||||
import type { MeasureResponse } from "../probe/src/measure.js";
|
||||
import { REGIONS, SITES, TARGET_ROUTES, WARM_REQUESTS } from "./routes.js";
|
||||
import type { Region, Site } from "./routes.js";
|
||||
import type { InsertParams, Source } from "./store.js";
|
||||
|
||||
const PROBE_BINDINGS: Record<
|
||||
Region,
|
||||
keyof Pick<Env, "PROBE_USE" | "PROBE_EUW" | "PROBE_APE" | "PROBE_APS">
|
||||
> = {
|
||||
use: "PROBE_USE",
|
||||
euw: "PROBE_EUW",
|
||||
ape: "PROBE_APE",
|
||||
aps: "PROBE_APS",
|
||||
};
|
||||
|
||||
function generateId(): string {
|
||||
const bytes = new Uint8Array(16);
|
||||
crypto.getRandomValues(bytes);
|
||||
return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
|
||||
}
|
||||
|
||||
/** Options for {@link runMeasurements} beyond the source tag. */
|
||||
export interface RunOptions {
|
||||
source: Source;
|
||||
sha?: string | null;
|
||||
prNumber?: number | null;
|
||||
note?: string | null;
|
||||
/**
|
||||
* Sites to measure. Defaults to every site in {@link SITES}. Pass a subset
|
||||
* when a caller wants to target only one deployment (e.g. manual triggers).
|
||||
*/
|
||||
sites?: readonly Site[];
|
||||
}
|
||||
|
||||
/** Dispatch measurements to all regional probes in parallel, for every site. */
|
||||
export async function runMeasurements(env: Env, opts: RunOptions): Promise<InsertParams[]> {
|
||||
const { source, sha = null, prNumber = null, note = null, sites = SITES } = opts;
|
||||
|
||||
// Fan out across (site × region). We run all probes in parallel -- each one
|
||||
// issues N requests per route on its own, so the measurement load on the
|
||||
// demos is bounded regardless of how many sites we have.
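// With the default registry that is 2 sites x 4 regions = 8 parallel probe
// calls, each measuring every route in TARGET_ROUTES.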
|
||||
const probePromises = sites.flatMap((site) =>
|
||||
REGIONS.map(async (region) => {
|
||||
const binding = PROBE_BINDINGS[region];
|
||||
const probe = env[binding];
|
||||
const payload = {
|
||||
targetUrl: site.targetUrl,
|
||||
routes: TARGET_ROUTES.map((r) => ({ path: r.path, label: r.label })),
|
||||
warmRequests: WARM_REQUESTS,
|
||||
region,
|
||||
};
|
||||
|
||||
try {
|
||||
const response = await probe.fetch("https://probe/measure", {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify(payload),
|
||||
});
|
||||
|
||||
if (!response.ok) {
|
||||
const errText = await response.text();
|
||||
console.error(
|
||||
`Probe ${region} failed for site=${site.id}: ${response.status} ${errText}`,
|
||||
);
|
||||
return [];
|
||||
}
|
||||
|
||||
const data = await response.json<MeasureResponse>();
|
||||
|
||||
return data.results.map(
|
||||
(r): InsertParams => ({
|
||||
id: generateId(),
|
||||
sha,
|
||||
prNumber,
|
||||
route: r.path,
|
||||
region,
|
||||
coldTtfbMs: r.coldTtfbMs,
|
||||
warmTtfbMs: r.warmTtfbMs,
|
||||
p95TtfbMs: r.p95TtfbMs,
|
||||
statusCode: r.statusCode,
|
||||
cfColo: r.cfColo,
|
||||
cfPlacement: r.cfPlacement,
|
||||
coldServerTimings: r.coldServerTimings,
|
||||
warmServerTimings: r.warmServerTimings,
|
||||
note,
|
||||
source,
|
||||
site: site.id,
|
||||
}),
|
||||
);
|
||||
} catch (err) {
|
||||
console.error(`Probe ${region} error for site=${site.id}:`, err);
|
||||
return [];
|
||||
}
|
||||
}),
|
||||
);
|
||||
|
||||
const allResults = await Promise.all(probePromises);
|
||||
return allResults.flat();
|
||||
}
|
||||
111
infra/perf-monitor/src/routes.ts
Normal file
@@ -0,0 +1,111 @@
|
||||
/** Target routes to measure and their thresholds. */
|
||||
|
||||
export interface TargetRoute {
|
||||
path: string;
|
||||
label: string;
|
||||
/** Cold TTFB threshold in ms -- CI fails if exceeded. */
|
||||
coldThresholdMs: number;
|
||||
/**
|
||||
* HTTP status codes considered valid for this route. If a measurement returns
|
||||
* something outside this set, the CI trigger marks it as a sanity-check failure.
|
||||
* Measuring a 404 or 500 response tells us nothing about real-world perf -- the
|
||||
* route is either broken or has drifted (e.g. a referenced post was deleted).
|
||||
*
|
||||
* Note: the probe follows redirects, so this describes the final response status.
|
||||
* `/_emdash/admin` 302s to the login page (200), so 200 covers it.
|
||||
*/
|
||||
expectedStatuses: number[];
|
||||
}
|
||||
|
||||
/**
|
||||
* A deployed demo we measure. Sites share the same route set and are compared
|
||||
* head-to-head on the dashboard. `blog` is the baseline; `cache` runs with
|
||||
* Astro's experimental cache provider enabled.
|
||||
*/
|
||||
export interface Site {
|
||||
/** Stable slug stored in `perf_results.site`. */
|
||||
id: string;
|
||||
label: string;
|
||||
targetUrl: string;
|
||||
/** Cloudflare Worker name — matched against build.succeeded events. */
|
||||
workerName: string;
|
||||
}
|
||||
|
||||
export const SITES: readonly Site[] = [
|
||||
{
|
||||
id: "blog",
|
||||
label: "Baseline",
|
||||
targetUrl: "https://blog-demo.emdashcms.com",
|
||||
workerName: "emdash-demo-blog",
|
||||
},
|
||||
{
|
||||
id: "cache",
|
||||
label: "Astro cache",
|
||||
targetUrl: "https://cache-demo.emdashcms.com",
|
||||
workerName: "emdash-demo-cache",
|
||||
},
|
||||
] as const;
|
||||
|
||||
export const DEFAULT_SITE_ID = "blog";
|
||||
|
||||
export function getSite(id: string): Site | undefined {
|
||||
return SITES.find((s) => s.id === id);
|
||||
}
|
||||
|
||||
/**
|
||||
* Worker name whose build.succeeded events drive deploy-attributed
|
||||
* measurements. Both sites build from the same repo on every main-branch
|
||||
* commit, so measuring on the baseline worker's event covers both (see
|
||||
* `handleBuildSucceeded`). If only cache-demo deploys (rare), the cron
|
||||
* job will catch it on the next tick.
|
||||
*/
|
||||
export const TRIGGER_WORKER_NAME = "emdash-demo-blog";
|
||||
|
||||
/**
|
||||
* GitHub repo used for PR number lookup. SHA -> merged PR resolution happens
|
||||
* via the GitHub API when a deploy event arrives.
|
||||
*/
|
||||
export const GITHUB_REPO = "emdash-cms/emdash";
|
||||
|
||||
/**
|
||||
* Routes we measure. Each exercises a different code path on the demo:
|
||||
* - "/" hits the homepage template and queries the latest posts
|
||||
* - "/posts/<slug>" renders a single post (different template + single-row fetch)
|
||||
* - "/_emdash/admin" returns a redirect from the admin root -- measures auth middleware latency
|
||||
*
|
||||
* We avoid `/_emdash/api/content/*` -- it requires auth and returns 401 immediately,
|
||||
* which doesn't reflect real query latency.
|
||||
*/
|
||||
export const TARGET_ROUTES: TargetRoute[] = [
|
||||
{
|
||||
path: "/",
|
||||
label: "Homepage",
|
||||
coldThresholdMs: 2000,
|
||||
expectedStatuses: [200],
|
||||
},
|
||||
{
|
||||
path: "/posts/marshland-birds-at-the-lake-havasu-national-wildlife-refuge",
|
||||
label: "Single Post",
|
||||
coldThresholdMs: 2000,
|
||||
expectedStatuses: [200],
|
||||
},
|
||||
{
|
||||
path: "/_emdash/admin",
|
||||
label: "Admin (login page)",
|
||||
coldThresholdMs: 1500,
|
||||
expectedStatuses: [200],
|
||||
},
|
||||
];
|
||||
|
||||
export const REGIONS = ["use", "euw", "ape", "aps"] as const;
|
||||
export type Region = (typeof REGIONS)[number];
|
||||
|
||||
export const REGION_LABELS: Record<Region, string> = {
|
||||
use: "US East",
|
||||
euw: "Europe West",
|
||||
ape: "Asia Pacific East",
|
||||
aps: "Asia Pacific South",
|
||||
};
|
||||
|
||||
/** Number of warm requests per route (we take the median). */
|
||||
export const WARM_REQUESTS = 5;
|
||||
245
infra/perf-monitor/src/store.ts
Normal file
@@ -0,0 +1,245 @@
|
||||
/** D1 storage layer for perf results. */
|
||||
|
||||
/** All valid values for the `source` column. */
|
||||
export type Source = "deploy" | "cron" | "manual";
|
||||
|
||||
export interface PerfResult {
|
||||
id: string;
|
||||
sha: string | null;
|
||||
pr_number: number | null;
|
||||
route: string;
|
||||
region: string;
|
||||
cold_ttfb_ms: number | null;
|
||||
warm_ttfb_ms: number | null;
|
||||
p95_ttfb_ms: number | null;
|
||||
status_code: number | null;
|
||||
cf_colo: string | null;
|
||||
cf_placement: string | null;
|
||||
/** Raw JSON string as stored. Use {@link parseColdServerTimings} to decode. */
|
||||
cold_server_timings: string | null;
|
||||
/**
|
||||
* Median duration per metric across warm requests, same JSON shape as
|
||||
* `cold_server_timings`. Null when the target didn't emit Server-Timing
|
||||
* on warm responses, or when no warm requests were issued.
|
||||
*/
|
||||
warm_server_timings: string | null;
|
||||
note: string | null;
|
||||
timestamp: string;
|
||||
source: string;
|
||||
site: string;
|
||||
}
|
||||
|
||||
export interface InsertParams {
|
||||
id: string;
|
||||
sha: string | null;
|
||||
prNumber: number | null;
|
||||
route: string;
|
||||
region: string;
|
||||
coldTtfbMs: number | null;
|
||||
warmTtfbMs: number | null;
|
||||
p95TtfbMs: number | null;
|
||||
statusCode: number | null;
|
||||
cfColo: string | null;
|
||||
cfPlacement: string | null;
|
||||
/** Will be JSON.stringify'd on the way in. Null if unavailable. */
|
||||
coldServerTimings: Record<string, { dur: number; desc?: string }> | null;
|
||||
/** Median-per-metric snapshot of warm Server-Timing. Null if unavailable. */
|
||||
warmServerTimings: Record<string, { dur: number; desc?: string }> | null;
|
||||
note: string | null;
|
||||
source: Source;
|
||||
site: string;
|
||||
}
|
||||
|
||||
/** Column list shared between insertResult and insertResults. */
|
||||
const INSERT_COLUMNS =
|
||||
"id, sha, pr_number, route, region, cold_ttfb_ms, warm_ttfb_ms, p95_ttfb_ms, status_code, cf_colo, cf_placement, cold_server_timings, warm_server_timings, note, source, site";
|
||||
const INSERT_PLACEHOLDERS = "?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?";
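// Keep the 16 column names, the 16 placeholders, and bindInsert's bind order in sync.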
|
||||
|
||||
function bindInsert(stmt: D1PreparedStatement, p: InsertParams): D1PreparedStatement {
|
||||
return stmt.bind(
|
||||
p.id,
|
||||
p.sha,
|
||||
p.prNumber,
|
||||
p.route,
|
||||
p.region,
|
||||
p.coldTtfbMs,
|
||||
p.warmTtfbMs,
|
||||
p.p95TtfbMs,
|
||||
p.statusCode,
|
||||
p.cfColo,
|
||||
p.cfPlacement,
|
||||
p.coldServerTimings ? JSON.stringify(p.coldServerTimings) : null,
|
||||
p.warmServerTimings ? JSON.stringify(p.warmServerTimings) : null,
|
||||
p.note,
|
||||
p.source,
|
||||
p.site,
|
||||
);
|
||||
}
|
||||
|
||||
/** Insert a single measurement result. */
|
||||
export async function insertResult(db: D1Database, params: InsertParams): Promise<void> {
|
||||
await bindInsert(
|
||||
db.prepare(`INSERT INTO perf_results (${INSERT_COLUMNS}) VALUES (${INSERT_PLACEHOLDERS})`),
|
||||
params,
|
||||
).run();
|
||||
}
|
||||
|
||||
/** Insert a batch of results in a single transaction. */
|
||||
export async function insertResults(db: D1Database, results: InsertParams[]): Promise<void> {
|
||||
const stmt = db.prepare(
|
||||
`INSERT INTO perf_results (${INSERT_COLUMNS}) VALUES (${INSERT_PLACEHOLDERS})`,
|
||||
);
|
||||
await db.batch(results.map((p) => bindInsert(stmt, p)));
|
||||
}
|
||||
|
||||
export interface QueryParams {
|
||||
route?: string;
|
||||
region?: string;
|
||||
source?: Source;
|
||||
site?: string;
|
||||
since?: string;
|
||||
limit?: number;
|
||||
}
|
||||
|
||||
/**
|
||||
* Normalize an ISO-8601 timestamp (e.g. "2026-04-20T05:00:00.000Z") to the
|
||||
* " "-separated form D1's `datetime('now')` writes ("2026-04-20 05:00:00").
|
||||
*
|
||||
* SQLite compares TEXT lexicographically: space (0x20) sorts before "T"
|
||||
* (0x54). If we pass the client's ISO string straight into `timestamp >= ?`,
|
||||
* any stored row whose calendar date matches the since-boundary compares
|
||||
* LESS than since regardless of its actual time, so same-day filters (1h,
|
||||
* and the "today" portion of 24h) silently return zero rows.
|
||||
*/
|
||||
const SINCE_TIMESTAMP_RE = /^(\d{4}-\d{2}-\d{2})[T ](\d{2}:\d{2}:\d{2})/;
|
||||
|
||||
function normalizeSince(since: string): string {
|
||||
const match = SINCE_TIMESTAMP_RE.exec(since);
|
||||
return match ? `${match[1]} ${match[2]}` : since;
|
||||
}
|
||||
|
||||
/** Query historical results with optional filters. */
|
||||
export async function queryResults(db: D1Database, params: QueryParams): Promise<PerfResult[]> {
|
||||
const conditions: string[] = [];
|
||||
const bindings: (string | number)[] = [];
|
||||
|
||||
if (params.route) {
|
||||
conditions.push("route = ?");
|
||||
bindings.push(params.route);
|
||||
}
|
||||
if (params.region) {
|
||||
conditions.push("region = ?");
|
||||
bindings.push(params.region);
|
||||
}
|
||||
if (params.source) {
|
||||
conditions.push("source = ?");
|
||||
bindings.push(params.source);
|
||||
}
|
||||
if (params.site) {
|
||||
conditions.push("site = ?");
|
||||
bindings.push(params.site);
|
||||
}
|
||||
if (params.since) {
|
||||
conditions.push("timestamp >= ?");
|
||||
bindings.push(normalizeSince(params.since));
|
||||
}
|
||||
|
||||
const where = conditions.length > 0 ? `WHERE ${conditions.join(" AND ")}` : "";
|
||||
const limit = Math.min(params.limit ?? 500, 1000);
|
||||
|
||||
const query = `SELECT * FROM perf_results ${where} ORDER BY timestamp DESC LIMIT ?`;
|
||||
bindings.push(limit);
|
||||
|
||||
const result = await db
|
||||
.prepare(query)
|
||||
.bind(...bindings)
|
||||
.all<PerfResult>();
|
||||
return result.results;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get the latest result per route/region combo for a given site.
|
||||
* Manual runs are excluded -- they're ad-hoc probes and would otherwise
|
||||
* poison the dashboard's "current state" cards whenever one was the most
|
||||
* recent sample.
|
||||
*/
|
||||
export async function getLatestResults(db: D1Database, site: string): Promise<PerfResult[]> {
|
||||
const result = await db
|
||||
.prepare(
|
||||
`SELECT p.* FROM perf_results p
|
||||
INNER JOIN (
|
||||
SELECT route, region, MAX(timestamp) as max_ts
|
||||
FROM perf_results
|
||||
WHERE source != 'manual' AND site = ?
|
||||
GROUP BY route, region
|
||||
) latest ON p.route = latest.route AND p.region = latest.region AND p.timestamp = latest.max_ts
|
||||
WHERE p.source != 'manual' AND p.site = ?
|
||||
ORDER BY p.region, p.route`,
|
||||
)
|
||||
.bind(site, site)
|
||||
.all<PerfResult>();
|
||||
return result.results;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get rolling medians for each route/region over the last N days for a given site.
|
||||
* Manual runs are excluded so ad-hoc probes don't pull the baseline around.
|
||||
*/
|
||||
export async function getRollingMedians(
|
||||
db: D1Database,
|
||||
site: string,
|
||||
days: number = 7,
|
||||
): Promise<
|
||||
Array<{ route: string; region: string; median_cold: number; median_warm: number; count: number }>
|
||||
> {
|
||||
const result = await db
|
||||
.prepare(
|
||||
`SELECT
|
||||
route,
|
||||
region,
|
||||
COUNT(*) as count,
|
||||
-- SQLite doesn't have PERCENTILE_CONT, so we approximate the median with a plain AVG
|
||||
AVG(cold_ttfb_ms) as median_cold,
|
||||
AVG(warm_ttfb_ms) as median_warm
|
||||
FROM perf_results
|
||||
WHERE timestamp >= datetime('now', ?)
|
||||
AND cold_ttfb_ms IS NOT NULL
|
||||
AND source != 'manual'
|
||||
AND site = ?
|
||||
GROUP BY route, region
|
||||
ORDER BY region, route`,
|
||||
)
|
||||
.bind(`-${days} days`, site)
|
||||
.all<{
|
||||
route: string;
|
||||
region: string;
|
||||
median_cold: number;
|
||||
median_warm: number;
|
||||
count: number;
|
||||
}>();
|
||||
return result.results;
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all deploy-triggered results (with SHA and PR info) for chart markers.
|
||||
* Only 'deploy' source has SHA attribution -- 'cron' is untagged baseline.
|
||||
*/
|
||||
export async function getDeployResults(
|
||||
db: D1Database,
|
||||
site: string,
|
||||
since?: string,
|
||||
): Promise<PerfResult[]> {
|
||||
const sinceClause = since ? "AND timestamp >= ?" : "";
|
||||
const bindings: string[] = [site];
|
||||
if (since) bindings.push(normalizeSince(since));
|
||||
|
||||
const result = await db
|
||||
.prepare(
|
||||
`SELECT * FROM perf_results
|
||||
WHERE source = 'deploy' AND site = ? ${sinceClause}
|
||||
ORDER BY timestamp ASC`,
|
||||
)
|
||||
.bind(...bindings)
|
||||
.all<PerfResult>();
|
||||
return result.results;
|
||||
}
|
||||
17
infra/perf-monitor/tsconfig.json
Normal file
@@ -0,0 +1,17 @@
|
||||
{
|
||||
"compilerOptions": {
|
||||
"target": "es2023",
|
||||
"module": "esnext",
|
||||
"moduleResolution": "bundler",
|
||||
"lib": ["es2023"],
|
||||
"types": [],
|
||||
"strict": true,
|
||||
"noUncheckedIndexedAccess": true,
|
||||
"noImplicitOverride": true,
|
||||
"verbatimModuleSyntax": true,
|
||||
"isolatedModules": true,
|
||||
"skipLibCheck": true,
|
||||
"noEmit": true
|
||||
},
|
||||
"include": ["src", "probe/src", "worker-configuration.d.ts"]
|
||||
}
|
||||
29
infra/perf-monitor/vite.config.ts
Normal file
@@ -0,0 +1,29 @@
|
||||
import { cloudflare } from "@cloudflare/vite-plugin";
|
||||
import { defineConfig } from "vite";
|
||||
|
||||
const PROBE_REGIONS = [
|
||||
{ id: "use", region: "aws:us-east-1" },
|
||||
{ id: "euw", region: "aws:eu-west-2" },
|
||||
{ id: "ape", region: "aws:ap-northeast-1" },
|
||||
{ id: "aps", region: "aws:ap-southeast-1" },
|
||||
] as const;
|
||||
|
||||
export default defineConfig({
|
||||
plugins: [
|
||||
cloudflare({
|
||||
configPath: "./wrangler.jsonc",
|
||||
auxiliaryWorkers: PROBE_REGIONS.map((probe) => ({
|
||||
config: (_, { entryWorkerConfig }) => ({
|
||||
name: `emdash-perf-probe-${probe.id}`,
|
||||
main: "./probe/src/index.ts",
|
||||
account_id: entryWorkerConfig.account_id,
|
||||
compatibility_date: entryWorkerConfig.compatibility_date,
|
||||
compatibility_flags: entryWorkerConfig.compatibility_flags,
|
||||
placement: {
|
||||
region: probe.region,
|
||||
},
|
||||
}),
|
||||
})),
|
||||
}),
|
||||
],
|
||||
});
|
||||
14059
infra/perf-monitor/worker-configuration.d.ts
vendored
Normal file
File diff suppressed because it is too large
61
infra/perf-monitor/wrangler.jsonc
Normal file
@@ -0,0 +1,61 @@
|
||||
{
|
||||
"$schema": "node_modules/wrangler/config-schema.json",
|
||||
"name": "emdash-perf-coordinator",
|
||||
"main": "src/index.ts",
|
||||
"account_id": "1f74638c495bc9f0330ce5c8e64c1b6b",
|
||||
"compatibility_date": "2026-04-01",
|
||||
"compatibility_flags": ["nodejs_compat"],
|
||||
"routes": [
|
||||
{
|
||||
"pattern": "perf.emdashcms.com",
|
||||
"zone_name": "emdashcms.com",
|
||||
"custom_domain": true,
|
||||
},
|
||||
],
|
||||
"d1_databases": [
|
||||
{
|
||||
"binding": "DB",
|
||||
"database_name": "emdash_perf",
|
||||
"database_id": "84918738-8904-49bb-a306-b58b96edfc08",
|
||||
"migrations_dir": "migrations",
|
||||
},
|
||||
],
|
||||
"services": [
|
||||
{
|
||||
"binding": "PROBE_USE",
|
||||
"service": "emdash-perf-probe-use",
|
||||
},
|
||||
{
|
||||
"binding": "PROBE_EUW",
|
||||
"service": "emdash-perf-probe-euw",
|
||||
},
|
||||
{
|
||||
"binding": "PROBE_APE",
|
||||
"service": "emdash-perf-probe-ape",
|
||||
},
|
||||
{
|
||||
"binding": "PROBE_APS",
|
||||
"service": "emdash-perf-probe-aps",
|
||||
},
|
||||
],
|
||||
"triggers": {
|
||||
"crons": ["*/30 * * * *"],
|
||||
},
|
||||
"assets": {
|
||||
"directory": "public",
|
||||
},
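// Consumer side only: this queue is filled by a Cloudflare event subscription
// for build.succeeded events (see src/index.ts); the subscription itself is not
// configured in this file.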
|
||||
"queues": {
|
||||
"consumers": [
|
||||
{
|
||||
"queue": "emdash-perf-deploy-events",
|
||||
"max_batch_size": 10,
|
||||
"max_batch_timeout": 5,
|
||||
"max_retries": 3,
|
||||
"dead_letter_queue": "emdash-perf-deploy-events-dlq",
|
||||
},
|
||||
],
|
||||
},
|
||||
"observability": {
|
||||
"enabled": true,
|
||||
},
|
||||
}
|
||||