* fix: use stable site hash for install telemetry deduplication (#297)

  generateSiteHash() used Date.now() as the hash seed, producing a different hash on every call. Since the installs table uses PRIMARY KEY (plugin_id, site_hash), the same site could insert unlimited rows, inflating install counts and making "Most Popular" sorting meaningless.

  Fix: use the site's request origin as a stable hash seed. The same origin always produces the same hash, so the marketplace deduplicates correctly.

  Also denormalizes install_count on the plugins table to avoid a COUNT(*) subquery per row in searchPlugins(). The count is recalculated atomically on each upsertInstall() call.

  Fixes #297

* chore: add changeset for install telemetry fix

* fix: address review feedback on install telemetry

  - Replace crypto.subtle fallback with FNV-1a hash to avoid origin leakage and collisions from truncated seed strings
  - Remove duplicate p.install_count from SELECT (p.* already includes it)
  - Use explicit p.install_count in ORDER BY clause
  - Use db.batch() for atomic upsert + count recomputation instead of separate statements with a misleading meta.changes check
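The FNV-1a hash mentioned in the review-feedback item can be sketched as below. This is an illustrative 32-bit FNV-1a over the origin string, not the repo's actual `generateSiteHash()`; the function names and output format are assumptions, and the real seed handling may differ.

```typescript
// 32-bit FNV-1a: small, non-cryptographic, and stable for a given input,
// which makes it suitable for deduplication keys (unlike a Date.now() seed).
function fnv1a32(input: string): number {
  const bytes = new TextEncoder().encode(input);
  let hash = 0x811c9dc5; // FNV offset basis
  for (const b of bytes) {
    hash ^= b;
    hash = Math.imul(hash, 0x01000193); // multiply by FNV prime, mod 2^32
    hash >>>= 0; // keep unsigned 32-bit
  }
  return hash >>> 0;
}

// Hypothetical stable site hash derived from the request origin: the same
// origin always yields the same hex string, so the (plugin_id, site_hash)
// primary key deduplicates repeated installs from one site.
function siteHashFromOrigin(origin: string): string {
  return fnv1a32(origin).toString(16).padStart(8, "0");
}
```

Because FNV-1a is not cryptographic, it only avoids leaking the raw origin into the database; it does not make the origin unrecoverable against brute force.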
# @emdash-cms/marketplace

Standalone Cloudflare Worker that hosts the EmDash plugin marketplace: discovery, publishing, and moderation.
## Development

```sh
pnpm dev   # starts wrangler dev server on :8787
pnpm test  # runs vitest
```

Requires an AI binding (wrangler.jsonc has it configured). Code and image audits run on Workers AI.
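For reference, a Workers AI binding in wrangler.jsonc looks like the fragment below. The binding name `AI` is an assumption for illustration; check this repo's wrangler.jsonc for the actual name.

```jsonc
{
  // Exposes Workers AI to the Worker as env.AI
  "ai": {
    "binding": "AI"
  }
}
```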
## Manual audit testing

The `/api/v1/dev/audit` endpoint (localhost only) runs the code + image audit pipeline without auth or DB writes. Use it to evaluate AI model accuracy against the fixture corpus.
### Using the test script

```sh
# Run a single fixture
tests/fixtures/audit/test-audit.sh tests/fixtures/audit/prompt-injection

# Against a different host
tests/fixtures/audit/test-audit.sh tests/fixtures/audit/data-exfiltration http://localhost:8787
```

The script tars the fixture directory and POSTs it as a multipart bundle. Output is the raw audit JSON.
### Using curl directly

Tarball mode (full bundle with manifest, code, and images):

```sh
tar -czf /tmp/bundle.tar.gz -C tests/fixtures/audit/crypto-miner .
curl -s -X POST http://localhost:8787/api/v1/dev/audit -F "bundle=@/tmp/bundle.tar.gz" | jq
```

JSON mode (code only, no manifest required):

```sh
curl -s -X POST http://localhost:8787/api/v1/dev/audit \
  -H "Content-Type: application/json" \
  -d '{"backendCode": "const x = eval(\"1+1\");"}' | jq
```
### Running all fixtures

```sh
for d in tests/fixtures/audit/*/; do
  echo "=== $(basename "$d") ==="
  tests/fixtures/audit/test-audit.sh "$d"
  echo
done
```

Compare the `verdict` and `riskScore` in each response against the fixture's expected.json to evaluate model accuracy.
## Fixture format

Each fixture in tests/fixtures/audit/ is a directory containing:

| File | Required | Purpose |
|---|---|---|
| `manifest.json` | yes | Plugin manifest |
| `backend.js` | yes | Backend code (primary audit target) |
| `admin.js` | no | Admin UI code |
| `icon.png` | no | Plugin icon (triggers image audit) |
| `screenshots/*.png` | no | Screenshots (trigger image audit) |
| `expected.json` | yes | Expected verdict, score, categories |
`expected.json` shape:

```jsonc
{
  "verdict": "pass" | "warn" | "fail",
  "minRiskScore": 50,
  "maxRiskScore": 10,
  "categories": ["data-exfiltration", "obfuscation"]
}
```

`minRiskScore` and `maxRiskScore` are optional bounds on the reported risk score. `categories` lists the finding categories the model should detect.
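The comparison against expected.json can be sketched as follows. This is an illustrative checker, not code from the repo; the exact semantics (verdict must match exactly, riskScore must fall within the optional bounds, every expected category must appear among the reported ones) are assumptions based on the shape above.

```typescript
interface Expected {
  verdict: "pass" | "warn" | "fail";
  minRiskScore?: number;
  maxRiskScore?: number;
  categories?: string[];
}

interface AuditResult {
  verdict: string;
  riskScore: number;
  categories: string[];
}

// Returns a list of mismatches; an empty list means the model met expectations.
function checkFixture(expected: Expected, actual: AuditResult): string[] {
  const errors: string[] = [];
  if (actual.verdict !== expected.verdict) {
    errors.push(`verdict: expected ${expected.verdict}, got ${actual.verdict}`);
  }
  if (expected.minRiskScore !== undefined && actual.riskScore < expected.minRiskScore) {
    errors.push(`riskScore ${actual.riskScore} below minimum ${expected.minRiskScore}`);
  }
  if (expected.maxRiskScore !== undefined && actual.riskScore > expected.maxRiskScore) {
    errors.push(`riskScore ${actual.riskScore} above maximum ${expected.maxRiskScore}`);
  }
  // Missing categories count as misses; extra detected categories do not.
  for (const cat of expected.categories ?? []) {
    if (!actual.categories.includes(cat)) {
      errors.push(`missing expected category: ${cat}`);
    }
  }
  return errors;
}
```

A harness could run this over every fixture directory and report the pass rate per model.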