moreminimore-vibe/drizzle/meta/_journal.json
Will Chen 6235f7bb9d Summarize chat trigger (#1890)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Adds a context-limit banner with a one-click “summarize into new chat” action, refactors token counting with react-query, and persists per-message max token usage.
>
> - **Chat UX**
>   - **Context limit banner** (`ContextLimitBanner.tsx`, `MessagesList.tsx`): shows when within 40k tokens of `contextWindow`, with a tooltip and an action to summarize into a new chat.
>   - **Summarize flow**: extracted to `useSummarizeInNewChat` and used in the chat input and the banner; new summarize system prompt (`summarize_chat_system_prompt.ts`).
> - **Token usage & counting**
>   - **Persist max tokens used per assistant message**: DB migration (`messages.max_tokens_used`), schema updates, and saving usage during streaming (`chat_stream_handlers.ts`).
>   - **Token counting refactor** (`useCountTokens.ts`): react-query with debounce; returns `estimatedTotalTokens` and `actualMaxTokens`; invalidated on model change and stream end; `TokenBar` updated.
>   - **Surfacing usage**: tooltip on the latest assistant message shows total tokens (`ChatMessage.tsx`).
> - **Model/config tweaks**
>   - Set the `auto` model `contextWindow` to `200_000` (`language_model_constants.ts`).
>   - Improve the chat auto-scroll dependency (`ChatPanel.tsx`).
>   - Fix the app path validation regex (`app_handlers.ts`).
> - **Testing & dev server**
>   - E2E tests for the banner and summarize flow (`e2e-tests/context_limit_banner.spec.ts` + fixtures/snapshot).
>   - Fake LLM server streams usage to simulate high-token scenarios (`testing/fake-llm-server/*`).
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 2ae16a14d50699cc772407426419192c2fdf2ec3. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
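The banner trigger described in the summary reduces to a simple threshold check. A minimal sketch, assuming the 40k margin is a constant; the names below are hypothetical, not the PR's actual identifiers:

```typescript
// Hypothetical sketch of the context-limit check (names are assumptions).
const CONTEXT_LIMIT_MARGIN = 40_000;

function shouldShowContextLimitBanner(
  estimatedTotalTokens: number,
  contextWindow: number,
): boolean {
  // Show the banner once the conversation is within the margin
  // of the model's context window.
  return contextWindow - estimatedTotalTokens <= CONTEXT_LIMIT_MARGIN;
}
```

With the `auto` model's `contextWindow` of `200_000`, the banner would appear once the estimated total reaches 160k tokens.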

<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Adds a “Summarize into new chat” trigger and a context limit banner to help keep conversations focused and avoid hitting model limits. Also tracks and surfaces actual token usage per assistant message, with a token counting refactor for reliability.

- **New Features**
  - Summarize into new chat from the input or banner; improved system prompt with a clear output format.
  - Context limit banner shows when within 40k tokens of the model’s context window and offers a one-click summarize action.
  - Tooltip on the latest assistant message shows total tokens used.

- **Refactors**
  - Token counting now uses react-query and returns `estimatedTotalTokens` and `actualMaxTokens`; counts are invalidated on model change and when streaming settles.
  - Persist per-message `max_tokens_used` in the `messages` table; the backend aggregates model usage during streaming and saves it.
  - Adjusted the default “Auto” model `contextWindow` to 200k for more realistic limits.
  - Improved chat scrolling while streaming; fixed the app path validation regex.

<sup>Written for commit 2ae16a14d50699cc772407426419192c2fdf2ec3. Summary will update automatically on new commits.</sup>

<!-- End of auto-generated description by cubic. -->
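The "aggregates model usage during streaming" step described above can be sketched as a pure fold over the usage reports a stream emits, keeping the maximum before it is persisted. `UsageChunk` and `maxTokensUsed` are illustrative names, not the PR's actual API:

```typescript
// Hypothetical shape of a usage report emitted during streaming.
interface UsageChunk {
  totalTokens: number;
}

// Keep the highest usage seen across all chunks of one response;
// this is the kind of value a messages.max_tokens_used column would store.
function maxTokensUsed(chunks: UsageChunk[]): number {
  return chunks.reduce((max, chunk) => Math.max(max, chunk.totalTokens), 0);
}
```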
2025-12-04 23:00:28 -08:00


{
  "version": "7",
  "dialect": "sqlite",
  "entries": [
    {
      "idx": 0,
      "version": "6",
      "when": 1744692127560,
      "tag": "0000_nebulous_proemial_gods",
      "breakpoints": true
    },
    {
      "idx": 1,
      "version": "6",
      "when": 1744999922420,
      "tag": "0001_hesitant_roland_deschain",
      "breakpoints": true
    },
    {
      "idx": 2,
      "version": "6",
      "when": 1745359640409,
      "tag": "0002_unique_morlocks",
      "breakpoints": true
    },
    {
      "idx": 3,
      "version": "6",
      "when": 1746209201530,
      "tag": "0003_open_bucky",
      "breakpoints": true
    },
    {
      "idx": 4,
      "version": "6",
      "when": 1746556241557,
      "tag": "0004_flawless_jigsaw",
      "breakpoints": true
    },
    {
      "idx": 5,
      "version": "6",
      "when": 1747095436506,
      "tag": "0005_clumsy_namor",
      "breakpoints": true
    },
    {
      "idx": 6,
      "version": "6",
      "when": 1749515724373,
      "tag": "0006_mushy_squirrel_girl",
      "breakpoints": true
    },
    {
      "idx": 7,
      "version": "6",
      "when": 1750186036000,
      "tag": "0007_dapper_overlord",
      "breakpoints": true
    },
    {
      "idx": 8,
      "version": "6",
      "when": 1752625491756,
      "tag": "0008_medical_vulcan",
      "breakpoints": true
    },
    {
      "idx": 9,
      "version": "6",
      "when": 1753473275674,
      "tag": "0009_previous_misty_knight",
      "breakpoints": true
    },
    {
      "idx": 10,
      "version": "6",
      "when": 1755110011615,
      "tag": "0010_nappy_fat_cobra",
      "breakpoints": true
    },
    {
      "idx": 11,
      "version": "6",
      "when": 1755545060076,
      "tag": "0011_light_zeigeist",
      "breakpoints": true
    },
    {
      "idx": 12,
      "version": "6",
      "when": 1758320228637,
      "tag": "0012_bouncy_fenris",
      "breakpoints": true
    },
    {
      "idx": 13,
      "version": "6",
      "when": 1759068733234,
      "tag": "0013_damp_mephistopheles",
      "breakpoints": true
    },
    {
      "idx": 14,
      "version": "6",
      "when": 1760034009367,
      "tag": "0014_needy_vertigo",
      "breakpoints": true
    },
    {
      "idx": 15,
      "version": "6",
      "when": 1760474402750,
      "tag": "0015_complete_old_lace",
      "breakpoints": true
    },
    {
      "idx": 16,
      "version": "6",
      "when": 1762297039106,
      "tag": "0016_petite_thanos",
      "breakpoints": true
    },
    {
      "idx": 17,
      "version": "6",
      "when": 1764804624402,
      "tag": "0017_sharp_corsair",
      "breakpoints": true
    }
  ]
}
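The journal above is maintained by drizzle-kit itself, so nothing edits it by hand; still, its invariants (zero-based sequential `idx`, strictly increasing `when` timestamps, one `tag` per migration file) can be sketched as a hypothetical validator:

```typescript
// Illustrative shape of one drizzle journal entry.
interface JournalEntry {
  idx: number;
  version: string;
  when: number; // unix epoch, milliseconds
  tag: string; // matches the migration's .sql filename stem
  breakpoints: boolean;
}

// Hypothetical check: entries are indexed 0..n-1 in order,
// and each migration was created after the previous one.
function isWellOrdered(entries: JournalEntry[]): boolean {
  return entries.every(
    (e, i) => e.idx === i && (i === 0 || entries[i - 1].when < e.when),
  );
}
```

A check like this would catch a journal corrupted by a bad merge, e.g. a duplicated or reordered entry.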