Commit Graph

24 Commits

Author SHA1 Message Date
Kunthawat Greethong
29248688f3 feat: rebrand from Dyad to MoreMinimore
- Update package.json description to reflect new branding
- Add fix_chat_input function to update Pro URL references
- Rename all Dyad-related functions and tags to MoreMinimore
- Update test files to use new function names
- Remove Pro restrictions from Annotator component
- Update branding text throughout the application
2025-12-19 17:26:32 +07:00
Kunthawat Greethong
6bb756fdd7 Latest change 2025-12-19 16:13:39 +07:00
Will Chen
6235f7bb9d Summarize chat trigger (#1890)
> [!NOTE]
> Adds a context-limit banner with one-click “summarize into new chat,” refactors token counting with react-query, and persists per-message max token usage.
>
> - **Chat UX**
>   - **Context limit banner** (`ContextLimitBanner.tsx`, `MessagesList.tsx`): shows when within 40k tokens of `contextWindow`, with tooltip and action to summarize into a new chat (see the sketch below).
>   - **Summarize flow**: extracted to `useSummarizeInNewChat` and used in chat input and banner; new summarize system prompt (`summarize_chat_system_prompt.ts`).
> - **Token usage & counting**
>   - **Persist max tokens used per assistant message**: DB migration (`messages.max_tokens_used`), schema updates, and saving usage during streaming (`chat_stream_handlers.ts`).
>   - **Token counting refactor** (`useCountTokens.ts`): react-query with debounce; returns `estimatedTotalTokens` and `actualMaxTokens`; invalidated on model change and stream end; `TokenBar` updated.
>   - **Surfacing usage**: tooltip on latest assistant message shows total tokens (`ChatMessage.tsx`).
> - **Model/config tweaks**
>   - Set `auto` model `contextWindow` to `200_000` (`language_model_constants.ts`).
>   - Improve chat auto-scroll dependency (`ChatPanel.tsx`).
>   - Fix app path validation regex (`app_handlers.ts`).
> - **Testing & dev server**
>   - E2E tests for banner and summarize (`e2e-tests/context_limit_banner.spec.ts` + fixtures/snapshot).
>   - Fake LLM server streams usage to simulate high token scenarios (`testing/fake-llm-server/*`).
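
A rough sketch of the banner trigger described above, keeping only the 40k-token margin and the values named in the summary (`estimatedTotalTokens`, `contextWindow`); the constant and function names here are assumptions, not the actual implementation:

```ts
// Sketch only: the real logic lives in ContextLimitBanner.tsx / useCountTokens.ts.
const CONTEXT_LIMIT_MARGIN = 40_000; // banner shows within 40k tokens of the limit

interface ContextUsage {
  estimatedTotalTokens: number; // from useCountTokens (react-query)
  contextWindow: number; // from the selected model, e.g. 200_000 for `auto`
}

// True when the conversation is close enough to the model's context window
// that the "summarize into new chat" banner should be shown.
function shouldShowContextLimitBanner({
  estimatedTotalTokens,
  contextWindow,
}: ContextUsage): boolean {
  return contextWindow - estimatedTotalTokens <= CONTEXT_LIMIT_MARGIN;
}
```
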
---
## Summary by cubic
Adds a “Summarize into new chat” trigger and a context limit banner to
help keep conversations focused and avoid hitting model limits. Also
tracks and surfaces actual token usage per assistant message, with a
token counting refactor for reliability.

- **New Features**
  - Summarize into new chat from the input or banner; improved system prompt with clear output format.
  - Context limit banner shows when within 40k tokens of the model’s context window and offers a one-click summarize action.
  - Tooltip on the latest assistant message shows total tokens used.

- **Refactors**
  - Token counting now uses react-query and returns estimatedTotalTokens and actualMaxTokens; counts are invalidated on model change and when streaming settles.
  - Persist per-message max_tokens_used in the messages table; backend aggregates model usage during streaming and saves it (see the schema sketch below).
  - Adjusted default “Auto” model contextWindow to 200k for more realistic limits.
  - Improved chat scrolling while streaming; fixed app path validation regex.

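A minimal sketch of the persisted column, assuming a Drizzle ORM SQLite schema like the project's; only the `max_tokens_used` column name comes from the summary, the other fields are placeholders:

```ts
// Sketch only: messages table with the new nullable usage column.
import { integer, sqliteTable, text } from "drizzle-orm/sqlite-core";

export const messages = sqliteTable("messages", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  role: text("role").notNull(), // "user" | "assistant" (placeholder fields)
  content: text("content").notNull(),
  // Max tokens observed for this assistant message, aggregated from
  // provider usage events while the response streams.
  maxTokensUsed: integer("max_tokens_used"),
});
```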
2025-12-04 23:00:28 -08:00
Will Chen
af58d06aa6 remove smart auto (#1613)
> [!NOTE]
> Removes Smart Auto labels/tooltip and related checks from the model picker, showing standard model names and descriptions only.
>
> - **UI (Model Picker)**: `src/components/ModelPicker.tsx`
>   - Remove Smart Auto-specific labeling and tooltip logic for `auto` models.
>   - Drop "Pro only" badge tied to Smart Auto state.
>   - Always render `model.displayName` and `model.description` without Smart Auto overrides (see the sketch below).
>   - Remove Smart Auto enablement check previously derived from settings.
2025-10-23 10:36:42 -07:00
Will Chen
8f31821442 super value (#1408)
> [!NOTE]
> Introduces a new `auto` model `value` (Super Value Pro), adds configurable tag colors across model types, and updates Model Picker filtering and badges.
>
> - **Models and Types**:
>   - Add new auto model `value` ("Super Value (Pro)") with `tag: Budget` and `tagColor` (see the sketch below).
>   - Enhance `turbo` auto model with `tag: Fast` and `tagColor`.
>   - Extend `LanguageModel` and `ModelOption` with optional `tagColor`.
> - **Model Picker UI**:
>   - Render model tags with configurable colors via `tagColor` and `cn` utility.
>   - Update "Pro only" badge logic (hide when display name already includes "(Pro)"); adjust badge text size.
>   - Refine auto model visibility: non‑Pro hides `turbo` and `value`; Pro hides `free`.
>   - Minor styling/labeling tweaks in tag and badge rendering.
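
A hedged sketch of the new model entry and the optional `tagColor` field; the display names and tags for `value` and `turbo` are taken from the summary, while the other fields and the exact color classes are assumptions:

```ts
// Sketch only: simplified shapes; the real definitions live in
// language_model_constants.ts and the shared LanguageModel/ModelOption types.
interface AutoModel {
  apiName: string;
  displayName: string;
  tag?: string;
  tagColor?: string; // new optional field, merged into the badge classes via cn()
}

const AUTO_MODELS: AutoModel[] = [
  {
    apiName: "value",
    displayName: "Super Value (Pro)",
    tag: "Budget",
    tagColor: "bg-emerald-500/15 text-emerald-600", // assumed classes
  },
  {
    apiName: "turbo",
    displayName: "Turbo (Pro)", // assumed display name
    tag: "Fast",
    tagColor: "bg-amber-500/15 text-amber-600", // assumed classes
  },
];
```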
2025-09-30 12:58:17 -07:00
Adeniji Adekunle James
de2cc2b48f fix(ui): add scrolling to model dropdown when list is long (#1279) (#1323)
## Problem
The model selection dropdown doesn't scroll when there are many models
available, causing the list to extend beyond the viewport and become
unusable.

## Solution
- Added `max-h-100 overflow-y-auto` classes to the DropdownMenuContent

This closes #1279.

    
---

## Summary by cubic
Adds vertical scrolling to the model selection dropdown so long lists
don’t overflow the viewport. Applies max height and overflow-y auto to
DropdownMenuSubContent across provider, Ollama, and LM Studio menus to
keep the list usable.

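The change itself is a one-liner on the menu content; a sketch assuming a shadcn/ui-style dropdown (the import path is an assumption, and the PR body and cubic summary disagree on whether it targets `DropdownMenuContent` or `DropdownMenuSubContent`):

```tsx
// Sketch only: cap the menu height and let long model lists scroll.
import {
  DropdownMenuContent,
  DropdownMenuItem,
} from "@/components/ui/dropdown-menu"; // assumed path

function ModelMenu({ models }: { models: string[] }) {
  return (
    <DropdownMenuContent className="max-h-100 overflow-y-auto">
      {models.map((name) => (
        <DropdownMenuItem key={name}>{name}</DropdownMenuItem>
      ))}
    </DropdownMenuContent>
  );
}
```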
2025-09-18 16:35:57 -07:00
Will Chen
a8e9caf7b0 Turbo models (#1249)

## Summary by cubic
Adds “Dyad Turbo” models for Pro users and centralizes model/provider
constants. Pro users can pick fast, cost‑effective models directly from
the ModelPicker, with clearer labels and gating.

- **New Features**
  - Added Dyad Turbo provider in ModelPicker with Qwen3 Coder and Kimi K2 (Pro only).
  - Turbo options are hidden for non‑Pro users; “Pro only” badge shown where applicable.
  - “Smart Auto” label now applies only to the Auto model to avoid confusion.

- **Refactors**
  - Moved all model/provider constants into language_model_constants.ts and updated imports (helpers, client, thinking utils); a sketch of the module shape follows below.

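A rough sketch of the centralized module named in the refactor bullet; only the file name language_model_constants.ts comes from the summary, the exported names and model ids are assumptions:

```ts
// language_model_constants.ts (sketch): a single module that owns the
// model/provider constants so helpers, the client, and thinking utils
// all import from one place.
export const DYAD_TURBO_PROVIDER = "dyad-turbo"; // assumed constant

export const TURBO_MODELS = [
  { apiName: "qwen3-coder", displayName: "Qwen3 Coder" }, // assumed ids
  { apiName: "kimi-k2", displayName: "Kimi K2" },
] as const;

export type TurboModel = (typeof TURBO_MODELS)[number];
```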
2025-09-10 15:59:54 -07:00
Will Chen
72acb31d59 More free models (#1244)

## Summary by cubic
Adds support for free OpenRouter models and a new “Free (OpenRouter)”
auto option that fails over across free models for reliability. Improves
setup flow and UI with provider cards, a “Free” price badge, and an
OpenRouter setup prompt in chat.

- **New Features**
  - Added OpenRouter free models: Qwen3 Coder (free), DeepSeek v3 (free), DeepSeek v3.1 (free), marked with dollarSigns=0 and a “Free” badge.
  - New auto model: “Free (OpenRouter)” that uses a fallback client to cycle through free models with smart retry on transient errors (see the sketch below).
  - New SetupProviderCard component and updated SetupBanner with dedicated Google and OpenRouter setup cards.
  - Chat shows an OpenRouter setup prompt when “Free (OpenRouter)” is selected and OpenRouter isn’t configured.
  - New PriceBadge component in ModelPicker to display “Free” or price tier.
  - E2E: added setup flow test and option to show the setup screen in tests.
  - Model updates: added DeepSeek v3.1, updated Kimi K2 to kimi-k2-0905, migrated providers to LanguageModelV2.

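A minimal sketch of the failover idea behind “Free (OpenRouter)”; the model ids, error handling, and the `callModel` signature are assumptions, the summary only states that a fallback client cycles through the free models and retries on transient errors:

```ts
// Sketch only: try each free OpenRouter model in turn and fall through to
// the next one on transient failures instead of surfacing the error.
const FREE_OPENROUTER_MODELS = [
  "qwen/qwen3-coder:free", // assumed ids
  "deepseek/deepseek-chat-v3:free",
  "deepseek/deepseek-chat-v3.1:free",
];

type CallModel = (modelId: string, prompt: string) => Promise<string>;

async function callWithFallback(
  callModel: CallModel,
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const modelId of FREE_OPENROUTER_MODELS) {
    try {
      return await callModel(modelId, prompt);
    } catch (err) {
      lastError = err; // treat as transient (rate limit, 5xx) and try the next model
    }
  }
  throw lastError; // every free model failed
}
```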
2025-09-10 14:20:17 -07:00
Will Chen
2842c61f7c Improve model picker UX (#1180)
1. Show less common AI providers (secondary) in a submenu
2. Show $ signs as a rough cost guide
3. Show "Pro" for supported AI providers when Pro is enabled

    
---

## Summary by cubic
Improves the Model Picker UX by grouping less-used providers under an
“Other AI providers” submenu and adding clear cost and Pro indicators.
This makes picking models faster and more informative.

- **New Features**
  - Grouped secondary providers under “Other AI providers” using a new provider.secondary flag (Azure marked secondary).
  - Added rough cost hints: models can set dollarSigns and the UI shows a “$” badge accordingly.
  - Shows a “Pro” badge on supported cloud providers when Pro is enabled; added a “Custom” badge for custom providers.
  - Extended types: LanguageModelProvider.secondary and LanguageModel.dollarSigns; populated values across OpenAI, Anthropic, Google, and OpenRouter (see the sketch below).

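The type extensions could look roughly like the sketch below; secondary and dollarSigns come from the summary, while the surrounding fields and the badge helper are assumptions:

```ts
// Sketch only: optional flags driving the "Other AI providers" submenu
// and the cost badge in the Model Picker.
interface LanguageModelProvider {
  id: string;
  name: string;
  secondary?: boolean; // true => grouped under "Other AI providers" (e.g. Azure)
}

interface LanguageModel {
  apiName: string;
  displayName: string;
  dollarSigns?: number; // rough cost tier; 0 later renders a "Free" badge (see #1244)
}

// Assumed helper: turn the tier into badge text.
function costBadge(model: LanguageModel): string | null {
  if (model.dollarSigns === undefined) return null;
  return model.dollarSigns === 0 ? "Free" : "$".repeat(model.dollarSigns);
}
```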
2025-09-03 15:36:54 -07:00
Will Chen
5d678c2ead Smart auto (#476) 2025-06-23 23:08:29 -07:00
Will Chen
9fbd7031d9 build ask mode (#444) 2025-06-19 10:42:51 -07:00
Will Chen
b044acb61f Polish chat input UX (#388) 2025-06-11 14:39:55 -07:00
Will Chen
b3f1802924 Finetune model picker UI (#375) 2025-06-09 16:10:19 -07:00
Will Chen
ee5865dcf8 Precise custom model selection & simplify language model/provider logic (no merging) (#147) 2025-05-12 23:24:39 -07:00
Will Chen
b45dff3862 Inline model picker props (#146) 2025-05-12 23:07:36 -07:00
Will Chen
843a097e82 Fix model picker UX (#145) 2025-05-12 22:41:53 -07:00
Will Chen
d1027622b4 Loosen provider type to a string (#144) 2025-05-12 22:29:36 -07:00
Will Chen
11ba46db38 Update model picker to group language models by providers (from IPC) (#141) 2025-05-12 22:06:41 -07:00
Will Chen
993c5417e3 Fix DB schemas (#138) 2025-05-12 21:51:08 -07:00
Will Chen
0d56651220 Run prettier on everything (#104) 2025-05-06 23:02:28 -07:00
Will Chen
a4d3e04996 refine model picker (#87) 2025-05-05 16:03:36 -07:00
Piotr Wilkin (ilintar)
5fc49231ee Add LM Studio support (#22) 2025-05-02 14:51:32 -07:00
Will Chen
b616598bab Add ollama support (#7) 2025-04-23 14:48:57 -07:00
Will Chen
43f67e0739 Initial open-source release 2025-04-11 09:38:16 -07:00