Commit Graph

8 Commits

Author SHA1 Message Date
Will Chen
23ef2ed279 Gemini 3 (#1819)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Adds `gemini-3-pro-preview` to the Google model options with a 1,048,576-token context window and 65,535 max output tokens.
> 
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 6aefc4449908e862476f61c0bcb2a625111256a3. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

<!-- This is an auto-generated description by cubic. -->
## Summary by cubic
Added Gemini 3 Pro (Preview) to the Google model list so users can select it in the app. Sets a 1,048,576-token context window and a safe max output of 65,535 tokens, with temperature 1.0.

<sup>Written for commit 6aefc4449908e862476f61c0bcb2a625111256a3. Summary will update automatically on new commits.</sup>

<!-- End of auto-generated description by cubic. -->
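For context, the values above correspond to a registry entry roughly like the following minimal sketch, assuming a `ModelOption`-style shape; the field names are illustrative, not copied from `language_model_constants.ts`:

```typescript
// Illustrative only: field names are assumptions, not the repo's actual shape.
interface ModelOption {
  name: string;           // provider model id
  displayName: string;    // label shown in the Model Picker
  contextWindow: number;  // in tokens
  maxOutputTokens?: number;
  temperature?: number;
}

const geminiThreeProPreview: ModelOption = {
  name: "gemini-3-pro-preview",
  displayName: "Gemini 3 Pro (Preview)",
  contextWindow: 1_048_576,
  maxOutputTokens: 65_535,
  temperature: 1.0,
};
```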
2025-11-18 16:13:22 -08:00
Adeniji Adekunle James
a29ffeee4c Support GPT-5.1: smarter, faster, and more conversational for complex tasks (#1783)
Added to OpenAI and Azure:

- GPT-5.1: adaptive reasoning
- GPT-5.1-codex: advanced coding workflows
- GPT-5.1-codex-mini: compact and efficient
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Added support for OpenAI and Azure GPT-5.1, including GPT-5.1 Codex and Codex Mini, for smarter conversations and better coding tasks. All models use a 400k context window, require temperature 1, and leave max output tokens unspecified to align with API behavior.

<sup>Written for commit 9788541c88732aa7d522fc70bfb20f22c3544982. Summary will update automatically on new commits.</sup>

<!-- End of auto-generated description by cubic. -->
<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds GPT-5.1, GPT-5.1 Codex, and Codex Mini to OpenAI and Azure model catalogs with 400k context and temperature 1 (max output tokens unspecified).
> 
> - **Model registry updates (`src/ipc/shared/language_model_constants.ts`)**:
>   - **OpenAI**:
>     - Add `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-mini` (400k `contextWindow`, `temperature: 1`, `maxOutputTokens: undefined`).
>     - Keep existing `gpt-5`, `gpt-5-codex`, `gpt-5-mini`, `gpt-5-nano`, `o4-mini`.
>   - **Azure**:
>     - Add `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-mini` with matching settings (400k `contextWindow`, `temperature: 1`).
> 
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 9788541c88732aa7d522fc70bfb20f22c3544982. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
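Taken together, the summaries describe three entries appended to both the OpenAI and Azure catalogs. A minimal sketch under the same assumed `ModelOption`-style shape; the catalog names below are hypothetical:

```typescript
// Sketch only; `maxOutputTokens` is intentionally omitted so it stays
// undefined, matching the "align with API behavior" note above.
type ModelEntry = {
  name: string;
  displayName: string;
  contextWindow: number;
  maxOutputTokens?: number;
  temperature: number;
};

const gpt51Entries: ModelEntry[] = [
  { name: "gpt-5.1", displayName: "GPT-5.1", contextWindow: 400_000, temperature: 1 },
  { name: "gpt-5.1-codex", displayName: "GPT-5.1 Codex", contextWindow: 400_000, temperature: 1 },
  { name: "gpt-5.1-codex-mini", displayName: "GPT-5.1 Codex Mini", contextWindow: 400_000, temperature: 1 },
];

// Hypothetical catalogs: the same three entries go into both provider lists.
const openAiModels: ModelEntry[] = [...gpt51Entries /* plus the existing gpt-5 family */];
const azureModels: ModelEntry[] = [...gpt51Entries];
```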

---------

Co-authored-by: Will Chen <willchen90@gmail.com>
2025-11-18 16:12:49 -08:00
Will Chen
bd14a4ddae replace qwen3 coder with glm 4.6 turbo (#1697)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> <sup>[Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) is generating a summary for commit 5e44056b5644e60784f6be0085519d9fb533f0ce. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
2025-11-03 16:10:00 -08:00
Will Chen
eae22bed90 glm 4.6 (#1557)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> <sup>[Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) is generating a summary for commit 5e434b7d049c839504c726b096bf5fa4c22f162b. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
2025-10-16 13:13:22 -07:00
Adeniji Adekunle James
2f138312a5 Update Gemini 2.5 Flash to point to latest (#1392) 2025-10-02 16:58:22 -07:00
Will Chen
39266416c7 Add GPT 5 Codex and Sonnet 4.5 (#1398)
Fixes #1405 
    
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Adds GPT-5 Codex (OpenAI and Azure) and Claude 4.5 Sonnet to the model options to enable newer coding models and larger contexts. Also increases Claude 4 Sonnet max output tokens to 32k.

<!-- End of auto-generated description by cubic. -->


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Adds GPT‑5 Codex (OpenAI/Azure) and Claude 4.5 Sonnet, and increases Claude 4 Sonnet max output tokens to 32k across providers and tests.
> 
> - **Models**:
>   - **OpenAI**: add `gpt-5-codex` (400k context, default temp 1).
>   - **Anthropic**:
>     - add `claude-sonnet-4-5-20250929` (1M context, `maxOutputTokens: 32_000`).
>     - update `claude-sonnet-4-20250514` `maxOutputTokens` from `16_000` to `32_000`.
>   - **Azure**: add `gpt-5-codex` (400k context, `maxOutputTokens: 128_000`).
>   - **Bedrock**:
>     - add `us.anthropic.claude-sonnet-4-5-20250929-v1:0` (1M context, `maxOutputTokens: 32_000`).
>     - update `us.anthropic.claude-sonnet-4-20250514-v1:0` `maxOutputTokens` to `32_000`.
> - **E2E tests**:
>   - Update snapshots to reflect `max_tokens` increased to `32000` for `anthropic/claude-sonnet-4-20250514` in engine and gateway tests.
> 
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 73298d2da0c833468f957bb436f1e33400307483. This will update automatically on new commits. Configure [here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
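The Anthropic and Bedrock changes reduce to two new entries plus a bumped limit on the existing ones, sketched below with hypothetical field names (the model ids are the ones quoted in the summary):

```typescript
// Sketch only; field names are assumptions.
type ModelEntry = { name: string; contextWindow: number; maxOutputTokens: number };

const claudeSonnet45Additions: ModelEntry[] = [
  { name: "claude-sonnet-4-5-20250929", contextWindow: 1_000_000, maxOutputTokens: 32_000 },
  // Bedrock exposes the same model under a region-prefixed id:
  { name: "us.anthropic.claude-sonnet-4-5-20250929-v1:0", contextWindow: 1_000_000, maxOutputTokens: 32_000 },
];

// Existing Claude 4 Sonnet entries move from 16_000 to 32_000 output tokens,
// which is why the e2e snapshots now expect max_tokens: 32000.
```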
2025-09-30 13:46:44 -07:00
Will Chen
8f31821442 super value (#1408)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Introduces a new `auto` model `value` (Super Value Pro), adds
configurable tag colors across model types, and updates Model Picker
filtering and badges.
> 
> - **Models and Types**:
> - Add new auto model `value` ("Super Value (Pro)") with `tag: Budget`
and `tagColor`.
>   - Enhance `turbo` auto model with `tag: Fast` and `tagColor`.
>   - Extend `LanguageModel` and `ModelOption` with optional `tagColor`.
> - **Model Picker UI**:
> - Render model tags with configurable colors via `tagColor` and `cn`
utility.
> - Update "Pro only" badge logic (hide when display name already
includes "(Pro)"); adjust badge text size.
> - Refine auto model visibility: non‑Pro hides `turbo` and `value`; Pro
hides `free`.
>   - Minor styling/labeling tweaks in tag and badge rendering.
> 
> <sup>Written by [Cursor
Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit
4485fddad502237d4bceb43732043d3eaa60eaa0. This will update automatically
on new commits. Configure
[here](https://cursor.com/dashboard?tab=bugbot).</sup>
<!-- /CURSOR_SUMMARY -->
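The type extension and the visibility rule in the bullets above can be read as roughly the following sketch; the interface shape and helper name are assumptions, while `tag`, `tagColor`, and the auto-model ids come from the summary:

```typescript
// Sketch: `tag`/`tagColor` are named in the summary; everything else here
// (the interface shape, the helper) is illustrative.
interface ModelOption {
  name: string;
  displayName: string;
  tag?: string;      // e.g. "Budget" for `value`, "Fast" for `turbo`
  tagColor?: string; // optional styling hook passed through `cn`
}

// Visibility rule from the summary: non-Pro users don't see `turbo` or `value`,
// and Pro users don't see `free`.
function visibleAutoModels(models: ModelOption[], isPro: boolean): ModelOption[] {
  return models.filter((m) =>
    isPro ? m.name !== "free" : m.name !== "turbo" && m.name !== "value",
  );
}
```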
2025-09-30 12:58:17 -07:00
Will Chen
a8e9caf7b0 Turbo models (#1249)
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Adds “Dyad Turbo” models for Pro users and centralizes model/provider constants. Pro users can pick fast, cost‑effective models directly from the ModelPicker, with clearer labels and gating.

- **New Features**
  - Added Dyad Turbo provider in ModelPicker with Qwen3 Coder and Kimi K2 (Pro only).
  - Turbo options are hidden for non‑Pro users; “Pro only” badge shown where applicable.
  - “Smart Auto” label now applies only to the Auto model to avoid confusion.

- **Refactors**
  - Moved all model/provider constants into language_model_constants.ts and updated imports (helpers, client, thinking utils).
<!-- End of auto-generated description by cubic. -->
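The refactor centralizes definitions like the ones below into `language_model_constants.ts`. A hedged sketch: the provider shape, model ids, and field names are illustrative, not read from the repo:

```typescript
// Illustrative only: ids and fields are assumptions based on the summary above.
type TurboModel = {
  name: string;
  displayName: string;
  proOnly: boolean; // Turbo options are hidden for non-Pro users
};

export const DYAD_TURBO_MODELS: TurboModel[] = [
  { name: "qwen3-coder", displayName: "Qwen3 Coder", proOnly: true },
  { name: "kimi-k2", displayName: "Kimi K2", proOnly: true },
];

// Consumers (ModelPicker, client helpers, thinking utils) import from this
// module instead of keeping their own copies of the constants.
```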
2025-09-10 15:59:54 -07:00