Commit Graph

92 Commits

Author SHA1 Message Date
Will Chen
4926a01a0f always use engine (#1525)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Always route Dyad Pro through the Dyad engine (no gateway), simplify
logging/flags, and remove DYAD_GATEWAY usage and staging gateway script.
> 
> - **Engine/Model client (`src/ipc/utils/get_model_client.ts`)**:
> - Always use `createDyadEngine` when Dyad Pro is enabled; remove
gateway fallback and `DYAD_GATEWAY_URL` usage.
> - Simplify logs to engine-only; drop conditional gateway/engine
logging.
> - Always set `isEngineEnabled: true`; pass `{ files }` to the provider
unconditionally; strip `:free` from model names.
> - **Scripts (`package.json`)**:
>   - Remove `staging:gateway` script.
> 
<!-- /CURSOR_SUMMARY -->
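For illustration, a minimal sketch of the engine-only routing this summary describes, under assumed helper and option shapes (only `createDyadEngine`, `isEngineEnabled`, and the `:free` suffix handling come from the summary; the real `src/ipc/utils/get_model_client.ts` differs in detail):

```typescript
interface ModelClientOptions {
  modelName: string;
  dyadProEnabled: boolean;
  files: string[];
}

// Placeholder standing in for the real engine factory named in the summary.
function createDyadEngine(opts: { model: string; files: string[] }) {
  return { kind: "dyad-engine", ...opts };
}

function getModelClient({ modelName, dyadProEnabled, files }: ModelClientOptions) {
  // Strip the ":free" suffix so the engine sees the canonical model id.
  const model = modelName.replace(/:free$/, "");

  if (dyadProEnabled) {
    // No gateway fallback: Dyad Pro always routes through the Dyad engine.
    return { client: createDyadEngine({ model, files }), isEngineEnabled: true };
  }
  return { client: null, isEngineEnabled: false };
}
```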
2025-10-13 19:14:05 -07:00
Will Chen
9691c9834b Support concurrent chats (#1478)
Fixes #212 


<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> Add concurrent chat support with per-chat state, chat activity UI, IPC
per-chat handling, and accompanying tests.
> 
> - **Frontend (Chat concurrency)**
> - Replace global chat atoms with per-chat maps:
`chatMessagesByIdAtom`, `isStreamingByIdAtom`, `chatErrorByIdAtom`,
`chatStreamCountByIdAtom`, `recentStreamChatIdsAtom`.
> - Update `ChatPanel`, `ChatInput`, `MessagesList`,
`DyadMarkdownParser`, and `useVersions` to read/write per-chat state.
> - Add `useSelectChat` to centralize selecting/navigating chats; wire
into `ChatList`.
> - **UI**
> - Add chat activity popover: `ChatActivityButton` and list; integrate
into `preview_panel/ActionHeader` (renamed from `PreviewHeader`) and
swap in `TitleBar`.
> - **IPC/Main**
> - Send error payloads with `chatId` on `chat:response:error`; update
`ipc_client` to route errors per chat.
> - Persist streaming partial assistant content periodically; improve
cancellation/end handling.
> - Make `FileUploadsState` per-chat (`addFileUpload({chatId,fileId},
...)`, `clear(chatId)`, `getFileUploadsForChat(chatId)`); update
handlers/processors accordingly.
> - **Testing**
> - Add e2e `concurrent_chat.spec.ts` and snapshots; extend helpers
(`snapshotMessages` timeout, chat activity helpers).
> - Fake LLM server: support `tc=` with options, optional sleep delay to
simulate concurrency.
> 
<!-- /CURSOR_SUMMARY -->
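A rough sketch of the per-chat state shape described above, using Jotai-style atoms; the atom names come from the summary, the derived helper is illustrative:

```typescript
import { atom } from "jotai";

type Message = { role: "user" | "assistant"; content: string };

// Keyed by chatId so several chats can stream and error independently.
export const chatMessagesByIdAtom = atom<Map<number, Message[]>>(new Map());
export const isStreamingByIdAtom = atom<Map<number, boolean>>(new Map());
export const chatErrorByIdAtom = atom<Map<number, string | undefined>>(new Map());

// Example derived read: one chat's messages without touching the others.
export const messagesForChatAtom = (chatId: number) =>
  atom((get) => get(chatMessagesByIdAtom).get(chatId) ?? []);
```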
2025-10-09 10:51:01 -07:00
Will Chen
75b4b7c299 Handle mentioned apps via smart context (#1412)
<!-- CURSOR_SUMMARY -->
> [!NOTE]
> Send mentioned apps (names + files) to the engine through
dyad_options, omit inline other-apps prefix when engine is enabled, and
adjust e2e to snapshot request payload.
> 
> - **Engine/Backend**:
> - Pass mentioned apps to engine via
`providerOptions['dyad-engine'].dyadMentionedApps` and forward as
`dyad_options.mentioned_apps` in `llm_engine_provider`.
> - Gate inline other-apps context: only include `otherCodebasePrefix`
when `isEngineEnabled` is false.
> - Enhance `extractMentionedAppsCodebases` to return `files` alongside
`codebaseInfo`.
> - **Tests**:
> - e2e: change `mention app (with pro)` snapshot to
`snapshotServerDump("request")` and update snapshot to assert request
payload contents.
> 
<!-- /CURSOR_SUMMARY -->
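A hedged sketch of how mentioned apps might be threaded from provider options into the engine request, per the summary; everything other than the quoted key names is an assumption:

```typescript
interface MentionedApp {
  name: string;
  files: string[];
}

// Attach mentioned apps under the "dyad-engine" provider options key.
function buildProviderOptions(mentionedApps: MentionedApp[]) {
  return { "dyad-engine": { dyadMentionedApps: mentionedApps } };
}

// llm_engine_provider forwards them as dyad_options.mentioned_apps.
function buildEngineRequestBody(options: ReturnType<typeof buildProviderOptions>) {
  return {
    dyad_options: {
      mentioned_apps: options["dyad-engine"].dyadMentionedApps,
    },
  };
}
```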
2025-09-30 17:05:18 -07:00
Md Rakibul Islam Rocky
29d8421ce9 Enhance Azure configuration handling and UI updates (#1289)
# Changes 

- Update Azure configuration components to manage API key and resource
name settings.
- Improve visibility of configuration status and error handling in the UI.
- Refactor environment variable checks to prioritize saved settings.
- Add support for Azure provider settings in the schema.
- Modify tests to reflect changes in Azure configuration requirements.

This pull request introduces a redesigned Azure provider settings UI and refactors the logic for configuring Azure OpenAI credentials, both in the frontend and in the supporting hooks. The main changes make the Azure configuration experience clearer and more robust by supporting both environment variables and saved settings, improving status indicators, and updating the logic that determines provider readiness.

**Azure Provider UI and Logic Improvements**

* Added a new `AzureConfiguration` component that provides a dedicated form for entering and saving the Azure resource name and API key, with clear status indicators and error handling. The UI now explains the precedence between saved settings and environment variables and guides users through configuration. (`src/components/settings/AzureConfiguration.tsx`)
* Updated the main provider settings page and API key configuration logic to pass Azure-specific settings and update functions to the new component, ensuring seamless integration and correct state management. (`src/components/settings/ApiKeyConfiguration.tsx`, `src/components/settings/ProviderSettingsPage.tsx`)
* Refactored the logic for determining whether Azure is configured to check both saved settings and environment variables, ensuring accurate status display and enabling fallback to environment variables when no settings are saved. (`src/components/settings/ProviderSettingsPage.tsx`, `src/hooks/useLanguageModelProviders.ts`)
* Updated the Azure provider E2E test to verify the new UI elements, status indicators, and guidance, ensuring the test matches the new configuration flow and messaging. (`e2e-tests/azure_provider_settings.spec.ts`)

**Supporting Type and Import Updates**

* Added and updated type imports for `AzureProviderSetting` and `VertexProviderSetting` where needed to support the new logic and UI. (`src/components/settings/ProviderSettingsPage.tsx`, `src/hooks/useLanguageModelProviders.ts`, `src/ipc/utils/get_model_client.ts`)
* Changed the Azure model client import to use `createAzure` for consistency and future extensibility. (`src/ipc/utils/get_model_client.ts`)

# Changes in short

- **Azure settings panel**
  - Replaced with a full form that:
    - Persists API key and resource name  
    - Surfaces save state  
    - Keeps the environment-variable helper  
  - *(src/components/settings/AzureConfiguration.tsx:23-214)*

- **Settings stack workflow**
  - Threaded the new Azure workflow:
    - Config shim now passes `updateSettings`  
    - Provider status checks prefer saved Azure values before env vars  
- *(src/components/settings/ApiKeyConfiguration.tsx:40-55,
src/components/settings/ProviderSettingsPage.tsx:60-105)*

- **Provider detection**
  - Azure treated like other saved credentials by:
    - Looking for both stored fields, or  
    - The pair of env vars  
  - *(src/hooks/useLanguageModelProviders.ts:5-57)*

- **Back-end model creation**
  - Reads saved Azure credentials (falling back to env vars)  
  - Builds the client via `createAzure`  
  - *(src/ipc/utils/get_model_client.ts:316-369)*

- **Provider schema support**
  - Extended so Azure can store its resource name alongside the secret  
  - *(src/lib/schemas.ts:82-109)*  

- **E2E tests**
  - Updated Azure Playwright spec to cover the new UI  
  - *(e2e-tests/azure_provider_settings.spec.ts:4-50)*

Issues resolved: #1275
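A minimal sketch of the credential precedence this PR describes (saved settings first, environment variables as fallback); `createAzure` is the `@ai-sdk/azure` factory, while the settings shape and helper are assumptions:

```typescript
import { createAzure } from "@ai-sdk/azure";

// Assumed shape of the saved Azure provider setting.
interface AzureProviderSetting {
  apiKey?: { value: string };
  resourceName?: string;
}

function getAzureClient(saved?: AzureProviderSetting) {
  // Saved settings take precedence; env vars are the fallback.
  const apiKey = saved?.apiKey?.value ?? process.env.AZURE_API_KEY;
  const resourceName = saved?.resourceName ?? process.env.AZURE_RESOURCE_NAME;

  if (!apiKey || !resourceName) {
    throw new Error("Azure is not configured: missing API key or resource name");
  }
  return createAzure({ apiKey, resourceName });
}
```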
2025-09-30 13:53:52 -07:00
Will Chen
d96e95c1da Support web search (#1370)
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Adds web search to Dyad Pro chats with a new UI, tag parsing, and a Pro
Mode toggle that wires through to the engine.

- **New Features**
  - Pro Mode toggle: “Web Search” (settings.enableProWebSearch).
  - New custom tags: dyad-web-search, dyad-web-search-result, dyad-read.
- Collapsible Web Search Result UI with in-progress badge and markdown
rendering.
- Engine integration: passes enable_web_search and activates DyadEngine
when web search is on.

<!-- End of auto-generated description by cubic. -->
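A small sketch of wiring the toggle into the engine payload; `settings.enableProWebSearch` and `enable_web_search` come from the summary, the helper itself is illustrative:

```typescript
interface UserSettings {
  enableProWebSearch?: boolean;
}

// Only when the Pro Mode toggle is on does the engine call enable web search.
function buildEngineSearchOptions(settings: UserSettings) {
  return { enable_web_search: settings.enableProWebSearch === true };
}
```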
2025-09-24 19:39:39 -07:00
Will Chen
6d3c397d40 Add MCP support (#1028) 2025-09-19 15:43:39 -07:00
Will Chen
a8e9caf7b0 Turbo models (#1249)
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Adds “Dyad Turbo” models for Pro users and centralizes model/provider
constants. Pro users can pick fast, cost‑effective models directly from
the ModelPicker, with clearer labels and gating.

- **New Features**
- Added Dyad Turbo provider in ModelPicker with Qwen3 Coder and Kimi K2
(Pro only).
- Turbo options are hidden for non‑Pro users; “Pro only” badge shown
where applicable.
- “Smart Auto” label now applies only to the Auto model to avoid
confusion.

- **Refactors**
- Moved all model/provider constants into language_model_constants.ts
and updated imports (helpers, client, thinking utils).

<!-- End of auto-generated description by cubic. -->
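For illustration, a sketch of the Pro-only gating the summary mentions; the model metadata shape is an assumption:

```typescript
interface ModelOption {
  id: string;
  provider: string;
  proOnly?: boolean; // e.g. Dyad Turbo models
}

// Hide Pro-only (Turbo) options from non-Pro users in the ModelPicker.
function visibleModels(models: ModelOption[], isProUser: boolean): ModelOption[] {
  return models.filter((m) => !m.proOnly || isProUser);
}
```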
2025-09-10 15:59:54 -07:00
Will Chen
72acb31d59 More free models (#1244)
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Adds support for free OpenRouter models and a new “Free (OpenRouter)”
auto option that fails over across free models for reliability. Improves
setup flow and UI with provider cards, a “Free” price badge, and an
OpenRouter setup prompt in chat.

- **New Features**
- Added OpenRouter free models: Qwen3 Coder (free), DeepSeek v3 (free),
DeepSeek v3.1 (free), marked with dollarSigns=0 and a “Free” badge.
- New auto model: “Free (OpenRouter)” that uses a fallback client to
cycle through free models with smart retry on transient errors.
- New SetupProviderCard component and updated SetupBanner with dedicated
Google and OpenRouter setup cards.
- Chat shows an OpenRouter setup prompt when “Free (OpenRouter)” is
selected and OpenRouter isn’t configured.
- New PriceBadge component in ModelPicker to display “Free” or price
tier.
- E2E: added setup flow test and option to show the setup screen in
tests.
- Model updates: added DeepSeek v3.1, updated Kimi K2 to kimi-k2-0905,
migrated providers to LanguageModelV2.

<!-- End of auto-generated description by cubic. -->
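A hedged sketch of the failover idea behind "Free (OpenRouter)": try each free model in turn and fall through on transient errors. The model ids here are illustrative placeholders, not the exact ids used in the app:

```typescript
// Illustrative list; the real free-model ids live in the app's model constants.
const FREE_MODELS = ["free-model-a", "free-model-b", "free-model-c"];

async function callWithFallback<T>(call: (model: string) => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (const model of FREE_MODELS) {
    try {
      return await call(model);
    } catch (err) {
      // Transient errors (rate limits, timeouts) fall through to the next model.
      lastError = err;
    }
  }
  throw lastError;
}
```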
2025-09-10 14:20:17 -07:00
Adeniji Adekunle James
f8ec10ec6b feat: add xAI (Grok) as AI provider (#1209)
# Add xAI (Grok) Provider Support

## Overview
This PR adds support for xAI's Grok models as an AI provider, focusing
on coding-optimized models.

## Changes Made

### Provider Configuration (`language_model_helpers.ts`)
- Added xAI to `MODEL_OPTIONS` with 3 coding-focused models:
  - `grok-code-fast-1`: Fast, economical coding model (256k context)
  - `grok-4`: Most capable flagship model (256k context)
  - `grok-3`: Powerful coding model (131k context)

<img width="805" height="592" alt="image"
src="https://github.com/user-attachments/assets/a99b9495-e90e-40f3-a772-be9807b24501"
/>


<img width="805" height="653" alt="image"
src="https://github.com/user-attachments/assets/aad7b333-ee74-457a-b5b7-5d20bd54d7e0"
/>

## Dependencies
- Requires `@ai-sdk/xai` package (already imported)
- Uses existing provider pattern and infrastructure


## Why xAI for Coding?
xAI's Grok models have shown impressive results in coding benchmarks:
- Trained on high-quality programming datasets reflecting real-world
tasks
- Excels at agentic coding workflows with fast reasoning capabilities
- Strong performance across multiple programming languages (TypeScript,
Python, Java, Rust, C++, Go)
- Achieved 70.8% on SWE-Bench-Verified using internal evaluation
- Optimized for rapid iteration in development environments
    
<!-- This is an auto-generated description by cubic. -->
---

## Summary by cubic
Adds xAI (Grok) as a provider so users can pick Grok coding models in
the app. Integrates provider config, client wiring, and schema updates.

- **New Features**
- Added xAI provider with env var mapping (XAI_API_KEY) and provider
metadata.
- Exposed models: grok-code-fast-1 (256k), grok-4 (256k), grok-3 (131k).
  - Hooked up get_model_client to use @ai-sdk/xai (createXai).
  - Included "xai" in validation schemas and model options.

- **Migration**
  - Set XAI_API_KEY to enable xAI.

<!-- End of auto-generated description by cubic. -->
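A minimal sketch of the wiring described above; `createXai` is the `@ai-sdk/xai` factory and `XAI_API_KEY` the env var named in the summary, while the helper is illustrative:

```typescript
import { createXai } from "@ai-sdk/xai";

function getXaiModel(modelId: "grok-code-fast-1" | "grok-4" | "grok-3") {
  const xai = createXai({ apiKey: process.env.XAI_API_KEY });
  return xai(modelId);
}
```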

---------

Co-authored-by: Will Chen <willchen90@gmail.com>
2025-09-08 23:01:59 -07:00
Samrat Jha
938595aab2 Add support for Amazon Bedrock provider (#1185)
- follows existing patterns for AI SDK to provide Bedrock integration
- Uses Bedrock's API token feature for authentication which provides a
standard experience
- bedrock provided models match the Anthropic provided models (for now)


**Disclaimer**: The contributing docs are extremely sparse; I don't
actually know how to build this and get it running in Electron.


## Testing

- AWS Bedrock provider is available for selection
<img width="994" height="496" alt="image"
src="https://github.com/user-attachments/assets/3cb21fed-9826-40e5-8019-b2b5df5e873b"
/>

- The provider settings also show the right models and offer the right
env variable to use
<img width="949" height="862" alt="image"
src="https://github.com/user-attachments/assets/8c23d5c8-d84d-4bf7-856a-8dc8d9d6c4b4"
/>


    
<!-- This is an auto-generated description by cubic. -->
---

## Summary by cubic
Adds AWS Bedrock as a provider so users can run Claude models via
Bedrock with API token authentication. The settings now list Bedrock
with supported models and a new env var.

- New Features
- New provider: bedrock using @ai-sdk/amazon-bedrock, wired into model
client and schemas.
- Models: Claude 4 Sonnet, Claude 3.7 Sonnet, Claude 3.5 Sonnet (Bedrock
model IDs).
- Settings: shows AWS Bedrock with correct models and env var
AWS_BEARER_TOKEN_BEDROCK.
  - Default region: us-east-1.

- Migration
  - Set AWS_BEARER_TOKEN_BEDROCK with your Bedrock API token.
  - Select AWS Bedrock in settings and pick a model.

<!-- End of auto-generated description by cubic. -->
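A hedged sketch of the Bedrock wiring; `createAmazonBedrock` is from `@ai-sdk/amazon-bedrock`, but whether the installed version accepts a bearer-token `apiKey` option is an assumption to verify against the package version in use:

```typescript
import { createAmazonBedrock } from "@ai-sdk/amazon-bedrock";

function getBedrockModel(modelId: string) {
  const bedrock = createAmazonBedrock({
    region: "us-east-1", // default region per the summary
    // Assumes the provider supports API-key (bearer token) auth via AWS_BEARER_TOKEN_BEDROCK.
    apiKey: process.env.AWS_BEARER_TOKEN_BEDROCK,
  });
  return bedrock(modelId);
}
```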

Co-authored-by: Samrat Jha <samratj@amazon.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
2025-09-08 22:52:12 -07:00
Md Rakibul Islam Rocky
4db6d63b72 Add Google Vertex AI provider (#1163)
# Summary

* Adds first-class **Google Vertex AI provider** using
`@ai-sdk/google-vertex`.
* Supports **Gemini 2.5** models and partner **MaaS (Model Garden)**
models via full publisher IDs.
* New **Vertex-specific settings UI** for Project, Location, and Service
Account JSON.
* Implements a **“thinking” toggle** for Gemini 2.5 Flash

  * Pro: always on
  * Flash: toggleable
  * Flash Lite: none
* Fixes *“AI not found”* for Vertex built-ins by mapping to
`publishers/google` paths.
* Hardens **cross-platform file ops** and ensures all tests pass.

---

# What’s New

### Vertex AI Provider

* Uses `@ai-sdk/google-vertex` with `googleAuthOptions.credentials` from
pasted Service Account JSON.
* Configurable **project** and **location**.
* Base URL → `/projects/{project}/locations/{location}`

  * Built-ins: `publishers/google/models/<id>`
  * Partner MaaS: `publishers/<partner>/models/...`

### Built-in Vertex Models

* `gemini-2.5-pro`
* `gemini-2.5-flash`
* `gemini-2.5-flash-lite`

### Thinking Behavior

* Vertex + Google marked as thinking-capable.
* Pro: always thinking
* Flash: toggle in UI
* Flash Lite: none

### Vertex Settings UI

* New **Google Vertex AI panel** for Project ID, Location, Service
Account JSON.
* Keys encrypted like other secrets.

---

# Fixes

* **Model resolution:** built-ins auto-map to
`publishers/google/models/<id>`.
* **Partner MaaS support:** full publisher IDs work directly (e.g.
DeepSeek).
* **Cross-platform paths:** normalize file ops with `toPosixPath`,
preserve `safeJoin` semantics.

---

# Why This Is Better

* Users can select **Vertex alongside other providers**.
* **More models** available through Model Garden.
* **Dedicated setup UI** reduces misconfig.
* **Thinking toggle** gives control over cost vs. reasoning depth.

---

# Files Changed

* **Provider & Models**: `language_model_helpers.ts`,
`get_model_client.ts`
* **Streaming**: `chat_stream_handlers.ts`
* **Schemas & Encryption**: `schemas.ts`, `settings.ts`
* **Settings UI**: `VertexConfiguration.tsx`, `ApiKeyConfiguration.tsx`
* **Models UI**: `ModelsSection.tsx` (Flash toggle)
* **Setup Detection**: `useLanguageModelProviders.ts`
* **Path Utils**: `path_utils.ts`, `response_processor.ts`
* **Deps**: `package.json` → `@ai-sdk/google-vertex@3.0.16`

---

# Tests & Validation

* **TypeScript**: `npm run ts` → passes
* **Lint**: `npm run lint` → passes
* **Unit tests**: `npm test` → 231 passed, 0 failed

---

# Migration / Notes

* No breaking changes.
* For Vertex usage:

  * Ensure the Vertex AI API is enabled.
  * The service account needs `roles/aiplatform.user`.
  * The region must support the model (e.g. `us-central1`).
* Thinking toggle currently affects **only** Gemini 2.5 Flash.

---

# Manual QA

1. Configure Vertex with Project/Location/Service Account JSON.
2. Test built-ins:

   * `gemini-2.5-pro`
   * `gemini-2.5-flash` (toggle on/off)
   * `gemini-2.5-flash-lite`
3. Test MaaS partner model (e.g., DeepSeek) via full publisher ID.
4. Verify other providers remain unaffected.
    
<!-- This is an auto-generated description by cubic. -->
---

## Summary by cubic
Adds a first-class Google Vertex AI provider with Gemini 2.5 models, a
Vertex settings panel, and a “thinking” toggle for Gemini 2.5 Flash.
Also fixes model resolution for Vertex and hardens cross-platform file
operations.

- **New Features**
- Vertex AI provider via @ai-sdk/google-vertex with project, location,
and service account JSON.
- Built-in models: gemini-2.5-pro, gemini-2.5-flash,
gemini-2.5-flash-lite.
- “Thinking” support: Pro always on; Flash toggle in Models UI; Flash
Lite none.
- MaaS partners supported via full publisher paths (e.g.,
publishers/<partner>/models/...).
  - Vertex settings UI with encrypted service account key storage.

- **Bug Fixes**
  - Built-in Vertex models auto-map to publishers/google/models/<id>.
  - Consistent file ops across platforms using toPosixPath.
- Vertex readiness detection requires project/location/service account
JSON.
- Streaming “thinking” behavior respects Vertex Flash toggle and Pro
always-on.

<!-- End of auto-generated description by cubic. -->
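A sketch of the Vertex client construction this PR describes: `createVertex` from `@ai-sdk/google-vertex` with the pasted service-account JSON passed via `googleAuthOptions.credentials`; the settings shape and helper are assumptions:

```typescript
import { createVertex } from "@ai-sdk/google-vertex";

// Assumed shape of the saved Vertex provider setting.
interface VertexProviderSetting {
  projectId: string;
  location: string;
  serviceAccountKey: string; // raw JSON pasted into the settings UI
}

function getVertexModel(settings: VertexProviderSetting, modelId: string) {
  const vertex = createVertex({
    project: settings.projectId,
    location: settings.location,
    googleAuthOptions: { credentials: JSON.parse(settings.serviceAccountKey) },
  });
  // Built-ins such as "gemini-2.5-pro" resolve under publishers/google/models/<id>;
  // partner MaaS models pass through with their full publisher path.
  return vertex(modelId);
}
```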

---------

Co-authored-by: Md Rakibul Islam Rocky <mdrirocky08@gmail.com>
Co-authored-by: graphite-app[bot] <96075541+graphite-app[bot]@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
2025-09-08 22:41:12 -07:00
Will Chen
56d0e76790 Make balanced smart context option the default (#1186)
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Set “balanced” as the default smart context mode. Users now get balanced
when Smart Files Context is enabled and no mode is set; “conservative”
must be explicitly selected.

- **Refactors**
- Default fallback to balanced in UI and engine (proSmartContextOption
undefined -> "balanced").
- ProModeSelector saves "conservative" explicitly; selector reads
undefined as balanced.
  - Updated schema and types to allow "balanced" | "conservative".
- Engine payload now includes smart_context_mode with "balanced" by
default; e2e tests and snapshots updated.

- **Migration**
- No action needed. Existing users without an explicit mode will use
balanced by default; selecting conservative persists.

<!-- End of auto-generated description by cubic. -->
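The default-fallback change as a one-function sketch (type and option names from the summary):

```typescript
type SmartContextOption = "balanced" | "conservative";

// An unset option now reads as "balanced"; "conservative" must be saved explicitly.
function resolveSmartContextMode(saved?: SmartContextOption): SmartContextOption {
  return saved ?? "balanced";
}
```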
2025-09-04 11:06:46 -07:00
Tanner-Maasen
2ffbbbca8f Add Azure OpenAI Custom Model Integration (#1001)
Fixes #710 

This PR implements comprehensive Azure OpenAI integration for Dyad, enabling users to leverage Azure OpenAI models through proper environment variable configuration. The implementation adds Azure as a supported provider with full integration into the existing language model architecture, including support for GPT-5 models. Key features include environment-based configuration using `AZURE_API_KEY` and `AZURE_RESOURCE_NAME`, specialized UI components that provide clear setup instructions and status indicators, and seamless integration with Dyad's existing provider system. The Azure provider leverages the @ai-sdk/azure package (v1.3.25) for compatibility with the current TypeScript language model interfaces.

The implementation includes robust error handling for missing configuration, comprehensive test coverage with 9 new unit tests covering critical functionality like model client creation and error scenarios, and an E2E test for the Azure-specific settings UI.

<img width="1510" height="908" alt="Screenshot 2025-08-18 at 9 14 32 PM"
src="https://github.com/user-attachments/assets/04aa99e1-1590-4bb0-86c9-a67b97bc7500"
/>

---------

Co-authored-by: graphite-app[bot] <96075541+graphite-app[bot]@users.noreply.github.com>
Co-authored-by: Will Chen <willchen90@gmail.com>
2025-08-30 20:47:25 -07:00
Will Chen
9869fefbcb Support dyad docker (#674)
TODOs:
- [ ] clean-up docker images

https://claude.ai/chat/13b2c5d3-0d46-49e3-a771-d10edf1e29f4
2025-08-26 11:07:40 -07:00
Will Chen
4e9a927a7b smart context v3 (#1022)
<!-- This is an auto-generated description by cubic. -->

## Summary by cubic
Adds Smart Context v3 with selectable modes (Off, Conservative,
Balanced) and surfaces token savings in chat. Also improves token
estimation by counting per-file tokens when Smart Context is enabled.

- **New Features**
- Smart Context selector in Pro settings with three options.
Conservative is the default when enabled without an explicit choice.
- New setting: proSmartContextOption ("balanced"); undefined implies
Conservative.
- Engine now receives enable_smart_files_context and smart_context_mode.
- Chat shows a DyadTokenSavings card when the message contains
token-savings?original-tokens=...&smart-context-tokens=..., with percent
saved and a tooltip for exact tokens.
- Token estimation uses extracted file contents for accuracy when Pro +
Smart Context is on; otherwise falls back to formatted codebase output.

<!-- End of auto-generated description by cubic. -->
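For reference, a sketch of the percentage shown in the token-savings card, assuming it is computed from the two token counts named above:

```typescript
function percentTokensSaved(originalTokens: number, smartContextTokens: number): number {
  if (originalTokens <= 0) return 0;
  return Math.round(((originalTokens - smartContextTokens) / originalTokens) * 100);
}
```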
2025-08-20 14:16:07 -07:00
Will Chen
d535db6251 Upgrade to AI sdk with codemod (#1000) 2025-08-18 22:21:27 -07:00
Will Chen
573642ae5f Prompt gallery (#957)
- [x] show prompt instead of app in autocomplete
- [x] use proper array/list for db (tags)
- [x] don't do <dyad-prompt> - replace inline
2025-08-18 13:25:11 -07:00
Will Chen
a6dca76d29 Allow referencing other apps (#692)
- [x] Update chat_stream_handlers
- [x] Update token handlers
- [x] Update HomeChatInput
- [x] update lexical chat input: do not allow referencing same app
(current app, or other already selected apps)
- [x] I don't think smart context will work on this...
- [x] Enter doesn't clear...
2025-08-13 16:22:49 -07:00
Will Chen
ab757d2b96 Add GPT 5 support (#902) 2025-08-11 15:00:02 -07:00
Will Chen
5db0b04400 Support exclude paths in manual context management (#774) 2025-08-05 14:33:39 -07:00
Will Chen
b0f08eaf15 Neon / portal template support (#713)
TODOs:
- [x] Do restart when checkout / restore if there is a DB
- [x] List all branches (branch id, name, date)
- [x] Allow checking out versions with no DB
- [x] safeguard to never delete main branches
- [x] create app hook for neon template
- [x] weird UX with connector on configure panel
- [x] tiny neon logo in connector
- [x] deploy to vercel
- [x] build forgot password page
- [x] what about email setup
- [x] lots of imgix errors
- [x] edit file - db snapshot
- [x] DYAD_DISABLE_DB_PUSH
- [ ] update portal doc
- [x] switch preview branch to be read-only endpoint
- [x] disable supabase sys prompt if neon is enabled
- [ ] https://payloadcms.com/docs/upload/storage-adapters
- [x] need to use main branch...

Phase 2?
- [x] generate DB migrations
2025-08-04 16:36:09 -07:00
Will Chen
03c200b932 Fix proxy server: remove URL param so it doesn't interfere with Next.js (#753)
Fixes #748
2025-07-31 15:18:37 -07:00
Will Chen
c490af720a Migration file path (#726)
Fixes #714
2025-07-28 14:57:59 -07:00
Will Chen
e947eede7a community templates (#691) 2025-07-23 10:11:16 -07:00
Will Chen
9edd0fa80f Upload image via chat (#686) 2025-07-22 15:45:30 -07:00
Will Chen
444397ea86 Create Publish panel to easy GitHub and Vercel push (#655) 2025-07-17 15:54:08 -07:00
Will Chen
cb60a0562b Send request id for LLM engine calls (#659) 2025-07-17 14:16:23 -07:00
Will Chen
2c284d0f20 Allow configuring environmental variables in panel (#626)
- [ ] Add test cases
2025-07-11 10:52:52 -07:00
Will Chen
65b3d9cb3e Fix git migration edge case (#612)
Fixes #608
2025-07-10 10:12:58 -07:00
Will Chen
a1aee5c2b8 Graduate file editing from experimental (#599) 2025-07-08 11:39:46 -07:00
Will Chen
bc38f9b2d7 Improve check error performance by off-loading to worker thread w/ incremental compilation (#575) 2025-07-07 12:47:33 -07:00
Will Chen
678cd3277e Problems: auto-fix & problem panel (#541)
Test cases:
- [x] create-ts-errors
  - [x] with auto-fix
  - [x] without auto-fix 
- [x] create-unfixable-ts-errors
- [x] manually edit file & click recheck
- [x] fix all
- [x] delete and rename case

THINGS
- [x] error handling for checkProblems isn't working as expected
- [x] make sure it works for both default templates (add tests) 
- [x] fix bad animation
- [x] change file context (prompt/files)

IF everything passes in Windows AND defensive try catch... then enable
by default
- [x] enable auto-fix by default
2025-07-02 15:43:26 -07:00
Will Chen
d5307a7207 Capacitor command fix (#498) 2025-06-25 16:45:49 -07:00
Will Chen
a985a5aadf Ensure fs changes in response processor are within app dir (#496) 2025-06-25 16:13:10 -07:00
Will Chen
2ea9500f73 Configurable thinking budget (default to medium) (#494) 2025-06-25 15:36:05 -07:00
Will Chen
47f3ec460a Add Capacitor support (#483)
Co-authored-by: cubic-dev-ai[bot] <191113872+cubic-dev-ai[bot]@users.noreply.github.com>
2025-06-24 14:35:05 -07:00
Will Chen
5d678c2ead Smart auto (#476) 2025-06-23 23:08:29 -07:00
Will Chen
2f00dc5955 Add more logging around git commit failure (#467)
Help with #464

Co-authored-by: William Chen <will@mac.lan>
2025-06-23 13:11:28 -07:00
Will Chen
c879a4e01d always use engine if pro modes are enabled (#449) 2025-06-19 12:04:24 -07:00
Will Chen
9fbd7031d9 build ask mode (#444) 2025-06-19 10:42:51 -07:00
Will Chen
8464609ba8 Fix parsing dyad tags with nested tags: < > (#445)
Fixes #441
2025-06-19 10:08:26 -07:00
Will Chen
897d2e522c enable engine for all models (#434) 2025-06-18 17:17:29 -07:00
Will Chen
e326f14779 Revert "Revert "Update gemini 2.5 models to GA variants (#425)" (#429)" (#437)
This reverts commit ff4e93d747.
2025-06-18 17:11:17 -07:00
Will Chen
382fe9bab5 support setting for writing supabase migration files (#427) 2025-06-17 15:14:02 -07:00
Will Chen
d6d6918d1b Safe send (#421) 2025-06-16 21:58:20 -07:00
Will Chen
30b5c0d0ef Replace thinking with native Gemini thinking summaries (#400)
This uses Gemini's native [thinking
summaries](https://cloud.google.com/vertex-ai/generative-ai/docs/thinking#thought-summaries)
which were recently added to the API.

Why? The grafted thinking would sometimes cause weird issues where the
model, especially Gemini 2.5 Flash, got confused and put dyad tags like
`<dyad-write>` inside the `<think>` tags.

This also improves the UX because you can see the native thoughts rather
than having the Gemini response load for a while without any feedback.

I tried adding Anthropic extended thinking; however, it requires the
temperature to be set to 1, which isn't ideal for Dyad's use case, where
we need precise syntax following.
2025-06-16 17:29:32 -07:00
Will Chen
67dc9f4c42 Print engine/gateway URL more clearly (#396) 2025-06-13 10:05:12 -07:00
Will Chen
99e6592abc commit upgrade (#391) 2025-06-11 15:42:10 -07:00
Will Chen
fa80014e16 Remove budget saver mode (#378)
This code was quite complex and hairy and resulted in very opaque errors
(for both free and pro users). There's not much benefit to budget saver
because Google removed 2.5 Pro free quota a while ago (after it
graduated the model from experimental to preview). Dyad Pro users can
still use 2.5 Flash free quota by disabling Dyad Pro by clicking on the
Dyad Pro button at the top.
2025-06-10 13:54:27 -07:00
Will Chen
534cbad909 Allow manual context management (#376) 2025-06-10 13:52:20 -07:00