Will Chen
cd7eaa8ece
Prep for custom models: support reading custom providers (#131)
2025-05-12 14:52:48 -07:00
Will Chen
26305ee090
Fix max output tokens due to weird discrepancy with vertexAI (#123)
2025-05-09 14:16:15 -07:00
Will Chen
0d56651220
Run prettier on everything (#104)
2025-05-06 23:02:28 -07:00
Will Chen
390496f8f8
Fix isAnyProvider and don't make it a hard block (#93)
2025-05-06 12:13:03 -07:00
Piotr Wilkin (ilintar)
5fc49231ee
Add LM Studio support (#22)
2025-05-02 14:51:32 -07:00
Will Chen
982ba4882f
Add Gemini 2.5 Flash (#37)
2025-04-28 21:48:09 -07:00
Will Chen
0d441b15ca
Show token bar at bottom of chat input (#33)
2025-04-28 14:45:54 -07:00
Will Chen
e65b80bcfa
Set explicit max output tokens to avoid truncated responses (#31)
2025-04-28 13:43:34 -07:00
Will Chen
2ad10ba039
Support LLM gateway with Dyad API key (#23)
* Do not make API key input (password) - hurts usability
* Support LLM gateway (and add GPT 4.1 mini model)
* Show Dyad Pro button
* Fix to use auto (not dyad) for detecting dyad pro
* Fix description of gpt 4.1-mini
2025-04-26 08:52:08 -07:00
Will Chen
b616598bab
Add ollama support (#7)
2025-04-23 14:48:57 -07:00
Will Chen
ba3c9f7a28
Update with gpt-4.1
2025-04-17 15:24:03 -07:00
Will Chen
7fbaa11274
add gpt 4.1
2025-04-14 15:59:23 -07:00
Will Chen
43f67e0739
Initial open-source release
2025-04-11 09:38:16 -07:00