<!-- This is an auto-generated description by cubic. -->
## Summary by cubic
Adds “Dyad Turbo” models for Pro users and centralizes model/provider
constants. Pro users can pick fast, cost‑effective models directly from
the ModelPicker, with clearer labels and gating.
- **New Features**
- Added Dyad Turbo provider in ModelPicker with Qwen3 Coder and Kimi K2
(Pro only).
- Turbo options are hidden for non‑Pro users; “Pro only” badge shown
where applicable.
- “Smart Auto” label now applies only to the Auto model to avoid
confusion.
- **Refactors**
- Moved all model/provider constants into language_model_constants.ts
and updated imports (helpers, client, thinking utils).
<!-- End of auto-generated description by cubic. -->
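As a rough illustration of the constants refactor summarized above, the centralized module might look something like this; the file name comes from the summary, but every identifier, type, and model ID below is a hypothetical placeholder, not the repo's actual code:

```ts
// language_model_constants.ts — hypothetical sketch only; real names/IDs in the repo likely differ.
export const DYAD_TURBO_PROVIDER_ID = "dyad-turbo";

export interface TurboModelOption {
  id: string; // provider-side model identifier
  displayName: string; // label shown in the ModelPicker
  proOnly: boolean; // hidden (or badged "Pro only") for non-Pro users
}

export const DYAD_TURBO_MODELS: TurboModelOption[] = [
  { id: "qwen3-coder", displayName: "Qwen3 Coder", proOnly: true },
  { id: "kimi-k2", displayName: "Kimi K2", proOnly: true },
];
```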
This uses Gemini's native [thinking
summaries](https://cloud.google.com/vertex-ai/generative-ai/docs/thinking#thought-summaries),
which were recently added to the API.
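For reference, enabling thought summaries with the `@google/genai` SDK looks roughly like this (a minimal sketch; the model name and client setup are illustrative, and Dyad's actual integration layer may wire this up differently):

```ts
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.generateContent({
  model: "gemini-2.5-flash", // example model
  contents: "Add a settings page to the app.",
  config: {
    thinkingConfig: {
      includeThoughts: true, // ask the API to return thought summaries
    },
  },
});

// Thought-summary parts are flagged with `thought: true`; everything else is the answer.
for (const part of response.candidates?.[0]?.content?.parts ?? []) {
  if (!part.text) continue;
  console.log(part.thought ? `[thinking] ${part.text}` : part.text);
}
```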
Why? The grafted thinking approach would sometimes confuse the model,
especially Gemini 2.5 Flash, causing it to emit Dyad tags like
`<dyad-write>` inside the `<think>` tags.
This also improves the UX because you can see the native thoughts instead
of waiting on the Gemini response with no feedback while it loads.
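A streaming variant (continuing the sketch above and reusing the same `ai` client) shows why this helps: thought summaries arrive as their own parts, so the UI can render them immediately instead of sitting idle. The two render helpers are hypothetical stand-ins for whatever Dyad's chat UI actually uses:

```ts
// Hypothetical UI hooks, for illustration only.
const renderThinkingPanel = (text: string) => console.log(`[thinking] ${text}`);
const appendToResponse = (text: string) => process.stdout.write(text);

const stream = await ai.models.generateContentStream({
  model: "gemini-2.5-flash",
  contents: "Build a kanban board component.",
  config: { thinkingConfig: { includeThoughts: true } },
});

for await (const chunk of stream) {
  for (const part of chunk.candidates?.[0]?.content?.parts ?? []) {
    if (!part.text) continue;
    // Route thought summaries to a "thinking" panel and keep them out of the answer,
    // so tags like <dyad-write> only ever appear in the real response text.
    if (part.thought) {
      renderThinkingPanel(part.text);
    } else {
      appendToResponse(part.text);
    }
  }
}
```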
I tried adding Anthropic extended thinking; however, it requires the
temperature to be set to 1, which isn't ideal for Dyad's use case, where
we need precise syntax following.
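For context, extended thinking in the Anthropic Messages API is enabled with a `thinking` block and only accepts `temperature: 1` while it's on, which is the constraint mentioned above. A minimal sketch (the model name and token budgets are just examples):

```ts
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const message = await anthropic.messages.create({
  model: "claude-sonnet-4-20250514", // example model
  max_tokens: 16000,
  temperature: 1, // extended thinking rejects any other temperature setting
  thinking: { type: "enabled", budget_tokens: 8000 },
  messages: [{ role: "user", content: "Write a <dyad-write> block for a new page." }],
});

// Responses interleave `thinking` and `text` content blocks.
for (const block of message.content) {
  if (block.type === "thinking") console.log(`[thinking] ${block.thinking}`);
  if (block.type === "text") console.log(block.text);
}
```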