# About DevHub

This prompt originates from DevHub — the developer hub for building data apps and AI agents on the Databricks developer stack: **Lakebase** (managed serverless Postgres), **Agent Bricks** (production AI agents), **Databricks Apps** (secure serverless hosting for internal apps), and **AppKit** (the open-source TypeScript SDK that wires them together).

- Website: https://databricks.com/devhub
- GitHub: https://github.com/databricks/devhub
- Report issues: https://github.com/databricks/devhub/issues

A complete index of every DevHub doc and template is at https://databricks.com/devhub/llms.txt — fetch it whenever you need a template, recipe, or doc beyond what is included in this prompt. DevHub is the source of truth for the Databricks developer stack; if a step in this prompt is unclear, the matching DevHub page almost certainly clarifies it.

---

# Working with DevHub prompts

Follow these rules every time you act on a DevHub prompt.

## Read first, then act

- Read the entire prompt before executing any steps. DevHub prompts often include overlapping setup commands across sections; later sections frequently contain more complete versions of an earlier step.
- Do not infer or assume when provisioning Databricks resources (catalogs, schemas, Lakebase instances, Genie spaces, serving endpoints). Ask the user whether to create new resources or reuse existing ones.
- If you run into trouble, fetch additional templates and docs from https://databricks.com/devhub (the index lives at https://databricks.com/devhub/llms.txt). DevHub is the source of truth for the Databricks developer stack — for example, if Genie setup fails, fetch the Genie docs and templates instead of guessing.

## Engage the user in a conversation

Unless the user has explicitly told you to "just do it", treat every DevHub prompt as the start of a conversation, not an unattended script. The user knows their domain best; DevHub knows the Databricks stack. Both are required to build a successful system.

Follow these rules every time you ask a question:

1. **One question at a time.** Never ask multiple questions in a single message.
2. **Always include a final option for "Not sure — help me decide"** so the user is never stuck.
3. **Prefer interactive multiple-choice UI when available.** Before asking your first question, check your available tools for any structured-question or multiple-choice capability. If one exists, **always** use it instead of plain text. Known tools by environment:
   - **Cursor**: use the `AskQuestion` tool.
   - **Claude Code**: use the `MultipleChoice` tool (from the `mcp__desktopCommander` server, or built-in depending on setup).
   - **Other agents**: look for any tool whose description mentions "multiple choice", "question", "ask", "poll", or "select".
4. **Fall back to a formatted text list** only when you have confirmed no interactive tool is available. Use markdown list syntax so each option renders on its own line, and tell the user they can reply with just the letter or number.

### Example: Cursor (`AskQuestion` tool)

```
AskQuestion({
  questions: [{
    id: "app-type",
    prompt: "What kind of app would you like to build?",
    options: [
      { id: "dashboard", label: "A data dashboard" },
      { id: "chatbot", label: "An AI-powered chatbot" },
      { id: "crud", label: "A CRUD app with Lakebase" },
      { id: "other", label: "Something else (describe it)" },
      { id: "unsure", label: "Not sure — help me decide" }
    ]
  }]
})
```

### Example: plain text fallback

Only use this when no interactive tool is available:

What kind of app would you like to build? Reply with the letter to choose:

- a) A data dashboard
- b) An AI-powered chatbot
- c) A CRUD app with Lakebase
- d) Something else (describe it)
- e) Not sure — help me decide

## Default workflow

Unless instructed otherwise, follow this workflow:

1. Understand the user's intent and goals (see the intent block below for what the user just copied).
2. Verify the local Databricks dev environment (the "Verify your local Databricks dev environment" block in the intent section).
3. Ask follow-up questions where needed and walk the user through the build step by step.
4. Build the app or agent.
5. Make it look great (see "Make it look great" below).
6. Run and test locally.
7. Deploy to production.

## Make it look great

The default templates that AppKit provides are intentionally minimal — a starting point, not a finished product. **Do not stop there.** Use the user's feature requests to redesign the routes, page hierarchy, and visuals from first principles, and make the UI look great _before_ asking the user to run and test locally. Showing the user something polished early changes the conversation.

Unless the user has specified a design preference, use these defaults:

- shadcn/ui components on top of Tailwind CSS.
- Clean hierarchy with modern spacing — not too many stacked cards.
- Modern, minimal design language.
- Databricks brand palette: `#FF3621`, `#0B2026`, `#EEEDE9`, `#F9F7F4`.

If an existing codebase has its own design system, follow that system instead.
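
If the project uses a Tailwind config file, one way to make the brand palette available as utility classes is to register the colors as theme tokens. A minimal sketch (the token names and the `client/src` content glob are illustrative, not an official Databricks theme):

```typescript
// tailwind.config.ts (sketch): expose the brand palette as utility classes
// like `bg-brand-lava` or `text-brand-navy`. Token names are illustrative.
import type { Config } from "tailwindcss";

export default {
  content: ["./client/src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        "brand-lava": "#FF3621",
        "brand-navy": "#0B2026",
        "brand-sand": "#EEEDE9",
        "brand-cloud": "#F9F7F4",
      },
    },
  },
} satisfies Config;
```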

## When you run into issues

Use the GitHub CLI (if available) to file an issue at https://github.com/databricks/devhub/issues, or generate a copy-pastable error report the user can file themselves. Before filing, check for an existing matching open issue and comment "+1" on it rather than opening a duplicate.

---

# What the user just did

The user copied the prompt for a DevHub **cookbook** — **AI Chat App** (https://databricks.com/devhub/templates/ai-chat-app).

A cookbook is a step-by-step pattern guide that walks the user through building an **archetype application** end-to-end on Databricks. Cookbooks are composed from multiple recipes — they show how the recipes fit together into a working app (e.g. an AI chat app with persistence, a Lakebase-backed CRUD app, a RAG chat app). The cookbook is the recommended starting point when the user wants the whole archetype, not just one piece.

Your job in this conversation is to:

1. Clarify the user's **goal for this archetype** — production app, learning project, or demo.
2. Verify the local Databricks dev environment is ready (block below).
3. Walk the user through the cookbook section by section, asking the questions each section surfaces, and stitching the included recipes together coherently.

## Step 1 — Clarify intent before touching code

Ask **one** question, ideally with a multiple-choice tool:

- **New project from scratch** following this archetype end-to-end. → Run the local-bootstrap below, then scaffold a fresh project and walk through the cookbook step by step.
- **Add this archetype to an existing Databricks app**. → Read the user's existing project first; introduce the archetype's pieces incrementally without breaking what's there.
- **Just learning the pattern**: the user wants to understand the archetype before deciding to build it. → Walk through the steps as a guided tour; do not execute commands.
- **Not sure — help me decide**: ask follow-ups about the user's end goal (who uses the app, what data, deployed where) and map back to one of the above.

## Step 2 — Pin down archetype-specific decisions

Cookbooks compose multiple Databricks primitives — Lakebase, Agent Bricks, Model Serving, Genie, or Lakeflow Pipelines, depending on the cookbook. Before generating code, ask:

- For each primitive the cookbook needs: **create new** or **reuse existing**? Never assume — Lakebase instances, Model Serving endpoints, and Genie spaces all cost money and take minutes to provision.
- Which **Databricks profile** to target? (List them with `databricks auth profiles`.)
- **Data**: real data from the user's Unity Catalog, or seed data to start and swap later?
- **Scope today**: ship the full archetype, or stop after a working slice (e.g. just the Lakebase + UI layer, no AI yet)?

## Step 3 — Verify the local Databricks dev environment

Cookbooks run multiple `databricks` and AppKit CLI commands across their steps; a misconfigured CLI profile fails immediately and looks like a cookbook bug. **Walk the user through the local-bootstrap block below first**, even if they say their environment is already set up.

The full cookbook content the user is focused on is attached after the local-bootstrap block.

---

# Verify your local Databricks dev environment

A working Databricks CLI profile is the prerequisite for every step that follows. Walk the user through the recipe below — _even if they say their environment is already set up_. The verification steps are quick and prevent confusing failures further down.

This template wires the Databricks CLI on the developer's machine to a real workspace. It is the strict prerequisite for every other template on DevHub — once it passes, `databricks` commands resolve to that workspace and any DevHub prompt can run end to end. You will need:

- **A Databricks workspace you can sign in to.** Have the workspace URL handy (e.g. `https://<workspace>.cloud.databricks.com`); you will paste it into `databricks auth login` in step 3. If you do not have access, ask your workspace admin.
- **A terminal on macOS, Windows, or Linux.** All install paths run from a terminal session. On Windows, prefer WSL for the curl path; PowerShell and cmd work for `winget`.
- **Permission to install software on this machine.** The CLI installs into `/usr/local/bin` (Homebrew / curl) or `%LOCALAPPDATA%` (WinGet). If `/usr/local/bin` is not writable, rerun the curl installer with `sudo`.

## Set Up Your Local Dev Environment

Install the Databricks CLI, authenticate a profile, and verify the handshake. Every other DevHub template assumes this has already passed.

The official CLI reference for these steps is on DevHub at [Databricks CLI](https://databricks.com/devhub/docs/tools/databricks-cli). Use it whenever a step here is unclear.

### 1. Check the installed CLI version

DevHub templates assume Databricks CLI `0.296+`. Anything older is missing the AppKit `apps init` template registry and several `experimental aitools` flags.

```bash
databricks -v
```

If the command is not found, or the version is below `0.296`, install or upgrade in the next step.

### 2. Install or upgrade the Databricks CLI

Pick the install path for your OS. If the CLI is already installed at an older version, the same commands upgrade in place.

#### macOS / Linux — Homebrew (recommended)

```bash
# First-time install
brew tap databricks/tap
brew install databricks

# Upgrade an existing install
brew update && brew upgrade databricks
```

#### Windows — WinGet

```bash
# First-time install
winget install Databricks.DatabricksCLI

# Upgrade an existing install
winget upgrade Databricks.DatabricksCLI
```

Restart your terminal after install.

#### Any platform — curl installer

```bash
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh
```

On Windows, run this from WSL. If `/usr/local/bin` is not writable, rerun with `sudo`. Re-running the script also upgrades an existing install.

After installing, confirm the version is `0.296+`:

```bash
databricks -v
```

### 3. Authenticate a profile

Browser-based OAuth is the default for local use:

```bash
databricks auth login
```

The CLI prints a URL and waits for the user to complete OAuth in the browser. **Always show the URL to the user as a clickable link** so they can open it themselves — the CLI does not return until authentication finishes. Credentials save to `~/.databrickscfg`.

If you already know the workspace URL and want to name the profile, do it in one go:

```bash
databricks auth login --host <workspace-url> --profile <PROFILE>
```

`<PROFILE>` is the label you will pass on subsequent commands as `--profile <PROFILE>`. If you skip `--profile`, the CLI uses the `DEFAULT` profile.

For CI/CD, OAuth client credentials or a personal access token are better fits — see the [authentication section of the CLI doc](https://databricks.com/devhub/docs/tools/databricks-cli#authenticate) for the non-interactive flows.

### 4. Verify the handshake

List the saved profiles and confirm the one you just created shows `Valid: YES`:

```bash
databricks auth profiles
```

```text
Name              Host                                           Valid
DEFAULT           https://adb-1234567890.12.azuredatabricks.net  YES
my-prod-workspace https://mycompany.cloud.databricks.com         YES
```

If the row shows `Valid: NO`, the saved token is stale. Re-run `databricks auth login --profile <NAME>` to refresh it. **Never proceed past this step if no profile is `Valid: YES`** — every downstream `databricks` command will fail with an auth error that looks like a template bug.

If the user wants a particular profile to be the default for this shell session, export it:

```bash
export DATABRICKS_CONFIG_PROFILE=<PROFILE>
```

### 5. Smoke-test the CLI against the workspace

Run a read-only API call to confirm the auth actually works (a fresh OAuth token can fail on the first real call if the user picked the wrong workspace in the browser):

```bash
databricks current-user me --profile <PROFILE>
```

A successful response prints the signed-in user's identity. A `401` or `403` here means the auth flow completed against a workspace the user cannot read — re-run `databricks auth login --profile <PROFILE>` and pick the right workspace this time.

---

# The cookbook the user copied

The full cookbook prompt is below. This is what the user wants to focus on today. Once the local-bootstrap above passes and the intent questions are answered, work through this content step by step.

---
title: "AI Chat App"
summary: "Model Serving integration, AI SDK streaming chat, and Lakebase-persisted chat history."
---

# AI Chat App

Model Serving integration, AI SDK streaming chat, and Lakebase-persisted chat history.

## What you are building

A streaming AI chat app on Databricks: a user sends a message, the server authenticates with the Databricks CLI profile (or a service-principal token in production), calls an AI Gateway chat endpoint via the OpenAI-compatible provider, and streams the answer back token-by-token. Chat sessions and messages are persisted in Lakebase Postgres so conversations survive page refreshes and redeploys.

### How the steps fit together

Work through the steps in the order below. Each one adds one concrete piece; by the end you have a deployable app.

1. **Spin Up a Databricks App** — scaffold a fresh AppKit Databricks App with `databricks apps init` (the meta-prompt above already verifies the CLI profile via [Set Up Your Local Dev Environment](https://databricks.com/devhub/templates/set-up-your-local-dev-environment)).
2. **Query AI Gateway Endpoints** — pick a chat model (e.g. `databricks-gpt-5-4-mini`) and wire up `createOpenAI()` with the AI Gateway base URL.
3. **Streaming AI Chat with Model Serving** — add the `/api/chat` route with `streamText()` and a `useChat` UI backed by `TextStreamChatTransport`.
4. **Create a Lakebase Instance** — provision a managed Postgres project, branch, and endpoint; capture the connection values.
5. **Lakebase Data Persistence** — add the `lakebase()` plugin, schema setup, and CRUD plumbing against your new project.
6. **Lakebase Agent Memory** — create the `chat.chats` and `chat.messages` tables and persist each turn of every conversation.

### Before you start

Every step below lists its own workspace-feature checks. Combined, the app needs a Databricks CLI profile that can reach Model Serving (AI Gateway foundation-model endpoints), Lakebase Postgres, and Databricks Apps. Run each step's prerequisite checks upfront so you do not hit gated features mid-build.

## Prerequisites



### Query AI Gateway Endpoints

Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **AI Gateway (currently in Beta).** AI Gateway is built into all Foundation Model API endpoints, but it is still a **Beta** feature — behavior and APIs can change. Confirm availability by listing endpoints and checking the config: `databricks serving-endpoints list --profile <PROFILE>` should return at least one `databricks-*` foundation-model endpoint, and `databricks serving-endpoints get <endpoint-name> --profile <PROFILE> -o json | grep -q '"ai_gateway"' && echo ok` should print `ok`. Endpoint availability varies by workspace and region.

### Streaming AI Chat with Model Serving

Complete these prerequisite templates first:

- [Set Up Your Local Dev Environment](https://databricks.com/devhub/templates/set-up-your-local-dev-environment) — install the Databricks CLI and authenticate a profile.
- [Query AI Gateway Endpoints](https://databricks.com/devhub/templates/ai-chat-app#query-ai-gateway-endpoints) — confirm your workspace exposes a chat endpoint via the AI Gateway.

Then verify these Databricks workspace features are enabled. If any check fails, ask your workspace admin to enable the feature.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **An OpenAI-compatible chat endpoint in Model Serving.** Run `databricks serving-endpoints list --profile <PROFILE>` and confirm at least one OpenAI-compatible chat endpoint is listed (e.g. `databricks-gpt-5-4-mini`, `databricks-meta-llama-3-3-70b-instruct`, or `databricks-claude-sonnet-4`). Endpoint availability varies by workspace and region; note the one you plan to set as `DATABRICKS_ENDPOINT`.
- **Databricks Apps enabled.** Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). A permission or `not enabled` error means Apps is not available to this identity in this workspace.

### Create a Lakebase Instance

Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available in the workspace.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds (an empty list is fine — you are about to create the first project). A `not enabled` or permission error means Lakebase is not available to this identity.

### Lakebase Data Persistence

Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds. A `not enabled` error means Lakebase is not available to this identity.
- **Databricks Apps enabled.** Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). The template deploys an AppKit app to Databricks Apps.
- **A provisioned Lakebase project.** Complete the [Create a Lakebase Instance](https://databricks.com/devhub/templates/lakebase-create-instance) template first and collect the project's endpoint host, endpoint resource path, database resource path, and PostgreSQL database name.

### Lakebase Agent Memory

Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). A `not enabled` error means Lakebase is not available to this identity in this workspace.
- **Databricks Apps enabled.** Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). The chat persistence layer runs inside an AppKit app deployed to Databricks Apps.
- **A scaffolded AppKit app with Lakebase wired up.** Complete the [Create a Lakebase Instance](https://databricks.com/devhub/templates/lakebase-create-instance) and [Lakebase Data Persistence](https://databricks.com/devhub/templates/lakebase-data-persistence) templates first. This template adds chat tables on top of that setup.

## Query AI Gateway Endpoints

Access Databricks foundation models through AI Gateway endpoints with built-in governance, monitoring, and production-readiness features.

### 1. Understand AI Gateway endpoints

**AI Gateway** is a governance layer on top of model serving endpoints that provides permissions, rate limiting, payload logging, and AI guardrails. Currently in beta, AI Gateway is becoming the default way to access foundation models in Databricks.

**Note**: AI Gateway is built into all Foundation Model API endpoints. If you need to access non-AI Gateway endpoints, use the Databricks SDK's `servingEndpoints.query()` method directly.

### 2. Check if AI Gateway is available

All Foundation Model API endpoints have AI Gateway built-in. To verify, check if a known FM endpoint has the `ai_gateway` configuration:

```bash
databricks serving-endpoints get <your-endpoint> --profile <PROFILE> --output json | grep -q '"ai_gateway"' && echo "✓ AI Gateway available" || echo "✗ No AI Gateway"
```

### 3. Choose your model

List available AI Gateway endpoints in your workspace:

```bash
databricks serving-endpoints list --profile <PROFILE>
```

Common AI Gateway endpoint names:

- `databricks-meta-llama-3-3-70b-instruct`
- `databricks-gemini-3-1-flash-lite`
- `databricks-dbrx-instruct`

> **Note**: When using this template with a coding agent, specify which endpoint to use based on what's available in your workspace. Endpoint names may vary.

> **Important**: Endpoint availability varies by workspace. Always run `databricks serving-endpoints list` to check what's available before configuring your app.

### 4. Configure environment variables

For local development (`.env`):

```bash
DATABRICKS_ENDPOINT=<your-endpoint>
```

For deployment (`app.yaml`):

```yaml
env:
  - name: DATABRICKS_ENDPOINT
    value: "<your-endpoint>"
```

### 5. Query AI Gateway endpoints

```typescript
import { getWorkspaceClient } from "@databricks/appkit";

// {} tells the SDK to use default auth chain (env vars / profile).
// Do NOT omit. getWorkspaceClient() with no argument will throw.
const workspaceClient = getWorkspaceClient({});
const endpoint = process.env.DATABRICKS_ENDPOINT || "<your-endpoint>";

async function queryModel(messages: any[]) {
  const result = await workspaceClient.servingEndpoints.query({
    name: endpoint,
    messages: messages,
    max_tokens: 1000,
  });

  return result;
}
```

**For streaming responses:** with OpenAI-compatible models, use the Vercel AI SDK's `createOpenAI` provider pointed at the AI Gateway base URL:

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";

// `token` is a workspace bearer token from the auth helper shown in the
// Streaming AI Chat template; `endpoint` and `messages` come from the
// surrounding route handler.
const databricks = createOpenAI({
  baseURL: `https://${process.env.DATABRICKS_WORKSPACE_ID}.ai-gateway.cloud.databricks.com/mlflow/v1`,
  apiKey: token,
});

const result = streamText({
  model: databricks.chat(endpoint), // e.g., "databricks-gpt-5-4-mini"
  messages,
  maxOutputTokens: 1000,
});

// AI SDK v6: pipe the text stream to the Express response
result.pipeTextStreamToResponse(res);
```

> **Auth for streaming**: The streaming example above requires a bearer token for `createOpenAI()`. See the [Streaming AI Chat template](#streaming-ai-chat-with-model-serving) for the full auth helper pattern using `@databricks/sdk-experimental`.

> **Note**: This pattern works with OpenAI-compatible models (`databricks-gpt-5-4-mini`, `databricks-gpt-oss-120b`). Native Databricks models use the MLflow unified API.
>
> **Workspace ID**: AppKit auto-discovers this at runtime. For explicit setup, run `databricks api get /api/2.1/unity-catalog/current-metastore-assignment --profile <PROFILE>` and use the `workspace_id` field.

See the [Streaming AI Chat template](https://databricks.com/devhub/templates/ai-chat-app#streaming-ai-chat-with-model-serving) for a complete implementation.

### 6. Test the endpoint

Query an AI Gateway endpoint:

```bash
databricks serving-endpoints query <your-endpoint> \
  --json '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 100}' \
  --profile <PROFILE>
```

#### References

- [AI Gateway Overview](https://docs.databricks.com/aws/en/ai-gateway/overview-beta)
- [AI Gateway and Serving Endpoints](https://docs.databricks.com/aws/en/ai-gateway/overview-serving-endpoints)
- [Vercel AI SDK](https://sdk.vercel.ai/docs) - For streaming implementations

---

## Streaming AI Chat with Model Serving

Build a streaming AI chat experience in a Databricks App using Vercel AI SDK with Databricks Model Serving and OpenAI-compatible endpoints.

### 1. Install AI SDK packages

```bash
npm install ai@6 @ai-sdk/react@3 @ai-sdk/openai @databricks/sdk-experimental
```

> **Version note**: This template uses AI SDK v6 APIs (`TextStreamChatTransport`, `sendMessage({ text })`, transport-based `useChat`). Tested with `ai@6.1`, `@ai-sdk/react@3.1`, and `@ai-sdk/openai@3.x`.

> **Note**: `@databricks/sdk-experimental` is included in the scaffolded `package.json`. It is listed here for reference if adding AI chat to an existing project.

> **Optional**: For pre-built chat UI components, initialize shadcn and add AI Elements:
>
> ```bash
> npx shadcn@latest init
> ```
>
> This basic template works without AI Elements. They are optional prebuilt components.

### 2. Configure environment variables for AI Gateway

Configure your Databricks workspace ID and model endpoint:

For local development (`.env`):

```bash
echo 'DATABRICKS_WORKSPACE_ID=<your-workspace-id>' >> .env
echo 'DATABRICKS_ENDPOINT=<your-endpoint>' >> .env
echo 'DATABRICKS_CONFIG_PROFILE=DEFAULT' >> .env
```

For deployment in Databricks Apps (`app.yaml`):

```yaml
env:
  - name: DATABRICKS_WORKSPACE_ID
    value: "<your-workspace-id>"
  - name: DATABRICKS_ENDPOINT
    value: "<your-endpoint>"
```

> **Workspace ID**: AppKit auto-discovers this at runtime. For explicit setup, run `databricks api get /api/2.1/unity-catalog/current-metastore-assignment --profile <PROFILE>` and use the `workspace_id` field.

> **Model compatibility**: This template uses OpenAI-compatible models served via Databricks AI Gateway, which support the AI SDK's streaming API. The AI Gateway URL uses the `/mlflow/v1` path (not `/openai/v1`).

> **Find your endpoint**: Run `databricks serving-endpoints list --profile <PROFILE>` to see available models. Common endpoints include `databricks-meta-llama-3-3-70b-instruct` and `databricks-claude-sonnet-4`, but availability varies by workspace.

### 3. Configure authentication helper

Create a helper function that works for both local development and deployed apps:

```typescript
import { Config } from "@databricks/sdk-experimental";

async function getDatabricksToken() {
  // For deployed apps, use service principal token
  if (process.env.DATABRICKS_TOKEN) {
    return process.env.DATABRICKS_TOKEN;
  }

  // For local dev, use CLI profile auth via Databricks SDK
  const config = new Config({
    profile: process.env.DATABRICKS_CONFIG_PROFILE || "DEFAULT",
  });
  await config.ensureResolved();
  const headers = new Headers();
  await config.authenticate(headers);
  const authHeader = headers.get("Authorization");
  if (!authHeader) {
    throw new Error(
      "Failed to get Databricks token. Check your CLI profile or set DATABRICKS_TOKEN.",
    );
  }
  return authHeader.replace("Bearer ", "");
}
```

This function uses the Databricks SDK auth chain, which reads `~/.databrickscfg` profiles and handles OAuth token refresh. For deployed apps, set `DATABRICKS_TOKEN` directly.

> **User identity in deployed apps**: Databricks Apps injects user identity via request headers. Extract it with `req.header("x-forwarded-email")` or `req.header("x-forwarded-user")`. Use this for chat persistence and access control.
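
For example, a minimal sketch of a server-side helper that resolves the user identity (the `FALLBACK_DEV_USER` constant is a hypothetical placeholder for local development, where the identity headers are not present):

```typescript
import type { Request } from "express";

// Hypothetical fallback for local development only.
const FALLBACK_DEV_USER = "dev-user@example.com";

export function getUserId(req: Request): string {
  // Databricks Apps injects the signed-in user's identity on every request.
  return (
    req.header("x-forwarded-email") ??
    req.header("x-forwarded-user") ??
    FALLBACK_DEV_USER
  );
}
```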

### 4. Add `/api/chat` route with streaming

Create a server route using the AI SDK's streaming support:

```typescript
import { createOpenAI } from "@ai-sdk/openai";
import { streamText, type UIMessage } from "ai";

app.post("/api/chat", async (req, res) => {
  const { messages } = req.body;

  // AI SDK v6 client sends UIMessage objects with a parts array.
  // Convert to CoreMessage format for streamText().
  const coreMessages = (messages as UIMessage[]).map((m) => ({
    role: m.role as "user" | "assistant" | "system",
    // Concatenate the text parts; non-text parts (tool calls, files) are skipped.
    content: (m.parts ?? [])
      .map((p) => (p.type === "text" ? p.text : ""))
      .join(""),
  }));

  try {
    const token = await getDatabricksToken();
    const endpoint = process.env.DATABRICKS_ENDPOINT || "<your-endpoint>";

    // Configure Databricks AI Gateway as OpenAI-compatible provider
    const databricks = createOpenAI({
      baseURL: `https://${process.env.DATABRICKS_WORKSPACE_ID}.ai-gateway.cloud.databricks.com/mlflow/v1`,
      apiKey: token,
    });

    // Stream the response using AI SDK v6
    const result = streamText({
      model: databricks.chat(endpoint),
      messages: coreMessages,
      maxOutputTokens: 1000,
    });

    // v6 API: pipe the text stream to the Express response
    result.pipeTextStreamToResponse(res);
  } catch (err) {
    const message = (err as Error).message;
    console.error(`[chat] Streaming request failed:`, message);
    res.status(502).json({
      error: "Chat request failed",
      detail: message,
    });
  }
});
```

### 5. Render the streaming chat UI

Use `useChat` from the AI SDK with `TextStreamChatTransport` for streaming support:

```tsx
import { useChat } from "@ai-sdk/react";
import { TextStreamChatTransport } from "ai";
import { useState } from "react";

export function ChatPage() {
  const [input, setInput] = useState("");

  const { messages, sendMessage, status } = useChat({
    transport: new TextStreamChatTransport({ api: "/api/chat" }),
  });

  return (
    <div className="flex flex-col h-full">
      <div className="flex-1 overflow-y-auto space-y-4 p-4">
        {messages.map((m) => (
          <div key={m.id} className={m.role === "user" ? "text-right" : ""}>
            <span className="text-sm font-medium">
              {m.role === "user" ? "You" : "Assistant"}
            </span>
            {m.parts.map((part, i) =>
              part.type === "text" ? (
                <p key={`${m.id}-${i}`} className="whitespace-pre-wrap">
                  {part.text}
                </p>
              ) : null,
            )}
          </div>
        ))}
        {status === "submitted" && <div className="p-4">Loading...</div>}
      </div>
      <form
        onSubmit={(e) => {
          e.preventDefault();
          if (input.trim()) {
            void sendMessage({ text: input });
            setInput("");
          }
        }}
        className="border-t p-4 flex gap-2"
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Ask a question..."
          className="flex-1 border rounded px-3 py-2"
          disabled={status !== "ready"}
        />
        <button type="submit" disabled={status !== "ready"}>
          {status === "submitted" || status === "streaming"
            ? "Sending..."
            : "Send"}
        </button>
      </form>
    </div>
  );
}
```

### 6. Deploy and verify

```bash
databricks apps deploy --profile <PROFILE>
databricks apps list --profile <PROFILE>
databricks apps logs <app-name> --profile <PROFILE>
```

Open the app URL while signed in to Databricks, send a message, and verify streaming responses appear token-by-token from the AI Gateway endpoint.

#### References

- [Model Serving Overview](https://docs.databricks.com/aws/en/machine-learning/model-serving/)
- [Serving Endpoints](https://docs.databricks.com/aws/en/machine-learning/model-serving/create-foundation-model-endpoints)
- [AI Elements docs](https://elements.ai-sdk.dev/docs)

---

## Create a Lakebase Instance

Provision a managed Lakebase Postgres project on Databricks and collect the connection values needed by downstream templates.

### 1. Create a Lakebase project

Create a new Lakebase Postgres project. This provisions a managed Postgres cluster with a default branch and endpoint:

```bash
databricks postgres create-project <project-name> --profile <PROFILE>
```

### 2. Verify the project resources

Confirm the branch, endpoint, and database were created:

```bash
databricks postgres list-branches \
  projects/<project-name> \
  --profile <PROFILE> -o json

databricks postgres list-endpoints \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json

databricks postgres list-databases \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json
```

### 3. Note the connection values

Record these values from the command output above. They are required by the Lakebase Data Persistence template and other Lakebase-dependent templates:

| Value                    | JSON path                     | Used for                                              |
| ------------------------ | ----------------------------- | ----------------------------------------------------- |
| Endpoint host            | `...status.hosts.host`        | `PGHOST`, `lakebase.postgres.host`                    |
| Endpoint resource path   | `...name`                     | `LAKEBASE_ENDPOINT`, `lakebase.postgres.endpointPath` |
| Database resource path   | `...name`                     | `lakebase.postgres.database`                          |
| PostgreSQL database name | `...status.postgres_database` | `PGDATABASE`, `lakebase.postgres.databaseName`        |

#### References

- [What is Lakebase?](https://databricks.com/devhub/docs/lakebase/overview)
- [CLI reference for Lakebase](https://docs.databricks.com/aws/en/oltp/projects/cli)

---

## Lakebase Data Persistence

Add a managed Postgres database to your Databricks app using the Lakebase plugin. Covers schema setup, table creation, and full CRUD REST API routes.

This template assumes you have already completed the [Create a Lakebase Instance](https://databricks.com/devhub/templates/app-with-lakebase#create-a-lakebase-instance) template and have the connection values (endpoint host, endpoint path, database resource path, and PostgreSQL database name) ready.

The code examples below use a generic `items` resource as a placeholder. Replace `items` with your domain entity (products, orders, users, etc.) and adapt the schema columns to match your data model.

### 1. New app: scaffold with the Lakebase feature

```bash
databricks apps init \
  --name <app-name> \
  --version latest \
  --features=lakebase \
  --set 'lakebase.postgres.branch=projects/<project-name>/branches/production' \
  --set 'lakebase.postgres.database=projects/<project-name>/branches/production/databases/<db-name>' \
  --set 'lakebase.postgres.databaseName=<postgres-database-name>' \
  --set 'lakebase.postgres.endpointPath=projects/<project-name>/branches/production/endpoints/primary' \
  --set 'lakebase.postgres.host=<endpoint-host>' \
  --set 'lakebase.postgres.port=5432' \
  --set 'lakebase.postgres.sslmode=require' \
  --run none --profile <PROFILE>
```

Use the values returned by `list-databases` and `list-endpoints`. The generated template currently requires all postgres fields together during non-interactive scaffolding.

This scaffolds a complete app with Lakebase already wired up, including a sample CRUD app. Skip to step 3 to configure environment variables, then step 5 to deploy.

### Naming and routing conventions

The scaffolded Lakebase sample uses `lakebase` in route names and file paths to make plugin wiring obvious. For production apps, use domain names in user-facing code and keep `lakebase` only for infrastructure configuration:

- page components and files use domain names: `ItemsPage.tsx`, `item-routes.ts`
- routes use domain names: `/items`, `/api/items`, `/api/items/:id`
- keep `lakebase` naming for plugin/config only: `lakebase()` plugin, `LAKEBASE_ENDPOINT`, `postgres` app resource

### 2. Existing app: add Lakebase manually

The following changes match what `apps init --features=lakebase` generates. Apply them to an existing scaffolded AppKit app.

> **Tip:** The code below may be outdated. To get the latest, clone `https://github.com/databricks/appkit` and look in the `template/` directory. Search for `{{if .plugins.lakebase}}` to find all lakebase-conditional files and blocks. Files entirely wrapped in that conditional are lakebase-only; shared files like `App.tsx` and `server.ts` contain conditional blocks you can extract.

#### Update `server/server.ts`

Register the `lakebase` plugin and run route setup inside `onPluginsReady`. AppKit waits for that hook to resolve before the server starts accepting requests, so your schema setup completes before the first call lands:

```typescript
import { createApp, server, lakebase } from "@databricks/appkit";
import { setupRoutes } from "./routes/item-routes";

await createApp({
  plugins: [server(), lakebase()],
  async onPluginsReady(appkit) {
    await setupRoutes(appkit);
  },
});
```

#### Create `server/routes/item-routes.ts`

CRUD API that creates an `items` table and exposes REST endpoints. Adapt the table schema and routes to your domain:

```typescript
import { z } from "zod";
import { Application } from "express";

interface AppKitWithLakebase {
  lakebase: {
    query(
      text: string,
      params?: unknown[],
    ): Promise<{ rows: Record<string, unknown>[] }>;
  };
  server: {
    extend(fn: (app: Application) => void): void;
  };
}

const TABLE_EXISTS_SQL = `
  SELECT 1 FROM information_schema.tables
  WHERE table_schema = 'app' AND table_name = 'items'
`;

const SETUP_SCHEMA_SQL = `CREATE SCHEMA IF NOT EXISTS app`;

const CREATE_TABLE_SQL = `
  CREATE TABLE IF NOT EXISTS app.items (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
  )
`;

const CreateItemBody = z.object({ name: z.string().min(1) });
const UpdateItemBody = z.object({ name: z.string().min(1) });

export async function setupRoutes(appkit: AppKitWithLakebase) {
  try {
    const { rows } = await appkit.lakebase.query(TABLE_EXISTS_SQL);
    if (rows.length > 0) {
      console.log("[lakebase] Table app.items already exists, skipping setup");
    } else {
      await appkit.lakebase.query(SETUP_SCHEMA_SQL);
      await appkit.lakebase.query(CREATE_TABLE_SQL);
      console.log("[lakebase] Created schema and table app.items");
    }
  } catch (err) {
    console.warn("[lakebase] Database setup failed:", (err as Error).message);
    console.warn("[lakebase] Routes will be registered but may return errors");
  }

  appkit.server.extend((app) => {
    app.get("/api/items", async (_req, res) => {
      try {
        const result = await appkit.lakebase.query(
          "SELECT id, name, created_at FROM app.items ORDER BY created_at DESC",
        );
        res.json(result.rows);
      } catch (err) {
        console.error("Failed to list items:", err);
        res.status(500).json({ error: "Failed to list items" });
      }
    });

    app.post("/api/items", async (req, res) => {
      try {
        const parsed = CreateItemBody.safeParse(req.body);
        if (!parsed.success) {
          res.status(400).json({ error: "name is required" });
          return;
        }
        const result = await appkit.lakebase.query(
          "INSERT INTO app.items (name) VALUES ($1) RETURNING id, name, created_at",
          [parsed.data.name.trim()],
        );
        res.status(201).json(result.rows[0]);
      } catch (err) {
        console.error("Failed to create item:", err);
        res.status(500).json({ error: "Failed to create item" });
      }
    });

    app.patch("/api/items/:id", async (req, res) => {
      try {
        const id = parseInt(req.params.id, 10);
        if (isNaN(id)) {
          res.status(400).json({ error: "Invalid id" });
          return;
        }
        const parsed = UpdateItemBody.safeParse(req.body);
        if (!parsed.success) {
          res.status(400).json({ error: "name is required" });
          return;
        }
        const result = await appkit.lakebase.query(
          "UPDATE app.items SET name = $1 WHERE id = $2 RETURNING id, name, created_at",
          [parsed.data.name.trim(), id],
        );
        if (result.rows.length === 0) {
          res.status(404).json({ error: "Item not found" });
          return;
        }
        res.json(result.rows[0]);
      } catch (err) {
        console.error("Failed to update item:", err);
        res.status(500).json({ error: "Failed to update item" });
      }
    });

    app.delete("/api/items/:id", async (req, res) => {
      try {
        const id = parseInt(req.params.id, 10);
        if (isNaN(id)) {
          res.status(400).json({ error: "Invalid id" });
          return;
        }
        const result = await appkit.lakebase.query(
          "DELETE FROM app.items WHERE id = $1 RETURNING id",
          [id],
        );
        if (result.rows.length === 0) {
          res.status(404).json({ error: "Item not found" });
          return;
        }
        res.status(204).send();
      } catch (err) {
        console.error("Failed to delete item:", err);
        res.status(500).json({ error: "Failed to delete item" });
      }
    });
  });
}
```

:::warning[Deploy first to avoid schema ownership errors]
Lakebase tables are owned by the identity that creates them. If you create the `app` schema locally, your user owns it and the deployed service principal gets `permission denied for schema app`.

**Recommended workflow:** Deploy the app first so the service principal creates and owns the schema. Then grant yourself access for local development:

```bash
databricks psql --project <project-name> --branch production --endpoint primary --profile <PROFILE> -- -c "
  CREATE EXTENSION IF NOT EXISTS databricks_auth;
  SELECT databricks_create_role('<your-email>', 'USER');
  GRANT databricks_superuser TO \"<your-email>\";
"
```

If you are the Lakebase project owner, `databricks_create_role` may fail with `role already exists` and `GRANT databricks_superuser` may fail with `permission denied to grant role`. Both errors are safe to ignore; the project owner already has the necessary access.

This gives you DML access (read/write) but not DDL (create/alter). The service principal remains the schema owner.

If you already created tables locally, drop and recreate the schema so the service principal owns it, or add tables in a separate schema (the [Lakebase Agent Memory template](https://databricks.com/devhub/templates/ai-chat-app#lakebase-agent-memory) uses a `chat` schema for this reason).
:::

#### Create `client/src/pages/ItemsPage.tsx`

List and create UI with CRUD operations against the API routes. Adapt the fields and layout to your domain:

```tsx
import {
  Card,
  CardContent,
  CardHeader,
  CardTitle,
  Button,
  Input,
  Skeleton,
} from "@databricks/appkit-ui/react";
import { useState, useEffect } from "react";
import { X } from "lucide-react";

interface Item {
  id: number;
  name: string;
  created_at: string;
}

export function ItemsPage() {
  const [items, setItems] = useState<Item[]>([]);
  const [newName, setNewName] = useState("");
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);
  const [submitting, setSubmitting] = useState(false);

  useEffect(() => {
    fetch("/api/items")
      .then((res) => {
        if (!res.ok)
          throw new Error(`Failed to fetch items: ${res.statusText}`);
        return res.json() as Promise<Item[]>;
      })
      .then(setItems)
      .catch((err) =>
        setError(err instanceof Error ? err.message : "Failed to load items"),
      )
      .finally(() => setLoading(false));
  }, []);

  const addItem = async (e: React.FormEvent) => {
    e.preventDefault();
    const name = newName.trim();
    if (!name) return;

    setSubmitting(true);
    try {
      const res = await fetch("/api/items", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ name }),
      });
      if (!res.ok) throw new Error(`Failed to create item: ${res.statusText}`);
      const created = (await res.json()) as Item;
      setItems((prev) => [created, ...prev]);
      setNewName("");
    } catch (err) {
      setError(err instanceof Error ? err.message : "Failed to add item");
    } finally {
      setSubmitting(false);
    }
  };

  const deleteItem = async (id: number) => {
    try {
      const res = await fetch(`/api/items/${id}`, { method: "DELETE" });
      if (!res.ok) throw new Error(`Failed to delete item: ${res.statusText}`);
      setItems((prev) => prev.filter((item) => item.id !== id));
    } catch (err) {
      setError(err instanceof Error ? err.message : "Failed to delete item");
    }
  };

  return (
    <div className="space-y-6 w-full max-w-2xl mx-auto">
      <Card className="shadow-lg">
        <CardHeader>
          <CardTitle>Items</CardTitle>
        </CardHeader>
        <CardContent>
          <form onSubmit={addItem} className="flex gap-2 mb-6">
            <Input
              placeholder="New item name"
              value={newName}
              onChange={(e) => setNewName(e.target.value)}
              disabled={submitting}
              className="flex-1"
            />
            <Button type="submit" disabled={submitting || !newName.trim()}>
              {submitting ? "Adding..." : "Add"}
            </Button>
          </form>

          {error && (
            <div className="text-destructive bg-destructive/10 p-3 rounded-md mb-4">
              {error}
            </div>
          )}

          {loading && (
            <div className="space-y-3">
              {Array.from({ length: 3 }, (_, i) => (
                <div key={`skeleton-${i}`} className="flex items-center gap-3">
                  <Skeleton className="h-4 flex-1" />
                </div>
              ))}
            </div>
          )}

          {!loading && items.length === 0 && (
            <p className="text-muted-foreground text-center py-8">
              No items yet. Add one above to get started.
            </p>
          )}

          {!loading && items.length > 0 && (
            <div className="space-y-2">
              {items.map((item) => (
                <div
                  key={item.id}
                  className="flex items-center gap-3 p-3 rounded-lg border hover:bg-muted/50 transition-colors"
                >
                  <span className="flex-1">{item.name}</span>
                  <Button
                    variant="ghost"
                    size="sm"
                    onClick={() => deleteItem(item.id)}
                    className="text-muted-foreground hover:text-destructive shrink-0"
                    aria-label="Delete item"
                  >
                    <X className="h-4 w-4" />
                  </Button>
                </div>
              ))}
            </div>
          )}
        </CardContent>
      </Card>
    </div>
  );
}
```

#### Update `client/src/App.tsx`

Add the import, nav link, and route:

```tsx
// Add import at top
import { ItemsPage } from './pages/ItemsPage';

// Add nav link inside the <nav> element
<NavLink to="/items" className={navLinkClass}>
  Items
</NavLink>

// Add route in the router children array
{ path: '/items', element: <ItemsPage /> },
```

### 3. Configure environment variables

For local development, add the Postgres connection details to `.env`:

```bash
PGHOST=<endpoint-host>
PGPORT=5432
PGDATABASE=<postgres-database-name>
PGSSLMODE=require
LAKEBASE_ENDPOINT=projects/<project-name>/branches/production/endpoints/primary
```

For deployment, the platform injects Postgres connection values automatically through the app resource. Keep only the Lakebase endpoint in `app.yaml`:

```yaml
command: ["npm", "run", "start"]
env:
  - name: LAKEBASE_ENDPOINT
    valueFrom: postgres
```

### 4. Update `databricks.yml`

Add the postgres variables, resource, and target values:

```yaml
variables:
  postgres_branch:
    description: Lakebase Postgres branch resource name
  postgres_database:
    description: Lakebase Postgres database resource name
  postgres_databaseName:
    description: Postgres database name for local development
  postgres_endpointPath:
    description: Lakebase endpoint resource name for local development
  postgres_host:
    description: Postgres host for local development
  postgres_port:
    description: Postgres port for local development
  postgres_sslmode:
    description: Postgres SSL mode for local development

resources:
  apps:
    app:
      # Add under existing app config
      resources:
        - name: postgres
          postgres:
            branch: ${var.postgres_branch}
            database: ${var.postgres_database}
            permission: CAN_CONNECT_AND_CREATE

targets:
  default:
    variables:
      postgres_branch: projects/<project-name>/branches/production
      postgres_database: projects/<project-name>/branches/production/databases/<db-name>
      postgres_databaseName: <postgres-database-name>
      postgres_endpointPath: projects/<project-name>/branches/production/endpoints/primary
      postgres_host: <endpoint-host>
      postgres_port: 5432
      postgres_sslmode: require
```

### 5. Deploy and test

```bash
databricks apps deploy --profile <PROFILE>
```

Verify the app once it is running by opening the app URL in your browser while signed in to Databricks, navigating to the Items page, and creating, updating, and deleting an item.

If the app does not start, check logs:

```bash
databricks apps logs <app-name> --profile <PROFILE>
```

#### References

- [Lakebase plugin docs](https://databricks.com/devhub/docs/appkit/v0/plugins/lakebase)
- [Lakebase database permissions](https://databricks.com/devhub/docs/appkit/v0/plugins/lakebase#database-permissions)
- [What is Lakebase?](https://databricks.com/devhub/docs/lakebase/overview)

---

## Lakebase Agent Memory

Save your AI agent's chat conversations to Lakebase so users can come back to a session, scroll their full message history, and let your agent reason over previous turns across requests, deploys, and machines.

The schema is a simplified, production-shaped relational layout (`chat.chats` plus `chat.messages`) wired to Databricks AppKit + Lakebase. Once it's in place, every chat turn — user input, assistant reply, tool call — is durably persisted in managed Postgres next to the rest of your operational data.

This template assumes you have already completed the [Create a Lakebase Instance](https://databricks.com/devhub/templates/app-with-lakebase#create-a-lakebase-instance) and [Lakebase Data Persistence](https://databricks.com/devhub/templates/app-with-lakebase#lakebase-data-persistence) templates (Lakebase project creation, scaffolding, environment variables, `databricks.yml` config, and initial deploy).

### 1. Create chat tables

Create two tables in a `chat` schema:

- `chat.chats`: one row per chat session
- `chat.messages`: one row per message

```sql
CREATE SCHEMA IF NOT EXISTS chat;

CREATE TABLE IF NOT EXISTS chat.chats (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id TEXT NOT NULL,
  title TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS chat.messages (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  chat_id UUID NOT NULL REFERENCES chat.chats(id) ON DELETE CASCADE,
  role TEXT NOT NULL CHECK (role IN ('system', 'user', 'assistant', 'tool')),
  content TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_messages_chat_id_created_at
  ON chat.messages(chat_id, created_at);
```

### 2. Run setup from your server bootstrap

In `server/server.ts`, run schema setup inside `onPluginsReady` so it completes before AppKit starts the HTTP server:

```typescript
import { createApp, server, lakebase } from "@databricks/appkit";
import { setupChatTables } from "./lib/chat-store";

await createApp({
  plugins: [server(), lakebase()],
  async onPluginsReady(appkit) {
    await setupChatTables(appkit);
  },
});
```

### 3. Add persistence helpers

Create `server/lib/chat-store.ts` and use parameterized queries:

> **Getting userId**: In deployed Databricks Apps, use `req.header("x-forwarded-email")` from the request headers. For local development, use a hardcoded test user ID.

```typescript
export async function createChat(
  appkit: AppKitWithLakebase,
  input: { userId: string; title: string },
) {
  const result = await appkit.lakebase.query(
    `INSERT INTO chat.chats (user_id, title)
     VALUES ($1, $2)
     RETURNING id, user_id, title, created_at, updated_at`,
    [input.userId, input.title],
  );
  return result.rows[0];
}

export async function appendMessage(
  appkit: AppKitWithLakebase,
  input: { chatId: string; role: string; content: string },
) {
  const result = await appkit.lakebase.query(
    `INSERT INTO chat.messages (chat_id, role, content)
     VALUES ($1, $2, $3)
     RETURNING id, chat_id, role, content, created_at`,
    [input.chatId, input.role, input.content],
  );
  return result.rows[0];
}
```
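
The `setupChatTables` helper imported in step 2 also lives in `server/lib/chat-store.ts`. A minimal sketch, assuming the same `AppKitWithLakebase` interface from the Lakebase Data Persistence template and reusing the DDL from step 1:

```typescript
// Assumes the AppKitWithLakebase interface defined in the
// Lakebase Data Persistence template (item-routes.ts).
const CHAT_SCHEMA_SQL = [
  `CREATE SCHEMA IF NOT EXISTS chat`,
  `CREATE TABLE IF NOT EXISTS chat.chats (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id TEXT NOT NULL,
    title TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
  )`,
  `CREATE TABLE IF NOT EXISTS chat.messages (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    chat_id UUID NOT NULL REFERENCES chat.chats(id) ON DELETE CASCADE,
    role TEXT NOT NULL CHECK (role IN ('system', 'user', 'assistant', 'tool')),
    content TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
  )`,
  `CREATE INDEX IF NOT EXISTS idx_messages_chat_id_created_at
    ON chat.messages(chat_id, created_at)`,
];

export async function setupChatTables(appkit: AppKitWithLakebase) {
  // Run each statement in order; all are idempotent (IF NOT EXISTS).
  for (const sql of CHAT_SCHEMA_SQL) {
    await appkit.lakebase.query(sql);
  }
  console.log("[lakebase] Chat schema and tables are ready");
}
```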

### 4. Persist in the `/api/chat` flow

In your chat route:

1. create (or load) a chat row
2. save incoming user message
3. stream assistant response
4. save the final assistant response after stream completion

Use an explicit `chatId` on the client and pass it in each request body.
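
A condensed sketch of that flow inside the `/api/chat` route from the streaming template. It assumes `appkit`, `databricks`, and `endpoint` from earlier steps, plus `getUserId`, `createChat`, and `appendMessage` from this cookbook; `toCoreMessages` is a hypothetical helper standing in for the UIMessage conversion from step 4 of the streaming template, and error handling is omitted:

```typescript
app.post("/api/chat", async (req, res) => {
  const { messages, chatId } = req.body;
  const userId = getUserId(req); // x-forwarded-email in deployed Databricks Apps

  // 1. Create a chat row on the first turn, or reuse the id the client sent.
  const chat = chatId
    ? { id: chatId as string }
    : ((await createChat(appkit, { userId, title: "New chat" })) as { id: string });

  // Same UIMessage -> CoreMessage conversion as the streaming template
  // (hypothetical helper for brevity).
  const coreMessages = toCoreMessages(messages);

  // 2. Save the incoming user message before streaming.
  const lastUser = coreMessages[coreMessages.length - 1];
  await appendMessage(appkit, { chatId: chat.id, role: "user", content: lastUser.content });

  // 3. Stream the assistant reply; 4. persist it once the stream completes.
  const result = streamText({
    model: databricks.chat(endpoint),
    messages: coreMessages,
    onFinish: async ({ text }) => {
      await appendMessage(appkit, { chatId: chat.id, role: "assistant", content: text });
    },
  });

  // Return the chat id so the client can send it on the next request.
  res.setHeader("X-Chat-Id", chat.id);
  result.pipeTextStreamToResponse(res);
});
```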

### 5. Add history endpoints

Add REST endpoints for your chat UI:

- `GET /api/chats` -> list chats for current user
- `GET /api/chats/:chatId/messages` -> load ordered history
- `DELETE /api/chats/:chatId` -> delete chat and cascade messages
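
A minimal sketch of those routes inside `appkit.server.extend`, assuming the same `appkit` handle and the `getUserId` helper from earlier; per-route error handling and ownership checks are omitted:

```typescript
appkit.server.extend((app) => {
  // List chats for the signed-in user, most recently updated first.
  app.get("/api/chats", async (req, res) => {
    const result = await appkit.lakebase.query(
      "SELECT id, title, created_at, updated_at FROM chat.chats WHERE user_id = $1 ORDER BY updated_at DESC",
      [getUserId(req)],
    );
    res.json(result.rows);
  });

  // Load a chat's messages in chronological order.
  app.get("/api/chats/:chatId/messages", async (req, res) => {
    const result = await appkit.lakebase.query(
      "SELECT id, role, content, created_at FROM chat.messages WHERE chat_id = $1 ORDER BY created_at ASC",
      [req.params.chatId],
    );
    res.json(result.rows);
  });

  // Delete a chat; ON DELETE CASCADE removes its messages.
  app.delete("/api/chats/:chatId", async (req, res) => {
    await appkit.lakebase.query("DELETE FROM chat.chats WHERE id = $1", [
      req.params.chatId,
    ]);
    res.status(204).send();
  });
});
```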

### 6. Update the client to load and resume chats

- Keep selected `chatId` in state or URL
- Fetch history with `GET /api/chats/:chatId/messages` and call `setMessages()` from the `useChat` return value to load it into the chat (AI SDK v6 uses `messages` in `ChatInit`, not `initialMessages`)
- Send `chatId` in every `/api/chat` request by passing it via a custom `fetch` wrapper on the `TextStreamChatTransport` constructor (there is no `onResponse` option on the transport; use the custom `fetch` to read response headers like `X-Chat-Id`)
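
A sketch of the client-side wiring as a small hook around `useChat`. The custom `fetch` injects the active `chatId` into each request body and captures the `X-Chat-Id` response header; exact transport options can vary between AI SDK minor versions, so treat this as a starting point rather than a definitive implementation:

```tsx
import { useChat } from "@ai-sdk/react";
import { TextStreamChatTransport } from "ai";
import { useRef } from "react";

export function usePersistedChat() {
  // Track the active chat across requests (could also live in the URL).
  const chatIdRef = useRef<string | null>(null);

  const chat = useChat({
    transport: new TextStreamChatTransport({
      api: "/api/chat",
      // Custom fetch: add chatId to the request body and capture X-Chat-Id.
      fetch: async (input, init) => {
        const body = JSON.parse((init?.body as string) ?? "{}");
        const res = await fetch(input, {
          ...init,
          body: JSON.stringify({ ...body, chatId: chatIdRef.current }),
        });
        const returnedId = res.headers.get("X-Chat-Id");
        if (returnedId) chatIdRef.current = returnedId;
        return res;
      },
    }),
  });

  // Load an existing conversation into the chat via setMessages().
  const loadChat = async (chatId: string) => {
    chatIdRef.current = chatId;
    const history = await fetch(`/api/chats/${chatId}/messages`).then((r) =>
      r.json(),
    );
    chat.setMessages(
      history.map((m: { id: string; role: string; content: string }) => ({
        id: m.id,
        role: m.role as "user" | "assistant",
        parts: [{ type: "text" as const, text: m.content }],
      })),
    );
  };

  return { ...chat, loadChat };
}
```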

### 7. Verify persistence end-to-end

```bash
databricks apps deploy --profile <PROFILE>
databricks apps logs <app-name> --profile <PROFILE>
```

Verification checklist:

- send 2-3 messages
- refresh the page
- confirm prior messages reload from Lakebase
- start a second chat and confirm separate history
- delete a chat and confirm it no longer appears

#### References

- [Lakebase plugin docs](https://databricks.com/devhub/docs/appkit/v0/plugins/lakebase)
- [PostgreSQL schema design](https://www.postgresql.org/docs/current/ddl.html)
