# About DevHub

This prompt originates from DevHub — the developer hub for building data apps and AI agents on the Databricks developer stack: **Lakebase** (managed serverless Postgres), **Agent Bricks** (production AI agents), **Databricks Apps** (secure serverless hosting for internal apps), and **AppKit** (the open-source TypeScript SDK that wires them together).

- Website: https://databricks.com/devhub
- GitHub: https://github.com/databricks/devhub
- Report issues: https://github.com/databricks/devhub/issues

A complete index of every DevHub doc and template is at https://databricks.com/devhub/llms.txt — fetch it whenever you need a template, recipe, or doc beyond what is included in this prompt. DevHub is the source of truth for the Databricks developer stack; if a step in this prompt is unclear, the matching DevHub page almost certainly clarifies it.

---

# Working with DevHub prompts

Follow these rules every time you act on a DevHub prompt.

## Read first, then act

- Read the entire prompt before executing any steps. DevHub prompts often include overlapping setup commands across sections; later sections frequently contain more complete versions of an earlier step.
- Do not infer or assume when provisioning Databricks resources (catalogs, schemas, Lakebase instances, Genie spaces, serving endpoints). Ask the user whether to create new resources or reuse existing ones.
- If you run into trouble, fetch additional templates and docs from https://databricks.com/devhub (the index lives at https://databricks.com/devhub/llms.txt). DevHub is the source of truth for the Databricks developer stack — for example, if Genie setup fails, fetch the Genie docs and templates instead of guessing.

## Engage the user in a conversation

Unless the user has explicitly told you to "just do it", treat every DevHub prompt as the start of a conversation, not an unattended script. The user knows their domain best; DevHub knows the Databricks stack. Both are required to build a successful system.

Follow these rules every time you ask a question:

1. **One question at a time.** Never ask multiple questions in a single message.
2. **Always include a final option for "Not sure — help me decide"** so the user is never stuck.
3. **Prefer interactive multiple-choice UI when available.** Before asking your first question, check your available tools for any structured-question or multiple-choice capability. If one exists, **always** use it instead of plain text. Known tools by environment:
   - **Cursor**: use the `AskQuestion` tool.
   - **Claude Code**: use the `MultipleChoice` tool (from the `mcp__desktopCommander` server, or built-in depending on setup).
   - **Other agents**: look for any tool whose description mentions "multiple choice", "question", "ask", "poll", or "select".
4. **Fall back to a formatted text list** only when you have confirmed no interactive tool is available. Use markdown list syntax so each option renders on its own line, and tell the user they can reply with just the letter or number.

### Example: Cursor (`AskQuestion` tool)

```
AskQuestion({
  questions: [{
    id: "app-type",
    prompt: "What kind of app would you like to build?",
    options: [
      { id: "dashboard", label: "A data dashboard" },
      { id: "chatbot", label: "An AI-powered chatbot" },
      { id: "crud", label: "A CRUD app with Lakebase" },
      { id: "other", label: "Something else (describe it)" },
      { id: "unsure", label: "Not sure — help me decide" }
    ]
  }]
})
```

### Example: plain text fallback

Only use this when no interactive tool is available:

What kind of app would you like to build? Reply with the letter to choose:

- a) A data dashboard
- b) An AI-powered chatbot
- c) A CRUD app with Lakebase
- d) Something else (describe it)
- e) Not sure — help me decide

## Default workflow

Unless instructed otherwise, follow this workflow:

1. Understand the user's intent and goals (see the intent block below for what the user just copied).
2. Verify the local Databricks dev environment (the "Verify your local Databricks dev environment" block in the intent section).
3. Ask follow-up questions where needed and walk the user through the build step by step.
4. Build the app or agent.
5. Make it look great (see "Make it look great" below).
6. Run and test locally.
7. Deploy to production.

## Make it look great

The default templates that AppKit provides are intentionally minimal — a starting point, not a finished product. **Do not stop there.** Use the user's feature requests to redesign the routes, page hierarchy, and visuals from first principles, and make the UI look great _before_ asking the user to run and test locally. Showing the user something polished early changes the conversation.

Unless the user has specified a design preference, use these defaults:

- shadcn/ui components on top of Tailwind CSS.
- Clean hierarchy with modern spacing — not too many stacked cards.
- Modern, minimal design language.
- Databricks brand palette: `#FF3621`, `#0B2026`, `#EEEDE9`, `#F9F7F4`.

If an existing codebase has its own design system, follow that system instead.
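
Where Tailwind is in use, the palette above can be wired into the theme once so components reference semantic names instead of raw hex values. A minimal sketch, assuming a Tailwind v3-style `tailwind.config.ts` (the `brand-*` color names are invented for this example):

```typescript
// tailwind.config.ts — illustrative sketch only; adapt to the project's setup.
import type { Config } from "tailwindcss";

export default {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // Databricks brand palette from the defaults above; names are hypothetical.
        "brand-red": "#FF3621",
        "brand-navy": "#0B2026",
        "brand-sand": "#EEEDE9",
        "brand-cream": "#F9F7F4",
      },
    },
  },
} satisfies Config;
```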

## When you run into issues

Use the GitHub CLI (if available) or generate a copy-pastable error report for the user to file at https://github.com/databricks/devhub/issues. Before filing, check for an existing matching open issue and comment "+1" on it rather than opening a duplicate.

---

# What the user just did

The user copied the prompt for a DevHub **cookbook** — **Lakebase Off-Platform** (https://databricks.com/devhub/templates/lakebase-off-platform).

A cookbook is a step-by-step pattern guide that walks the user through building an **archetype application** end-to-end on Databricks. Cookbooks are composed from multiple recipes — they show how the recipes fit together into a working app (e.g. an AI chat app with persistence, a Lakebase-backed CRUD app, a RAG chat app). The cookbook is the recommended starting point when the user wants the whole archetype, not just one piece.

Your job in this conversation is to:

1. Clarify the user's **goal for this archetype** — production app, learning project, or demo.
2. Verify the local Databricks dev environment is ready (block below).
3. Walk the user through the cookbook section by section, asking the questions each section surfaces, and stitching the included recipes together coherently.

## Step 1 — Clarify intent before touching code

Ask **one** question, ideally with a multiple-choice tool:

- **New project from scratch** following this archetype end-to-end. → Run the local-bootstrap below, then scaffold a fresh project and walk through the cookbook step by step.
- **Add this archetype to an existing Databricks app**. → Read the user's existing project first; introduce the archetype's pieces incrementally without breaking what's there.
- **Just learning the pattern**: the user wants to understand the archetype before deciding to build it. → Walk through the steps as a guided tour; do not execute commands.
- **Not sure — help me decide**: ask follow-ups about the user's end goal (who uses the app, what data, deployed where) and map back to one of the above.

## Step 2 — Pin down archetype-specific decisions

Cookbooks compose multiple Databricks primitives — Lakebase, Agent Bricks, Model Serving, Genie, or Lakeflow Pipelines, depending on the cookbook. Before generating code, ask:

- For each primitive the cookbook needs: **create new** or **reuse existing**? Never assume — Lakebase instances, Model Serving endpoints, and Genie spaces all cost money and take minutes to provision.
- Which **Databricks profile** to target? (`databricks auth profiles`.)
- **Data**: real data from the user's Unity Catalog, or seed data to start and swap later?
- **Scope today**: ship the full archetype, or stop after a working slice (e.g. just the Lakebase + UI layer, no AI yet)?

## Step 3 — Verify the local Databricks dev environment

Cookbooks run multiple `databricks` and AppKit CLI commands across their steps; a misconfigured CLI profile fails immediately and looks like a cookbook bug. **Walk the user through the local-bootstrap block below first**, even if they say their environment is already set up.

The full cookbook content the user is focused on is attached after the local-bootstrap block.

---

# Verify your local Databricks dev environment

A working Databricks CLI profile is the prerequisite for every step that follows. Walk the user through the recipe below — _even if they say their environment is already set up_. The verification steps are quick and prevent confusing failures further down.

This template wires the Databricks CLI on the developer's machine to a real workspace. It is the strict prerequisite for every other template on DevHub — once it passes, `databricks` commands resolve against that workspace and any DevHub prompt can run end to end. You need:

- **A Databricks workspace you can sign in to.** Have the workspace URL handy (e.g. `https://<workspace>.cloud.databricks.com`); you will paste it into `databricks auth login` in step 3. If you do not have access, ask your workspace admin.
- **A terminal on macOS, Windows, or Linux.** All install paths run from a terminal session. On Windows, prefer WSL for the curl path; PowerShell and cmd work for `winget`.
- **Permission to install software on this machine.** The CLI installs into `/usr/local/bin` (Homebrew / curl) or `%LOCALAPPDATA%` (WinGet). If `/usr/local/bin` is not writable, rerun the curl installer with `sudo`.

## Set Up Your Local Dev Environment

Install the Databricks CLI, authenticate a profile, and verify the handshake. Every other DevHub template assumes this has already passed.

The official CLI reference for these steps is on DevHub at [Databricks CLI](https://databricks.com/devhub/docs/tools/databricks-cli). Use it whenever a step here is unclear.

### 1. Check the installed CLI version

DevHub templates assume Databricks CLI `0.296+`. Anything older is missing the AppKit `apps init` template registry and several `experimental aitools` flags.

```bash
databricks -v
```

If the command is not found, or the version is below `0.296`, install or upgrade in the next step.

### 2. Install or upgrade the Databricks CLI

Pick the install path for your OS. If the CLI is already installed at an older version, the same commands upgrade in place.

#### macOS / Linux — Homebrew (recommended)

```bash
# First-time install
brew tap databricks/tap
brew install databricks

# Upgrade an existing install
brew update && brew upgrade databricks
```

#### Windows — WinGet

```bash
# First-time install
winget install Databricks.DatabricksCLI

# Upgrade an existing install
winget upgrade Databricks.DatabricksCLI
```

Restart your terminal after install.

#### Any platform — curl installer

```bash
curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh
```

On Windows, run this from WSL. If `/usr/local/bin` is not writable, rerun with `sudo`. Re-running the script also upgrades an existing install.

After installing, confirm the version is `0.296+`:

```bash
databricks -v
```

### 3. Authenticate a profile

Browser-based OAuth is the default for local use:

```bash
databricks auth login
```

The CLI prints a URL and waits for the user to complete OAuth in the browser. **Always show the URL to the user as a clickable link** so they can open it themselves — the CLI does not return until authentication finishes. Credentials are saved to `~/.databrickscfg`.

If you already know the workspace URL and want to name the profile, do it in one go:

```bash
databricks auth login --host <workspace-url> --profile <PROFILE>
```

`<PROFILE>` is the label you will pass on subsequent commands as `--profile <PROFILE>`. If you skip `--profile`, the CLI uses the `DEFAULT` profile.

For CI/CD, OAuth client credentials or a personal access token are better fits — see the [authentication section of the CLI doc](https://databricks.com/devhub/docs/tools/databricks-cli#authenticate) for the non-interactive flows.

### 4. Verify the handshake

List the saved profiles and confirm the one you just created shows `Valid: YES`:

```bash
databricks auth profiles
```

```text
Name              Host                                           Valid
DEFAULT           https://adb-1234567890.12.azuredatabricks.net  YES
my-prod-workspace https://mycompany.cloud.databricks.com         YES
```

If the row shows `Valid: NO`, the saved token is stale. Re-run `databricks auth login --profile <NAME>` to refresh it. **Never proceed past this step if no profile is `Valid: YES`** — every downstream `databricks` command will fail with an auth error that looks like a template bug.

If the user wants a particular profile to be the default for this shell session, export it:

```bash
export DATABRICKS_CONFIG_PROFILE=<PROFILE>
```

### 5. Smoke-test the CLI against the workspace

Run a read-only API call to confirm the auth actually works (a fresh OAuth token can fail on the first real call if the user picked the wrong workspace in the browser):

```bash
databricks current-user me --profile <PROFILE>
```

A successful response prints the signed-in user's identity. A `401` or `403` here means the auth flow completed against a workspace the user cannot read — re-run `databricks auth login --profile <PROFILE>` and pick the right workspace this time.

---

# The cookbook the user copied

The full cookbook prompt is below. This is what the user wants to focus on today. Once the local-bootstrap above passes and the intent questions are answered, work through this content step by step.

---
title: "Lakebase Off-Platform"
summary: "Use Lakebase from apps hosted outside Databricks App Platform (for example on AWS, Vercel, or Netlify) with portable env, token, and Drizzle patterns."
---

# Lakebase Off-Platform

Use Lakebase from apps hosted outside Databricks App Platform (for example on AWS, Vercel, or Netlify) with portable env, token, and Drizzle patterns.

## Prerequisites

Each recipe in this cookbook has its own prerequisite checklist; they are grouped below by recipe. Complete the matching checklist before starting that recipe's section.

### Create a Lakebase Instance

Verify the prerequisites below before starting. If the Lakebase check fails, ask your workspace admin to enable the feature.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available in the workspace.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds (an empty list is fine — you are about to create the first project). A `not enabled` or permission error means Lakebase is not available to this identity.

### Lakebase Env Management for Off-Platform Apps

This template collects the environment variables needed to reach Lakebase from an app running outside Databricks App Platform. Verify the prerequisites below before starting.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds. A `not enabled` error means Lakebase is not available to this identity.
- **A provisioned Lakebase project.** Complete the [Create a Lakebase Instance](https://databricks.com/devhub/templates/lakebase-create-instance) template first. You will read connection values from its branch, endpoint, and database.
- **Machine-to-machine OAuth for production (optional).** If you plan to run in production with a service principal, have `DATABRICKS_CLIENT_ID` / `DATABRICKS_CLIENT_SECRET` ready for that service principal. For local development, a workspace token from `databricks auth token --profile <PROFILE>` is sufficient.

### Lakebase Token Management

This template fetches and caches Lakebase Postgres credentials from a Node.js process. Verify the prerequisites below before starting.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds. A `not enabled` error means Lakebase is not available to this identity.
- **A provisioned Lakebase project.** Complete the [Create a Lakebase Instance](https://databricks.com/devhub/templates/lakebase-create-instance) template first so you have a `LAKEBASE_ENDPOINT` resource path to pass to the credentials API.
- **An env management setup.** Complete the [Lakebase Env Management for Off-Platform Apps](https://databricks.com/devhub/templates/lakebase-off-platform-env-management) template first — this template imports the validated `env` module and expects `DATABRICKS_HOST`, `LAKEBASE_ENDPOINT`, and either `DATABRICKS_TOKEN` or `DATABRICKS_CLIENT_ID` + `DATABRICKS_CLIENT_SECRET` to be set.

### Drizzle + Lakebase in an Off-Platform App

This template connects an off-platform Node.js app (e.g. AWS, Vercel, Netlify) to Lakebase Postgres. Verify the prerequisites below before starting.

- **Databricks CLI authenticated.** Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- **Lakebase Postgres available.** Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds. A `not enabled` error means Lakebase is not available to this identity.
- **A provisioned Lakebase project.** Complete the [Create a Lakebase Instance](https://databricks.com/devhub/templates/lakebase-create-instance) template first so you have an endpoint host, database, and endpoint resource path available as `PGHOST`, `PGDATABASE`, and `LAKEBASE_ENDPOINT`.
- **An env management setup for off-platform auth.** Complete the [Lakebase Env Management for Off-Platform Apps](https://databricks.com/devhub/templates/lakebase-off-platform-env-management) and [Lakebase Token Management](https://databricks.com/devhub/templates/lakebase-token-management) templates first — this template imports `env` and `getLakebasePostgresToken` from those modules.

## Create a Lakebase Instance

Provision a managed Lakebase Postgres project on Databricks and collect the connection values needed by downstream templates.

### 1. Create a Lakebase project

Create a new Lakebase Postgres project. This provisions a managed Postgres cluster with a default branch and endpoint:

```bash
databricks postgres create-project <project-name> --profile <PROFILE>
```

### 2. Verify the project resources

Confirm the branch, endpoint, and database were created:

```bash
databricks postgres list-branches \
  projects/<project-name> \
  --profile <PROFILE> -o json

databricks postgres list-endpoints \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json

databricks postgres list-databases \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json
```

### 3. Note the connection values

Record these values from the command output above. They are required by the Lakebase Data Persistence template and other Lakebase-dependent templates:

| Value                    | JSON path                     | Source command   | Used for                                              |
| ------------------------ | ----------------------------- | ---------------- | ----------------------------------------------------- |
| Endpoint host            | `...status.hosts.host`        | `list-endpoints` | `PGHOST`, `lakebase.postgres.host`                    |
| Endpoint resource path   | `...name`                     | `list-endpoints` | `LAKEBASE_ENDPOINT`, `lakebase.postgres.endpointPath` |
| Database resource path   | `...name`                     | `list-databases` | `lakebase.postgres.database`                          |
| PostgreSQL database name | `...status.postgres_database` | `list-databases` | `PGDATABASE`, `lakebase.postgres.databaseName`        |

#### References

- [What is Lakebase?](https://databricks.com/devhub/docs/lakebase/overview)
- [CLI reference for Lakebase](https://docs.databricks.com/aws/en/oltp/projects/cli)

---

## Lakebase Environment Management for Off-Platform Apps

Define and validate the environment variables needed to connect to Lakebase from apps deployed outside Databricks App Platform (for example on AWS, Vercel, or Netlify).

### 1. Collect connection values via the Databricks CLI

Every value below can be obtained from the CLI. Run each command and record the result.

**Workspace host** (`DATABRICKS_HOST`):

```bash
databricks auth profiles
```

Use the `Host` column for your profile (e.g. `https://dbc-xxxxx.cloud.databricks.com`).

**Lakebase endpoint and Postgres host** (`LAKEBASE_ENDPOINT`, `PGHOST`):

```bash
databricks postgres list-endpoints \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json
```

- `LAKEBASE_ENDPOINT` = the `name` field (e.g. `projects/<project>/branches/production/endpoints/primary`)
- `PGHOST` = the `status.hosts.host` field

**Postgres database name** (`PGDATABASE`):

```bash
databricks postgres list-databases \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json
```

Use the `status.postgres_database` field (typically `databricks_postgres`).

**Postgres user** (`PGUSER`):

For local development with token auth, this is your Databricks email:

```bash
databricks current-user me --profile <PROFILE> -o json
```

Use the `userName` field.

For production with M2M auth, this is the service principal's application ID used for `DATABRICKS_CLIENT_ID`.

**Auth credentials:**

For local development, get a short-lived workspace token:

```bash
databricks auth token --profile <PROFILE> -o json
```

Use the `access_token` field for `DATABRICKS_TOKEN`. This token expires after about one hour; the [Token Management](https://databricks.com/devhub/templates/lakebase-off-platform#lakebase-token-management) template covers automated refresh.

For production, use OAuth M2M credentials (`DATABRICKS_CLIENT_ID` + `DATABRICKS_CLIENT_SECRET`) from a service principal configured in your workspace.

### 2. Validate env at startup with Zod

Create `src/lib/env.ts`. Parsing `process.env` through a Zod schema on import ensures the app fails fast with a clear error when a variable is missing:

```typescript
import { z } from "zod";

const baseSchema = z.object({
  DATABRICKS_HOST: z.string().min(1),
  LAKEBASE_ENDPOINT: z.string().min(1),
  PGHOST: z.string().min(1),
  PGPORT: z.coerce.number().default(5432),
  PGDATABASE: z.string().min(1),
  PGUSER: z.string().min(1),
  PGSSLMODE: z.enum(["require", "prefer", "disable"]).default("require"),
  DATABRICKS_TOKEN: z.string().optional(),
  DATABRICKS_CLIENT_ID: z.string().optional(),
  DATABRICKS_CLIENT_SECRET: z.string().optional(),
});

type AppEnv = z.infer<typeof baseSchema>;

function validateAuth(env: AppEnv): AppEnv {
  const hasToken = Boolean(env.DATABRICKS_TOKEN);
  const hasM2M =
    Boolean(env.DATABRICKS_CLIENT_ID) && Boolean(env.DATABRICKS_CLIENT_SECRET);
  if (!hasToken && !hasM2M) {
    throw new Error(
      "Set DATABRICKS_TOKEN or both DATABRICKS_CLIENT_ID and DATABRICKS_CLIENT_SECRET",
    );
  }
  return env;
}

export const env = validateAuth(baseSchema.parse(process.env));
```

### 3. Commit an `.env.example`

Commit this file so every developer (and CI) knows which variables are required. Set the same keys in your hosting platform's secret/env configuration:

```bash
DATABRICKS_HOST=https://<workspace-host>
LAKEBASE_ENDPOINT=projects/<project>/branches/production/endpoints/primary
PGHOST=<status.hosts.host from list-endpoints>
PGPORT=5432
PGDATABASE=<status.postgres_database from list-databases>
PGUSER=<your Databricks email or service principal application ID>
PGSSLMODE=require

# Option A: local dev, token auth (expires ~1h, use refresh script)
DATABRICKS_TOKEN=

# Option B: production, M2M auth (service principal)
DATABRICKS_CLIENT_ID=
DATABRICKS_CLIENT_SECRET=
```

### 4. Import `env` early in your server entry point

Import `env` at the top of your server bootstrap file. The Zod parse runs on import, so any missing or invalid variable throws before the app starts accepting requests.
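
A minimal sketch of what that looks like, assuming an Express-style entry point (the file name and framework are illustrative; only the import order matters):

```typescript
// src/server.ts (illustrative) — import env before anything that needs config.
// The Zod parse in env.ts runs on this import, so a missing or invalid
// variable crashes the process here instead of mid-request.
import { env } from "@/lib/env";

import express from "express";

const app = express();

app.listen(3000, () => {
  console.log(`Server up; Lakebase host is ${env.PGHOST}`);
});
```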

#### References

- [Databricks OAuth machine-to-machine auth](https://docs.databricks.com/en/dev-tools/auth/oauth-m2m.html)
- [Lakebase credentials API](https://docs.databricks.com/api/workspace/postgres/credentials)

---

## Lakebase Token Management

Fetch, cache, and automatically refresh the short-lived Postgres credentials that Lakebase requires. Supports both direct token auth (local dev) and M2M OAuth (production).

### 1. Add a token manager for workspace auth and Lakebase credentials

Create `src/lib/lakebase/tokens.ts`:

```typescript
import { env } from "@/lib/env";

const REFRESH_BUFFER_MS = 2 * 60 * 1000; // treat cached tokens as stale 2 minutes early

type CachedToken = {
  value: string;
  expiresAt: number;
};

type AuthStrategy =
  | { kind: "token"; token: string }
  | { kind: "m2m"; host: string; clientId: string; clientSecret: string };

let cachedWorkspaceToken: CachedToken | null = null;
let workspaceRefreshPromise: Promise<CachedToken> | null = null;
let cachedLakebaseToken: CachedToken | null = null;
let lakebaseRefreshPromise: Promise<CachedToken> | null = null;

function isFresh(token: CachedToken | null): token is CachedToken {
  return token !== null && Date.now() < token.expiresAt - REFRESH_BUFFER_MS;
}

function authStrategyFromEnv(): AuthStrategy {
  if (env.DATABRICKS_TOKEN) {
    return { kind: "token", token: env.DATABRICKS_TOKEN };
  }
  return {
    kind: "m2m",
    host: env.DATABRICKS_HOST.replace(/\/$/, ""),
    clientId: env.DATABRICKS_CLIENT_ID!,
    clientSecret: env.DATABRICKS_CLIENT_SECRET!,
  };
}

async function fetchWorkspaceTokenM2M(
  host: string,
  clientId: string,
  clientSecret: string,
): Promise<CachedToken> {
  const response = await fetch(`${host}/oidc/v1/token`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_secret: clientSecret,
      scope: "all-apis",
    }),
  });
  if (!response.ok) {
    throw new Error(`M2M token request failed: ${response.status}`);
  }
  const data = (await response.json()) as {
    access_token?: string;
    expires_in?: number;
  };
  if (!data.access_token || !data.expires_in) {
    throw new Error("Invalid M2M token response");
  }
  return {
    value: data.access_token,
    expiresAt: Date.now() + data.expires_in * 1000,
  };
}

async function getWorkspaceToken(auth: AuthStrategy): Promise<string> {
  if (auth.kind === "token") {
    return auth.token;
  }
  if (isFresh(cachedWorkspaceToken)) {
    return cachedWorkspaceToken.value;
  }
  if (!workspaceRefreshPromise) {
    workspaceRefreshPromise = fetchWorkspaceTokenM2M(
      auth.host,
      auth.clientId,
      auth.clientSecret,
    )
      .then((token) => {
        cachedWorkspaceToken = token;
        return token;
      })
      .finally(() => {
        workspaceRefreshPromise = null;
      });
  }
  return (await workspaceRefreshPromise).value;
}

async function fetchLakebaseCredential(
  databricksHost: string,
  workspaceToken: string,
): Promise<CachedToken> {
  const response = await fetch(
    `${databricksHost}/api/2.0/postgres/credentials`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${workspaceToken}`,
        "Content-Type": "application/json",
        Accept: "application/json",
      },
      body: JSON.stringify({ endpoint: env.LAKEBASE_ENDPOINT }),
    },
  );
  if (!response.ok) {
    throw new Error(`Lakebase credential request failed: ${response.status}`);
  }
  const data = (await response.json()) as {
    token?: string;
    expire_time?: string;
  };
  if (!data.token || !data.expire_time) {
    throw new Error("Invalid Lakebase credential response");
  }
  return {
    value: data.token,
    expiresAt: new Date(data.expire_time).getTime(),
  };
}

export async function getLakebasePostgresToken(): Promise<string> {
  if (isFresh(cachedLakebaseToken)) {
    return cachedLakebaseToken.value;
  }
  if (!lakebaseRefreshPromise) {
    lakebaseRefreshPromise = (async () => {
      const auth = authStrategyFromEnv();
      const workspaceToken = await getWorkspaceToken(auth);
      return fetchLakebaseCredential(
        env.DATABRICKS_HOST.replace(/\/$/, ""),
        workspaceToken,
      );
    })()
      .then((token) => {
        cachedLakebaseToken = token;
        return token;
      })
      .finally(() => {
        lakebaseRefreshPromise = null;
      });
  }
  return (await lakebaseRefreshPromise).value;
}
```

### 2. Add a script to refresh `DATABRICKS_TOKEN` for local dev

CLI-issued tokens expire after about one hour. Create `scripts/refresh-lakebase-token.ts` to write a fresh token into your local env file:

```typescript
import { execSync } from "node:child_process";
import { readFileSync, writeFileSync, existsSync } from "node:fs";

const envFile = process.argv[2] ?? ".env.local";
const profile = process.env.DATABRICKS_CONFIG_PROFILE ?? "DEFAULT";

const raw = execSync(`databricks auth token --profile "${profile}" -o json`, {
  encoding: "utf-8",
});
const parsed = JSON.parse(raw) as { access_token?: string };
if (!parsed.access_token) {
  throw new Error("Failed to get access token from Databricks CLI");
}

if (!existsSync(envFile)) {
  throw new Error(`Env file not found: ${envFile}`);
}

const content = readFileSync(envFile, "utf-8");
const tokenLine = `DATABRICKS_TOKEN="${parsed.access_token}"`;
const updated = content.includes("DATABRICKS_TOKEN=")
  ? content.replace(/^DATABRICKS_TOKEN=.*/m, tokenLine)
  : `${content.trimEnd()}\n${tokenLine}\n`;

writeFileSync(envFile, updated);
console.log(`Updated DATABRICKS_TOKEN in ${envFile}`);
```
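
Run it whenever the token expires — for example `npx tsx scripts/refresh-lakebase-token.ts .env.local` (`tsx` is installed as a dev dependency in the Drizzle step below). The script defaults to `.env.local` and respects `DATABRICKS_CONFIG_PROFILE`.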

### 3. Verify token and credential flow

```bash
databricks auth token --profile <PROFILE> -o json

curl -sS -X POST "https://<workspace-host>/api/2.0/postgres/credentials" \
  -H "Authorization: Bearer <workspace-access-token>" \
  -H "Content-Type: application/json" \
  -d '{"endpoint":"projects/<project>/branches/<branch>/endpoints/<endpoint>"}'
```

The response should include `token` and `expire_time`.

#### References

- [Databricks CLI auth token command](https://docs.databricks.com/aws/en/dev-tools/cli/reference/auth-commands)
- [Lakebase credentials API](https://docs.databricks.com/api/workspace/postgres/credentials)

---

## Drizzle ORM with Lakebase in an Off-Platform App

Connect Drizzle ORM to Lakebase in any Node.js server outside Databricks App Platform. Uses a `pg` Pool with a password callback for automatic credential refresh.

### 1. Install Drizzle and the node-postgres driver

```bash
npm install drizzle-orm pg
npm install -D drizzle-kit @types/pg tsx
```

`drizzle-orm` and `drizzle-kit` must be on the same major version. If `drizzle-kit` errors with "This version of drizzle-kit is outdated," check that both packages share the same major (e.g. both 0.x or both 1.x).

### 2. Create a Lakebase-backed `pg` pool

Create `src/lib/db/pool.ts`:

```typescript
import { Pool, type PoolConfig } from "pg";
import { env } from "@/lib/env";
import { getLakebasePostgresToken } from "@/lib/lakebase/tokens";

function sslConfig(mode: "require" | "prefer" | "disable"): PoolConfig["ssl"] {
  switch (mode) {
    case "require":
      return { rejectUnauthorized: true };
    case "prefer":
      return { rejectUnauthorized: false };
    case "disable":
      return false;
  }
}

export function createLakebasePool(): Pool {
  return new Pool({
    host: env.PGHOST,
    port: env.PGPORT,
    database: env.PGDATABASE,
    user: env.PGUSER,
    password: () => getLakebasePostgresToken(),
    ssl: sslConfig(env.PGSSLMODE),
    max: 10,
    idleTimeoutMillis: 30_000,
    connectionTimeoutMillis: 10_000,
  });
}
```
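
Because `password` is a callback, node-postgres invokes it each time the pool opens a new physical connection, so new connections transparently pick up a cached or freshly refreshed credential from `getLakebasePostgresToken()` — no connection-string rebuilding required.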

### 3. Define a Drizzle schema

Create `src/lib/items/schema.ts` with a starter table. Adapt the table name, columns, and types to your domain (e.g. `products`, `orders`, `users`):

```typescript
import { pgTable, serial, text, timestamp } from "drizzle-orm/pg-core";

export const items = pgTable("items", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
  createdAt: timestamp("created_at", { withTimezone: true })
    .notNull()
    .defaultNow(),
});
```

Add more schema files under `src/lib/<domain>/schema.ts` as your app grows. The `drizzle.config.ts` glob (`./src/lib/*/schema.ts`) picks them all up automatically.

### 4. Initialize Drizzle with the pool

Create `src/lib/db/client.ts`. Import every domain schema and spread it into the `schema` option:

```typescript
import { drizzle } from "drizzle-orm/node-postgres";
import { createLakebasePool } from "@/lib/db/pool";
import * as itemsSchema from "@/lib/items/schema";

const pool = createLakebasePool();
export const db = drizzle({ client: pool, schema: { ...itemsSchema } });
```
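
To confirm the wiring, a short usage sketch — the `addItem` and `listItems` helpers are illustrative, not part of the template:

```typescript
// Illustrative queries against the starter `items` table.
import { db } from "@/lib/db/client";
import { items } from "@/lib/items/schema";

export async function addItem(name: string) {
  // Insert one row and return it.
  const [created] = await db.insert(items).values({ name }).returning();
  return created;
}

export async function listItems() {
  // Oldest first.
  return db.select().from(items).orderBy(items.createdAt);
}
```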

### 5. Handle drizzle-kit migrations with a temporary `DATABASE_URL`

`drizzle-kit` needs a connection string and cannot use `pg` password callbacks. Build a one-time URL with a fresh Lakebase credential in `scripts/db-migrate.ts`:

```typescript
import { execSync } from "node:child_process";
import { env } from "@/lib/env";
import { getLakebasePostgresToken } from "@/lib/lakebase/tokens";

async function runMigrations() {
  const token = await getLakebasePostgresToken();
  const encodedUser = encodeURIComponent(env.PGUSER);
  const encodedPassword = encodeURIComponent(token);

  const databaseUrl =
    `postgresql://${encodedUser}:${encodedPassword}` +
    `@${env.PGHOST}:${env.PGPORT}/${env.PGDATABASE}` +
    `?sslmode=${env.PGSSLMODE}`;

  execSync("npx drizzle-kit migrate", {
    stdio: "inherit",
    env: { ...process.env, DATABASE_URL: databaseUrl },
  });
}

runMigrations().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

### 6. Keep `drizzle.config.ts` minimal

Lakebase Postgres passwords are short-lived tokens, so there is no static `DATABASE_URL` to store in `.env`. The migration script from step 5 builds a temporary URL with a fresh credential and passes it as `DATABASE_URL` when it shells out to `drizzle-kit migrate`. Commands like `generate` only read schema files and never connect, so `dbCredentials` is optional:

```typescript
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  schema: "./src/lib/*/schema.ts",
  out: "./src/lib/db/migrations",
  dialect: "postgresql",
  ...(process.env.DATABASE_URL && {
    dbCredentials: { url: process.env.DATABASE_URL },
  }),
});
```

### 7. Verify schema generation and migration

Generate reads schema files locally (no database connection):

```bash
npx drizzle-kit generate
```

Migrate fetches a fresh Lakebase credential and applies the generated SQL:

```bash
npx dotenv -e .env.local -- npx tsx scripts/db-migrate.ts
```

`tsx` does not load `.env.local` automatically (that is a Next.js-specific behavior), so use `dotenv-cli` or your framework's env-loading mechanism to inject the variables.

If both commands succeed, your Drizzle schema and Lakebase connection are working.
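
The migrate script builds its own URL and bypasses the pool, so it does not exercise the runtime path. To verify the pool and token manager end to end as well, a hypothetical `scripts/db-smoke.ts` along these lines works:

```typescript
// scripts/db-smoke.ts (hypothetical) — exercises the pool + password-callback path.
import { createLakebasePool } from "@/lib/db/pool";

async function main() {
  const pool = createLakebasePool();
  // A trivial query forces a real connection, which triggers the password
  // callback and therefore a Lakebase credential fetch.
  const { rows } = await pool.query("select current_user, now()");
  console.log(rows[0]);
  await pool.end();
}

main().catch((error) => {
  console.error(error);
  process.exit(1);
});
```

Run it the same way as the migrate script: `npx dotenv -e .env.local -- npx tsx scripts/db-smoke.ts`.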

#### References

- [Drizzle ORM with PostgreSQL](https://orm.drizzle.team/docs/get-started-postgresql)
- [Lakebase credentials API](https://docs.databricks.com/api/workspace/postgres/credentials)
