Two ways to use this template
- Agent-driven: click "Copy prompt" below and paste the prompt into Cursor, Claude Code, Codex, or any coding agent. The agent builds the app, asking questions along the way so the result is exactly what you want.
- Manual: follow the steps below to set things up at your own pace.
Lakebase Environment Management for Off-Platform Apps
Define and validate the environment variables needed to connect to Lakebase from apps deployed outside Databricks App Platform (for example on AWS, Vercel, or Netlify).
Prerequisites
This template collects the environment variables needed to reach Lakebase from an app running outside Databricks App Platform. Verify these Databricks workspace features are enabled before starting.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- Lakebase Postgres available. Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds. A `not enabled` error means Lakebase is not available to this identity.
- A provisioned Lakebase project. Complete the Create a Lakebase Instance template first. You will read connection values from its branch, endpoint, and database.
- Machine-to-machine OAuth for production (optional). If you plan to run in production with a service principal, have `DATABRICKS_CLIENT_ID` / `DATABRICKS_CLIENT_SECRET` ready for that service principal. For local development, a workspace token from `databricks auth token --profile <PROFILE>` is sufficient.
1. Collect connection values via the Databricks CLI
Every value below can be obtained from the CLI. Run each command and record the result.
Workspace host (`DATABRICKS_HOST`):

```shell
databricks auth profiles
```

Use the `Host` column for your profile (e.g. `https://dbc-xxxxx.cloud.databricks.com`).
Lakebase endpoint and Postgres host (`LAKEBASE_ENDPOINT`, `PGHOST`):

```shell
databricks postgres list-endpoints \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json
```

- `LAKEBASE_ENDPOINT` = the `name` field (e.g. `projects/<project>/branches/production/endpoints/primary`)
- `PGHOST` = the `status.hosts.host` field
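When wiring these values into a deploy script, the two fields can be pulled out programmatically. A minimal TypeScript sketch, assuming the JSON printed by `list-endpoints` has the shape implied by the field paths above (`name` and `status.hosts.host`) -- verify against real output before relying on it:

```typescript
// Hypothetical shape inferred from the field paths named above; check it
// against the actual `databricks postgres list-endpoints -o json` output.
type EndpointJson = {
  name: string;
  status: { hosts: { host: string } };
};

// Map one endpoint object onto the two env values this step collects.
function connectionValues(endpoint: EndpointJson) {
  return {
    LAKEBASE_ENDPOINT: endpoint.name,
    PGHOST: endpoint.status.hosts.host,
  };
}
```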
Postgres database name (`PGDATABASE`):

```shell
databricks postgres list-databases \
  projects/<project-name>/branches/production \
  --profile <PROFILE> -o json
```

Use the `status.postgres_database` field (typically `databricks_postgres`).
Postgres user (`PGUSER`):

For local development with token auth, this is your Databricks email:

```shell
databricks current-user me --profile <PROFILE> -o json
```

Use the `userName` field.

For production with M2M auth, this is the service principal's application ID used for `DATABRICKS_CLIENT_ID`.
Auth credentials:

For local development, get a short-lived workspace token:

```shell
databricks auth token --profile <PROFILE> -o json
```

Use the `access_token` field for `DATABRICKS_TOKEN`. This token expires after about one hour; the Token Management template covers automated refresh.

For production, use OAuth M2M credentials (`DATABRICKS_CLIENT_ID` + `DATABRICKS_CLIENT_SECRET`) from a service principal configured in your workspace.
2. Validate env at startup with Zod
Create `src/lib/env.ts`. Parsing `process.env` through a Zod schema on import ensures the app fails fast with a clear error when a variable is missing:
```typescript
import { z } from "zod";

const baseSchema = z.object({
  DATABRICKS_HOST: z.string().min(1),
  LAKEBASE_ENDPOINT: z.string().min(1),
  PGHOST: z.string().min(1),
  PGPORT: z.coerce.number().default(5432),
  PGDATABASE: z.string().min(1),
  PGUSER: z.string().min(1),
  PGSSLMODE: z.enum(["require", "prefer", "disable"]).default("require"),
  DATABRICKS_TOKEN: z.string().optional(),
  DATABRICKS_CLIENT_ID: z.string().optional(),
  DATABRICKS_CLIENT_SECRET: z.string().optional(),
});

type AppEnv = z.infer<typeof baseSchema>;

// Zod validates fields individually; this cross-field check enforces that at
// least one complete auth option is present.
function validateAuth(env: AppEnv): AppEnv {
  const hasToken = Boolean(env.DATABRICKS_TOKEN);
  const hasM2M =
    Boolean(env.DATABRICKS_CLIENT_ID) && Boolean(env.DATABRICKS_CLIENT_SECRET);
  if (!hasToken && !hasM2M) {
    throw new Error(
      "Set DATABRICKS_TOKEN or both DATABRICKS_CLIENT_ID and DATABRICKS_CLIENT_SECRET",
    );
  }
  return env;
}

export const env = validateAuth(baseSchema.parse(process.env));
```
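To show how the validated env is typically consumed, here is a sketch mapping it onto a node-postgres style connection config. The mapping is an assumption for illustration (token as the Postgres password, `ssl` derived from `PGSSLMODE`); with M2M auth you would first exchange the client credentials for an access token:

```typescript
// Sketch: validated env -> connection config in the shape node-postgres
// accepts. Assumes token auth; DATABRICKS_TOKEN serves as the password.
function pgConfigFrom(env: {
  PGHOST: string;
  PGPORT: number;
  PGDATABASE: string;
  PGUSER: string;
  PGSSLMODE: "require" | "prefer" | "disable";
  DATABRICKS_TOKEN?: string;
}) {
  return {
    host: env.PGHOST,
    port: env.PGPORT,
    database: env.PGDATABASE,
    user: env.PGUSER,
    password: env.DATABRICKS_TOKEN, // short-lived workspace token
    ssl: env.PGSSLMODE !== "disable", // keep TLS on for Lakebase
  };
}
```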
3. Commit an .env.example
Commit this file so every developer (and CI) knows which variables are required. Set the same keys in your hosting platform's secret/env configuration:
```shell
DATABRICKS_HOST=https://<workspace-host>
LAKEBASE_ENDPOINT=projects/<project>/branches/production/endpoints/primary
PGHOST=<status.hosts.host from list-endpoints>
PGPORT=5432
PGDATABASE=<status.postgres_database from list-databases>
PGUSER=<your Databricks email or service principal application ID>
PGSSLMODE=require

# Option A: local dev, token auth (expires ~1h, use refresh script)
DATABRICKS_TOKEN=

# Option B: production, M2M auth (service principal)
DATABRICKS_CLIENT_ID=
DATABRICKS_CLIENT_SECRET=
```
4. Import env early in your server entry point
Import `env` at the top of your server bootstrap file. The Zod parse runs on import, so any missing or invalid variable throws before the app starts accepting requests.
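The fail-fast behavior can be sketched without Zod; this stand-in validator (hypothetical, not part of the template) just illustrates the control flow that validating on import gives you:

```typescript
// Minimal stand-in for the Zod parse: throw before the server starts if any
// required variable is absent. In the real entry point, `import { env } from
// "./lib/env"` triggers the equivalent check as a side effect of the import.
function requireVars(
  source: Record<string, string | undefined>,
  keys: string[],
): void {
  const missing = keys.filter((k) => !source[k]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
}
```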