Lakebase Postgres development

AppKit plugin API

The lakebase() plugin provides a standard pg.Pool with automatic OAuth token refresh. Once registered, access it via AppKit.lakebase:

import { createApp, server, lakebase } from "@databricks/appkit";

const AppKit = await createApp({
  plugins: [server(), lakebase()],
});

// Standard parameterized query
const { rows } = await AppKit.lakebase.query<{ id: number; name: string }>(
  "SELECT id, name FROM app.items WHERE active = $1",
  [true],
);

// ORM-ready config (Drizzle, Prisma, TypeORM, etc.)
const ormConfig = AppKit.lakebase.getOrmConfig();
// Returns: { host, port, database, ssl, user, ... }

// pg-compatible config
const pgConfig = AppKit.lakebase.getPgConfig();

// Raw pg.Pool for advanced usage
const pool = AppKit.lakebase.pool;
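
The raw pool can also feed an ORM directly. A minimal sketch with Drizzle, assuming drizzle-orm is installed and the ./schema module (with an items table) is your own code, not part of AppKit:

import { drizzle } from "drizzle-orm/node-postgres";
// Hypothetical schema module defining pgTable objects such as items
import * as schema from "./schema";

// Reuse the plugin-managed pool so OAuth token refresh still applies
const db = drizzle(AppKit.lakebase.pool, { schema });

const items = await db.select().from(schema.items);

Reusing AppKit.lakebase.pool (rather than building a second pool from getOrmConfig()) keeps one set of connections and one token-refresh lifecycle.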

Pool configuration

Override connection pool defaults by passing a pool object:

lakebase({
  pool: {
    max: 10,                       // max connections (default: 10)
    connectionTimeoutMillis: 5000, // connection timeout ms (default: 10000)
    idleTimeoutMillis: 30000,      // idle timeout ms (default: 30000)
  },
});

Caching integration

When the database is healthy, Lakebase Postgres also serves as the backing store for the AppKit caching plugin. For the full API, ORM integration, and connection configuration, read the plugin reference.

Auth model

Lakebase Postgres authenticates database connections using OAuth tokens or native Postgres passwords. The method depends on where your app runs.

Deployed apps: When you add a Lakebase database as a resource to a Databricks App, Databricks automatically creates a service principal, grants it a matching Postgres role, and injects connection details as environment variables. AppKit's lakebase() plugin handles OAuth token refresh for you.

Local development: Your personal Databricks identity connects with an OAuth token generated by databricks postgres generate-database-credential. Tokens expire after one hour, but expiration is enforced only at login. Open connections remain active after the token expires. Run databricks apps deploy at least once before running npm run dev. Local setup explains why order matters and what to do if you hit permission errors.

About authentication covers Postgres password auth, token rotation, and machine-to-machine flows.

Local setup

databricks apps init populates .env with the correct Lakebase Postgres connection values. Run databricks apps deploy before npm run dev. Deploying sets up a managed identity (the app's service principal) that creates the app schema and tables on first startup and owns them. If npm run dev runs first instead, your personal credentials create those objects. The deployed app then can't access them and hits permission denied for schema app.

Local database access

If you created the Lakebase Postgres project, your identity already has the access it needs. After databricks apps deploy runs once, npm run dev works.

For collaborators and other identities that need local read/write access, add them in the Lakebase UI: open Branch Overview, click Add role, select the identity, and check the databricks_superuser system role.

Postgres password authentication is a simpler alternative that avoids OAuth role setup. Set a password in the Branch Overview page and use it as PGPASSWORD in .env.
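
With password auth, a local .env might look like the following sketch. Host and database values come from your own project (the host here echoes the list-endpoints example later on this page), and PGUSER as your Databricks email follows the credential flow described below; the standard libpq variable names are an assumption about your local setup:

# Endpoint host and database from list-endpoints / list-databases
PGHOST=ep-cool-breeze-abc123.database.cloud.databricks.com
PGDATABASE=databricks_postgres
# Your Databricks identity as the Postgres user
PGUSER=you@example.com
# Password set in the Branch Overview page
PGPASSWORD=<password-from-branch-overview>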

You can also generate a short-lived credential for use with any PostgreSQL client (DBeaver, pgAdmin, DataGrip, or a language driver):

databricks postgres generate-database-credential \
  projects/my-project/branches/production/endpoints/primary

The AppKit plugin docs on local development cover fine-grained permission alternatives for teams that need schema-scoped access.

Feature branches

Use Lakebase Postgres branches to isolate schema changes and test migrations without affecting production:

databricks postgres create-branch projects/my-project feature-xyz
Options
| Option | Required | Description |
| --- | --- | --- |
| PARENT | yes | Project resource path: projects/{project_id} |
| BRANCH_ID | yes | Unique branch identifier (1-63 chars, lowercase) |
| --json | no | JSON spec with source_branch and expiration policy (no_expiry, ttl, or expire_time). If omitted, branches from the project's default branch with default expiration |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |

A primary read-write endpoint is created automatically, inheriting the project's default_endpoint_settings. Branches require an expiration policy (ttl, expire_time, or no_expiry: true). Branch expiration details the available policies.
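
For example, creating a branch with a one-week TTL might look like the following sketch. The spec wrapper follows the update-branch pattern shown later on this page, and the exact expiration field names and duration format are assumptions based on the ttl policy named above:

databricks postgres create-branch projects/my-project feature-xyz \
  --json '{"spec": {"ttl": "604800s"}}'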

Delete when done:

databricks postgres delete-branch projects/my-project/branches/feature-xyz
Options
| Option | Required | Description |
| --- | --- | --- |
| NAME | yes | Branch resource path: projects/{project_id}/branches/{branch_id} |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |

Off-platform apps

For apps hosted outside Databricks (AWS, Vercel, Netlify, and others), the platform does not inject connection details or refresh OAuth tokens automatically. Token rotation is the app's responsibility. About Lakebase authentication covers token rotation and machine-to-machine patterns. The Lakebase Off-Platform template includes a complete implementation with environment setup and Drizzle ORM integration.
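
One way to handle rotation is node-postgres's support for a password function, which is called for each new connection so an expired token is never reused. A minimal sketch, assuming a hypothetical fetchDatabaseCredential() helper that mints a fresh OAuth token through your Databricks auth flow of choice:

import { Pool } from "pg";

// Hypothetical helper: mint a short-lived database credential via
// your Databricks machine-to-machine auth flow.
declare function fetchDatabaseCredential(): Promise<string>;

const pool = new Pool({
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  user: process.env.PGUSER,
  ssl: true,
  // pg calls this for every new connection, so each one gets a
  // current token even after the previous one expires
  password: () => fetchDatabaseCredential(),
});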

Manual provisioning (without a template)

Create a project

databricks postgres create-project my-project
Options
| Option | Required | Description |
| --- | --- | --- |
| PROJECT_ID | yes | Unique project identifier (1-63 chars, lowercase letter start, lowercase/numbers/hyphens) |
| --json | no | Inline JSON or @path/to/file.json with project spec (display_name, pg_version, history_retention_duration, default_endpoint_settings) |
| --no-wait | no | Return immediately with operation details instead of waiting for completion |
| --timeout | no | Max time to wait for completion (for example, 10m). Ignored with --no-wait |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |

This creates a project with a default production branch, a databricks_postgres database, and a read-write endpoint. The optional display_name sets a human-readable label.

Get connection values

databricks postgres list-endpoints projects/my-project/branches/production -o json
Options
| Option | Required | Description |
| --- | --- | --- |
| PARENT | yes | Branch resource path: projects/{project_id}/branches/{branch_id} |
| -o json | no | Output as JSON (default: text) |
| --page-size | no | Items per API request (upper bound per page; must be at least 10 if set) |
| --limit | no | Maximum total results to return; CLI auto-paginates and stops at this count |
| --debug | no | Enable debug logging |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Example response
[
  {
    "create_time": "2026-01-15T10:30:00Z",
    "name": "projects/my-project/branches/production/endpoints/primary",
    "parent": "projects/my-project/branches/production",
    "status": {
      "autoscaling_limit_max_cu": 1,
      "autoscaling_limit_min_cu": 1,
      "current_state": "ACTIVE",
      "disabled": false,
      "endpoint_type": "ENDPOINT_TYPE_READ_WRITE",
      "group": {
        "enable_readable_secondaries": false,
        "max": 1,
        "min": 1
      },
      "hosts": {
        "host": "ep-cool-breeze-abc123.database.cloud.databricks.com"
      },
      "settings": {}
    },
    "uid": "ep-cool-breeze-abc123",
    "update_time": "2026-01-15T10:31:00Z"
  }
]
databricks postgres list-databases projects/my-project/branches/production -o json
Options
| Option | Required | Description |
| --- | --- | --- |
| PARENT | yes | Branch resource path: projects/{project_id}/branches/{branch_id} |
| -o json | no | Output as JSON (default: text) |
| --page-size | no | Items per API request (upper bound per page) |
| --limit | no | Maximum total results to return; CLI auto-paginates and stops at this count |
| --debug | no | Enable debug logging |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Example response
[
  {
    "create_time": "2026-01-15T10:30:00Z",
    "name": "projects/my-project/branches/production/databases/db-abc123",
    "parent": "projects/my-project/branches/production",
    "status": {
      "postgres_database": "databricks_postgres",
      "role": "projects/my-project/branches/production/roles/rol-xyz789"
    },
    "update_time": "2026-01-15T10:30:05Z"
  }
]

Key values from the output:

| Value | JSON path | Used for |
| --- | --- | --- |
| Endpoint host | status.hosts.host | PGHOST |
| Endpoint resource path | name | LAKEBASE_ENDPOINT |
| Database resource path | name (from list-databases) | lakebase.postgres.database |
| PostgreSQL database name | status.postgres_database | PGDATABASE |

Connect

The simplest way to connect is with databricks psql:

databricks psql --project my-project
Options
| Option | Required | Description |
| --- | --- | --- |
| --project | no | Project ID. With a TTY, omit to choose from prompts. In CI or scripts, set explicitly when needed |
| --branch | no | Branch ID (default: auto-select when only one exists) |
| --endpoint | no | Endpoint ID (default: auto-select when only one exists) |
| --autoscaling | no | Only show Lakebase Autoscaling projects |
| --provisioned | no | Only show Lakebase Provisioned instances |
| --max-retries | no | Connection retries, 0 to disable (default: 3) |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
| -- PSQL_ARGS | no | Additional arguments passed through to psql |

Without a TTY (for example in CI), the CLI auto-selects when only one branch or endpoint exists. When multiple exist, specify --project, --branch, and --endpoint explicitly so the command does not block on prompts.
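
For example, a non-interactive CI invocation might pin all three and pass a query through to psql via the PSQL_ARGS pass-through (the branch and endpoint IDs here are illustrative):

databricks psql --project my-project --branch production --endpoint primary \
  -- -c "SELECT 1"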

If you don't have a psql client installed, generate a short-lived credential and use it with any PostgreSQL client (DBeaver, pgAdmin, DataGrip, or a language driver):

databricks postgres generate-database-credential \
  projects/my-project/branches/production/endpoints/primary
Options
| Option | Required | Description |
| --- | --- | --- |
| ENDPOINT | yes | Endpoint resource path: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| --json | no | Inline JSON or @path/to/file.json with request body |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |

Use the returned token as the password, with your Databricks email as the username and the endpoint host from list-endpoints above.
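
For example, with the endpoint host from the list-endpoints output above (a sketch; PGSSLMODE=require is an assumption, adjust to your client's TLS settings):

PGPASSWORD=<token> PGSSLMODE=require psql \
  --host ep-cool-breeze-abc123.database.cloud.databricks.com \
  --dbname databricks_postgres \
  --username you@example.com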

Long-running operations

Create, update, and delete commands block until complete by default. Use --no-wait to return immediately and poll status:

databricks postgres create-project my-project \
  --json '{"spec": {"display_name": "My Project"}}' \
  --no-wait

databricks postgres get-operation projects/my-project/operations/<operation-id>

Declarative Automation Bundles

Declarative Automation Bundles (DABs) let you define Lakebase Postgres infrastructure as code in databricks.yml, versioned alongside your application. A bundle specifies postgres_projects, postgres_branches, and postgres_endpoints under resources.

Example databricks.yml with a project, dev branch, and read-only replica
bundle:
  name: my-lakebase-app

resources:
  postgres_projects:
    my_app:
      project_id: "my-lakebase-app"
      display_name: "My Lakebase Postgres App"
      pg_version: 17
      history_retention_duration: "172800s"
      default_endpoint_settings:
        autoscaling_limit_min_cu: 0.5
        autoscaling_limit_max_cu: 1.0
        suspend_timeout_duration: "300s"
        pg_settings:
          log_min_duration_statement: "1000"

  postgres_branches:
    dev_branch:
      parent: ${resources.postgres_projects.my_app.id}
      branch_id: "dev"
      no_expiry: true
      is_protected: false

  postgres_endpoints:
    read_replica:
      parent: ${resources.postgres_branches.dev_branch.id}
      endpoint_id: "replica"
      endpoint_type: "ENDPOINT_TYPE_READ_ONLY"
      autoscaling_limit_min_cu: 0.5
      autoscaling_limit_max_cu: 0.5

Validate and deploy

databricks bundle validate
databricks bundle deploy

bundle deploy is idempotent. It creates new resources and updates existing ones to match the configuration. Unlike Databricks Jobs or Apps, there is no bundle run step; Lakebase Postgres resources are active once deployed. The Declarative Automation Bundles documentation covers all options.

Update masks

Update commands require an update mask specifying which fields to modify. The --json payload contains the new values. Only masked fields change.

databricks postgres update-branch \
  projects/my-project/branches/production \
  spec.is_protected \
  --json '{"spec": {"is_protected": true}}'

For multiple fields, use a comma-separated update mask (for example, spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu).
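
For example, resizing an endpoint's autoscaling range might look like the following sketch. It follows the update-branch pattern above with the update-endpoint command referenced in Troubleshooting below; the exact argument layout is an assumption:

databricks postgres update-endpoint \
  projects/my-project/branches/production/endpoints/primary \
  spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu \
  --json '{"spec": {"autoscaling_limit_min_cu": 0.5, "autoscaling_limit_max_cu": 2.0}}'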

Troubleshooting

For Databricks Apps configuration issues (resources in databricks.yml and app.yaml), Add a Lakebase resource to a Databricks app has the resource and environment variable reference. For connection problems including idle wake-up and endpoint format, Troubleshooting in Connect external apps has fixes.

  • permission denied for schema app (deployed app): npm run dev ran before databricks apps deploy, so the schema is owned by your personal credentials and the app's service principal can't access it. The app fails to read or write to the database. To fix without losing data: connect via databricks psql and run ALTER SCHEMA app OWNER TO "<sp-client-id>"; and ALTER TABLE app.<your-table> OWNER TO "<sp-client-id>"; for each table (the client ID is in the app's Authorization tab), then redeploy. To start fresh: databricks psql --project <project-id> -- -c "DROP SCHEMA IF EXISTS app CASCADE;" then databricks apps deploy.
  • permission denied for schema app (local dev, collaborator): Only the Lakebase project creator has databricks_superuser access automatically. To grant a teammate local access, the project creator opens Branch Overview in the Lakebase UI, clicks Add role, selects the teammate's identity, and checks databricks_superuser. Postgres password auth is a simpler alternative: set a password in Branch Overview and add PGPASSWORD=<password> to .env.
  • Unknown field path in update_mask: 'spec.suspend_timeout_duration': Use spec.suspension as the update mask for all endpoint-level suspension changes with update-endpoint. To disable scale to zero, pass {"spec": {"no_suspension": true}}; to change the timeout, pass {"spec": {"suspend_timeout_duration": "300s"}}. Setting no_suspension: false is not supported.
  • Connection refused after period of inactivity: Lakebase Autoscaling scales to zero when idle. The first connection after inactivity triggers a wake-up and may take a few seconds. If your connection library doesn't retry automatically, add a short retry loop (see the sketch after this list).
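
A minimal retry sketch for the wake-up case, assuming a pg.Pool such as AppKit.lakebase.pool; the helper name and linear backoff are illustrative choices, not part of AppKit:

import { Pool } from "pg";

// Retry a query a few times with a short linear backoff so the first
// request after scale-to-zero survives the wake-up delay.
async function queryWithRetry<T>(
  pool: Pool,
  sql: string,
  params: unknown[] = [],
  attempts = 3,
): Promise<T[]> {
  for (let i = 0; ; i++) {
    try {
      const { rows } = await pool.query(sql, params);
      return rows as T[];
    } catch (err) {
      if (i >= attempts - 1) throw err; // out of retries, surface the error
      await new Promise((r) => setTimeout(r, 1000 * (i + 1)));
    }
  }
}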

AppKit docs

Access the AppKit API reference, component docs, and plugin docs from the terminal:

npx @databricks/appkit docs             # browse the documentation index
npx @databricks/appkit docs "lakebase"  # view Lakebase Postgres plugin docs

Or view the AppKit Lakebase Postgres plugin reference on this site.

Where to next

Templates cover common Lakebase Postgres patterns. Browse them to find a starting point, or copy one into your coding agent to scaffold a working app.