Lakebase Postgres development
AppKit plugin API
The lakebase() plugin provides a standard pg.Pool with automatic OAuth token refresh. Once registered, access it via AppKit.lakebase:
import { createApp, server, lakebase } from "@databricks/appkit";
const AppKit = await createApp({
  plugins: [server(), lakebase()],
});
// Standard parameterized query
const { rows } = await AppKit.lakebase.query<{ id: number; name: string }>(
"SELECT id, name FROM app.items WHERE active = $1",
[true],
);
// ORM-ready config (Drizzle, Prisma, TypeORM, etc.)
const ormConfig = AppKit.lakebase.getOrmConfig();
// Returns: { host, port, database, ssl, user, ... }
// pg-compatible config
const pgConfig = AppKit.lakebase.getPgConfig();
// Raw pg.Pool for advanced usage
const pool = AppKit.lakebase.pool;
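For example, the pool can be handed straight to an ORM. Here is a minimal sketch with Drizzle, assuming drizzle-orm's node-postgres adapter; the items table definition is illustrative, matching the app.items query above:
import { drizzle } from "drizzle-orm/node-postgres";
import { pgSchema, serial, text, boolean } from "drizzle-orm/pg-core";
// Illustrative schema for the app.items table queried above
const app = pgSchema("app");
const items = app.table("items", {
  id: serial("id").primaryKey(),
  name: text("name").notNull(),
  active: boolean("active").notNull().default(true),
});
// The AppKit-managed pool keeps OAuth tokens fresh; Drizzle just borrows it
const db = drizzle(AppKit.lakebase.pool);
const activeItems = await db.select().from(items);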
Pool configuration
Override connection pool defaults by passing a pool object:
lakebase({
  pool: {
    max: 10, // max connections (default: 10)
    connectionTimeoutMillis: 5000, // connection timeout ms (default: 10000)
    idleTimeoutMillis: 30000, // idle timeout ms (default: 30000)
  },
});
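If you're tuning these values, node-postgres exposes live counters on the pool; a quick diagnostic sketch:
// Snapshot of pool utilization, useful when tuning max and the timeouts
const pool = AppKit.lakebase.pool;
console.log({
  total: pool.totalCount, // clients currently created (bounded by max)
  idle: pool.idleCount, // connected clients sitting idle
  waiting: pool.waitingCount, // queries queued waiting for a free client
});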
Caching integration
Lakebase Postgres also backs the AppKit caching plugin when the database is healthy. For the full API, ORM integration, and connection configuration, read the plugin reference.
Auth model
Lakebase Postgres authenticates database connections using OAuth tokens or native Postgres passwords. The method depends on where your app runs.
Deployed apps: When you add Lakebase Postgres as a resource to a Databricks App, Databricks creates a service principal automatically, grants it a matching Postgres role, and injects connection details as environment variables. AppKit's lakebase() plugin handles OAuth token refresh automatically.
Local development: Your personal Databricks identity connects with an OAuth token generated by databricks postgres generate-database-credential. Tokens expire after one hour, but expiration is enforced only at login. Open connections remain active after the token expires. Run databricks apps deploy at least once before running npm run dev. Local setup explains why order matters and what to do if you hit permission errors.
About authentication covers Postgres password auth, token rotation, and machine-to-machine flows.
Local setup
databricks apps init populates .env with the correct Lakebase Postgres connection values. Run databricks apps deploy before npm run dev. Deploying sets up a managed identity (the app's service principal) that creates the app schema and tables on first startup and owns them. If npm run dev runs first instead, your personal credentials create those objects. The deployed app then can't access them and hits permission denied for schema app.
Local database access
If you created the Lakebase Postgres project, your identity already has the access it needs. After databricks apps deploy runs once, npm run dev works.
For collaborators and other identities that need local read/write access, add them in the Lakebase UI: open Branch Overview, click Add role, select the identity, and check the databricks_superuser system role.
Postgres password authentication is a simpler alternative that avoids OAuth role setup. Set a password in the Branch Overview page and use it as PGPASSWORD in .env.
You can also generate a short-lived credential for use with any PostgreSQL client (DBeaver, pgAdmin, DataGrip, or a language driver):
databricks postgres generate-database-credential \
projects/my-project/branches/production/endpoints/primary
The AppKit plugin docs on local development cover fine-grained permission alternatives for teams that need schema-scoped access.
Feature branches
Use Lakebase Postgres branches to isolate schema changes and test migrations without affecting production:
- Common
- All Options
databricks postgres create-branch projects/my-project feature-xyz
databricks postgres create-branch \
projects/$PROJECT_ID \
$BRANCH_ID \
--json '{"spec": {"source_branch": "projects/$PROJECT_ID/branches/$SOURCE_BRANCH_ID", "no_expiry": true}}' \
--debug \
-o json \
--target $TARGET \
--no-wait \
--timeout 10m \
--profile $DATABRICKS_PROFILE
Options
| Option | Required | Description |
|---|---|---|
| PARENT | yes | Project resource path: projects/{project_id} |
| BRANCH_ID | yes | Unique branch identifier (1-63 chars, lowercase) |
| --json | no | JSON spec with source_branch and an expiration policy (no_expiry, ttl, or expire_time). If omitted, branches from the project's default branch with default expiration |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
A primary read-write endpoint is created automatically, inheriting the project's default_endpoint_settings. Branches require an expiration policy (ttl, expire_time, or no_expiry: true). Branch expiration details the available policies.
Delete when done:
- Common
- All Options
databricks postgres delete-branch projects/my-project/branches/feature-xyz
databricks postgres delete-branch \
projects/$PROJECT_ID/branches/$BRANCH_ID \
--no-wait \
--timeout 10m \
--debug \
-o json \
--target $TARGET \
--profile $DATABRICKS_PROFILE
Options
| Option | Required | Description |
|---|---|---|
| NAME | yes | Branch resource path: projects/{project_id}/branches/{branch_id} |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Off-platform apps
For apps hosted outside Databricks (AWS, Vercel, Netlify, and others), the platform does not inject connection details or refresh OAuth tokens automatically. Token rotation is the app's responsibility. About Lakebase authentication covers token rotation and machine-to-machine patterns. The Lakebase Off-Platform template includes a complete implementation with environment setup and Drizzle ORM integration.
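As a sketch of what that responsibility can look like: node-postgres accepts a password callback, which runs for each new connection, so a fresh token is fetched whenever the pool dials out. The fetchOAuthToken helper below is hypothetical; see the template above for a complete implementation:
import { Pool } from "pg";
// Hypothetical helper: obtain a fresh OAuth token for the database,
// for example via the Databricks SDK or a credentials endpoint.
declare function fetchOAuthToken(): Promise<string>;
const pool = new Pool({
  host: process.env.PGHOST,
  database: process.env.PGDATABASE,
  user: process.env.PGUSER,
  ssl: { rejectUnauthorized: true },
  // pg accepts a function here and calls it per new connection,
  // so an expired token is never reused for a fresh connection.
  password: () => fetchOAuthToken(),
});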
Manual provisioning (without a template)
Create a project
- Common
- All Options
databricks postgres create-project my-project
databricks postgres create-project $PROJECT_ID \
--json '{"spec": {
"display_name": "My Lakebase Postgres Project",
"pg_version": 17,
"history_retention_duration": "172800s",
"default_endpoint_settings": {
"autoscaling_limit_min_cu": 0.5,
"autoscaling_limit_max_cu": 1.0,
"suspend_timeout_duration": "300s"
}
}}' \
--no-wait \
--timeout 10m \
--debug \
-o json \
--target $TARGET \
--profile $DATABRICKS_PROFILE
Options
| Option | Required | Description |
|---|---|---|
| PROJECT_ID | yes | Unique project identifier (1-63 chars, lowercase letter start, lowercase/numbers/hyphens) |
| --json | no | Inline JSON or @path/to/file.json with project spec (display_name, pg_version, history_retention_duration, default_endpoint_settings) |
| --no-wait | no | Return immediately with operation details instead of waiting for completion |
| --timeout | no | Max time to wait for completion (for example, 10m). Ignored with --no-wait |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
This creates a project with a default production branch, a databricks_postgres database, and a read-write endpoint. The optional display_name sets a human-readable label.
Get connection values
- Common
- All Options
databricks postgres list-endpoints projects/my-project/branches/production -o json
databricks postgres list-endpoints \
projects/$PROJECT_ID/branches/$BRANCH_ID \
-o json \
--page-size 100 \
--limit 100 \
--debug \
--target $TARGET \
--profile $DATABRICKS_PROFILE
Options
| Option | Required | Description |
|---|---|---|
| PARENT | yes | Branch resource path: projects/{project_id}/branches/{branch_id} |
| -o json | no | Output as JSON (default: text) |
| --page-size | no | Items per API request (upper bound per page; must be at least 10 if set) |
| --limit | no | Maximum total results to return; CLI auto-paginates and stops at this count |
| --debug | no | Enable debug logging |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Example response
[
  {
    "create_time": "2026-01-15T10:30:00Z",
    "name": "projects/my-project/branches/production/endpoints/primary",
    "parent": "projects/my-project/branches/production",
    "status": {
      "autoscaling_limit_max_cu": 1,
      "autoscaling_limit_min_cu": 1,
      "current_state": "ACTIVE",
      "disabled": false,
      "endpoint_type": "ENDPOINT_TYPE_READ_WRITE",
      "group": {
        "enable_readable_secondaries": false,
        "max": 1,
        "min": 1
      },
      "hosts": {
        "host": "ep-cool-breeze-abc123.database.cloud.databricks.com"
      },
      "settings": {}
    },
    "uid": "ep-cool-breeze-abc123",
    "update_time": "2026-01-15T10:31:00Z"
  }
]
- Common
- All Options
databricks postgres list-databases projects/my-project/branches/production -o json
databricks postgres list-databases \
projects/$PROJECT_ID/branches/$BRANCH_ID \
-o json \
--page-size 100 \
--limit 100 \
--debug \
--target $TARGET \
--profile $DATABRICKS_PROFILE
Options
| Option | Required | Description |
|---|---|---|
| PARENT | yes | Branch resource path: projects/{project_id}/branches/{branch_id} |
| -o json | no | Output as JSON (default: text) |
| --page-size | no | Items per API request (upper bound per page) |
| --limit | no | Maximum total results to return; CLI auto-paginates and stops at this count |
| --debug | no | Enable debug logging |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Example response
[
  {
    "create_time": "2026-01-15T10:30:00Z",
    "name": "projects/my-project/branches/production/databases/db-abc123",
    "parent": "projects/my-project/branches/production",
    "status": {
      "postgres_database": "databricks_postgres",
      "role": "projects/my-project/branches/production/roles/rol-xyz789"
    },
    "update_time": "2026-01-15T10:30:05Z"
  }
]
Key values from the output:
| Value | JSON path | Used for |
|---|---|---|
| Endpoint host | status.hosts.host | PGHOST |
| Endpoint resource path | name | LAKEBASE_ENDPOINT |
| Database resource path | name (from list-databases) | lakebase.postgres.database |
| PostgreSQL database name | status.postgres_database | PGDATABASE |
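Dropped into .env, those values look like this (host and paths taken from the example responses above; yours will differ):
PGHOST=ep-cool-breeze-abc123.database.cloud.databricks.com
PGDATABASE=databricks_postgres
LAKEBASE_ENDPOINT=projects/my-project/branches/production/endpoints/primary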
Connect
The simplest way to connect is with databricks psql:
- Common
- All Options
databricks psql --project my-project
databricks psql \
--project $PROJECT_ID \
--branch $BRANCH_ID \
--endpoint $ENDPOINT_ID \
--autoscaling \
--max-retries 3 \
--debug \
-o json \
--target $TARGET \
--profile $DATABRICKS_PROFILE \
-- -c "SELECT 1"
Options
| Option | Required | Description |
|---|---|---|
| --project | no | Project ID. With a TTY, omit to choose from prompts. In CI or scripts, set explicitly when needed |
| --branch | no | Branch ID (default: auto-select when only one exists) |
| --endpoint | no | Endpoint ID (default: auto-select when only one exists) |
| --autoscaling | no | Only show Lakebase Autoscaling projects |
| --provisioned | no | Only show Lakebase Provisioned instances |
| --max-retries | no | Connection retries, 0 to disable (default: 3) |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
| -- PSQL_ARGS | no | Additional arguments passed through to psql |
Without a TTY (for example in CI), the CLI auto-selects when only one branch or endpoint exists. When multiple exist, specify --project, --branch, and --endpoint explicitly so the command does not block on prompts.
If you don't have a psql client installed, generate a short-lived credential and use it with any PostgreSQL client (DBeaver, pgAdmin, DataGrip, or a language driver):
- Common
- All Options
databricks postgres generate-database-credential \
projects/my-project/branches/production/endpoints/primary
databricks postgres generate-database-credential \
projects/$PROJECT_ID/branches/$BRANCH_ID/endpoints/$ENDPOINT_ID \
--json '{}' \
--debug \
--target $TARGET \
-o json \
--profile $DATABRICKS_PROFILE
Options
| Option | Required | Description |
|---|---|---|
| ENDPOINT | yes | Endpoint resource path: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| --json | no | Inline JSON or @path/to/file.json with request body |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Use the returned token as the password, with your Databricks email as the username and the endpoint host from list-endpoints above.
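The same connection from a language driver looks like this; a minimal node-postgres sketch, with the host and database taken from the example responses above:
import { Pool } from "pg";
const pool = new Pool({
  host: "ep-cool-breeze-abc123.database.cloud.databricks.com", // from list-endpoints
  database: "databricks_postgres", // from list-databases
  user: "you@example.com", // your Databricks email
  password: process.env.PGPASSWORD, // token from generate-database-credential
  ssl: { rejectUnauthorized: true },
});
const { rows } = await pool.query("SELECT current_user");
Remember the token expires after one hour; regenerate it if a new connection is refused.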
Long-running operations
Create, update, and delete commands block until complete by default. Use --no-wait to return immediately and poll status:
databricks postgres create-project my-project \
--json '{"spec": {"display_name": "My Project"}}' \
--no-wait
databricks postgres get-operation projects/my-project/operations/<operation-id>
Declarative Automation Bundles
Declarative Automation Bundles (DABs) let you define Lakebase Postgres infrastructure as code in databricks.yml, versioned alongside your application. A bundle specifies postgres_projects, postgres_branches, and postgres_endpoints under resources.
Example databricks.yml with a project, dev branch, and read-only replica
bundle:
  name: my-lakebase-app

resources:
  postgres_projects:
    my_app:
      project_id: "my-lakebase-app"
      display_name: "My Lakebase Postgres App"
      pg_version: 17
      history_retention_duration: "172800s"
      default_endpoint_settings:
        autoscaling_limit_min_cu: 0.5
        autoscaling_limit_max_cu: 1.0
        suspend_timeout_duration: "300s"
      pg_settings:
        log_min_duration_statement: "1000"

  postgres_branches:
    dev_branch:
      parent: ${resources.postgres_projects.my_app.id}
      branch_id: "dev"
      no_expiry: true
      is_protected: false

  postgres_endpoints:
    read_replica:
      parent: ${resources.postgres_branches.dev_branch.id}
      endpoint_id: "replica"
      endpoint_type: "ENDPOINT_TYPE_READ_ONLY"
      autoscaling_limit_min_cu: 0.5
      autoscaling_limit_max_cu: 0.5
Validate and deploy
databricks bundle validate
databricks bundle deploy
bundle deploy is idempotent. It creates new resources and updates existing ones to match the configuration. Unlike Databricks Jobs or Apps, there is no bundle run step; Lakebase Postgres resources are active once deployed. The Declarative Automation Bundles documentation covers all options.
Update masks
Update commands require an update mask specifying which fields to modify. The --json payload contains the new values. Only masked fields change.
databricks postgres update-branch \
projects/my-project/branches/production \
spec.is_protected \
--json '{"spec": {"is_protected": true}}'
For multiple fields, use a comma-separated update mask (for example, spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu).
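For example, resizing an endpoint's autoscaling range in one call might look like the following; the endpoint path and values are illustrative, and update-endpoint takes the same positional mask as update-branch:
databricks postgres update-endpoint \
  projects/my-project/branches/production/endpoints/primary \
  spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu \
  --json '{"spec": {"autoscaling_limit_min_cu": 0.5, "autoscaling_limit_max_cu": 2.0}}'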
Troubleshooting
For Databricks Apps configuration issues (resources in databricks.yml and app.yaml), Add a Lakebase resource to a Databricks app has the resource and environment variable reference. For connection problems including idle wake-up and endpoint format, Troubleshooting in Connect external apps has fixes.
- permission denied for schema app (deployed app): npm run dev ran before databricks apps deploy, so the schema is owned by your personal credentials and the app's service principal can't access it. The app fails to read or write to the database. To fix without losing data: connect via databricks psql and run ALTER SCHEMA app OWNER TO "<sp-client-id>"; and ALTER TABLE app.<your-table> OWNER TO "<sp-client-id>"; for each table (the client ID is in the app's Authorization tab), then redeploy. To start fresh: databricks psql --project <project-id> -- -c "DROP SCHEMA IF EXISTS app CASCADE;" then databricks apps deploy.
- permission denied for schema app (local dev, collaborator): Only the Lakebase project creator has databricks_superuser access automatically. To grant a teammate local access, the project creator opens Branch Overview in the Lakebase UI, clicks Add role, selects the teammate's identity, and checks databricks_superuser. Postgres password auth is a simpler alternative: set a password in Branch Overview and add PGPASSWORD=<password> to .env.
- Unknown field path in update_mask: 'spec.suspend_timeout_duration': Use spec.suspension as the update mask for all endpoint-level suspension changes with update-endpoint. To disable scale to zero, pass {"spec": {"no_suspension": true}}; to change the timeout, pass {"spec": {"suspend_timeout_duration": "300s"}}. Setting no_suspension: false is not supported.
- Connection refused after a period of inactivity: Lakebase Autoscaling scales to zero when idle. The first connection after inactivity triggers a wake-up and may take a few seconds. If your connection library doesn't retry automatically, add a short retry loop like the sketch below.
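A minimal retry sketch for the wake-up case, assuming the AppKit query API from the top of this page; the attempt count and backoff are arbitrary:
// Retry a query a few times to ride out a scale-to-zero wake-up
async function queryWithRetry<T>(
  sql: string,
  params: unknown[] = [],
  attempts = 3,
): Promise<T[]> {
  for (let i = 0; i < attempts; i++) {
    try {
      const { rows } = await AppKit.lakebase.query<T>(sql, params);
      return rows;
    } catch (err) {
      if (i === attempts - 1) throw err; // out of retries; surface the error
      await new Promise((r) => setTimeout(r, 1000 * (i + 1))); // linear backoff
    }
  }
  return []; // unreachable; satisfies the type checker
}
const items = await queryWithRetry<{ id: number }>("SELECT id FROM app.items");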
AppKit docs
Access the AppKit API reference, component docs, and plugin docs from the terminal:
npx @databricks/appkit docs # browse the documentation index
npx @databricks/appkit docs "lakebase" # view Lakebase Postgres plugin docs
Or view the AppKit Lakebase Postgres plugin reference on this site.
Where to next
Templates cover common Lakebase Postgres patterns. Browse them to find a starting point, or copy one into your coding agent to scaffold a working app.