Lakebase Postgres configuration
AppKit connects to Lakebase Postgres using a postgres resource declared in databricks.yml and LAKEBASE_ENDPOINT set in app.yaml.
Connection values
Databricks Apps injects most connection values at startup. LAKEBASE_ENDPOINT is the exception. It is declared in app.yaml via valueFrom: postgres and resolved at startup from the postgres resource:
env:
  - name: LAKEBASE_ENDPOINT
    valueFrom: postgres
| Variable | Description | Source |
|---|---|---|
| LAKEBASE_ENDPOINT | Endpoint resource path (projects/.../branches/.../endpoints/...) | Set via valueFrom: postgres in app.yaml |
| PGHOST | Lakebase Postgres host | Auto-injected by the platform |
| PGDATABASE | PostgreSQL database name | Auto-injected by the platform |
| PGSSLMODE | TLS mode (require) | Auto-injected by the platform |
| PGPORT | Port (5432) | Auto-injected by the platform |
For local development, these values come from your .env file. Local setup explains how to populate them.
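To sanity-check these values during local development, a small standalone script like the one below can catch a missing or misnamed entry in .env. It is purely illustrative; the lakebase() plugin reads these variables for you at runtime.
// Illustrative local-dev check for the documented connection variables.
// The lakebase() plugin consumes these values itself; this only validates them.
const required = ["LAKEBASE_ENDPOINT", "PGHOST", "PGDATABASE", "PGPORT", "PGSSLMODE"];
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing connection variables: ${missing.join(", ")}`);
  process.exit(1);
}
// LAKEBASE_ENDPOINT should be a full endpoint resource path.
const endpointPattern = /^projects\/.+\/branches\/.+\/endpoints\/.+$/;
if (!endpointPattern.test(process.env.LAKEBASE_ENDPOINT ?? "")) {
  console.error("LAKEBASE_ENDPOINT is not an endpoint resource path");
  process.exit(1);
}
console.log("Lakebase connection variables look good");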
Plugin manifest
When you register the lakebase() plugin in createApp, AppKit generates appkit.plugins.json declaring the plugin's resource requirements. Run npx @databricks/appkit plugin sync --write to regenerate it after adding or changing plugins:
npx @databricks/appkit plugin sync --write
The sync also runs automatically during npm run dev and npm run build. Commit the generated manifest alongside your code.
The AppKit configuration reference covers app.yaml plugin resource bindings in detail.
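For orientation, plugin registration might look roughly like the sketch below. The import path and the plugins option are assumptions, so treat the configuration reference above as authoritative.
// Sketch only: registering the Lakebase plugin with AppKit.
// The import path and option names are assumptions; see the
// AppKit configuration reference for the actual API.
import { createApp, lakebase } from "@databricks/appkit";

const app = createApp({
  plugins: [lakebase()],
});

export default app;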
Resource hierarchy
Lakebase Postgres organizes resources as projects containing branches, with branches containing computes and databases.
projects/{project_id}
└── branches/{branch_id}
├── endpoints/{endpoint_id} (compute)
└── databases/{database_id}
- Project: top-level container. Created with databricks postgres create-project.
- Branch: isolated database environment. New projects get a default production branch with a databricks_postgres database.
- Compute: provides processing power and memory for a branch. Each branch gets a primary read-write compute created automatically. Read-only replicas can be added for read scaling.
- Database: a PostgreSQL database within a branch. List with databricks postgres list-databases <branch>.
The CLI and API refer to computes as endpoints (ENDPOINT_TYPE_READ_WRITE for read-write, ENDPOINT_TYPE_READ_ONLY for read replicas). Commands and resource paths in this doc use that term.
The postgres CLI reference covers all databricks postgres commands.
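The CLI commands in the rest of this doc take full resource paths as arguments. A small helper like the following (illustrative only, not part of AppKit) can assemble them from individual IDs:
// Illustrative helpers for building the resource paths used by the
// databricks postgres CLI and API.
function projectPath(projectId: string): string {
  return `projects/${projectId}`;
}
function branchPath(projectId: string, branchId: string): string {
  return `${projectPath(projectId)}/branches/${branchId}`;
}
function endpointPath(projectId: string, branchId: string, endpointId: string): string {
  return `${branchPath(projectId, branchId)}/endpoints/${endpointId}`;
}
// Prints: projects/my-project/branches/production/endpoints/primary
console.log(endpointPath("my-project", "production", "primary"));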
Branching
Branches create isolated database environments. When you branch, Lakebase Postgres copies the source branch's schema and data via copy-on-write, so creating a branch is instant and you pay only for the data you change.
Each new branch gets a primary read-write endpoint at projects/{project_id}/branches/{branch_id}/endpoints/primary, inheriting the project's default_endpoint_settings. Use create-endpoint to add read replicas (ENDPOINT_TYPE_READ_ONLY).
Branches require an expiration policy (ttl, expire_time, or no_expiry: true). Branch expiration details all options. For CLI commands, Feature branches has examples.
Project, branch, endpoint, and database IDs must be 1-63 characters, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.
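These rules translate directly into a regular expression, so you can validate IDs before creating resources; the helper below is a sketch, not part of any Databricks API.
// Checks the documented naming rules: 1-63 characters, starting with a
// lowercase letter, containing only lowercase letters, numbers, and hyphens.
function isValidLakebaseId(id: string): boolean {
  return /^[a-z][a-z0-9-]{0,62}$/.test(id);
}
console.log(isValidLakebaseId("my-feature-branch")); // true
console.log(isValidLakebaseId("My_Branch"));         // false: uppercase and underscore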
Autoscaling
Computes autoscale within a configured minimum and maximum compute unit (CU) range. Default settings by branch type when created via the API or CLI:
- Production branch: 1 CU (min and max), scale to zero disabled.
- Child branches: 1 CU (min and max), scale to zero enabled (5-minute default).
The Lakebase Postgres UI sets higher defaults: 8–16 CU for production and 2–4 CU for child branches.
Autoscaling is supported from 0.5 to 32 CU; computes from 36 to 112 CU are fixed size. The difference between max and min cannot exceed 16 CU (max - min <= 16).
Compute units (CU) are the capacity measure for Lakebase Postgres. Each CU provides approximately 2 GB of RAM.
Scaling within the configured range happens without connection interruptions. Changing the min/max configuration may cause a brief interruption.
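If you script autoscaling changes, the constraints above can be checked before calling update-endpoint. The sketch below is illustrative; the service enforces the same limits server-side.
// Validates an autoscaling range against the documented constraints:
// autoscaling from 0.5 to 32 CU, and max - min must not exceed 16 CU.
function validateAutoscalingRange(minCu: number, maxCu: number): string[] {
  const errors: string[] = [];
  if (minCu < 0.5 || maxCu > 32) errors.push("autoscaling is supported from 0.5 to 32 CU");
  if (minCu > maxCu) errors.push("min CU cannot exceed max CU");
  if (maxCu - minCu > 16) errors.push("max - min cannot exceed 16 CU");
  return errors;
}
console.log(validateAutoscalingRange(1.0, 8.0));  // []
console.log(validateAutoscalingRange(0.5, 32.0)); // ["max - min cannot exceed 16 CU"]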
Configure autoscaling
Common:
databricks postgres update-endpoint \
projects/my-project/branches/production/endpoints/primary \
"spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu" \
--json '{"spec": {"autoscaling_limit_min_cu": 1.0, "autoscaling_limit_max_cu": 8.0}}'
All Options:

databricks postgres update-endpoint \
projects/$PROJECT_ID/branches/$BRANCH_ID/endpoints/$ENDPOINT_ID \
$UPDATE_MASK \
--json '{"spec": {
"autoscaling_limit_min_cu": 1.0,
"autoscaling_limit_max_cu": 8.0
}}' \
--no-wait \
--timeout 10m \
--debug \
-o json \
--target $TARGET \
--profile $DATABRICKS_PROFILE
| Option | Required | Description |
|---|---|---|
| NAME | yes | Endpoint resource path: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| UPDATE_MASK | yes | Comma-separated fields (for example, spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu) |
| --json | yes | JSON with new field values |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Scale to zero
Scale to zero suspends idle computes to eliminate costs. When a new query arrives, the compute resumes automatically (typically a few hundred milliseconds).
| Setting | Default |
|---|---|
| Timeout | 5 minutes |
| Minimum timeout | 60 seconds |
Apps connecting to a scaled-down compute will see a brief pause on the first query. Implement connection retry logic in your app.
When a compute resumes, session context resets (temporary tables, prepared statements, session settings, connection pools).
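A simple retry wrapper is usually enough to absorb the resume pause. The sketch below is illustrative (attempt counts, delays, and the wrapped query function are placeholders); because session context resets on resume, avoid relying on temporary tables or session settings across the pause.
// Illustrative retry wrapper for the first query after a compute resumes
// from scale to zero. Adjust attempts and delays for your app.
async function withRetry<T>(run: () => Promise<T>, attempts = 3, delayMs = 500): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await run();
    } catch (error) {
      lastError = error;
      // The compute typically resumes within a few hundred milliseconds.
      await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
    }
  }
  throw lastError;
}
// Usage (fetchOrders is a hypothetical query function in your app):
// const orders = await withRetry(() => fetchOrders());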
Configure scale to zero
Project defaults (new branches inherit these settings):
Common:
databricks postgres update-project \
projects/my-project \
"spec.default_endpoint_settings" \
--json '{"spec": {"default_endpoint_settings": {"suspend_timeout_duration": "300s"}}}'
All Options:

databricks postgres update-project \
projects/$PROJECT_ID \
$UPDATE_MASK \
--json '{
"spec": {
"default_endpoint_settings": {
"autoscaling_limit_min_cu": 0.5,
"autoscaling_limit_max_cu": 1.0,
"suspend_timeout_duration": "300s"
}
}
}' \
--no-wait \
--timeout 10m \
--debug \
-o json \
--target $TARGET \
--profile $DATABRICKS_PROFILE
| Option | Required | Description |
|---|---|---|
| NAME | yes | Project resource path: projects/{project_id} |
| UPDATE_MASK | yes | Fields to update (for example, spec.default_endpoint_settings) |
| --json | yes | JSON with new field values |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |
Per-endpoint (change or disable on an existing endpoint):
Use spec.suspension as the update mask for all suspension changes on update-endpoint.
Change the timeout:

databricks postgres update-endpoint \
projects/my-project/branches/production/endpoints/primary \
"spec.suspension" \
--json '{"spec": {"suspend_timeout_duration": "300s"}}'
Disable scale to zero:

databricks postgres update-endpoint \
projects/my-project/branches/production/endpoints/primary \
"spec.suspension" \
--json '{"spec": {"no_suspension": true}}'
Setting no_suspension: false is not supported and returns an error. To re-enable scale to zero after disabling it, set suspend_timeout_duration instead.
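The suspend_timeout_duration values in these commands are seconds strings (for example, 300s for the 5-minute default). A small conversion helper, shown here as an illustrative sketch, keeps the 60-second minimum in view:
// Converts a timeout in minutes to the seconds string used by
// suspend_timeout_duration (5 minutes -> "300s"). Enforces the
// documented 60-second minimum.
function suspendTimeout(minutes: number): string {
  const seconds = Math.round(minutes * 60);
  if (seconds < 60) {
    throw new Error("Minimum scale-to-zero timeout is 60 seconds");
  }
  return `${seconds}s`;
}
console.log(suspendTimeout(5)); // "300s"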
Where to next
See Lakebase Postgres development for local setup, feature branches, and the full plugin API, or browse the templates catalog for complete patterns.