
Lakebase Postgres configuration

AppKit connects to Lakebase Postgres using a postgres resource declared in databricks.yml and the LAKEBASE_ENDPOINT environment variable set in app.yaml.

Connection values

Databricks Apps injects most connection values at startup. LAKEBASE_ENDPOINT is the exception. It is declared in app.yaml via valueFrom: postgres and resolved at startup from the postgres resource:

```yaml
env:
  - name: LAKEBASE_ENDPOINT
    valueFrom: postgres
```
| Variable | Description | Source |
| --- | --- | --- |
| LAKEBASE_ENDPOINT | Endpoint resource path (projects/.../branches/.../endpoints/...) | Set via valueFrom: postgres in app.yaml |
| PGHOST | Lakebase Postgres host | Auto-injected by the platform |
| PGDATABASE | PostgreSQL database name | Auto-injected by the platform |
| PGSSLMODE | TLS mode (require) | Auto-injected by the platform |
| PGPORT | Port (5432) | Auto-injected by the platform |

For local development, these values come from your .env file. Local setup explains how to populate them.
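Because the same variables are present in both environments, app code can read them uniformly. As a minimal sketch, a helper like the following could assemble a connection config from those variables; buildPgConfig and the sample values are illustrative, not part of the AppKit API:

```typescript
// Sketch: assemble a Postgres connection config from the injected
// environment variables. buildPgConfig is a hypothetical helper.
function buildPgConfig(env: Record<string, string | undefined>) {
  const required = ["PGHOST", "PGDATABASE", "PGPORT", "PGSSLMODE"];
  for (const name of required) {
    if (!env[name]) throw new Error(`Missing connection variable: ${name}`);
  }
  return {
    host: env.PGHOST!,
    database: env.PGDATABASE!,
    port: Number(env.PGPORT),
    ssl: env.PGSSLMODE === "require",
  };
}

// In an app, pass process.env; placeholder values shown here.
const config = buildPgConfig({
  PGHOST: "my-instance.example.com",
  PGDATABASE: "databricks_postgres",
  PGPORT: "5432",
  PGSSLMODE: "require",
});
```

Failing fast on a missing variable surfaces misconfigured .env files at startup rather than as an opaque connection error later.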

Plugin manifest

When you register the lakebase() plugin in createApp, AppKit generates appkit.plugins.json, which declares the plugin's resource requirements. Regenerate it after adding or changing plugins:

```shell
npx @databricks/appkit plugin sync --write
```

This command also runs automatically during npm run dev and npm run build. Commit appkit.plugins.json alongside your code.

The AppKit configuration reference covers app.yaml plugin resource bindings in detail.

Resource hierarchy

Lakebase Postgres organizes resources as projects containing branches, with branches containing computes and databases.

```
projects/{project_id}
└── branches/{branch_id}
    ├── endpoints/{endpoint_id}  (compute)
    └── databases/{database_id}
```
  • Project: top-level container. Created with databricks postgres create-project.
  • Branch: isolated database environment. New projects get a default production branch with a databricks_postgres database.
  • Compute: provides processing power and memory for a branch. Each branch gets a primary read-write compute created automatically. Read-only replicas can be added for read scaling.
  • Database: a PostgreSQL database within a branch. List with databricks postgres list-databases <branch>.

The CLI and API refer to computes as endpoints (ENDPOINT_TYPE_READ_WRITE for read-write, ENDPOINT_TYPE_READ_ONLY for read replicas). Commands and resource paths in this doc use that term.

The postgres CLI reference covers all databricks postgres commands.

Branching

Branches create isolated database environments. When you branch, Lakebase Postgres copies the source branch's schema and data via copy-on-write. New branches are instant and you only pay for data you change.

Each new branch gets a primary read-write endpoint at projects/{project_id}/branches/{branch_id}/endpoints/primary, inheriting the project's default_endpoint_settings. Use create-endpoint to add read replicas (ENDPOINT_TYPE_READ_ONLY).

Branches require an expiration policy (ttl, expire_time, or no_expiry: true). Branch expiration details all options. For CLI commands, Feature branches has examples.

note

Project, branch, endpoint, and database IDs must be 1-63 characters, start with a lowercase letter, and contain only lowercase letters, numbers, and hyphens.
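That naming rule translates directly into a regular expression. The following validator is a sketch of the rule as stated above; isValidResourceId is illustrative, not a platform API:

```typescript
// Sketch of the ID rule: 1-63 characters, starting with a lowercase
// letter, containing only lowercase letters, digits, and hyphens.
// isValidResourceId is a hypothetical client-side check.
function isValidResourceId(id: string): boolean {
  return /^[a-z][a-z0-9-]{0,62}$/.test(id);
}

isValidResourceId("my-project"); // true
isValidResourceId("My-Project"); // false: uppercase letter
isValidResourceId("1-project");  // false: must start with a letter
```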

Autoscaling

Computes autoscale between a configured min and max compute unit (CU) range. Default settings by branch type when created via API or CLI:

  • Production branch: 1 CU (min and max), scale to zero disabled.
  • Child branches: 1 CU (min and max), scale to zero enabled (5-minute default).

The Lakebase Postgres UI sets higher defaults: 8–16 CU for production and 2–4 CU for child branches.

Autoscaling is supported from 0.5 to 32 CU; computes from 36 to 112 CU are fixed size. The difference between max and min cannot exceed 16 CU (max - min <= 16).

Compute units (CU) are the capacity measure for Lakebase Postgres. Each CU provides approximately 2 GB of RAM.

Scaling within the configured range happens without connection interruptions. Changing the min/max configuration may cause a brief interruption.
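The constraints above (0.5–32 CU for autoscaling, a span of at most 16 CU) can be checked client-side before calling update-endpoint. This validateAutoscalingRange function is a hypothetical sketch of those rules, not a platform API:

```typescript
// Sketch of the autoscaling constraints: range within 0.5-32 CU,
// max - min <= 16. validateAutoscalingRange is illustrative only.
function validateAutoscalingRange(minCu: number, maxCu: number): void {
  if (minCu < 0.5 || maxCu > 32) {
    throw new Error("Autoscaling is supported from 0.5 to 32 CU");
  }
  if (minCu > maxCu) {
    throw new Error("min CU cannot exceed max CU");
  }
  if (maxCu - minCu > 16) {
    throw new Error("max - min cannot exceed 16 CU");
  }
}

validateAutoscalingRange(1, 8); // ok
// validateAutoscalingRange(8, 32) would throw: the span is 24 CU
```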

Configure autoscaling
```shell
databricks postgres update-endpoint \
  projects/my-project/branches/production/endpoints/primary \
  "spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu" \
  --json '{"spec": {"autoscaling_limit_min_cu": 1.0, "autoscaling_limit_max_cu": 8.0}}'
```
| Option | Required | Description |
| --- | --- | --- |
| NAME | yes | Endpoint resource path: projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| UPDATE_MASK | yes | Comma-separated fields (for example, spec.autoscaling_limit_min_cu,spec.autoscaling_limit_max_cu) |
| --json | yes | JSON with new field values |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |

Scale to zero

Scale to zero suspends idle computes to eliminate costs. When a new query arrives, the compute resumes automatically (typically a few hundred milliseconds).

| Setting | Value |
| --- | --- |
| Default timeout | 5 minutes |
| Minimum timeout | 60 seconds |

Apps connecting to a scaled-down compute will see a brief pause on the first query. Implement connection retry logic in your app.
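A minimal sketch of such retry logic, assuming a generic async query function; the attempt count and backoff delays are illustrative and should be tuned for your app:

```typescript
// Sketch: retry the first query with backoff so a scaled-to-zero
// compute has time to resume. withRetry is a hypothetical helper.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before retrying; resume typically completes in
      // a few hundred milliseconds.
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastError;
}

// Usage: wrap queries that may hit a suspended compute, e.g.
// await withRetry(() => pool.query("SELECT 1"));
```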

When a compute resumes, session context resets (temporary tables, prepared statements, session settings, connection pools).

Configure scale to zero

Project defaults (new branches inherit these settings):

```shell
databricks postgres update-project \
  projects/my-project \
  "spec.default_endpoint_settings" \
  --json '{"spec": {"default_endpoint_settings": {"suspend_timeout_duration": "300s"}}}'
```
| Option | Required | Description |
| --- | --- | --- |
| NAME | yes | Project resource path: projects/{project_id} |
| UPDATE_MASK | yes | Fields to update (for example, spec.default_endpoint_settings) |
| --json | yes | JSON with new field values |
| --no-wait | no | Return immediately with operation details |
| --timeout | no | Max time to wait for completion |
| --debug | no | Enable debug logging |
| -o json | no | Output as JSON (default: text) |
| --target | no | Bundle target to use (if applicable) |
| --profile | no | Databricks CLI profile name |

Per-endpoint (change or disable on an existing endpoint):

Use spec.suspension as the update mask for all suspension changes on update-endpoint.

Change timeout

```shell
databricks postgres update-endpoint \
  projects/my-project/branches/production/endpoints/primary \
  "spec.suspension" \
  --json '{"spec": {"suspend_timeout_duration": "300s"}}'
```
Disable scale to zero

```shell
databricks postgres update-endpoint \
  projects/my-project/branches/production/endpoints/primary \
  "spec.suspension" \
  --json '{"spec": {"no_suspension": true}}'
```
note

Setting no_suspension: false is not supported and returns an error. To re-enable scale to zero after disabling it, set suspend_timeout_duration instead.

Where to next

See Lakebase Postgres development for local setup, feature branches, and the full plugin API, or browse the templates catalog for complete patterns.