Quickstart

Template path

Browse the templates below, pick one for your use case, and copy it into your AI coding assistant. Each includes the Create a Lakebase Instance resource, which walks through project creation and connection value collection.

Template                        Best for
App with Lakebase               CRUD apps with persistent storage
AI Chat App                     Conversational AI with chat history
Operational Data Analytics      Bidirectional sync between Lakebase Postgres and Unity Catalog

Customize your app

After deploying a Lakebase Postgres-backed app, consider the following customizations:

  • Add tables: Follow the Lakebase Data Persistence template to define schemas, generate types, and create CRUD routes.
  • Add agent memory: Use the Lakebase Agent Memory template to persist your agent's chat conversations.
  • Use feature branches: Create isolated branches for development and testing. The Development: Feature branches section has CLI commands.
  • Sync data to/from Unity Catalog: Use Lakehouse Sync (CDC) to replicate Lakebase Postgres tables into Delta, or Sync Tables to serve Unity Catalog data through Lakebase Postgres.
  • Deploy outside Databricks: Use the Lakebase Off-Platform template for apps hosted on AWS, Vercel, Netlify, and others.

Manual path

When you scaffold without a template, databricks apps init generates a working AppKit project.

Interactive (recommended for local development): run it without flags, and the CLI prompts for a project name and feature (plugin) selection:

databricks apps init
Project name: my-app
┃ Select features
┃ [ ] Analytics Plugin
┃ [ ] Files Plugin
┃ [ ] Genie Plugin
┃ [x] Lakebase
┃ [ ] Model Serving Plugin

Select Lakebase, and the CLI walks you through choosing an existing project, branch, and database.

Non-interactive (for scripts and CI): pass --name and the required --set fields for each selected plugin feature. The database value must be the full resource path, retrieved via databricks postgres list-databases projects/<project-id>/branches/<branch-id> -o json (use the name field):

databricks apps init --name my-app --features lakebase \
--set lakebase.postgres.branch=projects/<project-id>/branches/<branch-id> \
--set lakebase.postgres.database=projects/<project-id>/branches/<branch-id>/databases/<database-id>
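The `name` field can be pulled out of the `list-databases` JSON with `jq`. A minimal sketch against a sample payload — the exact output shape here is an assumption, and the IDs are made up; the doc only guarantees a `name` field holding the full resource path:

```shell
# Sample (assumed) output shape from:
#   databricks postgres list-databases projects/<project-id>/branches/<branch-id> -o json
sample='[{"name":"projects/p-123/branches/b-456/databases/appdb"}]'

# Extract the full resource path from the `name` field:
database=$(echo "$sample" | jq -r '.[0].name')
echo "$database"
```

The extracted value is what `--set lakebase.postgres.database=` expects.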

Then deploy first so the schema is created, and afterwards run the app locally:

cd my-app
databricks apps deploy
tip

Run databricks apps deploy before npm run dev. Deploying provisions a managed identity (the app's service principal) that creates the database schema on first startup. If you run npm run dev first, the schema is created under your personal credentials, and when you later deploy, the app's managed identity can't access it. Local setup explains this further.

npm install && npm run dev

Where to next

For local development workflow, feature branches, and the full plugin API, see Lakebase Postgres development.