Two ways to use this template
1. Copy the prompt: click "Copy prompt" below and paste it into Cursor, Claude Code, Codex, or any coding agent. The agent builds the app, asking questions along the way so the result is exactly what you want.
2. Go manual: follow the steps below to set things up yourself, at your own pace.
Agentic Support Console
End-to-end AI-powered support console combining Lakebase, Lakehouse Sync, a medallion pipeline, an LLM agent job, reverse sync, and a Databricks App with Genie analytics.

Includes a working starter app
Real, runnable code lives on GitHub. When you copy the prompt above, your coding agent clones it as the starting point and adapts it to your data and use case.
This template brings together the full Databricks developer stack into a single operational data application: an AI-powered support console where every customer message is automatically triaged by an LLM, and support agents review, approve, or override the suggestion from a purpose-built internal tool.
Data Flow
Customer interactions flow from your application's OLTP database (Lakebase Postgres) through the lakehouse via CDC, get enriched by an AI agent, and are served back to the support console through reverse sync:
- OLTP writes land in Lakebase Postgres (users, orders, support cases, messages).
- Lakehouse Sync replicates every change into Unity Catalog as CDC history tables (bronze layer).
- A Lakeflow Declarative Pipeline transforms CDC history into current-state silver tables and analytical gold materialized views (daily revenue, support overview, user profiles, case context).
- A Lakeflow Job runs every minute, finds unanswered messages, builds rich context from gold tables, calls an LLM via AI Gateway, and merges suggested responses into a Delta table.
- Sync Tables (reverse sync) replicate gold tables back into Lakebase for sub-10ms reads.
- The Support Console (Databricks App) reads from both OLTP and synced gold tables to present cases, AI suggestions, and analytics.
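The bronze-to-silver step in the flow above collapses CDC history into current-state rows: for each primary key, keep only the latest change and drop deletes. A minimal sketch of that logic in plain Python, assuming illustrative column names (`id`, `_commit_version`, `_change_type`) rather than the exact schema Lakehouse Sync produces:

```python
# Sketch of the bronze -> silver step: collapse a CDC history table into
# current-state rows. Column names are illustrative, not the real schema.

def current_state(cdc_rows):
    """Keep only the latest change per primary key; drop deleted rows."""
    latest = {}
    for row in cdc_rows:
        key = row["id"]
        if key not in latest or row["_commit_version"] > latest[key]["_commit_version"]:
            latest[key] = row
    return [r for r in latest.values() if r["_change_type"] != "delete"]

history = [
    {"id": 1, "status": "open",     "_commit_version": 1, "_change_type": "insert"},
    {"id": 1, "status": "resolved", "_commit_version": 3, "_change_type": "update"},
    {"id": 2, "status": "open",     "_commit_version": 2, "_change_type": "insert"},
    {"id": 2, "status": "open",     "_commit_version": 4, "_change_type": "delete"},
]
print(current_state(history))  # only case 1 survives, with status "resolved"
```

In the actual pipeline this dedup-and-filter happens declaratively in Lakeflow over the CDC history tables; the snippet just makes the semantics concrete.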
What to Adapt
Provisioning (manual steps and SQL), seeding, pipeline deployment, reverse sync, and app deployment are documented in the repository's template/README.md alongside the code.
To make this template your own:
- Catalog: Set the `catalog` variable in each pipeline's `databricks.yml` to your Unity Catalog catalog name.
- Lakebase: Point the app's `databricks.yml` at your own Lakebase project, branch, and database.
- Tables: The seed script creates the OLTP schema. After seeding, configure Lakehouse Sync to replicate your `public` schema tables.
- Sync Tables: Manually create the four reverse sync configurations (see the README for the exact table mappings).
- AI Gateway: Set the `endpoint` variable to your preferred model serving endpoint.
- Genie Space: Create a Genie space over your gold tables and set the `genie_space_id` in the app bundle.
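The catalog and endpoint variables above live in each bundle's `databricks.yml` and can be overridden per target. A hedged sketch of what that fragment might look like (variable names match the list above, but the defaults and target names are placeholders, not the repository's actual values):

```yaml
# Illustrative databricks.yml fragment; defaults and target names are placeholders.
variables:
  catalog:
    description: Unity Catalog catalog for the pipeline tables
    default: main
  endpoint:
    description: Model serving endpoint used by the agent job
    default: databricks-claude-sonnet-4

targets:
  dev:
    variables:
      catalog: dev_catalog
```

Resources in the bundle then reference these as `${var.catalog}` and `${var.endpoint}`, so switching catalogs or models is a one-line change per target.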
Built on these templates
This example's codebase and the agent prompt above both build on top of the templates below. Open one to dive into a specific technique on its own or apply it to a different project.
End-to-end setup for analyzing operational database data in the lakehouse: Unity Catalog with external storage, Lakebase provisioning, Lakehouse Sync CDC replication, and a medallion architecture pipeline with silver and gold layers.
Wire up a Databricks App with Lakebase for persistent data storage. Includes schema setup and full CRUD API routes.
Embed a Databricks AI/BI Genie chat interface so users can explore data through natural language. Configure a Genie space, wire up server and client plugins, declare app resources, and deploy.
Query AI Gateway endpoints for production-ready access to foundation models with built-in governance.
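Databricks serving endpoints expose an OpenAI-compatible API under `/serving-endpoints`, so the agent job's LLM call can be sketched with the standard `openai` client. The endpoint name, environment variables, and prompt builder below are illustrative assumptions, not the template's exact code:

```python
# Hedged sketch of calling a model serving endpoint through the
# OpenAI-compatible API that Databricks exposes at /serving-endpoints.
import os

def build_triage_messages(case_subject, customer_message):
    """Assemble the chat payload the agent job would send for one case."""
    return [
        {"role": "system",
         "content": "You are a support agent. Draft a concise, helpful reply."},
        {"role": "user",
         "content": f"Case: {case_subject}\nCustomer wrote: {customer_message}"},
    ]

def suggest_response(case_subject, customer_message):
    # Imported lazily so the module loads without the openai package installed.
    from openai import OpenAI
    client = OpenAI(
        api_key=os.environ["DATABRICKS_TOKEN"],
        base_url=f"{os.environ['DATABRICKS_HOST']}/serving-endpoints",
    )
    resp = client.chat.completions.create(
        model="databricks-claude-sonnet-4",  # assumed endpoint name
        messages=build_triage_messages(case_subject, customer_message),
    )
    return resp.choices[0].message.content
```

In the template the equivalent call runs inside the every-minute Lakeflow Job, and its output is merged into a Delta table rather than returned directly.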