Two ways to use this template
- With a coding agent: click "Copy prompt" below and paste it into Cursor, Claude Code, Codex, or any coding agent. The agent builds the app and asks questions along the way so the result is exactly what you want.
- Manually: follow the steps below to set things up at your own pace.
Lakebase Agent Memory
Persist your AI agent's chat sessions and messages in Lakebase so users can resume conversations and your agent can reason over prior turns across deploys.
Prerequisites
Verify these Databricks workspace features are enabled before starting. If any check fails, ask your workspace admin to enable the feature.
- Databricks CLI authenticated. Run `databricks auth profiles` and confirm at least one profile shows `Valid: YES`. If none do, authenticate with `databricks auth login --host <workspace-url> --profile <PROFILE>`.
- Lakebase Postgres available. Run `databricks postgres list-projects --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). A `not enabled` error means Lakebase is not available to this identity in this workspace.
- Databricks Apps enabled. Run `databricks apps list --profile <PROFILE>` and confirm the command succeeds (an empty list is fine). The chat persistence layer runs inside an AppKit app deployed to Databricks Apps.
- A scaffolded AppKit app with Lakebase wired up. Complete the Create a Lakebase Instance and Lakebase Data Persistence templates first. This template adds chat tables on top of that setup.
The schema is a simplified, production-shaped relational layout (chats plus messages) wired to Databricks AppKit + Lakebase. Once it's in place, every chat turn — user input, assistant reply, tool call — is durably persisted in managed Postgres next to the rest of your operational data.
This template assumes you have already completed the Create a Lakebase Instance and Lakebase Data Persistence templates (Lakebase project creation, scaffolding, environment variables, `databricks.yml` config, and initial deploy).
1. Create chat tables
Create two tables in a `chat` schema:
- `chat.chats`: one row per chat session
- `chat.messages`: one row per message
```sql
CREATE SCHEMA IF NOT EXISTS chat;

CREATE TABLE IF NOT EXISTS chat.chats (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id TEXT NOT NULL,
  title TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS chat.messages (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  chat_id UUID NOT NULL REFERENCES chat.chats(id) ON DELETE CASCADE,
  role TEXT NOT NULL CHECK (role IN ('system', 'user', 'assistant', 'tool')),
  content TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS idx_messages_chat_id_created_at
  ON chat.messages(chat_id, created_at);
```
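The bootstrap in the next step calls a `setupChatTables` helper. A minimal sketch, assuming `appkit.lakebase.query(sql, params?)` is the same call the persistence helpers later in this template use — `IF NOT EXISTS` makes it safe to re-run on every deploy:

```typescript
// server/lib/chat-store.ts (setup portion). The AppKitWithLakebase shape is
// sketched here to keep the snippet self-contained; use the type your
// scaffold already exposes for the Lakebase plugin.
export interface AppKitWithLakebase {
  lakebase: {
    query(sql: string, params?: unknown[]): Promise<{ rows: any[] }>;
  };
}

// DDL from step 1, one statement per entry since we run them individually.
const CHAT_DDL: string[] = [
  `CREATE SCHEMA IF NOT EXISTS chat`,
  `CREATE TABLE IF NOT EXISTS chat.chats (
     id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
     user_id TEXT NOT NULL,
     title TEXT NOT NULL,
     created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
     updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
   )`,
  `CREATE TABLE IF NOT EXISTS chat.messages (
     id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
     chat_id UUID NOT NULL REFERENCES chat.chats(id) ON DELETE CASCADE,
     role TEXT NOT NULL CHECK (role IN ('system', 'user', 'assistant', 'tool')),
     content TEXT NOT NULL,
     created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
   )`,
  `CREATE INDEX IF NOT EXISTS idx_messages_chat_id_created_at
     ON chat.messages(chat_id, created_at)`,
];

// Run each statement in order; because every statement is idempotent,
// a failed deploy can simply be retried.
export async function setupChatTables(appkit: AppKitWithLakebase): Promise<void> {
  for (const statement of CHAT_DDL) {
    await appkit.lakebase.query(statement);
  }
}
```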
2. Run setup from your server bootstrap
In `server/server.ts`, run schema setup inside `onPluginsReady` so it completes before AppKit starts the HTTP server:

```typescript
import { createApp, server, lakebase } from "@databricks/appkit";
import { setupChatTables } from "./lib/chat-store";

await createApp({
  plugins: [server(), lakebase()],
  async onPluginsReady(appkit) {
    await setupChatTables(appkit);
  },
});
```
3. Add persistence helpers
Create `server/lib/chat-store.ts` and use parameterized queries.

Getting `userId`: in deployed Databricks Apps, read `req.header("x-forwarded-email")` from the request headers. For local development, use a hardcoded test user ID.
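That lookup is worth wrapping in one helper so every route resolves the user the same way. A sketch — the `req.header` accessor follows the Express-style request object, and `LOCAL_DEV_USER` is an arbitrary placeholder, not a Databricks convention:

```typescript
// Resolve the current user identity. Databricks Apps injects the
// authenticated user's email as the x-forwarded-email request header;
// outside Databricks (local dev) fall back to a fixed test user.
const LOCAL_DEV_USER = "dev-user@example.com"; // hypothetical placeholder

type HeaderReader = { header(name: string): string | undefined };

export function getUserId(req: HeaderReader): string {
  return req.header("x-forwarded-email") ?? LOCAL_DEV_USER;
}
```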
```typescript
export async function createChat(
  appkit: AppKitWithLakebase,
  input: { userId: string; title: string },
) {
  const result = await appkit.lakebase.query(
    `INSERT INTO chat.chats (user_id, title)
     VALUES ($1, $2)
     RETURNING id, user_id, title, created_at, updated_at`,
    [input.userId, input.title],
  );
  return result.rows[0];
}

export async function appendMessage(
  appkit: AppKitWithLakebase,
  input: { chatId: string; role: string; content: string },
) {
  const result = await appkit.lakebase.query(
    `INSERT INTO chat.messages (chat_id, role, content)
     VALUES ($1, $2, $3)
     RETURNING id, chat_id, role, content, created_at`,
    [input.chatId, input.role, input.content],
  );
  return result.rows[0];
}
```
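The history endpoints in step 5 need read and delete helpers too. A sketch following the same parameterized-query pattern; the `AppKitWithLakebase` shape is declared inline so the snippet stands alone, but in your project reuse the type from your scaffold:

```typescript
// Assumed shape of the Lakebase plugin handle (matches the helpers above).
interface AppKitWithLakebase {
  lakebase: {
    query(sql: string, params?: unknown[]): Promise<{ rows: any[] }>;
  };
}

// Chats for one user, most recently active first.
export async function listChats(appkit: AppKitWithLakebase, userId: string) {
  const result = await appkit.lakebase.query(
    `SELECT id, title, created_at, updated_at
       FROM chat.chats
      WHERE user_id = $1
      ORDER BY updated_at DESC`,
    [userId],
  );
  return result.rows;
}

// Full ordered message history for one chat.
export async function listMessages(appkit: AppKitWithLakebase, chatId: string) {
  const result = await appkit.lakebase.query(
    `SELECT id, role, content, created_at
       FROM chat.messages
      WHERE chat_id = $1
      ORDER BY created_at ASC`,
    [chatId],
  );
  return result.rows;
}

// ON DELETE CASCADE on chat.messages removes the chat's messages too.
export async function deleteChat(appkit: AppKitWithLakebase, chatId: string) {
  await appkit.lakebase.query(`DELETE FROM chat.chats WHERE id = $1`, [chatId]);
}
```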
4. Persist in the /api/chat flow
In your chat route:
- create (or load) a chat row
- save incoming user message
- stream assistant response
- save the final assistant response after stream completion
Use an explicit `chatId` on the client and pass it in each request body.
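The four steps above can be factored into one small orchestrator so the persistence order is explicit and testable. A sketch with injected dependencies — `streamAssistant` stands in for your model/stream call and is not an AppKit API:

```typescript
// Orchestrates one turn of /api/chat: load-or-create the chat, persist the
// user message before generating, stream, then persist the final reply.
export async function handleChatTurn(
  deps: {
    loadOrCreateChat(chatId: string | undefined, userId: string): Promise<{ id: string }>;
    appendMessage(chatId: string, role: "user" | "assistant", content: string): Promise<void>;
    // Streams to the client and resolves with the full assistant text.
    streamAssistant(chatId: string): Promise<string>;
  },
  input: { chatId?: string; userId: string; userMessage: string },
): Promise<{ chatId: string; reply: string }> {
  // 1. create (or load) the chat row
  const chat = await deps.loadOrCreateChat(input.chatId, input.userId);
  // 2. save the incoming user message before generation starts
  await deps.appendMessage(chat.id, "user", input.userMessage);
  // 3. stream the assistant response
  const reply = await deps.streamAssistant(chat.id);
  // 4. save the final assistant response after stream completion
  await deps.appendMessage(chat.id, "assistant", reply);
  return { chatId: chat.id, reply };
}
```

Keeping the stream call between the two writes means a crash mid-stream loses at most the assistant reply, never the user's message.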
5. Add history endpoints
Add REST endpoints for your chat UI:
- `GET /api/chats` -> list chats for the current user
- `GET /api/chats/:chatId/messages` -> load ordered history
- `DELETE /api/chats/:chatId` -> delete chat and cascade messages
6. Update the client to load and resume chats
- Keep the selected `chatId` in state or the URL
- Fetch history with `GET /api/chats/:chatId/messages` and call `setMessages()` from the `useChat` return value to load it into the chat (AI SDK v6 uses `messages` in `ChatInit`, not `initialMessages`)
- Send `chatId` in every `/api/chat` request by passing it via a custom `fetch` wrapper on the `TextStreamChatTransport` constructor (there is no `onResponse` option on the transport; use the custom `fetch` to read response headers like `X-Chat-Id`)
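One way to build that wrapper. This is a sketch: `makeChatFetch` is an illustrative name, the transport's `fetch` option is assumed to receive the same arguments as global `fetch`, and the request body is assumed to be a JSON string:

```typescript
// Wrap fetch so every /api/chat request body carries the selected chatId,
// and so a server-assigned X-Chat-Id response header (for brand-new chats)
// is surfaced back to the caller.
export function makeChatFetch(
  getChatId: () => string | undefined,
  onChatId: (id: string) => void,
  baseFetch: typeof fetch = fetch,
): typeof fetch {
  return (async (input: RequestInfo | URL, init?: RequestInit) => {
    // Merge chatId into the JSON body the transport built.
    const body = init?.body ? JSON.parse(init.body as string) : {};
    const response = await baseFetch(input, {
      ...init,
      body: JSON.stringify({ ...body, chatId: getChatId() }),
    });
    const id = response.headers.get("X-Chat-Id");
    if (id) onChatId(id); // e.g. store in state / push into the URL
    return response;
  }) as typeof fetch;
}
```

Assuming the transport accepts `api` and `fetch` options, it would be wired up as `new TextStreamChatTransport({ api: "/api/chat", fetch: makeChatFetch(...) })`.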
7. Verify persistence end-to-end
Deploy the app, then tail its logs:

```sh
databricks apps deploy --profile <PROFILE>
databricks apps logs <app-name> --profile <PROFILE>
```
Verification checklist:
- send 2-3 messages
- refresh the page
- confirm prior messages reload from Lakebase
- start a second chat and confirm separate history
- delete a chat and confirm it no longer appears