Session
From Vector Search to AI Search: Rebuilding Retrieval for the AI Era
Overview
| Experience | In Person |
|---|---|
| Track | Artificial Intelligence & Agents |
| Industry | Enterprise Technology, Financial Services |
| Technologies | Unity Catalog, Agent Bricks |
| Skill Level | Intermediate |
Search bars, recommendations, entity resolution, real-time matching, agents — all run on retrieval, and all stand or fall on whether the right result arrives fast enough and is ranked well enough. Teams now need hybrid keyword-plus-semantic search, reranking as a first-class step, domain-tuned models, retrieval quality evaluation without hand labels, and feedback loops that turn traffic into better results.

We've rebuilt Databricks Vector Search from the ground up — and today we're relaunching it as Databricks AI Search: an AI-native retrieval platform that unifies search, ranking, evaluation, and continuous learning on the Lakehouse. You'll see what's shipped — BM25 and full-text indexes, hybrid retrieval with a native reranker, domain-tunable rerankers, LLM-judged evaluation, high-QPS endpoints — and what's next: click logging, continuous quality monitoring, and learning to rank.

Walk away with best practices for measuring retrieval quality and a production-ready stack for the AI era.
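The hybrid keyword-plus-semantic retrieval described above has to merge two differently scaled rankings into one list. A minimal sketch of one common fusion technique, reciprocal rank fusion (RRF), is below; the function and document ids are illustrative, not Databricks AI Search APIs.

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several ranked result lists into one.

    Each input is an ordered sequence of document ids, best first.
    A document's fused score is the sum over lists of 1 / (k + rank),
    with 1-based ranks; k dampens the dominance of top positions.
    """
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical example: a BM25 ranking and a vector-similarity ranking
# over the same corpus, fused into a single hybrid result list.
bm25_hits = ["doc_a", "doc_b", "doc_c"]
vector_hits = ["doc_b", "doc_d", "doc_a"]
fused = reciprocal_rank_fusion([bm25_hits, vector_hits])
```

RRF needs only ranks, not raw scores, which is why it is a popular baseline before a learned reranker is layered on top.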
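"Measuring retrieval quality" usually reduces to rank-aware metrics over graded relevance labels, whether those labels come from humans or an LLM judge. A self-contained sketch of one standard metric, NDCG@k, follows; the label values are made-up examples.

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: graded relevance, log-discounted by rank.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    # Normalize by the DCG of the ideal (descending-relevance) ordering,
    # so a perfect ranking scores 1.0.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical graded labels (e.g. from an LLM judge) for the top four
# results in the order the retriever returned them.
labels = [3, 2, 0, 1]
score = ndcg_at_k(labels, k=4)
```

Swapping in judge-produced labels here is one way to evaluate retrieval quality without hand-labeled data, as the session abstract describes.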
Session Speakers
Ankit Vij
Engineering Lead - AI Search
Databricks
Adam Gurary
Senior Product Manager
Databricks