SESSION

Red Teaming of LLM Applications: Going from Prototype to Production

Accept Cookies to Play Video

OVERVIEW

EXPERIENCEIn Person
TYPEBreakout
TRACKGenerative AI
INDUSTRYEnterprise Technology, Health and Life Sciences, Financial Services
TECHNOLOGIESGenAI/LLMs, Governance
SKILL LEVELIntermediate
DURATION40 min
DOWNLOAD SESSION SLIDES

LLM applications are notoriously difficult to put into production, in large part due to the fact that there are so many potential vulnerabilities in their performance. From hallucinations to discriminatory behavior and prompt injection attacks, there are many ways LLM-based systems can fail to go beyond small-scale prototypes. In this breakout session, we'll focus on exploring techniques to detect and identify vulnerabilities in LLM applications; the aim is to transform the journey of LLM deployment into a secure and confident stride towards innovation. The session will introduce the concepts of LLM app vulnerabilities and the red-teaming process. We’ll then do a deep-dive on automated detection techniques and benchmarking methods. Attendees will leave the session with a better understanding of automated safety and security assessments tailored to GenAI systems.

SESSION SPEAKERS

Corey Abshire

/Senior AI Specialist Solutions Architect
Databricks