Session

Sponsored by: Datadog | Experimentation & Product Analytics in the Agentic World: Bringing User Signals to Every Release

Overview

Experience: In Person
Track: Artificial Intelligence & Agents
Industry: Communications, Media & Entertainment
Technologies: AI/BI
Skill Level: Intermediate

When developing software with autonomous agents, the bottleneck shifts from writing code to evaluating it. Agents can generate changes continuously. What limits progress is how quickly you can tell whether those changes are acceptable. Agentic development looks less like traditional software engineering and more like reinforcement learning: a loop of build, release, evaluate, where the fastest signals dominate. If your unit tests return in seconds and your conversion data takes two weeks, guess which one shapes the product. Teams need fast proxies for signals that arrive too slowly to influence an agent's build loop, and governance is needed to reason about signal quality, noise, and coverage. This session introduces an evaluation infrastructure that orchestrates synthetic tests, production telemetry, analytics, and controlled releases to gate autonomous changes. Datadog is building toward this layer, but the framework applies to any team rethinking infrastructure for an agentic world.
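The gating idea described above can be sketched in a few lines. This is a hypothetical illustration, not Datadog's implementation: each evaluation signal has a latency, and a release gate blocks only on signals that fit the agent's loop budget, deferring slower ones (like a two-week conversion experiment) to post-release monitoring. All names here (`Signal`, `gate_release`) are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Signal:
    """One evaluation signal: a name, how long it takes to
    arrive, and a check that returns pass/fail."""
    name: str
    latency_s: float
    check: Callable[[], bool]

def gate_release(signals: List[Signal], budget_s: float) -> bool:
    """Admit a change only if every signal fast enough to fit the
    latency budget passes; slower signals do not block the loop
    and are deferred to post-release monitoring instead."""
    fast = [s for s in signals if s.latency_s <= budget_s]
    deferred = [s for s in signals if s.latency_s > budget_s]
    for s in deferred:
        print(f"deferred to post-release: {s.name} ({s.latency_s:.0f}s)")
    return all(s.check() for s in fast)

# Example mix of signals with wildly different latencies.
signals = [
    Signal("unit_tests", latency_s=5, check=lambda: True),
    Signal("synthetic_eval", latency_s=60, check=lambda: True),
    Signal("conversion_ab_test", latency_s=14 * 86400, check=lambda: True),
]

# With a 5-minute budget, only the fast proxies gate the release;
# the two-week A/B test is deferred rather than blocking the agent.
print(gate_release(signals, budget_s=300))
```

The design choice this illustrates is the one the abstract argues for: the loop is shaped by whichever signals return within the budget, so teams must invest in fast proxies (synthetic tests, telemetry-derived checks) that stand in for the slow ground-truth metrics.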

Session Speakers

Chetan Sharma

Director, Product Management
Datadog