Datadog launches Experiments to track product impact

Tue, 7th Apr 2026

Datadog has launched Datadog Experiments, now generally available to customers worldwide.

The service lets product teams design, run and measure A/B tests and other product experiments within Datadog's platform. It combines business metrics stored in a company's data warehouse with product analytics and application observability data, giving teams a single place to assess the impact of product changes.
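
To illustrate the kind of mechanics such a service automates, the sketch below shows one common way an experimentation platform can assign users to A/B variants deterministically; the function name, hash scheme and split ratio are assumptions for illustration, not Datadog's implementation.

    import hashlib

    def assign_variant(user_id, experiment, treatment_share=0.5):
        # Hash the experiment and user together so a given user always
        # lands in the same variant for a given experiment.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
        return "treatment" if bucket < treatment_share else "control"

    print(assign_variant("user-42", "new-checkout-flow"))  # stable across calls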

The launch reflects the growing importance of experimentation in software development as companies release features more frequently and look for clearer links between code changes, user behaviour and commercial outcomes. Existing workflows often require separate analytics, experimentation and monitoring tools, making it harder to tell whether a change improved results or introduced technical problems.

Tool consolidation

By bringing experimentation into its broader observability platform, Datadog is extending its reach beyond infrastructure and application monitoring into product decision-making. The product is built on technology from Eppo, the experimentation startup Datadog acquired in 2025, and is intended to help product managers, designers and engineers work from the same dataset and measurement methods.

This approach addresses a common problem for product teams: the gap between business reporting and technical monitoring. In many organisations, commercial metrics sit in a warehouse, product usage data in analytics software, and performance data in engineering tools. Teams must then reconcile those sources manually when testing new features.
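
For illustration, the snippet below sketches that reconciliation step as it is often done by hand today: joining a revenue export from the warehouse with experiment assignments from an analytics tool. The tables and column names are hypothetical.

    import pandas as pd

    # Revenue exported from the data warehouse (hypothetical columns).
    revenue = pd.DataFrame({
        "user_id": ["u1", "u2", "u3"],
        "revenue": [120.0, 0.0, 45.0],
    })

    # Experiment assignments exported from the analytics tool (hypothetical).
    assignments = pd.DataFrame({
        "user_id": ["u1", "u2", "u3"],
        "variant": ["treatment", "control", "treatment"],
    })

    # The manual reconciliation step: join the two exports by user,
    # then compare average revenue per variant.
    joined = revenue.merge(assignments, on="user_id", how="inner")
    print(joined.groupby("variant")["revenue"].mean())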

Datadog Experiments includes statistical analysis tools and live guardrails designed to alert teams to problems while an experiment is running. It also connects with Datadog's existing observability products, including Real User Monitoring, Product Analytics, application performance monitoring (APM) and logs.
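
As a rough illustration of what a live guardrail can mean in practice, the sketch below flags an experiment when the treatment's error rate is significantly worse than the control's; the test and threshold are assumptions, not a description of Datadog's statistics.

    from math import sqrt

    def guardrail_breached(ctrl_errors, ctrl_total, trt_errors, trt_total, z_limit=2.58):
        # Two-proportion z-test: a positive z means the treatment errs more often.
        p_ctrl = ctrl_errors / ctrl_total
        p_trt = trt_errors / trt_total
        pooled = (ctrl_errors + trt_errors) / (ctrl_total + trt_total)
        se = sqrt(pooled * (1 - pooled) * (1 / ctrl_total + 1 / trt_total))
        return (p_trt - p_ctrl) / se > z_limit  # one-sided check, roughly 99% level

    # A 0.5% error rate in control against 0.9% in treatment trips the guardrail.
    if guardrail_breached(50, 10_000, 90, 10_000):
        print("Guardrail breached: alert the team before the experiment runs on")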

The company argues this link becomes more important as businesses deploy more AI-related features, which raises the pace of software releases and makes it harder to verify whether changes deliver the intended outcome. More broadly, vendors in this market contend that faster release cycles increase the need for measurement systems that connect technical performance with business results.

Yanbing Li, Chief Product Officer at Datadog, said the cost of poor visibility rises as release frequency increases. "The faster teams ship, the more expensive it becomes to not know what's working. When signals are scattered across disconnected tools, teams make decisions with incomplete information, missing what's actually driving revenue and killing the bold bets that will move the business forward," said Li.

Datadog says its aim is to standardise experimentation so teams can move from test design to result analysis without relying on multiple systems. It also says decisions can be measured directly against business metrics held in a company's own data warehouse, which should make results easier to audit and compare across teams.

Competition in this segment has intensified as observability vendors, analytics providers and feature management companies all seek a larger role in software development workflows. Experimentation has traditionally been handled by specialist tools, while product analytics and application monitoring have remained separate categories. Datadog's move suggests further overlap between those markets as suppliers try to offer broader platforms.

For customers, the appeal is likely to be operational simplicity as much as statistical rigour. A single environment for monitoring an application, tracking user behaviour and measuring the business impact of a release could reduce the data movement and reconciliation work required after each test.

Questions remain around adoption, including how easily companies can align warehouse data with product events and whether teams already using standalone experimentation tools will shift their workflows. Larger organisations often have entrenched systems for analytics, release management and governance, which may slow consolidation.

AI pressure

Li said AI development has sharpened the need for a uniform way to assess releases.

"AI has increased the pace and complexity of software releases exponentially. Too often, though, teams are flying blind when it comes to measuring the efficacy of new code. That's because they don't have a uniform way to validate changes and monitor their impact," said Li.

"With Datadog Experiments, teams have the guardrails needed to safely validate AI-driven changes. By tying experiments to Real User Monitoring (RUM), Product Analytics, APM and logs, organisations can measure both business impact and performance implications to reduce risk without slowing innovation," added Li.