Monitoring Your Experiment

Monitoring and analysis in FlagSync let you assess how feature flag variants perform during and after an experiment.

By reviewing impressions, events, and metrics, teams can determine which variants optimize goals like conversions or user engagement, refining strategies with data-driven insights.

Data Sources

Impressions

Automatically logged when client.flag() serves a variant.


e.g., tracking which users see “Join Now” or “Register”.
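
In code, serving a flag might look like the minimal sketch below. Only client.flag() is documented here; the client shape, flag key, and variant names are illustrative assumptions, not the SDK's real types.

```ts
// Sketch only: `client` stands for an already-initialized FlagSync client.
// The interface below is an assumption for illustration.
declare const client: {
  flag(key: string): string;
  track(event: string, value?: number): void;
};

// Serving the flag returns a variant and logs an impression automatically.
const variant = client.flag('register-button');

// Render whichever copy the experiment assigned to this user.
const buttonLabel = variant === 'join-now' ? 'Join Now' : 'Register';
```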

Events

Captured via client.track() in your code.


e.g., logging sign-ups or purchases tied to those variants.
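
A minimal sketch of tracking, under the same assumptions; the event keys, and whether client.track() accepts an optional numeric value, are illustrative rather than confirmed SDK behavior.

```ts
// Sketch only: assumed client shape, as above.
declare const client: {
  track(event: string, value?: number): void;
};

// Fire an event when the user completes the action you care about.
function onSignupComplete(): void {
  client.track('signup'); // ties the conversion back to the variant the user saw
}

function onPurchaseComplete(amountUsd: number): void {
  client.track('purchase', amountUsd); // a numeric value can feed metrics like average spend
}
```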

Metrics

Aggregates impressions and events into conversion rates or numeric values for analysis.


e.g., average spend per user.

Interpreting Results

Conversion Rate

Compare the percentage of users who act after seeing a flag variant (events ÷ impressions).


e.g., if “Sign Up” converts at 25% and “Register” at 15%, “Sign Up” may be the stronger variant.
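
As a worked example, with made-up counts that reproduce those rates:

```ts
// Illustrative counts only: conversion rate = events ÷ impressions.
const variants = [
  { name: 'Sign Up', impressions: 400, events: 100 }, // 100 / 400 = 25%
  { name: 'Register', impressions: 400, events: 60 }, //  60 / 400 = 15%
];

for (const v of variants) {
  const rate = (v.events / v.impressions) * 100;
  console.log(`${v.name}: ${rate.toFixed(1)}% conversion`);
}
```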

Numeric Metrics

Assess aggregated values per user, such as higher purchase amounts with a discount.


e.g., if the no-discount variant averages $50 per user and the 10% discount variant averages $60, the discount variant wins.
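
The same comparison as a quick sketch, with illustrative purchase amounts:

```ts
// Illustrative amounts only: average value per user, per variant.
const noDiscount = [45, 50, 55];    // averages $50
const tenPercentOff = [58, 60, 62]; // averages $60

const average = (xs: number[]) =>
  xs.reduce((sum, x) => sum + x, 0) / xs.length;

console.log(`No discount: $${average(noDiscount).toFixed(2)} per user`);
console.log(`10% discount: $${average(tenPercentOff).toFixed(2)} per user`);
```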

Use these insights to adopt the winning variant or inform the next experiment iteration.

Real-World Example

Here’s how the Register Conversion experiment plays out live in FlagSync, testing whether new button copy increases sign-up conversions:

[Screenshot: live experiment results]

It’s clear that “Join Now” has resulted in more conversions.

Accessing Experiment Data

  • Live Monitoring: View real-time results on the Experiments Dashboard while the experiment runs.
  • Past Iterations: For completed experiments, go to the specific experiment page and select the “Previous Iterations” tab to review historical data.

Next Steps

Decided on a winner? Adopt the winning variant as your flag’s default, or carry those insights into the next experiment iteration.