Experiments (also called A/B/n tests) compare multiple feature variants (e.g., different button texts, layouts, or price discounts) to determine which performs best against goals such as conversion rate, user engagement, or time on page. By rolling out variants to user segments and measuring real-world results, teams gain data-driven insights to refine product design and user experience.
A FlagSync experiment integrates Feature Flags, Events, and Metrics into a unified testing framework. It tests flag variants (e.g., “baseline” vs. “new design”) against one or more metrics by correlating:
Impressions: What users see
Events: What users do
This determines whether a variant outperforms, underperforms, or matches the baseline. For example, does "Join Now" increase conversions over "Register"?
Impressions: Automatically logged when client.flag() serves a variant.
Events: Manually tracked with client.track() to capture user actions.
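The two calls work together in application code: serving the flag records the impression, and tracking the conversion records the event. Below is a minimal sketch of that flow; the stub object stands in for the real FlagSync client, and its method shapes are illustrative assumptions rather than the documented SDK API:

```javascript
// Stub standing in for the FlagSync SDK client. Method shapes here are
// illustrative assumptions, not the documented API.
const client = {
  impressions: [],
  events: [],
  // Serving a flag returns a variant and logs an impression automatically.
  flag(flagKey, defaultValue) {
    const variant = 'join-now'; // the real SDK picks this per user/targeting
    this.impressions.push({ flagKey, variant }); // what the user sees
    return variant;
  },
  // Events are tracked manually to capture user actions.
  track(eventKey, value) {
    this.events.push({ eventKey, value }); // what the user does
  },
};

// 1. Serve the variant — the impression is recorded as a side effect.
const ctaText = client.flag('signup-cta', 'register');

// 2. When the user completes the goal action, track the event explicitly.
if (ctaText === 'join-now') {
  client.track('signup-conversion', 1);
}
```

The experiment engine then correlates the logged impressions with the tracked events to decide whether the served variant moved the metric relative to the baseline.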