What Are Experiments?

Experiments (also called A/B/n tests) compare multiple feature variants (e.g., different button texts, layouts, or price discounts) to determine which performs best against goals such as conversion rate, user engagement, or time on page.

By rolling out variants to user segments and measuring real-world results, teams gain data-driven insights to refine product design and user experience.

Why Experiments Matter

Data-Driven

Validate changes with real user data, not guesses.

Objective

Metrics cut through opinions with clear numbers.

Iterative

Test, tweak, and improve continuously.

What Is a FlagSync Experiment?

A FlagSync experiment integrates Feature Flags, Events, and Metrics into a unified testing framework. It tests flag variants (e.g., “baseline” vs. “new design”) against one or more metrics by correlating:

  • Impressions: What users see
  • Events: What users do

This shows whether a variant outperforms, underperforms, or matches the baseline. For example, does a “Join Now” button increase conversions over “Register”?

Impressions: Automatically logged when client.flag() serves a variant.

Events: Manually tracked with client.track() to capture user actions.
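The impression/event flow above can be sketched in TypeScript. This is a minimal, hypothetical stand-in client for illustration only — the real FlagSync SDK’s `client.flag()` and `client.track()` signatures, flag keys, and event names may differ:

```typescript
// Hypothetical sketch of the impression/event flow. The interface below is an
// assumption for illustration, not the actual FlagSync SDK surface.
type Variant = string;

interface ExperimentClient {
  flag(key: string, defaultVariant: Variant): Variant; // serving logs an impression
  track(eventKey: string): void;                       // records a user action
}

// Stub implementation so the flow can be exercised locally.
function createStubClient(
  assignments: Record<string, Variant>
): ExperimentClient & { log: string[] } {
  const log: string[] = [];
  return {
    log,
    flag(key, defaultVariant) {
      const variant = assignments[key] ?? defaultVariant;
      log.push(`impression:${key}:${variant}`); // impression logged automatically
      return variant;
    },
    track(eventKey) {
      log.push(`event:${eventKey}`); // event tracked manually
    },
  };
}

// Usage: serve a CTA variant (impression), then record a conversion (event).
// "signup-cta" and "signup-conversion" are made-up keys for this sketch.
const client = createStubClient({ "signup-cta": "join-now" });
const cta = client.flag("signup-cta", "register"); // what the user sees
if (cta === "join-now") {
  // render the "Join Now" button ...
}
client.track("signup-conversion");                 // what the user does
```

Correlating the two log entries per user is what lets the experiment attribute conversions to the variant that was shown.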

Best Practices

  • Single Hypothesis: Test one change at a time (e.g., text, not text + color) for clear results.
  • Adequate Sample: Ensure enough users experience each variant to avoid skewed results.
  • Clear Goals: Match metrics to hypothesis objectives (e.g., revenue, engagement) for actionable insights.
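To make “Adequate Sample” concrete, here is a rough per-variant sample-size estimate using the standard two-proportion z-test formula (α = 0.05 two-sided, power = 0.80). This is a generic statistics sketch under those assumptions, not a FlagSync feature:

```typescript
// Approximate users needed per variant to detect a lift in a conversion rate,
// via the standard two-proportion z-test formula. Assumes a two-sided
// significance level of 0.05 and power of 0.80.
function sampleSizePerVariant(baseline: number, lift: number): number {
  const p1 = baseline;       // baseline conversion rate, e.g. 0.10
  const p2 = baseline + lift; // rate we want to be able to detect
  const zAlpha = 1.96;       // z-value for alpha = 0.05, two-sided
  const zBeta = 0.84;        // z-value for power = 0.80
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator * numerator) / ((p2 - p1) * (p2 - p1)));
}

// e.g. detecting a 2-point lift on a 10% baseline needs a few thousand
// users per variant:
const n = sampleSizePerVariant(0.10, 0.02);
```

Smaller expected lifts require dramatically larger samples, which is why testing one focused change at a time pays off.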

Next Steps

Get started with a guided flow in Quickstart: Overview, then:

  • Learn about event tracking at Events.
  • Set up metrics in Metrics.