Create an experiment

Below is an example-driven walkthrough to illustrate how you might set up and analyze an experiment in FlagSync.

To create an experiment, click the "Create Experiment" button from the Experiments dashboard.

We’ll use a “Register CTA” feature flag that has three variants—“Register,” “Join Now,” and “Sign Up.” Our goal is to compare these variants using two key metrics:

  1. Register CTA Click – Tracks when a user clicks the call-to-action button they were served.

  2. Registration Event – Tracks when a user actually completes the sign-up process after clicking through.

In this scenario, “Register” is the baseline variant and is served to 33% of users, while “Join Now” and “Sign Up” receive 33% and 34% respectively.
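
To ground the walkthrough, the sketch below shows roughly how application code might serve the flag and record those two events. The `ExperimentClient` interface, method names, and event keys are illustrative placeholders rather than FlagSync’s actual SDK surface; refer to the SDK documentation for the real calls.

```typescript
// Illustrative sketch only: the interface, method names, and event keys below
// are placeholders, not FlagSync's actual SDK surface.

type Variant = "Register" | "Join Now" | "Sign Up";

interface ExperimentClient {
  // Returns the variant this user is bucketed into; serving it records an Impression.
  getVariant(flagKey: string, userId: string): Variant;
  // Records a manually tracked Event for this user.
  track(eventKey: string, userId: string): void;
}

// Called when the page renders: decides which CTA text the user sees.
function renderRegisterCta(client: ExperimentClient, userId: string): Variant {
  return client.getVariant("register-button-copy", userId);
}

// Called from the button's click handler (Metric 1: Register CTA Click).
function onCtaClick(client: ExperimentClient, userId: string): void {
  client.track("register-cta-click", userId);
}

// Called once the sign-up flow finishes (Metric 2: Registration Event).
function onRegistrationComplete(client: ExperimentClient, userId: string): void {
  client.track("registration-complete", userId);
}
```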

As you follow along with the screenshots and instructions, you’ll see how to define the experiment, link it to these metrics, and interpret the results to determine which variant drives the most sign-ups.

Step-by-Step: Creating an Experiment

1. Details

  • Name

    • Give your experiment a descriptive title (e.g., “Landing Page Headline Test” or “Optimized Checkout Flow”).

    • You can change this name later if needed.

  • Hypothesis

    • State your assumption about the experiment’s outcome and why you believe it will happen.

    • Example: “If we change the checkout button text from ‘Buy Now’ to ‘Complete Purchase,’ we expect more completed orders.”

Tip: The more specific your hypothesis, the easier it is to evaluate whether the changes truly had an effect.

2. Select Metrics

  • Event Metrics: Choose the metrics that will measure this experiment’s performance.

  • If you haven’t created a relevant metric yet, click Create metric to set one up.

    • Remember: Metrics rely on Events (manually tracked actions) and Impressions (automatically tracked when a user sees a particular variant).

    • A “conversion rate” metric might divide the number of Events triggered by the total Impressions served.

    • A “numeric” metric might sum or average a value from the event properties.

The selected metric(s) will be the yardstick for determining which variant wins (or if there’s no significant difference).
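
To make the two metric types concrete, here is a small, self-contained sketch of how each could be computed from raw counts. The data shapes and field names are illustrative and do not reflect FlagSync’s internal representation.

```typescript
// Illustrative metric math only; the shapes below are not FlagSync's data model.

interface TrackedEvent {
  userId: string;
  value?: number; // optional numeric property carried on the event
}

// Conversion-rate metric: Events triggered divided by Impressions served.
function conversionRate(eventCount: number, impressionCount: number): number {
  return impressionCount === 0 ? 0 : eventCount / impressionCount;
}

// Numeric metric: average of a value taken from the event properties.
function averageEventValue(events: TrackedEvent[]): number {
  if (events.length === 0) return 0;
  const total = events.reduce((sum, e) => sum + (e.value ?? 0), 0);
  return total / events.length;
}

// Example: 120 clicks against 400 impressions -> conversionRate(120, 400) = 0.3.
```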

3. Choose Flag

  • Feature Flag: Select the flag you want to test. For instance, if you have a flag called register-button-copy with variants like “Register,” “Sign Up,” and “Join Now,” you can pick it here.

  • If you need a new flag, click Create flag to configure a fresh one on the fly.

Note: Each flag has multiple variants, and an Experiment will measure how these variants perform relative to your chosen metric.

4. Choose Baseline

  • Baseline Variant: Once you’ve selected the flag, you’ll pick which variant serves as the control group or “original.” The performance of the other variants will be compared against this one.

  • Typically, the baseline is the variant you already have in production, or the one that’s historically performed best.

Example: If your register-button-copy has three variants—“Register” (original), “Sign Up,” and “Join Now”—you might choose “Register” as the baseline.
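
One standard way to express “compared against the baseline” is relative lift. The sketch below is plain arithmetic, not a description of FlagSync’s statistics engine.

```typescript
// Relative lift of a variant's conversion rate over the baseline's:
// lift = (variantRate - baselineRate) / baselineRate
function relativeLift(baselineRate: number, variantRate: number): number {
  return (variantRate - baselineRate) / baselineRate;
}

// Example: if "Register" converts at 4% and "Sign Up" at 5%,
// relativeLift(0.04, 0.05) returns 0.25, i.e. a +25% relative lift.
```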

5. Set Rollout

  • Distribution: Configure how traffic is allocated among your variants. You can set an even split (e.g., 33% each for three variants) or allocate more traffic to the baseline if you want lower risk.

  • Timeframe or End Criteria: Some teams run an experiment for a fixed time (two weeks, for example), while others monitor data until they have enough statistical confidence.

Note: Rollout percentages must always sum to 100%.
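
If you keep rollout values in code or configuration review, a quick check like the one below (an illustrative helper, not FlagSync tooling) catches distributions that do not total 100%.

```typescript
// Illustrative check that a rollout distribution sums to exactly 100%.
function isValidRollout(percentages: Record<string, number>): boolean {
  const total = Object.values(percentages).reduce((sum, p) => sum + p, 0);
  return total === 100;
}

// The distribution from this walkthrough:
// isValidRollout({ Register: 33, "Join Now": 33, "Sign Up": 34 }) // true (33 + 33 + 34 = 100)
```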

6. Create Experiment

  • After you’ve filled in each section, click Create Experiment.

  • Once the experiment is active, FlagSync begins collecting impressions and tracking the selected metric(s) against each variant.

Next Steps: Monitor your experiment’s results in the Experiments dashboard.

As users are served variants and trigger events, FlagSync aggregates the metrics so you can determine which variant is performing best.
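
FlagSync surfaces these results for you in the dashboard. Purely to illustrate what “enough statistical confidence” means, the sketch below runs a textbook two-proportion z-test on conversion counts; it is a generic statistical example, not FlagSync’s own analysis.

```typescript
// Generic two-proportion z-test: is the variant's conversion rate meaningfully
// different from the baseline's? Textbook statistics, not FlagSync's engine.
function twoProportionZ(
  baselineConversions: number,
  baselineImpressions: number,
  variantConversions: number,
  variantImpressions: number,
): number {
  const baselineRate = baselineConversions / baselineImpressions;
  const variantRate = variantConversions / variantImpressions;
  // Pooled rate under the null hypothesis that both variants convert equally.
  const pooled =
    (baselineConversions + variantConversions) /
    (baselineImpressions + variantImpressions);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / baselineImpressions + 1 / variantImpressions),
  );
  return (variantRate - baselineRate) / standardError;
}

// |z| > 1.96 corresponds roughly to 95% confidence that the difference is real.
// Example: 400/10,000 baseline vs 500/10,000 variant -> z ≈ 3.4 (significant).
```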
