I'm referring to this part of the docs: “What happens to customers that were enrolled in an experiment after it's been stopped?” I want to target users who were entered into an experiment with a follow-up experiment, but it states they will see the default offering even though the experiment has ended.
https://www.revenuecat.com/docs/tools/experiments-v1/experiments-results-v1

Is it possible to include these users in a new cohort for a new experiment without removing their user records or waiting for lots of new users to make the test useful?

Our goal is to price test different offerings. Our first test gave us some good results, but we want to continue targeting the users who were part of this first test and did not subscribe.

Hi mhigbee,

Thanks for the question!

Once you stop an experiment, new customers aren’t enrolled and existing experiment users fall back to your Default Offering if they hit a paywall again. That means they won’t automatically be part of any new experiment—you’ll see their renewals in your original results for up to 400 days, but any new subscriptions they start post-experiment won’t feed into a fresh test.

Experiments only enroll new users while running, so there’s no built-in switch to “roll” old cohort members into another experiment. This is done to protect the statistical integrity of your A/B test by keeping cohorts clean and comparable.
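
If you do want to show that earlier cohort a different paywall anyway, one workaround is to do it outside of Experiments: create a separate Offering in the dashboard and fetch it by identifier for just those users. The results won't appear on the Experiments results page, so you'd have to compare conversion yourself. Below is a minimal Swift sketch; the offering identifier "price_test_v2" and the local flag "was_in_first_price_test" are hypothetical placeholders (your app would persist that flag when a user sees the first test's paywall without subscribing), not anything RevenueCat creates for you.

```swift
import Foundation
import RevenueCat

/// Minimal sketch: decide which Offering to show on the paywall.
/// "price_test_v2" and "was_in_first_price_test" are placeholder names;
/// the offering must be created in the RevenueCat dashboard, and the flag
/// is something your own app sets when a user saw the first test's
/// paywall without converting.
func fetchPaywallOffering(completion: @escaping (Offering?) -> Void) {
    let followUpOfferingID = "price_test_v2"
    let wasInFirstTest = UserDefaults.standard.bool(forKey: "was_in_first_price_test")

    Purchases.shared.getOfferings { offerings, error in
        if let error = error {
            print("Failed to fetch offerings: \(error.localizedDescription)")
            completion(nil)
            return
        }

        if wasInFirstTest {
            // Previously tested, non-subscribed users get the follow-up offering,
            // falling back to the normal targeting if it isn't available.
            completion(offerings?.offering(identifier: followUpOfferingID) ?? offerings?.current)
        } else {
            // Everyone else keeps the standard targeting (your Default Offering,
            // or a running experiment's offering).
            completion(offerings?.current)
        }
    }
}
```

Since this gating happens client-side, it won't be randomized or measured for you, which is exactly the trade-off against a real A/B test.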

Hope this helps! Let me know if you have any other questions.

Best,

Hussain

What would you recommend if I still want to run experiments with those users, even if it breaks the integrity of the experiment? Do we need to implement custom logic with another platform like Firebase A/B Testing?

