The Basics of A/B Testing

A/B testing really is just a rebranded version of experimental design and statistical inference. Here are three topics that may come up in interview questions about A/B testing.

Significance

Can you explain what statistical significance means? What about a confidence interval?

When you run an experiment, you are trying to disprove the null hypothesis that there is no difference between the two groups. If the statistical test returns a significant result, you conclude that the effect is unlikely to have arisen from random chance alone. If you reject the null hypothesis at the 95% confidence level, that means: if there were no true effect, a result like ours (or one more extreme) would occur in less than 5% of all possible samples.
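To make this concrete, here is a minimal sketch of a two-proportion z-test in Python; the conversion counts are invented purely for illustration:

```python
# A minimal sketch of a two-proportion z-test. The conversion counts
# below are made up for demonstration only.
import math
from scipy import stats

# Hypothetical results: conversions out of visitors in each group.
control_conv, control_n = 480, 10_000
treat_conv, treat_n = 530, 10_000

p1, p2 = control_conv / control_n, treat_conv / treat_n

# Pooled proportion under the null hypothesis of "no difference".
p_pool = (control_conv + treat_conv) / (control_n + treat_n)
se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
z = (p2 - p1) / se_pool

# Two-sided p-value: probability of a result at least this extreme
# if the null hypothesis were true.
p_value = 2 * stats.norm.sf(abs(z))

# 95% confidence interval for the lift (unpooled standard error is
# conventional for the interval).
se_unpooled = math.sqrt(p1 * (1 - p1) / control_n + p2 * (1 - p2) / treat_n)
margin = stats.norm.ppf(0.975) * se_unpooled
ci = (p2 - p1 - margin, p2 - p1 + margin)

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
print(f"95% CI for lift: ({ci[0]:.4f}, {ci[1]:.4f})")
```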

Significance has a precise meaning in statistics, and many people without a formal statistical background misunderstand the concept. To better understand it, I would highly recommend reviewing materials on statistical inference, for example the MOOC course UC BerkeleyX: Stat2.3x: Introduction to Statistics: Inference, or similar courses on Coursera. Check out "What is an intuitive explanation of the t-test in hypothesis testing?" for a little more on significance testing.

Randomization

Why is randomization important in experimental design? How would you answer the question: does attending local meetups cause Etsy sellers to generate more sales?

You can claim that Etsy sellers who attend local meetups tend to be more successful, but can you claim that their success is caused by their attendance? If a seller starts attending local meetups, will she start seeing more sales? We cannot draw a causal conclusion here because of the confounding factors at play. Etsy sellers who attend local meetups are much more likely to be those who sell as a full-time profession. In this case, the confounding variable is *level of commitment as a seller*. This variable drives both attendance at local meetups and amount of sales, so we cannot conclude anything about the causal relationship between meetup attendance and sales.
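A toy simulation (all numbers invented) shows how a confounder alone can produce this pattern: commitment drives both attendance and sales, even though attendance has zero causal effect here:

```python
# Toy simulation of confounding: seller commitment drives BOTH meetup
# attendance and sales, while attendance itself has no causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent commitment level for each seller.
commitment = rng.uniform(0, 1, n)
attends = rng.random(n) < commitment       # committed sellers attend more
sales = rng.poisson(5 + 20 * commitment)   # ...and also sell more

print("mean sales, attendees:    ", sales[attends].mean())
print("mean sales, non-attendees:", sales[~attends].mean())
# Attendees show higher sales purely through the confounder.
```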


Randomization is at the core of experimentation because it balances out these confounding variables. By assigning 50% of users to a control group and 50% of users to a treatment group, you ensure that the level of seller commitment is, on average, balanced between the two groups, as is every other possible confounding variable, measured or not. There is a slight difficulty here, though: we cannot force 50% of sellers to attend a local meetup and 50% not to. What we can do is make the treatment an encouragement to attend a local meetup, either directly from Etsy HQ or from the local seller groups themselves. Now you can measure the causal effect of this encouragement, e.g., does encouraging sellers to attend local meetups increase the total number of sales?
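One common way to implement such a 50/50 assignment is a deterministic hash of the user ID, so each user always lands in the same bucket; the experiment name and ID format below are illustrative:

```python
# Minimal sketch of 50/50 randomized assignment via a deterministic
# hash of the user ID. Experiment name and user IDs are illustrative.
import hashlib

def assign_group(user_id: str, experiment: str = "meetup_encouragement") -> str:
    """Deterministically bucket a user into 'control' or 'treatment'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # uniform over 0..99
    return "treatment" if bucket < 50 else "control"

print(assign_group("seller_12345"))  # stable across calls
```

Hashing on (experiment, user ID) keeps assignments stable across sessions and independent across experiments, without storing any assignment table.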

Multiple Comparisons

What might we need to worry about if we have an experiment with 20 different metrics? What if we run 20 experiments simultaneously? Let's say you're Amazon and you're measuring 20 different metrics on your item page: conversion rate, add-to-cart rate, the rate of clicking through to third-party sellers, and so on.
The more metrics you measure, the more likely you are to get at least one false positive. Ways to correct for this include adjusting your significance level (e.g., the Bonferroni correction) or running a family-wise test before diving into the individual metrics (e.g., Fisher's protected LSD). However, these are not used often in practice, and most people decide to just *proceed with caution* and be wary of spurious results. xkcd: Significant has a humorous take on this.
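A quick simulation illustrates both the problem and the Bonferroni fix: with 20 independent null metrics tested at alpha = 0.05, the chance of at least one false positive is about 1 - 0.95^20 ≈ 64%:

```python
# Simulating the family-wise false-positive rate across 20 metrics when
# there is NO true effect anywhere, with and without Bonferroni.
import numpy as np

rng = np.random.default_rng(1)
n_experiments, n_metrics, alpha = 10_000, 20, 0.05

# Under the null hypothesis, p-values are uniform on [0, 1].
p_values = rng.uniform(0, 1, size=(n_experiments, n_metrics))

naive = (p_values < alpha).any(axis=1).mean()
bonferroni = (p_values < alpha / n_metrics).any(axis=1).mean()

print(f"P(>=1 false positive), naive:      {naive:.3f}")       # ~0.64
print(f"P(>=1 false positive), Bonferroni: {bonferroni:.3f}")  # ~0.05
```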
