Salutations, AJ here. Got a new mic. Do I sound good? Yeah, I thought so. In this video, I'm gonna walk you through A/B testing, a topic that's crucial if you're working in the data science space. We're gonna walk through the steps to conduct such a test ourselves, so stay tuned. You're gonna learn a lot.

Say you're running a company online. You're selling a product and you're making some decent money. But you know what's better than making money? Making more money. So your buddy Brian has this awesome idea for a new feature that he thinks will bring in more cash. And he's like, dude, let's just use this. And you're like, for sure, let's just up and implement this, right? No, you're wrong. You can't just do that because Brian says so. What if he's wrong? What if this feature doesn't actually increase sales? What if it actually decreases sales? Hmm? So to check the effect on sales, we can conduct an A/B test. To do so, you'll need to follow a few steps. Still with me?

First, have a goal. Make more money. Not really. A real metric that you want to increase. Say, average weekly use time.

Next, construct a hypothesis. The null hypothesis, the default position, would be that Brian's feature doesn't affect use time. The alternative hypothesis would be that it does.

Third, gather users. Like I said before, we have a lot of users. Let's split them into two groups. Call them group A, group B, control, variant, chickens, donuts, walnuts, mushrooms, whatever. Just give one of them the original product and the other Brian's new feature.

Now conduct the experiment. Only if you're a noob, because that's not how you do it yet. First you need to check the experiment's power. Statistical power is the probability of rejecting the null hypothesis when it's actually false. We want this number to be high, typically over 80%. If you can't get 80% with the number of users you have, you can't conduct the experiment. Why? Because you risk committing Type II, false negative, errors. Even if Brian's feature has an impact, our experiment may say it's useless. Poor Brian. We don't want to do that because I love Brian.

Say we're all good with statistical power. Can we conduct the experiment yet? Of course we can. NOT! We need to know how long to run the experiment, and how do we figure that out? Just use an online calculator, or a bit of code like the sketch after this transcript. Now conduct the experiment by giving the original version to the control group and Brian's version to the variant group, and monitor usage over the course of the experiment.

Once complete, we get two distributions. If we want to compare their means, we can use the t-test. Want to compare the distributions themselves? The Mann-Whitney U test works just as well. Either way, we get a test statistic and a p-value. The test statistic tells us how many standard errors the observed difference is from what the null hypothesis predicts. The p-value is the probability of observing a test statistic at least that extreme, given that the null hypothesis is true. So with a high p-value, we can't reject the null hypothesis, and we have no evidence that Brian's feature makes a difference. Toss the idea out the window, along with Brian. With a low p-value, reject the null hypothesis and move on to the next phase of decision-making.

A/B testing is just one tool in the entire decision-making process for launching a new feature. We'd also want to take care of correlation-versus-causation and the other assumptions behind the test.

And that's all I have for you now, so if you liked the video, hit that like button. If you're new here, welcome, and hit that subscribe button. Ring that bell for notifications when I upload. Keep coming back for your daily dose of data science, and I'll see you in the next one. Bye!
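
If you'd rather script the power and duration step than use an online calculator, here's a minimal sketch using Python's statsmodels. The baseline spread, minimum lift, and weekly traffic numbers are made-up assumptions for illustration, not values from the video.

```python
# Minimal power / sample-size sketch for the "average weekly use time" metric.
# All numbers below are assumed for illustration only.
from statsmodels.stats.power import TTestIndPower

baseline_std = 30.0      # assumed std dev of weekly use time (minutes)
expected_lift = 5.0      # smallest lift we care about detecting (minutes)
effect_size = expected_lift / baseline_std   # standardized effect (Cohen's d)

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,              # significance level
    power=0.80,              # the ">= 80%" power target from the video
    ratio=1.0,               # equal-sized control and variant groups
    alternative="two-sided",
)
print(f"Need about {n_per_group:.0f} users per group")

# Rough duration: divide total sample size by how many users you can enroll per week.
weekly_users = 2000          # assumed eligible traffic per week
weeks = 2 * n_per_group / weekly_users
print(f"Roughly {weeks:.1f} weeks to enroll both groups")
```

The smaller the lift you want to detect, the larger the sample you need, which is exactly why low-traffic products sometimes can't reach 80% power at all.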
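
And for the comparison step at the end, the t-test and Mann-Whitney U test mentioned in the video might look like this with SciPy. The data here is simulated purely to show the API; real numbers would come from your experiment logs.

```python
# Comparing the control and variant distributions after the experiment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=60.0, scale=30.0, size=1000)  # weekly use time, minutes
variant = rng.normal(loc=63.0, scale=30.0, size=1000)  # pretend Brian's feature adds ~3 min

# Compare means with Welch's t-test (doesn't assume equal variances).
t_stat, t_p = stats.ttest_ind(control, variant, equal_var=False)
print(f"t-test: statistic={t_stat:.2f}, p-value={t_p:.4f}")

# Compare the distributions themselves with the Mann-Whitney U test.
u_stat, u_p = stats.mannwhitneyu(control, variant, alternative="two-sided")
print(f"Mann-Whitney U: statistic={u_stat:.0f}, p-value={u_p:.4f}")

alpha = 0.05
if t_p < alpha:
    print("Low p-value: reject the null, Brian's feature looks like it matters.")
else:
    print("High p-value: can't reject the null, no evidence the feature helps.")
```

Welch's t-test is used here instead of the classic equal-variance t-test as a safer default; either way, the decision rule is the one from the video: compare the p-value to your significance level before deciding Brian's fate.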