All right, good afternoon, everyone. Thank you all for attending the webinar this afternoon. My name is Courtney Soderberg. I am the Statistical and Methodological Consultant at the Center for Open Science. And today we have our first guest webinar speaker, Alexander Etz from UC Irvine, who is also part of the JASP group, which he's going to be telling us more about. So today, as the title of the webinar says, we're going to be talking about using JASP for statistics, as well as how JASP and the OSF can talk to each other to make your analyses a little bit more reproducible. I just wanted to quickly go over how you can ask questions during and after the webinar; you can ask me questions or ask Alex questions as well. So here are a few links for both JASP and the OSF, as well as Twitter handles and emails. Email contact@cos.io if you have any questions about the Center for Open Science or the Open Science Framework. And if you have any questions about JASP, you can either email Alex or you can tweet at them at @JASPStats. So I will hand it over to Alex for the rest of the webinar.

Excellent, hello everybody. Today I'll just go through a few short steps. First, I'm going to show off JASP a little bit: show you how to use it and its nice features. Then I'm going to show you a little bit of the OSF. I'm going to assume you can handle making your own account, so I won't show you all of that technical stuff. Then I'm going to go through how to sync these things up to make your workflow a little bit easier. So first, let's go into JASP. Now you'll see that we're on the JASP screen; I'm just going to make it a little bit bigger. This is the welcome window. You can see your software version here, and since they do keep rolling out updates, if you're not keeping your software up to date you might get left behind a little bit. Then we've got the File and Common tabs here.
The File tab is where you do all of your starting stuff. You can see here we've got our recent files, our computer files, the OSF, which I'll show you how to use eventually, and then a few built-in examples which are kind of fun to play around with. So I'm just going to open up a file from my computer. I'm going to go into my Downloads folder and sort by date modified. There we go. That's all it takes to load the data; that's just a CSV file. JASP can load in a few different types of data sets: CSV files, text files, and I think it can load SPSS files in the latest release. So you've got your data window here. This has all the rows and columns: participant numbers, conditions, all these different things. You can scroll over to see more, and if you want to see all of it at once, you can resize it like this. This data set has a lot of columns, but just a few are really of interest, so we'll keep the window smaller. Now we might want to see the descriptives, so we'll click Descriptives up at the top; that top bar is where all of your action is going to happen. We'll do descriptive statistics here, and now we've collapsed the data over to the left; we can make it go away entirely if we'd like. Then we can look at, say, participant numbers: there are 102 participants. Conditions, the means, some plots if we like, all this fun stuff. But we're just going to remove that for now. Now, this is a t-test data set, so we're going to go into T-Tests, and then into the dropdown menu that says Independent Samples T-Test. We click that, and now we're in the options panel. The options panel is where you specify all of the little bits of your analysis. For this data set, the dependent variable is the mean NEO score, so we're going to put that in as the dependent variable. Now we need to say what the grouping variable is.
And here we're going to use rotation as the grouping variable. Now you see the results just pop up into the table: the t-value, the degrees of freedom, and the p-value. If you say, oh, I'm worried about unequal variances, then we check the equality-of-variances test. We see a non-significant test for equality of variances here, which doesn't mean the variances are equal, but it does mean you might not need to worry too much. We'll get rid of that. We could also do different t-tests here: if we wanted something a little more robust, we can add the Mann-Whitney test, and that just adds another row to our table. We'll take that off for now. We can also change our hypothesis: if we had a one-sided hypothesis in either direction, we can click on these and the table automatically updates. Now it has a little note saying that for all tests, the alternative hypothesis specifies that the clockwise group is greater than the counter-clockwise group. But we'll change it back. And notice that every time I press a button, everything changes immediately in the panel to the right. Next we're going to show the mean difference between the groups, the raw mean difference, and that shows up right here. Then we say we also want the effect size, so now we have Cohen's d, the effect size. And now we also want a confidence interval, so we have a confidence interval on Cohen's d, our effect size. We could change that: say we actually want 99%; you just type 99 and press Enter, and there you go, 99%. We can also do descriptives from here. And I'll show you the plotting: we can do a descriptives plot, which gives a standard dot plot. I do believe they're adding more fun plots soon, with scatter points and a sort of violin plot with the densities drawn around it, which are pretty cool.
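For anyone who wants to check numbers like these outside the GUI: the tests being run here are standard frequentist ones, so a rough equivalent can be sketched in a few lines of Python. This is just an illustration, assuming SciPy is available; the two groups below are made-up numbers, not the actual kitchen rolls data.

```python
import statistics
from scipy import stats

# Hypothetical stand-ins for the two rotation groups (not the real data)
clockwise = [41.2, 38.5, 44.0, 39.8, 42.1, 40.3, 37.9, 43.5]
counter = [39.0, 36.2, 41.8, 38.1, 40.0, 35.5, 37.3, 42.0]

# Independent-samples t-test (Student's, equal variances assumed)
t, p = stats.ttest_ind(clockwise, counter)

# Levene's test for equality of variances
w, p_levene = stats.levene(clockwise, counter)

# Mann-Whitney U as the more robust, rank-based alternative
u, p_mw = stats.mannwhitneyu(clockwise, counter, alternative="two-sided")

# Cohen's d from the pooled standard deviation
n1, n2 = len(clockwise), len(counter)
s_pooled = (((n1 - 1) * statistics.variance(clockwise)
             + (n2 - 1) * statistics.variance(counter)) / (n1 + n2 - 2)) ** 0.5
d = (statistics.mean(clockwise) - statistics.mean(counter)) / s_pooled

print(f"t = {t:.3f}, p = {p:.3f}, Cohen's d = {d:.3f}")
```

The appeal of JASP, of course, is that you get the same numbers without writing any of this.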
But okay, so this is just a general example of how you would do your analysis, and it's pretty quick. Press OK, that panel goes away, and now you're left with this. If you wanted to, you could take this output, copy it, and paste it right into Word; it's fully APA-formatted, so you don't have to worry about messing with it at all. There's also a copy-citations option: if you do your analysis in JASP and you want to cite it in your paper, you can grab the citations for the software using that button. So I think this is all pretty easy compared to, say, SPSS, where you have to go through, I think, four menus to do a t-test. One thing I'd like to point out is that when you do your t-test like this, if you say, oh shoot, what I really wanted was a one-sided test, but I ran a two-sided test, you would just click on the results again, go back into the options, click group one greater than group two, and hit OK. So it's really easy. Now I'm going to show you how you might go about using the OSF to store your data, and show you why using JASP and the OSF together will really make your workflow a lot easier. So let's close out of JASP without saving. And now I'm going to show you the OSF. If I wanted to share the data for my paper using the OSF, I would create a new project here from the dashboard; let's just call it "fun things". We have a couple of options, a description, templates, but we're just going to go blank: new project, go to project. Now here we are in our project, fun things. It's private, so no one can see this but me until I hit Make Public. I'm just going to go into the Files tab here, and from my desktop, which you cannot see right now, I'm going to drag in our data set, kitchenrolls.csv. You should be able to see now that there's a CSV file loaded onto our OSF page. If we go back to the project's homepage, you'll see it there too. Okay, here we go, kitchen rolls. So now our data set is on there, and now you want to share your data.
You say, hey everyone, here's the link to our OSF page; the data set is right on there. And if you wanted to share it with just your collaborators, you could keep the project private and send them the link, and they would come in, click on the file, and download it themselves. And then you would say, okay, I did a t-test, and I used these settings, and I did a one-sided test with confidence intervals, and all of these things. But there's a way to do this that's a little bit easier, with less room for confusion and less chance of a different analysis being run by accident so that everybody isn't quite on the same page. So I'm going to show you how to access this file directly from JASP itself, and then how to upload the analysis you do in JASP straight to the OSF so that everybody else can see it and use it on their own computer. So I'm going to go back into JASP; you should now be seeing JASP on the screen share. Okay, now we're going to go into the OSF tab in JASP, and you see that I've got my email here. Now I need to type in my password. We hope it's correct. And it was; you never know. Now we have all of the projects from our OSF dashboard here. You see the fun things project I just made, so I click on fun things and that brings me into my storage. I click OSF Storage, and there's the kitchen rolls data set. I just double-click on it and it loads the data straight from the OSF. So that's pretty nice: one of the nicer things about this integration is the ability to go straight from the OSF into JASP. Now we're going to go back to our t-test. We click on that from the top menu, and we set rotation as our grouping variable and the mean as our dependent variable again. I'm just going to go with that for now. Now we go to Save As, and you see that since we're logged into our OSF account, we can save straight to the OSF. So we're going to save into our OSF folder for fun things, and this will just be called "example".
So we hit save, it pings the OSF, and now it's saved. So now, if you go back into File and into the saved folder, you see there's example.jasp; the .jasp file type is the output of JASP. This saves it directly to the OSF, so there's no copy on your computer. If you want a copy on your computer, you just save it to the computer instead, name it here, and click save. So now I'm going to close this data set, and if you go back to the Common tab, it's gone, right? Now if I go back to my OSF page and refresh it, you see we've got example.jasp; it uploads it right there. Then you say, oh no, I meant to also include a different analysis. Well, you go back to JASP, go to File, open from the OSF, and double-click example.jasp. It loads the data and it loads all of your results. You click on a result and it brings the options panel back up and shows you all the cool stuff there. Now you say, oh, I want my descriptives, so you add your descriptives; or, I wanted the effect size and confidence interval, so you add those. Now there's a neat little feature here that I'd like to show you, where you can add notes to the output. I'm going to click Add Notes, and now there's a text box here. I'm going to put in a little note that says: this is the primary result; it shows a non-significant p-value; this was pre-registered. It wasn't this time, so that's a lie, but it was in your hypothetical paper. So you say, this was pre-registered, and you click away. Now you submit your paper and you share your JASP file, so everything is in the open, and then somebody comes to you and says, oh, well, you know your p-value there, that's not a significant p-value, so you can't really interpret it very well. You say, yes, that's true, I guess. What should I do? They say, well, you can do a Bayesian test, which allows you to see whether you have evidence of absence or just absence of evidence.
So is the result inconclusive, or does it really support the null hypothesis? Now we go back into T-Tests and I'll show you how to do that. We go to Bayesian Independent Samples T-Test, and that just pops up down at the bottom here. We use the same settings as before, with the mean as our dependent variable and rotation as our grouping variable, and now it calculates a Bayes factor, which pops right into the table. When you do this, you've got to realize that it has preset Bayesian settings. The Cauchy prior width is roughly a guess at what you expect the population effect size to be, in terms of Cohen's d, the standardized mean difference. You get a default setting meant for general purposes, but you might say, well, really, I expect a pretty small population effect size, maybe around a Cohen's d of 0.3. So you type 0.3 and hit Enter, and that changes the result. Now, you usually don't have a good reason to pick one precise value. Say you picked 0.3; well, you could just as well pick 0.1, there's no reason you couldn't, and you could also pick 0.25, there's no reason you couldn't pick that either. But what you can do is click on this Bayes factor robustness check option, and what that's going to do is show you how the Bayes factor changes depending on the prior that you choose. So we're running, we're running, here we go. I'm going to click this button to switch the plot so that it's on top instead of on the bottom. There we go. So now we have this robustness check. You see we've got our user prior width, 0.25. If we expected a small population effect size, then the evidence in favor of the null hypothesis, BF01, is not very big; this falls into the so-called anecdotal category, which I prefer to just call inconclusive. These are just rough guidelines.
There's no real bright line here. But if you were to expect a very large population effect size, say you were studying, oh, I don't know, an anchoring effect, which are shown to be fairly large, and you wanted to see whether some other population also shows the anchoring effect, then you would expect a pretty big population effect size. And in that case you would have somewhat stronger evidence in favor of the null, because you actually found a pretty small effect size. So now you can add this plot, and you can add a note that says, we expected a large population effect size, and so on; you can add your interpretations there, and then you click away to get out of it. Now you want to push this back up to the OSF. All you've got to do is go in here, click Save As, and then click save. There we go. Now I'll show you again on the OSF: we go back, reload the page, and you see that the file has again been updated. So now, instead of everyone having to download the data set from the OSF, load it back into JASP, and run the analysis themselves, we've saved the JASP file onto the OSF, where it gets version control and makes for easy sharing. Now, one other very nice thing about the integration with the OSF is that when these JASP files are on the OSF, you can click on them like so, and it brings up a browser view that shows all of the JASP results and all of your comments. So people don't actually have to have JASP installed on their own computer in order to see what's going on with your results; you can just send them this page. You see it's version two; if you wanted to see version one, you could do that, but we just say, no, we want version two. So you've got everything here and you can see all of the notes. So for anybody who sees it, you can put in the citation, you could put in the link to the paper.
You could do all these things. And then we have our t-test; we might have put a note that says there's also a Bayesian t-test down below, and nonetheless we have our Bayesian t-test and our plot here. So I think it's pretty simple to use, right? You do your analysis as you normally do, you link it to the OSF through JASP, and it takes care of the rest all on its own. I would like to show just one more thing before I wrap up here. So let's go back into JASP. Say something goes terribly wrong and you can't open JASP or something like that. Well, if you have your file on the OSF, of course, then it's safe. But say it's only on your computer. So let's save it to Downloads. It already exists? Well, let's replace it. Now I'm going to show you a neat little trick. We'll go to the Finder here, and you see we have example.jasp in our folder. If you were to double-click on this, it would open back up in JASP; but say your JASP broke or something. Well, what you can do is right-click it and rename it. Where's the rename button? Well, let's just do this. So we rename it, and instead of .jasp we make the extension .zip. We save it, and it asks, are you sure you want to change this? You say yes. Now you double-click it and you have access to all of your data, your analyses, and all the different metadata and things. What's in this folder? Let's see. Oh yeah, those are our figures. So if you can't get into JASP and you've got to get your figures out, those are saved in here as PNGs. And if you were to double-click on the index file, I believe it would pull up the same thing you see in the OSF viewer. Now, let's see, what else can we do here? We've got a couple of minutes. Does anybody have any questions? I can also show different analyses if somebody has a question about, say, a correlation test or something.
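Because a .jasp file is a zip archive, you can also get at its contents programmatically rather than renaming it by hand. Here is a minimal sketch using only the Python standard library; the entry names inside the archive are made up for the demo (the real internal layout may differ between JASP versions), and the sketch builds its own stand-in example.jasp so it runs end to end.

```python
import zipfile

# Build a stand-in example.jasp so the sketch is self-contained
# (a real file would come from JASP; these entry names are made up)
with zipfile.ZipFile("example.jasp", "w") as archive:
    archive.writestr("index.html", "<html>results viewer</html>")
    archive.writestr("analyses.json", "{}")
    archive.writestr("figure1.png", b"\x89PNG fake bytes")

# A .jasp file is just a zip archive, so we can open it directly --
# no need to rename it to .zip first (though that works too)
with zipfile.ZipFile("example.jasp") as archive:
    names = archive.namelist()
    print(names)  # data, analyses, metadata, figures
    # Pull out any figures saved as PNGs
    for name in names:
        if name.endswith(".png"):
            archive.extract(name, "recovered_figures")
```

This is the scripted version of the rename-to-.zip trick: handy if JASP itself won't open and you just need your figures back.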
I'm happy to answer those, but I believe that's all I've got to show.

Is there any way to script a set of analyses?

Well, no, there's not. The way that JASP is set up is that it's just a user interface that interacts with R, so everything under the hood is R code. Most of the Bayesian analyses come from the BayesFactor package, and I believe all of the main classical tests just use the regular R functions. So if you wanted to script your own analyses, you could get all the same results, but you'd have to do it in R yourself. That's a really nice thing about JASP: it's all sort of self-contained and it makes sure that everything is set up for you. But of course, if you're at the point where you want to do custom analyses, then I think you're already more advanced than most users, and so you would be on your own using R for that. But R's pretty easy to... I mean, it's not too bad.

Alex, if people wanted to see exactly what R code was being run underneath the JASP calls, they could look at all of that on the JASP GitHub repo, correct?

Yes, you could. This is all open source, so anything you want to know about how it works is all on GitHub. That's also where you go if you have a bug report, a question, or a feature request; that's the main way to interact with the software developers. We've also got Q&A teams and people on Twitter and on Facebook.

There's another question here: are we assuming anything like Gaussian distributions during the Bayesian analyses? If so, is there any non-parametric version of these tests?

Wow, what a great question. We typically are assuming, for a t-test, that the sampling distribution of the data is normal. And there are not yet non-parametric versions of these tests, but those are in development, I can tell you.
So one day they will be in there, but it's usually not as straightforward as the classical case, because you have this much more complicated combination of robust sampling distributions and prior distributions, and how you navigate between them is an open question. But right now you should definitely know that, say, the correlation is assuming bivariate normality, and the ANOVAs assume all the typical ANOVA things. So yeah, good question; these all depend on assumptions, just like all the regular tests that you normally do.

All right, so if we don't have any other questions right now, thank you all so much for attending. If you end up thinking of questions later on, either for myself or for Alex and the JASP team, you can always email us: for anything about the OSF, contact@cos.io is what you want. And for JASP, the best way to get in touch with the JASP team is probably through their Twitter handle, which is @JASPStats, or you can email Alex at aets at uci.edu. Thank you all so much.