Good morning everyone. Thank you for attending this webinar. We are thrilled to have a guest speaker this morning or this afternoon, depending on your time zone. Dr. Nathalie Percie du Sert from the NC3Rs is going to be talking to us about the Experimental Design Assistant tool. So I will go ahead and hand it off to Nathalie. Hi, thank you. Okay, so this webinar is about the Experimental Design Assistant, or EDA for short, which is freely available online software developed by the NC3Rs to guide researchers in the planning and analysis of animal experiments. So I think I've got a one-hour slot, and I'm going to start by providing a bit of background to explain who we are and why we developed such a resource. Then I'll present the EDA itself and what you can do with it. And then I'll do a demonstration of it, and that should still leave us plenty of time for questions if you've got any. So I'm going to start by introducing the NC3Rs, just to give you an idea of where we're coming from. The NC3Rs is an independent scientific organization which is funded primarily by the UK government. We lead on the discovery and application of new technologies and approaches to replace, reduce and refine the use of animals in research and testing, traditionally known as the 3Rs. Partnership is key to everything that we do. So we work with scientists and organizations across the life sciences sector, and that includes other funders, universities, regulators, journals and the pharmaceutical, chemical and consumer product industries. Most of our money goes into funding research; we are the primary funder of 3Rs research in the UK, but we also have in-house programs of work led by the scientists in the office. And we've been running a program on experimental design for many years.
Our perspective is that an experiment which doesn't yield robust results, for example because it's underpowered or because the risks of bias have not been addressed, is a waste of animal use and is unethical. Not to mention the implications when an entire program of clinical work is based on the findings of that animal research. So the two main resources that have been developed as part of that program are the ARRIVE guidelines, which were developed to improve the reporting of animal research. I'm not going to talk about the ARRIVE guidelines today, so if you want more information, you can just go to the link. And the second one is the Experimental Design Assistant, which was developed to improve the design and analysis of in vivo experiments. Those two resources are pretty much complementary. So just to give you a bit of background about the reproducibility issue. The reproducibility of preclinical research is very much a topical issue. A lot of major scientific organizations are concerned about it and are trying to address it. And I'd like to talk about two publications which I think really started the momentum on reproducibility. These two papers report the findings of in-house validation studies from Bayer HealthCare and Amgen, published in 2011 and 2012. Basically, before embarking on big translational efforts, most pharma companies will try to reproduce in-house interesting findings that they see in the literature. These validation studies can take six months to a year, and they put a lot of effort into them. And the issue is that in most cases, the published findings could not be reproduced in-house; only something like 10 to 20% of the studies could be reproduced. And in the case of Amgen, they did not only base their efforts on the publications but were actually in touch with the original investigators.
In some instances they actually tried to reproduce the findings in the same labs that the original findings had been obtained from, and still could not reproduce them. So there's obviously a massive problem. There are many factors or many reasons for which findings might not be reproduced, but things like experimental design and reporting have been identified as major concerns. We also looked at the quality of animal research when we started our program on experimental design. We carried out a survey of publicly funded research in the US and in the UK, looking at experiments involving rats, mice and non-human primates. And we found significant scope for improvement in experimental design: in the way that experiments were being conducted, the way they were being analyzed, and the way they were being reported. For example, in terms of design, we found that very few publications reported the use of randomization or blinding, and none of the publications described how the sample size was chosen. In terms of the analysis, only 70% of the publications described the statistical method that was used and reported the result with a measure of precision or variability. So that means that a third of the publications were actually missing the minimum information necessary to understand the results. And then in terms of reporting, we looked at a wide range of things: how the experiments were being described, the animal characteristics, and so on. We found, for example, that a quarter of the publications reported neither the weight nor the age of the animals that were used. So we found significant scope for improvement, and these findings are by no means isolated. Virtually every study that's looked at the quality of animal research has found the quality wanting. So we started thinking about what we could do to improve standards in the design and reporting of animal research. And we published the ARRIVE guidelines.
And after that, we decided to develop a system called the Experimental Design Assistant to guide researchers through the design of animal experiments. So the EDA is a web application with a supporting website, and the target audience is anyone that uses animals in their research. It was developed as a collaboration between in vivo researchers and statisticians from academia and industry and a team of software designers specializing in expert systems, and it's been extensively tested by researchers and statisticians. You can access it freely at the link below, eda.nc3rs.org.uk. So what does the EDA do? The first thing is that the EDA gives you the ability to build a stepwise visual representation of an experiment. That's what we call the EDA diagram. For the EDA, we've developed an ontology to allow any experiment to be represented as one of these diagrams, and these diagrams are machine readable. You can think of the ontology as Lego bricks, for example: you can combine the bricks in any way that you want and represent any experiment that you want. So this might be a new approach for you, and it might take a little time to get used to it. For example, this is a very simple two-group comparison, and I'm going to talk you through it. In this experiment, a pool of animals is split in two. Group one gets a vehicle injection, group two gets a drug injection. Then a measurement is taken, in this case plasma glucose levels are measured, and the data is analyzed. The independent variable of interest is the drug, with two levels, vehicle and drug. Each of these colorful things is called a node, and each node contains more information. These are the properties of the allocation node: in there you can indicate your randomization strategy, in this case a complete randomization, and your randomization procedure, whether you're going to do it by flipping a coin or use the spreadsheet that's generated by the EDA.
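The "Lego brick" ontology described above lends itself to a simple graph structure. As a rough sketch of what a machine-readable diagram could look like, not the EDA's actual internal format, the two-group comparison might be stored as typed nodes and directed edges (all names and fields here are illustrative):

```python
# Hypothetical graph representation of the two-group comparison diagram.
# Node types loosely mirror the ones described in the talk; the field
# names are invented for this sketch.

nodes = {
    "pool":       {"type": "group", "label": "Pool of animals"},
    "allocation": {"type": "allocation", "method": "complete randomisation"},
    "group1":     {"type": "group", "label": "Group 1"},
    "group2":     {"type": "group", "label": "Group 2"},
    "vehicle":    {"type": "pharmacological intervention", "label": "Vehicle injection"},
    "drug":       {"type": "pharmacological intervention", "label": "Drug injection"},
    "measure":    {"type": "measurement", "label": "Plasma glucose"},
    "analysis":   {"type": "analysis"},
}

edges = [
    ("pool", "allocation"),
    ("allocation", "group1"),
    ("allocation", "group2"),
    ("group1", "vehicle"),
    ("group2", "drug"),
    ("vehicle", "measure"),
    ("drug", "measure"),
    ("measure", "analysis"),
]

def downstream(node_id):
    """Return the ids of nodes directly reachable from node_id."""
    return [dst for src, dst in edges if src == node_id]

# Because the diagram is data rather than free text, simple questions can
# be answered programmatically, e.g. how many groups the pool is split into:
print(len(downstream("allocation")))  # -> 2
```

This is the sense in which the diagrams are "machine readable": software can traverse the structure and reason about it, rather than parsing a prose description.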
So the diagrams are a lot more explicit than text descriptions. Inside each of these nodes, you get more information providing details about that specific step in the experiment. If you think of blinding, for example, in a text description the best that you can get is "the experiment was done blind", and you've got no idea who was blinded or which steps of the experiment were done blind. In the EDA diagram, you can indicate the blinding status in the properties of the allocation, measurement and analysis nodes. So for each step of the process, you know exactly who's blinded to what. And the granularity here is very important, because animal experiments have constraints and you might not be able to blind every step of the process. For example, if you're working with lean and obese rats, there's no way that you could blind the measurement step: you're always going to be able to tell which of the rats are obese and which ones are lean. But there's no reason that you shouldn't blind the analysis stage, for example. So it's about providing more transparency on the experimental plan. In the EDA, every experiment is represented by one of these diagrams, and the diagrams are actually in three parts. The gray nodes provide high-level information about the experiment: in there, you find information about your hypothesis, your effect of interest, the animal characteristics, experimental units and so on. The blue and purple nodes provide information about the practical steps in the lab: you've got groups, and they're divided and then subjected to interventions, measurements and so on. And then the green and the pink nodes are about the analysis and the variables included in the analysis. If you've never used the system before, it might seem quite difficult to come up with your own diagram, so we've actually included a lot of help in the system. We've included examples.
In there, you can find a text description and the corresponding diagram representation, so you can see how different features are represented as diagrams. We've also included templates, so you can actually load a template as a starting point and customize it: you can remove groups, add groups, remove interventions and so on, just to make it represent the experiment that you want to do. We've also included definitions for each of the nodes that are in the palette. So if you don't know what we mean by outcome measure, for example, you can open the information box, and in there you'll find what outcome measures are also known as, a definition of what an outcome measure is, common examples of outcome measures seen in animal experiments, and so on. That's just going to help you recognize what it is that you're working with in your experiment. And I'll talk about the feedback feature in a lot more detail later, but you can actually use the feedback to build your diagram. You could just put in the practical steps that you're going to do in the lab, critique your experiment, get feedback from the system, and the system will help you identify the rest of the information and build the rest of the diagram. We've also included video tutorials: you can find tutorials on how to create a diagram, how to critique it, how to drag nodes and connect them, and so on. So the second thing that the EDA does is that it provides feedback on your experimental plan. Once you've built your diagram, you can critique it and you get feedback from the system. That feedback is actually based on a set of rules that we've included in the back end of the system, and the EDA uses computer-based logical reasoning to apply them. The feedback can come in different shapes and forms. You could get feedback, for example, on the diagram structure.
So if the system does not understand your diagram, you'll basically get feedback to help you bring it to a state that both you and the system understand. The feedback could ask you to provide more information. An example would be a prompt asking you to specify whether the outcome measure is continuous or categorical: in there you'll find information on what continuous and categorical data are, common examples of each, and the implications of working with each type of data. That should give you enough information to make an informed decision about what you want to work with in your experiment. Another example is a prompt asking you to indicate the blinding stages during the assessment of the outcome: in there you'll find information on why blinding is important, the different stages of the experiment that can be done blind, and the different ways that you can blind each stage. The feedback could also point out inconsistencies, for example detecting that two variables are completely confounded. It could prompt you to consider things that are not in the diagram, for example other sources of variability that ought to be considered in an animal experiment and included in the design. It could highlight the implications of some of the choices that you've made. For example, if you're measuring your animals at different time points and you've included time as a variable in the analysis, that means that you're actually interested in the difference between each of the time points; if you're not, then the system will suggest an alternative method of analysis and ways that you could simplify the analysis. And then once the system has helped you identify all your variables, it will provide a suggestion for a method of analysis that's compatible with your diagram.
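The rule-based critique described above can be sketched in a few lines. This is a toy illustration of the idea only, not the EDA's actual rule engine; the rule functions and node fields below are invented for the example:

```python
# Toy rule-based critique over a list of diagram nodes. Each rule
# inspects one node and returns a prompt string if something is
# missing, or None if the node passes.

def check_outcome_type(node):
    """Prompt when an outcome measure has no data type specified."""
    if node["type"] == "outcome measure" and "data_type" not in node:
        return "Specify whether the outcome measure is continuous or categorical."

def check_blinding(node):
    """Prompt when a measurement node has no blinding status recorded."""
    if node["type"] == "measurement" and not node.get("blinded_assessment", False):
        return "Indicate the blinding status during assessment of the outcome."

RULES = [check_outcome_type, check_blinding]

def critique(nodes):
    """Run every rule against every node and collect the prompts raised."""
    prompts = []
    for node in nodes:
        for rule in RULES:
            message = rule(node)
            if message:
                prompts.append(message)
    return prompts

diagram = [
    {"type": "outcome measure", "label": "plasma glucose"},  # data_type missing
    {"type": "measurement", "blinded_assessment": True},
]
print(critique(diagram))  # one prompt: the outcome measure's data type
```

The real system's 140-odd rules are of course richer, checking structure and cross-node consistency as well, but the pattern of "rules fire on the diagram and raise prompts" is the same.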
And that's based on the number and the types of variables that you've included in your diagram. The rule set will be expanded over time: at the minute we've got around 140 rules in the system, but we're going to expand this so that we can provide more feedback and maybe identify more subtle issues that could lead to problems. Another thing that the EDA does is provide support for randomization, blinding and sample size calculation. There are a couple of sample size calculators in the system. There's nothing unusual about them; these are the type of calculators that you can find easily online. But most of the work has gone into the guidance on how to use them, and how to identify each of the parameters that you need to input into the power calculation. So we've got loads of guidance in there that you can use. And then once you know how many animals you need per group, the system can generate the randomization sequence for you. And it will actually not give it to you; it'll send it to the person that's helping you with the blinding, so that you can remain unaware of the group allocation for the duration of the experiment. The website also contains a lot of information around experimental design. So even if you're not using the app, you can still refer to the website as a trusted source of information on experimental design. In there you can find information on sample size calculation, for example, and when and how to use standardized effect sizes, or on methods of analysis like data transformations, multiple testing corrections and so on. And the diagrams really improve the communication around experimental design. These are explicit descriptions of your experimental plan, so you can keep them for your own records. I mean, it's always handy to have an explicit description of what you did when you come to publish your experiment three years down the line.
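For context, the calculation behind a two-group sample size calculator of this kind is standard. Here is a minimal sketch using the normal approximation to the two-sample t-test; the default parameter choices (5% two-sided significance, 80% power) are common conventions for illustration, not necessarily what the EDA uses:

```python
# Sample size per group for a two-sided, two-sample comparison, using
# the normal approximation n = 2 * ((z_alpha + z_beta) / d)^2, where d
# is the standardised effect size (mean difference divided by the SD).
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Animals needed per group to detect a standardised effect size."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z(power)           # quantile corresponding to the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A standardised effect size of 1.0, i.e. a difference of one SD:
print(n_per_group(1.0))  # -> 16
```

The exact t-distribution calculation adds roughly one animal per group on top of this approximation, which is why identifying the effect size and variability estimates, the part the EDA's guidance focuses on, matters far more than the arithmetic itself.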
Or you can use them to get feedback from your colleagues; a PhD student, for example, could use this to get feedback from their supervisors. So this would be the workflow when you're using the system. You first start by drawing a diagram, then you add details into the node properties. You critique your diagram and get feedback from the system, and you'll go through these first three stages several times until you're actually happy with your design. Then you can choose a method of analysis, and the system will help you with that. You can calculate your sample size, generate your randomization sequence and send it to the person that's going to help you with the blinding. You can share your diagram as well: you can share your experimental plans with another EDA user, and you can actually do this at any stage of the process. Soon you'll be able to export a diagram report. That functionality is not in the system yet; we're currently developing it. But basically that would be a report that contains key information about your experiment and an image of your diagram. And that happens to be exactly the information that the major funders in the UK want to see in grant applications. So the idea is to help you provide this information in a standardized format without having to repeat it in your grant application. Then hopefully you get funded and you can carry out the experiment. And then you can go back and update the diagram; some of the fields can be edited afterwards. For example, the number of animals that you ended up analyzing might be different from the number that you planned on having because you've had some unexpected attrition. So you can actually go back and update the number that was actually analyzed, with a reason as to why that number is different from the number you planned on. That way you can keep an accurate description of your experiment. So why would you use the EDA?
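The "generate a randomization sequence and send it to the person helping with the blinding" step amounts to complete randomization plus allocation concealment. A small sketch of the idea, where the function name and the use of coded group labels are assumptions for this example rather than the EDA's implementation:

```python
# Complete randomisation: shuffle the animal ids, then deal them into
# equally sized groups. The experimenter sees only coded labels
# ("group_1", "group_2"); the key mapping codes to treatments would be
# held by the person helping with the blinding.
import random

def randomise(animal_ids, n_groups=2, seed=None):
    """Shuffle animals and deal them round-robin into n_groups groups."""
    rng = random.Random(seed)  # seeding shown only to make the example reproducible
    ids = list(animal_ids)
    rng.shuffle(ids)
    return {f"group_{g + 1}": ids[g::n_groups] for g in range(n_groups)}

# 16 animals (e.g. from the sample size calculation), two groups of 8:
allocation = randomise(range(1, 17), n_groups=2, seed=42)
for group, animals in allocation.items():
    print(group, sorted(animals))
```

Keeping the allocation with a third party, as the EDA does by emailing the sequence to the blinding helper, is what lets the experimenter stay unaware of group membership for the duration of the study.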
So the first thing is to improve the reliability of published results. The EDA will help you by addressing obvious sources of bias in your experiments. The EDA also promotes better understanding of experimental design and awareness of why these issues are important; every time you get feedback from the system, you actually learn something new. The EDA is not a black box telling you how to do things. It basically highlights the implications of doing things certain ways so that you can make an informed decision. The EDA also facilitates the assessment of the experimental plan with an explicit description. That can be at the level of the grant application, the ethical review process, manuscript submission, or even for readers of a journal, so that everyone's got access to all the information about the experiment that they need. And it can also be used as a form of pre-registration: you can register your diagram, which has got the entire experimental plan, and you can provide that as evidence that you've not changed your primary outcome measure, for example, once you got the results. And with the EDA, we really want to promote a more careful consideration of the experimental plan. We want researchers to spend more time planning the experiment, and that's on purpose. I mean, I often get asked how long it takes to run an experiment through the EDA. And to be honest, if you know exactly what you're going to do, you've identified all your variables, and you know exactly how the experiment is going to be done, it only takes 10 minutes to put it through the system. That's not what takes time. What takes time is getting feedback from the system, considering that feedback, changing your plans and maybe discussing them further with your lab. That will take a fair amount of time, but that's on purpose. And it's much better to spend that time now rather than after data collection, when it's too late to change anything.
And the diagrams really facilitate discussion. I've got anecdotal evidence that they are used at lab meetings, and having a diagram on screen to discuss your plan is really, really helpful because everyone is very clear as to how the experiment is going to be done. It's very clear how many groups there are going to be in the experiment, what the variables are going to be, which nuisance variables you're considering and how you're going to incorporate them into the design, what data transformation you're going to be using. And you can have a meaningful discussion about all these things, which means that you get an opportunity to actually optimize your plans before doing the experiment. So the EDA can be used in different ways by different users. The core users are the people that we had in mind when we designed the resource: in vivo researchers, primarily in academia, with no or very little access to training or statistical support. These people would mainly benefit from using the EDA app itself and designing their experiments in the EDA. But then there's a whole bunch of secondary users who are very interested in the EDA, and these are expert in vivo researchers, statisticians, regulators, funders and journals. All these users would not necessarily benefit from using the app themselves, but they want to see the output of the app; they want to see the diagrams that have been generated by the researchers. And it's going to be really helpful for them to have access to this information. So that's all I have in terms of slides, and I'm happy to take any questions now if there are any. So some questions have come in. Somebody asked: is there any reason this would not be suitable for use in the social sciences, e.g. economic field experiments? So we designed this resource for animal experiments, but the principles are actually relevant across the board. It's actually the same principles for any type of experiment.
The risks of bias are going to be the same. The only thing that I'd say is that basically all the feedback that the system provides, and all the information that we provide on the website, is geared towards animal experiments. Which means that you're going to get examples about types of variables that are common in animal experiments; we're going to be talking about cages as a variable, that kind of thing. So obviously that's not going to be relevant to other research fields. But the principles are the same, so you'd just have to take that with a pinch of salt and interpret it if you wanted to use it for a different research field. So Ruby asked: is there a timeline for when the export diagram report functionality will be available? It should be available very, very soon, within the next couple of months. It's nearly ready; we're just doing final testing on it. Great. And then I actually had a follow-up question to the one about using the design tool for non-animal experiments. Obviously you could just kind of ignore some of the feedback related to animal-specific variables. But are there any longer-term plans to make a variant, or adapt the feedback, so that somebody could select, hey, I'm actually doing human research? So basically, developing a variant for human research wouldn't fall within the remit of the NC3Rs, because our remit is animal research. But there's no reason that this couldn't be done, and we'd be happy to collaborate with anyone that actually wants to do this and provide as much support as possible. So if anyone is interested, that could be doable. All right, great. So we have another question: can the tool be used to carry out studies without doing lab experiments? So I'm not sure exactly what that refers to, but you could actually represent an observational study in the EDA. There's no reason that you can't represent this.
Obviously, if this is not a lab setting and you're not randomizing animals into groups, then the feedback that the EDA provides might not be as relevant. But you could still represent it as a diagram and have an explicit description that you can use to communicate with your peers and colleagues. So you could still put such studies through the EDA; it's just that the feedback the EDA provides is geared towards internal validity, so an observational study would probably gain very little from the EDA feedback. Great. So Ross asked: do you envision that funding bodies will require pre-registration of EDA experimental designs for proposed research? Well, in the UK, main funders like the MRC, and actually ourselves, the NC3Rs, recommend that our grant holders use the EDA to prepare the grant application. We are not making it compulsory, on purpose; we don't think that it should be compulsory. Different people might have different ways of doing it, and as long as we see the experimental design information that we request in the grant application, people can provide it in any way that they want. Using the EDA will actually help them because it will save them time, but if they want to provide this information in any other way, we will not force them to use the EDA. And I think it should not be made mandatory. Is there a cost to using the tool? No, it's free to use; anyone can use it. And then somebody asked: does the EDA feedback feature provide comments on all stages of the workflow? I'm referring to the EDA workflow diagram; I'm wondering which steps it is available to assist with. Right, the workflow diagram. Well, the feedback is actually one step in the workflow diagram. The EDA feedback specifically provides feedback on the plans and then provides suggestions for the method of analysis. So that's what the feedback does; it's one part of the workflow, basically. All right.
And then let's take one more question and then move into the demonstration; there will be time at the end to answer more questions as well. So Julian asked: is there a way to export all the experimental settings in order to share them, or is the only way to view them through the EDA portal? Right, okay. So at the minute you will need to view a diagram in the EDA. There are different ways that you can share a diagram in the EDA, and I'll show that in the demonstration. You can just share it with someone using an email address, and that means that that person will have access to your experiment in their experiment list. Or you could download the entire diagram data and save that locally, but then if you want to view it, you have to upload it back into the EDA. So at the minute, that's the way that you can do it. But as I said, anyone can register for an account, and you only need to provide minimal information, like an email address and a password, to register. Great, great. So let's go ahead and move on to the demonstration and then we'll have more time for questions at the end. Okay. So I'm going to try to switch to my Chrome. All right. Can everyone see this? Yep. So this is the homepage when you first get to the EDA; if you go to eda.nc3rs.org.uk, that's where you get to. In there, you can see some more information about what the EDA is, a very summarized workflow, and how you'd work with it. And then if you want to access the app, you can access it via the tab here, EDA app. I'm actually going to make that full screen so that you can all see it. Well, if you've never used it, you'd have to register for an account, but then you can just log in. And when you log in, you will get to your experiment list. My experiment list is quite full because I've been using the system forever.
But if you've never used it before, the only things that you're going to find in your experiment list are the templates and the examples. Then as you start using it, your experiments will start populating that list. You can actually filter that list: you can use keywords in the title or in the type of experiment if you want to look for a specific experiment. And you can also order the different experiments according to the dates they were created, modified and so on. So I'm going to go ahead and create a new experiment. You just click on that button here in the top right corner, new experiment, and that will take you to the canvas. That's the canvas where you can design your diagram. I'm just going to start by showing you the help menu. In there you can find information about the general process, which is a useful reminder of the steps that you need to go through when you're using the EDA; it's a summary of the workflow that I showed earlier. You need to start by drawing a diagram, add detail in the node properties, critique your diagram to get feedback, choose an analysis method, calculate your sample size, generate your randomization sequence and so on. Oops, that window's moved. I don't know if I'm going to be able to close it. I'm going to close this. And then in the help menu, you can also find the user guide, which will send you back to the website, where you can find a complete user guide. You can also access the examples from the help menu, as well as the video tutorials. And you can actually load a template directly onto your blank canvas. If you hover over the different templates, you get a brief description: the first one would be a two-group comparison, and this one would be a crossover design, and so on. So you can actually load a template to start with.
But for the purpose of the demonstration, I'm just going to start by drawing a diagram from scratch. If you've never used it before, though, I highly recommend that you study the examples and start with a template. So, starting from scratch, I'm just going to reproduce the diagram that I showed earlier, the two-group comparison. On the left-hand side, you have the palette, and that's where you find all the nodes that you can use to build your diagram. As you can see, as I hover over the different nodes, you get a little blue icon, and that's how you access the information box, so you can access more information about each of the nodes in the palette. For example, here, for the independent variable of interest, you can access information that will tell you what it's also known as, what the independent variable of interest is, and common examples of independent variables of interest in animal experiments. To start building the diagram, you just drag and drop nodes onto the canvas. So I'm starting by dragging an experiment node in there. You can see that the node has red boundaries because it's not connected to anything yet; anything that's disconnected will have red boundaries. The little icon here, the little lines, gives you access to the properties of that node. If you click on this, that opens the properties, and in there you can enter information about the hypothesis, the effect of interest, the effect size and the justification for the effect size, and so on. So you can input all this information. The red stars mean that the information is mandatory, so you're going to have to provide that information, otherwise you'll get feedback from the EDA. It's not going to prevent you from using the system, but you will get feedback and a reminder that you need to provide this information. If you don't know the hypothesis at this stage, or the effect of interest, you can just leave it blank for now.
Then when you critique your diagram, you'll get more information and help to identify this information. So don't try to guess: if you don't know, leave it blank and wait for the feedback. Now, that node is selected, and if you hover to the right of it, there's a little node menu. In that menu, you'll only see the nodes that you can connect to that node. In this case, you can only connect the animal characteristics to the experiment node. If you just click on this, that's going to add the animal characteristics node into your diagram and connect it automatically. Again, you've got properties for the animal characteristics, and you can enter information about the species, the strain, the sex, the age and so on. If you've got different animal characteristics in your experiment, say you're working with male and female rats, then you'd have to put two different nodes in there: you'll have your animal characteristics for the males, and then you just add a different node for the females. So you'd do it that way. Then, in terms of the practical steps that we're going to do in this experiment, we've got a group; that's the pool group. I'm just going to change the label, so I just double-clicked on the node to change the label of the group. Then that group is going to be split into two, so I'm going to add an allocation, and the group is allocated into two different groups, one and two. You can just drag the nodes to tidy up your diagram. Then each group is subjected to a different pharmacological intervention; they're getting drug injections. So I'm going to add two different pharmacological interventions. And then a measurement is taken, so I'm going to add a measurement node. All the animals are actually measured together; they're all getting the same measurement. So I'm going to connect that pharmacological intervention to that measurement node.
The way you do this is that you hover to the right again, look at the node menu, select the measurement, and instead of clicking on it, you just drag it onto the existing measurement node. As you can see, green corners appear around the measurement node; that means that the connection is allowed. So you can just let go, and that's going to create the connection, and you can tidy up the diagram again. If your connection is not allowed, then you'll get immediate feedback as well. For example, if I was trying to connect my group of animals to another group, that doesn't make sense, and the system will not allow it. If I let go, nothing happens; it's just not connected. If an arrow gets disconnected, say it's no longer connected to the pharmacological intervention, I can just do this and it reconnects automatically. Then the measurement was recorded as an outcome measure; it was recorded as the glucose levels. And then the data was analyzed, so I'm just going to add an analysis node in there. Now I'm going to look at the critique and see what sort of feedback the system gives us. If you want more space on the canvas, you can collapse the palette, and that gives you a little bit more space. You can also zoom in and out; for example, you can zoom to fit, and you've got your diagram full screen. So we've got the results of the critique. The feedback comes in the shape of icons: you've got red icons, which are errors, and these little yellow triangles, which are warnings. You can also get a blue circle, which means advice. There's no advice on this one yet, just warnings and errors. The way that you access the prompt is by clicking on the icon. So I'm going to click on this error here on the experiment node, and I've got two different errors. One's telling me that the independent variable of interest is not specified.
And another one is telling me that the experimental unit is missing. So you can access the information there. Here you can find information related to the independent variable of interest: what it is, and common examples of variables of interest. For example, drug is a common example of an independent variable of interest, and that's exactly what we're doing in this experiment. So we should add drug as an independent variable of interest. I can just close that prompt, close this, and then follow the advice: drag an independent variable of interest into my diagram and call it drug. And it's got two different categories: I've got a vehicle and I've got the drug. That's going to be included in my analysis, so I'll just connect that variable to the analysis by dragging the analysis icon onto the analysis node, and it's automatically connected as a factor of interest in the analysis. Then I can just go around my diagram, look for all the errors and try to address them. This one is about specifying whether the outcome measure is continuous or categorical, and you'll find information on how to recognize this and the implications. So close this, go to the properties, and specify that this outcome measure, the plasma glucose levels, is actually continuous. You can ignore 'primary outcome measure' because there's only one in this experiment, so you don't actually need to specify it; it's going to be primary by default. If you had a second one, you'd have to specify it. What else do we have? There was something about the experimental unit missing here. You can open that prompt, read it, and find out what the experimental unit is, how to recognize it, and the different experimental units that you can come across in animal experiments. You can read all of this. In this case, we're giving animals injections, and we can do that to each animal independently of the other animals, so the experimental unit is actually the animal. You can just add your experimental unit in there.
So you add your node and specify that the experimental unit is the animal. What else? There's another error here. This one is telling us that the groups are not differentiated. That's basically a prompt telling you that the system does not understand your diagram and can't actually see the difference between groups one and two. So you're going to have to indicate what the difference between these two groups is, and the way you do this is by tagging the interventions with the different variable categories. You can read this and then follow the advice and use the variable categories as tags on the interventions. You select them by just dragging around them, and you can copy and paste using your normal keyboard shortcuts. Then you can indicate that this intervention is the vehicle and this one is the drug. So we've actually tackled a fair bit of feedback, and I'm going to critique again to see whether the feedback has evolved. If you address the feedback, it will not automatically update; if you want up-to-date feedback, you actually need to critique again. I'm just going to zoom out a bit again. So we've got new feedback, and you can see that it's different now. We've got new feedback on that independent variable of interest that we added earlier: we actually need to specify whether it's continuous or categorical and whether it's a repeated factor. You can go and read the prompts there; I'm just going to do it quickly for the purpose of the demonstration. In this case, we've got a categorical independent variable of interest and it's not a repeated factor, but if you read the prompt, you have all the information that you need to decide on this. What else do we have in terms of feedback? There's a warning on the experiment node here. We've not actually provided any information inside the node so far, so you get a lot of feedback about information not provided, for example that the effect size and the effect of interest are missing.
The null and alternative hypotheses are not provided either. This one is quite interesting: other sources of variability are not accounted for in the design of this experiment. You can get more detail. Basically, there are a lot of common sources of variability in animal experiments, and you really need to consider them when you design your experiment. There are a few examples of the types of variables you should consider. So you could read this and realize that, yes, actually, that experiment is going to be done over two days because you can't process all the animals in one day, so you could use day as a blocking factor. There's a link for more information here, so you can access more information on the website. So you can decide to add a blocking factor to your experiment based on that feedback. You open the palette again and you can drag in a nuisance variable, and that's going to be the day of the experiment. The experiment is going to be carried out over two days, so day one and day two. The advice was to include this as a blocking factor in the analysis, so you select the analysis icon and drag it onto the analysis node. As you can see, that has not created the link automatically, and the link is red as it's unlinked. That's because there's actually more than one possible relationship between a nuisance variable and an analysis. So you need to select that link, hover on the red spanner, and specify what you want to do with that nuisance variable. In this case, you want this to be a blocking factor in the analysis, because that's what the feedback says. So you just select this, and that indicates that it's a blocking factor in the analysis. We can try critiquing again and see whether the feedback has evolved. We've got new feedback here: there's a new error on this nuisance variable that we've just added.
It's telling us that a nuisance variable, if it's used as a blocking factor, must be categorical. You've got information to help you decide whether you really want to treat that nuisance variable as a blocking factor, so you can read this and make up your mind. If you don't, and you actually want to treat it as continuous, for example, then there are different suggestions for what you can do with it, different ways that you can account for this variable in the design of the experiment. In this case, we know that we want to treat this as categorical (there's no way that it could be continuous), so you can specify it as categorical, and you're going to account for this variable by blocking. There's another warning here, telling us that a blocking factor included in the analysis should also be included in the randomization. You can look for more information there, and you'll find information on the implications of including a blocking factor in the analysis and why you should also include it in the randomization. So you can read this and decide that you also want to include it in the randomization. You just go back to your nodes, select the allocation node, and connect it. Again, this is unlinked because there is more than one possible way of linking it to the allocation, so you hover over the red spanner and specify that this is a blocking factor for the allocation as well. And if you want to tidy up your diagram so that the links don't just go across everything, you can actually bend a link: you go over the link until you see a little yellow dot, and once you see it, you click on it, drag the link, and create a bend in the link. So I think we've tackled all the errors in that diagram. There's still a fair number of warnings, but we're not going to go through those today; you can read them in your own time.
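To make the idea of including a blocking factor in the randomization concrete, here is a minimal sketch in Python (this is not the EDA's own code; the group names, the two-day structure, and the group sizes are just illustrative). The point is that animals are randomized to treatment separately within each day, so every day contains the same number of animals from each group.

```python
import random

def block_randomize(n_per_group_per_block, groups, blocks, seed=None):
    """Randomize animals to treatment groups separately within each block,
    so that every block contains the same number of animals from each group."""
    rng = random.Random(seed)
    allocation = {}
    animal_id = 1
    for block in blocks:
        # Build a balanced list of treatments for this block, then shuffle it.
        treatments = [g for g in groups for _ in range(n_per_group_per_block)]
        rng.shuffle(treatments)
        allocation[block] = [(animal_id + i, t) for i, t in enumerate(treatments)]
        animal_id += len(treatments)
    return allocation

# 2 groups x 2 days x 4 animals per group per day = 16 animals in total
plan = block_randomize(4, ["vehicle", "drug"], ["day 1", "day 2"], seed=42)
for day, animals in plan.items():
    print(day, animals)
```

Because the shuffle happens within each day, day can later be fitted as a blocking factor in the analysis without being confounded with treatment.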
Now we can actually move on and ask the system to suggest a method of analysis that's compatible with this design. If you've got any errors in the diagram, it won't do it, but we've now tackled all the errors; this one was actually tackled already. So you can just ask the system to provide a method of analysis compatible with the design. Same principle: it'll take about 20 seconds, and then you will see little icons on the analysis node. Here it is: you've got a little green tick on the analysis node. You can click on this, and you can see that the system suggests that you could use a one-way ANOVA with blocking factors to analyze this experiment. In this prompt, you'll find information regarding the parametric assumptions, the assumptions of that particular analysis, and how you check that your data meets the assumptions. If it doesn't, you get advice regarding data transformation so that you can make sure that you're not actually violating the assumptions. You'll also get, not advice exactly, but an example of software that you could use to run this particular analysis: you can use InVivoStat to run a one-way ANOVA with blocking factors. In terms of examples of software, we've made sure that we've included either software that is very commonly used in animal labs or software that is freely available. InVivoStat is an example of freely available software which runs on R and which was specifically designed for animal experiments; it's quite easy to use. But obviously you could use any other type of software, this is just an example. So I think that's it about the feedback that you can get on your diagram. Other things that you can do with the system: there's the power calculation. The power calculation tab is actually hidden when you start, but you can reveal it by clicking on this little arrow here.
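For readers curious what a one-way ANOVA with a blocking factor actually computes, here is a rough sketch (a standard textbook additive model with no interaction term; this is not InVivoStat's implementation, and the glucose values below are made up for illustration). Treatment and block each get a sum of squares, and the treatment effect is tested against the leftover error variance.

```python
from scipy import stats

def anova_with_blocking(data):
    """One-way ANOVA with a blocking factor (additive model, no interaction).
    data maps (treatment, block) -> list of measurements."""
    obs = [(t, b, y) for (t, b), ys in data.items() for y in ys]
    n = len(obs)
    grand = sum(y for _, _, y in obs) / n

    def factor_ss(index):
        # Sum of squares for one factor: n_level * (level mean - grand mean)^2
        ss = 0.0
        for level in {o[index] for o in obs}:
            ys = [o[2] for o in obs if o[index] == level]
            ss += len(ys) * (sum(ys) / len(ys) - grand) ** 2
        return ss

    ss_total = sum((y - grand) ** 2 for _, _, y in obs)
    ss_treat = factor_ss(0)
    ss_block = factor_ss(1)
    ss_error = ss_total - ss_treat - ss_block

    a = len({o[0] for o in obs})   # number of treatment levels
    b = len({o[1] for o in obs})   # number of blocks
    df_treat, df_error = a - 1, n - a - b + 1
    f = (ss_treat / df_treat) / (ss_error / df_error)
    p = stats.f.sf(f, df_treat, df_error)
    return f, p

# Hypothetical glucose readings (mmol/L): two treatments over two days
data = {
    ("vehicle", "day1"): [5.1, 5.3], ("vehicle", "day2"): [5.6, 5.4],
    ("drug", "day1"): [7.2, 7.0],   ("drug", "day2"): [7.5, 7.7],
}
f, p = anova_with_blocking(data)
print(f"F = {f:.1f}, p = {p:.2g}")
```

Removing the day-to-day variability into its own sum of squares is exactly why blocking buys you power: it shrinks the error term that the treatment effect is tested against.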
That opens the power calculation tab. You also have a notepad there, so you can record notes about the experiment; you can type anything that you want, write notes, and save them. Just make sure you save them when you save the diagram; they're actually saved separately, so make sure that you save them, and you get a confirmation saying 'save complete'. And here's your power calculation tab. As I said, there are only two calculators in the EDA. They're very standard calculators for unpaired t-tests and paired t-tests, but we've got loads of guidance on how to use them. You can access the full guidance here, which takes you to the website where you can find information about power analysis and what it actually does. There's also a very handy decision tree to help you choose what power calculator you need to use, because that's not that straightforward. You can go through the decision tree, and there are only three options in it: either you're going to use one of the two power calculators that are provided in the EDA app, with all the guidance that we've got to help you identify the parameters, or we recommend that you actually talk to a statistician, because power calculations, and identifying the parameters for power calculations other than for t-tests, are actually quite complex and need expert knowledge. We wouldn't recommend that you do it on your own if you don't have that knowledge. If you scroll down, you've got information about the parameters: how to identify your effect size, how to use Cohen's d, how to identify the variability (and we've actually ranked this in order from roughly the most reliable to the least reliable, depending on what information you've got access to), and then the significance level, the power, the sidedness of the test and the number of groups. So let's just go back to the app now.
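As a sanity check on the sort of numbers an unpaired t-test calculator produces, here is a sketch of the underlying computation (a standard textbook power calculation, not the EDA's code; the effect size of 2, standard deviation of 1, and 90% power target are just illustrative values). It evaluates the power of a two-sided, two-sample t-test from the noncentral t distribution and steps up the group size until the target power is reached.

```python
import math
from scipy import stats

def power_two_sample_t(n_per_group, effect, sd, alpha=0.05):
    """Power of a two-sided, two-sample (unpaired) t-test with n_per_group
    animals per group, a true difference in means `effect`, and common SD `sd`."""
    df = 2 * n_per_group - 2
    ncp = (effect / sd) * math.sqrt(n_per_group / 2)  # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Probability of rejecting under the alternative (noncentral t)
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

def n_for_power(effect, sd, alpha=0.05, target=0.9):
    """Smallest group size giving at least the target power."""
    n = 2
    while power_two_sample_t(n, effect, sd, alpha) < target:
        n += 1
    return n

# Illustrative values: effect of 2 mmol/L, SD of 1, 90% power
print(n_for_power(effect=2.0, sd=1.0))
```

This only covers the simple two-group case; as the decision tree advises, anything beyond a t-test is best taken to a statistician.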
So let's say, in this experiment, where we're measuring plasma glucose levels, that you're interested in an effect size of 2 millimoles per liter and your variability is 1. You can calculate the number of animals that you need per group; in this case that's 7. Then you can update your diagram: in the properties of the groups you can indicate that the group will contain 7 animals, and actually n is 7 too, because the number of experimental units is the number of animals in this specific experiment. You can do the same for the second group: 7 animals and 7 experimental units. All right, I'm just going to collapse the power calculation tab now. Now that you know how many animals you need per group, you can actually generate the randomization sequence. If you click on this... ah, you actually need to save your diagram first, so let's do that: cancel this, save the diagram, rename it if you want and call it whatever you want, okay, save it, and then generate your randomization sequence. The system is going to help you: it's going to ask you for the email address of the person that you want to send it to, and that's the person who's going to be helping you with the blinding. I'm just going to send it to myself for the purpose of the demonstration and click okay. The only thing that you're going to see as the user, the investigator, is a summary of what the randomization sequence contains. In this case it tells you that 16 animals have been randomized into two balanced groups. The number of animals has been rounded up: we asked for 7 per group, but it's actually randomized 16 into two groups, because the EDA only does balanced randomization. Basically, we've randomized the same number of animals into each group on each day, and that's why it's been rounded up to 16. So that's all you see, and now I'm just going to show you what the person who receives the randomization sequence will see; I need to share an Excel spreadsheet. So this is what the
person will receive: they get an email with an Excel spreadsheet, and in that spreadsheet they've got the randomization sequence. There are two different tabs, one for each day. On day one, four animals have been randomized to each of the groups, and on day two another four animals have been randomized to each of the groups, so you've got eight animals on day one and eight animals on day two. The person receiving that spreadsheet is going to have to input the unique animal identifiers, and then they'll either code the syringes for you or they'll inject the animals for you, so that you can remain unaware of the group allocation for the duration of the experiment. Let me go back to Chrome, there it is. So I think that's about it; I've shown you most of the functionality. On my system I've got an icon for the reports, which I mentioned earlier; this is not available yet, so you will not be able to see that icon if you log in right now. And then I can just go through the menus quickly. In there you'll find the standard save and save-as options in the EDA; you can go back to your experiment list and your account settings, where you can change your password, and you can log out. In the file menu you've got save and save-as again, and you've got share, which is how you share an experiment with other EDA users. Basically, what that does is make your experiment available to that user as a read-only version. If you update the experiment after sharing it, the person that you shared it with will always have access to the up-to-date version, but they will not be able to update it themselves; they'll be able to save it as something else, but they will not be able to change your experiment. In that menu you also have the option of exporting your diagram, which exports an image of the diagram as a PDF or as an SVG image. That will not export the information that's inside the nodes, in the properties. If you want to export
everything about the diagram, including the properties, you've got to use 'export diagram data'. That exports your experiment as an EDA file that you can save locally and share with whoever you want, but then if you want to read it, you'll have to re-import it into the EDA: open a blank canvas and then import that data to reload your diagram. And then here you've got your standard edit menu, which is pretty much what you'd find in something like PowerPoint: delete, cut, copy, paste, group and ungroup, undo and redo. You can also use all the normal keyboard shortcuts for these. I've shown the view menu earlier, so you can zoom in and zoom out. This one, 'clear prompt', will actually clear the prompts from the feedback: see that little prompt that's left there? If I do 'clear prompt', it removes it. So I think that's about it. If anyone wants me to demonstrate anything else, I'm happy to do it. Any questions? Alright, great. Yeah, we've had some questions come in, but feel free to continue to ask questions as we go through the ones that have come in so far. We've had a couple of questions about the security of the tool. You know, it's an online tool; could you comment on the privacy of the data that's being put up?
So basically, I mean, the EDA, sorry, the NC3Rs is a research funder, and the type of information that you'd input into the EDA is actually very similar to the type of information that you'd put into a grant application, so we've used the same level of security. We've had security consultants involved at every stage of the process to make sure that the EDA database is extremely secure. No one is able to access your diagrams unless you share them specifically, with me for example: you'd have to input my email address if you want to share a diagram with me, and there's no other way that I could access any of the diagrams. No one's accessing the diagrams, no one's looking at them. The information that's stored on the server is very secure: we're checking the security of the server, including the physical security of the server, on a regular basis, and we're doing penetration testing on the system on a regular basis as well. And despite all of this, if you still didn't trust us, that's fine; you don't actually need to leave your experiment on the server. As I mentioned, the information that you have to provide to create an account is very limited: we just need an email address and a password, and that doesn't need to be your institutional email address, it could be anything. And when you create your experiment, you don't actually have to save it onto the server. You can just save it locally: as I showed earlier, you can download the diagram data, save it locally as an EDA file, and then only upload it when you want to get feedback. When you get feedback from the system, that sends information to the server, but it's not going to leave any footprint on the server. So if you do it this way and don't store your experiments on the system, you're not actually leaving any footprint there. Great, great. So Tori asked what the diagram might look like if there were multiple outcomes in the experiment. If there are
multiple outcomes, okay, well, I can actually show this. So, depending on what you're doing: here you're taking a measurement, and we've recorded the plasma glucose levels. You could have another one there and say that during that measurement, I don't know what else you could measure, you could measure activity as well. So you could have it this way. Now that you have two outcome measures, you're going to have to specify which one is the primary one; let's say that the glucose level is the primary one. And then you're going to have to specify what you're going to do with the outcome measures: whether they're both going to be analyzed in the same analysis, using a multivariate analysis where the two outcome measures are part of the same analysis, or whether you're actually going to run different analyses for the two different outcome measures. So one might be your primary analysis with the primary outcome measure, and then you could indicate that you're going to run another, secondary analysis, which is more exploratory. So you've got analysis one and analysis two, and then you indicate what variables you want in each analysis; drug might be a variable of interest in the second analysis as well. So that's how you deal with it: you'd represent each outcome measure with a different outcome measure node. Great, great. Roy asked, does the system allow for the use of non-parametric analyses?
Yes. Basically, you're going to have to decide what you do: when you get the suggestion for a method of analysis that's compatible with your design, you normally get the parametric analysis and its non-parametric equivalent. We do recommend that you use parametric analyses because they're a lot more powerful, so basically we're trying to help you make your data fit the parametric assumptions, and we provide a lot of advice on data transformation and so on. Then, if all of this fails, you still have the non-parametric equivalent. In some instances the non-parametric equivalent does not exist, so there's nothing else you can do, and you might have to do a rank transformation and still use a parametric analysis. But that's going to be a choice that you make: we're providing you both options with a recommendation, and then you decide how you analyze your data. Right, so Peter asked, what does the diagram look like if you have a repeated measures factor? Okay, I can share that as well. Let me just move things around so that I can make a bit of room, right, okay. So let's say this measurement, the glucose levels or the activity, was repeated: instead of the simple measurement node I've put in there, I could actually have a repeated measurement node. I can delete that node, delete it, and I'm just going to delete the arrows there, and then from the node menu, as you can see, there's a different measurement node, and this one is a repeated measurement. It's just a slightly different node, basically, and in the properties you've got additional fields: you'll have to indicate how many times you're repeating that measurement, when the measurement is done, the blinding and so on. So you can just use one of these nodes. If time is a variable in your analysis, then you'll be asked to specify what the different timings are. The way that you'd indicate it, if time was a variable in your analysis:
let's say that you're going to repeat the measurement on day one and day two and you're going to analyze time as well. So you just add your timing variable, time, and then add the different timings at which you're going to do your measurement. That could be 'day one'... actually, I can't call it 'day one': if I do this I'm going to get feedback from the EDA saying that I can't have different categories with the same name. So I'm going to call them measure one and measure two, and that just indicates on my repeated measurement node that this is my repeated measurement, I've got two measures in this node, and that's going to be included as a variable in my analysis. So you'd do it this way, and you could have a combination of simple measurements and repeated measurements. For example, if you only have two measurements, you could decide to be more explicit and, rather than having a repeated measurement node, you could actually put two measurement nodes, one following the other. That's up to you; the repeated measurement node is basically there to save space in the diagram if you're doing the same thing over and over again. All right, so Ruby said that she's going to be teaching study design; can this tool be used for teaching purposes? Yes, it's actually been used a lot in that way already. It is really, really handy: I've actually run a workshop with statisticians using the EDA, and teaching experimental design with a diagram on the screen, to discuss all the different factors and all the different parameters that go into an experimental design, an experimental plan, is just really handy. So yes, it can definitely be used that way. All right, and then another question about an alternative use of the tool: do you think it would be useful at all to upload completed experiments to the tool, to get some possible feedback on the experimental design that was used, even after the fact? Well, you can, if you really want to do this, you can; I'm not sure that's
really going to be helpful. I'd much rather that people used it before they run the experiment, because it's just not going to be as helpful after the fact; there's no point only finding out that you've done things wrong, and it's better that you get the next experiment set up properly. All right, and then we had a couple of questions related to the fact that some of the errors refer to animals or animal characteristics. For researchers who are using the tool who aren't doing animal research, we mentioned how they can just ignore that feedback, but Julio in particular was wondering if you know of anybody who has used the tool for non-animal research and submitted it to a funder, saying 'just ignore where it says animals', and whether the funder has been okay with that, or whether there has been any confusion around that. Okay, well, the tool is fairly recent, so I don't know of any example of someone doing human experiments and then submitting an EDA diagram as part of a submission. And the short answer is no, you can't turn it off; that's part of the ontology, so that would have to be adapted if you were to make a version of the system for human research. So you're just going to have to consider that humans are big animals. All right, great. I think that was most of the main questions. Thank you so much, Natalie, for doing this guest webinar for us, and thank you all for participating in the webinar. As I mentioned, we hope to have the video posted to the Center for Open Science YouTube channel in a few weeks. So thank you all for joining us. Thank you very much, I hope it was useful.