Okay, very good. Well, thanks to everyone who's here for hanging around until the very end of a long but very interesting and productive day. I'm going to talk this afternoon about randomization. Randomization is something we all know about in clinical trials; everybody pays lip service to it. Yes, we should randomize the patients. But in my experience it's very rare that people worry about it more than that, or think about what's involved. What I hope to do in the next 18 or so minutes is tell you about some of the issues involved in randomization in complex clinical trials, and to acquaint you with a method that you may not have heard of before, but which is very useful and should be more widely known than it is.

First, a couple of quick words about the need for efficient randomization. The figure on my top line is a bit out of date; I think I got it four or five years ago. But at that time the top 50 life-sciences companies spent about a hundred billion dollars annually on research and development, and I'm sure the number is higher now. As we heard even earlier this afternoon, many of you know that the randomized controlled trial, or RCT, is widely considered the gold standard of clinical trials, because it minimizes bias and allows rigorous, probability-based inference comparing the outcomes of different treatments. Doing the randomization efficiently increases statistical power: it gives you more power for a specified number of patients, or, looking at it the other way, decreases the number of patients needed to achieve a specified power. So approaching randomization wisely can often get big improvements in bang for the buck.

So: randomization minimizes bias, as I just noted, and minimizes confounding if done properly. It also allows us to conceal the treatment allocation. We don't want the investigators, the patients, the clinical staff, or the evaluators to know
which of the patients received which treatment, and randomization helps conceal that information. And, as I said a minute ago, it permits scientifically valid inference.

I'm going to talk this afternoon about three basic types of randomization. The first is complete, simple, or unrestricted randomization. This one is the easiest to understand and implement, but it's very inefficient, because there's a great risk of unbalanced groups: groups of patients that differ substantially among the treatments, both in basic ways, such as gender and age, where purely random variability can produce surprisingly large differences from one treatment to another, and in more complex ways. These unbalanced situations can make it especially difficult to make the comparisons we would like to make.

A more complex randomization method involves stratified permuted blocks, where we take the patients and divide them into strata: groups that are relatively homogeneous on attributes that are important for the clinical study we're doing. Each block of similar patients is then allocated among the treatments; that is, we take a bunch of very similar patients and assign different patients within the group to different treatments. That blocking makes the playing field much more even: the groups the treatments are administered to are much more similar than with unrestricted randomization.

Last, and least known, is a relatively new approach: dynamic allocation, or covariate-adjusted randomization. This is valuable particularly when patient blocks can't be formed in advance. To perform this approach we need to select relative weights for imbalances; I'll say more about that in a minute.

To compare different randomization methods and plans, we need to know what our evaluation goals are. A randomization plan should minimize the imbalance among treatment groups with respect to factors of prognostic importance, and, in multicenter trials, with respect to institution. We don't want one treatment to be
administered mostly to people in one medical center and another treatment administered largely to patients in another medical center, for example.

There are several performance criteria of interest to us. We want to minimize the total imbalance of the design. We want to minimize the departure from the desired treatment allocation: we might want equal numbers of patients for each treatment, or in some cases we might want, for example, twice as many patients for the experimental treatment as for the control. We want to maximize efficiency, and, again, minimize selection bias and confounding, and enable blinding.

Okay, so complete randomization, which we've already talked about for a few minutes. Basically, we take each patient and independently randomize that patient among the treatments. The probabilities might be one to one, if we want equal numbers of patients for all treatments, or we might want more patients, say twice as many, for one treatment than another. It's simple to understand and execute, and it minimizes selection bias and confounding. But the cons, as I've mentioned, are the risk of imbalance on important factors, and that complete randomization doesn't take advantage of the similarities among patients that we use when we block.

So, the method I talked about a couple of minutes ago: we construct the blocks and then allocate treatments within each block, and this ensures that the treatments are administered to very similar groups of patients. The pros are that we have this nice balance, and we have increased efficiency due to the homogeneity within blocks; this minimizes, or if we're lucky can even eliminate, the departure from the desired treatment ratio. But a big con is that this requires patient information in advance to form the blocks. We need to take a large group of patients and break that patient population into blocks of very
similar patients, and this gets much more complicated as the number of factors we block on increases.

Okay, so now here's the big reveal; I'll be interested to know how many of you are aware of this method already. It's the method of dynamic allocation, or covariate-adjusted randomization, and the idea is basically this. I'm not going to give you any formulas, but I want to give you the ideas. Suppose I have the 15th patient in front of me, with 14 patients already recruited to the study, and I'm trying to allocate that 15th patient most effectively. Once I've done that, when the next patient shows up in a few days or a couple of weeks, I'll keep doing the same thing, iteratively. So whenever a new patient has to be incorporated into the study, in addition to those we already have, we first evaluate every possible treatment we could assign that patient to by calculating its total imbalance, which we'll call T_i.

The basic idea is that, to assign the patient most intelligently, we want to assign the patient to the treatment that has the minimum total imbalance among all the possible treatments. But there's a huge problem with that: if we do this kind of assignment, it's deterministic, not random, and deterministic assignment throws right out the window our ability to make probability statements about p-values and effects. So in order to retain validity, we must modify the assignment of each patient to the treatment with the lowest total imbalance by including some randomization in the process, and we'll talk in a few minutes about how to do that.

Allowing patients to enter the trial individually is a huge advantage, because in many trials we don't have the luxury of having our whole patient population in front of us in advance, where we could form the blocks. The approach also adjusts the allocation probabilities to reflect the current state, in other words, to reflect the assignments we have already made for the patients previous to the one we're working on now. The cons are, first, that this is a very complex process to implement and to analyze; well, it has to be complex, because the allocation is complex. It also requires weights in order to properly calculate the total imbalance. We'll talk a little bit about how to assign those weights, but not very much; that's a big topic for another talk.

So, for each new patient in dynamic allocation, using data from everyone who's been assigned so far, and this just repeats what I said a minute ago, we see how much imbalance is created by every possible treatment we could assign the patient to. The treatment imbalance is a linear combination of imbalances across factors: a weighted sum of four types of components, namely treatment group, stratum, site, and prognostic factors. We want to take account of the imbalance from all four of these
sources, and we want to assign the new patient to the treatment with the minimum total imbalance, mostly, but modified by putting enough randomness into the process to give us validity, which is what we'll talk about here.

In order to avoid making a deterministic, or nearly deterministic, assignment, several approaches have been studied, and I'm going to tell you very briefly about two of them, which I'll call dynamic allocation one and two. DA1 is "best and second best." If several treatments share the minimum total imbalance, we just randomize among those. But if there is a single treatment with the minimum total imbalance, then we randomize the patient among the treatments with that minimum and the second-lowest total imbalance; so we'll let the patient stray from the minimum, but not very far. The other approach, DA2, is dynamic allocation with complete randomization. With a specified probability, we assign the patient to a treatment with the minimum total imbalance, at random if more than one treatment shares the minimum; and with the remaining, complementary probability, we just assign the patient randomly among all treatments. We want that latter probability to be large enough to give us validity, but not too large; we want to keep a good, strong likelihood of going with the minimum total imbalance.

What about statistical analysis for these randomization methods? For complete randomization and stratified permuted-block randomization, everything is simple, intuitive, and well understood; these methods have been widely used for a hundred years, and software is available in all the packages that perform basic statistical analysis, because these are basic analyses. With dynamic allocation the analysis is complex; it has to be, in order to accommodate the complex randomization procedure. And there's a real danger I want to flag. You might say, well, I've assigned the patients by dynamic allocation, but the analysis is too hard, so I'll just run the analysis as if it were a permuted-block design. If you perform an incorrect analysis, the inferences can be substantially wrong, and that makes sense: if you randomize the patients in a certain way, the analysis has to reflect that fact.

I've got an example here of DA1, and I also have an example of DA2, but I don't really have time to go through them. I've written them up on the slide in a way that I think makes them easy enough to work through; it shouldn't take more than a few minutes to read them, as I would do now if I had five more minutes. The key principle in both examples is this: whether we perform dynamic allocation by best-and-second-best or by complete randomization with a relatively small probability, say 10 or 15 or 20 percent, in either case we have a high enough level of randomization to permit rigorous, scientifically valid conclusions, while at the same time there's a high enough emphasis on reducing the total imbalance to increase the accuracy of the statistical calculations of treatment effects and p-values.

I have a few words here about the suitability of R for performing dynamic allocation. I've found two R packages, and I'm sure there are several more, that perform dynamic allocation.
There's Minirand, and seek a lot. I've also listed down here a few key references. The first two are the seminal papers on this method, from the mid-1970s. Then there's a paper from about ten years ago that describes the errors that can pop up if we don't do the analysis correctly. Rosenberger and Lachin is a very good and well-known book on randomization in clinical trials, and Jin, Polis, and Hartzel is a relatively recent paper on these methods.

My next-to-last slide says that the relative performance of clinical trial designs depends on a lot of factors: the number of treatments, the allocation ratios, the number of factors, the relative weights of the four types of components in the total imbalance, and more. So this is a very complex problem, and our conclusions are that the performance of randomization designs is greatly influenced by many aspects of the specific study we're performing. Is permuted-block randomization possible if the patients arrive one by one and we have to assign them as they come in? The answer is no. When we can do both a permuted-block design and a covariate-adjusted, or dynamic allocation, design, when both of these are options, we have to ask how much better one performs than the other. For block designs, again, we have to keep in mind how long it takes to accumulate the patients to create the blocks. And remember that block designs and dynamic allocation designs are each not just a single design but a very wide class of designs in its own right. So my advice is to assess relative performance for the specific study, with the characteristics of your clinical trial, rather than going from general principles. Thank you very much.
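[Editor's illustration.] The two dynamic-allocation rules described in the talk can be sketched in code. This is a minimal sketch, not the speaker's exact formulation or the API of any R package: the imbalance measure (the spread of arm counts within the patient's level of each factor), the example weights, and the 0.8/0.2 split in `assign_da1` are all illustrative assumptions.

```python
import random
from collections import defaultdict

TREATMENTS = ["A", "B", "C"]

def new_counts():
    # counts[factor][level][arm] = number of enrolled patients with that
    # factor level who are already on that arm
    return defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

def imbalance_if_assigned(counts, patient, treatment, weights):
    """Total imbalance T for tentatively giving `treatment` to the new
    patient: a weighted sum of per-factor imbalances, measured here as
    the spread (max minus min) of arm counts within the patient's level
    of each factor.  Both the range measure and the weights are
    illustrative choices."""
    total = 0.0
    for factor, weight in weights.items():
        arm_counts = {t: counts[factor][patient[factor]][t] for t in TREATMENTS}
        arm_counts[treatment] += 1  # pretend the new patient joins this arm
        total += weight * (max(arm_counts.values()) - min(arm_counts.values()))
    return total

def assign_da1(counts, patient, weights, p_best=0.8, rng=random):
    """DA1 ('best and second best'): if several arms tie for the minimum
    total imbalance, randomize among them; otherwise randomize between
    the best and second-best arms (the 0.8/0.2 split is an assumption)."""
    scores = {t: imbalance_if_assigned(counts, patient, t, weights) for t in TREATMENTS}
    ranked = sorted(TREATMENTS, key=scores.get)
    ties = [t for t in TREATMENTS if scores[t] == scores[ranked[0]]]
    if len(ties) > 1:
        return rng.choice(ties)
    return rng.choices(ranked[:2], weights=[p_best, 1 - p_best])[0]

def assign_da2(counts, patient, weights, p_random=0.15, rng=random):
    """DA2 (dynamic allocation with complete randomization): with
    probability p_random assign completely at random, to preserve
    validity; otherwise pick an arm with minimum total imbalance,
    at random if several arms share it."""
    if rng.random() < p_random:
        return rng.choice(TREATMENTS)
    scores = {t: imbalance_if_assigned(counts, patient, t, weights) for t in TREATMENTS}
    best = min(scores.values())
    return rng.choice([t for t in TREATMENTS if scores[t] == best])

def enroll(counts, patient, treatment):
    """Record the assignment so later patients see the current state."""
    for factor, level in patient.items():
        counts[factor][level][treatment] += 1
```

Each new patient is scored against every arm, assigned by one of the two rules, and then recorded with `enroll`, so the next patient's scores reflect the current state, exactly the iterative process the talk describes.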
Thanks, Steve. One question: roughly how much do the DA1 and DA2 approaches reduce sample size?

That's a great question, and I don't have a number, but what I will tell you is that the reduction is likely to be very substantial, and here's what I mean. If we compare permuted-block designs with completely randomized designs, permuted-block designs will often reduce the number of patients needed to reach a specified level of accuracy by 25 to 50 percent, which is quite substantial. I don't have a corresponding number for dynamic allocation, but I should try to find one, because my intuition tells me that if we can keep the total imbalance very low, the groups of patients to which the treatments are applied are going to be very similar, more similar than with the traditional methods of complete randomization or permuted blocks. So I'm sorry I don't have a number, but I think it's certainly very much worth exploring, put it that way. And an increasing fraction of clinical trials are moving over to dynamic allocation as people wake up to the fact that it's there. My hope is that for some folks in the audience today, this may be a new tool for their toolkit that they hadn't been aware of; it is definitely highly worth considering.
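[Editor's illustration.] In the spirit of the talk's closing advice, to assess relative performance by looking at the specific study rather than general principles, the gap the answer alludes to can be explored with a quick simulation. This is a minimal sketch under simplifying assumptions: a toy two-arm trial, a dynamic rule that balances group sizes only, and illustrative parameter values throughout.

```python
import random

def simulate(n_patients=200, n_sims=500, p_random=0.15, seed=1):
    """Average final group-size imbalance |n_A - n_B| under (a) complete
    randomization and (b) a minimal dynamic rule: assign the patient to
    the smaller arm, except that with probability p_random the patient
    is assigned completely at random, to preserve validity.  Group size
    is only one imbalance component; a real assessment would also track
    strata, sites, and prognostic factors."""
    rng = random.Random(seed)
    sum_cr = sum_da = 0
    for _ in range(n_sims):
        cr = {"A": 0, "B": 0}   # complete randomization
        da = {"A": 0, "B": 0}   # dynamic rule
        for _ in range(n_patients):
            cr[rng.choice(["A", "B"])] += 1
            if rng.random() < p_random:
                arm = rng.choice(["A", "B"])
            else:
                smallest = min(da.values())
                arm = rng.choice([t for t in da if da[t] == smallest])
            da[arm] += 1
        sum_cr += abs(cr["A"] - cr["B"])
        sum_da += abs(da["A"] - da["B"])
    return sum_cr / n_sims, sum_da / n_sims
```

With these defaults, the dynamic rule's average final imbalance comes out far smaller than complete randomization's, while the 15 percent random assignments keep the allocation from being deterministic.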