Welcome everyone. My name is Sebastian Weber. I work as a biostatistician for Novartis Pharma in Basel, and today I would like to introduce the R Bayesian Evidence Synthesis Tools, RBesT, the package for which I am the main developer. RBesT has been designed for applications in drug development. The primary application when we wrote the package was the use of historical control data from past clinical trials in future clinical trials, with the aim of reducing the sample size in the control group while maintaining statistical power. By now we see various further applications in drug development, as listed here.

The workflow we have in mind when using RBesT to design a new study is that you would first, outside of the domain of the software, assess the historical control data for relevance to your new study. You would look at the historical data and assess the exchangeability assumption, essentially from a model perspective, which comes down to questions around the similarity of the historical trial populations to your new study in terms of enrollment criteria and the like. This assessment is then translated into your assumptions on the between-trial heterogeneity of the model. Once you have compiled the data set and made this assessment, you can run the analysis to obtain an informative prior in parametric form: you analyze the historical data using MCMC with the gMAP command, approximate the MCMC result with a parametric density using the automixfit function, and finally, we strongly recommend, you robustify the informative MAP prior obtained this way.

Once you have your MAP prior, RBesT supports you in evaluating the frequentist design properties of a trial analyzed this way; for that, the oc1S and oc2S functions are available. Finally, RBesT can also support you in running the final trial analysis, with the postmix command.

We will discuss the example of the introductory vignette on a binary endpoint, in the indication of ankylosing spondylitis, published in the Lancet in 2013. This was a Novartis-sponsored, double-blind proof-of-concept study testing secukinumab against placebo. The endpoint is a binary response, ASAS20 at week 6, so higher response rates are better. The design was fully Bayesian, and the success criterion was that with at least 95% probability the response rate in the active group exceeds the response rate in the placebo group. For the placebo group, a meta-analytic predictive prior was used, as you see here, and for the active group a prior which adds essentially no information to the analysis was used. The historical data used to formulate the MAP prior you see on the right. The randomization ratio in the trial was 4 to 1, so 24 patients were put on active treatment and only 6 on placebo.

When you load the RBesT package, the dataset AS, which is analyzed in this example, is immediately available. Each row of the data set contains the summary data of one historical study: the number of patients in the control group and the number of responders among them. Overall, this data set comprises more than 500 patients of historical data.
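As a quick orientation, this is a minimal sketch, assuming RBesT is installed from CRAN:

    library(RBesT)

    head(AS)   # one row per historical trial: study, n (control group size), r (responders)
    sum(AS$n)  # in total, more than 500 historical control patients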
In RBesT we have implemented the generalized meta-analytic predictive (MAP) model, a hierarchical model to obtain the predictive distribution of the mean parameter of a new study.

So let Y_h be the control group summary data of the H historical trials; the data of each historical study is then modeled with a likelihood f and a trial-specific control parameter theta_h. The same model holds for the future study, for which you do not yet have the data in hand; you can think of this data as being essentially missing in the context of this model, and we denote the parameters of the future study with a star. Now we have many different control parameters, and we put forward the exchangeability assumption: we transform each theta_h with a link function g to a link scale, where we assume that these transformed parameters all come from a common normal distribution with population mean beta and between-trial heterogeneity tau. RBesT supports the canonical combinations of likelihood and link function, namely a binomial likelihood with logit link, a normal likelihood with known standard deviation and identity link, and a Poisson likelihood with log link.

For the population mean parameter beta a prior is required, but the most important prior of this model is the one placed on the between-trial heterogeneity parameter tau, because in most cases we may have only three or even just two studies for the analysis, and then this prior becomes relatively influential. For guidance on choosing it, I refer you to the documentation of the gMAP command in RBesT. The parameter tau is central to this model: whenever tau approaches zero, the model essentially pools the available information and we get unrestricted use of the historical data; whenever tau approaches very large values, the model essentially stratifies the data and the historical data carries no information for any future study.

The evidence synthesis in RBesT is done with the gMAP command, and here you see an example of how you could analyze this data set. Once you have run the MCMC analysis, we recommend plotting the results, for example with this command, which shows the model estimates in comparison to the stratified estimates from each study. The dashed lines are the stratified estimates and the solid lines are the model estimates. The model estimates are always shrunk towards the overall population mean, shown in the middle, and they also have much shorter credible intervals, as they benefit from the information of all the other studies through the model. The key results of this analysis are, for one, the population mean estimate, which is the typical response rate estimate; more interesting, though, is the MAP estimate, which includes, in addition to the uncertainty in the estimation of the population mean, the between-trial heterogeneity, so that it has a much wider credible interval.

At this stage the analysis result is an MCMC sample, for which you see here the histogram of the four chains run by default; in this form it is still very inconvenient to communicate or to pre-specify in a protocol. We therefore need a parametric approximation in order to state exactly what our prior for the new study is, and this is something RBesT does well: it provides conjugate mixture priors which can be fit to these MCMC results.
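To sketch this section in code: in the notation of the talk the model reads

    Y_h | theta_h  ~  f(y_h | theta_h),   h = 1, ..., H
    g(theta_h)     ~  Normal(beta, tau^2),   and likewise g(theta*) for the new study

and the analysis steps could look as follows, along the lines of the introductory vignette; the HalfNormal prior on tau and the prior scale for beta shown here are one plausible choice, not the only one:

    library(RBesT)
    set.seed(34563)  # MCMC analysis, so fix the seed for reproducibility

    # binomial/logit MAP analysis of the historical control groups
    map_mcmc <- gMAP(cbind(r, n - r) ~ 1 | study,
                     data       = AS,
                     family     = binomial,
                     tau.dist   = "HalfNormal",
                     tau.prior  = 1,   # prior on the between-trial heterogeneity tau
                     beta.prior = 2)   # scale of the normal prior on beta

    plot(map_mcmc)$forest_model       # stratified (dashed) vs. model (solid) estimates

    # parametric approximation of the MCMC sample, then robustification
    map        <- automixfit(map_mcmc)                       # conjugate beta mixture
    map_robust <- robustify(map, weight = 0.2, mean = 0.5)   # add a 20% vague component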
So why do we use mixture priors? You can see that if you were to use only a single beta prior, moment-matched to the MCMC sample, that would be a very inaccurate description of the MCMC result, whereas a two-component beta mixture prior is already relatively accurate, and a three-component beta mixture prior captures essentially the same information; using even four components would not add any additional accuracy here.

Once you have this parametric prior in hand, RBesT supports you in evaluating the trial design. The idea is that the informative MAP prior enables unequal randomization by substituting control group sample size with prior information: the trial power is maintained at a reduced control group sample size due to the use of the informative prior in the final analysis. The way to go about this is to first use frequentist design principles to determine your starting N; for the ankylosing spondylitis example you would conclude that roughly 24 patients are needed per group. The question then becomes by how much we can reduce the control group by using the informative prior. For that, RBesT supports the concept of the effective sample size, a measure of the informativeness of your prior, and both steps, summarizing the MCMC result from gMAP into parametric form and then calculating the effective sample size, are done in just two lines of code with RBesT. We see that we have an effective sample size of 39, which is a lot less than the more than 500 historical patients we started with, but still a lot of information compared with the 24 patients we would need for the control group under a frequentist design without any informative prior.

Next, RBesT allows you to compare the operating characteristics of various designs in a very straightforward way. RBesT calculates these quantities behind the scenes analytically, and as such very fast and accurately, so instead of looking at tables, as would be typical for such operating-characteristics calculations, you can even use graphs. You can then compare the different designs by their type I error, which is the frequency of a Go decision whenever the response rates are exactly the same in both groups. With the flat prior the type I error stays below five percent, whereas when you use a MAP prior for the control group, the type I error increases considerably as we move away from what the prior expects us to see. Remember, in the historical data we saw a response rate of around 25 percent; whenever a future control group shows a response rate of more than 40 percent, the prior will drag the posterior towards what we have seen in the past and thereby lead, with a certain chance, to a Go decision in spite of the two rates being the same. The robust MAP prior is always a compromise between the two.

The type I error is, so to speak, the risk of using this approach, whereas the power, the frequency of a Go decision whenever there is a true difference between the two groups, is the gain. As we can see, at a given difference delta between the two response rates, using no informative prior gives much lower power than using the informative priors, and the robust MAP prior is again a compromise between the flat prior and the MAP prior.
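These two steps might look as follows in code, continuing from the objects above; the flat treatment prior, the grid of true response rates, and the outcome counts in the last line are illustrative choices patterned on the talk, not fixed by it:

    # effective sample size of the MAP prior (the talk quotes 39)
    ess(map)

    # success criterion: P(p_active > p_placebo) >= 0.95
    poc <- decision2S(0.95, 0, lower.tail = FALSE)

    # 4:1 design: 24 on active with a flat Beta(1,1) prior, 6 on placebo with the robust MAP
    flat       <- mixbeta(c(1, 1, 1))
    design_map <- oc2S(flat, map_robust, 24, 6, poc)

    # type I error: Go frequency when both true response rates are equal
    p_truth <- seq(0.1, 0.6, by = 0.05)
    typeI   <- design_map(p_truth, p_truth)

    # power: Go frequency for a true improvement of, say, 25 percentage points
    power <- design_map(p_truth + 0.25, p_truth)

    # final analysis once the trial reads out (counts here are made up)
    postmix(map_robust, r = 1, n = 6)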
This brings me to the summary. RBesT facilitates the application of the meta-analytic predictive approach in clinical trials. It supports binary, normal, and Poisson endpoints, and via a Poisson approximation it can also support time-to-event data using a piecewise-constant formulation. RBesT tries to make these complex computations easy and straightforward for users, where the main user we have in mind is a trial statistician. By now there is a public package home page where users can provide feedback and interact with the developers on GitHub, and there you can also find more details on the package in the Journal of Statistical Software publication. With that, I'd like to thank you for your attention and I hope to hear from you soon. Bye.