Hi everyone, this is Shubram Pandey. Good morning, good afternoon, and good evening to all of you, and welcome to the SMAR conference. Today I will be presenting my R package for the meta-analysis of multiple survival curves; as a result, you get the pooled survival probabilities and the pooled Kaplan-Meier curve. I hope you can see my screen; I'll put the slides into presentation mode. The title of this presentation is "Meta-analysis of survival curves using the multivariate methodology of the DL (DerSimonian-Laird) method". There was a paper published in 2014, and we have used the methodology from that paper to write the R code and build our R package. Before starting, a brief introduction about myself. I'm Shubram Pandey, and I currently hold multiple positions: Managing Director at Heorlytex, and Head of Modelling and Advanced Analytics at Pharmacoevidence. Both Heorlytex and Pharmacoevidence are health economics and outcomes research (HEOR) companies. I'm a statistician by training and a health economist by profession, with more than half a decade of experience in the core HEOR domain. My expertise is in developing health economic models from scratch for various indications and vaccines, and I am also among the few people globally transforming Excel-based models into R and R Shiny-based models, as R models are the future. I'm also the author of the metaSurvival R package, which is available on CRAN and which I'll be presenting in this talk. Just to give you a brief background: there are a few existing methods for pooling survival probabilities at multiple time points.
Meta-analyses of survival studies have focused mainly on combining studies that compare two arms, where we estimate a pooled measure of the intervention effect, for example a hazard ratio. But sometimes the pooled measure of the intervention effect does not reflect the full picture: it can give you the direction of the effect, but it will not tell you how the different curves behave over time. An earlier proposed method for pooling survival probabilities treats the survival probabilities as proportions and combines them at each time point, using either fixed-effect univariate methods or the DL method for random effects. However, this has one disadvantage: the correlations between the survival probabilities at multiple follow-up time points are not accounted for, because each time point is analysed independently. Also, if we go with parametric approaches, they make a strong assumption about the shape of the survival curve, which may or may not be appropriate for the data. The methodology explained in this paper proposes a distribution-free summary survival curve assuming random effects, where the summary survival curve is derived using the product-limit estimator. The advantage of this method is that no assumption about the shape of the survival curves is needed: you don't have to assume exponential, Weibull, log-logistic, log-normal, or any other distributional shape. To assess the between-study heterogeneity in the estimation of the pooled conditional survival probabilities, a recent extension of the DL methodology is used.
This recent extension of the DL method also allows the estimation of the mean and median survival times. But there is one limitation: the method should be used cautiously in the case of rare events or a small number of studies, because the DL method assumes either that the estimates to be pooled are normally distributed, or that the number of studies is sufficient for the central limit theorem to imply that the weighted average is approximately normally distributed. Since this paper uses a recent extension of the DL method, the same assumption applies here as well. So where events happen rarely across the studies, or the number of studies is small, the results should be interpreted with caution. I'm not saying the results would be wrong, but they should be interpreted with caution. The metaSurvival R package is available on the CRAN repository, and the development version, which I am updating continuously with different functionalities, can be installed from GitHub; I'll be showing both links here. The goal of this package is to make it easy to run analyses with the methodology discussed in the reference paper, which is the paper I am using. The main function of this package is msurv, which is used to estimate the pooled Kaplan-Meier curve and summary survival estimates. If I show you the CRAN and GitHub pages: as you can see, this is the CRAN page, and from here you can download the package either in tar.gz format or install it via install.packages() in R. If you want to install the development version, you can go to this repository and download the development version of the metaSurvival package from there.
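The two installation routes mentioned above can be sketched as follows. This is a minimal sketch: I am assuming the CRAN package name is metaSurvival as described in the talk, and the GitHub repository path below is a placeholder, since the exact URL is only shown on the slide.

```r
# Install the released version from CRAN (package name as given in the talk):
install.packages("metaSurvival")

# Or install the development version from GitHub.
# "user/metaSurvival" is a placeholder for the repository shown on the slide:
# remotes::install_github("user/metaSurvival")

library(metaSurvival)
```

Either route gives you the same main entry point, the msurv() function, so the CRAN release is the simpler choice unless you need a feature that only exists in the development version.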
And as you can see, these are the download statistics: around 8K downloads so far, 50-plus downloads over the last week, and 200-plus over the last month. In case of any issues, you can go to this link, where I have explained how to load the dataset and how to run the different analyses. Here, the grey lines indicate the individual Kaplan-Meier curves, while the red one is the pooled Kaplan-Meier curve for the random-effect model; and this one is for the fixed-effect model. Now, coming to how to use this package. Assume you have Kaplan-Meier curves from different published sources. As a first step, you digitize all the Kaplan-Meier curves using any web plot digitizer software and arrange the data in the format used by the Guyot algorithm. Second, you generate the pseudo individual patient data (pseudo-IPD) using the Guyot algorithm. Third, if a number-at-risk table is given in the published paper, that is completely fine; if not, you have to estimate the number-at-risk table using the summary function in the survival package, because the metaSurvival package takes the number-at-risk table as one of its inputs. So: first, digitize the Kaplan-Meier curve; second, generate the pseudo-IPD; third, the number-at-risk table. I'm being repetitive because it's good to familiarize yourself with how easily you can use this package. And believe me, whoever is doing meta-analysis in this domain usually comes across this situation: we have five curves, so how can we get a pooled estimate? Once you have a pooled Kaplan-Meier curve, you can derive anything from it, such as cumulative events or the cumulative hazard.
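The third step above, estimating a number-at-risk table when the paper does not report one, can be sketched in R with the survival package. The pseudo_ipd data frame here is a made-up stand-in for the reconstructed times and event indicators of one digitized arm, and the time points are illustrative:

```r
library(survival)

# Hypothetical pseudo-IPD for one digitized arm
# (time in months, event: 1 = event, 0 = censored)
pseudo_ipd <- data.frame(
  time  = c(2, 3, 5, 7, 8, 10, 12, 15, 18, 24),
  event = c(1, 0, 1, 1, 0, 1, 0, 1, 0, 0)
)

# Fit a single-arm Kaplan-Meier curve to the reconstructed data
fit <- survfit(Surv(time, event) ~ 1, data = pseudo_ipd)

# Number at risk at the time points reported under the published curve
summary(fit, times = c(0, 6, 12, 18, 24))$n.risk
```

Repeating this per arm gives the number-at-risk input that the package expects alongside the pseudo-IPD.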
So a pooled Kaplan-Meier curve can be very, very helpful for you, and that's why this package will be useful. The last step is to collate the pseudo-IPD data and the number-at-risk table into one file. The sample data format is given in the package, and you collate the pseudo-IPD and number-at-risk table in that same format. Then you can use the msurv function to estimate the pooled survival probabilities and the pooled Kaplan-Meier curve. It also gives you goodness-of-fit statistics, such as the heterogeneity I-squared, which you can use to see how good your model fit is. Now, if you want to avoid estimating the number-at-risk table and avoid writing the R code yourself (you still have to digitize the Kaplan-Meier curves and generate the pseudo-IPD), this package comes with an R Shiny app as well, which I can show here. This is the metaSurvival R Shiny app, which I developed. With this app, you just need to upload the pseudo-IPD data; there is no need to estimate the number-at-risk tables. I will show you the app live, but you can already see here how convenient it is to use: you get the estimates, which you can download in CSV and PDF for your reference. So this R Shiny app makes using the R package very easy in terms of preparing the dataset: the user only needs to upload the collated pseudo-IPD file, without any number-at-risk table, and can download the results in Excel format. For now, this app is free to use, but as soon as we add more functionalities, like extrapolation techniques using parametric curves and splines, and then cure-fraction models...
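The collate-and-run step above can be sketched as follows. The data layout (time, event, and a study label) follows the sample format shown later in the demo, but the values are made up, and the msurv() argument names are my assumption from the talk, not a verified signature; check ?msurv for the actual interface:

```r
library(metaSurvival)

# Collated pseudo-IPD from several digitized curves: one row per
# reconstructed patient, with a label identifying the curve/arm.
collated <- data.frame(
  time  = c(2, 5, 7, 3, 6, 9),
  event = c(1, 1, 0, 1, 0, 1),
  label = c("StudyA", "StudyA", "StudyA", "StudyB", "StudyB", "StudyB")
)

# Hypothetical call; argument names are assumptions, see ?msurv:
# res <- msurv(study = label, time = time, data = collated)
# res  # pooled survival probabilities, pooled KM curve, I-squared, etc.
```

The point is simply that after digitizing and reconstructing pseudo-IPD, one collated file plus one function call produces the pooled estimates.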
If we add more functionalities, like dynamic reports, then we are planning to move those later functionalities to a subscription-based model. But for now, the app is free to use, and it is hosted on an AWS server. So I'll quickly take you through the metaSurvival R Shiny app. This is the landing page. You cannot skip this small dialog box, and believe me, we are not collecting any kind of data; we just want to know who is using our app. We will not send any spam mails, but you do need to enter a valid mail ID to access the platform. For now, I'm just entering mine. When you enter, you will see a welcome message, and here are the logos of the companies I work with. On the left-hand side we have some settings to view the data, download the sample data, and so on; on the right-hand side we will get the results, which are generated after running the analysis. The platform takes data in two formats. If you don't have data and just want to interact with the platform to see how it works, you can click on the sample data available in the platform, and it will be loaded directly. Then you can select the treatments to be pooled in the analysis: simply remove whatever you don't want to pool, or select all from here. Then you can view the data: if you click on View Data, you can see whatever data you have. As I mentioned, you just need to upload the collated pseudo-IPD, so you can see time and event as columns, and then a third column, label, which is a unique identifier indicating which arms are to be pooled in the analysis.
And you can see that this is just pseudo-IPD data, where the events are coded as one and zero, we have the time at which each event happens, and then the third column, the label. The columns should be named in exactly this way. You can hide this data again by clicking the View Data button. Then you can click on Download Sample Data; the sample data in the platform will be downloaded in CSV format, and you can see time, event, and label here. The column names should stay the same, but you can replace the contents with whatever data you want to analyse. Once you have selected the sample data and clicked to run the analysis, you will see the analysis being done, and then the results. The first table gives the mean, median, and restricted mean estimates: the medians from the fixed-effect and random-effect models, with their point estimates and confidence intervals, and then the restricted means from the FEM and REM with their point estimates and lower and upper bounds. You can download this table in CSV and PDF format if you want to share it with someone. Then we have the goodness-of-fit statistics, like the heterogeneity Q, H-squared, and I-squared; all these statistics are explained in the paper. Then we have the pooled Kaplan-Meier plots from the fixed-effect and random-effect models, which you can download via Save Image. The grey lines indicate the Kaplan-Meier curves from the individual studies, and the red one with the confidence bands represents the pooled Kaplan-Meier curve. Next are the pooled tabular results for the fixed effect and the random effect.
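The upload format described above can be sketched as a small CSV. The column names time, event, and label are the ones shown in the app's sample data, while the rows here are invented for illustration:

```r
# Collated pseudo-IPD in the layout the app expects:
#   time  - time at which the event or censoring happens
#   event - 1 = event, 0 = censored
#   label - unique identifier for the curve/arm to be pooled
sample_ipd <- data.frame(
  time  = c(1.2, 3.4, 5.0, 2.1, 4.8),
  event = c(1, 0, 1, 1, 0),
  label = c("Trial1_ArmA", "Trial1_ArmA", "Trial1_ArmA",
            "Trial2_ArmA", "Trial2_ArmA")
)

write.csv(sample_ipd, "collated_pseudo_ipd.csv", row.names = FALSE)
```

Stacking every digitized arm into this one long file, distinguished only by the label column, is all the data preparation the app requires.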
In this result you will see the time, the pooled survival probability at that time point, and the lower and upper bounds; again, you can download this table in CSV and PDF format. Then there are the corresponding results from the random-effect model. At last, you have one table for data verification, which tells you whether all the studies you included were found to be okay and included in the analysis. What is the reason behind this? Let's say, for example, there is some problem in the data, maybe by mistake: a negative sign at a time point, or some additional number or text in the event column that should only contain ones and zeros. Then that study will be dropped from the analysis, just to avoid breaking the app, and you will see "not okay" here. If there is an error message here, you can check it and resolve the issue. Now, suppose you can see multiple curves here and want to pool, say, only three of them: you just need to remove the others from here and click to run the analysis again. Now you can see three Kaplan-Meier curves, since we selected only three studies, plus the red ones from the fixed-effect and random-effect models; and in the data verification there are now only three studies, all of which have passed the checks. Coming to the other functionality of this app: that was the sample data. If you have your own dataset, you can click here on Custom Data, and once you do, another option appears: Upload Data in CSV Format. So you can download the sample data from here, arrange your own data in that format, and then upload it.
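The data-verification step described above can be sketched as a couple of simple checks. This is my illustration of the kind of rules involved (non-negative times, strictly one/zero event codes), not the app's actual code:

```r
# A study passes verification only if its times are non-negative numbers
# and its event column contains nothing but 0s and 1s.
check_study <- function(df) {
  ok_time  <- is.numeric(df$time) && all(df$time >= 0)
  ok_event <- all(df$event %in% c(0, 1))
  ok_time && ok_event
}

good <- data.frame(time = c(1, 2, 3),  event = c(1, 0, 1))
bad  <- data.frame(time = c(1, -2, 3), event = c(1, 0, 2))

check_study(good)  # TRUE
check_study(bad)   # FALSE: negative time and an event value of 2
```

A study failing checks like these is dropped rather than crashing the analysis, which is why the verification table is worth a glance whenever a curve you uploaded is missing from the pooled plot.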
I prepared one such file for this presentation, a sample dataset from GitHub. Once you upload it, you can see that the treatment names and so on have changed, and you can run the analysis. It's running... and now you can see a different set of results, with multiple Kaplan-Meier curves. I don't see the median estimates here, because the median is not reached in the pooled Kaplan-Meier curve; that's why the median is not shown. These are the results, and this is the data verification; you can simply change the selection from here and run the analysis again. So that is it from the R Shiny app perspective. As a next step, we are planning to include extrapolation techniques and to make the plots interactive rather than static, because the static ones are mostly used for reporting while the dynamic ones are for interactive exploration. We will provide two more options, so the user can choose interactive or static, and then we will add extrapolation techniques using parametric models and splines; the results from the extrapolation can then be used directly in economic models as well. That's all from my side. If you have any questions or concerns related to this app, please feel free to raise an issue on the GitHub page or drop me a mail at shubram.pandey@heorlytex.com. I'll be very happy to answer your questions, and I'll try to respond as soon as I can. Have a wonderful conference. Thank you. Bye. See you again.