All right, and we have our last speaker for the conference, Andreas Bender, who is going to do survival analysis with generalized additive mixed models. This is a pre-recording, so as it plays, you can chat with him in the chat window, and then we'll come back on for questions afterwards.

In this talk, I want to present the R package pammtools, which is joint work with Fabian Scheipl and Philipp Kopper, and provides an alternative to Cox regression type models for survival analysis using generalized additive mixed models. To provide some background, survival analysis is a branch of statistics that is applied when the outcome of interest is the time until some event occurs, and it requires specialized methods because, for example, the outcome cannot always be fully observed due to censoring or truncation, covariate values as well as covariate effects can change over time, and we might be interested in modeling multiple events or transitions between states, which then leads to competing risks and multi-state models.

Now, historically, survival analysis was dominated by Cox regression type models. However, many survival models can also be represented as a Poisson regression model on suitably transformed data, which leads to the model class of piece-wise exponential models, or extensions thereof that I call piece-wise exponential additive mixed models. The big advantage of this model class is that any method, or implementation thereof, that supports optimization of the Poisson likelihood can also be used for the different types of survival analysis. The focus of the pammtools package is on generalized additive models via the mgcv package.

The general workflow using this model class is illustrated here: on the left-hand side, we have some input data in standard time-to-event format that we then transform to the so-called piece-wise exponential data, abbreviated PED. While different survival tasks require different data transformations, for example, the transformation required for right-censored data is different from the one required for competing risks data, the general principle is always the same. Once we have the transformed data, we can select any algorithm that supports optimization of the Poisson likelihood, in the middle part of the graphic, for the actual computation, and then it is always handy to have some post-processing convenience functions, for example, for prediction or visualization.

The data transformation required to turn a survival task into a Poisson regression task is illustrated here: on the left-hand side, we have the data in standard time-to-event format with one row for each subject, and we transform this data by dividing the follow-up into three intervals, from time 0 to 1, 1 to 1.5, and 1.5 to 3, which results in the data set on the right-hand side. The first three columns are just there for bookkeeping and indicate the subject and interval, and then we create a new pseudo status variable that takes the value 0 if the subject survives an interval and 1 if the subject experiences the event in the interval.
This new status variable will enter the Poisson regression as the outcome. We also create a variable that indicates how much time the subject spent in the respective interval, which will enter the analysis as an offset, and a variable that represents time in the respective intervals, which will be used to estimate the baseline hazard and can also be used to estimate time-varying effects by interacting this variable with other covariates. Similar data transformations can also be applied to competing risks data and other settings.

While this data transformation is conceptually straightforward, it is nevertheless cumbersome, which is one of the reasons I originally designed the pammtools package. Among other things, it facilitates data transformations for different survival settings using the as_ped function, and it provides a couple of post-processing functions, for example, for prediction or model evaluation, which need to be aware of the original data transformation, and there are a lot more convenience functions.

I will now show some examples of how the pammtools package can be used for survival analysis. I look here at data of patients who had a tumor in the stomach area removed, and whose survival status was tracked for multiple years, given here in the days and status variables. We also have some covariates, like age, sex, and complications, the latter indicating whether a complication occurred during the operation, yes or no.

The first step in the analysis with piece-wise exponential models is the data transformation, and here we use the as_ped function, which in this case only requires the original data set and a formula indicating the variables that store the time and status information; by default, it will use all unique event times to split the data set into intervals. A subset of the resulting data is illustrated here, and importantly, we have the tend variable, which is the representation of time in the respective intervals, the offset, and the pseudo status variable that will be used as the outcome.

Given this transformed data, we can now estimate interval-specific hazards conditional on covariates, for example, here using the gam function from mgcv. I specify ped_status as the outcome and model the baseline hazard as a non-linear function estimated by a thin plate spline, and the same goes for the age variable; we also include the sex and complications variables. You can see in the summary output here that the baseline hazard was estimated as a non-linear function with four degrees of freedom.
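To make this concrete, here is a minimal sketch of the transformation step, assuming the tumor data that ships with pammtools (with the days and status variables mentioned above) and the column names that as_ped produces by default:

```r
library(pammtools)
library(survival)

# Stomach-area tumor data shipped with pammtools
data("tumor", package = "pammtools")

# Transform to piece-wise exponential data (PED); by default the follow-up is
# split at all unique event times:
ped <- as_ped(tumor, Surv(days, status) ~ .)

head(ped)
# Relevant columns include:
#   tstart, tend - interval borders; tend is used to model the baseline hazard
#   interval     - interval label (bookkeeping)
#   offset       - log of the time spent in the interval (Poisson offset)
#   ped_status   - pseudo status: 0 = survived the interval, 1 = event in the interval
```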
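And a sketch of the corresponding model fit via mgcv, continuing from the ped object above; the exact smooth specification here is my assumption, not necessarily the one shown on the slide:

```r
library(mgcv)

# Piece-wise exponential additive model (PAMM): Poisson likelihood with the
# log of the time spent in each interval as offset; s(tend) estimates the
# (log) baseline hazard, s(age) a non-linear age effect.
pam <- gam(
  ped_status ~ s(tend) + s(age) + sex + complications,
  data   = ped,
  family = poisson(),
  offset = offset
)

summary(pam)  # edf of s(tend) reflects the flexibility of the baseline hazard
```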
Once the model is estimated, we usually want to visualize the hazard or survival probability for different time points and covariate specifications. The workflow we propose is to create a new data set using the make_newdata function, in the second row here, where we specify the covariate values for which we want to make predictions, and then to use one of the add functions, here for example the add_hazard function, to which we provide the estimated model. As a result, we get a data frame similar to the one illustrated here, where for each value of the complications variable and for each time point we get the hazard and the respective lower and upper confidence limits. The same workflow can be applied to obtain survival probabilities by replacing the add_hazard function with the add_surv_prob function. In both cases, for hazards and survival probabilities, the returned object is a tidy data frame, which can then be used to directly visualize the estimated quantities, for example, using ggplot2 as illustrated here.

To extend the current model to a non-proportional hazards model, we can integrate time-varying effects using interactions between the time variable and other covariates. In this example, I fit a stratified model where we estimate one baseline hazard for each value of the complications variable, resulting in two baseline hazards for the groups of patients with and without complications, which again can be easily visualized either using the same techniques as before or the higher-level convenience function gg_slice, as illustrated here, and as you can clearly see, in this case we have non-proportional hazards.

And with this I would like to conclude this lightning talk. Unfortunately, I didn't have enough time to go through all the features; however, when you visit the package homepage, you will find in the articles section plenty of material to go through, starting from data transformation and basic modeling up to time-varying effects and competing risks. While the package is currently focused on generalized additive models via the mgcv package, we are starting to extend the package and make it more extensible, for example, by making the respective functions S3 generics such that other algorithms can be included more easily. Finally, if you are interested in pammtools or have questions, don't hesitate to reach out, and thanks for watching.

Thanks Andreas, thanks for that talk, that was great, and it looks like most of the questions were answered, because I don't see any other questions posted, but thank you for taking part and allowing us to share in your work. Yep, thanks for letting me participate, great conference. Thank you very much, and with that we'll start heading over to the closing statements in a couple of minutes or seconds; I'm just taking a few moments to gear up, but we'll switch over there. Sure. Thanks, Rapa. Okay, I will end the session.
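For illustration, a minimal sketch of the prediction and plotting workflow described in the talk, continuing from the ped and pam objects above; the covariate grid and the result column names (hazard and surv_prob with their lower/upper bounds) are my assumptions about the output of the add functions:

```r
library(dplyr)
library(ggplot2)

# New data: all interval end points, crossed with both levels of complications;
# make_newdata() fills the remaining covariates with sensible default values.
ndf <- ped %>%
  make_newdata(tend = unique(tend), complications = unique(complications))

# Add hazard and survival probability estimates plus confidence intervals:
haz  <- ndf %>% add_hazard(pam)
surv <- ndf %>% add_surv_prob(pam)

# The tidy data frame can be plotted directly:
ggplot(surv, aes(x = tend, y = surv_prob)) +
  geom_line(aes(col = complications)) +
  geom_ribbon(aes(ymin = surv_lower, ymax = surv_upper, fill = complications),
              alpha = 0.2) +
  labs(x = "time", y = "survival probability")
```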
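And a sketch of the stratified, non-proportional hazards model mentioned towards the end; the by-factor smooth specification and the gg_slice call are my assumptions about how such a model and plot would be set up, not the exact code from the slides:

```r
# Stratified PAMM: one (log) baseline hazard per complications group, via a
# factor 'by' smooth plus the corresponding main effect.
pam_strata <- gam(
  ped_status ~ complications + s(tend, by = complications) + s(age) + sex,
  data   = ped,
  family = poisson(),
  offset = offset
)

# Higher-level convenience plot of the two baseline hazards over time
# (assumed interface: data, model, the term to plot on the x-axis, and
# make_newdata-style specifications for the covariate grid):
gg_slice(ped, pam_strata, "tend",
         tend = unique(tend), complications = unique(complications))
```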