Hello everyone, I'm Hilary, and I'll be presenting on the package glmmPen, which fits high-dimensional penalized generalized linear mixed models. The main purpose of the R package glmmPen is to fit high-dimensional penalized generalized linear mixed models, or pGLMMs. This package performs variable selection on both the fixed and random effects concurrently. In order to fit such pGLMM models, the package uses a Monte Carlo Expectation Conditional Maximization, or MCECM, algorithm. In this package, we utilize the existing tools RcppArmadillo and Stan in order to aid computational efficiency. By using RcppArmadillo, we can significantly reduce the required memory for, and increase the speed of, the maximization step. Using Stan helps us improve the computational efficiency of the expectation step. By utilizing the MCECM algorithm, as well as these computational efficiency tools, the glmmPen package can handle 50 or more fixed and random effects. Currently, the package handles the Binomial, Gaussian, and Poisson families with canonical links.

The glmmPen package performs variable selection in generalized linear mixed models using the function of the same name. The formula is designed to follow formula conventions similar to those of the lme4 R package, which is very popular for fitting GLMMs. The package can perform penalization using the MCP, SCAD, and lasso penalties. In order to perform model selection, the package can use several BIC-derived selection criteria, including the regular BIC, the BIC-ICQ, and the hybrid BIC (BICh). In order to calculate the BIC or BICh, we need to calculate the marginal log-likelihood, which we estimate using a corrected arithmetic mean estimator, or CAME, described by Pajor. We have found in simulations that the CAME estimator gives marginal log-likelihood estimates extremely close to the Laplace estimates used by the lme4 package in low dimensions.
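To make the lme4-style interface concrete, here is a minimal usage sketch. The call shape follows my reading of the package documentation; the data set, variable names, and the exact argument names (`penalty`, `tuning_options`, `selectControl`, `BIC_option`) are illustrative assumptions and may differ from the installed version of glmmPen.

```r
# Hypothetical sketch of a glmmPen call; names of arguments and data are
# assumptions for illustration, not a verbatim reproduction of the API.
library(glmmPen)

# lme4-style formula: fixed effects before the bar, candidate random effects
# inside (... | group); selection is performed on both sets concurrently.
fit <- glmmPen(y ~ X1 + X2 + X3 + (X1 + X2 + X3 | group),
               data   = dat,
               family = "binomial",   # also "gaussian" or "poisson"
               penalty = "MCP",       # or "SCAD", "lasso"
               tuning_options = selectControl(BIC_option = "BICh"))
```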
The selectControl() function controls several other parameters, including the penalty parameters, the strategy used to search over those penalty parameters, and the pre-screening of random effects. We describe here the default two-stage grid search over the penalty parameters to find the best overall model. We label the penalty parameters for the fixed and random effects as lambda 0 and lambda 1, respectively. In the first stage, we fix the fixed-effects penalty at the minimum penalty, and the random-effects penalty ranges from the minimum penalty to the maximum penalty. The best model from this first stage gives us the best random-effects penalty. In the second stage, we fix the random-effects penalty at this best penalty from the first stage, and the fixed-effects penalty ranges from the minimum penalty to the maximum penalty. The best model from the second stage gives us the best overall model.

This two-stage abbreviated grid search has several advantages. We found that it works very well in simulations, which means that the algorithm can avoid performing a complete grid search over all penalty parameter combinations. Additionally, we found that progressing from the minimum penalty to the maximum penalty gives us good initialization for subsequent models. This good initialization then helps reduce the total amount of time it takes for the algorithm to complete the subsequent models.

The glmmPen package can be downloaded from GitHub. Once again, it fits penalized generalized linear mixed models and performs variable selection on both the fixed and random effects concurrently. We designed the user interface of the package to be as similar as possible to that of lme4, so that the many users of lme4 can easily use our package. When developing the package, we took advantage of several existing tools, including RcppArmadillo and Stan, in order to improve computational efficiency.
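The two-stage search just described can be sketched in a few lines of base R. Here `fit_and_score` is a hypothetical placeholder standing in for fitting a penalized model at a given (lambda 0, lambda 1) pair and returning its BIC-type criterion value; the real package handles this internally.

```r
# Sketch of the default two-stage grid search over (lambda0, lambda1),
# the fixed- and random-effects penalty parameters.
# fit_and_score(lambda0, lambda1) is a hypothetical placeholder that fits a
# penalized model and returns its selection criterion (smaller is better).
two_stage_search <- function(lambda_seq, fit_and_score) {
  # Stage 1: fix the fixed-effects penalty at its minimum and vary the
  # random-effects penalty from min to max.
  stage1 <- sapply(lambda_seq, function(l1) fit_and_score(min(lambda_seq), l1))
  best_lambda1 <- lambda_seq[which.min(stage1)]

  # Stage 2: fix the random-effects penalty at its stage-1 best and vary the
  # fixed-effects penalty from min to max.
  stage2 <- sapply(lambda_seq, function(l0) fit_and_score(l0, best_lambda1))
  best_lambda0 <- lambda_seq[which.min(stage2)]

  c(lambda0 = best_lambda0, lambda1 = best_lambda1)
}
```

Note the payoff: for a sequence of k penalty values, this fits 2k models instead of the k-squared models a full grid search would require.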
In addition, we were able to speed up the package by taking advantage of good initialization of subsequent models, as well as pre-screening of the random effects. I would now like to acknowledge several individuals who were instrumental in the package's development. For anyone interested, the slides are available online, along with the references and the simulation results. The details of the simulation results are provided in the speaker notes of the slides; of particular interest, I give some time estimates for the completion of the algorithm.