Hi, I'm Adam Loy. Welcome to my elevator pitch. I'm going to be talking about bootstrapping multilevel models in R using lmeresampler.

The motivation for lmeresampler comes from van der Leeden and co-authors' 2008 chapter, where they pointed out that many bootstrap procedures were unavailable in R, so users needed to program their own bootstraps if they wanted something other than the parametric bootstrap. In 2009, an R package was presented at useR! outlining a comprehensive framework for bootstrapping multilevel models. Unfortunately, that package never made its way to CRAN, and the project appears to have been abandoned. Since that time, there have been additions to lme4 in terms of bootstrapping capabilities, but numerous procedures are still missing. lmeresampler implements many of these missing procedures.

lmeresampler provides users with easy access to a larger set of bootstrap procedures. Currently, it implements five procedures for Gaussian response models: the parametric, residual, cases, random effects block, and wild bootstraps. The parametric, residual, and cases bootstraps are also available for generalized linear mixed models with non-Gaussian responses that are fit via glmer().

As a first example, let's consider the Junior School Project, or JSP, data set provided by lmeresampler. This data set is discussed in Goldstein's book on multilevel modeling, and it comprises measurements taken on 728 elementary school students across 48 schools in London. We'll fit a random intercept model here, considering the same model that Goldstein did, where math score at age 8, gender, and father's social class are used to describe a child's math score at age 11. We fit this model using lmer() and store the results in the jsp_mod object.
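The model just described might be fit as follows. This is a sketch: the column names (`mathAge11`, `mathAge8`, `gender`, `class`, `school`) are my reading of the `jsp728` data set and should be checked against `?jsp728` in the package documentation.

```r
library(lme4)
library(lmeresampler)  # provides the jsp728 data set

# Random intercept model: math score at age 11 described by math score
# at age 8, gender, and father's social class, with a random intercept
# for each school. Column names assumed from the jsp728 documentation.
jsp_mod <- lmer(mathAge11 ~ mathAge8 + gender + class + (1 | school),
                data = jsp728)

summary(jsp_mod)
```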
If we suspect possible distributional misspecification in the model, then a robust alternative to typical parametric inference is the residual bootstrap proposed by Carpenter, Goldstein, and Rasbash. The procedure is similar to the residual bootstrap from classical regression, but there are two types of residuals: the so-called error terms, or conditional residuals, and the predicted random effects. In addition, before resampling these residual quantities, we center and reflate them. Carpenter and co-authors termed "reflation" the adjustment of the residuals that ensures the empirical covariance matrices match the estimated covariance matrices.

The bootstrap() command provides a unified interface to all of the bootstrap procedures in lmeresampler. For example, we can easily run a residual bootstrap for our fitted model using the bootstrap() command, specifying the function that calculates the quantities of interest, the type of bootstrap, and the number of bootstrap replicates. Here, we're interested in extracting the fixed effects using the fixef() function, and we ran 10,000 bootstrap replicates.

bootstrap() returns an object of class lmeresamp, and we provide familiar methods to explore the results. For example, the summary() function allows us to quickly explore the means, standard errors, and bias of our results. It also informs us of any warnings encountered along the way, such as convergence issues. Here, we see that we have no messages, warnings, or errors. The confint() function provides normal, basic, and percentile bootstrap confidence intervals for all of the parameters by default; here, we calculate only basic bootstrap intervals by setting type to "basic". The plot() function works similarly, creating a half-eye plot for the specified parameter.

Bootstrapping is a computationally demanding task, but it is easily run in parallel, since each iteration of the bootstrap requires no interaction with the other iterations. We do not implement parallel processing within lmeresampler.
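Putting these steps together, a sketch of the workflow might look like this, assuming a fitted lmer model stored in `jsp_mod` (the exact way plot() selects a parameter should be checked against the package documentation):

```r
library(lmeresampler)

# Residual bootstrap of the fixed effects with B = 10,000 replicates
jsp_boot <- bootstrap(jsp_mod, .f = fixef, type = "residual", B = 10000)

# Means, standard errors, and bias, plus any messages,
# warnings, or errors (e.g., convergence issues)
summary(jsp_boot)

# Basic bootstrap confidence intervals only
confint(jsp_boot, type = "basic")

# Half-eye plot for a specified parameter (parameter selection
# syntax assumed; see ?plot.lmeresamp)
plot(jsp_boot, "(Intercept)")
```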
Rather, we provide the combine_lmeresamp() function so that the user can implement parallelization via the foreach package. This provides flexibility, allowing users to choose the type of cluster based on their situation and hardware. In this example, I'm using a small FORK cluster with five cores. Within the foreach() call, I specify that B = 2000 replicates should be run on each of the five cores and that combine_lmeresamp() should be used to combine the results. Then I use the %dopar% operator to call the bootstrap() command. On my laptop, the runtime decreased by a factor of about 4.4.

Thanks for watching my pitch. I've only just scratched the surface of bootstrapping using lmeresampler, so please check out the package on either CRAN or GitHub. You can also read a preprint detailing the functionality and a few use cases on arXiv; for example, I illustrate how lmeresampler makes it easy to create lineup diagnostic plots for fitted models.
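The parallel workflow described above might be sketched as follows. Note that a FORK cluster is available only on macOS/Linux; on Windows, a PSOCK cluster would be needed instead, and `jsp_mod` is assumed to be the fitted model from earlier.

```r
library(foreach)
library(doParallel)
library(lmeresampler)

# Set up a FORK cluster with five cores and register it with foreach
cl <- parallel::makeForkCluster(5)
registerDoParallel(cl)

# Run B = 2000 residual bootstrap replicates on each of the five cores,
# combining the pieces into a single lmeresamp object
jsp_boot <- foreach(B = rep(2000, 5),
                    .combine = combine_lmeresamp,
                    .packages = c("lmeresampler", "lme4")) %dopar% {
  bootstrap(jsp_mod, .f = fixef, type = "residual", B = B)
}

stopCluster(cl)
```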