Hello, my name is Peter Boyd. I am a PhD candidate in statistics at Oregon State University, and today I'll be talking about the nphawkes R package that I've built.

A Hawkes process, or self-exciting point process, is a collection of points in which the occurrence of one point causes a temporary elevation in the rate of future points in nearby space and time. It's a two-part mechanism: background, or parent, events occur randomly by some process, and these background points are then able to trigger child events. For a few examples, consider seismology, where an earthquake main shock occurs due to seismological phenomena, and that main shock can then trigger aftershocks. Similarly, social network cascades, such as email chains or retweets on Twitter, can be modeled in this framework, as can the spread of an epidemic disease like COVID-19, or mass shootings and gang violence.

Self-exciting point processes may be fit parametrically or non-parametrically, which will be the focus here. Non-parametrically, we use the model-independent stochastic declustering algorithm, or MISD for short. This approach first estimates the background rate, mu, which is the rate at which background events occur stochastically in time. Then we estimate the ability of each event to trigger new events in space, denoted h, and in time, denoted g, both of which may also depend on the event magnitude through a productivity term, k. The goal is to estimate the conditional intensity function, denoted lambda, which is the expected rate at which points occur within a very small space-time window given the history of the process, H. We can define this conditional intensity as a function of space and time, conditioned on the history, equal to however many background events we expect to see plus however many triggered events we expect to see.
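The decomposition just described can be written out explicitly. The following is a standard form for a marked space-time Hawkes conditional intensity, using the talk's notation (mu for the background rate, g and h for the temporal and spatial triggering functions, k for the magnitude productivity); the exact factorization used in the package may differ in detail:

```latex
\lambda(t, x, y \mid H_t)
  = \mu(x, y)
  + \sum_{i \,:\, t_i < t} g(t - t_i)\, h(x - x_i,\, y - y_i)\, k(m_i)
```

Here the sum runs over all events $(t_i, x_i, y_i, m_i)$ that occurred before time $t$, so each past event contributes its own triggering kernel to the current rate.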
The MISD algorithm can be quite computationally expensive, especially with larger data sets, so I have built the nphawkes package to fit these models easily and quickly through a non-parametric lens, along with some related analysis tools. The functions within the package are largely written in C++ via the Rcpp package. The package requires minimal inputs: only event times are strictly needed, and it can flexibly account for space as well as a magnitude-type covariate. Using the lines of code provided here, the package can be installed via GitHub.

I'll walk through a brief implementation in which we use the misd function within the nphawkes package to analyze an earthquake catalog containing the Hector Mine earthquake, which occurred in October of 1999 in Southern California; this catalog has roughly 540 subsequent earthquake events. We first read in the Hector Mine data provided in the package, and then carry out the analysis by supplying the times, latitudes, longitudes, and magnitudes of the events. What we find is that roughly 60% of the events in this catalog were treated as background events, and the Hector Mine earthquake itself, that one single event, was found to trigger about 110 aftershocks. The background events in this catalog occur at a rate of about 0.9 main shocks per day. Thankfully, the algorithm took only about 11 seconds to complete.

Once we've fit a model, the package allows for several other things, like creating a heat map of the background rate, displaying histogram estimators of the triggering functions, and plotting the conditional intensity over time. So I hope you've learned a bit, and good luck trying this approach out on some of your own data.
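The workflow just described can be sketched as follows. This is an illustrative outline only: the GitHub repository path, the name of the bundled Hector Mine data object, and the argument names of misd() are assumptions based on the talk and may differ from the actual package API.

```r
# Install the nphawkes package from GitHub
# (repository path "boydpe/nphawkes" is assumed)
# install.packages("devtools")
devtools::install_github("boydpe/nphawkes")

library(nphawkes)

# Load the bundled earthquake catalog containing the 1999
# Hector Mine earthquake (object name "hm" is hypothetical)
data("hm")

# Fit the nonparametric Hawkes model via the MISD algorithm,
# supplying event times, spatial coordinates, and magnitudes
# (argument names here are assumed, not confirmed)
fit <- misd(dates = hm$time,
            lat   = hm$lat,
            lon   = hm$lon,
            marks = hm$magnitude)

# The fitted object can then be passed to the package's plotting
# tools, e.g. a background-rate heat map, histogram estimators of
# the triggering functions g and h, and the conditional intensity
# over time, as described in the talk.
```

Per the talk, on this catalog of roughly 540 events the fit takes about 11 seconds, classifies roughly 60% of events as background, and estimates a background rate of about 0.9 main shocks per day.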