Hey, everyone, I'm Nikolas Kuschnig and I'm accompanied by my co-author, Lukas Vashold. Today we'll be telling you about BVAR, an R package for Bayesian vector autoregression models with hierarchical prior selection. First, I'll talk about what a vector autoregression model actually is, then about why you might want to estimate a Bayesian hierarchical model, and then Lukas will demonstrate what BVAR is and how you actually use it.

So, first things first: vector autoregression (VAR) models are a generalization of univariate autoregressive models. The idea is that there are interdependencies between the lagged values of all variables, and they're used, for example, to perform analysis of monetary policy, to forecast, and for structural analysis. The specification of a VAR model looks as follows (see the first sketch below), and the thing to note here is that the vector of dependent variables, y_t, also appears on the right-hand side in lagged form, and you have A_j, which are m-by-m coefficient matrices. Since there are a lot of coefficients, we get quite a dense parameterization, also referred to as the curse of dimensionality.

Now, this is a problem for estimation, which in the Bayesian setting is dealt with by imposing extra structure through informative prior beliefs. The idea is that you have some kind of additional information that allows you to estimate larger models and thus mitigate the curse of dimensionality. The issue is how to choose these priors and how to choose their parameters. In the multivariate context, flat priors don't work as well, and you need some kind of information in your prior. A preferred source of information is economic theory, as demonstrated by Villani, who places his priors on the steady state, which is usually well understood by economists. Another approach would be to just maximize the fit with respect to the prior parameters and set them to that value. What to carry away from this is that there is uncertainty about these prior parameters.

In a Bayesian hierarchical model, this uncertainty, and the extra information that may be present in the data, can be captured. Basically, we extend Bayes' theorem in a simple way (see the second sketch below), where Y denotes the data, beta our VAR coefficients, and gamma a set of additional hyperparameters. Now, this is quite appealing in a theoretical way, but it's often hard to implement. To get around implementation issues and improve computational efficiency, BVAR uses a conjugate Normal-inverse-Wishart prior setup. So beta, our coefficient vector, conditional on Sigma, just follows a normal density, and Sigma comes from an inverse-Wishart density. Our prior parameters b, Omega, Psi, and d are just functions of a vector of hyperparameters, gamma.

Now, one prior from this Normal-inverse-Wishart family is the Minnesota prior, which implies that the variables follow random walks, an assumption that has proven sensible for many macroeconomic time series. The parameters of the Minnesota prior are lambda, alpha, and psi. For more details you can see our working paper, but just to sum up: lambda controls the overall tightness, alpha shrinks coefficients on more distant lags, and psi shrinks lags of variables other than the dependent one. An issue with the Minnesota prior is that it has a deterministic component, because it primes the model on the initial values. This is often counteracted by additional priors, such as the single-unit-root and sum-of-coefficients priors.
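To make the specification concrete, here is a sketch of the reduced-form VAR(p) in the notation used above, with y_t the m-dimensional vector of dependent variables and the A_j the m-by-m coefficient matrices:

```latex
% Reduced-form VAR(p): the dependent vector appears in lagged form
y_t = a_0 + A_1 y_{t-1} + \dots + A_p y_{t-p} + \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}(0, \Sigma)
```

Each of the m equations carries 1 + mp coefficients, so the full system has m(1 + mp) of them, which is the dense parameterization, the curse of dimensionality, referred to above.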
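The "simple extension" of Bayes' theorem behind the hierarchical model can be sketched as follows; treating the hyperparameters gamma as random, instead of fixing them, is what makes the model hierarchical:

```latex
p(\beta, \gamma \mid Y) \propto p(Y \mid \beta, \gamma)\, p(\beta \mid \gamma)\, p(\gamma)
```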
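The conjugate Normal-inverse-Wishart setup can be sketched like this, with b, Omega, Psi, and d functions of the hyperparameter vector gamma. The Minnesota prior variance below is the common textbook form, so check the working paper for the package's exact parameterization:

```latex
\beta \mid \Sigma \sim \mathcal{N}\left(b, \Sigma \otimes \Omega\right),
\qquad \Sigma \sim \mathcal{IW}\left(\Psi, d\right)

% Common Minnesota-style prior variance for the coefficient on
% lag l of variable j in equation i:
\mathrm{Var}\left(A_l[i,j]\right) = \frac{\lambda^2}{l^{\alpha}} \cdot \frac{\psi_i^2}{\psi_j^2}
```

Here lambda sets the overall tightness, alpha governs how quickly more distant lags are shrunk, and the psi ratios impose extra shrinkage on lags of variables other than the dependent one.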
Now, these are in BVAR, which allows you to estimate and analyze such models quite hassle-free. The features are, for one, flexible prior construction: the Minnesota prior is always there as a baseline, and you can also include the sum-of-coefficients and single-unit-root priors, or construct your own dummy priors from the Normal-inverse-Wishart family. You can also fine-tune the posterior exploration via the Metropolis-Hastings step. You can perform forecasting and structural analysis, for example via impulse response functions, and you have the FRED-MD and FRED-QD data sets included. So much for BVAR. Now, how do you actually use BVAR? I'll hand over to my colleague Lukas, who will tell you about this.

So, yeah, thanks. I will quickly go through the functionalities of BVAR and how to use them. To do that, I will walk through a typical workflow, which consists of preparing the data, customizing the prior and sampler setup, then estimating the model, and analyzing the output.

First of all, we start by loading the package, and then we jump right into the data preparation. For this example, we chose six variables from the FRED-QD database, which among others include gross domestic product and the federal funds rate. Before we estimate the model, we may want to transform these variables, for which the helper function fred_transform can be used. You can either provide your own transformation codes or use the ones suggested by the Federal Reserve. As you can see after transforming our time series, they are clearly non-stationary, which we have to keep in mind when setting up our priors, which we do right now.

As Nikolas already mentioned, the Minnesota prior is always included as a baseline, and with the function bv_mn we can set up its hyperparameters. For this example, we chose to treat only lambda, the tightness parameter, hierarchically, while alpha is fixed to its mode and not treated hierarchically. The resulting object can then be passed on to the function bv_priors, which collects all priors. There we also add the sum-of-coefficients prior, as well as the single-unit-root prior, which are useful when dealing with non-stationary data. As a last step, we adjust the Metropolis-Hastings step to achieve an acceptable acceptance rate and ensure convergence of our hyperparameters.

After setting up these things, we can provide them to the main function bvar(), together with the data x and information about the lag structure and how many draws to take. When we run it, the function prints some preliminary output, as well as the time it took for the MCMC chain to conclude. After that, we can already move on to assessing the convergence of our hyperparameters. An easy and accessible way to do that is to call the generic plot function, which creates trace and density plots for the hyperparameters of our priors that we chose to treat hierarchically. From these we can see that the hyperparameters converged, and we can move on to the main functionalities of interest when working with vector autoregressions: computing forecasts and doing structural inference in the form of impulse responses. For these two applications, the functions predict and irf can be used, which compute, store, and access forecasts and impulse responses, as sketched below.
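A minimal sketch of the data step in R: the package and the fred_qd data set are real, but the exact series selection beyond GDP and the federal funds rate, and the transformation codes, are illustrative choices for this sketch (in the FRED convention, code 4 means log levels and code 1 means no transformation):

```r
library("BVAR")

# The FRED-QD data set ships with the package
data("fred_qd")

# Six quarterly series; GDPC1 (real GDP) and FEDFUNDS (federal funds rate)
# are named in the talk, the remaining series are illustrative choices
x <- fred_qd[, c("GDPC1", "PCECC96", "GPDIC1",
                 "HOANBS", "GDPCTPI", "FEDFUNDS")]

# Transform with FRED-style codes: 4 = log levels, 1 = untransformed.
# Log levels deliberately leave the series non-stationary.
x <- fred_transform(x, codes = c(4, 4, 4, 4, 4, 1))

plot.ts(x)  # a quick look confirms the non-stationarity
```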
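The prior and sampler setup described above might look as follows; the modes, bounds, and scaling values are illustrative, and passing a character vector to `hyper` to select the hierarchically treated hyperparameters is an assumption based on the package's documented interface:

```r
# Minnesota prior: a prior is placed on lambda, while alpha stays fixed
# at its mode because it is not listed in `hyper` below
mn <- bv_mn(
  lambda = bv_lambda(mode = 0.2, sd = 0.4, min = 0.0001, max = 5),
  alpha = bv_alpha(mode = 2)
)

# Collect all priors; the sum-of-coefficients (soc) and single-unit-root
# (sur) dummy priors help with the non-stationary series
priors <- bv_priors(hyper = c("lambda", "soc", "sur"),
                    mn = mn, soc = bv_soc(), sur = bv_sur())

# Metropolis-Hastings settings: adjust proposals automatically until the
# acceptance rate falls into a reasonable band
mh <- bv_mh(scale_hess = 0.05, adjust_acc = TRUE,
            acc_lower = 0.25, acc_upper = 0.45)
```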
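Estimation and the convergence check then look roughly like this; the lag length and the number of draws are illustrative:

```r
# Estimate the model: five lags, 15000 draws of which 5000 are burnt
run <- bvar(x, lags = 5, n_draw = 15000, n_burn = 5000,
            priors = priors, mh = mh, verbose = TRUE)

# The generic plot method yields trace and density plots of the
# hierarchically treated hyperparameters
plot(run)
```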
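The compute-store-access pattern for forecasts and impulse responses, plus the plotting calls discussed next, might be used along these lines; the horizons and the variable selections are illustrative, and the replacement forms `predict(run) <-` and `irf(run) <-` store results inside the model object:

```r
# Compute forecasts and impulse responses and store them in `run`
predict(run) <- predict(run, horizon = 16)
irf(run) <- irf(run, horizon = 12)

# Plot selected variables; `area` shades the credible bands and `t_back`
# controls how much history is shown before the forecast
plot(predict(run), vars = c("GDPC1", "FEDFUNDS"), area = TRUE, t_back = 32)
plot(irf(run), vars_impulse = "FEDFUNDS",
     vars_response = c("GDPC1", "PCECC96"))
```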
You can provide additional settings to these functions, for example the horizon for which forecasts should be computed. The resulting objects can then be passed on to the generic plot function, where you can also set which variables you want to plot, as well as some additional plotting parameters. When we now look at our forecast plots, we can see, for the three time series that we chose, our forecasts together with the credible bands, which are the grey shaded areas. These express the uncertainty about our forecasts, and this also carries over to the impulse responses, which give you an idea of how variables react following a certain shock in the economic system. For example, if we focus on the right column, where we model a monetary policy shock as an unexpected increase in the federal funds rate, we can conclude from these outputs that gross domestic product, as well as personal consumption expenditures, decrease following such an increase.

So this concludes our overview of the main functionalities of BVAR. Of course, it provides a more extensive set. For example, you can integrate it in your workflow using standard R methods, as well as an interface to coda, to, for example, assess the convergence of the hyperparameters in a more statistical way. You can also run BVAR in parallel to efficiently compute multiple MCMC chains and use them for convergence assessment (see the first sketch below). And, most importantly, BVAR provides different identification schemes for structural inference, such as sign restrictions, which can be accessed via the function bv_irf. It also provides the functionality to compute custom scenario analyses. These can be done via conditional forecasts, which can be adjusted using the function bv_fcast (see the second sketch below).

So this concludes the brief demonstration of BVAR. To sum up, BVAR hands the user an easy-to-use and flexible interface to estimate Bayesian VARs with hierarchical prior selection. BVAR is free software, and you can contribute to it in the repository on GitHub. If you need more details about the implementation of the various priors, or about the usage of the other functionalities, you can find the working paper on ResearchGate, as well as the package vignette on CRAN. So thanks, and please don't forget to subscribe, comment, and leave a like.
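The parallel and coda functionality mentioned above might be used along these lines; `par_bvar` and the `as.mcmc` method are part of the package, while the cluster size, the number of runs, and the Gelman-Rubin check are illustrative choices:

```r
library("parallel")
library("coda")

# Run several independent MCMC chains in parallel
cl <- makeCluster(2L)
runs <- par_bvar(cl, n_runs = 2, data = x, lags = 5,
                 n_draw = 15000, n_burn = 5000,
                 priors = priors, mh = mh)
stopCluster(cl)

# Convert the chains to coda objects for formal convergence diagnostics
chains <- as.mcmc(runs, vars = "lambda")
gelman.diag(chains)  # scale reduction factors near 1 indicate convergence
```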
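Sign restrictions and conditional forecasts might be set up as sketched here. The restriction pattern, its orientation (rows as responding variables, columns as shocks), and the encoding of unrestricted entries as 0 are assumptions for this sketch, as is the conditioned federal funds rate path; see ?bv_irf and ?bv_fcast for the authoritative interface:

```r
# Sign restrictions: an m x m matrix of 1 (positive), -1 (negative) and
# 0 (unrestricted) entries; illustrative pattern for the 6-variable model
sr <- matrix(0, nrow = 6, ncol = 6)
diag(sr) <- 1    # each shock moves its own variable up on impact
sr[1, 6] <- -1   # GDP falls after a federal funds rate shock
sr[2, 6] <- -1   # consumption falls as well

irf(run) <- irf(run, bv_irf(horizon = 12, identification = TRUE,
                            sign_restr = sr))

# Conditional forecast: fix the federal funds rate to a hypothetical path
# over the first quarters of the forecast horizon
path <- c(2.5, 2.5, 2.0, 2.0)
predict(run) <- predict(run, bv_fcast(horizon = 16, cond_path = path,
                                      cond_vars = "FEDFUNDS"))
```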