Hello everyone, thanks for watching this presentation on the R package svars. So what is svars and how can you use this package? Let's say you have a time series data set in an economic context, for instance inflation and interest rates, and you want to perform analyses like checking for Granger causality or forecasting one of the series. Then you can use the so-called workhorse model of multivariate time series analysis, the VAR or vector autoregressive model, as it is defined in equation one. These models are also called reduced-form models and they are pretty straightforward to estimate, for instance with least squares or maximum likelihood estimation. In case the question on your data set is a bit different, for instance you want to see what happens in case of an unexpected supply shortage, or how a central bank can lower inflation, then this model cannot be used in this form, because the error term u_t is serially uncorrelated but usually contemporaneously correlated, which means we cannot simulate isolated shocks here, because these shocks occur simultaneously. A possible solution to this problem is to transform the reduced form into a structural form as it is defined in equation three. These models have an error term epsilon_t which is serially as well as contemporaneously uncorrelated, which means we can give each of these shock series a unique economic interpretation and interpret movements in the data vector y_t as being driven by the cumulative effects of these structural shocks. So how can you think about this? Well, we can think of the reduced-form error terms, which we can directly extract from the reduced-form estimation, as combinations or mixtures of the true underlying dynamics, for instance a supply shock and a demand shock, which in the end determine a price that we can observe. And this is what we are mostly interested in in this economic context. So how do we then get these structural shocks?
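As a rough sketch of the reduced-form analysis mentioned above, here is how a VAR like equation one could be estimated in R with the vars package. The two series below are simulated placeholders, not the data from the presentation; variable names and lag settings are illustrative.

```r
# Sketch: reduced-form VAR estimation (vars package).
# The two series are simulated stand-ins for, e.g., inflation and
# interest rates.
library(vars)

set.seed(1)
macro_data <- ts(cbind(
  inflation = cumsum(rnorm(200, sd = 0.1)),
  interest  = cumsum(rnorm(200, sd = 0.1))
), frequency = 12)

# Lag order chosen by AIC, coefficients estimated by least squares
v <- VAR(macro_data, lag.max = 8, ic = "AIC")

# One of the reduced-form analyses mentioned above: Granger causality
causality(v, cause = "interest")
```

The resulting object of class 'varest' is exactly the kind of reduced-form estimate that the structural tools discussed below build on.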
We get them by estimating the matrix B. This matrix is basically the key to the structural universe, and we obtain it by decomposing the covariance matrix of the reduced-form error terms. What is difficult now is that this decomposition of the covariance matrix into B B' in equation four is not unique, so there are infinitely many solutions to this problem. For instance, B could be a lower triangular matrix, an upper triangular matrix, or a symmetric decomposition. Of course this does not fit an economic interpretation in terms of demand or supply shocks if there are infinitely many demand and supply shocks. So we have to add some further information to make this matrix B unique and to obtain unique structural shocks. There are two main approaches to do this. The first one is based on economic theory. Here we use our economic knowledge to place restrictions either on B directly or on B in combination with other matrices, so short-run or long-run restrictions. A drawback of this approach is that these restrictions are usually just-identifying, meaning that we can neither test them nor know whether they are correct, and we have no chance to recover the true underlying dynamics in case the restrictions are wrong. An alternative is to use properties of the data instead of economic knowledge. These are the so-called data-driven or statistical identification approaches, and this is exactly what we deliver in this package: several statistical identification approaches for these structural shocks. To be a bit more specific about what we deliver, within the world of data-driven identification approaches there are again several sub-approaches. We focus on two. The first one is based on heteroskedasticity.
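To make the non-uniqueness in equation four concrete, here is a small base-R sketch: two different matrices B, a lower triangular one and a symmetric one, reproduce exactly the same covariance matrix, so the data alone cannot tell them apart. The covariance matrix below is an arbitrary illustrative example.

```r
# Sketch: the decomposition Sigma_u = B B' is not unique.
Sigma_u <- matrix(c(2, 0.5, 0.5, 1), 2, 2)

# Lower triangular (Cholesky) decomposition
B_lower <- t(chol(Sigma_u))

# Symmetric decomposition (matrix square root via eigendecomposition)
e      <- eigen(Sigma_u)
B_symm <- e$vectors %*% diag(sqrt(e$values)) %*% t(e$vectors)

# Both satisfy equation four, yet imply different structural shocks
all.equal(B_lower %*% t(B_lower), Sigma_u)   # TRUE
all.equal(B_symm  %*% t(B_symm),  Sigma_u)   # TRUE
```

Any of these candidates (and infinitely many rotations of them) solves equation four, which is why additional identifying information is needed.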
Here we use, for instance, jumps in the covariance structure or conditional heteroskedasticity patterns in the structural shocks to identify them. The other approach uses independence: in a non-Gaussian framework we can look at the dependence between the structural shocks to uniquely determine them. For the remainder of this presentation I want to focus mainly on one specific model, the smooth transition in covariance model by Lütkepohl and Netšunajev, with which I am going to show you how you can work with the package. But first you may ask why it is necessary to have so many different identification models which all do the same job in the end, estimating the structural shocks. Well, this depends on your data structure, because these models all have distinct assumptions on your data. The heteroskedasticity-type models, for instance, need that particular type of heteroskedasticity in order to estimate the structural shocks; if there is no heteroskedasticity in the data, you have no chance to recover the true underlying dynamics. In case you want to know a bit more about when to choose which model, we have another study on the topic of model selection which focuses on the specific advantages and disadvantages of all these models. Okay, so the smooth transition covariance model: very briefly, what is the idea here? The idea is that we have, at the beginning of the sample, a specific covariance structure Sigma_1 and, at the end of the sample, a different covariance structure Sigma_2. In between, for each observation, we have a mixture of these two covariance states: up to some point more weight lies on the first covariance matrix, and from one specific observation onwards more weight lies on the second one. And we can uniquely identify the structural shocks if the shocks change their variances at different rates.
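The mixture idea above can be sketched in a few lines of base R. The transition speed, the transition point and the two covariance states below are illustrative numbers, not estimates from any model.

```r
# Sketch: each observation's covariance is a logistic mixture of two states.
# gamma (speed) and c0 (location) are illustrative, not estimated, values.
n     <- 200
gamma <- 0.1
c0    <- 100
G <- 1 / (1 + exp(-gamma * (seq_len(n) - c0)))   # weights rising from ~0 to ~1

Sigma1 <- diag(c(1, 1))     # covariance state at the start of the sample
Sigma2 <- diag(c(2, 0.5))   # one shock's variance doubles, the other halves

# Covariance assigned to observation t: a weighted mix of the two states
Sigma_t <- lapply(seq_len(n), function(t) (1 - G[t]) * Sigma1 + G[t] * Sigma2)
```

Early observations are governed almost entirely by Sigma1, late ones by Sigma2, and the logistic curve G controls how fast the weight shifts between them.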
So if one structural shock doubles its variance and the other one halves its variance, this allows us to identify the structural shocks. We model the transition between these covariance states with a logistic transition function and estimate the parameters with a Gaussian log-likelihood function. What you see in the gray box is how we implemented this in R. All our identification functions start with id for identification, and the abbreviation st stands for smooth transition. You see a lot of different input parameters here because this model is rather flexible; however, we tried to implement it as user-friendly as possible, so all of these input parameters are optional. The only thing you have to specify is this x, and this x is an object from a reduced-form estimation. So we first need to do a reduced-form estimation and then transform it into the structural form. And where do you get this reduced-form estimation from? This is how our package fits into the R world. There are already a couple of frequently used packages in R with which you can obtain your reduced-form estimation and do a lot of other stuff, for instance check for the lag order and perform Granger causality analysis and so on. This is all in the packages vars, MTS and tsDyn. You can do your reduced-form estimation with them, obtain an object of class, let's say, 'varest', and this object can be passed to our SVAR function, where in this case it is the x. So this is how the svars package fits into the R world: we did not reimplement the reduced-form estimation, we built upon the pre-existing packages. Besides these identification models we deliver a whole toolbox of different bootstrap techniques, tests on identifying restrictions, for instance over-identifying restrictions, and some popular SVAR statistics like counterfactuals and historical decompositions, so really a whole battery of different tools for your structural analysis in
this time series context. What you see here is a flowchart of how to use the functions in the package, or in what order to use them. The green box represents a function from outside our package; this is the reduced-form estimation, in this example the VAR function from the vars package. You run your reduced-form estimation, get the object, and can pass it to one of these structural form estimations. There is also the possibility of pre-tests, for instance Chow tests for parameter stability and so on, but this is optional, so you can pass the object directly to one of these functions without any further specification; it works directly out of the box. Afterwards, once you have obtained your structural estimate, you can pass it to one of the other functions here, for instance to calculate impulse response functions or forecast error variance decompositions. What I am going to show you next is how this works in practice: first estimating a reduced-form VAR model with the vars package, then passing this object to the smooth transition identification model, and afterwards calculating impulse response functions with confidence bands obtained by a wild bootstrap. I am going to show you this with an example from the replication study of Lütkepohl and Netšunajev. The application is a very classical one, both in macroeconomics and for these SVAR models: we look at the interaction between monetary policy and stock markets in the US. We furthermore test some short- and long-run restrictions of Bjørnland and Leitemo. We do this with a five-dimensional data set covering the period from 1970 to 2007 at monthly frequency, with some very classical economic time series. This data set comes directly with the package; it is one of our example data sets. Once you have loaded the data set and the package, you can estimate the reduced form with the
vars package. I am not going into more detail here; I am just obtaining the reduced form. This object can be passed directly to the smooth transition model without any further specifications. I did make some specifications here, but they are completely optional. I just want to mention the nc argument: the models can be computationally rather demanding, which is why we implemented them with Rcpp in C++. Nevertheless, in high dimensions and with a lot of observations it might take some time to calculate these models, so we directly added the possibility of multi-core computation; in this case I estimated the model on 5 cores. The resulting object is the structural form, and when we look at the result we see in the summary some general information like the likelihood, the sample size and so on, and here the estimated heteroskedasticity matrix, so the change in the variances of the structural shocks. This here is the estimated B matrix, the relation between the reduced form and the structural form, so the thing that unmixes the reduced-form errors into the structural errors. We know from the smooth transition model that we model a transition from one covariance state to another, and this here is an illustration of the transition function for this particular example. The model searches for the transition point and the shape, i.e. the speed, of the transition endogenously, and this here is the result. As economists we want to check whether this is a plausible model. When we look at the transition function: what happened during the 1980s in the US with respect to monetary policy? We know that this is the period of the Great Moderation, a period with a change in central bank behavior which resulted in lower variation in GDP and inflation. So we have seen that this model appears to be plausible. We can furthermore test some economic
restrictions, for instance that this B matrix, the structural impact matrix, might be a lower triangular matrix. We can test this by specifying a restriction matrix, where the zeros represent the restrictions and the NAs are unrestricted elements, and then we re-estimate the whole model, passing the restriction matrix to the function. Once we have done this, we can look at the likelihood ratio test, for instance, and this tells us that these restrictions are not supported by the data, so we are probably better off using the unrestricted model. Finally, I just want to show you how we can obtain impulse responses with confidence bands. We do this by using a wild bootstrap with a fixed design, 1000 iterations, and again 5 cores. This here is then the result. I use Hall's percentile intervals and 68% confidence bands. We just look at the fourth and fifth shocks and the fourth and fifth variables, since we are only interested in the relationship between the stock market and monetary policy, so we only need these two shocks and variables. What we see here is that monetary policy, an increase of the interest rate by the central bank, lowers stock market returns, whereas there seems to be no feedback from the stock market to monetary policy, at least no significant one. Okay, so that's basically it. Thanks for watching this video, and I hope you have fun using the svars package!
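Putting the narrated steps together, a sketch of the whole pipeline could look as follows. Treat this as an outline rather than the exact replication script: the dataset (USA, an example data set that ships with svars, standing in for the five-dimensional Lütkepohl-Netšunajev data), the lag selection and the bootstrap settings are assumptions chosen to mirror the narration.

```r
# Sketch of the full pipeline: reduced form -> structural form ->
# restriction test -> bootstrapped impulse responses.
# Dataset, lag selection and settings are illustrative assumptions.
library(svars)

v  <- vars::VAR(USA, lag.max = 10, ic = "AIC")  # reduced-form estimation
st <- id.st(v, nc = 5)                          # smooth transition model, 5 cores
summary(st)                                     # likelihood, transition, B matrix

# Test a lower triangular B: 0 = restriction, NA = unrestricted element
K <- ncol(st$B)
restMat <- matrix(NA, K, K)
restMat[upper.tri(restMat)] <- 0
st_r <- id.st(v, restriction_matrix = restMat, nc = 5)
summary(st_r)   # includes a likelihood ratio test of the restrictions

# Wild bootstrap, fixed design, 1000 draws; 68% confidence bands
bb <- wild.boot(st, design = "fixed", nboot = 1000, n.ahead = 30, nc = 5)
plot(bb, lowerq = 0.16, upperq = 0.84)
```

The structural object can likewise be passed to the other tools mentioned in the talk, for instance fevd() for forecast error variance decompositions or hd() for historical decompositions.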