Okay, so let's move on. We now have a presentation by Aubrey Poon from Örebro University. So, Aubrey, you have about 25 minutes; the floor is yours.

Thanks to the organisers for accepting our paper at this nice conference. This is joint work with Josh, who is in the audience, and David and Dan, and the whole premise of this talk centres around conditional forecasts. A conditional forecast is a projection of a set of variables of interest, conditioned on the future path of some other variables. Conditional forecasts are very popular within empirical macroeconomics, thanks to the seminal work of Waggoner and Zha in their 1999 Review of Economics and Statistics paper, where they introduced conditional forecasts within the VAR framework. I also know that the ECB's BEAR toolbox implements conditional forecasts. So conditional forecasts are very popular, and I presume some people in this audience have implemented one at some stage of their research career.

As an example, imagine we have a simple two-variable VAR in which we model real GDP and a policy rate. A conditional forecast would be, say, forecasting real GDP for the next two quarters, conditioning on the policy rate being at 2% for those two quarters.

There are two broad types of conditional forecasts. The first is the traditional reduced-form case: the example I just gave, where the conditions are generated by all the structural shocks in the model. This is called conditioning on observables. The other type is structural scenario analysis, where, for example, we forecast real GDP by conditioning on the policy shocks only; this is called conditioning on shocks. There is a recent JME paper by Antolín-Díaz and his co-authors that produces a unified framework for conditional forecasts and structural scenario analysis within VAR models.

Within the traditional case there are two kinds of conditions: hard and soft. The example from the beginning, forecasting real GDP conditional on the policy rate being exactly 2%, is a hard condition: the conditioning variable is fixed to a particular value. But what if we instead want the policy rate to lie within an interval? That is a soft condition: instead of fixing the conditioning variable to a particular value, we restrict it to an interval.

In the empirical literature the hard condition is by far the more popular. One reason soft conditional forecasts are scarcer is the computational challenge of generating them. In the original ReStat paper, Waggoner and Zha used a naive acceptance-rejection algorithm that requires a very large number of simulated draws to satisfy the constraints. A more recent paper by Andersson et al. (2010) introduced soft constraints, but their algorithm really only caters to low-dimensional VARs.
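To see why the naive acceptance-rejection approach struggles, here is a minimal Python sketch of soft conditioning by brute force; the two-variable VAR(1), its coefficient values, and the bounds are all hypothetical, chosen only for illustration, and this is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-variable VAR(1): y_t = c + A y_{t-1} + e_t, e_t ~ N(0, Sigma).
# y = [real GDP growth, policy rate]; all numbers are made up for illustration.
c = np.array([0.3, 0.1])
A = np.array([[0.5, -0.2],
              [0.1,  0.9]])
Sigma = np.array([[0.40, 0.02],
                  [0.02, 0.10]])
L = np.linalg.cholesky(Sigma)

y_T = np.array([0.5, 1.9])   # last observed data point
H = 2                        # forecast horizon (quarters)
lb, ub = 1.8, 2.2            # soft condition: policy rate in [1.8, 2.2] each quarter

accepted, n_draws = [], 100_000
for _ in range(n_draws):
    y, path = y_T.copy(), []
    for _ in range(H):
        y = c + A @ y + L @ rng.standard_normal(2)
        path.append(y.copy())
    path = np.array(path)
    # Naive accept-reject: keep the draw only if the whole path satisfies the bounds.
    if np.all((path[:, 1] >= lb) & (path[:, 1] <= ub)):
        accepted.append(path)

print(f"acceptance rate: {len(accepted) / n_draws:.3%}")
print("mean conditional GDP path:", np.mean([p[:, 0] for p in accepted], axis=0))
```

Even in this toy setting most draws are thrown away, and the acceptance rate collapses as the number of constrained variables or horizons grows, which is exactly the scaling problem described above.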
In my opinion as a forecaster, the soft constraint is a lot more intuitive than the hard constraint. As your forecast horizon gets longer, you often do not know what the conditioning variable will be at distant horizons, so it is much more natural to restrict it to an interval than to a fixed value. A soft conditional forecast also lets you take into account the uncertainty around your conditioning variable; that is one of the beauties of implementing it.

So what is the main contribution of our paper? We produce a novel precision-based approach that generalises conditional forecasts in numerous ways. Our paper is similar to the Antolín-Díaz et al. paper in that our precision-based method is closed form and can be used for both conditional forecasts and structural scenario analysis. The beauty of the precision-based approach is that it is more efficient, especially in large-dimensional VARs, and it allows a large number of hard or soft conditioning variables as well as longer forecast horizons. Our proposed framework is similar to a paper that Josh, Dan and I recently published in the Journal of Econometrics, where we applied the precision-based approach to state-space models with missing data.

The main contribution in terms of generating soft-constrained conditional forecasts is that we combine the precision-based approach with Botev's minimax exponential tilting method. The precision-based sampler, which Josh pioneered, is basically, if you do not know it, a vectorised version of the Kalman filter, and one of its advantages is that you can exploit fast banded-matrix algorithms. Once we impose a soft constraint on the conditional forecast, we need to draw from a high-dimensional truncated multivariate Gaussian distribution, and that is why we use the Botev method. Remember, the Waggoner and Zha method I mentioned at the beginning uses an acceptance-rejection algorithm to generate the conditional forecast. You can think of the Botev method as an acceptance-rejection algorithm too, but a more sophisticated one: the proposal distribution you draw from satisfies the constraint most of the time, whereas Waggoner and Zha use a naive acceptance-rejection sampler.

Okay, so the outline of the presentation is as follows. We propose a general framework for conditional forecasts. If I have time, I will derive both the hard- and soft-constrained conditional forecast distributions. In the simulation study, we compare our method to four existing methodologies in the literature: Waggoner and Zha; Bańbura et al., who use filtering and smoothing methods to generate the conditional forecast; the Antolín-Díaz et al. method, as I mentioned; and the Andersson et al. method for the soft-constrained conditional forecast.
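As a rough illustration of the generic precision-sampler idea just described, the following sketch draws from a Gaussian whose banded precision matrix is known, using banded Cholesky routines; the tridiagonal toy precision and its numbers are made up, and this is only a sketch of the general technique, not the paper's algorithm:

```python
import numpy as np
from scipy.linalg import cholesky_banded, solve_banded, solveh_banded

rng = np.random.default_rng(1)

# Toy tridiagonal precision matrix P of a Gaussian N(mu, P^{-1}), in banded storage:
# row 0 = superdiagonal (first entry unused), row 1 = main diagonal.
n = 1000
ab = np.zeros((2, n))
ab[0, 1:] = -0.5          # off-diagonal of P
ab[1, :] = 2.0            # diagonal of P
b = np.ones(n)            # P @ mu = b defines the mean implicitly

mu = solveh_banded(ab, b)      # mean, at O(n) cost for a fixed bandwidth
U = cholesky_banded(ab)        # P = U' U, with U upper triangular and banded

# One draw: mu + U^{-1} z ~ N(mu, P^{-1}), since Cov(U^{-1} z) = (U'U)^{-1} = P^{-1}.
z = rng.standard_normal(n)
draw = mu + solve_banded((0, 1), U, z)
```

The point is that both the mean and the draw cost O(n) for a fixed bandwidth, instead of the O(n^3) a dense factorisation would need; this is the kind of banded-matrix algorithm the precision sampler exploits.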
Lastly, we apply our novel precision-based conditional forecast to a large Bayesian VAR in which we implement multiple hard and soft constraints at once, which is the first study to do that in the literature.

Okay, so let me quickly go through the general framework for conditional forecasts. Here we have a standard structural VAR with p lags, where A_0 is the contemporaneous impact matrix and we have the standard VAR coefficient matrices. Given all the time periods up to time t, what we want to do is summarise all the unconditional forecasts of the observables in our VAR, and that is what this last equation does: it stacks all the unconditional forecasts of the VAR over the H forecast steps. You can think of this H matrix as a banded matrix, where the elements on its diagonals are the iterated VAR coefficients. The big picture is that this last equation summarises all the unconditional forecasts across the H-step horizon. Given that the determinant of H is non-zero, so the inverse exists, we can get back equation (1), where the unconditional forecast follows a multivariate Gaussian distribution with this mean and covariance. And since the H matrix is banded, we can use the precision-based sampler of Chan and Jeliazkov to draw all the unconditional forecasts efficiently in one single block.

What we want to do next is put some restrictions on the future path of the observables. That is what this R matrix does, and the small r and Omega are the associated restrictions on the future path of the observables. Given equations (2) and (1), we can combine the two and get equation (3). The big picture of equation (3) is that it tells the story of the conditional-on-observables case: the big matrix R acting on the vector of all the unconditional forecasts is just putting a restriction on the future path of observables.

But we want to provide a general framework, so we also want to show equivalence with the conditional-on-shocks case. What if, in equation (3), we want to put restrictions on the shocks, the epsilons? That is what equation (4) does: we put restrictions on the shocks underlying the conditional forecast. Then, combining equations (3) and (4), we show in equation (5) how the conditional-on-observables case relates to the conditional-on-shocks case. Because we assume the number of shocks is less than the number of observables, equation (5) is underdetermined and has multiple solutions. So we follow Antolín-Díaz and his co-authors: they pick one solution, using the Moore-Penrose inverse, and with that we can solve equation (5).
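To make the stacked representation at the start of this derivation concrete, here is a minimal sketch, for a hypothetical VAR(1) with made-up numbers, of how the banded matrix H stacks the forecasts and delivers the Gaussian of equation (1); dense arrays are used for readability, whereas the speed in the paper comes from storing H and the precision in sparse, banded form:

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical VAR(1): y_t = c + A y_{t-1} + e_t, e_t ~ N(0, Sigma); numbers made up.
n, Hsteps = 2, 5
c = np.array([0.3, 0.1])
A = np.array([[0.5, -0.2], [0.1, 0.9]])
Sigma = np.array([[0.40, 0.02], [0.02, 0.10]])
y_T = np.array([0.5, 1.9])

# Stack y = (y_{T+1}, ..., y_{T+H}).  The stacked system is  H y = c_tld + e,
# with H block-banded: identity blocks on the diagonal, -A just below them.
N = n * Hsteps
Hmat = np.eye(N)
for h in range(1, Hsteps):
    Hmat[h*n:(h+1)*n, (h-1)*n:h*n] = -A

c_tld = np.tile(c, Hsteps)
c_tld[:n] += A @ y_T                      # the initial condition enters the first block

S_inv = block_diag(*[np.linalg.inv(Sigma)] * Hsteps)
P = Hmat.T @ S_inv @ Hmat                 # precision of the stacked forecast
mu = np.linalg.solve(Hmat, c_tld)         # unconditional forecast mean, as in eq. (1)

# P is block-tridiagonal, i.e. banded: blocks two or more steps away are zero.
assert np.allclose(P[:n, 2*n:], 0.0)

# Sanity check against direct iteration of the VAR mean:
m, direct = y_T.copy(), []
for _ in range(Hsteps):
    m = c + A @ m
    direct.append(m.copy())
assert np.allclose(mu, np.concatenate(direct))
```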
Equation (6), big picture, just shows the equivalence between the conditional-on-observables case and the conditional-on-shocks case: depending on what we set for R, r and Omega, we can specify either one. The result of equation (6) is that we can now write down a general distribution for the conditional forecast, which is denoted by this term here. This result is very general and encompasses a lot of useful and popular conditional forecasting exercises; depending on what you set for R, r and Omega, you get back the conditional-on-shocks case or the conditional-on-observables case.

Okay. As I said, we also want to extend the conditional forecast framework to include soft conditioning, in the sense that we put an interval on the conditioning variables. That is equation (7): we allow the conditioning variables to fall within an interval. As a result of equation (7), we now have to draw from a truncated multivariate normal distribution, which is what equation (8) tells us, and in that case we need the Botev method for simulating from high-dimensional truncated Gaussian distributions.

I won't go into too much detail on the hard constraint, but going through the maths, what you do is partition the forecasts in two. In equation (9) you partition the constrained and the unconstrained variables via a linear combination: y_0 is the vector of hard-constrained variables and y_u the unconstrained ones. Imagine in our two-variable VAR that the policy rate falls into y_0 and real GDP into y_u. We then substitute equation (9) into equation (1) and derive the conditional density of the unconstrained forecasts given the conditioning variables. Going through some maths, we obtain this distribution for the hard-constrained conditional forecast, and since the relevant matrices (the H matrix and the implied conditional precision) are banded, we can simulate the hard-constrained conditional forecast efficiently with the precision sampler.

Okay, so the soft constraint. As I said, instead of fixing the conditioning variable to a value, we now condition on it lying between, say, 2 and 3%, so we need to draw from a truncated multivariate normal distribution. I am not going to go through the maths here, just the intuition; you can read the paper, or talk to me or Josh afterwards if you want the ins and outs of the soft-constraint algorithm. What we do is combine the precision sampler with Botev's minimax exponential tilting method: we draw the constrained variables marginally from the truncated multivariate normal using the Botev method, and then, given that draw, we generate the conditional forecast using the precision sampler. That is the basic intuition behind it.
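The hard-constraint step rests on a standard fact about Gaussian conditioning in precision form, which the following self-contained sketch illustrates on a made-up four-dimensional Gaussian; it is not the paper's sampler, just the textbook result that the conditional precision of the unconstrained block is a sub-block of the joint precision:

```python
import numpy as np

rng = np.random.default_rng(3)

# Partition y = (y_u, y_0) with joint precision P.  Then
#   y_u | y_0 = a  ~  N( mu_u - P_uu^{-1} P_u0 (a - mu_0), P_uu^{-1} ),
# i.e. the conditional precision is just a sub-block of P, with no big inversion.
def conditional_draw(mu, P, idx_u, idx_0, a, rng):
    P_uu = P[np.ix_(idx_u, idx_u)]
    P_u0 = P[np.ix_(idx_u, idx_0)]
    cond_mean = mu[idx_u] - np.linalg.solve(P_uu, P_u0 @ (a - mu[idx_0]))
    # Draw via the Cholesky factor of the conditional *precision*:
    C = np.linalg.cholesky(P_uu)                 # P_uu = C C'
    z = rng.standard_normal(len(idx_u))
    return cond_mean + np.linalg.solve(C.T, z)   # Cov = (C C')^{-1} = P_uu^{-1}

# Tiny worked example with a made-up 4-dimensional Gaussian:
mu = np.zeros(4)
Q = rng.standard_normal((4, 4))
P = Q @ Q.T + 4 * np.eye(4)                      # a valid precision matrix
idx_u, idx_0 = [0, 1], [2, 3]                    # unconstrained / hard-constrained
draws = np.array([conditional_draw(mu, P, idx_u, idx_0, np.array([2.0, 2.0]), rng)
                  for _ in range(50_000)])
print("conditional mean estimate:", draws.mean(axis=0))
```

When P is banded, so is P_uu, which is why the same banded Cholesky machinery shown earlier applies directly to the hard-constrained draw; the soft-constrained case replaces the fixed value a by a draw from the truncated marginal.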
What I want to do now is a simulation study in which we compare our proposed precision-based approach to existing methodologies. For the hard constraint we compare against Waggoner and Zha, against Bańbura et al., which is a filtering-and-smoothing method, and against the Antolín-Díaz et al. method. For the soft constraint we can only compare against Waggoner and Zha and the Andersson et al. method, because you cannot impose a soft-constrained conditional forecast within the Bańbura et al. or Antolín-Díaz et al. frameworks.

These are the details of the simulation study; I won't go into them too much, but we estimate everything using standard uninformative priors. For the hard constraint we consider a medium VAR with a short forecast horizon, an eight-variable VAR with a five-step-ahead horizon and three constrained variables, and a large VAR with a long forecast horizon, a 15-variable VAR with a 20-step-ahead horizon and again three constrained variables. We estimate using 25,000 MCMC draws with a 10,000-draw burn-in period.

This graph shows the conditional forecasts for the first four variables in the large VAR case, and you can see that the posterior estimates are exactly the same across the four methods: our precision-based approach generates exactly the same estimates as the three existing hard-constraint methods. But since I claimed our method is computationally more efficient, we also produce a table of computation times, for medium, large and extra-large VARs. Medium and large are as defined in the simulation study; extra-large is a 40-variable case. Here the step-ahead column is the forecast horizon and n_0 is the number of constraints. You can see that, whether with two lags or four lags, our precision-based approach is clearly more computationally efficient than the three existing methods.

Next we do a simulation study for the soft constraint: an eight-variable VAR with a long forecast horizon and one soft constraint, with details otherwise similar to the hard-constraint study. Again, our posterior estimates are exactly the same as those from Waggoner and Zha and from the Andersson et al. method. But how does it compare in terms of computation time? Our precision-based method took 62 seconds, and the Andersson et al. method was competitive at 70 seconds, whereas the original Waggoner and Zha method took something like 2,005 minutes, so it is really computationally inefficient. You might say the Andersson et al. method is quite a competitive alternative, so what we do next is explore how our precision-based method compares with it as n_0, the number of constraints, grows. For one constraint, our precision-based method and the Andersson et al. method give you pretty much the same computational efficiency.
But as the number of constraints increases, our precision-based method is clearly better than the Andersson et al. method.

Okay, so for the empirical application, we estimate a large Bayesian VAR, a 31-variable quarterly VAR, where our setup follows Crump et al. We estimate the large Bayesian VAR using the asymmetric conjugate Minnesota prior that Josh published in Quantitative Economics. What we do is investigate the macroeconomic impact of a combination of multiple soft and hard constraints at once; we are the first study to do this. We estimate the model up until 2019, and then we impose soft and hard constraints on CPI, unemployment and the 10-year Treasury rate, where the constraints we implement mimic the baseline and adverse scenarios of the Federal Reserve's 2020 stress tests. Here is an example of the constraints: we impose a soft constraint on CPI and hard constraints on the unemployment rate and the Treasury yield. You can see that this is quite complex, because the bounds for the soft constraint change over time; this type of scenario is very complex. That is the baseline case, and this is the adverse scenario case.

And these are the results. I don't know how much time I have left, but these are real GDP, industrial production, business sector output per hour, housing starts, the stock market and the VIX. This is the conditional forecast for the baseline scenario, where the economy moves along steadily: the blue line is the unconditional forecast and the red line is the conditional forecast, and you can see that the majority of the real variables are increasing. For the adverse case, you can think of it as a negative shock to the economy, like the COVID shock, and you can see that for the red line the conditional forecasts of the majority of the real variables drop.

To conclude, we propose a novel precision-based approach to conditional forecasts, and it can encompass various things like scenario analysis or entropic tilting. Our approach is computationally more efficient, and it can handle really large-dimensional VARs as well as large numbers of conditioning variables and long forecast horizons. The simulation study shows that we generate exactly the same estimates as the existing methodologies in the literature, but we are more computationally efficient. In terms of the empirical application, what I want to say is that it is there to illustrate the complexity of the scenarios: with the existing literature you could not implement these multiple hard and soft constraints, and we want to show that, given our algorithm, you can do very complex things that previous methods in the literature cannot do. Our next step is to extend the framework to non-linear models like Bayesian VARs with stochastic volatility, and we want to create a conditional forecast toolbox for people in central banks and for researchers. And that's it. Many thanks.

Thank you, Aubrey, very good paper and super important for practitioners. Okay, so the discussion is by Giulia Mantoan from the Bank of England.

Okay, thank you very much. Thank you, Aubrey, for this presentation. I really liked reading this paper. I think it's a very elegant paper, I have to say.
But, like any paper by Aubrey, it is very technical. So I am just going to discuss a few points that I really liked, and some questions that I have, probably because I could not understand everything; all the questions are on my side.

Okay, so the framework in which the paper is written builds on Antolín-Díaz, Petrella and Rubio-Ramírez (2021). What they work on is a unified framework in which you can do several things. You can constrain your forecast with hard constraints or soft constraints, where the difference between the two is just how certain you are about your constraints: if you want to constrain on a specific path for future events, you have a hard constraint; otherwise, if you are unsure, you can include some uncertainty by allowing your constraint to be a range of numbers. The same constraints can be applied to both observables and structural shocks, and then of course you can combine hard and soft constraints on both observables and structural shocks and do scenario analysis. So this is a very ambitious task, as I was saying, and then you can use this precision-based sampler, which is quite efficient.

The reason why, at the moment, we cannot very easily impose soft constraints in our forecasts is that the literature, following Waggoner and Zha, basically relies on an accept-reject algorithm that obtains candidate draws from the unconstrained distribution, so you need a lot of draws in order to keep enough of them to build your forecast.

So let me move on. I really like the paper because it is very elegant, it is very general, and you can get all the different specifications out of it. However, I was a bit lost in understanding where the contribution comes from. My understanding is that it comes from applying two different algorithms: the Chan and Jeliazkov (2009) one, and the Botev (2017) one for soft constraints. However, it was a bit difficult for me to understand what I would need to do in order to run these algorithms. What I understand is that I need to derive the conditional forecast distribution in terms of the inverse covariance matrix. So my question is: is this correct? Is this the only thing I need to do, or is there anything else?

Another point: the theoretical part was very fun to read, to be honest, but at some point it gets a bit lost in the specification of the different variants of the model. I would love the passage from hard constraints on observables to soft constraints on observables to be a bit more fluent, because I understand that these are all parts of the same problem.

Then I have a question, which I have already discussed with you, Aubrey, which is the following. You present all of these restrictions, and then you pick one of the solutions, and at that point in the paper it was not very clear why you picked that one; it was not reasoned why you picked this restriction, whether it is just because of Antolín-Díaz, Petrella and Rubio-Ramírez, which I think is the answer from your presentation. Can I use other restrictions, other solutions? I imagine yes, but I would like to have this discussed in that part of the paper.
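On the multiplicity of solutions just raised, here is a minimal sketch of the point, with made-up dimensions: the Moore-Penrose inverse picks the minimum-norm solution of the underdetermined restriction system, and any other valid solution differs from it by a null-space component:

```python
import numpy as np

rng = np.random.default_rng(4)

# Equation (5)-style underdetermined system: fewer restrictions than unknowns,
# R eps = r with R (m x k), m < k, so infinitely many eps satisfy the constraints.
m, k = 2, 5
R = rng.standard_normal((m, k))
r = rng.standard_normal(m)

eps_mp = np.linalg.pinv(R) @ r         # Moore-Penrose choice: minimum-norm solution

# Any other solution = minimum-norm solution plus something in the null space of R:
_, _, Vt = np.linalg.svd(R)
null_basis = Vt[m:]                    # rows spanning the null space (k - m of them)
eps_other = eps_mp + 0.7 * null_basis[0]

print(np.allclose(R @ eps_mp, r), np.allclose(R @ eps_other, r))   # both True
print(np.linalg.norm(eps_mp) <= np.linalg.norm(eps_other))         # True: min-norm
```

So in principle other solutions are admissible; the Moore-Penrose choice is one defensible selection rule among many, which is exactly the point the discussant asks to see argued in the paper.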
Okay, so another point, on the application. I really like the application, because you can showcase how powerful the methodology you propose is, since you can have hard and soft constraints on observables or shocks. However, of course, a lot of things are going on: there is the choice of the prior, then you have shrinkage, because you have a large BVAR model, and then you have the impact of the different constraints on the conditional forecast. So my point in the end is that it was very difficult to understand where the resulting forecast was coming from. If I see these results, I would like to see, okay, this constraint has an impact in this way, and then how all the bits build up to the resulting forecast. This is something I would like to see.

Okay, so things that I liked a lot. The thing I could understand best was the minimax tilting method for soft constraining, which was very intuitive and very clear. However, I did not really understand how it fits into the main algorithm that you use. So a question: maybe it is possible to write the algorithm down, so I can see the single steps in this case as well?

Then, the simulation exercise was very nice because, as you saw in the presentation as well, they compare their methodology with the existing literature and get exactly the same results. This is very good, because you can see that you can obtain the same results but in a much faster way; that was very intuitive.

And this made me think a lot about whether we can use this for density forecasting, especially with the soft-constrained case, which I think is the real contribution of this paper. The paper works in a very unified framework, but I think where the argument shines the most is in using soft constraints and combinations of soft and hard constraints. I was thinking this is actually very interesting, because we can move towards introducing some uncertainty about the constraint on the forecast. However, something I was thinking about is that, even when you use soft constraints, you are focusing on the mass of the distribution, right? Because you say, I think the policy rate in three years' time is going to be between 1.5 and 2, or whatever you want, so it is still about the bulk of the distribution. So then my question for you is: can I use this method to learn something about the tails? How can I impose something on the tails? I think the answer is yes, but I would like to hear it, probably through the scenario analysis.

Okay, so let me just leave you with this point on why you should be thinking about using this methodology. I think that if you are interested in doing conditional forecasting, especially using soft constraints or a combination of soft and hard constraints, with a large VAR, which is what all policymakers do, or should do, then this methodology is very good, because you can get the results in a very short amount of time and they are very accurate. So then the obvious question is: when can I have the code to actually run this? Okay, thank you.

So we have time to take questions. Siem Jan, yeah. Okay, yeah, thank you.
I think the presentation was very good and convincing, especially that you have one method that can do everything the other methods do. But since the focus is very much on computational speed, I think you have to be careful. When you do these comparisons in terms of time, did you implement all these methods yourself? Did you use the same code generation and the same platform? Because if not, then it is always very hard to look only at time, because each system produces code in its own way. And do you look only at time, or do you also look at the floating-point operations? Because the floating-point operations are where the actual calculations are going on. If you put a large Bayesian VAR in state-space form, the companion form has many zeros and ones, and if you run the Kalman filter just blankly, treating all these zeros and ones as if they were arbitrary real numbers, that takes an amazing amount of time. But if you identify all the zeros and only do the updating computations where there are no known zeros or known ones, then it is so much faster, and then it is not a fair comparison. So you should be very careful with the claim that certain things are much faster than other methods; it is also due to the implementation, how you implement these methods. That is why I am opposing a bit the general remark that, oh, this is so much faster.

So I have been doing, of course, some of this a lot, in papers and in policy, so this is very useful. There is a recent paper by McKay and Christian Wolf at MIT that says, well, actually, a lot of the stuff we have been doing, like the Sims and Zha or the Leeper and Zha way of thinking about scenarios, does not really work: you are still subject to the Lucas critique. And I guess the intuition is that if I do this tilting, or this conditional forecast, using shocks that are unanticipated, or using tilting period by period, I obtain a scenario that is very different relative to a scenario where I anticipate everything at time zero, right? One can think of what underlies, say, our policy communication in central banks, forward guidance and so on. My understanding of that econometric paper is that they say you are constrained, in scenario analysis, to what you can do using shocks at time zero, or residuals at time zero; you need to span that. That, of course, is a bit of a killer, because if you do it the hard way, at least as I understand it, it is very hard to nail the parts you care about. So what I was wondering is whether you could come to the rescue to some extent, because if you do it the soft way, and instead allow for a range, then maybe you are more likely to be able to implement scenarios using time-t, sorry, time-zero shocks. At least that was part of my reading of what I have seen.

Any other questions? Hi, Nuno Gonzalez from Banco de Portugal. I just have one question: is it possible to set some asymmetries in the soft conditions?

Yeah, thanks, Giulia, for your comments. Yes, if I remember correctly, the precision sampler works with the inverse of the covariance matrix. On the other points, yes, in terms of explaining the difference between the hard- and soft-constraint algorithms, we probably should explain that a little better. And in terms of the forecast accuracy that you asked about:
So basically, as we showed in the simulation, we are just proposing an alternative method for generating conditional forecasts. There are many applications out there; I think Todd Clark and Mike McCracken have done some applications on conditional forecasts, and if we used our method to do the same exercises they did, we should get the same forecast accuracy. We are just proposing an alternative, faster method.

In terms of imposing the tails: from my understanding, we can impose restrictions on the variance of the conditional forecast, so we can do that, and we are still exploring that avenue.

In terms of the computation times for the other methods: we do everything in MATLAB, and for the three existing methodologies we just used their own code. For example, the Antolín-Díaz et al. code was written in MATLAB and Ivan sent it to us, so we are directly using their code. For Bańbura et al. we are also using, I think, Domenico's code. And for Waggoner and Zha we are using Merrick's published code, where he improved on the original Waggoner and Zha algorithm. So we are not coding the three existing methodologies ourselves and making up the numbers; we are using what is already out there, their code, and comparing our method against that. I understand your point: if we wanted to, we could probably make the Kalman filtering methods more efficient, but we are comparing against the available code that people are using now, and I think that is a fair comparison.

In terms of Dara's comment: I am not sure about that, but at the moment we are just illustrating the algorithm. We have not explored, as you said, putting the soft constraint on the shocks, but we could look at that in the future; that is a good idea that would probably be interesting, and you and I can talk more with Josh about it.

And in terms of asymmetries in the soft constraint: if I am interpreting you correctly, do you mean changing the bounds between periods? Yeah, that is what we did in the empirical application; we can change the bounds across the forecast horizons. So you can do it: you do not have to fix the interval, you can change the interval. Our algorithm allows that; it is flexible enough. Yeah, thanks.

Okay. Thank you very much.