Thank you, Cornelia. Good morning, everyone. So, ESTER: what is it, how does it work, and where is it coming from? That is why we are here. ESTER is something new, of course. It is part of the new course that Europe is embarking on in order to reform its reference rates. So we have this constellation of things around the centre of the galaxy, which is the benchmark reform in Europe. The overnight rate is the first point on the curve, so that is the primary problem in a way: the reform of Eonia is at stake, and the move that the ECB took was to produce an overnight rate. Here we are engaged as an actor, an active one. Then we have the term rates: the Euribor reform, run by EMMI, with the hybrid methodology. That is also part of the constellation. We have the working group on a risk-free term rate, which is also extremely actively engaged in this. All these things will finally need to be put in place, and the transition issues will need to be addressed: moving to the new rates from a market perspective and a technical perspective, with processes and valuations, and also, of course, dealing with the legal risks. Change is always risky. So that is my plan here, the four points I will speak about: why we as the ECB are involved in this reform, what the context is, what the constraints are, and the dialogue with the public consultations; then a few words on how ESTER works, the methodology and the operational aspects. First, why is the ECB engaged in this? It is not our primary business, so to say. We are here to implement monetary policy. We have a primary objective, price stability; we have a monetary policy strategy; and then we have the implementation. And this is where it starts to connect with overnight rates, because the overnight rate is the first dot on the curve.
This is what measures how policy transmission works, so there is indeed a link between us and an overnight rate. Why did we decide to step in? The Eonia reform discussions of 2016-17 started to become important, plus there were concerns about the fact that Eonia was backed by lower and lower volumes, that this benchmark would no longer be compliant with the new regulation, and that banks were leaving the panel. So, given this momentum, the ECB announced in September 2017 that it would produce a new overnight rate based on the MMSR data that we had been collecting since July 2016, and that there would also be an industry working group to look at risk-free rates, including the term rates, with the ECB providing criteria. That is why we are in: the overnight rate is the first dot that measures how transmission works. Of course, there are also systemic issues behind it, but I can leave that aside. There is a new context in which producing a benchmark rate can be done. It is not like before. First of all, there are the IOSCO principles, global principles. When I say global, it is serious: just this morning we were discussing with a colleague from the South African Reserve Bank, which is also working on this. Everyone is impacted. The IOSCO principles say that being a benchmark administrator is a business which needs to be organised according to certain main ideas, like governance. You need to have an administrator appointed and in charge, but this administrator does not do whatever he wants: there are controls, there is oversight. So there is doing, and being checked. That is one. Then you have conditions on benchmark quality. Maybe you have the governance but not the quality; you need to have both. Quality, what is it? It is the design of the benchmark and what you are building your benchmark on: what is the quality of the underlying data of this benchmark rate?
And the hierarchy of inputs. The idea is to have transaction-based benchmarks. Fine, but if you do not have transactions, you need to integrate other information, and you need to explain how you do this. It is not that today I decide to take quotes and tomorrow I take an idea; no, you have to explain how you structure this process. Then, quality of the methodology: the way the rate is computed needs to be transparent and needs to be explained. It is not that tomorrow I change the methodology because I think it is more convenient for me; I need to explain that this is the process I will use for the coming years, so that there is stability and certainty in the conditions in which the rate is built. And of course you need internal controls on the data. So that requires a lot of work, actually; it is not that simple, because in the end you have accountability towards the public and towards authorities: audit trails, records of actions taken. In the end the norms that this represents are quite stringent. Of course it is not a one-size-fits-all approach, so you can indeed combine a hierarchy of inputs; what you need to do is document it, and also adapt, let us say, the way the benchmark is built to the risks, because we are coming from issues like manipulation. That is the overarching concern: manipulation. The ECB is somehow not too much impacted; the IOSCO principles are not binding for central banks, but we decided to stay as close to them as reasonable or necessary. As for the context, this will be translated into a legal act from the ECB, a guideline, on top of the MMSR regulation which governs how the data are collected. So there will be a legal act that says how the benchmark is built, and there will be an oversight committee overseeing the administrator, which will be a separate function, to ensure the integrity and the reliability of the computations and the processes.
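To make the "hierarchy of inputs" idea concrete, here is a minimal sketch of what a documented input waterfall can look like. The tier names, the quotes tier and the threshold of 20 transactions are illustrative assumptions, not the ECB's actual specification; the point is only that the order is fixed and published in advance.

```python
# Illustrative only: the IOSCO "hierarchy of inputs" as a fixed,
# documented waterfall. Tiers and thresholds here are hypothetical,
# not the ECB's actual specification.

def select_input(transactions, committed_quotes, previous_day_rate,
                 min_transactions=20):
    """Walk the published tiers in a fixed order; never improvise it."""
    if len(transactions) >= min_transactions:
        return "transactions", transactions       # Tier 1: real trades
    if committed_quotes:
        return "quotes", committed_quotes         # Tier 2: quoted prices
    return "previous_rate", previous_day_rate     # Tier 3: last resort
```

The design point is exactly what the principles demand: the fallback logic is written down once, so an outside reviewer can verify which tier was used on any given day.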
Then, the constraints we face in the new world: look at how benchmarks were built before and how they need to be built now. Taking a bit of a step back, what does all this mean? Before, we had Eonia, we had the old SONIA. How were they built? It was fairly simple, so publication could be very quick, because the rate was built on aggregated data transmitted by the banks, the contributors to the rate. One number: I have a volume and a rate, I take the average. You can do this at the end of the day. One issue we faced in the public consultation was: why don't you produce the rate at the end of the day, like Eonia? There is a constraint, because the rates are now built differently. We use granular data, like the new SONIA, like the OBFR in the US. Those rates are produced the day after, at 9 o'clock, because you have to aggregate hundreds or even thousands of data points. In the US, the new benchmark will be SOFR. SOFR, the secured benchmark, will be published at 8 o'clock, but on the basis of already confirmed information coming from the trading platforms on the secured US market; and anyway, this one is also published the day after. So in the end, something that is changing is when you get the information, when you get the rate. For ESTER, today's rate will be known the day after; so tomorrow. Tomorrow is Saturday, so that does not work; it will be Monday, before 9 o'clock in the morning. To illustrate this constraint: it takes time to collect the data. Again, we use MMSR data to build ESTER, and on this chart you see how the volume that we receive builds up overnight. In MMSR we receive the files between 7 o'clock in the evening and 7 o'clock in the morning the day after, and you see the build-up in the volume of information that we receive. At midnight we are far from having even 50% of the information needed; we are at 30% or so.
And then there is not much happening until 3 o'clock in the morning, and we finally cross 50% after 4 o'clock in the morning. On these data collections you request more than trade information: you also request information coming from other databases, from instruments which are not in the same trading systems or the same reporting systems. The time it takes banks, the industry, to gather this is long; it takes the night. Therefore it is difficult to say we can publish indices based on granular data on the same day. For the UK it is the same; they have exactly the same reporting logic as ours. That is why all these rates are now going to be published the day after. Right. We had consultations to build our ESTER, and the working group on risk-free rates also had a consultation, as was said, to decide what the new rate replacing Eonia would be. This I will not repeat, I will skip quickly; but in the end, in September this year it was decided that ESTER will be the replacement of Eonia, at the latest by October next year. But this you know already. How does ESTER work? It is going to be an unsecured rate. Fine, but what does that mean? Unsecured covers a lot of things, so we have a scope, which has been made public in the consultations and in the statement of methodology: an unsecured overnight rate based on deposits, which we use to measure the borrowing cost of the reporting banks. Calculation: a volume-weighted average with 25% trimming, which I will explain later. We have a data sufficiency policy: on a day when we do not have enough transactions, we use the rate and the information of the day before. And we have triggers: a minimum of 20 banks, plus a concentration limit. We adopted this kind of approach because the euro area is complex; you cannot sum up your market activity in one number, the volume, because the various countries have different national holidays.
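The calculation and sufficiency rules just listed can be sketched in a few lines of code. This is a sketch under assumptions, not the ECB's implementation: trades are simple (rate, volume) pairs, the trimming removes 25% of volume at each end (as described in the talk), and the concentration limit uses the 75% top-five share that is quoted later in the talk.

```python
# Sketch of the ESTER-style calculation rules described in the talk.
# Assumptions: trades are (rate, volume) pairs; trimming is by volume.

def trimmed_mean(trades, trim=0.25):
    """Volume-weighted mean of the central 50% of volume.

    Trades at the extremes are filtered out of the calculation, not
    deleted: they still define where the central band lies.
    """
    trades = sorted(trades)                       # sort by increasing rate
    total = sum(vol for _, vol in trades)
    lo, hi = trim * total, (1 - trim) * total     # central volume band
    cum, num, den = 0.0, 0.0, 0.0
    for rate, vol in trades:
        # portion of this trade's volume falling inside [lo, hi]
        take = max(0.0, min(cum + vol, hi) - max(cum, lo))
        num += rate * take
        den += take
        cum += vol
    return round(num / den, 3)                    # three decimals

def contingency(bank_volumes, min_banks=20, max_top5_share=0.75):
    """True if the day fails the sufficiency triggers, in which case
    the previous day's information would be used instead."""
    vols = sorted(bank_volumes.values(), reverse=True)
    top5_share = sum(vols[:5]) / sum(vols)
    return len(vols) < min_banks or top5_share > max_top5_share
```

Note how a trade straddling the 25% or 75% volume boundary contributes only the part of its volume inside the band, which is one reasonable way to trim by volume rather than by trade count.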
With bank holidays not falling at the same time everywhere, sometimes you do not have as much traffic as usual in certain countries because there is a banking holiday. So you have to make your rate, let us say, immune to this kind of risk. That is why we look rather at the number of banks and at concentration, to avoid the rate being biased. Governance and processes: this I said already. Publication policy: we will give more information than is currently given. Today it is the rate and the volume, but we will say more than this. And then a bit on the timing. There are also the pre-ESTER publications already available, which give you a flavour of what ESTER is going to be. If you want to know more, there is an excellent website, our website, where all this information is, so you can look at it. Broadly speaking, what do we have? We have in ESTER around 30 billion per day in terms of volume. That is quite good, actually. Okay, it is not as big as SOFR in the US, which is in the range of 800 billion, but for an unsecured rate we think 30 billion is good. This is based on the data that we had already received; that is what allowed us to conclude that we have enough volume with the scope we chose. Trimming at 25% looks a bit bizarre. What does it mean? Does it mean that we eliminate 25% of the information, or 50% of the information? Because we may indeed lose data; that was also a point we received during the consultations. So, very quickly, how does the trimming work? Take the trades of the day, the bulk of trades on one day. We sort the trades by rate, with their associated volumes, starting from the lowest rate upwards. Here is a fictitious example: you have this universe of trades, with 25% of the volume at the higher end of the rates and 25% of the volume at the lower end.
Those trades are not taken; they are filtered out in order to compute the rate, because we compute the rate on the basis of what is in the middle. But of course, to have the middle I need the extremes. So we do not eliminate the extremes: we filter them out of the calculation of the rate, but they remain part of the universe that we use to define how we compute the rate. We do not lose data; that is a very important point. It does not mean that we eliminate transactions; we simply do not take all transactions into account for the computation of the rate. That is different. As a result, ESTER is quite stable; that is the light blue here, compared with Eonia. We see that we have less vulnerability to individual factors or contributions; in the end the rate is pretty stable. Is it too stable, in a way? We also have to have a view on this. We think not: you could say such a stable rate is not good, but actually that is the current situation, given the excess liquidity in the market and given the forward guidance of the ECB. Why should the rate jump by 10 basis points every day? There is no fundamental reason, so we quite agree with this outcome. Volumes: 30 billion, fairly steady, higher than Eonia. You can see that we are indeed in the range of 30-35 billion; there was a peak at 44 billion, and the lowest was 16 billion. In the end that is quite good. Volumes fall at quarter-end or year-end, which is also in line with our experience of market activity. Looking a bit at past data, at what ESTER returned as numbers: in the end we had a rate between 42.6 and 47.4 basis points, plus the volume range and the number of trades; you see it is around 400 trades per day on average, and the number of contributing banks is comfortably sufficient. And you can see the contingency triggers that we have: a minimum of 20 banks
and a maximum concentration of 75% for the five most active banks. We never hit those thresholds, so we are clearly safe: there is no lack of transactions that would lead us to publish on a contingency basis. This is very important: ESTER will be based on transactions every day. Every day it will work like that, because we have enough transactions. Now a bit more on operational things, governance and processes. A methodology is nice, but then you need to make it work every day. Every day it has to be boring, because it simply has to work as it should. It looks simple in the end, but it is a process. The ECB, as administrator, will produce the rate through automated processes plus some more manual feedback loops, which I will explain. We will have a methodology that will be regularly reassessed: does it work? Is the 25% trimming good? Should I modify the scope, include more instruments? Let's see. There will be regular reassessments of that, and in case we observe a need to change, there will be public consultations. Importantly, the rate will be publicly available on the website, for free. Not bad. How do we guarantee quality? We can say we guarantee quality; it is a nice statement, but it has to be done. It is a combination of automatic and manual steps. Green is automatic, and automatic is good. That covers everything to do with the reception of the data, the files and the integrity checks. All this will be done by systems, as is done currently in MMSR: nothing new, we do not reinvent the wheel, we improve it, we make it rounder. Then there are the rate plausibility checks. We are going to use algorithms that tell us whether some trades need to be verified because they look like they deviate from what we expect, given the previous patterns of the reporting agents. So there will be some automatic flags.
Those flags would then suspend from the computation of the rate the transactions that do not look in line with the previous patterns. The idea is that we build the rate on data that seem sufficiently reliable to measure properly the underlying interest, the cost of funding; we park outside the things which do not look in line. That is the automatic part. The manual part, in blue, is the feedback loops: the system will call the banks. In case we identify a trade that does not look as it should, it is first suspended, let us say, from being taken into the computation of the rate, but then there is contact with the bank to get confirmation: is the trade that you reported for yesterday correct or not? Is it what it looks like? Depending on the confirmation from the bank, the transaction can then be reintegrated into the computation of the rate, to have more information and make the rate better; but if we cannot get this confirmation, we do not take it. This is the manual part, which is there to achieve a good level of quality. So we try to combine speed, timeliness and quality. Now, the information that we are going to give to the markets on each rate determination. Of course, the rate first, so we should start with that: three decimals, like Eonia. We will also give the total volume on which the rate has been built, the number of transactions, and the mode in which we computed it: normal or contingency. And a bit more: the number of banks, the share of the volume of the five largest banks, to illustrate that we are not using the contingency procedure, and the rate distribution, with the rates at the 25th and 75th percentiles. With that, we think we explain quite well, every day, what we are doing. Transparency also.
It is not simply a slogan: there will be communication, because as administrator we have a commitment to communicate and to say how the rate was built. There will be regular quality reports. Of course, if there are no errors to report, then there is nothing to say. Then there are the errors between 0.1 and 2 basis points, where we say, for example, that in the previous quarter, on a given day, the published rate deviated by half a basis point from what it should have been. That is something, of course. And then there are the errors larger than 2 basis points: if in the course of the morning we spot that the rate is wrong by 2 basis points or more, we simply republish, up until 11 o'clock in the morning, and this is why the feedback from the banks is important. And that is my last slide: it simply links what we are going to do with ESTER and what the working group and the industry are going to do with it. But that is the next step. I think that was it.
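As a closing illustration of the publication and error policies described in the talk, here is a minimal sketch of the daily publication record and the republication rule. Field names, the naive percentile picks and the `must_republish` helper are hypothetical; only the figures (three decimals, the 25th/75th percentiles, the 2-basis-point threshold, the 11 o'clock cut-off) come from the talk.

```python
# Hypothetical sketch of the daily publication record and the
# error-handling rule; field names and percentile method are
# illustrative, not the ECB's actual specification.
from datetime import time

def publication_record(rate, trades):
    """Assemble the per-determination information listed in the talk."""
    volumes = [vol for _, vol in trades]
    rates = sorted(r for r, _ in trades)
    n = len(rates)
    return {
        "rate": round(rate, 3),               # three decimals, like Eonia
        "total_volume": sum(volumes),
        "number_of_transactions": n,
        "mode": "normal",                     # or "contingency"
        "rate_at_25th_pct": rates[n // 4],    # naive percentile pick
        "rate_at_75th_pct": rates[(3 * n) // 4],
    }

def must_republish(error_bp, now):
    """Errors of 2 bp or more spotted before 11:00 trigger a
    republication; smaller ones (0.1-2 bp) are only reported ex post."""
    return error_bp >= 2.0 and now < time(11, 0)
```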