Good afternoon everyone, and sorry for this small delay, which is due to the beginning of summer, when everyone tries to fulfil his agenda; that of course makes a lot of meetings come at the same time, and obviously the logistics cannot always work out as foreseen. You have already had interesting discussions this morning about what I would call the demand side of statistics: the new needs for monetary policy, and the new needs for financial stability. That is why I think we should not forget that there are always two sides, and this afternoon we are about to look at the supply side, which also influences statistics, especially from the technological side, but also, I would say, from the regulatory side, as is the case not only for statistics but for most of our life.

I think that technology, and also regulation, which very often follows technology, is about to bring maybe disruption, but certainly change, to the way we work and the way we will have to work in the future. It is not only that computing capacity is increasing and computers are becoming more powerful. We also have more and more daily processes which are about to be digitalised, and we see the development of artificial intelligence and machine learning. If some of you had a peep into the newspaper today, there was even an article about how in China technology is making great strides forward, not necessarily to encourage democracy, but it nevertheless enables everyone to do his work in a different way, even where it is for social control purposes. But that is not what I expect statistics to be for: we are the servants of those who try to maximise welfare, and that is how we see statistics, and that is how we also see this conference today.

At central bank level we are traditionally, well, people say, a little bit more conservative, which would mean that we look at technological development with a certain amount of prudence, patience and persistence, as we say in our monetary policy jargon. On the other hand, we also know that we have to adjust and adapt. We see that distributed ledger and blockchain technology are not just buzzwords for central banks; we have to observe these trends carefully and try to study how we can put them to use for our own purposes, and here we see that there will be new possibilities. Big data is not something that we may simply like or dislike; it is something that we have to deal with, because it is possible, it exists, and it is the only way to handle the flow of data that has now been made possible, while at the same time respecting the new regulations which govern all activities, including statistics.

Central bank statistics by its very nature is technology dependent and also technology driven. It is therefore a prime candidate for using these new technologies in order to produce statistics not only faster but also with higher quality. I always remember, when we started the euro back in 1998, how I tried to compare the instruments that we had at our disposal at the euro area level with those at the disposal of the Fed, and I was greatly impressed by how much delay there was in many of the statistics that were easily available to the Fed, and also by how much larger, more granular and deeper their access to information about the economy was compared with ours. We have since tried our best to close those gaps, and I think statistics have been greatly supportive in this domain. By now we have to some extent already closed the gaps, although there are still
differences; the differences now have more to do with the different economic structures that we have in Europe, with the fragmentation that we have and the slow process of harmonisation in the integration process. And in the integration process we now also have Eurostat, which is driving in the same direction.

So this afternoon we will not only try to see how we can produce more and faster; we also have to see how we can produce in the most efficient, the most cost-efficient way for the end users, because the burden we can impose on them is not limitless, and there are boundaries to what we can do. You see already that those who produce the raw material and hand it over to us complain sometimes that we should do our utmost to avoid duplication, and I think that is a very fair discussion in a democracy. Fortunately, statisticians in general, and central bank statisticians in particular, have already taken up this challenge, and they have several projects leveraging the potential both of digitalisation and of machine learning. I think we will hear more about this during the discussion this afternoon.

I am very proud that we have been able to gather some outstanding speakers for this discussion. We will start with Alberto Pace from CERN in Geneva, who will set the stage with the power of statistics in the era of big data. He will show us the potential of the new technologies and potential applications for our purposes. We will then turn to Claudia Buch, the Vice-President of the Deutsche Bundesbank. She is very knowledgeable in many areas, and she also comes regularly to the Governing Council, which starts right after this conference; but she is also, inside the Bundesbank, in charge of statistics, and she has been a driving force within the Bundesbank in analysing the power and the potential of big data. Ms Mariana Kotzeva is Director General of Eurostat, and she will inform us about the impact of technology and innovation on official statistics. She has been very active in the European Statistical System in applying new technologies in order to increase the efficiency of statistical production processes. And finally we come back to the ECB with Werner Bier, Deputy Director General Statistics, one of the longest-serving staff members inside the ECB, and I hope that his last months at the ECB will be as productive as his many years have been. Okay, that was quite long. So, you see, he will share with us all his lifelong experience, and he will present some initiatives that we have taken at the ECB to use modern technology to streamline data reporting. He will also highlight the important role that data standardisation plays in the world of digitisation. So, without further ado, let us start with Alberto Pace.

The microphone works? Yes, everything works here. So, it is a great pleasure and an honour to be able to speak at this statistics conference. I am not, strictly speaking, from the statistics business or the banking business; I am from CERN in Geneva. At CERN we have this 27-kilometre machine, which is really one of the most complex machines that mankind has ever built. It is underground, on the border between France and Switzerland.
You can see the red dots: on this machine there are four large experiments that have been designed to study the inner behaviour of matter. Now, one of the most important things about these experiments is that they produce a lot of data. They produce 30 petabytes, or 30 million gigabytes, every year, which we need to store and whose long-term preservation we need to ensure; and as we have an open data policy, we also make it available to all the physicists worldwide. We do a lot of analysis on this data at CERN, but a very large fraction of the analysis is done worldwide, in physics labs all over the world. This is a snapshot of our network, which is a research network, where you can see at the top that we need more than 600,000 CPU cores to analyse the LHC data. There is a link here: if you load this link in Google Earth, you will see the live status of the Worldwide LHC Computing Grid.

Now, what has changed, and why do we need so many computers to do scientific research? It is because it is really a matter of statistics. We have scientific theories and we do experiments, and the point is always to try to calculate the likelihood, the probability of the model being true given the data that have been observed. I understand that here I am in the right place, where everybody understands this. If you look at one of the scientific papers that were published to announce the discovery of the Higgs boson, it did not say "we have discovered the Higgs boson". It only said that the observation made at the LHC is compatible with the theory of the Higgs boson, with of course a certain fluctuation probability: five standard deviations, five sigma, et cetera.

So what has changed in the last years? If we go back over the last 20 or 30 years, statistics and mathematics as a science, as hard science, really did not change too much. What has changed is the power of computing: in the last 20 or 30 years, CPUs have become able to do one million times more than what they could do 20 or 30 years ago.
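To put a number on the five-sigma convention just mentioned, here is a minimal Python sketch, not from the talk itself, that converts a number of standard deviations into the one-sided probability of a pure statistical fluctuation; it uses only the standard library and the Gaussian tail formula.

```python
from math import erfc, sqrt

def tail_probability(n_sigma: float) -> float:
    """One-sided Gaussian tail probability of a fluctuation at n_sigma
    standard deviations: p = 0.5 * erfc(n / sqrt(2))."""
    return 0.5 * erfc(n_sigma / sqrt(2))

for n in (3, 5):
    p = tail_probability(n)
    print(f"{n} sigma -> p = {p:.2e} (about 1 in {1 / p:,.0f})")

# 3 sigma -> p = 1.35e-03 (about 1 in 741): "evidence" in particle physics.
# 5 sigma -> p = 2.87e-07 (about 1 in 3.5 million): the discovery threshold.
```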
I always remind people that the computing power inside our mobile phone of today is equivalent to the power that CERN had in 1990 for the whole of worldwide physics. Storage also had an impressive growth: one million times more capacity to store data. The network, with the internet, had a little bit less, but it still had a factor of 10,000 improvement.

Now, the problem is that the general population is unaware of these changes. This is an example of a survey that I have made, not with random people but with master's students. I asked a very easy question: how do they estimate the distance between Lausanne and Zurich? They all got the right answer, not because they knew it, but because they were able to find the answer by common sense: it could not have been 200 metres, it could not have been 200,000 kilometres, even less 200 million kilometres. But when we talk about computing, and I ask how many transistors there are in an Xbox computer, only one student out of five was able to find the correct answer, while the most popular answer had a three-order-of-magnitude error. And things get much worse when we talk about the internet: on those questions only one student got the correct answer, and the majority of the students had a 20-order-of-magnitude error. Translated back into the distance between Lausanne and Zurich, it is as if the students had thought that the distance between Lausanne and Zurich was one million billion light years.

So what can these computers do? Clearly, this has enabled the use of methods and techniques that a few years ago were computationally impossible. The current research would not have been possible without all this computing evolution, because in the collisions that we do, we do not know the initial conditions. We do not know what has interacted; we only see the results. That is why we really have to remove all the noise, and basically remove every type of collision that we already know, in order to find the collisions that we do not know, which make the discovery. And what has changed is that in the past, as the computing power was limited, you really had to focus the computing on an area where you thought you would find something. Now you can take an approach where you make a systematic, 360-degree attempt to try all possible combinations and all possibilities.

So in practice, the increase in computing has allowed statistical correlations on unstructured, unverified, unreliable data. Of course, we all know the traditional approach, where we collect data in a database, the database is supposed to contain the truth, and then you have deterministic arguments that derive, or can make, predictions; and when a prediction is incorrect, you can always trace it back to some wrong data. On the other hand, with the unstructured approach you collect data without constraints, and then you have to guess what it is, what it represents. What you do is make correlations with other sources of information. In this example you have a number; you think it is a phone number, and you can correlate it with the fact that it was collected from an internet connection in Switzerland, so you think it is a Swiss number. So the analytic software can be designed to guess what the most probable outcome is. But this is not the most interesting part, because now we see that the main scenario is to identify less probable but alternative truths, which then allows industry to discover unexploited niche
markets, uncovered needs, or services that are missing. And the change compared with structured data is that when your prediction is incorrect, it is not that your data are wrong; it is the correlations and the additional parameters that you have used to tune your prediction that may need refining. That is why, yesterday in the keynote, I heard that transparency is very important: the more you move towards unstructured data, the more you have to explain your techniques, because with structured data it is very easy to trace these deductions. Here I have put a list of many techniques and technologies that are available. I will not go into detail; I will just mention the structured and unstructured approaches I have just described, and say that whenever possible, if you have structured data, you must use the structured data. It is only when you do not have structured data, or where the cost of having structured data is too high and you have sources of unstructured data that are very cheap, that you use the unstructured data. So basically you need both; it is very simple.

Now, here I have put a link, maybe we can show it, but the idea is this: I heard about granularity earlier. For example, in our data centre we have a very, very high granularity of data, which allows us to calculate statistics in real time. There are many parameters that we can see; you can see the data that are flowing to the internet. So I know that in this moment, and it is interesting, we have eighty-four thousand computers reading our data from the internet, and we are transferring thirty-one gigabytes per second. These are statistics aggregated in real time. So those are the generic statistics, but the power of very high granularity data is that we can, in real time, make queries that were not foreseen, and so drill down into particular problems or issues, which turns out to be extremely powerful.

Then there is dealing with unstructured information: clearly text-based sources, documents, email messages, but more and more we see the possibility of analysing images, audio files and videos, knowing that every piece of information can be incorrect. What you do is reduce the ambiguities and reduce the errors by processing a large amount of data, which is where the term "big data" comes from. Here I have two examples. One is image recognition. How many of these photos represent a baby? It is a subjective question, to which you basically cannot give a definitive answer. I do not know, is this a baby? Well, it is a baby cat, but it is not a baby, it is a cat. So already we can start a discussion about whether this is a baby or not. The second question: can you estimate a subjective probability for each image? Yes, of course. Can you transform millions of subjective answers into an objective one? And this is the mistake not to be made: for every one of these images you can estimate a probability of being interpreted as a baby or not, but you cannot make the answer objective. What you can do, and this is now the object of very large research efforts, is develop an algorithm that can analyse a new photo outside the set, starting from the millions of subjective answers,
trained on a much larger image database, which will tell you: this is a baby, with this probability. This is extremely strategic in many of the industries that we see heavily investing in it.

The second example is the fact that very likely everybody in this room has a mobile phone or a portable computer, and you know that some companies know the location of your phone; they can read your email, access your contacts, and they also know all the web pages you have been visiting. The good news is that, with the data protection regulations, you now own this data, and I encourage you, for example if you have a Google account, to check what Google knows about you. The good thing is that they are fully compliant, so you can delete, you can erase this data. But what can they do with this data? They can do all the correlations, and once they have done the correlations, then it is their information; it is information that belongs to them, no longer to you.

The other change that I wanted to mention is that many notary roles can now be guaranteed by algorithms. An application distributed across thousands of computers can ensure, verify and guarantee its own integrity by itself. Here are two examples. The first is peer-to-peer file sharing using the BitTorrent protocol, where data is reassembled on the client from a large set of untrusted computers. It is probabilistically impossible for someone who would like to cheat you into downloading the wrong file to succeed, because they do not control enough computers, and the software does all the necessary cryptographic verifications, which guarantees the integrity of the data. The second example, which is emerging very fast, is the blockchain. The blockchain is just a distributed database validated by a large number of computers: everybody can read, everybody can append new data, and everyone can validate. As validation is rewarded and you have many computers validating, it is probabilistically impossible to cheat. Why does this technology have potential? Because every activity requiring a certification authority can profit from using this distributed database, which is guaranteed and worldwide accessible. And we see new business models appear, because the notary role of, say, registering contracts can be completely automated. But the business is shifting: once you have the contract, what are the workflow and the procedures to enforce the contract and to resolve disputes? Here I would like to mention that we really see the big companies shifting their business. If we take eBay, Amazon, Alibaba, AliExpress or even PayPal, the point is not that you have established a contract to buy or sell a merchandise; it is all the workflow which guarantees that something happens if one of the two parties either does not pay or does not deliver. It is the contract enforcement and the resolution of disputes which make the added value of this new type of business.

Where do we see the most spectacular changes? High-energy physics has profited because the community is historically well organised around CERN, but we see that other sciences can expect similar benefits.
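Before the list of those fields, a quick illustration of the hash-chain integrity idea behind the blockchain just described. This is a toy sketch of the integrity mechanism only, with no consensus, rewards or networking, using Python's standard library:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's contents, including the previous block's hash,
    so that altering any earlier block invalidates every later one."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Build a tiny chain: each block commits to its predecessor's hash.
chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["tx: A pays B", "tx: B pays C"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

def verify(chain: list) -> bool:
    """Anyone holding a copy of the chain can re-check every link."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))               # True
chain[1]["data"] = "tx: A pays Z"  # tamper with history...
print(verify(chain))               # False: the chain no longer validates
```

The same re-check-everything-locally principle is what the BitTorrent example relies on: each downloaded piece is verified against a known hash, so no single untrusted peer can substitute wrong data.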
So, computational biology is one such field; I just mention the challenge of analysing DNA and identifying the correlations between the DNA and the phenotype. Medicine is another, to assess the effectiveness of therapies. Climate change and weather forecasting are also a very active area of science in this respect. Then we have of course everything concerning finance: insurance, loans, derivatives, forex, futures, anywhere there is a contract or a risk. And of course the market and normal business: targeted advertising, lobbying, identifying new markets from the data that are widely collected.

This brings me to the conclusion, where I try to be positive. Everything you learn in hard science during traditional studies is absolutely still valid, which includes mathematics and statistics; the laws of physics still apply. But computers and networks can do much, much more today, and statistical solutions that were computationally unfeasible in the past become possible today. You cannot fight progress, and the positive statement is that many of these approaches can bring significant improvements to everyone's life; otherwise you would not keep a mobile phone in your pocket, knowing the problems that I have just mentioned. But what really makes the difference is the awareness of what computers can do, and this is why education is of the utmost importance: educated people will be able to profit from this technological evolution, while less educated people will profit less. Thank you very much.

Thank you very much for leading us into this new world, where we obviously have to re-examine what we believed were our certainties and try to see the world with different eyes: not with the eyes of being certain of what is right, but of what probabilistically may have a chance to be right. I hope no one belongs to the 0.01% that is left out by the probabilistic certainty. So the next speaker will be Claudia Buch. Please, Claudia.

Yes, thank you very much, Yves, and thank you very much to the organisers for having me here. I was tempted to say the Bundesbank is also a big machinery, though not nearly comparable to CERN, but I think the statistics, the information, the knowledge we produce is somewhat different. So I will try to walk you through different projects we have at the Bundesbank: how we approach the issue of new technology and innovation in the production of statistics, central bank statistics, and also the knowledge that we generate out of these new developments. That is briefly what I will talk about. We have just had a very good presentation, and most of you are actually more expert than I am on particular aspects of the new technology, so I will be relatively brief there. Then I will tell you what we do at the Bundesbank, and our starting point to a large degree is that, first of all, we want to make better use of the data we already have, and then we think about what we can do in terms of enriching these data sets, making use of big data and new private data sources.
We are using all of this, but I think the first lesson that we took from all these new developments is that we can get much better at using what we already have. I will talk very briefly about the production of knowledge out of these data, because this is not really a research conference, but I think there are also important aspects to be borne in mind here. And then I will talk about managing the challenges, because knowing that we have a lot of new information and a lot of new technological sources is one thing; managing the process is yet another story.

So let us start with what is new, and how the statistical value-added chain has actually changed, not only in central banks; we will hear more from the statistical offices as well. By and large, the traditional system is that we have been producing statistics for a specific purpose. If we needed balance of payments statistics, we collected all the data that we need for balance of payments statistics, then we aggregated them, cleaned them, did all that work, and at the end you have balance of payments statistics. Now we are moving to a more granular, more micro-data-based process, where we collect data that we can then use for different purposes. The data will be more granular, and you can use them for different types of statistics, which of course changes the entire process of collecting and processing the data. We also need more feedback loops going back into the production of statistics: once we have produced a certain aggregate statistic, we are learning from this process, and we have to go back and see how we can improve it. So it is an entirely different concept, being based on micro data, which means a lot of investment from our side to manage this process properly. But I think in the end we learn much more for monetary analysis and for financial stability analysis, and I will come back to this in a second.

So this is the new statistical value-added chain, and very simply speaking, technology changes all aspects of our value chain, or, as I would call it, the production of statistics and the production of knowledge in central banks, because it is both that we are doing in central banks: we are not only generating data and statistics that we can use, we also generate knowledge. I do not have to read out all these elements here, because you have just heard it in the previous presentation: we have new technologies, we have machine learning and artificial intelligence, and we are changing our IT infrastructure dramatically. Maybe an interesting aspect, and I will return to it later on, is how labour, how our statisticians, enter into the production function, and the same holds for the researchers and analysts that we have in a central bank; that is also changing. We are moving away a little bit from routine tasks to more non-routine tasks. We need people who to some extent have better technological skills. I do not know how many of you knew all the aspects that were in the previous presentation; I must admit
I did not know all of them, because I am not an IT expert. So we need people who are familiar with these new technologies, but we also need people who are good at communication, who are good at explaining what we have in our statistics, communicating with the users, so that there is an information flow going back and forth between the statisticians and the users. Communication is actually an important part. And then, of course, the data that we have are changing: we have our official statistics, we have big data sources, we have private data sources, and I think we have to develop protocols for how we want to combine these data sources and how we can actually use them for our analytical work. So that is, in a nutshell, what is new and what has changed.

What are the implications for producing statistics? Like I said earlier, we started by taking stock of what we already have. So my first implication would be: for the production of statistics, make better use of the existing data. Now, this looks pretty generic: we have data on banks, we have data on firms, on companies, on non-financial firms, we have some data on households, and we have statistics on securities; probably in your institutions it looks rather similar. But when we broke this down a bit more and looked at what it is that we actually have in our different business areas in the Bundesbank, this is the chart you get, and we are approaching the level of complexity that we saw for the CERN data: we have a lot of statistics that are hosted in different business areas, with very different user rights as to who can actually access the data and who can use it. Very often these statistics have different identifiers, so they cannot easily be merged even technically, let alone the legal restrictions on using those data. So our first approach was to ask: what can we do to actually make these data accessible to analysts in-house and also externally? But really the most important aspect was to provide better information for the analysts, the researchers, the economists and the financial stability people in-house.

What the Bundesbank did in 2013 was to set up a new integrated micro-data-based information and analysis system, called IMIDIAS; we have acronyms for everything. IMIDIAS really aims, first of all, to enhance the accessibility of the existing data for the analysts in-house. In that sense it really supports evidence-based policy making, which for us is very important, not only for monetary policy analysis but also for looking at the impact of regulations and financial sector reform.
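To illustrate the "collect once, use for many purposes" idea behind such a granular system, here is a minimal pandas sketch on made-up loan-level records; the column names and figures are invented for illustration and are not taken from any Bundesbank dataset:

```python
import pandas as pd

# Hypothetical loan-level micro data: one row per loan.
loans = pd.DataFrame({
    "bank":     ["Bank A", "Bank A", "Bank B", "Bank B", "Bank C"],
    "country":  ["DE", "FR", "DE", "IT", "DE"],
    "sector":   ["households", "firms", "firms", "firms", "households"],
    "amount_m": [120.0, 80.0, 200.0, 50.0, 90.0],
})

# The same granular records feed several different "statistics":
by_sector  = loans.groupby("sector")["amount_m"].sum()   # monetary analysis
by_country = loans.groupby("country")["amount_m"].sum()  # external statistics
by_bank    = loans.groupby("bank")["amount_m"].sum()     # supervisory view

print(by_sector, by_country, by_bank, sep="\n\n")
```

The point of the sketch is the feedback loop described above: one collection of micro data, many downstream aggregates, instead of one bespoke collection per statistic.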
So that is a very crucial element of our strategy. In addition, we also wanted to encourage cooperation with external researchers, which I would say has always been ongoing in the Bundesbank, but now we have put it into a more structured process.

How is IMIDIAS set up? There is an internal steering committee in which all the relevant departments participate, and then we basically have two arms. One is the so-called House of Microdata, which is really a warehouse of data for internal policy work, so that the internal analysts, who are typically working under a tighter time constraint than the external analysts or researchers, have ready access to data sets which have already been merged, where they do not have to go through all the procedures of bringing the data together. And then we have a Research Data and Service Centre, which is for the external researchers, and of course for the internal researchers as well, so that there is entirely equal treatment. It has 20 employees and 12 working places for guest researchers, because the data confidentiality issues are extremely important: we have to make sure that whatever data we give out, with much tighter restrictions obviously for the external researchers, data confidentiality is always assured. In this Research Data and Service Centre we have 300 active projects, with more than 160 institutions involved. This is just to say that it is not without cost, and I will come back to cost-benefit analysis later on. It is not without cost to improve access to the existing data, but I think it pays off tremendously in terms of the information and the knowledge we can actually generate from these data sources.

In other presentations here you have heard about the development of new data sources. Of course, we are also very active in AnaCredit, the loan-by-loan credit register, which right now we are running in parallel with our existing credit register, like many other countries; but I think at some point in time we also have to think about how we can streamline the existing databases that we have. Then we have the securities holdings statistics, the security-by-security database, and then RIAD, which links the two. But I think most of you are actually much closer to these specific projects in your day-to-day work than I am, so I do not want to go into much detail. Obviously, developing new data sources is a crucial element of the new data strategy.

Before I say one more word about why I think it is extremely important to have these detailed data, and why it is not just a project for statisticians to collect ever more data and build ever more sophisticated data sets, let me show you one more slide. That is my third implication for the generation of statistics: let me say one more word about how we use new analytical tools. This is just a snapshot of the different activities we have and how we use them in our day-to-day work. On these different projects I should say that, when I joined the Bundesbank about four years ago,
we started a big project trying to learn from experience, from how others have been using big data and how big data sources have been used in other sciences, and then we developed very concrete projects to implement this knowledge in the bank. So we did not want to overhaul the entire system, as I have said, but we wanted to make sure that if we use new technologies, if we use machine learning and big data sources, we do it in a very targeted way, integrated into the processes and the systems that we already have. These are some examples, also following up from that work.

With regard to the securities holdings statistics, we are using machine learning tools to improve the production of statistics, to make it more efficient, to improve the detection of errors, and basically to complement the statisticians that we have. That is one of the big lessons that we are drawing from these exercises: it is not that machines are replacing the statisticians; they make them more efficient, and they let them move on to less routine tasks, like communication, as I have just mentioned. And I think that is a very important part of managing the process, which we have to explain: it is not that the robot or the machine walks in and replaces people; quite the contrary, we can actually improve what we are already doing. RIAD also does this. We have protocols for record linkage, because obviously, and that is also something that we are seeing in the Research Data and Service Centre, if you have different micro data statistics, one of the crucial elements is how you can merge them, how you can link them, how you can make sure that when you are matching data sets you are matching the right reporting entities. Many of us who did micro data work in research years ago have sat in front of computers trying to link data sets manually; of course we can improve on that dramatically using machine learning techniques, and we are doing so in both of these projects. And we are also using these tools more in our traditional work with aggregate data, when it comes to seasonal adjustment, where we are using random forests of conditional inference trees; I actually do not know exactly what those are, maybe we have to talk about it in the coffee break, but it looks like a new technology that we are using here.

Okay, but with all this, I think it is extremely important, and that is what we are learning, that we need cooperation between the different business areas. We also need cooperation and learning from each other across central banks, because we are not the only ones doing this; we are engaged in many European projects, and we have to make sure that we actually pool our knowledge and improve all together.

Implications for the production of knowledge: I can be relatively brief here, because this is really about research and how we can use micro data. This is basically what, in my view, the world looks like if we just look at aggregate data: we look at a solid block, we see a fish, but we do not really know what is behind it. What is driving the response of the economy to monetary policy? What is driving financial stability risks? What are the underlying risks? Moving to micro data, it looks like this: we see much more, and we can get much better in terms of identification.
We can distinguish demand and supply, which is crucially important when thinking about developments in credit markets. This is why we really need AnaCredit: in order to see the variety, the heterogeneous response of the financial system to monetary policy shocks and to changes in interest rate risk. Without looking at the heterogeneity, we have basically no good understanding of what is happening in an economy. This is why we really need these types of statistics, and I guess I could go on telling you about many projects where we are learning exactly that, how we can use heterogeneity for identification, but I will stop here and talk a bit more about the challenges.

One of the challenges is that if you tell people about the new technologies and the new possibilities, everybody says: well, that is great, we have to do it, we have to implement it. Then when you say: well, but maybe that also means that you have to change a little bit what you have been doing so far, people say: oh, we are not so sure that we actually want to change. And if you then ask people whether they are willing to manage the change, it sometimes gets even more complicated. But we have a lot of good projects, and we are trying to communicate back and forth between the managers and the people who actually have to implement it. So leadership, commitment of leadership and good communication are crucial.

The second challenge I want to mention is cost-benefit analysis. What we have learned, and here I come back to my point about communication, is that yes, we can make big progress in terms of understanding the economy, understanding monetary transmission and financial stability analysis, if we have micro data; but we also have to make sure that we cooperate closely with those who actually have to report. If we do not explain well what we are doing and why we are doing it, and how in the end the improved statistics can also be used to reduce the reporting burden, then we are not leveraging the advances that we can make. That is why I think the merits-and-costs procedure, the cost-benefit analysis, is extremely important, and public consultations are important, because we have to understand what the internal IT systems of banks, for example, look like, to be able to make the best use of the information that is available there and, for all of us, to improve the data that we can get. So one of the strategic objectives, and again I am not telling you anything new, is to collect data in a standardised way and to integrate it across the existing statistics. Many of the statistics that we have were initiated in the 1970s, the 1980s and the 1990s, so now I think it is the point in time where we have to do a stock-take and see where we can actually consolidate what we have. This is the purpose of IReF, which, if I have learned correctly, has changed its acronym a little bit, and of BIRD; BIRD is much easier to pronounce. And that again requires communication.

One more slide, and I basically mentioned this before: we really have to work on knowledge sharing. One of the initiatives that we have started is INEXDA, which is a network of central banks working on knowledge sharing in the use of micro data and the preparation of micro data for analytical work.
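As a flavour of the record-linkage problem mentioned a moment ago, here is a minimal sketch of fuzzy name matching using only Python's standard library; real entity resolution, as in RIAD or research data work, uses richer attributes and trained models, and the register names here are invented:

```python
from difflib import SequenceMatcher

# Two hypothetical registers naming the same entities slightly differently.
register_a = ["Musterbank AG", "Beispiel Sparkasse", "Alpha Credit GmbH"]
register_b = ["MUSTERBANK A.G.", "Sparkasse Beispiel", "Alpha Kredit GmbH"]

def similarity(a: str, b: str) -> float:
    """Character-based similarity in [0, 1], ignoring case and dots."""
    norm = lambda s: s.lower().replace(".", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# For each name in A, propose the best candidate in B with a score;
# low scores are flagged for the kind of manual review machine learning
# is meant to reduce.
for name in register_a:
    best = max(register_b, key=lambda cand: similarity(name, cand))
    score = similarity(name, best)
    flag = "match" if score >= 0.7 else "needs manual review"
    print(f"{name:20s} -> {best:20s} ({score:.2f}, {flag})")
```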
We have many other projects where we do this type of knowledge sharing; this is just one example. Some of you may know the International Banking Research Network, where we also try to make better use of our underlying micro data.

With that I close. I think the new technologies and the new data sources have great potential for enhancing the production of statistics and knowledge in central banks, and it starts with making better use of the existing data. We have to develop new sources of data and new analytical tools; we also, to some extent, have to retrain our analysts, who are used to working with aggregate data and who now have to work with micro data and do good identification in micro data. All of these are challenges that need to be managed. I think we are on a very good track, and for us it has been really useful to start small, learn from experience, and make sure that we have everybody on board, both internally and externally. With that I close, and thank you for your attention.

We have now heard two speakers who, although being in charge of statistics in their present functions, have nevertheless probably seen statistics more from the side of those who consume statistics, given the academic areas where they have been active: in Lausanne for Alberto, while Claudia has held chairs at German universities for macroeconomics and finance. Now we move to people who have been more on the production side of statistics inside institutions. First will be Mariana. Mariana Kotzeva has, among other things, also been the head of the Bulgarian statistical office before she moved to Luxembourg in 2012, and a few months ago she was appointed Director General of Eurostat. She is the first female Director General of Eurostat, and she will now introduce us to the Eurostat approach to the new changes in statistics.

Hello, and thank you very much, Mr Mersch, for the nice introduction. Let me start with something simple: happy birthday, dear colleagues from the European Central Bank and the national central banks. I have to admit, as the first female Director General of Eurostat, I feel quite comfortable coming from a 65-year-old Eurostat to celebrate with a young partner of 20 years, a partner which has always been very energetic, very demanding of Eurostat, very innovative. So my sincere congratulations, and thank you very much, Aurel, Caroline and Werner, for inviting me to speak about the future, because the best way to predict the future is to shape it, to make it. With that, I would like, in maybe ten minutes, to summarise some reflections, reflections which we are currently discussing in Eurostat, together with the national statistical institutes, and also in our nice conversations with the colleagues from the ECB. Some of the things you have heard today, but maybe this is a good pedagogical way, because I am also coming from academic work, to summarise them from different angles, so that everybody can leave with some thoughts and some messages for the future.

So, first point, let us see. Yes, it works like that. What is the impact of technological innovation on the producers of statistics? We could summarise it in three simple messages, but we have to reflect on how to deal with them afterwards. The first one is that technological innovation creates new phenomena to be measured, which require more granular and timelier data.
We had two fantastic sessions this morning where different speakers presented such examples, which you can also see on my slides, like the shared economy, where the difference between producers and users is becoming blurred; this was mentioned by Mr Garcia this morning. Then there were examples of smart contracts and other new business models based on blockchain technologies, and one of the sectors employing these models more and more is the financial sector. And again, it was mentioned this morning by Mr Smets in the first presentation that the technologies have an impact on labour markets, and that they actually create a lot of challenges for statisticians in measuring the current situation and the trends in the labour market.

The second implication is that technological innovation creates new data sources. There were a lot of discussions here; there are nice pictures prepared by young Eurostat people on the different sources. But what I wanted to emphasise here, if Mr Nowotny, who gave a presentation today, allows me to copy something from his slides, and I just took a note by hand, is that the most important challenge for us is to reinforce the multiple use of data: we have to integrate the data and find the best way to combine them, to produce, from usually unstructured sources combined with traditional sources, information which can be translated into policy actions. And also, to the extent possible, and this is also a message from Claudia just a few minutes ago, we all have to make use, as much as possible, of already existing data. For that we need our partnerships, including with a young partner like the ECB, because we have to coordinate not only nationally among the institutions, but also internationally among the major institutions involved in the production of European statistics.

The next message is that technological innovation creates new processing opportunities, and there were a lot of examples; there was a very nice summary by Alberto in the first presentation of this panel, and here are just examples which were mentioned by my colleagues. But, and I also took a note today from the presentation of Mr Berner, technology could help to create a new approach to regulation, reporting and compliance, in your words. It could also create a lot of challenges in ensuring standardisation and interoperability: if we come back to my previous message that we have to combine the data sources, we have pure data-processing challenges in building networks and making the standards applicable, so that we can exchange information. Of course there is artificial intelligence; somewhere I have read that 2018 will be the year of artificial intelligence. It is a good message, if we remember that 2016 was the year of post-truth, so at least we are moving in a positive direction. But there are a lot of questions as to what, practically, the implications of artificial intelligence are for everyday statistical production.
Now, what has been Eurostat's response so far, together with the Member States? And thank you very much for inviting me here, because it was a very inspiring morning. I was thinking this morning that it was five years ago, in 2013, when we had a similar conference with the national statistical offices, and we were very proud five years ago that we were quite on track and would start the big data action plan. For five years now we have been doing different pilots in different areas, exploring different big data sources, like mobile phone data, traffic loops, smart meters, social networks, web scraping techniques, et cetera. And then we also started working on horizontal topics; you see how many boxes we have there: ethics questions, IT questions, skills, how to ensure quality when you work with big data, et cetera. And listening today, it sounds like something that is already lagging behind, because as you see from the presentations today, and as you will see in my next slides, we already have to think about different questions; it is just not enough to think about big data as a potential source. In a few months, two months actually, in September, we have a dedicated conference with the directors general of the national statistical institutes and Eurostat to discuss: okay, five years after the experiments, what are we going to do in real production? Because one thing is to do the experiment; another thing is really to use big data in your daily business.

The next slide illustrates this, because many of you mentioned today that one of the challenges is to produce timelier statistics. We had a lot of pilot projects using big data and traditional sources to provide early estimates of key economic indicators, including GDP, unemployment and job vacancies, and this is just one example of combining traditional business statistics indices with data from traffic loops. This is a graph from Slovenia, where the results are quite promising. But it was curious that the challenge was not with the technologies. The biggest challenge was at the beginning, when you have to make the data from big data sources suitable, before you can start to build econometric models and then to produce statistics. So the challenge is not in the traditional modelling; the challenge is to reach that stage. The challenge is at stage zero.

Then, what does this mean for us? Now starts the provoking part, because I promised that I would be provoking, so that we start discussing together how to shape the future. Technological innovation creates new business and governance models, and these are something like conclusions from reality, from which we can derive implications for what should come next. The first point is that the pooling data model is no longer fully possible. What does this mean? For thousands of years, whether we used surveys or administrative data in statistics, we always brought the data from the outside world into the statistical organisations, and we processed the data there. Now we have plenty of data sources, and it is no longer necessary to bring these data sources inside the statistical office. This is a big change, because it is a big change for everybody in a statistical office to accept that we can work differently.
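On the early-estimates point above, here is a minimal sketch of the modelling stage of such a nowcast, with a purely synthetic "traffic index" standing in for the Slovenian traffic-loop data; the numbers and variable names are invented, and, as noted, the hard part in practice is preparing the source data at "stage zero", not this regression:

```python
import numpy as np

# Synthetic quarterly data: official GDP growth (published with a lag)
# and a traffic-loop index that is available almost in real time.
rng = np.random.default_rng(0)
traffic_index = rng.normal(100, 5, size=20)             # observed promptly
gdp_growth = 0.04 * (traffic_index - 100) + rng.normal(0, 0.1, size=20)

# Fit a simple bridge equation: gdp_growth ~ a + b * traffic_index.
b, a = np.polyfit(traffic_index, gdp_growth, deg=1)

# Nowcast the current quarter, for which only the traffic index exists yet.
current_traffic = 104.0
print(f"early GDP growth estimate: {a + b * current_traffic:.2f}%")
```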
We can do data processing outside our own premises; I will explain in a minute what the solution for that could be. Then there is artificial intelligence, as we discussed, with many algorithms: of course this is a power, but it requires new skills, and there are a lot of discussions about the extent to which artificial intelligence could replace, or could contribute. I do not believe that it will replace, but let me use this word: to what extent, in the data chain, could the human factor be replaced? Then there is a very key concept, which also appeared in the presentations, but which is becoming essential for statistical production: trust. The whole of statistical production in the modern world should be based on trust, and I will elaborate on this concept in a second. And the final point is that, unlike in the thousands of years when we were absolutely self-sufficient as a statistical world, when you could lock statistical experts in a room like this one, allow them three days to discuss how to make a questionnaire comparable across countries, and then, when the white smoke came out of the room, they went home happy and did the surveys, this is no longer possible, because the data are outside and the technologies are outside. So we need to develop partnerships if we would like to remain relevant.

What is the new model? We chose this title, and this is, in a nutshell, my presentation: we have to move to trusted smart statistics. I asked my people to be careful: I am going to the ECB, the colleagues are very skilful, and they will ask me for a definition of trusted smart statistics. And they said: no, they will understand. That is why we use this term: everybody has a gut feeling for what is trusted, everybody has a gut feeling for what is smart, and everybody knows, or at least pretends to know, what statistics is, so everybody can intuitively assume what trusted smart statistics is. But speaking seriously, in response to my previous slide of challenges, what could the elements of trusted smart statistics be? They are like puzzle components, not yet nicely and beautifully arranged, but that is one of the challenges for the statistical community.

The first element is to move from a pull-in to a push-out computation mode. This means what I just discussed: it is not necessary, and to some extent no longer possible, to bring all the micro, granular, individual records and data to the statistical offices or to the central banks. Then the question is: if we go out, how will we ensure trust? And here is the good news from technology developments: it is possible to use data without sharing it. I will come to a picture in a second. It is also important that we ensure, from the beginning,
This is a term which was introduced some years ago in Canada But now it's becoming to be more and more debatable especially in the context of GDPR General data protection regulation Because it's important that we show in the statistical process how the privacy goes through all the stages The next element is since we are surrounded by smart systems Instead of bringing data to us and produce statistics We have to go in the opposite direction to embed statistics in smart systems and this is quite a big change And then the last point is about the trust the next few slides are just illustration data use without sharing sorry that on this picture I have noticed that there is no Element for central banks, but you are very much welcome This is the general idea that we have to work for secure multi-party computation Which techno technologically this is possible that different data holders bring Data to a place, but they don't see Identifiable data and this is so big advantage. The question is how we could transform this advantage Which technology technology offers? Like blockchain for example related to this to real production of statistics So instead of discussing for 20 years the legal framework how to exchange exchange micro data Maybe we could completely forget but about this discussion and then to use technologies to find new ways to combine the data Without sharing them and then I just wanted to highlight that National banks are not there so we will change the picture But what is important is that data protection authorities there because they are becoming Really important member of the partnership of producing statistics in the future trust is becoming one of the elements and Trust should go through the whole process trust in the input data many of you discussed this Trust in data processing because it is not enough to use modern algorithms. It's also Important to show to the outside world that statistics which are produced on the basis of these algorithms could be trusted and This is one of the of the of the key questions for us in the future And then of course trust in the output in the modern world We have to build the trust every day. This was emphasized by previous speakers So this is a very nice picture which my young statisticians in euros that said that this is self-speaking So maybe yes But this is our environment what is important from for me as a manager of Statistical production system is that we in the smart systems like smart cities or Somebody of you spoke today about Interconnectedness of financial institutions. So different devices and people they are digitally connected all the time They exchange information and most important important for us as statisticians. They are not any more passive objects They learn they make decisions. 
So we have to go to them in order to derive statistics; we cannot just collect the data as we did in the past. This is a nice picture which, for young people, seems very self-explanatory, and I hope that we will find, in this picture, where to put the central banks and the statistical offices.

The last slide: conclusions. Digitalisation changes society and the economy; it has disruptive effects in many sectors, as was shown in this conference, but it also has serious implications for the way we produce statistics, and there are a number of such implications to summarise. More and more data are produced by the private sector, so we have to decide how we are going to partner with the private sector if we would like to make these data a sustainable source of information, a sustainable input. Everybody is very enthusiastic about partnership, but once we say "nice, we did the experiment, now we have to do it every day", a lot of questions arise about the legal and institutional frameworks, about how we should do that. The next point is about the smart systems which are in development, and we have to see how to embed statistics there. Many people think that statistics is just processing and calculations, but the biggest challenge is not there, and this is especially relevant in the European Union: we produce statistics on the basis of certain institutional and legal frameworks, and if we would like to make these new technologies and data part of the daily business, we have to adjust the frameworks, and this is an equally challenging task, equally challenging as the data processing. Then trust, on which we have to work every day and communicate, and the partnerships. I had a discussion with my team, because I asked why this says only "in and outside the ESS", and they said to me: because we do not have space, and you have talked so much. So this is the end of the presentation. Thank you very much, and again all the best wishes for the next 20 years from the whole Eurostat team.

Thank you, Mariana, indeed. Inside central banks we have some knowledge of trust, because we always pretend that money is trust, printed trust. And so we now turn to Werner Bier, to tell us how we in this institution are going about it.

So, a few years ago I would have started with the remark that there are two European statistical systems: one being Eurostat and the ESS, and one the ECB and the European System of Central Banks. Today we are much more numerous as institutions generating data at the European level. Think in particular, starting from roughly 2010-2011, of the European System of Financial Supervision, which was coming into existence, strongly supported by another directorate-general in the Commission, DG FISMA. In the meantime we have the Single Supervisory Mechanism.
We have the Single Resolution Mechanism, and all of these are generating data. In the lower part of the slide you see some of the data sets which have been put on their way; some of them already exist, some others are coming into being, but even those which exist are not yet in full use today.

So I was asking myself how to interpret the theme which Aurel gave to us: can technological innovation help? And it is not far-fetched to look to an Austrian interpretation, so I was looking at the Schumpeterian theory of innovation applied to data output. In this context, innovation comes through changes in the composition of factors, reflected in an endogenous production function. I have a very similar production function to the one that Claudia showed us, and we were not in cooperation beforehand. One thing is indeed to interpret the innovative parts in the production factors: labour, in particular the higher skill sets, maybe related to data science; but also capital, the new technical products like AI, which was already mentioned this afternoon; and of course the exponentially increasing availability of data. But what is the specific aspect of the Schumpeterian approach? The F itself, the production function, is not stable; the production function is subject to innovation. It is the combination of the production factors; it is a managerial or entrepreneurial activity into which one has to look.

Now, what does this mean if I translate it into the ordinary picture of the collection, production and dissemination of statistics and data? And I always use the two together, because on the side of the European System of Central Banks and the SSM it is not only statistics anymore. The first point, related to collection, is certainly the reduction of the reporting burden, which was mentioned several times, but also getting new data sources. The second, related to production, is the production of more data and statistics which would be available faster, more fit for purpose and of higher quality. And if you look at dissemination, it is indeed to give stronger support to the users, because the data sets are big; maybe not really big data in the sense of the internet, but still rather big, bigger than what we can digest.

So let me now go through different aspects of collection, production and dissemination, and let me take not a very futuristic approach: I look ahead only to the next two or three years, in order to say what the main aspects are which one may look into with a little more care. Remarkably, it starts with collection. I think in the past I would not have easily started with collection, but it became much more important as a follow-up from the crisis, because during the crisis not only the ECB, not only the European System of Central Banks, but all these new European institutions and bodies were of course collecting additional information and issuing regulations. Now, I stay at the ECB level for the first slide. What you see at the very right are the data sets which we are collecting: we issue ECB regulations, and the specific thing about ECB regulations is that they are directly applicable to the reporting agents. And what do the reporting agents perceive? Do the reporting agents perceive that the data collection is
Now, what does this mean if I translate it into the ordinary picture of the collection, production and dissemination of statistics and data? And I always use the two together, because on the side of the European System of Central Banks and the SSM it is no longer only statistics. The first point, related to collection, is certainly the reduction of the reporting burden, which was mentioned several times, but also gaining access to new data sources. The second, related to production, is the production of more data and statistics that are available faster, more fit for purpose and of higher quality. And if you look at dissemination, it is indeed about giving stronger support to the users, because these big data sets, perhaps not big data in the sense of the internet, but still rather big, are bigger than what we can digest.

So let me now go through different aspects of collection, production and dissemination, and let me not take a very futuristic approach: I will look at the next two or three years and point out the main aspects one may examine with a little more care. Remarkably, it starts with collection. In the past I would not have easily started with collection, but it has become much more important as a follow-up to the crisis, because during the crisis not only the ECB, not only the European System of Central Banks, but all these new European institutions and bodies were of course collecting additional information and issuing regulations. I will stay at the ECB level for the first slide. What you see at the very right are data sets which we are collecting: we issue ECB regulations, and the specific thing about ECB regulations is that they are directly applicable to reporting agents. And what do the reporting agents perceive? Do they perceive the data collection as unified and harmonised, even though the original source is one legal act addressed directly to them? The answer is no. What happens is that banks which are active in more than one euro area country report to the respective national central banks, not to one but to all those in which they are established, and, in line with the principle of subsidiarity, they are confronted with very different data collection procedures. That is not a criticism: it was very sensible, when we started monetary union, that the data collection systems already in place were used by the national central banks to add the European requirements step by step, on top of what were at that time very substantial national requirements; and not every new legal act really required the data collection to be changed. But is this the model for the future? That is one of the questions, and I still stay in the context of the ECB.

Staying for the moment at the level of statistics on this slide: the objective is certainly to streamline the heterogeneous reporting processes over time, and we are doing this with a somewhat different approach compared with the past. We are not hurrying towards a new legal act, harmonising and discussing it only among ourselves; we go out and ask, and have asked, the reporting agents: how do you want it? One approach which we are suggesting, but not imposing, is the Integrated Reporting Framework, and indeed we have changed the term a little. When it was still called the European Reporting Framework, some of us felt that the ECB would then appear to speak for everybody, and maybe that is not exactly what we have in mind; the term "European" originally arose by comparison with "national", but in order not to claim, from a communication point of view, that we are very far-reaching here, we felt we should go one step back. And indeed, Claudia, you are right: the term is not as nice as it was before, but that reasoning is a little bit in the back of our minds. So one idea we are suggesting is that all the different legal acts, starting of course with some priority items, would be translated into a so-called IReF collection layer, and the important feature of the collection layer is that it would contain rather granular data. What we have learned from the banking industry, and here I am arguing a little bit in black and white, is that the banking industry is by and large not so much concerned about the mass of data it reports to us; it is very much concerned that it has to do the transformations for us. If we give the banks a specific template, very close to economic theory, and ask them to do the transformations out of their accounting and administrative data for us, that produces a rather big burden. By contrast, if we ask them to give us data which they do not have to transform much themselves, then even if the data set is much wider, the costs are typically significantly reduced; a stylised illustration of this cost argument follows at the end of this passage. So one aspect is indeed to translate our existing legal acts into something which is much easier to handle in a more digitalised economy. But again, we are going out: we have asked the sector, we have asked the reporting agents, whether this is a model they could basically support. We will issue a questionnaire, we will receive the responses this autumn, and we will assess them afterwards.
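To illustrate the cost argument above in the simplest possible terms, here is a minimal sketch; all field names and figures are invented for the illustration and are not the IReF specification. In the template-driven world the bank itself implements our definition and reports one number; in the granular world it ships records close to its own systems, and any such aggregate, including ones defined later, can be derived on the authority's side.

```python
# Hypothetical loan records, close to a bank's own operational systems.
loans = [
    {"counterparty_sector": "NFC", "maturity_years": 7,  "outstanding_eur": 1_500_000},
    {"counterparty_sector": "HH",  "maturity_years": 25, "outstanding_eur": 220_000},
    {"counterparty_sector": "NFC", "maturity_years": 2,  "outstanding_eur": 480_000},
]

# Template-driven reporting: the BANK implements our definition, e.g.
# "long-term loans to non-financial corporations", and reports one number.
def bank_side_template(records):
    return sum(r["outstanding_eur"] for r in records
               if r["counterparty_sector"] == "NFC" and r["maturity_years"] > 5)

# Data-driven reporting: the bank ships the granular records; any such
# aggregate can then be derived (and redefined later) on the authority's
# side without going back to the reporting agents.
def authority_side_aggregate(records, sector, min_maturity):
    return sum(r["outstanding_eur"] for r in records
               if r["counterparty_sector"] == sector
               and r["maturity_years"] > min_maturity)

print(bank_side_template(loans))                  # 1500000, computed by the bank
print(authority_side_aggregate(loans, "NFC", 5))  # same figure, derived centrally
```

The burden of the transformation thus moves from every single reporting agent to one centrally maintained derivation.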
The second step is BIRD. The banking industry is confronted with all of our legal acts, which are very nicely enshrined in the System of National Accounts, something nobody really knows outside the community of economists and statisticians. So we help them to translate the so-called IReF collection layer into a BIRD input layer, which is much closer to their operational systems; a minimal sketch of what such an input-layer mapping might look like appears at the end of this passage. The first reactions we received, also in our dialogue with the banking industry this March, were that, generally speaking, the banking industry would be happy to explore with us how to do this in the most efficient and cost-effective manner. But it is really a dialogue: in this dialogue we are of course not talking about changing the requirements we have for the economists; we discuss with the banking industry how to implement these requirements in the most cost-efficient manner.

Now, the perspective I am presenting here is the perspective of the statistics side of the ECB, but that is certainly not the perspective of the banking industry; their perspective looks different. It is not only the requirements coming from our side, shown in the upper right part of the slide, which is exactly what I have just tried to explain. There are many more who are collecting data at the European level, staying only at the European level: we would, for example, also try to absorb the data collections for international organisations such as the IMF and the BIS, both on the statistical side and on the data collection side, either via the ESFS or via the ESCB. So what do we see on the right side? The picture is much more dispersed. There are requirements coming from the ESCB; there are requirements coming from supervisory and resolution reporting; and there are of course requirements typically implemented by DG FISMA and ESMA, such as, just by way of example, MiFID and EMIR. In total, when you look at the number of pages in the Official Journal which our colleagues in the banking industry have to read, understand and know, it comes to roughly 10,000 pages of all types of specialist legislation, and the question is: do they always interpret all this completely in line with what we want, or do a number of mistakes creep in? I think it is only natural that mistakes come up, and we have to economise here.

I think there are two aspects to how to do that, and this is of course a matter for discussion; it is not for me to say how it should be. First, there must certainly be cooperation among the European institutions and authorities, and this lesson, and here I am looking at you, Mariana, we learned between Eurostat and the ECB some 25 years ago, when we collectively, and in particular with the then Director-General of Eurostat, one of your predecessors, established the CMFB; in the meantime, in 2013, we added the European Statistical Forum to the CMFB.
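As a flavour of what an input layer means in practice, here is a deliberately tiny sketch. The field names are invented and the real BIRD dictionary is far richer, covering definitions and transformation rules, but the core idea is a documented, centrally maintained mapping from banks' operational fields to harmonised concepts.

```python
# Invented example of an input-layer mapping:
# bank-internal field name -> harmonised dictionary concept.
INPUT_LAYER_MAP = {
    "cust_segment": "counterparty_sector",
    "bal_eur":      "outstanding_eur",
    "mat_date":     "maturity_date",
}

def to_input_layer(operational_record):
    """Rename a bank-internal record into the harmonised input layer."""
    return {harmonised: operational_record[internal]
            for internal, harmonised in INPUT_LAYER_MAP.items()}

record = {"cust_segment": "NFC", "bal_eur": 480_000, "mat_date": "2027-06-30"}
print(to_input_layer(record))
# {'counterparty_sector': 'NFC', 'outstanding_eur': 480000,
#  'maturity_date': '2027-06-30'}
```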
So we have real coordination, and this coordination is not confined to conceptual talks; it is also something genuinely operational, and that is what really matters. What is very important from our point of view is to make the following distinction. One thing is to cooperate in setting up the regulatory requirements, the legal and fundamental aspects; this is certainly, and without any doubt, a task for the European authorities. But there is a second, much more cost-driving aspect, which I would translate as data management. For example, Eurostat and we have been engaged for a number of years, together with international partners, in SDMX, to mention just one data model; the technical details do not matter now. As we all had to hurry to fight the crisis, a number of European institutions and bodies naturally came to different conclusions about which data model to use, and also about which definitions to use, and it is not always clear whether the differences in the definitions are completely deliberate or just happened in the legislative process, which is very heterogeneous. So when we look at the data management part, as distinct from the regulatory part, I think we should carefully reflect on whether we could not involve the banking industry much more directly in this work. There are also examples on the side of Eurostat: I recall ESAC, an advisory body, of course not with banks but with non-financial corporations, universities and so on. It would be worthwhile, when we want to consolidate, to look into this, and it does seem to reduce the costs to the degree possible, while keeping the responsibilities as they exist at the European level and between the authorities and the banking industry.

For the next part I have only one slide today, although in public I sometimes speak about it much more extensively. What I have said so far about the Integrated Reporting Framework was a somewhat static approach: I was looking at existing regulations and how to implement existing regulations much more cost-efficiently. But it is very obvious that over the next five years, even if we do not know exactly how and in what detail, digitalisation matters, and it matters in the first place in the economy as a whole, in the operations themselves. And when the operations are digitalised, it is obvious that the reporting will need to follow. Mariana mentioned smart contracts, but of course we should also mention smart regulation. Today we still regulate by writing legal texts in words, which then need to be translated into data models, and in the end these translations are rather heterogeneous. Smart regulation would go much more in the direction of regulating directly in terms of data models and other aspects that sit in the operations themselves, and that makes data standardisation necessary. Data standardisation will be one of the big themes in the years to come. It is very similar to what we did over the last 20 years on the macroeconomic side, when we cooperated on international statistical standards; data standardisation now runs via ISO standards, and we know some important ones, without going through them here. It is one of the subjects for the future, and I think we in DG Statistics will also invest much more in that theme. This is already the first point between collection and production; a small sketch of what working with one standardised data model such as SDMX looks like in practice follows below.
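As a taste of what one shared data model buys in practice, here is a minimal sketch of a query against the ECB's public SDMX web service. The EXR exchange-rate dataflow and its dimension order are as published by the ECB, though the endpoint details may change over time.

```python
import requests

# An SDMX series key is the ordered concatenation of dimension values,
# here for the ECB's EXR dataflow: frequency, currency, currency
# denominator, exchange-rate type, series variant.
dimensions = ["D", "USD", "EUR", "SP00", "A"]   # daily USD per EUR reference rate
key = ".".join(dimensions)

url = f"https://sdw-wsrest.ecb.europa.eu/service/data/EXR/{key}"
resp = requests.get(url, params={"format": "csvdata", "lastNObservations": 5})
resp.raise_for_status()
print(resp.text)   # the last five observations as CSV
```

Because every dataflow follows the same key-and-dimension logic, the identical few lines of client code work for exchange-rate, monetary or balance-of-payments data alike; that is the point of standardisation.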
Let me come to the next items, on production. We were just discussing that new legal acts are coming up. Maybe the legal acts are a certain burden, but then the data are in the ESCB, in the Eurosystem, and the question is what to do with these data. Here the new world is moving from template-driven to data-driven processes. Template-driven means that we carefully defined the output in long discussions, for example how M3 is defined, which took us about two years, and then of course we collected it. In the new world we do not get the final indicators; precisely because we would like to be very flexible, we get millions of individual data points, sometimes billions: we have data sets with two billion data points per day, and we have data sets such as the money market data. On the one hand, we make these data sets available for research purposes; after defining the access rights very carefully, some researchers can indeed look at the details and do research with them. But this alone does not give us a real process for policy-making, because policy-making also has something to do with timeliness. These huge data sets, in addition to being available for research, need to be translated into Eurosystem standard micro-data business processes, and there is the question whether the 20 central banks within the Eurosystem should each do this individually, everyone coming up with a different solution, or whether we should collectively look into this development. The data are partly here already: the money market reporting, for example, is a rather small data set which we receive daily, with about 45,000 transactions per day, and the question is which policy indicator should be provided to whom at which time. By "which time" I do not mean quarterly or monthly; I mean at 9:20 or 9:15, the question of when it should be available in the morning, compared with the previous day. This really has to be done, and it means a data flow within minutes that is completely automated, and for that reason an IT investment; a stylised sketch of such an automated daily calculation follows at the end of this passage. It does not seem enormously sensible to have 20 different IT investments; here too we should economise on costs, because otherwise it is not the collection costs but the internal production costs that rise significantly. Or think of AnaCredit, where we receive data monthly: 50 million individual loans with 88 attributes. You can calculate how many combinations are thinkable; even just counting which of the 88 attributes to combine in a breakdown gives 2^88, roughly 3 x 10^26, possibilities, so we are not talking about thousands, tens of thousands or hundreds of thousands of thinkable combinations, but much more. So the question is what is most important for the policy-makers, for the ECB, for the Eurosystem, on which the Eurosystem then collectively designs its investment processes; and of course it is still for each NCB to decide whether it would additionally like to go its own way, but there must be a big part where we go together, otherwise we will not have a coordinated process in the system.
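To make the timeliness point tangible, here is a minimal sketch of the kind of fully automated morning calculation meant here: a volume-weighted average rate computed from one day's transactions after trimming the extreme tails. The trimming share and the toy data are illustrative, not the official methodology of any particular reference rate.

```python
def trimmed_volume_weighted_rate(transactions, trim_share=0.25):
    """Volume-weighted mean rate after removing trim_share of total
    volume at each end of the rate distribution (pro-rata at the cut)."""
    txns = sorted(transactions)                  # (rate, volume), ordered by rate
    total_volume = sum(vol for _, vol in txns)
    lo, hi = trim_share * total_volume, (1 - trim_share) * total_volume
    cum = kept_vol = weighted_sum = 0.0
    for rate, vol in txns:
        start, end = cum, cum + vol              # volume interval of this trade
        cum = end
        overlap = max(0.0, min(end, hi) - max(start, lo))   # volume kept
        weighted_sum += rate * overlap
        kept_vol += overlap
    return weighted_sum / kept_vol

# A toy "day" of transactions: (rate in percent, volume in EUR millions).
day = [(-0.45, 300.0), (-0.44, 250.0), (-0.43, 500.0),
       (-0.40, 120.0), (-0.55, 80.0)]
print(round(trimmed_volume_weighted_rate(day), 4))   # ready well before 9:15
```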
Macroeconomic statistics is what we nowadays produce on a regular basis, and macroeconomic statistics will also continue to develop. What we do today is reflected in the blue parts: data are collected predominantly at the national level; they are put together into an intermediate aggregate, which is here called "granular" (this terminology is not completely standardised, but it is the one used here); and next to the national contributions, euro area aggregates are produced for the Eurosystem. In the context of more and more microdata we move to the green part, where euro area microdata for all 19 euro area countries are collected and intermediate aggregates are produced, and this feeds back to the national level, also in terms of data quality, because the holdings of securities issued by national non-financial corporations can only be captured by those who have the holdings statistics. So there is an increase in quality, and the more granular data also contribute further breakdowns to the macroeconomic statistics. Even more so for the parts marked in orange: there are possibilities such as web scraping, which give the opportunity to collect text in addition to numerical data, so what is in the news need not only be read by a person but can also be processed with the new technologies.

Then supervisory data: in the meantime, in the field of statistics we are no longer doing only statistics in the ordinary sense, so we cooperate very closely with our partners in the EBA, and data sharing is one of the big subjects. We are doing data sharing already: right from the beginning the so-called sequential approach was established, whereby data are transmitted from significant institutions, and in the future also from less significant institutions, via the NCBs or the euro area NCAs to the ECB and the EBA. This model makes it very obvious to everybody that we need very close cooperation between the European institutions and bodies, not only Eurostat and the ECB but also the other institutions with whom we cooperate, in particular on the supervisory side. I have singled out the EBA here, but the very same applies to the SRB, and the very same applies to other institutions. Sharing this information while observing the confidentiality rules means that we also have to look into common data models and common business processes, because these are huge data sets. If we do not converge over time towards very close cooperation, not only consulting each other but discussing the matters among ourselves in the run-up, and in the end agreeing between the institutions on the way we handle this, we will not master a cost-efficient and quality-effective system. A very good example is the recent development whereby the so-called task force on EUCLID implementation was set up between the EBA, the European Banking Authority, and the ECB, and here we work in very close partnership.

Let me come to dissemination. I will keep this rather short, but it is not unimportant. We have of course the very ordinary instruments currently in place, but one instrument which has gained importance in recent times is the third one presented here: embedding in online newspapers. What does this mean? Just by way of example, to illustrate a little the new opportunities:
we in the ECB have certain charts on our own website, which we of course maintain ourselves, and these charts are taken on board by online newspapers, so that the online newspapers show them on their own websites. But the important part is the updating: the updating remains with the ECB. So even though the immediate source is not the ECB, or in other cases Eurostat or whoever the source may be, the maintenance, meaning also the updating, is done directly by the ECB, and that of course makes the data more reliable and more trustworthy. As for plans for the future: ESCB cooperation is absolutely important in the field of dissemination, and in particular elaborate visualisation and engagement tools. These very big data sets cannot be assessed by ordinary inspection; we need very special visualisation tools, and we are looking into this, for example, in the context of AnaCredit, to become familiar with how to visualise it. It is also important to cooperate closely here with the research community and the research data centres, and to work with researchers. And Aurel, you should look at the very last sentence, "also including professors of economics of the Vienna University of Economics and Business", and may I ask you to join me for a second here? Aurel has established a very nice habit at the ECB: whenever a colleague is leaving us, he has a very dedicated small present, and I would like to follow this habit here as well. So my presentation should bring together, on the one hand, Schumpeter and Schubert, but even more so, I think, the Mozartkugeln which he has handed over to everybody. So it comes under the overall heading: some Schumpeterian ideas on Schubert statistics, sweetened by Mozart. Thank you very much.

Thank you very much for the time we could spend together over the previous eight years.

Thank you now also for this impromptu present for Aurel, who has amply merited it. It is now very difficult for me to wrap up, although we have seen how much consensus there is among the different speakers, and that makes it much easier. We saw, with me at the beginning, how much certainty there is about the change that will come upon us through technology; we have seen the depth of the change and how it affects many areas of society. We have also been told that the very nature of the change will be towards more decentralisation, which means there will be more stakeholders, which will require a greater number of cooperative arrangements, exchanges of information and sharing of knowledge. And then I see the one slight contradiction that still remains: we also see that trust is still needed, and trust has usually been exercised through a trusted central institution. That is still so in issuing money; but is it also so in issuing statistics, or can we assume that future technological change will embed trust in the processes rather than in a central institution? That remains to be seen. We have now certainly had, on the production side, a sharing of knowledge, I hope; but this has come at the expense, on the consumption side, of consuming half of your well-merited coffee break. I normally hate to wrap up a session without at least one round of questions from the audience, so maybe the organisers will allow me to open the floor for a few minutes to see whether we have some questions; but only one round, because I would hate for you to go entirely without a coffee break.