Hello again. We are close to the start of the final lecture of the afternoon session of the first day of our school. We still have one or two minutes. The presentation is already shared, I see it, but maybe you can share it in full-screen mode, because I think it is not full screen now. Okay, I will try like that. Yeah, great. Perfect. Now it is okay.

So I think we can slowly start. We have one more lecture in today's program, and I am pleased to introduce the next speaker, Professor Ivan Dimov from the Institute of Information and Communication Technologies at the Bulgarian Academy of Sciences. As a scientist, Professor Dimov is a founder of the Bulgarian school of Monte Carlo methods and their applications, and, just as a very interesting detail, he was the PhD supervisor of the previous speaker, Professor Emanouil Atanassov. It is a very great pleasure and a very great satisfaction to see your students grow up and become professors themselves. Professor Ivan Dimov will be presenting this lecture in his capacity as a member of the EuroHPC Joint Undertaking Governing Board, and as such he will introduce to us the very new machine, the petascale supercomputer of EuroHPC in Sofia, Discoverer. So please, Professor Dimov, the floor is yours.

Thank you very much, Professor Ilieva. It is very nice to have such a nice chairperson, one should be careful not to say chairman. It is my great pleasure to be with you and to give this presentation here. I am very happy to see so many active scientists working in this area. So I will now try to give you some short information about the new supercomputer Discoverer.

Okay, let us start. On the left side you can see some pictures of the Discoverer supercomputer. Some of you may know that Discoverer is a petascale supercomputer which can execute 4.5 petaflops as a maximum sustained speed and 6.0 petaflops peak performance.
Discoverer is ranked 91st in the TOP500 list of the world's supercomputers. It is also important to say that Discoverer's infrastructure is co-funded 35% by the EuroHPC Joint Undertaking and 65% by the Petascale consortium and the Bulgarian government. Let me tell you more precisely: Petascale Bulgaria is a legal consortium combining the knowledge and 15 years of expertise of the National Center for Supercomputing Applications, the Strategic Center for Artificial Intelligence, the Institute of Information and Communication Technologies at the Bulgarian Academy of Sciences, and Sofia Tech Park, where the machine is hosted.

I should also say that Discoverer's objectives are to foster better science for society, to facilitate cross-border collaborations between academic institutions and business, and to help train the next generation of Bulgarian and European IT talents. Here I would like to add that very recently EuroHPC announced a new project which aims to educate Masters of Science in HPC. This is very important for Europe, and it is also the first example of a program where you have a common European Master of Science degree, and this degree is in HPC.

Now, very briefly, I will give some information about the system architecture of the Discoverer supercomputer. It has 12 direct liquid cooling racks with up to 32 blades per rack, 376 blades in total. Each blade carries three nodes, which gives 1128 compute nodes. Each node has two AMD EPYC processors, so Discoverer has 2256 processors in total. What else? There are 256 gigabytes of shared memory per node, and each processor contains 64 cores, so at the end of the day we have a bit more than 144,000 cores. I should also say that the total size of the RAM reaches over 300 terabytes. The fast DDN disk storage delivers more than 20 gigabytes per second. And, what is also important, the internal InfiniBand interconnect runs at 200 gigabits per second.
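[Editor's note] The node arithmetic quoted above can be cross-checked with a short sketch. The figures are the ones from the talk, not an official specification; the gap between the computed compute-node RAM and the quoted system total is presumably made up by service or fat nodes, which is an assumption.

```python
# Cross-check of Discoverer's configuration as quoted in the talk.
BLADES = 376              # 12 racks, up to 32 blades per rack
NODES_PER_BLADE = 3
CPUS_PER_NODE = 2         # AMD EPYC 7H12
CORES_PER_CPU = 64
RAM_PER_NODE_GB = 256

nodes = BLADES * NODES_PER_BLADE          # 1128 compute nodes
cpus = nodes * CPUS_PER_NODE              # 2256 processors
cores = cpus * CORES_PER_CPU              # 144384 cores ("a bit more than 144,000")
ram_tb = nodes * RAM_PER_NODE_GB / 1024   # 282 TB from compute nodes alone;
                                          # the quoted 300+ TB total presumably
                                          # includes non-compute nodes (assumption)
print(nodes, cpus, cores, round(ram_tb))
```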
It is also maybe important to mention here that the whole system weighs over 30 tons, and the total power consumption is 1.3 megawatts.

Some information about the compute node design. As I already mentioned, the processor is the AMD EPYC 7H12 with 64 cores, so there are 128 CPU cores per node. The main memory per node is 256 gigabytes. It is also maybe important to mention that the sustained LINPACK performance is 3.9 teraflops per node. What else? Maybe also important from my point of view is the LINPACK node power consumption, which is 661 watts per 256-gigabyte compute node.

Some brief information about the high-performance network. It is InfiniBand HDR. The interconnect bandwidth per link is 200 gigabits per second. Another important parameter is the expected latency, which matters for many applications: it is 520 nanoseconds. And another important thing is the interconnect topology, which is Dragonfly+. The high-speed interconnect network topology is shown here. The Dragonfly+ topology is considered close to optimal for many important applications. You know that optimality in this case depends very much on the specific application, and you never have something which is absolutely perfect, but this topology was chosen for this computer.

Next, some brief information about the software environment, which is quite standard from my point of view. The operating system is Red Hat Enterprise Linux 8. The compiler suite includes the AMD Optimizing C/C++ Compiler (AOCC), the GNU Compiler Collection, Python, and R. R may be important for you: you may know it from many applications connected to stochastic modeling, sensitivity analysis, and similar topics, which are very modern and very up to date for studying large systems.

Some information about the numerical libraries. There are the AMD Optimizing CPU Libraries (AOCL), as you can see.
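[Editor's note] The per-node LINPACK and power figures quoted above imply an energy efficiency that can be checked with simple arithmetic; this is an illustrative back-of-the-envelope calculation from the talk's numbers, not a measured result.

```python
# Energy efficiency implied by the per-node figures quoted in the talk:
# 3.9 TFlop/s sustained LINPACK and 661 W per compute node.
NODES = 1128
TFLOPS_PER_NODE = 3.9
WATTS_PER_NODE = 661.0

gflops_per_watt = TFLOPS_PER_NODE * 1000 / WATTS_PER_NODE
system_pflops = NODES * TFLOPS_PER_NODE / 1000
compute_kw = NODES * WATTS_PER_NODE / 1000

print(f"{gflops_per_watt:.1f} GFlop/s per watt")  # 5.9
print(f"{system_pflops:.2f} PFlop/s sustained")   # 4.40, close to the quoted 4.5
print(f"{compute_kw:.0f} kW for compute nodes")   # 746, part of the 1.3 MW total
```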
For scientific computing there are Trilinos, BLAS, LAPACK, and ScaLAPACK, packages connected to linear algebra problems, and also eigensolvers, which are important for computing eigenvalues and eigenvectors. These are important for a large class of applications, because eigenvalues and eigenvectors characterize the stability of large systems. There is also some information here about debugging tools and about the resource and workload manager.

Some brief information about software and target application areas. There is open-source software, and we will be focusing on software connected to bioinformatics and genomics; some packages are listed here. The next area is computational and quantum chemistry, including Quantum ESPRESSO. Then come molecular dynamics, mesoscale modeling, and Monte Carlo. You know that in Bulgaria we have quite a strong school in Monte Carlo, with many important applications in different areas, in physics, in biology, and elsewhere. Then computational fluid dynamics, which also includes the finite element method, because we also have a community of colleagues who are very strong in this area. And of course artificial intelligence and big data analysis, an area which is also very popular and very interesting for many Bulgarian scientists and many ongoing projects.

As application areas, we mention here in silico drug discovery; we already had one very good lecture this morning about that, and we have a strong community in this area. Then structure-property predictions and molecular discovery. The next one is digital product formulation and optimization. The next area is a sort of favorite for our scientists dealing with HPC: climate and weather forecasting and environmental modeling. We have had quite successful participation in many international projects connected to this area. The next one is simulated environments in different areas, such as automotive, civil engineering, and so on.
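[Editor's note] As a flavor of the Monte Carlo methods mentioned above, here is a minimal sketch: estimating pi by sampling the unit square, with the usual statistical error estimate that shrinks like one over the square root of the sample size. It is purely illustrative and is not code from the Discoverer software stack.

```python
# Minimal Monte Carlo sketch: estimate pi by counting random points
# of the unit square that fall inside the quarter circle.
import math
import random

def mc_pi(n, seed=42):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    p = hits / n
    estimate = 4.0 * p
    # standard error of the hit fraction, scaled by the factor of 4
    stderr = 4.0 * math.sqrt(p * (1.0 - p) / n)
    return estimate, stderr

est, err = mc_pi(100_000)
print(f"pi ~ {est:.3f} +/- {err:.3f}")
```

On a machine like Discoverer such independent sampling parallelizes almost perfectly, which is one reason Monte Carlo methods scale well on large HPC systems.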
And the last one mentioned here, which is quite important, is connected to financial markets dealing with big data, which is more and more popular nowadays.

Some words about Discoverer's access policy. In this area we are bound to follow the policy developed by the EuroHPC Joint Undertaking, and we are strictly following this policy. So, talking about resources, we plan to have regular HPC projects for 80 to 90% of the capacity; this is the main part. You know that, according to the rules in place, there is a strict procedure for preparing and evaluating project proposals. We have a very few percent for HPC benchmarking projects, which is important from the point of view of EuroHPC policy, because by the end of this year there is an ambition to better distribute work across the petascale supercomputers already installed in Europe: to say what kinds of applications run more efficiently on some supercomputers and what applications should run on other supercomputers, including the pre-exascale supercomputers that are already in the process of installation. The largest one is the so-called LUMI consortium; this supercomputer will be installed in Finland, but the consortium contains a number of countries. We will also have something like 10% for fast-track applications, meaning applications connected to urgent problems, to possible hazards. You may remember the heavy snowfall in 2017, I think it was, when our younger colleagues did a lot to produce a very precise prediction of the snowfall and to give advice to the government, which was planning special actions to mitigate the problem.

As I already mentioned, EuroHPC calls will be organized via the PRACE project and are targeting pan-European HPC projects, including academic and industrial applications. The split between academic and industrial applications is 80% to 20%.
So in this regulation we have quite a significant percentage of industrial applications, but the regulation for them is a bit different. The Petascale consortium's resource share is 65%, and it will be allocated using the EuroHPC access policy. What else? I want to say that for both the EuroHPC share, which is 35%, and the Petascale consortium share, which is 65%, there is an additional split between free access, for purely scientific, academic applications, and paid access, for industrial, for-profit applications.

Because I still have something like 10 minutes, I will try to give some illustration of possible applications that are welcome on the Discoverer supercomputer; of course, other applications are very welcome too. Here I would like to say a few words about applications that are already prepared to run on the supercomputer, and one of them is connected to computer modeling. I will mention here that there are great challenges in environmental modeling. There is the impact of climate change on pollution levels. It is also important to consider scenarios for climate change in order to find possibilities for people to adapt, and the impact of climate change on European pollution levels is also very important. This study is extremely important with regard to the new Green Deal, which is a very important European initiative, and to how the Green Deal affects other areas such as the economy, especially the production of crops and other products.

Here we are also interested, and have quite a long history of contributions, in developing efficient methods and algorithms in this area, such as variational data assimilation, the use of adjoint equations when you deal with partial differential equations, and the use of perturbation techniques, because very often your input data comes with some uncertainties, and it is very important to have sensitivity analysis to measure how reliable your model is.
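[Editor's note] The perturbation-based sensitivity analysis mentioned above can be sketched on a toy problem: perturb each uncertain input by a small relative amount and compare the normalized responses. The two-parameter "model" below is entirely made up for illustration and is not an actual air-pollution model.

```python
# Toy illustration of perturbation-based local sensitivity analysis.

def model(emission_rate, wind_speed):
    # hypothetical steady-state concentration: source over dilution
    return emission_rate / (1.0 + wind_speed)

def relative_sensitivity(f, args, i, h=1e-6):
    """d ln f / d ln x_i, estimated by a forward finite difference."""
    x = list(args)
    base = f(*x)
    x[i] *= (1.0 + h)
    return (f(*x) - base) / (base * h)

args = (10.0, 4.0)  # nominal emission rate and wind speed (arbitrary units)
s_emis = relative_sensitivity(model, args, 0)
s_wind = relative_sensitivity(model, args, 1)
print(f"emission sensitivity ~ {s_emis:.2f}")  # ~1.0: output is linear in the source
print(f"wind sensitivity ~ {s_wind:.2f}")      # ~-0.8: the dilution term
```

Real studies of this kind replace the local finite difference with adjoint equations or variance-based (Sobol) indices, but the question asked is the same: which uncertain inputs dominate the model output.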
Regarding integration methods, I would like to mention here the backward Euler method, the implicit midpoint rule, fully implicit three-stage Runge-Kutta methods, and so on.

Here is some illustration of special techniques for emergency situation forecasting, with just one example of such an emergency situation. For this kind of application you certainly need very high computational power. This example is about predicting the level of tropospheric ozone concentrations under certain situations, and you can see that in some parts of northern Italy there are very critical concentrations in this study.

Another possible application is the so-called sensitivity of European pollution levels to changes in human-made emissions. Human-made emissions are what we can control, somehow, and they are very important, especially for a parameter called AOT40, the ozone exposure accumulated over a threshold of 40 ppb per hour. This is an integral of the tropospheric ozone concentration over time, and you can see that some places in Europe are highly affected, highly polluted.

Another possible application is quantum computing. We have carried out a number of successful projects dealing with quantum computing based on the Wigner formulation and the so-called signed-particle formalism. There are some pictures; unfortunately I do not have time to talk about them in detail, but this is just an illustration of how one can deal with this. On this slide I have a bit more about it: this is the Wigner equation. You may see that quantum technology is important because you would like to have elements like resistors, transistors, and so on built from a very small number of atoms, even something like six or eight atoms, but in real technology you cannot have exact, regular arrays like the one shown at the bottom left of this picture.
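[Editor's note] The backward (implicit) Euler method mentioned above can be sketched on the standard stiff test problem y' = -lam*y, y(0) = 1, where the implicit update y_{n+1} = y_n + h*(-lam*y_{n+1}) has the closed-form solution y_{n+1} = y_n / (1 + lam*h). For a general right-hand side one would solve a nonlinear system at each step instead; this toy version only illustrates the stability property.

```python
# Backward Euler on the stiff decay problem y' = -lam * y, y(0) = 1.
import math

def backward_euler_decay(lam, h, steps):
    y = 1.0
    for _ in range(steps):
        y = y / (1.0 + lam * h)   # exact solve of the implicit update
    return y

lam, h, steps = 50.0, 0.1, 10     # step size far too large for explicit Euler
approx = backward_euler_decay(lam, h, steps)
exact = math.exp(-lam * h * steps)
print(f"backward Euler: {approx:.3e}, exact: {exact:.3e}")
# Explicit Euler with the same step would oscillate and blow up,
# since its update is y_{n+1} = (1 - lam*h) * y_n = -4 * y_n here.
```

This unconditional stability for decaying modes is why implicit methods like backward Euler, the implicit midpoint rule, and fully implicit Runge-Kutta schemes are the methods of choice for stiff atmospheric chemistry models.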
Normally you have some discrepancies, and it is important to see how the electron wave packet moves. The picture at the bottom in the middle shows that it behaves as if it is moving through a resistor, and in this way you can simulate applications like that. They have been simulated on other supercomputers, but one possibility is to run such a large job on the Discoverer supercomputer, along with many other applications that will appear in different areas.

This is my last slide before I finish. I have borrowed this slide from a friend of mine, Svetozar Margenov, because I like very much the sentences written here. The main goal of using supercomputers, especially HPC petascale supercomputers, is to find a close connection between practice and theory, between industry and academia. I like two sentences very much. The first says that there is nothing more useful for practice than a good theory. The second says that, in theory, theory and practice are one and the same; however, in practice, they are not. So thank you very much for your attention, because this is the last presentation. Now I am open to answering your questions, or if you want to say something about this, you are welcome. Thank you.

Thank you for the very interesting and very timely talk. Now I believe the participants in this autumn school know quite a lot about our newcomer, Discoverer. We were expecting it very eagerly and are looking forward to using it in our simulations. So please, questions, comments. I can see that there is one question in the chat. The question is: are you offering a Master of Science program in HPC already, or will you offer it in the future? I will say very briefly about this project: it will start in September.
Very soon we will have a Governing Board of EuroHPC, and we need to approve this program. The idea of the program is that there are six so-called awarding universities. It is a two-year program. The project starts in September this year, but the actual education will start in 2022, in one year, because we need some time to go through the accreditation agencies and things like that. Among the six awarding universities is Sofia University, the Faculty of Mathematics and Informatics. It also includes the Sorbonne; FAU from Germany; a Spanish university, the polytechnic university in Barcelona, in the province in the northeast of Spain; and then the Politecnico di Milano from Italy. So I think I mentioned all the universities.

The idea is that students start at one university for one semester and then move to another university. So they may start in Sofia, and in the next semester they may move to Paris or to Milan or to Barcelona to continue. They will have a common European diploma, a common diploma of Master of Science in HPC, and this is the first example of such a degree. The project is for 7 million euros over three years, and it also covers student travel from one place to another. So it is quite a clever approach, I would say, and I hope that we will have enough candidates to join this program. This is what I can say to answer this question. I can see that it is already half past six, but in any case, if you have other questions, I am ready to answer. Please.

Yes, thank you for this additional information. There is a question about the recordings from the school, I think. I believe that they will be available online. Are there further questions to Professor Dimov about Discoverer, or about the great ideas and future plans of the EuroHPC Joint Undertaking, of course? Let me have a look.
I think that Kerstin... is it a question, or did you just give applause for the lecturer? Well, if there are no further questions, let us thank the speaker once again. Thank you, Ivan. Thank you very much indeed. This was the concluding lecture of the first day of our school. We will reconvene tomorrow morning at 9:30 Central European Summer Time, which is Greenwich Mean Time plus 2, and we begin with the basics of molecular dynamics. The lecture will be given by Professor Anela Ivanova. So, looking forward. Have a nice evening. See you tomorrow. Bye. Thank you. See you tomorrow. Bye-bye.