Can you actually share the screen and you can start? The next speaker is our good friend, Steve Ndengué from EAIFR. Please share your screen, the floor is yours. Okay, so good morning, good afternoon to everyone. I'm Steve Ndengué from the East African Institute for Fundamental Research (ICTP-EAIFR) here in Kigali, and I would like to talk about some of the research work we are doing in the Condensed Matter Group here at EAIFR. This is just a tiny sample of what we are doing. I'm also going to take you away from materials and liquids for a short while and talk about a system in the gas phase. I've seen the schedule of the meeting and there is very little mention of processes occurring in the gas phase, and I believe this is also an important part of the physics community in Africa. Initially I was expecting to talk about the UV and infrared absorption spectra of nitrogen dioxide and the water dimer, but I realized that 20 minutes would be a little too short to be exhaustive on those two topics, so I will just focus on some recent results we have on nitrogen dioxide that are yet to be published. First of all, why are we interested in nitrogen dioxide? For people who are not in the molecular physics community, nitrogen dioxide has been a very popular prototype for understanding a lot of physical processes, like unimolecular dissociation in molecules, conical intersections, or even quantum control in some specific systems. For more than 15 years it has been a main focus of research for people working in molecular physics; as a small anecdote, my PhD advisor spent almost 60% of his research career just looking at this specific system experimentally. Nitrogen dioxide is also a very important trace gas in the atmosphere: it plays a role in absorbing sunlight and therefore contributes to the Earth's energy budget.
It also contributes to regulating the chemistry of the troposphere: the dissociation of NO2 usually produces an oxygen atom that can be involved in other secondary processes in the atmosphere. Last but not least, NO2, as well as many NOx species, is heavily involved in combustion processes; NO2 in particular is used as an oxidizer in rocket fuel, and I may come back to that a little later. So we have our target: we want to be able to study the NO2 molecule. The question is how we are going to do that, with what framework and what tools. For this specific system, the approach we are going to use is what is known as molecular quantum dynamics. I will first have to introduce that approach for those who are not familiar with it, specifically with respect to what I'm going to present, so it won't be the most general and exhaustive description of molecular quantum dynamics but just the specific area related to the tools I'm using; then I'm going to present some of our results on the photoabsorption of NO2. The starting point for understanding molecular quantum dynamics is the molecular Hamiltonian, where you have the kinetic energy of the nuclei and the electrons, then the Coulomb interactions between the nuclei, between the electrons, and between the nuclei and the electrons. Because this type of equation cannot be solved analytically at all, a lot of approximations are usually made, and the most common one is what is known as the Born-Oppenheimer approximation. This approximation allows you to separate the motion of the electrons from the motion of the nuclei. When you separate the two motions, you can solve what is known as the electronic Schrödinger equation; when the energy solutions of this equation are interpolated, they generate what is known as the potential energy surface. You have two examples of potential energy surfaces on the left.
On the top is a one-dimensional potential energy surface, often called a potential energy curve, and on the bottom is another type of potential energy surface, a multidimensional one. So what do you learn from the solution of the electronic Schrödinger equation? You can learn about the structure of the molecule, and about the electronic, magnetic, and optical properties of the molecule, or even of a material. There are usually two big families of methods used for this solution: wave-function-based methods, namely the post-Hartree-Fock methods, and methods based on density functional theory. The wave-function-based methods are generally more accurate but scale poorly with the size of the system; on the other side, density-based methods are usually slightly less accurate, and not always reliable for specific processes, but scale much better with the size of the system. There are many codes available, some commercial, some academic, to solve these kinds of problems: for example Molpro, Quantum ESPRESSO, and Gaussian, among other codes. So what do we do? Once we are able to solve the electronic Schrödinger equation, we obtain what is known as the potential energy surface. With the potential energy surface in hand, you can start thinking about how the nuclei move in the cloud generated by the electrons: the electrons create a kind of surface on which the nuclei move around. There are two ways of solving this problem. You can use a classical approach, often referred to as molecular dynamics; the first talk this morning described some molecular dynamics simulations of a system. More or less, the force you use to move your particles around is just the negative gradient of the potential energy surface that you built by solving the electronic Schrödinger equation.
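That last idea, moving classical particles with the negative gradient of a potential energy surface, can be sketched in a few lines. This is my own minimal illustration on a toy one-dimensional harmonic surface, not part of the simulations discussed in the talk; the velocity-Verlet integrator is a standard choice for this kind of propagation.

```python
import numpy as np

# Toy 1D potential energy surface V(x) = 0.5*k*x**2 (my own example).
# The force on the nucleus is the negative gradient: F = -dV/dx.
def force(x, k=1.0):
    return -k * x  # -dV/dx for the harmonic model potential

def velocity_verlet(x, v, dt, mass=1.0, k=1.0):
    """One velocity-Verlet step on the model surface."""
    a = force(x, k) / mass
    x_new = x + v * dt + 0.5 * a * dt**2
    a_new = force(x_new, k) / mass
    v_new = v + 0.5 * (a + a_new) * dt
    return x_new, v_new

# Propagate one full oscillation period (T = 2*pi for m = k = 1)
x, v = 1.0, 0.0
dt = 0.001
for _ in range(int(2 * np.pi / dt)):
    x, v = velocity_verlet(x, v, dt)
# After one period the trajectory returns close to its starting point
```

In a real molecular dynamics code the analytic `-k*x` is replaced by the gradient of the fitted multidimensional surface, but the propagation loop has the same structure.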
For our work, for the kind of system we're interested in, we are going to follow the completely quantum route: that is, we are going to do a quantum mechanical treatment of the motion of the nuclei in the system. There are, more or less, two ways of doing it. You can do it by solving directly the Schrödinger equation for the nuclei, or you can use the path integral formalism developed by Feynman and obtain an alternative formulation of quantum mechanics. That theory is formally exact, but its application or implementation usually relies on a lot of approximations that quite often make it less exact than the Schrödinger treatment of the system. Alternatively, you can also decide to treat some modes or degrees of freedom of the system classically and the rest quantum mechanically. For example, if you have in the system a lot of hydrogens, which are much lighter than the heavier particles, you may sometimes want to treat those hydrogens using a quantum mechanical formalism and treat the heavier nuclei using classical mechanics; these are the so-called mixed quantum-classical, or in some cases semi-classical, methods. For the work I'm presenting, like I said, you first solve the electronic Schrödinger equation using quantum mechanical methods, then you solve for the motion of the nuclei, again using quantum mechanical methods. For that purpose, I'm going to use a formalism that has been implemented in the code called MCTDH, which stands for multi-configuration time-dependent Hartree. More or less, the idea here is that you want to solve the time-dependent Schrödinger equation: as you know, the Schrödinger equation has a time-independent form and a time-dependent form, and we want to solve the time-dependent form.
In this specific approach, the wave packet that describes the motion of all the nuclei together, Ψ, which depends on time, is expanded on a basis. So you have your expansion of the wave packet on a basis, and what differs here from other ways of solving this problem is that the basis functions are time dependent, as well as the expansion coefficients. The idea is that we want to select only a small number of basis functions, I will call them local, that are going to follow the dynamics of the system. Because some processes are quite local when you look at the dynamics, you want to be able to use, instead of a huge number of basis functions, only a small number that follow all the nuclei together during the process. Those time-dependent wave functions are obtained using a variational principle, so that in the end you obtain equations of motion not only for the coefficients of the expansion but also for the basis wave functions. The equations are very complicated, but the whole procedure works quite well, and in the end you have a significant gain by using this type of development. This can be exhibited by the numerical effort you need when solving this type of problem: for example, you can look at the memory requirement when you run a standard time-dependent propagation compared to an MCTDH propagation. For a system that has nine degrees of freedom, or nine modes depending on how you want to call it, if you use a standard method (I can explain later what is understood by a standard method), you will require about 1.5 petabytes, and you need about three orders of magnitude less when you decide to use the MCTDH format.
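To make the memory comparison concrete, here is a back-of-the-envelope estimate in the spirit of the numbers quoted above. The grid sizes (32 primitive points per mode, 6 time-dependent single-particle functions per mode) are my own illustrative choices, not the exact parameters behind the 1.5-petabyte figure.

```python
# Illustrative memory estimate (my own example numbers): storing a
# complex-valued wave packet on a full direct-product grid versus the
# MCTDH ansatz, for f degrees of freedom.
BYTES_PER_COMPLEX = 16   # complex double precision

f = 9     # degrees of freedom ("modes")
N = 32    # primitive grid points per degree of freedom
n = 6     # time-dependent basis functions (SPFs) per mode, n << N

# Standard method: one amplitude per point of the full N^f grid
standard_bytes = N**f * BYTES_PER_COMPLEX

# MCTDH: n^f expansion coefficients plus f sets of n SPFs on N points
mctdh_bytes = (n**f + f * n * N) * BYTES_PER_COMPLEX

print(f"standard: {standard_bytes / 1e12:.0f} TB")
print(f"MCTDH:    {mctdh_bytes / 1e6:.0f} MB")
```

The exponential base drops from the primitive grid size N to the much smaller number of single-particle functions n, which is where the orders-of-magnitude saving comes from.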
For that reason, the MCTDH method has been seen as the best way of solving this type of problem on this type of system, because it scales well with the dimensionality of the system, I mean, it scales better with dimensionality compared to other standard methods, and it lets you save a lot of memory and treat a lot of problems. What is even more interesting is that the method has been implemented in a very nice computer code that is more or less open access for everyone. Now, when you want to solve the problem quantum dynamically, one of the main issues is that you have to deal with big integrals. This comes essentially from the fact that you are solving, in practice, the time-dependent Schrödinger equation, so you have to compute multidimensional integrals, and the computation of multidimensional integrals can become very time consuming if you are dealing with a huge system. Usually the potential energy function that you need is a multidimensional function: let's say, for example, you have a system that has 12 degrees of freedom, then you have to deal with twelve-dimensional integrals. The way we can get around that is to express the potential in what is known as the sum-of-products form: that is, we expand the potential in a sum of products of one-dimensional terms. In that case, instead of performing one multidimensional integral, you perform a lot of one-dimensional integrals that are much cheaper. It is somewhat like applying a logarithmic scaling to the cost you would have to spend on this type of multidimensional integral. However, transforming the potential into this specific form is not always straightforward. There are a few ideas on how to do it: for example, there are already in the code that I've been using some techniques to transform the potential, when the system doesn't have too many degrees of freedom, into this specific sum-of-products form.
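Here is a small numerical check, with a toy factorizable potential of my own choosing, of why the sum-of-products form pays off: a two-dimensional quadrature over N² points collapses into two one-dimensional sums over N points each.

```python
import numpy as np

# Toy term of a potential in product form: V(x, y) = v1(x) * v2(y)
# (hypothetical factors, chosen only for illustration).
x = np.linspace(0.0, 1.0, 201)
w = np.full_like(x, x[1] - x[0])   # trapezoid quadrature weights
w[0] *= 0.5
w[-1] *= 0.5
v1 = np.exp(-x)
v2 = np.sin(np.pi * x)

# Brute-force 2D quadrature: O(N^2) operations
full_2d = np.sum(np.outer(w * v1, w * v2))

# Factorized form: two O(N) sums giving the same result
factorized = np.sum(w * v1) * np.sum(w * v2)
```

For a sum of many such product terms the same trick applies term by term, which is exactly what makes the MCTDH integrals tractable in high dimension.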
There is also the possibility of having someone provide you the potential energy surface already in the sum-of-products form that is convenient for your description. And the last option, which is a current research project, is to use a kind of on-the-fly description of the potential, so that you are still able to run the dynamics by generating the potential at every time step; this is something that is a little more involved. What we did for this specific work, since we are dealing with a system that has three atoms, so the dynamics is more or less a three-dimensional (3D) dynamics, is simply use a code that was already provided and transform the potential into that specific sum-of-products form. So now we have our main framework, our main tool, and we want to study the photoabsorption of this system, NO2. How did we get to this specific project? It was suggested by people at Sandia National Laboratories, so it's a kind of collaborative project, and I was allowed to present some of the results from this research. They wanted to find a way to compute, from first principles, the absorption spectrum of nitrogen dioxide at any temperature, and to validate the predictions against experiments. In particular, we wanted to obtain the absorption spectrum between 294 Kelvin and 1300 Kelvin, which is roughly the range of temperatures that is important for combustion studies. Also, they wanted to be able to say: suppose the system does not start in the ground vibrational state but in some excited vibrational state, can we still get an idea of what the absorption cross-section would be? That was because of one type of experiment they were in the process of designing at Sandia National Laboratories. So we started working on the project, and there were actually two sides to it.
On the theory side, my collaborators in the US performed electronic structure calculations to obtain the potential energy surfaces using a very high-level electronic structure method called multireference configuration interaction. With this type of method, you are able to produce not only the ground state of the system but also several electronic excited states. Following those computations of the potential energy surfaces, what we did is study the dynamics of the nuclei in the system using the code MCTDH, in order to produce the absorption spectrum of the molecule at various temperatures. Okay. On the experimental side, they had more or less two tools. They were performing direct absorption spectroscopy, and on the other side they had double resonance spectroscopy. Like I said, you can imagine that your system is initially in the ground state: you first apply a laser and put it in some vibrationally excited level of the electronic ground state, and later you apply a second laser to put the system into the state in which you want to probe it. That is, more or less, the double resonance spectroscopy technique. Here is an image of some of the potential energy surfaces for the dynamics of this specific system, where they had to compute at least four potential energy surfaces and a few dipole moment and transition dipole moment surfaces. In particular, the dynamics starts on the ground state: you apply the laser, the system absorbs the photon and is projected onto one of those three excited states, and then we start following the dynamics of the system as it moves along those various potential surfaces. One of these states is not coupled to the others, so we can also decouple the dynamics: first study the dynamics on one potential energy surface, then study the dynamics on the two coupled potential energy surfaces.
The first question is: since we have to start a wave packet on the ground state, are we sure that the result will be accurate? So we first tested the quality of our potential energy surface. You can see that between about zero and seven thousand wave numbers, which is about 0.9 eV, you have an error over almost 70 or 80 vibrational states of about 14 wave numbers, which is excellent agreement when you compare theory and experiment. Then there is the absorption spectrum. In the time-independent formalism it is more or less an overlap between the ground state wave function, with the dipole moment applied, and the vibrational states of the excited electronic states; but this can be written in the time-dependent formalism as the Fourier transform of the autocorrelation function that you obtain in the course of the dynamics of the system. That's what we did, and we were able to obtain the absorption cross section of the system. On the right you can see the overall absorption cross section, with, in red, experimental data that were produced several years or decades ago, and in black the total absorption cross section; you compare it also, on the left, in the low energy range of the system, and you see there is a very good, almost excellent, agreement between what is obtained from theory and from experiment. In particular, I want to point out the intensity of the cross section: you see the values that are mentioned on the left, and on the right you can see that the cross section axis is in absolute value. There was no scaling, nothing was done; it's purely an ab initio result that you're seeing right here. And not only does the width of the cross section match, but you can also see that some of the recurrences in the cross section, which correspond to specific vibrational recurrences, match very well, and that was proved by the comparison with experiment.
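The time-dependent route just described, spectrum as the Fourier transform of the autocorrelation function, can be illustrated with a toy wave packet. The two-level superposition, energies, and time grid below are my own hypothetical example, not the NO2 data; the point is only that the peaks of |FT of C(t)| land at the eigenenergies.

```python
import numpy as np

# Toy autocorrelation function C(t) = <psi(0)|psi(t)> for a wave
# packet that is a superposition of two eigenstates (my own example):
# C(t) = |c0|^2 exp(-i*E0*t) + |c1|^2 exp(-i*E1*t).
E0, E1 = 1.0, 3.0   # model eigenenergies (arbitrary units)
w0, w1 = 0.7, 0.3   # populations |c0|^2 and |c1|^2

nsteps = 4096
T = 200 * np.pi      # total propagation time, chosen so that the
dt = T / nsteps      # peaks fall exactly on FFT bins
t = np.arange(nsteps) * dt
C = w0 * np.exp(-1j * E0 * t) + w1 * np.exp(-1j * E1 * t)

# sigma(E) ~ |FT of C(t)|; conjugation flips the FFT sign convention
spectrum = np.abs(np.fft.fft(np.conj(C)))
energies = 2 * np.pi * np.fft.fftfreq(nsteps, d=dt)

# the two strongest peaks recover the eigenenergies E0 and E1
peaks = sorted(energies[np.argsort(spectrum)[-2:]])
```

In the real calculation C(t) comes from the MCTDH propagation of the wave packet on the excited-state surfaces rather than from an analytic formula, but the transform step is the same.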
We did the same in the low energy range of the cross section: you can see that we have the same trend at 673 Kelvin, with the experimental data in black compared to the calculation. And you can see clearly from this graph that if you go from 296 Kelvin to 673 Kelvin, we start seeing some kind of temperature dependence in the system. One of the things we can get from those absorption cross sections is the absorption coefficient of the system, and you see here the absorption coefficient at 669 Kelvin: the result of our calculation, in red, matches very well the trend obtained from the experimental data. However, when we start looking at the higher temperature, 1313 Kelvin, you see that there are some differences between the two, and these differences arise because experiments at this temperature are particularly difficult to do. They realized there were probably some errors, some difficulties, in the experiment they were able to run, and they decided that maybe they will redo this type of experiment in order to obtain more precise data. This is one regime where you can imagine that theoretical computation is able to provide data where experiments are not very reliable. So this is one interesting and important example of running this kind of simulation and predicting very accurately data that are important for people in the combustion community. To summarize what we were able to do: a new set of potential energy surfaces and dipole moment surfaces was produced in order to study the system at room temperature as well as at high temperature. The ground state is particularly accurate: we only have about 14 wave numbers of error against the data in the 7000 wave number range. And we found an excellent agreement between the cross sections from experiment and from our simulation.
And the disagreement that you have at high temperature is a motivation for the people at Sandia to rerun the experiment and do something more reliable, and maybe also to improve the instrument there. Finally, I would like to mention the people who contributed to this work: in the US, Richard Dawes and Ernesto Quintas-Sánchez in Missouri, and David Osborn, who is an experimentalist at Sandia National Laboratories; and a collaborator in France, Fabien Gatti. The funding was essentially from the Department of Energy in the US, but there is also institutional support from ICTP, the University of Rwanda, and the government of Rwanda, and the work was actually carried out in two institutes: here at ICTP-EAIFR and partly in Missouri. I realize that I am the only one from our institute to talk, so I would encourage other participants to check the website of our institute. There are many opportunities that come up from time to time, like seminars, research visits, and master's opportunities; some of you may be interested. Also look at the people who are working in our community: you may find some overlap with your interests, so don't hesitate to email us and don't hesitate to attend our events. I hope also that you'll be able to visit us next year, and I guess I'm done with this talk. So thank you. Thank you, Steve. We have time for only one question; the rest we will do during the interaction session. So, I mean, you can unmute yourself and ask a question. Yes, please. Sorry, I don't know if my question is relevant or not, but I'm wondering if the results of this method are comparable with TD-DFT methods? Well, I've done some tests on another system, one that is also well known, using TD-DFT and using this type of approach, and the latter is much more accurate than TD-DFT. The first problem is that with TD-DFT, you use DFT to generate the electronic structure.
And as I said before, in some cases that can be quite accurate, and in other cases it can be quite wrong; depending on the system, you can get good results, but you can also get wrong results. Here, on the other side, we are able to control the construction of the potential energy surfaces from the electronic structure, and you are also able to control a little bit the dynamics of the system. So you can compare results produced using the two methods, but I would trust TD-DFT results a little less than results obtained with this type of approach, at least for small systems. Okay, great. Thank you. One more question? Okay, so we will continue the discussion in the interaction session. Thank you again, Steve.