your screen. So she is a group leader for Electrochemical Materials and Interfaces at DIFFER, which is the Dutch Institute for Fundamental Energy Research in Eindhoven. She did a PhD at ETH Zurich, and after that moved to MIT for a postdoc. Then she came back to ETH as an Oberassistentin (senior assistant), project manager, and group leader, and after that moved to the Netherlands, where she started working at DIFFER. Today she will talk about identifying the limiting processes at electrochemical interfaces, from experimental data to multi-scale modeling. Anja, the floor is yours. Thank you, Maria Loh, for the nice introduction, and thank you also to the organizers for organizing this great workshop. Yes, I am going to talk about what we are doing in my group, Electrochemical Materials and Interfaces, at DIFFER. You have seen from the title: identifying the limiting processes at electrochemical interfaces. This is also, in part, the topic of the workshop. But as you see in my title, I am going from experimental data to multi-scale modeling, and I think this is perhaps something a bit different in my talk. I would like to introduce to you what we do in our group on both of these topics. Let me first start with a slide about DIFFER. DIFFER is the Dutch Institute for Fundamental Energy Research. We work on physics, chemistry, materials, and engineering, so you see it is really multidisciplinary, and we work on future energy applications. We are located in Eindhoven in the Netherlands and have two departments, the Solar Fuels department and the Fusion Energy department. So you see we are working on future energy applications for the mid term and the long term. Next to the research, we also design, operate, and maintain user facilities and infrastructure, so far mainly for fusion research, but we will do this more and more for solar fuels equipment as well.
And since I mentioned solar fuels, let me shortly show this slide: what do we do in general in the Solar Fuels department? You see some teaser images around, but let's focus on the middle here. Mainly, we are tailoring and controlling chemical reactions from the molecule to the system level. So you see we have a large spectrum, from fundamental to really more applied research. The work in my group is, let's say, more on the modeling side, more on the molecular side, but we are also doing other work in solar fuels at DIFFER. You see perhaps also the three pillars here: analysis, control, and modeling. Now here you see what I mean with electrochemical materials and interfaces. We have these in different energy applications: water splitting, fuel cells, electrolyzers, and batteries. It was actually pretty nice what Professor Schlegel said yesterday, that these are the interfaces and materials that we are working on. And these are different interfaces, so we have solid-liquid and solid-gas interfaces, but a lot of things are rather similar; some techniques are even similar. This is also how we look at it in my group: we look at the materials and the interfaces for different applications. At the moment, my group is mainly focused on water splitting. This has to do with the fact that we do not have such large groups at DIFFER, and since we are focusing on both experiments and modeling, it is good to focus on one application. But the methods that we develop can be used for other applications as well. As you might know, I have a long history in high-temperature fuel cells, solid oxide fuel cells, and when you look at the materials side, a lot of things are not so different; sure, with water splitting we have a liquid-solid interface. For all these applications, we need an increase in performance, and the vision is to make these materials and interfaces better.
When we look at water splitting, most people focus on the oxygen evolution reaction because it is more complicated, and this is also what is sketched here. We have many species, many processes, and often complex materials, so it is a highly dynamic and complex electrochemical interface that we are dealing with. What are our research questions there? The first one: I sketched a scheme here, the typical scheme that is considered for the oxygen evolution reaction. But the question is, is this the right reaction mechanism? Which species do we actually have at the interface, and what are the limiting processes there? You see here an electrochemical impedance spectrum; you see these semicircles, which represent limiting processes. The question is, how can we identify these processes? The typical way to do this is by equivalent circuit fitting: you have a combination of resistances and capacitances, and with this you can fit the spectra very well. But then the question is, how can you relate the elements that you have here to the mechanism that you first set up? This is actually rather difficult. And then, if we know this: which materials and compositions would be good for well-performing electrodes? Having this relation between the measured data and the mechanism is actually the main motivation for the research that we do. You see here our approach and also our strategy. We do experiments and we do modeling, and what we try is to measure the same data that we also model. So we simulate, for example, impedance spectra and we measure impedance spectra; also current-voltage curves and chopped-light measurements. This is actually our aim: by combining these two, to identify the limitations at the interfaces and from this to create advanced chemistry and advanced architectures with higher performance. First, what I write here: rather simple experiments.
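The equivalent-circuit fitting mentioned here can be sketched in a few lines: a series resistance plus two parallel RC elements produces exactly the two semicircles seen in a Nyquist plot, one per limiting process. This is a generic illustration; all parameter values below are made up, not numbers from the talk.

```python
import numpy as np

def circuit_impedance(freq, Rs, R1, C1, R2, C2):
    """Impedance of Rs + (R1 || C1) + (R2 || C2): two semicircles in a Nyquist plot."""
    w = 2 * np.pi * freq
    Z1 = R1 / (1 + 1j * w * R1 * C1)   # first parallel RC element
    Z2 = R2 / (1 + 1j * w * R2 * C2)   # second parallel RC element
    return Rs + Z1 + Z2

# Illustrative parameter values (ohms, farads)
freqs = np.logspace(-2, 5, 200)
Z = circuit_impedance(freqs, Rs=20.0, R1=100.0, C1=1e-5, R2=500.0, C2=1e-3)

# Low-frequency limit approaches Rs + R1 + R2; high-frequency limit approaches Rs
print(Z[0].real, Z[-1].real)
```

Fitting such a model to a measured spectrum recovers the R and C values, but, as said above, it does not by itself tell you which physical process each element belongs to.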
You will later see it is not so simple at all. What I mean is that in my group, at the moment, we are not doing these complicated synchrotron measurements where you have to have these meniscus methods and so on. We use, as you will see, rather simple methods, and the aim is then to relate such measurements to the mechanism. In the following, I will speak about experiments; I will talk about how we can use experiments to get to higher performance or to better understand these interfaces. Then I will show you in the second part how we do the modeling to obtain the same kind of data, and you will also see how we can already bridge this gap. But as I said, it is an approach and a strategy, also for the longer term. There is still a lot to do to bridge this and to really fulfill this entire scheme. But I think I can show you that we have quite some good results that show how we will go on in the future. So let me start with experiments. We focused in the last years a lot on hematite, on iron oxide. My fascination for this material comes from its abundance, stability, non-toxicity, and low cost. But from the photoelectrochemical side, there are quite some shortcomings, which is why it is often seen critically whether hematite could be a photoanode. What we did is look at the impact of processing on the photoelectrochemical properties. What you see here is a cross-section. We used the typical substrate, glass with an FTO layer, and on top the photoanode that we prepared by two methods, which I show in the following: by DC sputtering and by RF sputtering. DC sputtering is done in an argon atmosphere; then we anneal to get the oxide. RF sputtering is done in an oxygen-rich atmosphere, so that you already deposit an oxide. All samples were annealed at 645 °C for 10 minutes in air. Here on the right side, you see current density over potential.
We see that we have quite a difference in performance. The DC is more like what a lot of people see; the RF has pretty low performance. So we would like to know: what is the difference in the performance, why is it so different? This is what is known: we get a shift in the onset potential, which is usually related to surface chemistry and catalysis, and the other part is due to the morphology. When we now look with the SEM at the top of the samples, we see that from the top, the DC and the RF samples look rather similar. When we do a TEM cross-section, with FIB cutting, we see here the FTO layer and the hematite thin film on top for both samples. We see that in the DC case, we sometimes have some holes at the interface, and in the RF case, we see some holes in the layer. So this could be an indication of why we get different performance. From the XRD, we see that we indeed get the Fe2O3 phase in both cases, and they look rather similar, even though we see a slightly different crystal phase formation, a texture, with the (104) and the (110) reflections: the (104) is higher than the (110) for the RF, while for the DC, the (104) and (110) are actually rather similar. We also see some differences in the XPS data. This is the XPS data of the oxygen, and the main difference is here around 532 eV; you see a higher bump, a higher peak here. So with the RF samples, we get more OH groups or oxygen vacancies. This gives us quite a picture of the structure and the chemistry of these samples. Then, when we go to the photoelectrochemical properties, we see here the impedance data. We see that we have different performances, which follows straightforwardly from what we see in the current density data. For this study, we fitted this to equivalent circuits as they are shown in the literature.
And shortly: this resistance is associated with charge recombination at so-called surface states, and we have here also the semicircle for the surface states, with its capacitance. In this plot, we show that indeed for the RF-sputtered samples, we have a higher R_trap resistance. So when we look at all these different elements, we can in the end make these pictures of DC and RF: what is going on at these interfaces? You see here that we combine both the electrochemical data and what we have learned from the morphology and the structural analysis. We have holes in the thin film for the RF, and this means we get more bulk recombination here, whereas we have less bulk recombination for the DC films; therefore, these arrows are smaller than those ones. We have also seen in the analysis that we have surface states, indicated here with this symbol. We have drawn them in different colors because what we see from the analysis is that we perhaps even have different surface states, and for sure different densities of surface states, at the interface. Surface state recombination is also stronger here than with DC, and charge transfer to form the O2 is faster in the DC case. So what do we learn from this? We see a strong impact of the processing on the performance; a lot of people report this. But what is actually amazing about this study is that the DC sputtering has the higher performance compared to the RF sputtering. This is pretty nice because DC sputtering means that you sputter metallic iron, and afterwards we anneal it and get a well-adhering, well-sticking oxide phase on top of our substrate. When you think application-wise, this is very nice, because it could be used for fast and easy processing. This is the first thing that we learn from this.
The other thing that we learn is that you really have to look in detail at the processing conditions, and that a lot of the differences in the literature, reports of different behavior, for example when you look at surface states and things like this, may come from the processing always being done differently. So it is good to compare in this way, in one study where you do the same analysis. And this is for sure important when we go later to modeling: we need well-defined and well-characterized data for the modeling, where we know what all the structural and morphological properties are. Perhaps one short slide, a bit of a side note, on the impact of processing, what we find with two different kinds of processing. Usually the literature reports on alpha-Fe2O3, but there also exists gamma-Fe2O3, which is usually not stable. But when we do high ion plasma treatment of the samples, we can make these so-called nanofuzz structures, and we also get an increased performance. We do not see differences between exposed and non-exposed samples in the XRD, but when we go to Raman, we can identify this gamma phase. We also saw this gamma phase under certain sputtering conditions. So the question is, and there we have to go into more detail: can we stabilize a gamma phase and by this increase the performance? This is something that sounds interesting, and we would like to go into more detail. As I said here below, we have also found this gamma phase when we do certain processing with the sputtering. What we do not know yet is the nature of the surface states that we have there. We have surface states, we have other surface states, but what is their nature? These are things that go further, but where we see that even with such a simple material as iron oxide, there is still a lot that we do not really know. Next to the iron oxide, we have also worked a lot on tungsten oxide.
Here we show the impact of defects. You see that we did sputtering and ALD of tungsten oxide films. In the sputtered films, we have these holes; the ALD looks perfect. But as I said before, when you think of a real application of these films: here we have an ALD film of 50 nanometers. We only made this because we wanted to compare the sputtering to the ALD, but it takes such a long time; ALD is more for thinner films. So what we learn from this study is that if we want better performance from the sputtered films, we need to tune the sputtering to get better layers there, for example. Let me show you the difference: on the left is the ALD, on the right is the sputtering. We relate the different overall performance, which is higher for the ALD than for the sputtering, to these physical defects. But what we also find is that there is an effect of what we call chemical defects. Actually, we wanted to get better performance, and the first layers were annealed in air. Then we thought: if we go to O2, we have an oxygen-rich atmosphere, perhaps we have fewer vacancies, perhaps we get higher performance. We repeated all these measurements for the sputtering several times; we always got better performance with the air-annealed than with the oxygen-annealed samples, and for sure it is lower with nitrogen annealing. We did the same experiments for the ALD. And again, thinking application-wise, it is for sure nice if you can just anneal in air and get good performance. Here we show what is actually behind this. We did XPS analysis, and you see here the lattice oxygen, the blue one, and in green the adsorbed oxygen, which is usually attributed to hydroxide groups or to oxygen vacancies. You see that it changes as a function of the annealing, and we summarize it here: you get less lattice oxygen.
So probably more vacancies when you change the annealing atmosphere from O2 to air to N2, and you have more adsorbed oxygen when you go in this direction, probably in order to maintain the charge balance at the interface. What this means is that you actually have a trade-off between increased adsorption of OH and recombination centers; it is this trade-off why the air-annealed samples finally turn out better. OK, then we said: we usually deposit on FTO glass samples, but it would be nice to also use silicon samples. Think again of applications, and also of the effect of getting improved performance due to the Z-scheme. So we deposited the tungsten oxide on these FTO glass samples and on silicon samples, and obtained a higher performance on the silicon samples. Then we did a nice study to find the reason why it is higher. What we did is illuminate our samples with either infrared or UV light, or with both together; this is what you see here. Because the tungsten oxide and the silicon are active at different wavelengths, we could really separate the contributions, which effect is related to the current density in these tungsten oxide heterostructures. Mainly, what we find is that we also have an active contribution of the silicon; it is not only the Z-scheme that gives us an increased performance. What we did then is add a platinum layer in between to increase the work function, to enlarge the band bending of the silicon, and we could increase the performance even more. The next step was to go to the third dimension: first from the glass substrate to a silicon substrate, and now from the silicon to micropillar electrodes. There we collaborated with the University of Twente; they fabricated these nice silicon micropillars for us, and then we deposited tungsten oxide by sputtering on top.
Here, with this cross-section, you see that sputtering is not a conformal deposition method; therefore, we have a thicker tungsten oxide on the top than on the edges. But you see that we were able to deposit tungsten oxide really down to the bottom of the micropillar. On the right side, you see that we changed the height of these pillars and the pitch, so the distance between the pillars. You see that with higher pillars, we get better performance, higher current densities, and with smaller pitches, we also get higher current densities. But actually, we are not so happy with it, because it is not as good as we expected. This is what you see here: when we go down in pitch, we get a strong increase in the total surface area, but the current density does not increase as much as we expect. The first idea for this was: OK, we get some shadowing if the pillars are very close together. This is for sure one reason. But we also figured out that we have some inactive tungsten oxide layer: if we irradiate at an angle, the tungsten oxide on the backside is not just really shadowed, it is really inactive here, like a dead layer or something like this. This is the reason why the performance increase is not as good as we expected. The optical simulations that we did together with TU Delft also show how this depends on the incident angle that you use. Finally, we came to what would be best in the future: to work with pillars that are actually micro-cone structures. We did not produce them yet; at the moment, we are collaborating with a group in Japan that is making micro-cone structures, but we have not yet deposited the functional layers on top.
So the PhD student who did this work went one step further to make nanowire photoanodes, and you see here SEM images of these. We also prepared different lengths of the nanowires, and we found that we get different top surface areas. The top surface area is, when you look from the top, how much area there is, and this is an important parameter here. When we draw this in, let's say, three dimensions with the current density, we see that it is really a trade-off between nanowire length and this top surface area. What we would like to show with this work is how you can, in principle, tune your photoelectrodes: you can actually get quite an increase in performance not only by studying the defects, but also in the third dimension, by having a good structure and designing it in a good way. So I have shown you now, from the experimental side, how we can study defects, phase formation, surface states, and 3D architecture, and already get a lot out of this in order to make better electrodes. The processing conditions have a considerable impact on the performance, and it is really a trade-off between material selection, complexity of the structure, and performance. We have also seen that well-designed, tailored advanced architectures offer a nice pathway towards higher performance. And I bring this up in light of the discussion: should we go to more complex metal oxide structures, or should we better stay with simple oxides? What I have not talked about yet is the surface species. When we would like to investigate the limitations, we actually should know the surface species, and there are different methods; we have heard about them already in this workshop.
What we are using is operando ATR-FTIR, attenuated total reflection Fourier transform infrared spectroscopy. What this technique is about: you measure the infrared signature at the surface. Due to this crystal, you get these total reflections, and then an evanescent wave comes out at the top of your surface, and you can measure an infrared spectrum very close to your surface. When we started this, the paper from Zandi and Hamann had just come out. Actually, at the beginning, we were very frustrated that when our Marie Curie proposal was granted, this paper was just coming out. They claimed that there is an iron-oxo species at this wavenumber. But you also see from the curves that this is not, how to say, very stably measured; the measurements themselves are very difficult, as I show in the next slide. There were also other people measuring in different electrolytes and finding somewhat different features. What we have seen in the last years is that there are a lot of challenges with these measurements, and we have also seen that with other measurements, other species come out. Also, transient and time-dependent measurements had not been done yet. So we continued with this, and what I would like to show you in the next slide is what the challenges are with these methods. As I said, the two papers that I showed before used what I call a separated setup: they have this crystal here, then the sample is actually flipped on top of it, with very little electrolyte in between. This is a problem for the reproducibility and the stability of the measurement. Another setup, which we have followed lately, is to really deposit the films on top of this ATR crystal. The main problem there is the cost, but you get much nicer and more reproducible measurements.
The challenge there is with the sample design: because you have this infrared beam, you have to have an infrared-transparent conducting layer, for example, on top of it. You have to deposit the iron oxide on top of all this, on the zinc selenide crystal, which should not see too high temperatures. So you understand from this that on the materials side, the processing is not so easy, and this also delayed us in finding a good low-temperature processing route for the iron oxide. Let's see. Here you see that we were able to measure on these ATR crystals, that we found a processing route to make low-temperature deposited iron oxide with reasonable performance. And what we did now is measure the infrared spectrum. This is a complicated plot; mainly, we show two measurements here. The one measurement is that we take the ATR-FTIR spectrum as a function of the electrolyte soaking time, because we realized that the electrolyte is soaking into the structure. The reason is that the films were not fully dense; they were a bit porous. What we learned from this is that we have to wait about an hour to an hour and a half to have a stable structure. Then we could do another set of measurements with applied bias and measure the FTIR spectra. What you see here, in this region where we expect the surface species, is that we really do see a change, a broadening. What we have to do now is fit this data and measure under more different conditions, to really be able to say: OK, this is due to this or that surface species. This is ongoing research. I will now go to the modeling and show you how this connects, because these measurements, as I said, are extremely difficult; from the modeling side, we can model the surface species, and this is where we would like to bring the two together. The modeling part I have split in two parts: the multiscale modeling approach itself.
And then I will show in a case study what we can now really do with this. We do microkinetic modeling. We start with a mechanism from Rossmeisl, the four steps: OH, O, and OOH formation, and then O2 desorption. We can write down the equations for each step and the mass balances. For the simulations, we need the rate constants, and we use DFT to estimate them. This is what I show here: we do the typical free energy calculations, as Rossmeisl has shown. We calculate the free energies of these different steps, we can relate the free energy to the redox potential, and with a Gerischer-type model we can estimate the rate constants. I always say estimate, because DFT is done under very ideal conditions; when we compare this to our experiments, it is an estimation, a starting value. But like this, we can estimate these rate constants. As I said, one motivation for the DFT is to estimate starting values. But we also did DFT to try to find out what impacts the overpotential, because the largest step here sets the overpotential, and we would like to figure out the reason why the overpotential differs. We looked at different systems, iron oxide and tungsten oxide, but we also worked a bit with 2D materials; a paper was just accepted yesterday on topoxic oxide. We looked at the dependence of the overpotential on different materials, orientations, cavities, and vacancies, in order to find out what really determines the overpotential in these systems. I have to say, there are a lot of reasons, and I think in the community it is still not very clear why some materials have low overpotentials and others do not.
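The free-energy picture described here can be sketched numerically. In the usual computational-hydrogen-electrode style, each of the four proton-coupled steps shifts by -eU with applied potential U, the largest step sets the theoretical overpotential, and potential-dependent forward rate constants can be estimated from the remaining uphill free energy, in the spirit of using DFT values as starting values. The ΔG values and the prefactor below are placeholders, not DFT results from the talk.

```python
import numpy as np

kB_T = 0.0257  # eV at room temperature
E_O2 = 1.23    # eV, equilibrium potential of water oxidation

# Placeholder free energies (eV) of the four proton-coupled steps:
# *OH formation, *O formation, *OOH formation, O2 desorption (sum = 4 * 1.23)
dG0 = np.array([0.8, 1.6, 1.4, 1.12])

def step_energies(U):
    """Each electrochemical step is shifted by -eU at applied potential U (V vs RHE)."""
    return dG0 - U

# Theoretical overpotential: the largest step sets the limiting potential
eta = dG0.max() - E_O2
print(f"overpotential: {eta:.2f} V")

def rate_constants(U, k0=1e3):
    """Estimate forward rate constants from the uphill part of each step;
    a rough starting value for the microkinetic model, not a precise rate."""
    barriers = np.clip(step_energies(U), 0.0, None)
    return k0 * np.exp(-barriers / kB_T)
```

At U = 1.6 V all steps are downhill and every rate constant reaches the prefactor; below that, the *O formation step limits the kinetics, mirroring how the largest ΔG sets the overpotential.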
But you can sometimes really relate low overpotentials in the free energy calculations to good performance in the experiments. OK, now, when we do the modeling, we have as input the potential and the illumination, and in this paper we use this microkinetic model, with the input from DFT, to calculate the current density. We use a state-space model for this; this we published in the first paper. In the second paper, we also included the charge carrier dynamics, and this is what I want to show here. We have a conduction band and a valence band, and we have surface states here. When we include the charge carrier dynamics, we include the charge transfer via the valence band, via the conduction band, and via the surface states. Why this is such a big step is what you see here: in the first paper, we assumed that the hole density is a fixed value. Here, the hole density is a function of the hole flux, of the potential-dependent dark current, of the rates at which the holes recombine, and of the rates at which the holes get consumed. And you see that the hole density comes in at several places, and similarly when we calculate the density of states of these trap states, for example. We also use a lot of model parameters in our simulations. We try to get them from the literature, but as I said, a lot of values are not very well known. What we can do now is really calculate current densities, but we can also calculate chopped-light measurements and impedance spectra. So these are really simulations from this electrochemical model, and they actually compare rather well to the experiments. I think this is a great step, because most of the time simulations are used to explain data; we try to simulate the same data that we measure, and at first sight, it looks pretty good. Now we can ask: what do you get from this?
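As a minimal illustration of such a state-space model, one can treat the surface hole density as a single state variable driven by the hole flux, with charge transfer and recombination as competing loss terms, and drive it with square-wave light to produce a chopped-light transient. The rate constants and hole flux below are illustrative placeholders, not the parameters of the published model.

```python
import numpy as np

# Illustrative rate constants (1/s) and hole flux under illumination (a.u.)
k_transfer, k_recomb, G_light = 5.0, 15.0, 1.0

def simulate_chopped_light(t_end=2.0, period=0.5, dt=1e-4):
    """One-state model: dp/dt = G(t) - (k_transfer + k_recomb) * p.
    The photocurrent is the charge-transfer term; the light is chopped."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    light = (t % period) < (period / 2)          # square-wave illumination
    p = np.zeros(n)                              # surface hole density
    for i in range(1, n):
        G = G_light if light[i] else 0.0
        p[i] = p[i-1] + dt * (G - (k_transfer + k_recomb) * p[i-1])  # forward Euler
    return t, k_transfer * p                     # photocurrent (a.u.)

t, j = simulate_chopped_light()
# Plateau under illumination: G_light * k_transfer / (k_transfer + k_recomb) = 0.25
print(j.max())
```

Already in this toy version, the plateau current reflects the competition between transfer and recombination; the full model adds the dark current, trap states, and the band-resolved transfer pathways.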
As I said, a lot of parameters are not really well known, so sometimes you can shift the parameters and in this way also get a very nice correlation. But we are actually amazed that we can simulate the data that we measure so closely. OK, what can you do now with this? This is the second part. I looked a bit at the surface state discussion, because this is one reason why we have low efficiency. We have different kinds of surface states: the so-called recombining surface states, which are defects at the photoanode surface, and the intermediate surface states, which are adsorbed intermediate species at the electrode surface; we take the nomenclature of these surface states from the literature. The characteristics of these recombining surface states are the energy level, the surface state density, and the trapping rates. We now assume that the energy level is 0.4 volts below the conduction band; we have this from the literature. We assume that the trapping rates are independent of the surface state density. What we can do in the simulations is change just this one parameter, and this is the nice thing compared to the experiment: we can really change this parameter systematically, which is not easily possible in an experiment. Here I show the main quantities that we calculate in the following: we use the capacitances as the values that we compare when we change the surface state density. You see here linear sweep measurements. We increase the surface state density; when we have more surface states, we get a higher onset potential and a lower current density, which makes sense. Also with the capacitance: when we have more surface states, we get a higher surface state capacitance, and we find also that we get more Fermi-level pinning.
You can say: yes, this is what is expected from experiment too. But it is nice to see this with these simulations, where we start from a microkinetic model with very rough DFT data as input. So we can do systematic studies of single contributions and then relate this to the physics and the chemistry in the experiment; as I said, in the experiments it is much more difficult to change single parameters. Let's go to the intermediate surface states, which are changed through light intensity and reaction rates. What I show here is the capacitance. We see that the surface state capacitance appears before the onset potential and that it is much higher than for the recombining surface states. We also see here the effect of different illuminations, which we, for example, can do much less easily in our lab: in the simulation, we can change the illumination and see what effect this has on the capacitance. Actually, we get a higher capacitance when we have more illumination, because we have more reaction intermediates; but the reaction sites themselves are not changed much, and therefore the difference here is not so huge. Also a nice result from the Mott-Schottky analysis: you have seen that we have this Fermi-level pinning for the recombining surface states. Here, for the ISS, we also see a decrease, and this is shown here in more detail. Sometimes in the literature, this is interpreted as Fermi-level pinning. But actually, when you look with a 30 ohm series resistance, you have this; with zero ohms, you do not. So when you have a series resistance and the current density increases, we actually decrease the space charge potential, and this means that we change this one capacitance here.
So actually, this flattening that we have here in the Mott-Schottky plot is due to a decrease in the space charge potential, but it's not due to Fermi level pinning. And this is something that we can get out of these simulations, while in the experiments it looks actually the same. Here we change, for example, the backward reaction rate and see that we get spikes here, where we do not get them there. This is also something we can change in the simulations but not easily in the experiments. As a summary of this surface state analysis, we see that the recombining surface states are at a different potential than the intermediate surface states, which gives us a measure for the experiments of how this behaves. And we see how, for example, the current density changes. And this is what I said before about the Mott-Schottky analysis: here we get Fermi level pinning, while this other effect that we see in experiments and in the simulations is actually not Fermi level pinning but is related to a feedback loop through the series resistance. So the conclusion from the modeling: we use a multiscale modeling approach for simulating water oxidation, and we have included both the physics in the semiconductor and the chemistry at the interface. So it's a combined approach. We simulate the same data that we experimentally measure, and we are, for example, able to really distinguish between the RSS and the ISS. What we have started at the moment is a sensitivity analysis. I mentioned briefly that we have a long list of parameters. We now need to figure out which are the most important parameters affecting our data. For this we have hired a person from control theory who knows how to handle such a large number of parameters and do a sensitivity analysis. And we use this now for the microkinetic modeling analysis. I'm very curious what comes out of this.
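The series-resistance feedback loop described here can be illustrated with a toy numerical sketch (all parameters are hypothetical, not taken from the talk's model): once the photocurrent turns on, the ohmic drop over R_s caps the space-charge potential, so 1/C^2 stops growing with applied potential even though nothing pins the Fermi level.

```python
import numpy as np

def photocurrent(V_sc, V_on=1.0, j_max=10.0, width=0.1):
    """Toy sigmoidal photocurrent [mA] vs. space-charge potential [V]."""
    return j_max / (1.0 + np.exp(-(V_sc - V_on) / width))

def space_charge_potential(V_appl, R_s):
    """Solve V_appl = V_sc + R_s * I(V_sc) by damped fixed-point iteration.

    R_s in ohms, current in mA, hence the 1e-3 factor to get volts.
    """
    V_sc = V_appl
    for _ in range(300):
        V_sc = 0.5 * V_sc + 0.5 * (V_appl - R_s * 1e-3 * photocurrent(V_sc))
    return V_sc

V_appl = np.linspace(1.2, 1.5, 16)
growth = {}
for R_s in (0.0, 30.0):                      # ohms, as quoted in the talk
    V_sc = np.array([space_charge_potential(v, R_s) for v in V_appl])
    # 1/C^2 of the depletion layer is proportional to (V_sc - V_fb),
    # so its growth tracks the growth of V_sc
    growth[R_s] = V_sc[-1] - V_sc[0]
print(f"1/C^2 growth over 0.3 V: no R_s {growth[0.0]:.2f} V, "
      f"with 30 ohm {growth[30.0]:.2f} V")
```

With R_s = 0 the space-charge potential follows the applied potential one-to-one; with R_s = 30 ohms it grows more slowly once current flows, which is the apparent "pinning" discussed above.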
I'm almost at the end of my presentation, but I would like to use the last few minutes to show you something that we did in addition to this. So I showed you that we use DFT and then do a kind of continuum modeling. What we have also done recently with a postdoc is to combine DFT with MD and kinetic Monte Carlo, and this is shown here. The student did DFT with implicit water and then calculated the kinetics of single steps. So we can determine the activation energy for this process here and then use the kinetic Monte Carlo approach to simulate the surface species and their dependence on the run time. We did this with two different models. What we can get out of this analysis is which surface species we have and how they change over time. And this is data that is comparable to what we can also get out of the microkinetic model, where we can also get plots of the surface coverages. The next step here is to compare these two pathways. Yeah, as I said, we have done experiments and modeling, and we can do similar measurements. You see here surface coverage calculations that come out of our modeling approach. And you see we are not so far yet with the ATR-FTIR, but this is actually where we would like to connect the two. We continue with these ATR-FTIR measurements, and especially here we do a sensitivity analysis. The perspectives that we get out of our modeling are to relate experiments to the modeling but also to validate the models. I showed you that we can change parameters in the simulation that we cannot easily change in the experiments, or that we cannot change individually in the experiments.
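To give a flavor of the kinetic Monte Carlo step mentioned above, here is a minimal Gillespie-style sketch of surface species evolving in time. The two-species network (A interconverting with B) and the activation energies are hypothetical placeholders, standing in for the DFT-derived barriers of the actual reaction steps.

```python
import numpy as np

KB_T = 0.0257       # thermal energy at room temperature [eV]
PREFACTOR = 1e13    # attempt frequency [1/s]

def rate(Ea):
    """Arrhenius rate constant from an activation energy in eV."""
    return PREFACTOR * np.exp(-Ea / KB_T)

def kmc(n_sites=1000, t_end=1e-7, Ea_fwd=0.3, Ea_bwd=0.4, seed=0):
    """Gillespie kinetic Monte Carlo for A <-> B on a lattice of sites."""
    rng = np.random.default_rng(seed)
    n_A, t = n_sites, 0.0                 # all sites start as species A
    trajectory = [(t, n_A)]
    while t < t_end:
        r_fwd = rate(Ea_fwd) * n_A               # total rate of A -> B
        r_bwd = rate(Ea_bwd) * (n_sites - n_A)   # total rate of B -> A
        r_tot = r_fwd + r_bwd
        t += rng.exponential(1.0 / r_tot)        # stochastic waiting time
        if rng.random() < r_fwd / r_tot:         # pick which event fires
            n_A -= 1
        else:
            n_A += 1
        trajectory.append((t, n_A))
    return trajectory

traj = kmc()
theta_A = traj[-1][1] / 1000
print(f"final coverage of species A: {theta_A:.2f}")
```

The trajectory of (time, occupation) pairs is exactly the kind of coverage-versus-time data that can be compared against the microkinetic model's coverage plots.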
An idea is also to predict experimental data: when we can bring some measurements together with the surface coverages here, in the future we could predict data for other material systems, and also go to different mechanisms, implement these, and compare them to experiments. So the final conclusion is that we can simulate electrochemical data that can be directly compared to electrochemical measurements. We believe that multiscale modeling in combination with experiments can fill the gap of experimental challenges; what I mean here is also regarding surface species. And I've shown you for the experiments that well-defined and tailored advanced architectures also offer nice pathways towards higher performance. With this, I would like to thank you. I would like to thank my group (it's a pretty old image now), and I would like to thank the funding agencies as well as the collaborators. Thank you very much, and I'm happy to take any questions. Thank you very much, Anya, for this overview and this nice combination of experiment and modeling. We can start with questions. We do have one from the audience, from José Carlos Conesa. Can you ask the question, José? Yeah, well, I am at the Institute of Catalysis and Petrochemistry in Madrid. Anya, how do you explain that in the tungsten trioxide case, infrared irradiation gives nearly no response, while UV plus IR gives a significantly higher photocurrent than UV alone? You refer to the measurements where we separated UV and infrared irradiation, because the tungsten oxide is not active in the entire wavelength range. And this is the reason, and what we showed there: indeed, when you have a sample where the tungsten oxide is deposited on silicon, you can really separate the contributions from the silicon and from the tungsten oxide by changing the wavelength of the irradiation. So we had an infrared source and a UV source.
And then we used just the one source, then the other source, and then the combined sources. So if you used exclusively tungsten trioxide alone, you would not have this effect? Tungsten trioxide? Yes. On silicon? No, no, without silicon. Without silicon? No, this experiment we did on silicon only. We did not do it on other substrates, as I remember. And we also did not do it on bulk samples. I mean, we also worked sometimes with people who do plasma exposures, so they work on bulk tungsten samples with the tungsten oxide on top, but I think these measurements we did just on the silicon. And on FTO? No. So it would be very interesting to do this on FTO. Yeah, so what would you expect to see? I would think that in that case you would not get any enhancement by infrared. Say it again. You would not get an enhancement using infrared. Yes, yeah, without the silicon, yes. And this is actually what we state in this paper: the silicon itself enhances it, but you also get an enhancement due to the interface that you have. So there are actually two effects. And this we do not expect with the FTO. But if you ask me now, I should go back, because I'm not sure whether we actually did these measurements on FTO. No, I don't think we did. But it would be nice, perhaps, to really confirm this. Yeah. OK, thank you. Then we have a question from Simone. Thanks. I have a question regarding the Fermi level pinning. We heard on Monday from James Durrant, and in his samples he has no evidence of Fermi level pinning, whereas in what you have discussed today, Fermi level pinning is an important issue, right? So do you think this is sample dependent? Or what is your opinion about this? Yeah, my problem is that I was not at the conference on Monday.
So I cannot comment on the difference there, sorry. What I know is that in the literature, Fermi level pinning is discussed a lot. And usually, when you do experiments, you can do this Mott-Schottky analysis; this is actually a way of measuring it. And when you see in the Mott-Schottky analysis that you get this leveling, this flattening, then you say, OK, it is Fermi level pinning. And this is exactly what I showed there: one part is Fermi level pinning, but the other is not. For the other one, we showed from the theory that it is actually a feedback loop. We have a series resistance in the system, and due to the series resistance, we get a decrease of this one over C squared, so an increase, actually, of the capacitance. And this leads to this decrease and looks a bit like Fermi level pinning. And we found in the literature that this is sometimes also called Fermi level pinning. Now, your question was also a bit different: is it sample dependent whether you get Fermi level pinning? I think it's material dependent. I do not know what the material in that talk on Monday was, but the Fermi level pinning is related to the material itself, to how strong this is in the band scheme, but I think it is also related to surface states. So when you tune your surface states, you can get a different kind of Fermi level pinning, also for the same material. Right. Yeah. Thank you. OK, I don't see other questions at the moment. If I may? Yes, please. Just one curiosity. In your current versus potential curves, I noticed that you have an increase, then a plateau, and then another increase, right? So what is the reason for these two waves, let's say? The current-potential curves, so these were in the beginning, with the experimental data. These ones, you mean?
Towards the end, you were able with your microkinetic model to nicely reproduce this feature. So you see that the current density increases with the potential, then there is this kind of plateau, and then it goes up again. So I was wondering what the reason is. This is when you do electrical water splitting, actually. Right, OK. Can I ask you something? Yes, please, Michiel. Very nice talk. You made a convincing point that you have to model everything at the same time. I have a question about pH. You didn't mention that at all. Your oxide falls apart if you go down with the pH, no? You have to do it at very high pH. Can you include it in your model, or is this all wrapped up somewhere? So in the experiments, we are usually working at pH 13.6. Because it doesn't exist at lower pH. Yes, and in the modeling, we actually also assume high pH. There's one point: when you go to the experiments, you also change the pH locally. And this is a huge discussion in the literature at the moment, that when you do your experiments, you locally change your pH, so you have a different pH at the interface. And this is actually what we started now: we are on the way, with a postdoc, to start measuring the local pH. So this is from the experimental side. In addition, there is something related to pH in the theory: there is a competition between the surface oxygens, the ones connected to the iron and the ones connected to the water, and pH also has an effect there. I know that will be very challenging, but some more chemistry in the model may be needed. OK, thank you. So mainly, we assume overall that we have a pH similar to the experiments. But I know what you mean; I think you refer to the DFT data also, right? No, I refer to the theory of these people in Switzerland. Schmidt, I think, is the name. They have these complicated theories about the competition between surface oxygen and water oxygen and the effect of pH on this. OK, I understand.
OK, there's one question from Peter. Yeah, I know we're late, but I have one quick general question. At some point, you related this DFT-derived Gibbs free energy of an elementary step to a potential, and then you used, you said, this Gerischer approach to calculate or estimate rate constants. I couldn't see the slide very well. Can you explain this connection again to me? The potential, or the way you calculate it, is clear to me. This is what I referred to as the so-called limiting potential earlier: this is the potential, or the overpotential, at which the overall free energy ladder becomes downhill, right? That's what you referred to. Yes. This is the slide that you mean. Yeah. And then on the other side, what do you have here? This is like a Marcus theory sort of expression. Where do you put in, then, this sort of limiting potential that you obtain from the equation in the middle? So from here, from the free energy calculation, we get this delta G. And from this, we can estimate the redox potential, and this you put in here. And at what potential? This is mainly, and as I always say, I think it's an estimation, right? And the free energy ladders on the left, they're usually evaluated at some potential on the computational hydrogen electrode scale, right? So the way this looks to me, it looks very close to zero volts or so. That's what I usually see in these papers, right? And then it all goes uphill. And the largest step, so to speak, is the last one to become downhill when you apply a positive overpotential. Actually, the largest step in most simulations is usually either this one, the second, or the third, right? This is also what is usually known, that the O or the OOH formation are the potential-limiting steps. Yeah? Yeah, potential limiting or, yeah, exactly.
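The estimation chain being discussed (DFT free-energy change, effective redox potential, Gerischer/Marcus-type rate) can be sketched schematically. The reorganization energy, prefactor, and sign conventions below are hypothetical placeholders, since the talk emphasizes that these rate constants are only estimates:

```python
import numpy as np

KT = 0.0257  # thermal energy at room temperature [eV]

def gerischer_rate(E, E_redox, lam=0.5, k0=1.0):
    """Gerischer/Marcus-form rate constant (arbitrary units) at electrode
    potential E [V]:
        k = k0 * exp(-(lam - (E - E_redox))^2 / (4 * lam * kT))
    which peaks when the driving force E - E_redox equals the reorganization
    energy lam. lam = 0.5 eV and k0 are illustrative placeholders.
    """
    return k0 * np.exp(-((lam - (E - E_redox)) ** 2) / (4.0 * lam * KT))

# Free-energy change of one elementary (one-electron) step from DFT [eV],
# e.g. the largest rung of the free-energy ladder; value is hypothetical
dG = 1.6
E_redox = dG / 1.0           # effective redox potential [V] for a 1e- step

E = np.linspace(0.0, 3.0, 301)
k = gerischer_rate(E, E_redox)
print(f"rate is maximal near E = {E[np.argmax(k)]:.2f} V")
```

This is only the shape of the argument: the DFT delta G fixes where the redox level sits, and the Gaussian Gerischer factor then converts potential into a rate estimate that enters the microkinetic model.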
The limiting step, I mean, thermodynamically, is the last one to become downhill when you apply an overpotential. Oh, yeah, yes, you can say it like this, yeah. So you can also plot, yeah, what would you mean now? Some people plot it in order to see when it becomes downhill, yeah. Yeah, I usually plot it like this, yeah. But what I want to say is also, perhaps I should say this too: we try to estimate these values because, as I said, our idea is to simulate the data that we measure. And at the moment, we have done this using input data that we have from the literature and estimating, let's say, rate constants. We are fully aware, and this is also what I said, that this might not fully be the truth. But the next step is actually to fit this model to different experimental data and then to extract again the values that we do not know well. So this is actually the next step. And with this, I do not want to determine kinetic parameters; this is not my idea. I want to see, when I have different electrodes, when I have different materials, how much these values change, and what the most sensitive parameters and values are. And this is actually the strategy. I do not want to determine kinetic parameters, but I want to figure out which are the most sensitive ones, which ones I can change in the experiments, how I can tune my experiments with this, and how I can perhaps avoid doing a lot of experiments by predicting data in this way, to say, OK, this needs to be done. This is actually the way. And with this, I say, if I do optimization, I can perhaps figure this out better. But the first step for this is to do a sensitivity analysis to figure out which values matter. Because when we look at these input values, I mean, you can also refer to the paper: behind several values I might have question marks, yeah? Yeah, I understand. Thank you.
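The kind of screening described here, finding the most sensitive parameters, can be sketched with a simple one-at-a-time local sensitivity analysis. The toy current model and its parameter values are hypothetical, standing in for the much longer parameter list of the actual microkinetic model:

```python
import numpy as np

def current(p):
    """Toy steady-state current: forward rate k_f fills sites, backward
    rate k_b empties them; all parameter values are placeholders."""
    k_f, k_b, n_sites = p["k_f"], p["k_b"], p["n_sites"]
    theta = k_f / (k_f + k_b)            # steady-state coverage
    return n_sites * k_f * (1.0 - theta)

def sensitivities(model, params, rel=0.01):
    """Normalized (logarithmic) sensitivities d ln j / d ln p by forward
    finite differences, perturbing one parameter at a time by `rel`."""
    base = model(params)
    out = {}
    for name in params:
        pert = dict(params)
        pert[name] = params[name] * (1.0 + rel)
        out[name] = (model(pert) - base) / (base * rel)
    return out

p = {"k_f": 1e3, "k_b": 1e4, "n_sites": 1e15}
S = sensitivities(current, p)
for name, s in sorted(S.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:8s}  d ln j / d ln p = {s:+.2f}")
```

Ranking the parameters by these normalized sensitivities is exactly the kind of output that tells you which values must be pinned down by experiment and which can be left as rough literature estimates.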
And this we need to figure out more closely. Okay, thank you very much for this lively discussion. I'm afraid there are still a couple of questions in the queue, but I think we should also allow people to get a coffee before starting again at 5:30. So yes, I'll see you in 20 minutes. Hi. Hi, ciao. How are you? So have you already tried your screen? Yeah, I'll share again now. Hi, Simone. Hello. Can you see that? Yes, that's perfect. Very good. Okay, great. Thank you. So do we have a couple of minutes more? Yeah, I think so. So I was wondering, David, did we meet at one of the CPMD meetings or something? I definitely recognize your name. Were you there at the one, where was that? I don't know, that was probably not, that was in Japan when it was... No, it might be from a longer time ago, maybe a total energy conference or something like that in Trieste itself. Yeah, could be, yeah. It's interesting. I was feeling like my research had gone more applied and I had lost touch with some of my electronic structure roots, which were firmly based in places like Trieste. And now I see all you guys are thinking in the same direction. So it's kind of an interesting way that it has ended up going. Obviously you were a front runner and now we all follow, right? No, I don't think so. We're all following the money, I think that's the thing. It's good to see all of you. Well, I think it's 5:30. So welcome back, everyone, to this second part of today's workshop. It's my pleasure to introduce David Prendergast; he's director of the Theory of Nanostructured Materials facility at Berkeley. David did his PhD at University College Cork in Ireland, and after that he moved to the U.S. for a first postdoc at Lawrence Livermore National Laboratory, actually with Giulia Galli, right? And then he moved to Berkeley, where he was then appointed as director.
So today he will present an understanding of electrified interfaces with constraints, and I think we now move into the realm of first-principles molecular dynamics and theory, right? Okay, David, the floor is yours. Okay, thank you so much, Maria Laura. And yeah, many thanks to the organizers of this fantastic workshop. It's a great way to get both experimentalists and theorists together and to talk about what's topical in this area. And I'm very glad to have the opportunity to share my own perspective in this talk. So, okay, let's begin. Just a brief outline. We'll begin similarly to the previous two talks: we'll talk a little bit about experiment. I don't do any experiments myself, but there is inspiration you can gain from seeing some of the really cutting-edge experiments that can now look at electrochemical interfaces. There'll be a brief discussion about continuum models and the need for those, I think, in this field, even though predominantly, as Maria Laura pointed out, I'm kind of a first-principles molecular dynamics person in the kind of work that I do. Then we'll talk about solvated species and how there's a much more diverse population of species in solutions than I would have thought, particularly when we start looking at the common electrolytes that might be used in batteries, and maybe beyond lithium-ion batteries, electrolytes that might involve multivalent ions like magnesium. And then I'll talk a little bit about challenges in modeling dielectric layers between the electrode and the electrolyte, which could be an active material like sulfur, which I'll talk about, or it could be a component of surface oxidation, or what's called a solid electrolyte interphase, which exists in all the Li-ion batteries as well. Okay. The group that we have in Berkeley does a lot of different things, everything from ultrafast chemical dynamics to energy conversion and storage, which is mainly the focus of this talk, and solution-phase chemistry; there's a little bit of that too.
And a lot of simulated X-ray spectroscopy. Most of the talks I've ever given in my career have been about simulated X-ray spectroscopy. There will be a little bit of that in this talk, but really it's more about simulating interfaces, and a need that arose while we were trying to model some challenging X-ray experiments. There are pictures of the group there, and our overarching goal, generally speaking, is to try to provide a microscopic understanding of functioning energy-relevant materials and their interfaces, and then to use that to provide design rules back to the experimental community so that they can make better materials or better devices. And as always, we are focused on interfaces, right? There's this quote from Herbert Kroemer, the Nobel Prize winner, that the interface is the device. Materials on their own don't do very much; you have to put them together to make them do something. And typically what emerges when you put two materials together is not just a simple additive function: you actually develop an interfacial volume or element that has its own properties, which might be slightly different from either bulk material. And there's a great example of this that has already come up in previous discussions within this workshop: when you put a fluid next to a solid interface in particular, you typically see density fluctuations. That's a very simple example of how an interface might be different from its bulk properties. And this densification is a normal effect; it's due to entropy and the reduced number of accessible states that you have at the interface, which tends to pile up molecules in layers. Now, real interfaces are much more complicated, and this is still just a cartoon, but it looks pretty scary when you think about what might be happening at an electrochemical interface and all the various processes that might occur.
We try to do our best as simulators, as modelers, to make this somewhat simpler so we can gain some understanding that's not overly complex or too specific. But there are some key questions that we're trying to get at. One is: what are the actual species that are present at the interface? And of all those species, which of them are actually relevant, which of them define the function of the interface? And if we can figure that out, maybe we can isolate some particular bottlenecks or limits on the performance of a given material system or device. Okay, so let's talk about some inspiration from experiment to start things off. My particular focus has been on X-ray spectroscopy, and very recently we've seen a lot of advances in operando X-ray spectroscopy. We heard earlier this week from Professor Schlögl, and he's definitely a world expert in this area, and you've seen many talks referring to work at synchrotrons to look at either in situ X-ray photoemission, exploring the oxidation states of materials and how they change at interfaces, or operando X-ray absorption spectroscopy, which is a really nice way to look at electronic structure, first of all, and then to convert that into a description of the local chemistry that might exist at the interface: changes in oxidation state, maybe the orientation of molecules with respect to the surface changing their dipole, and even some bias dependence as you get to the operando measurements. The nice thing about these particular measurements, if you do them in a certain mode, which is called electron yield, is that you can measure details that seem to be within the first one or two nanometers of the actual interface that you care about. That is really fantastic, because that's the typical length scale that we might be able to approach in our first-principles calculations using ab initio molecular dynamics. And why do they need us, if you want to put it that way? Why do the experimentalists need to talk to us at all?
Well, spectroscopy traditionally is a fingerprinting technique: if you have a previous measurement and you know what that is, it's without ambiguity. If you measure it again and the spectra look the same, you've caught your suspect, as it were. However, in this particular case, there are no spectral standards for interfaces, or if there are, they are only emerging now. And there are many impacts or modulations that the interface may have on the spectroscopy that you measure. Certain peaks may shift around in energy, they may disappear completely, there may be new chemistry that you don't expect and don't know how to characterize. And there are all kinds of effects that a surface may have in influencing the spectroscopy too. It's not an innocent partner in this relationship: it can electronically interact with the species at the interface, there are strong electric fields, and there's a lot of dynamics as well. So with that in mind, some years ago now, back in 2014, we had done some work looking at kind of a fruit-fly example of a model interface, right? Water next to gold: there's really nothing going on there from an electrochemical point of view; it's a very simple system. And what you observe as you change the bias is a reorientation of the water molecules at the interface in this packed, densified layer. Now, what we were able to do is translate those observations from molecular dynamics simulations, which would have been known in the literature, into X-ray spectral signatures. And it was at this point that we began to realize that this technique was really sensitive to the very first one to maybe three molecular layers next to the metal surface. And one of the most important aspects of that: you can see here some of these excitations of water molecules that I've circled here, here, here, and maybe here. If they're quite close to the interface, they strongly interact electronically with the substrate, with the gold.
But if they are far away from the interface, they are somewhat isolated excitations. And so you can use that to understand why the spectroscopy might be different and why certain peaks may disappear at certain voltages as you rearrange the structure of the water and even rearrange its hydrogen bonding structure. I won't dwell on this in great detail. It's just a beginning, let's say, and it's a very simplified system, not a true electrolyte. Okay, so after that, we were very emboldened, let's say, and thought we could do anything. And we picked this high school example of platinum and sulfuric acid, right? The original Volta cell that you would have seen maybe in your high school chemistry class, where you produce hydrogen and oxygen bubbles. Now, this is an actual electrolyte: you have dissolved sulfuric acid. And we did the same experiments with our colleagues at the Advanced Light Source; Miquel Salmeron is the main collaborator here, and Jinghua Guo is the beamline scientist who can realize these experiments. And we saw some definite voltage dependence. On the right-hand side here, in the inset, you can see the cyclic voltammetry data; the numbering system and the colors correspond to the spectra that were measured. And in the larger inset figure, what you have is the ratio of the intensity of this first peak, indicated by the vertical line here at 535 eV, to the intensity of, let's say, a second peak that begins at about 537 eV. And you see this almost linear relationship between the relative intensity of those peaks and the potential, wherever you are on this cyclic voltammogram. So a key question is: what is the species, if there is one species, that contributes to this peak, and how would we identify and understand that? OK. In this case, this is a complicated system. We ran these large molecular dynamics simulations of the platinum surface next to water with dissolved species inside.
Earlier this week, we heard some really nice work from Axel Gross, which discussed the care that you need to take when you model the platinum surface, depending on the conditions, whether it is hydrogen enriched or not; those are all relevant details here as well. We were mainly focused on this oxidative region where we're at positive biases. So we imagine we're drawing anions to the interface, and then we're interested to see, OK, what would be the particular species that might contribute to this peak, what they look like, and so on. We tried to simulate as much complexity as we think should be present in the system. So the water, obviously; at the oxygen K-edge, most of the signal should come from the water. And then the various products of the deprotonation of the acid: the original acid, which very rarely exists; bisulfate, which has one proton still attached (by the way, these should be tetrahedra; they look almost planar here, but they're actually tetrahedra); and then sulfate itself, which is the minor product. So even though sulfuric acid is a strong acid, and it looks like it might be diprotic, that it might release two protons, actually most of the time it exists in this bisulfate form. You have to go to pretty extreme pH to get it to remove both protons. And then the hydronium ion is the other species left. OK, so when we look at all of these spectra and do the calculations for them, what we find when we do the molecular dynamics averaging is that the key contributor at this energy, 535 eV, is the sulfate anion. Now, that was surprising. It's the only candidate that really contributes here. But the bulk sulfate concentration, we know, should be low; bisulfate should be the one dominating the spectrum. And then maybe if there was specific adsorption of this dianion onto the surface, that might enhance its population, but we weren't sure if that would be sufficient.
One other side note: we don't see any oxidation of the surface that would produce a pre-edge peak at lower energy, and there's no evidence for that in the experiment. OK, those are the conditions. So this is the puzzle we were left with. And I'll take a brief aside just to talk about the challenges of simulating this kind of spectroscopy. And this really gets at the focus of this workshop, which is the challenge of simulating electrified interfaces, period. Typically, what we do in simulations is take equilibrium systems, try to put them in contact, and then develop the interface in that way. And it's really difficult to do that with electrolytes, because, no matter what has been said, you cannot really just fill a box with atoms and hope for the best. In an experiment, yes, you can do that. Nature is kind enough because it gives you 10 to the 23 or so individuals to work with, and you can deal with chemical potentials and so on; you have very large reservoirs. But in simulation, with a finite simulation cell and a finite number of species, you have to be very careful. Imagine a situation, let's say, where you steadily charge a surface more and more: you'll start drawing certain ions to the interface, in this case the negative ions. And as you get even more charge, you'll drag more of those species over, and you'll begin to run out of them in the bulk. Now, are you really modeling the same concentrations you had before? And what should you do to balance out that charge? And there are a lot of issues with screening as well: are you doing the screening correctly? So it's not something to walk into lightly or with naivete. You really need to think carefully about how you do these simulations. And so for the moment, what we had been doing was giving up a little bit and saying, OK, maybe we'll just simulate the isolated objects in the solution and not think too much about how they interact with each other.
But we're still missing the detail of how many of those objects are present and in what ratios. What's the thermodynamics, basically? And if we want to do real statistical mechanics, the whole basis of statistical mechanics, the development of partition functions, relies specifically on the ability to count the available states and to do the statistics on that. But if we don't know what we're counting, that can be troublesome. So this talk is not supposed to be depressing; it's mainly a set of warnings that you should consider carefully when you do these simulations, and that we can build upon to define new methods. I would say, for those of you out there who are still students or postdocs, there's plenty of room within this space to do more work and to continue to advance this field. It is by no means a set of solved problems; there's plenty of work to do. OK, so sometimes you hear people say, oh, we'll have bigger computers in the future and we'll be able to solve all these problems. And this is just an example to show you how difficult this problem is for electrified interfaces. Back in 2014, we worked really, really hard to do this pretty large simulation, but it's only 1 nanometer by 1 nanometer at the interface and then maybe 7 nanometers or so in length. That was a lot of effort, but let's imagine we could go 1,000 times larger, so 10 nanometers by 10 nanometers and maybe 10 times longer as well, and we could simulate for longer times. What would that look like? If you think back to the start of the week when Francesca Toma was speaking, she's my colleague from Berkeley, she mentioned a typical current density you might expect in an electrochemical experiment. So let's say 10 milliamps per centimeter squared; you often see numbers in that range. Let's convert those units into units we might understand better from a simulation point of view.
That works out to be 6 times 10 to the minus 5 electrons per nanosecond per 10 nanometer by 10 nanometer area. So through a surface that is 10 nanometers by 10 nanometers, every nanosecond you can expect only this fractional number of electrons to be transferred, on balance. OK, there may be processes occurring forward and backward, electrons going one way or ions the other, but the net result is this tiny number. And if we scale it up to actually get an integer: roughly six electrons will pass through a surface that is 100 nanometers by 100 nanometers every microsecond. Now, we can't do that simulation, never mind ab initio; even classically, with some kind of hybrid model, that's a huge challenge, right? And so waiting for the computers that might solve this problem is probably not worth the time; I'll be dead by then, so it won't matter to me. So really what we need is this concept of constraining our simulations, to try to model the details around the events that we care about. Charge transfer events are extremely rare, and there's very likely a lot of equilibration that can happen before and after. I'm not really talking about when the current is running, which will create its own steady state, but even just before that point, when you're still in the capacitive regime and beginning to initiate charge transfer, you could maybe model the system as a quasi-equilibrium. And so if you focus on the thermodynamic state of the interface and try to isolate the different charge transfer events, maybe that will work; more on that later. But definitely, to get there, as simulators and theorists we should take all the insight we can get from experiment and from models and so on, because we need it. Okay, so let's talk first a little bit about models, and continuum models in particular.
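As a sanity check, the unit conversion above can be reproduced in a few lines; this is a minimal back-of-envelope sketch of the numbers quoted in the talk, not part of any simulation code.

```python
# Back-of-envelope check of the current-density conversion from the talk:
# 10 mA/cm^2 expressed as electrons per nanosecond through nanometer patches.
E = 1.602176634e-19  # elementary charge, C

j = 10e-3            # 10 mA/cm^2, in A/cm^2
nm2_per_cm2 = 1e14   # 1 cm^2 = 1e14 nm^2

# electrons per nanosecond through a 10 nm x 10 nm patch (100 nm^2)
electrons_per_ns_per_100nm2 = j / nm2_per_cm2 * 100 / E * 1e-9
print(f"{electrons_per_ns_per_100nm2:.1e} e-/ns per (10 nm)^2")  # ~6e-5

# electrons per microsecond through a 100 nm x 100 nm patch (10^4 nm^2)
electrons_per_us_per_patch = j / nm2_per_cm2 * 1e4 / E * 1e-6
print(f"{electrons_per_us_per_patch:.1f} e-/us per (100 nm)^2")  # ~6.2
```

Both numbers match the talk: a fractional electron per nanosecond even at the larger patch size, which is why brute-force scaling of the simulation cell does not solve the rare-event problem.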
So we did hear a little bit about this earlier in the week, and this picture appears quite a lot, from this review in Nature Materials by Nenad Markovic. So the theory of electrified interfaces is pretty old. It goes back to the middle of the 19th century, to Helmholtz, and it has evolved every 60 years or so. We're at a point now where, if you do things the right way, you realize that yes, your concentration should not blow up at the interface, because the ions have a finite size. They can only pack together at certain densities; there's a limit there. And we also know that the dielectric screening near the interface is reduced, because you have fewer species there to do that work. And then there may be details of specific adsorption that you have to care about too. And so what we had done recently was to try to fold all of these details together into one model. We're using basically classical density functional theory here. So you can define a free energy functional, Omega here, which is a functional of the potential and also of the densities of species. So rho plus or rho minus would be the density of cations or anions. This is a simplified two-species model, cations and anions embedded in a medium defined by the dielectric, epsilon, which is spatially dependent. And you end up solving a set of self-consistent equations based on the boundary conditions, which are the concentrations of the different species out in the bulk solution, and maybe some details you might know about the interface in terms of its ability to specifically adsorb certain species or not. These corrections here are basically entropy terms. They have this a-cubed-rho log a-cubed-rho form that you might remember from statistical mechanics. Those terms are related to the finite size of the objects, so they place limits on the concentration as you get close to the interface. And then there's also a correction we've derived for when you have a size disparity.
So your cation and your anion may not always be the same size, and so a0 here refers to the size of one of the species, the cation, and b0 to the other, the anion. Okay, so with this free energy you can then minimize the functional to find the optimum value, or the stationary point, with respect to the potential and with respect to the density profiles of the species you care about. We only solve this in one dimension, and we end up with this modified Poisson-Boltzmann equation, which looks very similar to the normal relationship you expect between the potential and the density, except now it has the dielectric function in there as well. And what you get when you do the calculations are these kinds of profiles of ions as a function of distance from the interface. So there'll be a screening region and then a bulk region. Okay. By the way, VASPsol was mentioned earlier in the week too. The postdoc of mine who worked on this, Artem Baskin, is now at NASA, and he's also been collaborating very recently with Richard Hennig, who implemented VASPsol inside the VASP code, to also include these details we just described. So in the near future you should expect to see some improvements in the kinds of systems you can model with VASPsol and in the range of potentials that you can physically model as well; higher potentials would be possible. Okay, but ultimately what we'd like to do is go back and forth between this continuum picture and a more atomistic picture, which I've just drawn as a cartoon here. If we know the concentration profile, can we discretize that and say, okay, I'll begin my simulation with a certain excess of ions in one region and the bulk concentration far away, with the aim of modeling the electrified interface as close to an equilibrium population as we can get, so that we don't run into this situation where we're stealing species from one part of the simulation to meet the requirements of the other.
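To illustrate how the finite-size entropy terms cap the interfacial concentration, here is a minimal sketch using the simpler symmetric Bikerman-style (lattice-gas) closed form rather than the asymmetric two-size model from the talk; the ion diameter and bulk concentration are illustrative assumptions.

```python
import math

# Bikerman-style steric correction for a symmetric 1:1 electrolyte:
#   rho(phi) = rho0 * exp(phi) / (1 + 2*nu*(cosh(phi) - 1)),
# where phi = e*psi/kT (dimensionless potential, counterion-attracting sign)
# and nu = a^3 * rho0 is the bulk packing fraction. Plain Boltzmann
# statistics would instead blow up as exp(phi).
def counterion_density(phi, rho0_M=0.1, a_nm=0.3):
    a_m = a_nm * 1e-9
    rho0 = rho0_M * 1000 * 6.022e23       # ions per m^3
    nu = a_m**3 * rho0                    # bulk packing fraction
    return rho0_M * math.exp(phi) / (1 + 2 * nu * (math.cosh(phi) - 1))

# The steric model saturates near the close-packing limit ~1/a^3
# (about 61 M for a 0.3 nm ion) instead of diverging.
for phi in (2, 5, 10, 20):
    print(phi, counterion_density(phi))
```

At large phi the exponentials cancel and the density plateaus at roughly 1/(a^3 N_A), which is exactly the "concentration should not blow up at the interface" statement from the talk.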
And so that's the idea in principle. It's definitely not yet implemented, although we have begun that work, but that's our goal: an integrated framework for interface modeling. And again, I would say for those who want to model electrified interfaces explicitly at the atomistic level, whether you do classical MD or ab initio MD, this is one way that might be consistent with the kind of grand canonical picture that we also heard about from Axel Groß the other day. Okay, so back to the spectroscopy. I had mentioned that the sulfate contribution seemed to be dominating the peaks that were observed in experiment and defining this potential dependence. And so what we did was build one of these continuum models to describe the system, where we didn't just rely on details of specific adsorption, because that wasn't sufficient. We also coupled together three species: the bisulfate population, the sulfate population, and the one thing that connects the two of them, the hydronium population, or the protons in the solution. So the pH. And the concept here is that as you approach the interface, if you have different populations of these species, you will basically be creating a local pH at the interface that is different from the bulk. And that should be calculated self-consistently, now assuming that the equilibrium constants for proton addition and removal are the same as they are in the bulk. I don't know that for sure, but we've assumed it. Then you can come up with self-consistent charge distributions that deal with the detailed balance of the sharing of protons among these species. So if you don't include equilibration, as you go from negative surface charge through zero to positive and even more positive, you find that, yes, there is some sulfate at the interface, in blue here, but there is always bisulfate.
And it's only when you include this equilibration with the proton population, so allowing the pH to vary as a function of space, that you find sulfate beginning to dominate the species at the interface. It's not everything, but you're definitely displacing or removing bisulfate from the interface. And the free energy is lowered by enhancing the number of highly charged dianions at the interface. So that was an interesting observation, and it seemed to make sense when we thought about it more, and this is how we then explained why the experiment was seeing only the sulfate as the dominant component at the interface. So that's the detail here. It's not just a matter of providing the isolated species and their spectral signatures, which you cannot do in an experiment anyway, but also this inclusion of some physical modeling of what we expect the situation to be as all of these species interact and exchange between the different populations; local pH in this case. Okay, so there are some useful constraints that you can think about when you design molecular dynamics simulations of interfaces. What we just talked about here was building up the complexity by considering isolated species in solution. You should then learn about the free energy landscape, and we're going to do that very shortly, by exploring different collective variables: coordination number, distance between species, and so on. And then you want to slowly introduce the interface, and maybe provide initial density profiles, or guesses of what the populations might be at different distances from the interface, hopefully done self-consistently. At that point you're still in the capacitive regime; we haven't done any charge transfer yet. And then we have to think about charge transfer events, what they look like, and maybe even some electronic constraints to prevent them. So that's coming, and we'll get to those details as we move on.
Okay, so let's talk now about the solvated species that might exist inside our electrolyte, and we'll focus on multivalent ions here, like magnesium. So you might have seen this cartoon already. This is the rocking chair model, so called by Michel Armand, for the lithium ion battery, where you're basically moving lithium ions from one intercalation compound to another as you connect the external circuit and allow electrons to flow, or push them backwards with an external power source. So at the anode you're creating solvated ions; you're lowering the Fermi level of the system to do that. And then over on the cathode side, cathodes are typically not electronically conducting in the traditional, ballistic sense. Instead what you see is changes in the oxidation states of the transition metal atoms in this framework, and hopping of electrons through those transition metals out to meet the ions and neutralize them as they come in. Okay, there's a wide range of disadvantages of lithium ion batteries. I could just show a picture of a car on fire here, but that's a little bit crass. The main thing is that they have low capacities; that's limited by the cathode material. Their voltage is probably below optimal; that's limited by the anode, and there are some advances being made there by using lithium metal as the anode. The rate of discharge is limited because of safety concerns: if you discharge too quickly, you might start burning the electrolyte. There's some instability, and then the famous dendrite formation, where lithium fingers extend from the left-hand side to the right, short the cell, and cause fires. So some beyond-lithium-ion proposals involve chemical transformations: you start with a material like sulfur and convert it to lithium sulfide. We'll talk about that at the end of this talk. Others include different metals, maybe more earth-abundant metals like magnesium, but also metals that have stable passivation on their surfaces.
So they don't form dendrites and don't lead to this instability issue. So the first working magnesium cell was developed and published in the year 2000 by Doron Aurbach. And it had a complicated electrolyte, and this was the first scary thing when I started looking at this system. You had various different species that might be present and were proposed to be active: these contact ion pairs and triples of magnesium with chloride, and then other counter-charged species that were even more complicated, with some organics, and then you have some solvent. But we started thinking about this and said, let's focus on the active species. And already there was some controversy, with different experiments seeing different coordination of magnesium in the electrolyte. Some were done by freezing the electrolyte after the fact to make a solid crystal and then analyzing that, which is obviously not the same thing, and other efforts were done on real liquid samples. And so you can do simplistic cluster models in vacuum and begin to explore the binding energies of these objects, but you realize very quickly, for multivalent ions in solution, particularly when the solvent is not water but some aprotic, organic solvent, in this case THF, that the binding energies of these species are huge. The typical bonds, because they're strong ionic bonds between highly charged objects, are in the multiple-eV range. And even the polar oxygens, or whichever group might be the polar group of your organic solvent, form quite strong bonds as well, much bigger than hydrogen bonds, okay. So we did the thing I said you shouldn't do: we just threw stuff into a box and modeled it initially. This was back in 2014 as well.
And one thing we noticed was that the intuition you might have from these cluster calculations, where you start with something that looks like a six-fold coordination, magnesium here, two chlorines, and one, two, three, four surrounding molecules of solvent, ended up "equilibrating" to some structure that had reduced, four-fold coordination, with the two chlorines on the same side, amazingly. Electrostatically, why would they want to do that? They repel each other. And then only two solvent molecules are involved. So already, even though we trusted the simulation, it was presenting us with some puzzles, and we wanted to understand what is actually going on here. We also did some metadynamics; I'll talk in detail about metadynamics a little later. One of the proposed species in the solution was this dimer. So we can form dimers, that's another complication, and there are three chloride ions involved. In some models there are only two, equally shared; in other models, three. What we found was that that species, when you first form it, is only short-lived. And eventually, if you break it apart, two chlorines go with one magnesium, one with the other; okay, that's maybe to be expected. But there's this intermediate, with one bridging chlorine and two kind of dangling ones, that seems to sit in a deep free energy minimum. Okay, so it's getting more complicated, and we need a bit more in the way of heuristics to understand how these electrolytes behave. Okay, so here are the key challenges that I'd like you to remember when you try to simulate highly charged objects in solution. I'm focusing here on what I call multivalent electrolytes. It doesn't have to be the battery materials I'm talking about. It could be lanthanides, right? They're mostly three plus. Or it could be things like titanium, which can go to four plus in some cases, or platinum; it really doesn't matter.
If it has a high charge, and if your solvent is not a great solvent, like an organic species rather than water, then these are all things you should worry about. So you have very strong Coulomb interactions and weak dielectric screening. There may be quite deep local free energy minima that you can get trapped in as you do your molecular dynamics, and so your initial conditions can lead to some irreversible outcomes. You basically will only explore a very narrow region of the free energy landscape. And in some horrible cases, you may just get precipitation, for multiple reasons, but one is that you have a finite population of charged species, so they'll try to find each other. And as they do, you will be removing the mechanism by which they screen each other. Debye screening comes from the free charges in the solution, and if they start pairing up, and you only have a few of them in your box, then every time they pair up you lose two free charges. So your Debye screening goes down, and now they see each other even more, and it cascades. And you basically end up with these precipitates in your solution. Whether that's real or not, you have to question, based on your knowledge of the physics of how they might stay apart in the first place through screening. Okay. So, all right, imagine this situation. You run the simulation, you can calculate something like a g(r) and coordination numbers, and then you begin to explore the free energy landscape. And there's this problem I mentioned of being trapped in a local minimum, but after a long, long time, you might come over a barrier and see some other species, okay. So, we looked at magnesium on its own in the THF solvent, and we found that if you began a simulation in five-fold coordination, it was long-lived enough that you could get very good statistics on its g(r), its radial distribution function.
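The pairing cascade described above can be made quantitative with the standard Debye length formula; this is a toy illustration, with the dielectric constant of THF (~7.4), a divalent salt, and an illustrative starting concentration, not numbers from the actual simulations.

```python
import math

# Debye length of a z:z electrolyte: lambda_D = sqrt(eps*kT / (2*rho*z^2*e^2)),
# where rho is the number density of FREE ions of each sign.
def debye_length_nm(c_free_M, z=2, eps_r=7.4, T=300.0):
    e = 1.602e-19; kB = 1.381e-23; eps0 = 8.854e-12; NA = 6.022e23
    rho = c_free_M * 1000 * NA  # free ions per m^3 (per sign)
    lam = math.sqrt(eps0 * eps_r * kB * T / (2 * rho * z**2 * e**2))
    return lam * 1e9

# As ion pairs form, the free-carrier concentration drops, so the
# screening length grows: the remaining ions see each other better,
# pair up faster, and the effect cascades toward precipitation.
for frac_paired in (0.0, 0.5, 0.9, 0.99):
    c_free = 0.25 * (1 - frac_paired)
    print(frac_paired, round(debye_length_nm(c_free), 3))
```

Since lambda_D scales as 1/sqrt(c_free), removing 99 percent of the free carriers lengthens the screening distance tenfold, which is the runaway mechanism the talk warns about in small simulation boxes.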
So the coordination between magnesium and oxygen and the associated coordination number: it's very clearly five, no deviation. But then if you begin the same simulation starting in coordination number six, so a more packed region around the magnesium, you can also collect very stable statistics for the g(r) and the coordination number. So you may have done a "good job" by running a long enough simulation, but that doesn't necessarily mean you have the answer, because clearly it depends on the starting point. So what we have been doing ever since we noticed these outcomes is free energy sampling. The idea here is that you think about some kind of collective variable that you care about, and you want to basically integrate out all the other degrees of freedom, in equilibrium with the bath, at the finite temperature and maybe the concentration that you're exploring. So you end up with a free energy function which is a projection onto the one or two variables that you care about, with all the other variables integrated out. There are different ways to explore this kind of simplification of the complex, many-body free energy. One of them is called umbrella sampling, or you'll also see people call it blue moon sampling, where you add a spring, a Hooke's-law-type spring, into your Hamiltonian that basically favors certain values of a given coordinate, x0, with this harmonic potential. So it biases the simulation to push you towards those favored values, and then you can tune x0 to get something like a profile like this. There's also metadynamics, which is an ingenious approach developed by Michele Parrinello and Alessandro Laio, where you leave behind kind of little breadcrumbs to remember where you have been, which prevent you from revisiting that region of free energy space again and again.
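The harmonic-bias idea can be sketched in a few lines on a toy one-dimensional double well; this is a minimal Metropolis Monte Carlo illustration of a single umbrella window, with made-up parameters, not the actual sampling machinery used in the talk's simulations.

```python
import math, random

random.seed(0)
kT = 1.0
# Model double-well "free energy" with minima at x = +/-1 and a ~10 kT barrier
F = lambda x: (x**2 - 1.0)**2 / 0.1

def sample_window(x0, k_spring=100.0, nsteps=200_000):
    """Metropolis MC on F(x) + (k/2)(x - x0)^2; returns visited positions."""
    bias = lambda x: 0.5 * k_spring * (x - x0)**2
    x, traj = x0, []
    for _ in range(nsteps):
        xn = x + random.uniform(-0.1, 0.1)
        dE = (F(xn) + bias(xn)) - (F(x) + bias(x))
        if dE < 0 or random.random() < math.exp(-dE / kT):
            x = xn
        traj.append(x)
    return traj

# The spring pins the walker near x0 = 0, on top of the barrier, a region
# an unbiased run started in either well would almost never visit.
traj = sample_window(0.0)
mean_x = sum(traj) / len(traj)
print(round(mean_x, 2))
```

Scanning x0 across the coordinate and reweighting each window's histogram (e.g. with WHAM) then stitches together the full free energy profile, which is the "tune x0" step mentioned above.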
And so that helps you very much to climb out of local minima in the free energy landscape and explore new minima. We've made a lot of use of these two techniques, and particularly metadynamics, for long simulations using classical molecular dynamics. Okay, so I talked about this already; this is the puzzle that we're trying to solve. So, okay, let's start at the top left here. We imagine the situation where we have, this is the free energy difference of association between magnesium and the oxygen of the THF molecule, as a function of that bond length. And as we separate one THF from the magnesium, this is what should happen: you should return to zero free energy difference, because as you remove one molecule, it will be replaced by another from the solution. So the net difference should be zero as you go to long distances. That's a good check that we're doing the dynamics in the right way. There's quite a high barrier to removing this object. This is in units of kT at room temperature, so 10 times kT to pull off one molecule. Unlike water, these organic solvents are quite strongly bound to the species they connect to. We'll talk a little bit about entropy later; this has big consequences for that reason. Okay, then we do a similar free energy analysis, but this time the coordinate that we're exploring is the coordination number, the number of molecules bound to the magnesium. You can define that as a continuous variable and then scan it in the free energy. And what you find here is that there are multiple minima of the free energy difference with respect to coordination number. So that's interesting. That says that in the simulation you might expect to see a mixed population of both species. Before, we had said that if you began with six-fold coordination, you stayed in six-fold; if you began with five-fold, you stayed in five-fold. Now that we examine the free energy, we say, oh, it makes sense.
They are both local minima in the overall free energy landscape. And so, yes, you should expect a mixed population of both, with their relative populations defined by the Boltzmann factor based on this free energy difference here, which in the classical simulations was about seven kT. When we repeated this analysis with ab initio MD, we saw that the picture was even more complicated. We found essentially three local minima, and surprisingly, the five-fold and six-fold coordinations (the scale here is not an exact match to the integer number of molecules, but you get the idea when you look at these pictures) are equivalent in free energy. That's really surprising, because, as I said already, these individual bonds between the magnesium and the oxygens of the solvent account for a lot of energy, a lot of enthalpy. And so if you were to just use clusters and compare six-fold and five-fold coordination, the difference would be on the order of 40 times kT, even though here we see the difference is zero. There's a barrier, but they're equivalent in free energy, which means there's a 50-50 population of these objects in solution. Okay, so that's interesting and very surprising. It must imply that one of these structures is more dynamic than the other, because how otherwise could you have free energies that are equivalent if the enthalpies differ by such a huge amount? Okay, magnesium isn't unique in this sense. We've looked at other species. Some, like lithium in acetonitrile or calcium in THF, are dominated by a single free energy minimum, so you have truly one solvated species. But many others have multiple free energy minima with respect to coordination number, and sometimes even plateaus, meaning that there's a fluid exchange between different coordinations in solution.
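The Boltzmann-factor argument above is easy to make concrete: a 7 kT gap gives an essentially one-sided population, while degenerate minima give a 50-50 mixture. A minimal two-state sketch:

```python
import math

def populations(dF_over_kT):
    """Relative populations of two states separated by dF (in units of kT)."""
    w = math.exp(-dF_over_kT)
    return 1 / (1 + w), w / (1 + w)

# Classical MD estimate: ~7 kT between five- and six-fold coordination,
# so the higher state carries less than 0.1% of the population.
print(populations(7.0))
# Ab initio estimate: degenerate minima, hence a 50-50 mixture.
print(populations(0.0))
```

This is why the classical and ab initio pictures disagree so dramatically about what is actually floating around in the electrolyte, even though both show two minima.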
You can think about this being related to the charges of the molecules and their actual sizes as well, and somehow how easily the solvent molecules can pack around these different species could define their overall free energy landscape. One important thing to keep in mind, if we admit that there are multiple free energy minima: there is a way you can relate the redox potential to the solvation free energy. The overall free energy of the conversion of, let's say, metal ions from their bulk form as solid metals can read like this. Let's say you vaporize the solid, that's what this term is, and you ionize the atoms in the gas, and then you solvate them, and then this term here reflects the position of the potential, basically. And the only term here that really depends on the solvent is the middle one, the solvation free energy. The other ones are kind of isolated, either in the bulk or in vacuum. And so if those are constants, and you can change the properties of the electrolyte, change the solvent, change the concentration and so on, then the redox potential really depends on the solvation environment to a certain degree. You don't often see it stated that way. You know, when you look up tables of redox potentials for different species, they're almost always in water, and the standard assumption is that that is the redox potential of that species; but actually it should be a function of which solvent is used. Okay, that's the first thing to keep in mind. And then what this implies is that weakly solvated species will have a higher redox potential: it will be easier to reduce them. Okay, so we've been calling this existence of multiple species in solution the solvation spectrum.
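The thermodynamic cycle above implies that a change in solvation free energy maps directly onto a redox potential shift via dE = dG_solv / (z F). A minimal sketch with illustrative placeholder numbers, not measured values:

```python
# Thermodynamic cycle from the talk: M(s) -> M(g) -> M^z+(g) -> M^z+(solv).
# Vaporization and ionization terms are solvent-independent, so only the
# solvation free energy moves the redox potential between electrolytes.
F_CONST = 96485.0  # Faraday constant, C/mol

def redox_shift_V(dG_solv_strong_kJ, dG_solv_weak_kJ, z=2):
    """Shift in redox potential between two solvation environments.
    Weaker solvation (less negative dG_solv) makes reduction easier,
    i.e. moves the redox potential up."""
    return (dG_solv_weak_kJ - dG_solv_strong_kJ) * 1000 / (z * F_CONST)

# An illustrative 50 kJ/mol difference in solvation free energy for a
# divalent ion shifts the redox potential by about a quarter volt.
print(round(redox_shift_V(-1900.0, -1850.0), 3))
```

Even modest differences between coordination environments therefore spread the members of the "solvation spectrum" out over electrochemically significant potential ranges.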
And if you were to assume there was only one free energy minimum, then most of your previous understanding of how electrochemistry behaves makes sense: you can define a well-defined redox potential based on the solvation energy of that object. However, if we now think about multiple coordination environments being stable and having their own populations, you realize that they have ordered free energies of solvation, which means they have ordered redox potentials. And so what could happen is that the least stable of these species, which you don't see in very high population, might be the one that first gets reduced. And if there is enough time, depending on how fast you're scanning your voltage, that population will be replenished from the bulk. And so the active species might not even be visible in the bulk, because it's such a small population, but it is the one that gets reduced first, and therefore your current is flowing through the reduction of that species. That's an interesting thing to think about. You could be deceived looking at the bulk electrolyte and think that you know what the dominant species are, but actually the work might be done by something else, some minor species that gets reduced first. Okay, I mentioned entropy a little earlier. One way you can think about modifying your electrolyte population would be by changing the temperature, because if certain free energy minima have, let's say, a lower enthalpy of binding, but they are on an equivalent scale of free energy to the higher-enthalpy-of-binding species, then they must have quite a large entropy component in their free energy. And therefore, if you change the temperature, the slope of this free energy with temperature, defined by delta S, means that by varying T you can alter the free energy significantly and therefore switch from one dominant population to another by increasing the temperature.
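The temperature-switching argument is just F = H - TS for two competing states; here is a minimal sketch with illustrative numbers chosen so the crossover falls near room temperature (not fitted to any real electrolyte).

```python
# Two coordination states: one enthalpy-stabilized, one entropy-stabilized.
# F = H - T*S, so raising T tilts the balance toward the high-entropy state.
def dominant_state(T, dH=30e3, dS=100.0):
    """dH (J/mol) and dS (J/mol/K) of the entropy-rich state relative to
    the enthalpy-rich one; returns whichever has lower free energy at T."""
    dF = dH - T * dS  # free energy of the entropy-rich state, relative
    return "entropy-stabilized" if dF < 0 else "enthalpy-stabilized"

print(dominant_state(250.0))  # enthalpy-stabilized
print(dominant_state(350.0))  # entropy-stabilized
```

With these placeholder values the crossover sits at T = dH/dS = 300 K, so a modest temperature change flips which solvation structure dominates the population.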
The more entropy-stabilized objects will become more stable at higher temperatures. And so that's something we still have to see verified experimentally, but it's an exciting way to think about why certain species in the solution might be evident at different temperatures, and how that might affect the electrochemistry as well. Okay, all of that was about the bulk phase. Now let's get to the interface. So this is an example where we explore the free energy of different species as a function of distance from an inert electrode, in this case graphene. What we find is that there are some surprises here. The anions in this particular case, again because it's an organic solvent, which really only coordinates objects that are positive, the anions are very unhappy in solution. They're really only there because the cations are there. And so if they can approach an interface, they will, and they tend to populate the interface in excess. This isn't specific adsorption to the interface; it's just what we call a solvophobic effect, that the anions don't want to be in the solution at all. And one effect of this is that the species that arrive at the interface are often paired with anions, whether you think they should be or not. So this is a two-dimensional free energy landscape, where the heat map shows the free energy, the scale here on the right. And we've got the coordination number of magnesium with the solvent, and distance from the electrode on the vertical axis. So each of these narrow minima, or wells, within this free energy landscape represents a distinct, countable species. So let's say if I have four-fold coordination with the solvent, that means in this case there's either one or two points of contact with an anion. For six-fold coordination there's no anion; you see that one is really shallow, so it's very unlikely that you get those species. And there's a lot of contact ion pairing and tripling and so on.
I've highlighted the dominant two configurations here, which are magnesium with only two solvent molecules, connected to two anions. That's at the bulk, sorry, at the interface, when there's no charge. Now if we electrify the interface, make it negative, what you find is that there's a shift, if I go back and forth here. Coordination numbers two and three were stable without bias, and now, with the charge on the interface, we go to maybe four-fold and possibly some three-fold coordination. And it's amazing: you would expect the positively charged objects to be drawn more closely to the negative electrode, but actually the dominant species at this interface are neutral, and maybe sometimes even negative, in addition to positive species in the form of contact ion pairs. Not the bare ion. There isn't enough energy in the charging of the system to break apart the solvation environment around the fully coordinated cation with just the solvent, and so that remains a minority species. So that's pretty surprising. This just came out in JPCL in the last month or so. And it begins to explain a lot about what the dominant species might be at the interface. It's not just the bare ions, even though you might think so, simplistically. And this goes back to the continuum models, right? We needed to enumerate what the species were in those continuum models. Now, with this insight, we could maybe think about that differently. There are interactions with the solvent itself, the anions are seen to be driven to the interface, and then there are lots of complexes formed, contact ion pairs. It is challenging to think about all that. But one thing you could draw from this is that the anions themselves are assisting in bringing the cations closer to the electrode, even when the electrode is negatively charged. That was the biggest surprise: bare-ion solvation, just the solvent, is too rigid to bring those objects close enough that they might get reduced.
And so it could be that the anions that you have in your solution, in your electrolyte, might be very important in terms of reducing the overpotential, meaning the excess potential you have to supply, above the thermodynamic potential, to begin to reduce metal ions. And so in this case it might be the solvent that's the problem, even though most electrolyte development focuses on the ions in solution. Okay, I'm doing okay on time. All right, this last part is short, so hopefully I won't run over. I want to talk a little bit about the existence of insulating layers between the electrode and the electrolyte. Okay, so the context of this is lithium-sulfur. Long story short: lithium ion batteries are great for cars, maybe, but not so good for heavy-duty vehicles, because the weight of the battery inside something like a truck begins to dominate the weight of the truck. And so you really need a high-capacity, high-energy-density solution to that problem. Obviously gasoline is the one we have right now, but the lithium-sulfur battery could supply it, because it has about a factor of six or seven more capacity than a typical lithium ion battery used in electric vehicles right now. Its lightness as well, since sulfur is a light element, could make it useful for aviation. Lithium-sulfur is a complicated system. You're converting sulfur to lithium sulfide; there are 16 electrons and 16 ions involved. Obviously that's not a single-step reaction, and it involves the breaking of the sulfur ring in solid sulfur as it becomes reduced, forming what are called polysulfides. They have the formula Li2Sx, where x could be anything from eight down to one, and you can also form radicals with only one lithium. So, also complicated chemistry. And the sad news is that these polysulfides dissolve in the electrolyte.
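The factor of six to seven in capacity quoted above follows from the 16-electron conversion of the S8 ring; a quick sketch of the standard theoretical-capacity formula:

```python
# Theoretical gravimetric capacity of a conversion cathode: Q = n*F/(3.6*M),
# with n electrons per formula unit of molar mass M (g/mol);
# the 3.6 converts C/g to mAh/g.
F_CONST = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(n_electrons, molar_mass):
    return n_electrons * F_CONST / (3.6 * molar_mass)

# S8 + 16 Li -> 8 Li2S: 16 electrons per S8 ring (M = 8 * 32.06 g/mol)
print(round(capacity_mAh_per_g(16, 8 * 32.06)))  # ~1672 mAh/g
```

Against typical intercalation cathodes at roughly 150 to 200 mAh/g, sulfur's ~1672 mAh/g is where the factor of six to seven comes from.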
And so you begin to lose active material over time as the cell cycles, and that leads to problems with decomposition, and the cells fail over time. That's why there's not a huge amount of commercial lithium-sulfur out there. One other thing is that there are big density differences between the different species. So as you move lithium back and forth, typically with a lithium-metal anode, there would be gaps in your cell if you work out the densities. And so really what you'd need is some kind of spring-loaded cell to keep all of these objects in electrical contact so that they can still function. And that's done in practice: the test cells that you see often have a spring inside them to keep the materials together, or they're screwed together with some finite pressure to keep everything in electrical contact. Okay, now from an electronic-structure point of view, we are interested in learning what these initial charge-transfer events look like. And so again, the naive approach: throw some things in a box and see what happens, right? So we have graphene as a model of our current collector, the active sulfur sitting on top of that in a thin layer (we'll get to thicker layers later), and then we imagine this thought experiment of bringing a lithium cation from solution down to the surface. And what we notice is that the initial charge that was localized on the graphene, maybe with some hybridization with the sulfur, ends up moving to the region near the lithium ion. And eventually the local sulfur molecule here is what gets reduced: it picks up this extra electron that was initially on the graphene and sits right beside the lithium. Okay, looks fine. One question you might ask is, how conductive is sulfur? And it's not at all electronically conductive; it's an insulator. So as you get thicker and thicker sulfur layers, would this process behave differently, and how would we model that? Okay.
One thing, you can do some tests to look at this: if you add electronic charge to sulfur, yes, you will elongate one bond. There's a metastable structure where the entire molecule expands isotropically, but the stable state is where you form a localized charge region by stretching one bond, with the electron localized there. So this is effectively a small polaron, and that object is what moves throughout the system, from bond to bond and from molecule to molecule. You can explore that energy landscape using nudged elastic band calculations, and there's already some work published on this in the solid state. The charge transfer can be adiabatic or non-adiabatic, depending on how close the energy levels are in the charged state and the ground state of the molecule; there's some nice work done by Don Siegel on that topic in this paper. But what we were noticing as we increased the amount of sulfur is that we began to see something that didn't seem physical: an instantaneous charge transfer of the excess electron that we've added to the system. Stepwise, what we think should happen is this: you charge your electrode; since you now have a dielectric medium before you get to the electrolyte, which I haven't shown here, there will be a polarization induced in the dielectric that will screen the electric field, but the electric field will still penetrate through. And then what you'll find is that the electronic structure of the dielectric and the underlying metal will start to offset as it follows this local potential.
So in terms of energy levels: if I think of the electrode having its Fermi level here, and I want to raise the population of electrons by adding more electrons to the system, raising the Fermi level, then the positions of, for want of a better term, the band edges, or the HOMO and LUMO of the sulfur molecules, will start to shift downwards as this average self-consistent field, or local potential, responds to the presence of the excess charge on the electrode. Okay, so it's just simple electrostatics. But at some point, if the charge on the electrode is sufficiently large, the LUMO of this molecule will dip below the Fermi level and spontaneously accept that electron. And now the self-consistent field will rebalance itself, because now you have a different distribution of your charge. Okay, so it gets worse. Obviously this potential and its slope continue on forever, right? And so if you have an increasing thickness of dielectric, then even if the immediate molecules next to the electrode are not getting reduced, there will be another one at a lower potential, and then another at an even lower potential. Eventually, if the layer gets thick enough, you'll find one that will accept this electron. And it gets worse again if an ion arrives from the electrolyte, because now that ion is amplifying the electric field; it creates a capacitance. And then once these electrons arrive on the sulfur molecules, as we've already said, you'll start to break bonds, and so some chemistry will start to happen. And now you're going down a kind of irreversible path that you can't come back from, and you may begin to see some chemistry that is unphysical, because what you know from everyday experience is that if you put an insulator next to a metal, you are protected from it, no matter how thick the insulator is.
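The electrostatics above can be caricatured in a few lines: with a common Fermi level, a molecule's LUMO is dragged down linearly with distance by the (screened) field of the charged electrode, so there is always some thickness at which a LUMO crosses the Fermi level. All numbers here are illustrative assumptions, not values from the talk:

```python
# Toy model of why thicker dielectrics make the artifact worse.
E_field = 0.3      # bare field of the charged electrode, V/Angstrom (assumed)
eps_r = 3.5        # dielectric constant of the sulfur layer (assumed)
gap = 1.2          # LUMO offset above the electrode Fermi level, eV (assumed)

def lumo_vs_fermi(z):
    """LUMO position relative to E_F for a molecule at distance z (Angstrom)."""
    return gap - E_field * z / eps_r

# Critical thickness where a molecule's LUMO dips below E_F and ground-state
# DFT dumps the excess electron onto it "instantaneously"
z_crit = gap * eps_r / E_field
print(f"LUMO crosses E_F at z = {z_crit:.1f} Angstrom")
for z in (5, 10, 15, 20):
    print(f"z = {z:2d} A: LUMO - E_F = {lumo_vs_fermi(z):+.2f} eV")
```

The point is only the scaling: however small the field, a thick enough film always contains a molecule past `z_crit`, which is the unphysical teleportation discussed next.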
So why would it be the case that these simulations seem to tell us that at some thickness, electrons will teleport across this gap and arrive on the outer surface of the insulator, or the dielectric? The issue is that we have a common Fermi level in the system, right? That's the problem: we have assumed that the electrons are at equilibrium. And one way to counteract that is to use what's called constrained DFT. You use your physical intuition to say, well, I know that my electrons should be kept localized in some region of the system, and so I'll define a constraint that keeps them there. Then I can solve my Kohn-Sham equations with density functional theory, or whichever theory you're using, with this constraint; it just adds a correction term to the potential. The result is that if you just do regular DFT, you'll find that there's a kind of Fermi-level pinning, right? Wherever the Fermi level is in the metal, you'll see that the surrounding dielectric will pin itself to that level. Instead, if we use this constrained DFT, you see what I've labeled here as non-aufbau occupations. The aufbau principle says you should fill states, whatever states they are, in order of increasing energy, up to the point where you run out of electrons. But in this case, you can see there are some empty states here below some states at higher energy that are occupied, and that's enabled by using this constraint that shifts the local potential in different regions. Okay. All right, so in principle, that should work; that should prevent this instantaneous charge jumping out to the edge of our dielectric. And we've tested this to see what it is relevant for. One way to do that is to start the system constrained, almost like a race where you have horses kept behind a certain line and then release them, and then to do electron dynamics. In that case, we do real-time TDDFT, time-dependent density functional theory.
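The constraint idea, a correction potential tuned until the charge stays where you say it should, can be sketched on a two-site toy model. This is a cartoon of the Lagrange-multiplier logic, not an actual Kohn-Sham calculation, and all parameters are assumed:

```python
import math

# Two-site cartoon of constrained DFT: add a constraint potential Vc to the
# "dielectric" site and tune it until the electron occupation stays on the
# "electrode" site, even though the bare ground state would put it elsewhere.

def ground_state_occ(eps_elec, eps_diel, t, Vc):
    """Occupation of the electrode site in the 2x2 ground state, with a
    constraint potential Vc added to the dielectric site."""
    # Hamiltonian [[eps_elec, t], [t, eps_diel + Vc]]
    a, b = eps_elec, eps_diel + Vc
    E = 0.5 * (a + b) - math.sqrt(0.25 * (a - b) ** 2 + t * t)  # lower eigenvalue
    # eigenvector from (H - E) psi = 0: unnormalized (t, E - a)
    c_elec, c_diel = t, E - a
    return c_elec**2 / (c_elec**2 + c_diel**2)

# Unconstrained: the dielectric site is lower in energy, so the charge leaks out
occ0 = ground_state_occ(0.0, -0.5, 0.1, 0.0)
print(f"unconstrained electrode occupation: {occ0:.2f}")

# Bisect on Vc until the electrode holds 95% of the charge
lo, hi = 0.0, 10.0
for _ in range(60):
    Vc = 0.5 * (lo + hi)
    if ground_state_occ(0.0, -0.5, 0.1, Vc) < 0.95:
        lo = Vc
    else:
        hi = Vc
print(f"constraint potential Vc = {Vc:.3f} eV")
```

In a real constrained-DFT code the multiplier plays exactly this role of a local potential shift, which is what produces the non-aufbau occupations in the figure.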
And this is the evolution of the charge in the graphene; the lithium stays plus one all the time, and the sulfur is shown too. What we notice here is that when the lithium ion is very close to the graphene, it takes about 70, sorry, 50 femtoseconds for the charge transfer to initiate, and then eventually the ion moves inward as well and becomes surrounded by the sulfur. Now, if we have a thicker sulfur layer and the ion is somehow pinned inside the sulfur, what you see is a kind of saturation, and the amount of charge that moves is limited to this first molecular layer here. Even though you can see some polarization around the lithium ion, there's no net charge on those molecules; the net charge is actually localized in this region here. So there's some hybridization between the graphene and the sulfur, and then it saturates and stops. And that occurred on the order of a tenth of a picosecond. So that's plenty of time for some real ion dynamics to occur as well: you could have bond fluctuations, you could have the beginning of a desolvation event, and maybe even the start of some chemistry. So really, if we want to do these simulations carefully, again with molecular dynamics, throwing everything into a box and seeing what happens would be quite dangerous in this case. You could have strong electric fields and this spontaneous arrival of charge, if you're using a common Fermi level for the entire system, and that would lead to chemistry that is enhanced or exaggerated because of this effect. So that's something we're trying to mitigate. I'm always thinking ahead to the molecular dynamics I might run, and so I want to foresee what these problems are and try to avoid them. Sorry for the bad animation here, but this is just an example to show very quickly.
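The finite onset time for charge transfer in these real-time runs can be caricatured with a two-level donor-acceptor model propagated in time: the electrode population oscillates with a rate set by the electronic coupling. Coupling strength and levels below are illustrative assumptions, not fitted to the talk:

```python
import math

# Toy real-time propagation of a two-level "graphene <-> sulfur" charge
# transfer: start the electron on the electrode and watch its population.
hbar = 0.6582      # eV*fs
H = 0.05           # donor-acceptor electronic coupling, eV (assumed)

dt = 0.1                   # time step, fs
c = [1.0 + 0j, 0.0 + 0j]   # amplitudes: [graphene, sulfur]

trace = []
for step in range(1001):
    t = step * dt
    trace.append((t, abs(c[0]) ** 2))
    # exact two-level propagator for resonant coupling (a Rabi rotation)
    theta = H * dt / hbar
    c = [math.cos(theta) * c[0] - 1j * math.sin(theta) * c[1],
         -1j * math.sin(theta) * c[0] + math.cos(theta) * c[1]]

# Half-transfer time: first time the electrode population drops below 0.5
t_half = next(t for t, p in trace if p < 0.5)
print(f"charge-transfer half-time ~ {t_half:.1f} fs")
```

With couplings of a few tens of meV, half-transfer times land naturally in the tens-of-femtoseconds range, the same order as the onset seen in the RT-TDDFT runs, and comfortably within the reach of real ion dynamics.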
So as I move the lithium ion down through the system towards the graphene, what you'll notice is that if I do that with just regular density functional theory, the effective energy of each of those positions is almost the same. The reason for that is that the electron is moving with the lithium ion; it's basically in the sulfur around the lithium at each point, and I can't prevent that from happening. However, if I constrain the charge to remain in the graphene, you see the expected behavior, right? There should be a gain in energy, a lowering of the energy of the system, as you bring a charged object towards the graphene, and that's what you'd expect from electrostatics as well. Okay, so that work is still ongoing. We had that initial publication earlier, I guess at the end of last year or the beginning of this year. But what we're doing now is trying to combine methods that you would normally use to explore kinetics, like the nudged elastic band method, to estimate these barriers while also constraining the charge to be in certain parts of the system. So you can see here lithium inside bulk sulfur with the polaron formed on the nearest sulfur molecule. And what we'd like to do is to move that polaron outwards, away from the lithium, and see what the barriers to doing that are. Now, the literature has already explored isolated charges and their kinetics for moving through these solids, but not when they're close enough to interact with each other. And there's definitely a strong slope already: if we were to just do regular density functional theory, we'd always end up in the end state where the charge, the electron, is sitting around the lithium, and we can't escape that. But what we'd like to know is: no, it has to get there from some starting point, and there are barriers to its movement. So what are those barriers, and what would be the rate of charge transfer and recombination? Okay.
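Once the barriers are known, the corresponding hop rates are commonly estimated with a Marcus-type expression; the talk's adiabatic versus non-adiabatic distinction maps onto the size of the electronic coupling. This is a standard-formula sketch with assumed parameter values, not results from the talk:

```python
import math

# Non-adiabatic Marcus hopping rate between sulfur molecules:
# k = (2*pi/hbar) * |H|^2 / sqrt(4*pi*lambda*kT) * exp(-(dG+lambda)^2 / (4*lambda*kT))
hbar = 6.582e-16   # eV*s
kT = 0.0257        # eV (~298 K)

def marcus_rate(H, lam, dG):
    """Marcus electron-transfer rate (1/s) for coupling H, reorganization
    energy lam, and driving force dG, all in eV."""
    pref = (2 * math.pi / hbar) * H**2 / math.sqrt(4 * math.pi * lam * kT)
    return pref * math.exp(-(dG + lam) ** 2 / (4 * lam * kT))

# Small coupling -> non-adiabatic hopping; the rate grows as H^2
for H in (0.005, 0.02):   # electronic coupling, eV (assumed)
    print(f"H = {H} eV: k = {marcus_rate(H, lam=0.5, dG=0.0):.3e} /s")
```

When the coupling gets large enough that the prefactor approaches vibrational frequencies, this perturbative picture breaks down and the transfer becomes adiabatic, which is the crossover discussed earlier.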
So the outlook from all of this, if I had to summarize it, is that, yes, there are some very nice experiments out there that we'd love to rationalize, and there are different techniques that we need to bring together, I think, to do this well. We can definitely learn a lot from continuum models and use those to inspire atomistic simulations, and then do that consistently. We're realizing with some horror that there's a very diverse population of species in some of these solutions, and we need to treat that carefully, especially at the interface; free-energy sampling is helping a lot there. And then we also need to be careful about our electronic structure: when we model these electrified systems, there are some outcomes that might be unphysical, and we need to mitigate that by putting in the proper constraints. And so with that, I should thank my research group. The various folks involved in this work would be Fabrice Roncoroni, a PhD student shared with ETH in Zurich; Artem Baskin, who's now at NASA; and Irpen Arkin, who's now a data scientist at Change Health. So there are other career options for us in the bigger world. And then other folks who are getting involved now as well, Anasen Spatias and Sid Sunderman, my experimental collaborators at LBNL, and the funding that's supported this work. And thank you, the organizers and the participants, for your attention as well. I'm happy to take any questions. So thank you very much for this very nice presentation of all this complicated bulk and interface chemistry. So we can have a few questions now. I see already one from the audience, from Fatima Matrodi. Fatima, you can ask your question. Yes, please. My question is about the first introduction slides that you had, about the conversion of sulfate with respect to pH. I didn't get from that slide what the pH should be, or how much it should change, to have this conversion for sulfate. Okay, good question.
So the proxy for the pH here is the profile of the hydronium-ion population. Okay, so let's say the starting solution has its own natural pH, which for sulfuric acid is going to be pretty low. You can see that the bulk pH would be, let's say, the limit of this red curve as it extends out to infinity. Okay, and these are different scales here, but it should go to the same limit as you go outwards in distance. And then locally, you can see a strong enhancement of the hydronium concentration, depending on the bias. If it's a negatively charged surface, obviously those positive ions will be drawn to the surface, and as you go to positive charge at the interface, you're driving those hydronium ions away; actually, there's then no hydronium near the interface. It sits at about three angstroms out, actually here in the equilibrated phase, and it's basically at its bulk concentration. I hope that answers your question, but the answer is: it depends on your distance from the electrode surface. Okay, thank you. Yes, it answered it. It's because I'm already working on some ionic liquids, also involving these sulfates, and when I saw this peak for free sulfate, I was wondering whether it's also related to a change in the pH for the solvation. In the bulk phase or at an interface? In liquid, in bulk, yeah. In bulk, okay, yeah. So in the bulk, I would imagine everything should be determined by, let's say, the equilibrium constants for interconversion between sulfate and bisulfate based on pH. Now, in an ionic liquid, that might be very different if you don't have water around, or you don't have an active source of water. And actually we have water. When we have the water, then we will also see this free sulfate. Got it, okay. Yeah, so I guess for the bulk, you could just use the equilibrium constants to derive the relationship between the relative populations of these species.
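The equilibrium-constant estimate suggested here is a one-liner for the bisulfate/sulfate couple HSO4- ⇌ SO4^2- + H+, whose pKa in water at 25 °C is about 1.99:

```python
# Bulk speciation between bisulfate and sulfate from the acid dissociation
# constant: [SO4^2-]/[HSO4^-] = 10**(pH - pKa2).
pKa2 = 1.99  # second pKa of sulfuric acid in water at 25 C

def sulfate_fraction(pH):
    """Fraction of total S(VI) present as free sulfate at a given pH."""
    ratio = 10 ** (pH - pKa2)        # [SO4^2-] / [HSO4^-]
    return ratio / (1 + ratio)

for pH in (0, 1, 2, 3, 4):
    print(f"pH {pH}: free sulfate fraction = {sulfate_fraction(pH):.2f}")
```

So in bulk water, free sulfate takes over around pH 2; as noted in the answer, in an ionic liquid with limited water activity the effective constant could differ substantially.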
But typically those constants are published just for water; they might be slightly different in the ionic liquid, I'm not sure. And some ionic liquids are even superacids; TFSI can be associated with protons sometimes. So that's something to keep in mind too. Okay, thank you. So we have another question, from Kazem Zoua, please. Hello. Hi. Can you hear me? Yes. I'm Kazem Zoua from the University of Florian. I just wanted to know the software you used for the constrained DFT calculations. Good question. So we're currently using, and I didn't advertise it, I apologize, the GPAW code, G-P-A-W, which comes from the group in Denmark, in Aarhus. This is very nice for doing development work, because you can interact with the code through Python and do a lot of scripting, but also see the variables you're working with in a very transparent manner. It's a lot easier than Fortran programming, which is my background. And it has constrained DFT implemented within it, and a lot of other functionality as well. And those calculations can scale to big systems using localized-orbital representations. Thank you. Thank you. Then we have a question from Simone. I'd like to ask a question regarding magnesium in THF and your simulations of the free-energy profile plotted against the coordination number around the magnesium ion. So first of all, is there, or can there be, experimental evidence for these multiple environments? Is it possible, experimentally, to probe the local structure around the cation? Yeah. So first of all, I didn't show this, but, let's see, maybe I did: the initial confusion around these different coordinations in solution was based on two different X-ray experiments, one of which seemed to be telling us, and they did talk about local coordination. They were XAFS experiments, which should tell you about the coordination number.
Both experiments came to different conclusions about what the coordination number was. So that may be a hint that there are some differences out there in the literature that would imply different populations. What I'm trying to convince my experimental colleagues to do is temperature-varying experiments, for exactly this reason: we would have an existence proof, maybe, if we can show that as a function of temperature you alter the populations, because clearly, from an entropic point of view, there must be a strong entropy dependence in the free energy of some of these species. And so you should be able to shift the populations significantly that way if their free-energy minima are not too far apart. The second thing is NMR experiments. They have an intrinsic time scale in the microsecond-to-millisecond range, so any mixing dynamics that occurs on shorter time scales is averaged out in NMR. But if it's a slow process, then you should see two peaks if you have two species. So again, measurements as a function of temperature for NMR might see similar effects, particularly when you have deep local minima with high barriers between them. That would be another way to do the same exploration. Otherwise, it can be difficult, because if there's a mixed population, you just see the average effect. And so yeah, it can be challenging. Obviously that's a concern for me, because I've kind of put my reputation on the line here to say, well, I think this is happening, but I'm hoping that the experiments can confirm this at some point soon. But yeah, it's a good question. Okay, we have another couple of questions. Maybe we should go quickly. So, one from Tao Cheng. Hi, David. Thanks so much, a very great talk, very refreshing. I have a quick question about these sorts of simulations.
You definitely show that the constrained DFT calculations are very different from the regular DFT calculations. And I'm wondering, is it better to do constant-potential simulations instead of the constrained DFT calculations? Because when you do this charging, it's actually at some constant potential. Agreed. Let me go back to this diagram here. So you're right: I'm working in a kind of constant-charge picture, not constant potential. But I don't think this principle would be removed if I did constant potential. The only difference is that I would have a specific equilibrium charge state for the surface consistent with the given potential. If potential fluctuations were allowed, actually, I'd be even more concerned. Let's say an ion arrives from solution and becomes unscreened by the solvent. Now, suddenly, the charge on the surface will jump up to try to match it, to do what a metal would do, which is image-charge screening. And that sudden increase in the electric field between those two objects would actually accelerate this process. You have even less control in that sense if you do your dynamics at constant potential, because any fluctuation in the system, particularly in a small system, where fluctuations can be quite large, could drive you over the edge and begin to populate the outer molecules of this dielectric with electrons, and then begin some chemistry that you can't come back from. So yeah, I didn't want to do constant potential here, because it would make life even more difficult for me, but I think it's another caveat for future simulations: if you're trying to model interfaces and there is this kind of dielectric, or region of insulation, between the electrode and the part of the electrolyte you care about, constant potential may also enhance this problem.
And so it's not that constant potential is wrong; there's nothing wrong with it. It's the assumption that you have a shared, common Fermi level across the entire system, that your electrons are at equilibrium. That's the issue. There would be a long time for electrons to actually reach that outer surface, and we need to take that into account when we design these simulations. Okay, thank you. Thank you, very helpful. Thank you. Okay, I think we're going to have a quick question, and a quick answer, for the last question. Deepa Kumar. Hi. Yes, please. Hi, thanks for the nice presentation. Out of curiosity, I would like to know how the thickness of the thin film affects the dynamics of the electron transfer you are mentioning. Okay, so in the worst-case scenario, if you don't constrain the charge, then the thicker the dielectric, the worse this effect is, and the more likely it would be that your charge will emerge on the surface instantaneously; but that's unphysical. Now, if we imagine that the charge transfer is a polaron-hopping mechanism, then it's a serial process. So the thicker the film, the longer it takes for the charge to make its way across, whether it's a lithium ion working its way in through gaps or the electron emerging outwards to meet the lithium ion. It should be a slower process, just a series of potential barriers that have to be overcome, much like ions moving in intercalation solids. So what is the optimal value of the thickness you use in your study? Oh, we don't know yet. That's part of the study, to be honest: to try to understand whether there is an optimal thickness. Because, say, when I draw a picture like this, I don't know at which point the electron will meet the lithium ion. Who's moving faster? The polarons are quite heavy; they take time to move through the system too.
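The serial-hopping picture above can be put into rough numbers: the transit time grows linearly with thickness, as hops over an Arrhenius barrier in series. Attempt frequency, barrier, and hop length below are illustrative assumptions:

```python
import math

# Back-of-envelope transit time for charge crossing a sulfur film by serial
# polaron hops: t ~ N_hops / k_hop, with k_hop = nu * exp(-Ea / kT).
kT = 0.0257        # eV (~298 K)
nu = 1e13          # attempt frequency, 1/s (assumed)
Ea = 0.3           # hopping barrier, eV (assumed)
a = 4.0            # hop length, Angstrom (assumed)

k_hop = nu * math.exp(-Ea / kT)

for thickness_nm in (2, 10, 50):
    n_hops = thickness_nm * 10 / a     # 10 Angstrom per nm
    t_transit = n_hops / k_hop
    print(f"{thickness_nm:3d} nm film: ~{n_hops:.0f} hops, "
          f"transit time ~ {t_transit:.1e} s")
```

Because the unconstrained-DFT artifact instead delivers the charge to the outer surface in a single electronic step, the gap between these two time scales is exactly what the constrained simulations are meant to respect.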
And depending on where they meet, that's where the reactions will start to proceed. So it's not even clear to me: do you want your reactions to proceed on the outer surface, where you'll then have species that can dissolve in the electrolyte? Do you want them in the middle, or at the electrode, where they might be more insulating so that you don't get more charge out? That's still an open question, and it creates more thought-provoking, let's say, insights that might help in the design of these cells, hopefully to make them more efficient. Good question. Okay, so I think that this was the last question; I don't see any other hands raised. So with this, I would like to thank David and all the speakers today for their presentations. Thank you, of course, also to all of you for asking questions and making a lively discussion. I think that we close here for today, and we meet again tomorrow. What time? Co-organizer, I forgot. It's three o'clock. It's three o'clock tomorrow. Okay, good. Have a nice evening. Bye, everybody. Bye.