Yes, there we are. Okay, hi. Thank you very much, and as always I'd like to thank the organizers for putting together this beautiful meeting, especially this time, because they were very flexible in allowing me to switch slots and give this last presentation today. And also apologies for keeping you longer, but I hope we can still get to dinner in time.

All right, so I'm Johannes, and I'm going to talk about chemical machine learning for molecular systems; we are also interested in materials. The twist of what I'm going to do is that we are mostly using energy functionals to predict properties: we use them to predict the energy and the forces, but also other properties like dipole moments and densities. And I'll explain to you why.

But before that, a quick recap of something we've seen a lot already, which is the representation problem of chemical machine learning. In particular, if you're doing the kind of chemical machine learning where you are predicting properties of three-dimensional chemical structures, you have to represent that 3D structure in some machine-readable form. As you know, there is a big list of things you can do to this end; I think the symmetry functions were one of the first things in that direction. We are also part of the SOAP family, so this is what we mostly use, but as we've seen, I think it's quite interesting that these different approaches are now converging to a related family of things, and in my view there's not a big difference in what you can do with them.

The basic reason why these things work is that they allow us to quantify similarity in chemical space, and a nice way to visualize this is using these KPCA maps; I think you've seen something like this before. This is a kernel principal component analysis, where essentially the distance between two points in this plot, each point representing one of the QM9 molecules, is related to how similar our representations and our kernels think they are. And this is all well and good, but the real kicker is that if we then color these maps with some chemical property, these properties tend to be smoothly distributed across the maps. That means we can use the similarity to interpolate between different systems: we don't need to run 100,000 DFT calculations separately, we just need some representative samples and a way of interpolating between them.

One thing that has also come out of this convergence of representation styles is that it's usually advantageous to encode the system in terms of atomic environments; we've now also seen atom-pair features, but anyway, local features. This is good because it reduces the computational cost and means that inference can be parallelized very easily for very large systems. It is also a smart inductive bias, because chemical properties and atomic interactions are indeed somewhat short-ranged, and things that are close to each other really do dominate the properties of the system. But of course it means we are neglecting the long-range part. If we do something a little more fancy and use message-passing neural networks, this to some extent overcomes the limitation, because we are now including information from beyond the cutoff.
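As a side note on those KPCA maps: given any kernel matrix between structures, for example a SOAP-based one, the 2D map is just a standard kernel PCA. Here is a minimal sketch, assuming only a precomputed symmetric kernel matrix K; everything here is generic and illustrative, not the specific code behind the slides.

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Project structures onto the leading kernel principal components.

    K : (n, n) symmetric kernel matrix, e.g. built from SOAP similarities.
    Returns an (n, n_components) array of coordinates for a 2D map.
    """
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    # Center the kernel matrix in feature space.
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the centered kernel; keep the largest components.
    eigval, eigvec = np.linalg.eigh(Kc)
    order = np.argsort(eigval)[::-1][:n_components]
    eigval, eigvec = eigval[order], eigvec[:, order]
    # Scale eigenvectors so distances in the map reflect kernel similarity.
    return eigvec * np.sqrt(np.clip(eigval, 0.0, None))

# Usage idea: coords = kernel_pca(K); scatter coords[:, 0] vs coords[:, 1]
# and color each point by a molecular property to see the smooth trends.
```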
To come back to message passing, though: I would argue that this is actually not a very elegant way of including long-range information, because it's a very convoluted way of doing it, and it is also ultimately limited. You can do six or eight message-passing steps and this will get you some way further, and for many of the small-molecule datasets we are using, this ultimately means you are almost using a global description. But in reality, if I'm interested in materials, and we saw the 500-billion-atom simulation by Boris yesterday, then no message passing will ever get you into the true long range, and of course also not for periodic systems.

So the thing I'm interested in is this long-range issue, and I have to mention that there are more physically inspired approaches, like PhysNet and SpookyNet, and also things that Jörg Behler and others have been doing, that handle this in a more physically reasonable way; this is the tradition I'm also following.

Just to give you an idea of the kinds of systems we want to study, and do study: we saw the crystal structure prediction problem in Andreas's talk, and this is something we've been working on. It is very interesting and challenging because of the large size of these unit cells and the large landscape of different polymorphs you need to scan, so machine learning has a really big benefit here. We also use it for the design of electronic properties; this was mainly focused on organic semiconductors. And we also have this nice picture of a reaction network, which I think is also very interesting, because in catalysis you are dealing with processes that have many transition states you need to tackle. That doesn't mean you need to predict all of them, but you need to know which ones are the important ones, and that is also a challenge where this kind of ML can help a lot.

What is particularly relevant for this talk is that we study a lot of energy materials: catalysts, battery materials, electrolytes for other applications. Those tend to be polar, and the longest-range interaction typically present in chemistry and physics is electrostatics. So if you have a polar material and you use a short-range representation, you are neglecting something; it may or may not be important, but you are definitely neglecting something. Examples are these iridium oxide catalysts and their interfaces, or solid-state electrolytes and lithium diffusion in these materials. We don't always end up using long-range models, but we do always check whether they are necessary, and that is, I guess, the take-home message: it really depends on what you're modeling, and sometimes, in fact very often, local models end up being sufficient and the long-range electrostatics doesn't affect the property you're interested in. For these solid-state electrolytes it turns out that the mobility of the lithium atoms is not really affected by the long-range part; it doesn't matter to the mobility whether you include it or not.
But of course if you have charged defects, or applied fields, the story changes and you need to take these things into account.

So that was my bragging part: I showed you a bunch of projects we're working on where we use machine learning. But this is not to say that it is always easy or without problems, and my negative slide, let's say, is about all of these problems. For example, we saw in Andreas's talk that there can be natural limitations to how much data you can generate, or to how economical it is to generate the data. Sometimes the training costs simply outweigh the benefits, because you cannot afford to run all those DFT calculations; it's just not worth it. Transferability is always a huge issue: what happens if I make predictions outside of my training set, and will I even know that I'm outside of its scope? And then there is the locality issue I already mentioned.

The canonical way of addressing these problems, other than just throwing a lot of data and compute at them, is to introduce physics into the model somehow, so we have this kind of transparent box of physics around the black box of machine learning. And I'm hoping that we understand a little bit about the physics, definitely more than about the machine learning, in my case. So what does it mean to include physics? This has always been done, because people are not stupid, and it is of course a very obvious idea. All the invariances and symmetries we've been hearing about are fundamental, and it's a fundamentally good idea to include them. The degree to which it is necessary depends a little bit, but having representations that obey these symmetries and invariances, and also size extensivity by the way, is a big and very important point. I think this has made chemical machine learning possible, applicable, and useful; if you don't do this, there is a real risk that your model will do unexpected things.

Something I'm particularly interested in, and I guess we've also heard about neural DFT, so I'm very curious what Microsoft Research will do in this direction, is to include electronic-structure information in some way. And this is what I will talk about for the rest of my talk.

Okay, so in building these representations, we usually start from this neighborhood density. Now the question is: how do we get from the neighborhood density to the electron density, and why do I care about the electron density? Because to me it is the fundamental property. It is interesting by itself, of course: you can get electrostatic potentials and multipole moments that are important for long-range interactions, and you can get interesting derived properties if you can compute the response of the density to perturbations. And of course through DFT we can get energies and forces with the correct asymptotics, and that's what I'm after in many of these cases, because ultimately, what we are doing 90% of the time is building some sort of potential. And there is already very nice prior work from another group in this direction on predicting the electron density.
And this is a very nice trick, in a way, because they decompose the density into local contributions and then use SOAP descriptors, essentially the neighborhood density, to predict what the electron density around each center looks like. This works very nicely, and you get the full electron density in a localized basis. But if your system is somehow inhomogeneous and there are non-local effects, in particular if there is charge transfer, so you have regions of different electronegativity and some charge flowing from here to there, then this model has no way of knowing about it, and that can be an issue. Also, of course, there is no automatic conservation of charge; in practice I'm told that's not a big problem, these models learn to conserve charge reasonably well. But again, I think this only works for homogeneous systems; if your system is inhomogeneous, if it is supposed to be negatively charged in one place and positively charged in another, these models will not get it right.

The better way, in my view, of doing this for the purpose of predicting energies and forces is to go through density functional theory. This is the typical DFT expression: the energy is a functional of the electron density. It has the kinetic energy contribution; the external potential, which in the simplest case is the interaction of the density with the nuclei of the atoms; the Hartree energy, which is the classical self-energy of the density; and the exchange-correlation contribution.

There are, in my view, two niches or domains where machine learning can be really useful here. One is in building kinetic energy functionals, because kinetic energy functionals are notoriously difficult to build: there is no universally applicable kinetic energy functional. Basically, there are ones that are quite inaccurate, and there are ones that are accurate but cannot be run self-consistently, and hence are kind of useless for simulations. There has been some really interesting work by Kieron Burke's group and others in this direction, building really powerful non-local kinetic energy functionals. The other big one is the one everyone knows: if you decide you don't want to bother with the kinetic energy and you do Kohn-Sham DFT, so that you get the bulk of the kinetic energy from your orbitals, then you can focus on exchange and correlation, which are the remaining unknowns in this whole thing. This potentially gets you improved non-local functionals, and we also found that you can actually make the functionals cheaper this way, which is nice. This is what I'm going to talk about, but the kinetic energy direction is also very interesting and worth pursuing.

Okay, so if we want to address this problem, I'm going to start by talking about the correlation energy itself. First, because the correlation energy is the smallest part of the total energy, which makes our lives a little easier in that sense: we have 99% of the energy taken care of and we're just predicting that final missing 1%. But it is also the most expensive part to calculate exactly; it's almost impossible to calculate exactly for most systems.
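For reference, this is the standard decomposition being referred to, written out in generic DFT notation, nothing specific to our implementation; the correlation part hiding inside E_xc is the small but expensive piece discussed next.

```latex
E[\rho] \;=\; T[\rho]
\;+\; \underbrace{\int v_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})\,\mathrm{d}\mathbf{r}}_{\text{external potential}}
\;+\; \underbrace{\tfrac{1}{2}\iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{r}'}_{\text{Hartree energy}}
\;+\; E_{\mathrm{xc}}[\rho]
```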
So this is why it is of course a very worthy target for machine learning: if we can get a good approximation to it, we save a lot of CPU time. And it is typically calculated using an integral: you have a grid around your molecule, and for each grid point you calculate an energy density epsilon. That energy density somehow depends on the electron density, and in conventional approximations it depends on the density in a very simple way, taking only the local density and its gradients, and maybe the kinetic energy density, into account. So basically you are doing numerical quadrature: you run over the grid and sum up these energy densities.

That is the natural target for a machine learning approach, and it is also what was briefly discussed earlier. The nice thing about using this energy density is that the integral is reasonably efficient to calculate and already implemented in all of the codes, so you don't have to worry about the grid and the quadrature. But this simple relationship between the density and the energy density is exactly what causes the problems of normal DFT approximations, and making it non-local is a good idea. There are some issues here, though, and the issue we found is that it is a little hard to infer this energy density, because it is not really a physical quantity. The physical observable you can get from a quantum-mechanical calculation is the correlation energy itself; the energy density is not really defined, and in fact any energy density that integrates to the right correlation energy is, in some sense, as good as any other. This makes the problem a little ill-defined: for each molecular configuration we have one correlation energy data point, but 10,000 grid points where we want to infer the energy density. So the learning is a little tricky.

What we ended up doing was to take coupled cluster calculations, coupled cluster being one of the gold-standard methods for calculating the correlation energy from a wave function, and modify them so that the coupled cluster energy is projected onto the grid of the density; this is an unambiguous procedure for doing the projection. At first glance, for the simplest system, the H2 molecule, which is a good place to start, this looks fine: if you look at the electron density and the correlation energy density we get from this procedure, it looks good, because the correlation energy density is negative or zero everywhere, which makes sense because the correlation energy is supposed to be a negative number, and it kind of mirrors the electron density. This is important because we are trying to build a map from one to the other, and if these two quantities are similar, that makes building the map easier. But it turns out that if you go out of equilibrium, for example to stretched H2, which is a notorious case of static or strong correlation, the whole thing is not so attractive anymore. The correlation energy density now suddenly has a big positive peak in the middle, and that's not nice; but what is worse is that this peak sits at a place where the electron density is basically zero.
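Just to write out the quadrature mentioned at the start of this part: the correlation energy is assembled from an energy density on the grid, with w_i the quadrature weights at grid points r_i; the exact arguments of ε_c depend on the rung of approximation (local density, gradients, kinetic energy density). The ill-posedness comes from having one E_c per configuration but thousands of grid values of ε_c.

```latex
E_{\mathrm{c}} \;\approx\; \sum_{i \in \text{grid}} w_i\,
\varepsilon_{\mathrm{c}}\!\big(\rho(\mathbf{r}_i),\, \nabla\rho(\mathbf{r}_i),\, \tau(\mathbf{r}_i)\big)
```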
That peak sitting where the electron density is basically zero makes the map from density to energy density extremely non-local, and extremely hard to build. And this is where we got stuck. It turns out that using this projection is not really a good idea. We were able to do some useful things on mononuclear systems that don't have this static correlation problem, but then the problem is so easy that you don't need machine learning to solve it; we ended up building fairly normal GGA-type functionals, which was interesting and fun, but it didn't really get us where we wanted to be.

I guess the lesson we learned, and when I tell this, people say they could have told me that beforehand, but I had to learn it this way, is that pre-defining the partitioning of the energy by projecting it onto the grid is not a good idea, because we are taking away a lot of freedom from the ML method to find the optimal partitioning itself. We were actually making the problem harder to learn by doing this projection, contrary to what we intended. But we still have that mismatch problem: so many grid points and only so few reference data points per system. What we figured instead is that we just need a more efficient representation of the electron density than this real-space grid. The grid is the problem, so let's get rid of the grid.

What we ended up doing was to go back to the density prediction idea I mentioned and rework it. The neat thing about that density prediction work was that they used localized, atom-centered basis functions and predicted the coefficients of those basis functions, so the information about the density is contained in the coefficients. We can invert that process: take the density, fit it with the localized basis, and then build our functional based on those coefficients. This is illustrated here. The usual way of representing the density in a quantum chemistry code is based on products of basis functions, and that is not a good representation for us, because these products of basis functions are not atom-centered and they change as the geometry changes. But if you use density fitting, that is, atom-centered auxiliary basis functions, you don't have any of these problems: the basis is always fixed, and the shape of the basis functions does not change as the geometry changes. This is a really robust and efficient way to represent the density. And as a bonus, since your basis functions are atom-centered, you can naturally partition the density into atomic contributions; note that we are partitioning the density, not the energy density, and that's the big difference here. You then get these nice pockets of electron density corresponding to each atom that sum up to the total density.

One last problem we have when we use these coefficients is that they are not rotation invariant: if I rotate my molecule, the density-fitting coefficients change. But this is of course the same problem you have with SOAP or any other neighborhood-density-based representation.
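Here is a minimal sketch of the density-fitting step just described, done on a real-space grid with a plain overlap metric; a production code would typically use a Coulomb metric and analytic integrals, and the function name, array layout, and regularizer are illustrative only.

```python
import numpy as np

def density_fit(grid_w, phi, rho):
    """Least-squares density fitting on a real-space grid.

    grid_w : (n_grid,)        quadrature weights
    phi    : (n_basis, n_grid) atom-centered auxiliary basis functions on the grid
    rho    : (n_grid,)        target electron density on the grid
    Returns coefficients c such that rho(r) ~ sum_i c_i * phi_i(r).
    """
    # Overlap matrix S_ij = <phi_i|phi_j> and projection b_i = <phi_i|rho>,
    # both evaluated by numerical quadrature on the grid.
    S = (phi * grid_w) @ phi.T
    b = (phi * grid_w) @ rho
    # Solve the normal equations; a small regularizer guards against
    # near-linear dependence in the auxiliary basis.
    c = np.linalg.solve(S + 1e-10 * np.eye(len(S)), b)
    return c
```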
That rotation-invariance problem we can solve in the same way, and in this case we just used the rotationally invariant power spectrum trick. You can look at the original SOAP paper and copy exactly what they did, so that you build a kernel that now measures the similarity not between atomic environments but between atomic electron densities. You can then build a size-extensive machine learning model based on these atomic densities. And I have to be very clear that this is a pure density functional: the only thing that enters this expression is the electron density. Of course we do some things with the electron density, but the model depends only on the density.

And it works. We applied this to some simple toy systems to check whether we can cover different chemistries: water clusters, protonated water clusters, and small organic molecules. We generated a bit of molecular dynamics data and then predicted future MD steps based on part of the trajectory. This works very well, and it is also transferable. The nice thing about working with the correlation energy is that it is fairly local, and in that sense short-ranged, and all of the long-range parts we treat exactly anyway, because we compute the Hartree potential and the external potential explicitly. So we can train on four water molecules and predict for eight water molecules, or train on small alkanes and predict for octane, and that works quite well.

But so far, everything I showed you was non-self-consistent, and that is of course a bit of a downside. We were running Hartree-Fock calculations to get the electron density and all the other energy components first, and then using that Hartree-Fock density to predict the coupled cluster correlation energy. What would be nice, of course, is if we didn't have to do the Hartree-Fock calculation to begin with, and in principle there is no need for it, because we have a density functional, so we can run the calculation self-consistently. But if we do this naively, it fails, and the reason is that we have been training on physical Hartree-Fock densities. When we then do the self-consistency, it goes to whatever our functional thinks is the minimum, but our functional has no idea about the rest of the density landscape; it doesn't know that unphysical densities are unphysical. That is what you get when you then run self-consistently. This was actually already observed by Klaus-Robert Müller and Kieron Burke when they did their orbital-free DFT work, and they explained it like this: the training densities lie on a manifold in the space of possible densities, so you train on things on that manifold, but when you minimize the energy of your new functional, the functional will just run away from that manifold into regions that are completely unphysical and where there is no training data.

The way we solve this is quite simple, and it is the same way you solve it for interatomic potentials to some extent: we do iterative training. This is a very simple illustration of it, but I like it because you can really see the manifold that Kieron Burke was talking about.
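Before the self-consistency results, here is a minimal sketch of the invariant descriptor and size-extensive kernel just described: the power spectrum of the per-atom density-fitting coefficients, and a kernel that sums over atom pairs. The data layout and kernel form are illustrative simplifications; the actual model also involves regularized kernel ridge regression on reference correlation energies.

```python
import numpy as np

def power_spectrum(coeffs):
    """Rotationally invariant power spectrum of atom-centered density coefficients.

    coeffs : dict mapping (n, l) -> array of the 2l+1 coefficients c_{nlm} of one atom.
    Returns a 1D feature vector of p_{n n' l} = sum_m c_{nlm} * c_{n'lm},
    which is unchanged when the molecule (and hence the coefficients) is rotated.
    """
    keys = sorted(coeffs)
    feats = []
    for i, (n1, l1) in enumerate(keys):
        for (n2, l2) in keys[i:]:
            if l1 == l2:   # only blocks with the same l combine into an invariant
                feats.append(np.dot(coeffs[(n1, l1)], coeffs[(n2, l2)]))
    return np.array(feats)

def molecule_kernel(atoms_a, atoms_b, gamma=1.0):
    """Size-extensive kernel: sum of Gaussian kernels over all atom pairs.

    atoms_a, atoms_b : lists of per-atom power-spectrum feature vectors.
    """
    k = 0.0
    for xa in atoms_a:
        for xb in atoms_b:
            k += np.exp(-gamma * np.sum((xa - xb) ** 2))
    return k
```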
For this iterative-training demonstration, we are now using exact exchange plus MP2 correlation as our target, so a combined exchange-correlation functional, and we do self-consistent calculations. Here is a principal component analysis of all the densities for the CO molecule as we stretch and compress it. You can see that the physical densities for the training and test sets lie on this manifold, while the SCF densities we predict initially are down here, completely removed from it. If you look at the density error of this initial model, it is huge: very, very unphysical densities. These are density difference plots between our target, in this case, and the machine learning model, and for comparison this is the difference to the PBE0 density, to give you an idea of how big those errors are. But then we add some of these SCF densities to the training set and retrain, and you can see that as we do this, our SCF points really do approach the manifold of physical densities, and after some iterations we get a really good prediction of the density.

CO is a very fun example because it is well known that the dipole moment of CO is wrongly predicted by the Hartree-Fock method, which means that getting the correct dipole moment is a correlation effect. We are here training a density functional on exact exchange and MP2 correlation, and this is not so easy, because the MP2 correlation is a small part, something like 10%, of the exchange energy; nonetheless we are fitting it well enough to reproduce even this correlation effect in the density. So that was very nice. For things that are a little more complicated, we did the water dimer here, and we are currently working on scaling this up from our prototype code to get more complex systems working. In principle it works quite well, although there is a big caveat: we are doing an indirect thing here. We are predicting the density and looking at the density predictions, but we are not training on the densities; the densities don't go into the loss function at all, it is all energies. And it turns out there are some density errors that persist because they simply don't affect the energy very much, and we cannot get rid of those. So the densities we get are sometimes very good and sometimes not so good, while the energies are consistently quite good. That is the caveat. In principle this can be addressed by including density information in the loss function as well, but it is important to mention that it is not enough to just use the energy for all purposes.

How much time do I have? Okay, good. So that was all of the DFT stuff; in the last couple of minutes I want to talk about a very related project. The question here is: when we do this kernel DFT, we are still doing basically a full DFT calculation, so our justification is that we can reach higher accuracies, but we are not really making things very much faster. And that can be limiting; it means we cannot do million-atom systems and all that. But do we really always need the full density to get these long-range effects correct? The answer is basically no, and we have been developing something related that is very much in the same vein.
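Before moving on to that cheaper model, here is the iterative training procedure from the DFT part summarized as a loop. This is only a rough sketch: train_functional, run_scf, and hartree_fock_density are hypothetical placeholders for the regression and electronic-structure machinery actually used, and pairing the added SCF densities with the original reference energies is just one illustrative labeling choice.

```python
def iterative_training(structures, reference_energies, n_iterations=5):
    """Sketch of iterative (self-consistent) training of an ML functional.

    Hypothetical helpers (placeholders, not a real API):
      hartree_fock_density(s)  -> non-self-consistent reference density
      train_functional(D, E)   -> regression model fitted to densities/energies
      run_scf(functional, s)   -> density minimizing the current ML functional
    """
    densities = [hartree_fock_density(s) for s in structures]
    energies = list(reference_energies)
    functional = train_functional(densities, energies)

    for _ in range(n_iterations):
        # Minimize the current functional self-consistently; early on these
        # densities drift off the manifold of physical training densities.
        scf_densities = [run_scf(functional, s) for s in structures]
        # Add (a subset of) the new densities to the training data and retrain.
        densities += scf_densities
        energies += list(reference_energies)   # illustrative labeling choice
        functional = train_functional(densities, energies)
    return functional
```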
That related approach does not predict the full density. Something very simple you can do, which is also commonly done in, for example, density functional tight binding and related methods, is to take the electron density and partition it into a reference density and a density fluctuation term. The reference density is very often a superposition of spherically averaged neutral atomic densities. Here I have a very simple illustration of acetylene in one dimension: when you add up these reference densities, you get peaks where the peaks need to be, and they are approximately of the correct height, but what is missing is the density fluctuation. There will be charge transfer within the molecule and polarization within the molecule, and that is not well described by this reference; that is what the fluctuation term is for. So we describe the final density as a sum of the reference density and this fluctuation.

As it turns out, you can use a very simple ansatz for the fluctuation density: if you also use spherical basis functions for it, just spherical Gaussians, you already get a pretty reasonable description of what the density distribution in the molecule looks like, because you are fixing the main problem of the reference density: wherever it was too high you make it a little lower, and wherever it was too low you make it a little higher. Some details in the density, this dashed line would be the approximate density, are of course not captured, for example some polarization between two atoms, but by and large the electron distribution within a molecule can be approximated quite well. And the good thing about using spherical basis functions and their coefficients is that the problem essentially reduces to finding partial charges, so a very simple way of representing the density.

Then we just need an energy functional that works with partial charges, and those have been known for a long time: for example the QEq method, and there are other related ones. These are charge equilibration methods; they describe how charge is transferred within a molecule based on the electronegativity and the hardness of each element. They work quite well, but of course they are pretty coarse approximations, and the reason they are somewhat coarse and not as flexible as we would like is that the electronegativity and hardness of an atom are not constants: the electronegativity of an atom in a molecule depends on its environment. But we know how to include environment information, for example from SOAP, and that is what we did here. So the idea is that instead of having a constant electronegativity for each element, you now have an electronegativity that depends on the atom's environment as described by the SOAP descriptor, and then it is again a simple kernel ridge regression model. We trained this, as a first application, on the dipole moments of QM9 molecules, and it works very well there. This is a learning curve for different SOAP cutoffs, which is not so important; the important thing is that adding this environment dependence, going from QEq, with constant electronegativities, to kQEq, with the kernel-based environment information, is a big improvement.
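As a rough sketch of the charge-equilibration machinery this builds on: minimizing the charge-dependent energy under a total-charge constraint is just a linear solve. The kernel-based electronegativity helper below is purely illustrative and not our exact parametrization; function names and arguments are assumptions for the example.

```python
import numpy as np

def solve_charge_equilibration(chi, J, total_charge=0.0):
    """Charge equilibration: minimize E(q) = chi.q + 0.5 q.J.q
    subject to sum(q) = total_charge, via a Lagrange multiplier.

    chi : (n,) per-atom electronegativities (constants in plain QEq;
          environment-dependent, kernel-predicted values in a kQEq-style model)
    J   : (n, n) hardness / Coulomb interaction matrix
    """
    n = len(chi)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = J
    A[:n, n] = 1.0          # constraint column (Lagrange multiplier)
    A[n, :n] = 1.0          # constraint row: total charge
    rhs = np.concatenate([-chi, [total_charge]])
    sol = np.linalg.solve(A, rhs)
    return sol[:n]          # partial charges q_i

def environment_dependent_chi(chi0, K, alpha):
    """Illustrative kQEq-style electronegativity: a per-element baseline plus a
    kernel correction from the atomic environment (K: kernel to training
    environments, alpha: regression weights)."""
    return chi0 + K @ alpha
```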
On those dipole moments there is also a big improvement over other machine learning models, and it is roughly on par with an ML model that has a somewhat more sophisticated way of describing the density, because it uses local dipoles to construct it. I'm going to skip this. But just a very brief outlook on where we're going with this kQEq work: we are now also applying the energy functional to get energies, and this has been very promising. One key point is that we need to combine it with a normal potential as well, because the kQEq energy is only part of the total energy, essentially the electrostatic part, and again the problem is one of partitioning the electrostatic and non-electrostatic contributions. The way to solve this is to fit the two components, a GAP for the non-electrostatic part and kQEq for the electrostatic part, simultaneously, and let the regression decide how they should be weighted. You then get the correct asymptotic behavior: these are again water clusters that we slowly expand, and here I don't even remember whether black or red is kQEq, but anyway the agreement with DFT is quite good and the asymptotic behavior is correct.

So that is all I have for today; thank you very much for your attention. My summary is that including this electronic-structure information is very nice, because we can build more data-efficient models: we are not learning everything in one chunk, but only the things we actually need to approximate, and it overcomes the locality issue in a physically motivated way. We have two versions of this: one is the kernel DFT, the full thing that works with the full electron density, and the other is the charge equilibration, which is a much cheaper model that nonetheless captures the most important aspects of electrostatics and charges. So thanks to everyone who worked on this and funded it, and to you for your attention. Thank you.

Very interesting, this machine learning of electron densities. Just a quick question: what is the largest or most complicated system that you have tested this method on?

So, for the kernel DFT we have only done molecules or molecular clusters; I think the largest things were octane and then different alcohols and the like, so smallish organic molecules. For the kernel charge equilibration we have done bigger things, like zinc oxide nanoparticles, so hundreds of atoms. It is still a bit in a prototype phase at the moment; the big thing that is missing for many of the applications we want to do is periodic boundary conditions, and that is something we are working on. But in principle this is no more expensive than doing normal QEq, and in LAMMPS you can do normal QEq for thousands of atoms easily. So we know how efficient it can be; our version is just not as efficient as it could be yet.

Thanks a lot.

Thanks a lot. You didn't come back to the stretched H2 example. I'm trying to understand: the new kernel functionals you introduced, are they then non-zero when H2 is stretched? Because the overlap of the atomic densities is still zero in the middle, right, and you would have that bump in the density there, sorry, the energy density.

Yeah.
Okay, so it is a bit of a subtle question. We actually didn't do it numerically, but, too many slides, yes, here it is. The thing is, in the new model we are building a descriptor based on the density here and the density here, and this becomes constant as the molecule stretches. So our model will predict a constant energy in this limit, because the descriptors are no longer changing as we go along. If we train on this point and on this point, it will do the right thing, basically. So this is one of the advantages: it does have nice dissociation asymptotics, but you need to put the asymptotic points into the training set.

Yeah, okay, I mean, that you have to do in any case.

And the stretched H2 problem is a bit of a can of worms, as you know. To me it is even an interesting question what the right answer should be, because in Hartree-Fock you basically have this curve here that goes to a different limit than the gray one, and the gray one goes to the right limit; that is broken-symmetry Hartree-Fock, where you cheat yourself out of the problem by putting one spin up here and one spin down there, or the other way around. That is also the DFT way of handling this problem: if I do this calculation with DFT, typically I would do this symmetry breaking and I would get to the right limit, and our models definitely do that. But since we are emulating wave function methods, it is a question whether they shouldn't behave more like a wave function method in that case, like whether restricted and unrestricted coupled cluster go to the same limit. I have not thought about this thoroughly enough to give you a good answer.

There is one question from the chat: can this method deal with excited states, or distinguish between geometric or spin isomers, for any molecule or system?

What is the second part? Whether it can distinguish between spin isomers?

Yeah, so we don't have a spin-polarized version of it, but it is in principle trivial to extend: if you do spin density functional theory you have two densities instead of one, and then you expand both of them and build the model on both sets of coefficients. On excited states we haven't done anything. I'm not an expert at all on time-dependent density functional theory, and I don't plan to go there; it is not my expertise, so we are really focusing on ground states.

Hello, and thank you for the nice talk. I have a question about the electronegativity that you introduce as a function of the kernel. I'm just wondering what it means exactly: do you introduce the kernel as a propagator from some point to another, or what is it?

No, what I mean is that the electronegativity is well defined for an isolated atom, but if I use that electronegativity of the isolated atom to build my charge equilibration model, it doesn't really reproduce the charge distribution in a molecule well.
Okay, and the way we rationalize this is that the electronegativity of an atom in a molecule is not the same as that of the isolated atom, so we need to make it respond to the environment. The kernel is simply a way for us to quantify, or to exert, this influence of the environment on the electronegativity.

Okay, thank you. And can you specify a little more exactly how you introduce this effect of the environment?

Yes, so this is again pretty analogous to the way interatomic potentials work. In interatomic potentials you have a global property, the energy, and you describe it by local energy contributions; the regression model learns what these local contributions are, you sum them up, and they give the total energy. These local contributions are kind of fictitious, so you don't need to define them, you don't need to know what the local atomic energy is; the model discovers this by itself. Here it is the same: it is just the freedom we give to the regression model. You don't have to use a fixed electronegativity for each element, you can make it adapt, and the model does this adapting for us.

Thank you.

Thank you very much. Thank you.