So, if you are connected on Zoom, please wait until the end of the presentation to ask questions; raise your hand, we will unmute you, and then you will be able to ask your question. Yeah, very good. Can you hear me? Okay. So, I am very, very happy to be here. I changed the title at the last moment to give this, I would say, introductory talk to the world of, as Antimo called it, the Wannier ecosystem. I prepared a very introductory lecture to make sure everyone is on the same footing, but I decided also to spend a few slides on a historical perspective. For better or for worse, everyone here is much, much younger than I am, and probably around the age I was when I started doing this. And ICTP is a special place for the electronic structure community for many reasons: for the research that was going on in the Department of Theoretical Physics just two floors upstairs, at the International School for Advanced Studies, but also because there was, and there is, an event every two years, the Workshop on Electronic Structure Methods and Applications, which started in 1987 and was for many, many years the single focal point not only for the European community, but also for many of our friends and colleagues in the United States, in Asia, and elsewhere. So I wanted to start by giving you a little bit of the sense of the excitement in the field many, many years ago. I'm jumping back to 1979, and I chose this because it is a selected history of the works that really made an impression on me. This is Marvin Cohen and collaborators, Jisoon Ihm, now in Seoul, and Alex Zunger, in Colorado, really putting together the formalism of something that many of us still use today: the total-energy plane-wave pseudopotential method.
Of course, this is not the only methodology in electronic structure calculations, but it is one that really had a major impact, and this was already shown a year later, in 1980, again by Marvin Cohen and collaborators, calculating with the LDA the different phases of silicon. These are the equations of state, showing that at ambient pressure silicon wants to be in the diamond phase, but also finding, from the common tangent, the pressure at which it transforms into β-tin. And if you look at this picture from 1980, you can already see what would happen 40, 50 years later: the materials genome and all the applications of electronic structure calculations to materials science. It was all in there. Another layer of excitement: Hamann, Schlüter, and Chiang in 1979 came out with the concept of norm-conserving pseudopotentials, ensuring transferability. So, all of a sudden, we also had the tools to describe atoms of all sorts with this total-energy pseudopotential method, leading to the, at the time, very often cited paper by Bachelet, Hamann, and Schlüter with the list of pseudopotentials for all 92 elements of the periodic table. And again, some local excitement here: most of you are familiar with these papers, but I was very happy to point out that they were done just two floors above here, in the Department of Theoretical Physics, with Car and Parrinello coming up with a unified approach for molecular dynamics and density functional theory, and Stefano Baroni and co-workers with linear response theory, density functional perturbation theory. So the '80s were really planting the seeds, laying the ground, for the great excitement that there was, and there is, for this field.
And everything seemed possible. It was taking place on the rows of Digital VT100 terminals around ICTP; that's how you would do calculations at the time, running on a VAX minicomputer. In the early '90s, when I started doing electronic structure calculations, there was the feeling that the sky was the limit, because not only could we study materials accurately, and not only could we study all materials, but the problem that had hampered calculations, going towards very large-scale systems, seemed very manageable. These are some of the early ideas about large-scale electronic structure calculations. In particular, there was a session at the 1993 APS March Meeting in Seattle that felt a little bit like the 1987 superconductivity meeting: in a small room there were a lot of presentations, all brimming with new ideas about linear scaling, about how to deal with very large-scale systems, how to escape the cubic scaling of orthonormalization. So you see here Li, Nunes, and Vanderbilt; Mauri, Galli, and Car; and Ordejón, Drabold, Grumbach, and Martin. I just picked some of the representative papers to say that already then, and it's really 30 years ago, the calculation of electronic structure properties for complex systems seemed within reach. I had just started my PhD, so I wasn't at that APS March Meeting in Seattle, but just the week after, Francesco Mauri, who was visiting and returning from Seattle, told me everything about this linear-scaling session. We were driving from Cambridge to London to listen to Pelléas et Mélisande conducted by Claudio Abbado, and in case you don't have any scientific questions during the time allotted for questions, you can ask me about UK speed limits.
So I got really excited about this new frontier of linear-scaling approaches, and when, a couple of years later, I was finishing my PhD, I approached David Vanderbilt, because I was really, really keen to work with him after all the major advances he had made. He offered me a position, but then he mentioned that there was actually an NSF call from the Computer Science and Engineering Division for a postdoctoral research associate in computational science and engineering, and he said: why don't we also apply for a grant like this together? It could be fun. And, discussing it, I said I really wanted to do linear scaling; this is what would bring a lot of progress. So we agreed; you can read the deadline here, it was November 1st. Around mid-October, David writes to me and asks: how is the grant going? And of course, only then do I start writing the grant. Three days later, I write to him, with a lot of anxiety. These were the early days of email, so contacts were still sparse somehow. I sent it to him, and I was very, very happy with it. He had a comment that is very typical of David. I actually asked his permission beforehand to show this; otherwise I would have shown it nevertheless, because I have remembered this sentence for many years. He commented on the draft saying he would lower the verbal temperature a few degrees, just to make sure I was a little bit more measured in my expectations. Anyhow, it all worked out: we got the grant, and off I went to Rutgers. The plan was to start working on this linear-scaling project.
And David said: thinking about it, all these linear-scaling ideas are really related to the fact that you escape the orthonormalization costs because you work in a representation where the orbitals are localized, so you don't have to calculate the scalar products. But we don't really know under which conditions, and how much, these orbitals are localized. Of course, there were the initial works by Walter Kohn on localization, and an entire theoretical literature on this, but the thought was that we should explore it a bit more. And for that we actually used the other major and exciting advance of those years: the capability of calculating the polarization in solids as a Berry phase. These are two of the pioneering papers; Raffaele Resta was also soon to move to the Department of Theoretical Physics. In particular, the connection that was very important is that we finally had well-defined mathematical and algorithmic ways to deal with the position operator in solids, and with all its powers, the square and so on. That was really the last technical step we needed to discuss the localization of an orbital described in periodic boundary conditions. To explain these concepts, let me use a few slides that go back to the basics. This is actually a book, the book, the first book of the Poetics of Aristotle, that I bought at the Rutgers bookshop one Sunday when I was feeling particularly forlorn. It was the Penguin edition, translated into English. I opened it up, and the first page says: let us, as Nature directs, begin first with first principles. So this was clearly a message, if not from God directly, at least from Aristotle, telling me that I should just keep going. So let me set the stage and the nomenclature.
Again, apologies, most of this is familiar to you, but just to make sure that everyone here and at home follows from the beginning: we start from the Bloch theorem, which states the symmetry properties of the eigenstates of a Hamiltonian in a periodic lattice. You have the kinetic energy and the external potential; if it is a Kohn-Sham Hamiltonian, you also have the Hartree and exchange-correlation terms; but basically everything is periodic, and so the Hamiltonian commutes with all the lattice translations. That doesn't mean that the eigenstates of this Hamiltonian are periodic, but it means that they can be chosen to have the Bloch form: the eigenstates psi_nk are the product of a periodic function u_nk times a plane-wave modulation e^{ik·r}, with lower-case r a position in space. The symmetry labels, the quantum numbers, are the band index n, a discrete index, and a continuous index k, the quasi-momentum, chosen to lie inside the first Brillouin zone. It's actually fairly easy to prove this; I will not go through it, many textbooks do it very nicely, but one simple way to see it is to require that a lattice translation cannot change the charge density: if you calculate |psi|², the e^{ik·r} phase factor goes away, and you keep only the square of the periodic part u_nk. So we have this symmetry-driven form for the eigenstates, and what does it mean? I think I borrowed this slide from David. What does it mean if we take one band, this red band, say n equal to one? k lives in the first Brillouin zone of this one-dimensional system, so each point along the red band is one state of our system.
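In formulas, the Bloch form just described reads (standard notation):

```latex
\psi_{n\mathbf{k}}(\mathbf{r}) \;=\; e^{i\mathbf{k}\cdot\mathbf{r}}\,u_{n\mathbf{k}}(\mathbf{r}),
\qquad
u_{n\mathbf{k}}(\mathbf{r}+\mathbf{R}) \;=\; u_{n\mathbf{k}}(\mathbf{r})
\quad \text{for every lattice vector } \mathbf{R},
```

so that |ψ_nk(r)|² = |u_nk(r)|² is lattice-periodic, which is exactly the requirement on the charge density under lattice translations.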
And if we choose the k = 0 state, the modulation disappears and the wave function psi is actually periodic; you see the black curve here, something that looks atomic-like, a bit like a p orbital sitting on the blue atom, periodically repeated. But as you move away from the center of the Brillouin zone towards the zone boundary, you start to have a modulation of shorter and shorter wavelength, on top of a periodic part that is itself slightly different from the one we had at k = 0. So these are the ingredients, and this is what our electronic structure codes calculate. And the idea was to take these ingredients, these psi_nk, and transform them, through a unitary transformation, into a different representation. We wanted to move away from the Bloch representation, the one driven by the symmetry with respect to translations, into a real-space localized form; something that conceptually had been introduced by Gregory Wannier in 1937 through the Wannier transformation written here at the top. So, if there is one formula that you want to remember for a while, it is this one. We perform a continuous unitary transformation: we take, say, a given band n, maybe this bottom valence band of a semiconductor, and we transform it with this Fourier transform, integrating psi_nk over the whole Brillouin zone with a phase factor e^{-ik·R}, where k varies across the Brillouin zone and capital R is not a space variable but a direct Bravais lattice vector. So in this transformation, if you want, we remove the dependence on k and we create a dependence on capital R.
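The single-band Wannier transform can be sketched numerically. Below is a minimal toy example, assuming a hypothetical one-dimensional two-orbital chain (the hopping `t` and onsite splitting `delta` are invented for illustration, not from the talk): the lower band is Wannier-transformed on a discrete k-mesh, in a smooth gauge, and the resulting function is sharply localized around the home cell.

```python
import numpy as np

# Hypothetical 1D diatomic chain: H(k) = [[delta, h(k)], [h*(k), -delta]]
N = 64                       # number of k-points = number of cells
t, delta = 1.0, 0.5          # illustrative hopping and onsite splitting
ks = 2 * np.pi * np.arange(N) / N
R = np.arange(N)             # cell indices (the "capital R" of the transform)

w = np.zeros((N, 2), dtype=complex)      # Wannier amplitudes w[R, orbital]
for k in ks:
    h = t * (1 + np.exp(-1j * k))        # off-diagonal element of H(k)
    # analytic lower-band eigenvector: smooth and periodic in k (no random phases)
    v = np.array([h, -np.sqrt(delta**2 + abs(h)**2) - delta])
    v /= np.linalg.norm(v)
    # discrete Wannier transform: w(R) = (1/N) sum_k exp(-ikR) u_k
    w += np.exp(-1j * k * R)[:, None] * v[None, :] / N

weight = np.sum(np.abs(w)**2, axis=1)    # electron weight in cell R
print(weight[0], weight[N // 2])         # large in the home cell, tiny far away
```

With this smooth gauge the weight decays exponentially away from the home cell; replacing `v` with randomly rephased eigenvectors destroys the localization, which is exactly the gauge problem discussed next.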
The symmetry properties of the resulting Wannier functions will be discussed in a moment, but the idea, which I'll try to introduce heuristically, is that these objects, rather than being delocalized everywhere in real space as the Bloch states are, are going to be localized. Now, the concept of the Wannier function very quickly became very useful in solid-state physics, but the practice of Wannier functions lagged behind, for a number of reasons. First, maybe there weren't very good models for the psi_nk; but most importantly, there is an arbitrariness in the definition of these Wannier functions, which I'm showing here in the simplest way possible, and it's what we call gauge freedom. That is, the Schrödinger equation doesn't fix the arbitrary phase factor of every state, of every point along those red lines. If psi_nk is a good eigenstate of our Schrödinger equation, then psi_nk multiplied by any arbitrary phase is also going to be a good solution of the Schrödinger equation. The expectation values of the Hamiltonian, in fact all expectation values, are not affected, because those phases cancel out; but the representation is going to be different, and so the actual shape of the Wannier function will change depending on whatever phases we have here. And if you were to calculate these Wannier functions in an actual electronic structure calculation, without imposing any phase convention, just taking the psi_nk out of your code, then typically at every different n and k they will have arbitrary phases. So there will be a randomness in k that is not going to help.
In general, in the theory of Fourier transforms, what you want is for this object in square brackets to be very smooth in k, so that its Fourier transform is very localized; discontinuities would give rise to longer-range behavior, or to no localization at all. That is, if you were to calculate the Wannier function with the psi_nk as thrown out by an electronic structure code, you would not get anything localized at all. I'm not going to demonstrate that Wannier functions are localized, but I'll give you a very simple heuristic argument for why they could be. Let me take a special case in which we have only one band, so there is no index n, and we look at the Wannier function based in the home cell, where capital R is zero, at the origin. For this Wannier function the e^{-ik·R} in the Wannier transform is equal to one, and so we are left with this integral. Now let us see what values this Wannier function takes at the Bravais lattice vectors: we evaluate it at different points R_i, where R_i is a Bravais lattice vector. Looking at the definition, because u_k is periodic, when evaluating at R_i we can just use u_k(0). And from this you can sort of see that, as you do this integral in k, if there are no weird discontinuities, these phase factors, as R becomes larger and larger and I move away from the origin, give rise to faster and faster oscillations that, if you are lucky, will average to zero. Now, if there is any mathematician in the room, I'm sure they will be horrified, but this is just to give you a little bit of a sense of why these functions should be localized. Now, of course, there is one additional degree of freedom in our calculation.
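Before moving on, the heuristic argument above in formulas, for a single band (w is the home-cell Wannier function, V the cell volume):

```latex
w(\mathbf{r}) \;=\; \frac{V}{(2\pi)^3}\int_{\mathrm{BZ}} d\mathbf{k}\;\psi_{\mathbf{k}}(\mathbf{r}),
\qquad
w(\mathbf{R}_i) \;=\; \frac{V}{(2\pi)^3}\int_{\mathrm{BZ}} d\mathbf{k}\;
e^{i\mathbf{k}\cdot\mathbf{R}_i}\,u_{\mathbf{k}}(\mathbf{0}),
```

using ψ_k(R_i) = e^{ik·R_i} u_k(R_i) = e^{ik·R_i} u_k(0) by periodicity of u_k. If u_k(0) is smooth in k, the increasingly rapid oscillations of the phase factor suppress w(R_i) as |R_i| grows.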
Let's take an insulator. The charge density and the energies have a further gauge freedom: the charge density is invariant if, at every k-point, we transform our set of occupied orbitals among themselves. Here I have a case with, say, two occupied bands; take psi_1 and psi_2 at a certain k-point, say at the zone boundary, pi over a. If I perform a two-by-two unitary transformation, which is a little bit like an orthogonal transformation but for a complex Hilbert space, then again my charge density doesn't change and my total energy doesn't change. So, if you want, we have an additional gauge freedom: at every point in the Brillouin zone of an insulator, take gallium arsenide, take silicon here, not only do we have the freedom to choose an arbitrary phase for each band at every point, but we have the freedom to mix the bands together; in this case, say at the X point, all four bands can be mixed among each other with a four-by-four matrix. And so we get four new psi'_mk that can go perfectly well into the Wannier definition. Or, of course, we could decide to mix only these three bands together: for this one alone we would only change phase factors, and for these three we would choose a three-by-three matrix at every k-point. So this is really the last formula you need to remember: we want to do a Wannier transformation of our eigenstates, and the gauge freedom we have is, at every k-point, a number-of-bands by number-of-bands unitary matrix that mixes them all together. Now, the choice of U, which is up to us, will affect the shape of our Wannier functions: the more we choose these rotations such that the resulting states have a very, very smooth dependence on k, the more localized the orbitals we obtain from the integral will be.
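The "last formula to remember" can be written compactly; the Wannier function |Rn⟩ combines the Fourier transform with the k-dependent unitary mixing (standard notation, V the cell volume):

```latex
|\mathbf{R}n\rangle \;=\; \frac{V}{(2\pi)^3}\int_{\mathrm{BZ}} d\mathbf{k}\;
e^{-i\mathbf{k}\cdot\mathbf{R}}\,
\sum_{m} U^{(\mathbf{k})}_{mn}\,|\psi_{m\mathbf{k}}\rangle,
\qquad
U^{(\mathbf{k})\dagger}\,U^{(\mathbf{k})} = \mathbb{1},
```

with one N_bands × N_bands unitary matrix U^(k) per k-point; the single-band case reduces to the arbitrary phase factor e^{iφ(k)} discussed earlier.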
So you really want to choose, thinking again of this four-by-four transformation of the four bands of silicon at every k-point, the transformed orbitals such that, as a function of k, they are very, very smooth and give rise to very localized Wannier functions. So now we have stated the problem, and stating the problem is half the work of solving it; the rest is just a little bit of numerics, if you want. So how do we choose the U's properly? Oh, actually, let me first remind you of the properties of the Wannier functions that Gregory Wannier defined. Because this is a double unitary transformation, the Wannier functions span the same space as our occupied orbitals; if we change the R in that phase factor into R plus R', the resulting Wannier function is just a translation of the one we had for R; and they are all orthonormal to each other. This is true no matter what the U's are, but we want to choose the U's so as to make these Wannier functions as meaningful as possible. Now, there is a very simple and very intuitive way to choose the U's, and this, I guess, is a graphical representation of what we were aiming to obtain. We wanted to find the mixing, at every k-point, of the four bands that gives something, this is gallium arsenide, that is very localized and represents the whole Hilbert space of these four bands; or maybe we wanted to transform just this single band and get something here, or to transform these three bands here. So how to choose this set of matrices? Well, there is a very simple and very intuitive recipe that uses projections. Suppose that you have a physical idea of what your end result should be: you figure out that in silicon, or in gallium arsenide, with these four bands you really have four covalent bonds in the unit cell, so you should get something that looks like a covalent bond.
And so what you can do is take projections onto four covalent-bond orbitals: you choose four trial functions g sitting at the centers of the bonds, a bit like spheres. If you define the g functions in this expression, then the resulting phi defined through this projection are going to be the same irrespective of whatever unitary transformation is applied to the psi: you see, if I send the ket |psi⟩ into |psi⟩U and the bra ⟨psi| into U†⟨psi|, the U†U cancels out. So all the phase problems and unitary-rotation problems that I had disappear, and these objects are independent of the arbitrary phases; they depend only on my choice of localized trial orbitals. If I then make sure, through some algebra and a Löwdin transformation, that this is a proper unitary transformation, I have phi that I can Wannier-transform to get my localized Wannier functions. That is actually a very good heuristic approach, but we didn't want to be heuristic, because we really wanted to understand the true localization properties of our systems, in order to understand a little better how these linear-scaling approaches worked. And so we came out with what is, of course, a very straightforward suggestion on the strength of the Berry-phase theory of polarization, of the capability of calculating meaningfully the position operator and its powers: it becomes possible to define a very simple measure of the localization of orbitals, which is just the sum of their spreads around their centers. For the four Wannier functions of silicon that represent the four occupied bands, it is the sum of the four spreads around their centers, that is, the expectation values of (r minus the center) squared. That is the localization criterion.
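The projection-plus-Löwdin recipe can be sketched in a few lines. This is a toy model, not the actual code: random vectors stand in for the Bloch states and simple basis vectors for the trial orbitals g, and the dimensions are invented for illustration. The key property checked is the one claimed above: the Löwdin-orthonormalized projections are unchanged when the occupied states are rotated by an arbitrary unitary (the U†U cancels out).

```python
import numpy as np

rng = np.random.default_rng(0)

def lowdin_project(psi, g):
    """Project trial orbitals g onto span(psi), then Loewdin-orthonormalize.

    The Loewdin factor A (A^dag A)^(-1/2) equals U @ Vh from the SVD A = U S Vh.
    """
    A = psi.conj().T @ g                 # overlaps A_mn = <psi_m | g_n>
    U, _, Vh = np.linalg.svd(A)
    return psi @ (U @ Vh)

# toy model: 8-dimensional Hilbert space, 3 "occupied" Bloch-like states
psi, _ = np.linalg.qr(rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3)))
g = np.eye(8)[:, :3]                     # 3 "localized" trial orbitals

phi = lowdin_project(psi, g)

# gauge-transform the occupied states by a random 3x3 unitary ...
W, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
phi_gauge = lowdin_project(psi @ W, g)

# ... and the projected, orthonormalized orbitals come out identical
print(np.max(np.abs(phi - phi_gauge)))   # machine-precision zero
```

The SVD form is used because A(A†A)^{-1/2} is the unique unitary polar factor of A whenever the overlap matrix is nonsingular, which also guarantees the resulting phi are orthonormal.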
Actually, I have to say that at the time we came up with this as the most natural choice. It wasn't really pre-internet, but it was pre-digitization of scientific papers, so doing a bibliographic search was much harder. But I was sharing the office for a few months with a chemist from Delaware, Doug Doren, and because he was a chemist, when he and David talked he said: well, you know, this has actually been studied extensively in the chemistry literature. And indeed there is a long history, going back to the '60s, of localization criteria for orbitals in quantum chemistry, driven in part by the desire to have a more meaningful representation of the chemical bonds, and in part by the need to make the calculations less expensive. So we went through all this chemistry literature; but somehow for finite systems this is a very easy problem to deal with, because you don't have the conceptual issues of calculating the position operator; the difficulty was coming from the solid state. But we said: this is the localization criterion that we have defined, and at this point we can state that the driving force in choosing the gauge freedom, these U matrices, is that they should be such that the resulting Wannier functions, in this case the ones in the home unit cell, the four Wannier functions of silicon at R = 0, are as localized as possible. So, sorry, let me summarize where we are. The idea is that we get Bloch states from an electronic structure code; we perform this integral and this finite sum, these two unitary transformations; and we want to find the gauge freedoms such that the resulting Wannier functions are as localized as possible. So basically we needed to figure out how the localization functional Omega depends on our initial states psi and on our arbitrary U matrices. Now, the choice of the
localization functional, the spread around the centers, was actually quite propitious, because there is a very interesting and powerful decomposition of this localization functional. If you add and subtract the off-diagonal terms, Omega can be written identically as the sum of what we call the Omega_I functional and the Omega-tilde functional. And the beauty of this decomposition is that Omega-tilde is trivially positive definite, as you can see, while Omega_I, where the I stands for invariant, has two very important properties. One: it is also positive definite, consistent with the fact that, as we knew, the full functional Omega is positive. Two: Omega_I is gauge invariant, so no matter what unitary transformations you apply, Omega_I does not change. So when you localize your Wannier functions, you are really just finding the representation that makes these off-diagonal terms of the position operator as small as possible. The fact that Omega_I is gauge invariant and positive definite can be seen quite simply once you introduce the projection operator P onto the occupied space and its complement Q onto the rest of the Hilbert space: one can see, I will not go through the math, that Omega_I can indeed be written in terms of these projection operators, which are invariant under any gauge transformation; if I put the unitary transformations here, the U†U cancel out and everything is left unchanged. Okay. So the last element in calculating this localization functional is indeed dealing with the position operator, which, as you all know, is ill-defined in a solid, because these integrals are ill-defined, for the same reason we cannot apply a direct electric field. But luckily we could use the formulation in reciprocal space that was already
developed in the '60s by Blount, showing how the position operator in real space, written here in this rectangle, could be recast, in what was really a premonition of Berry phases and the Berry connection, as an integral over the Brillouin zone of the expectation value of the gradient operator in reciprocal space on the periodic parts of the Bloch orbitals. With this operational definition of the position operator one could work out all the algebra in reciprocal space, basically having to calculate this expectation value of the gradient; and for that we actually used finite differences. We discretize the periodic parts of the Bloch orbitals on the typical finite discrete mesh in reciprocal space, think of a regular Monkhorst-Pack mesh, and we calculate the gradient of u with respect to k by evaluating u at a collection of k-points k+b closely surrounding our point k. If this were a one-dimensional system, you would just take a k-point a little to the right and a little to the left of your k-vector; in three dimensions it can be written in more general ways. And that gave us the capability of calculating this position operator with finite differences. So the ingredients that go into these finite differences are really the scalar products between the periodic parts of the wave functions at a k-point and at its neighbors. In terms of these scalar products you can write out the position operator and its square, you can write the localization functional, and you can calculate these objects once and for all at the beginning of your post-processing, after you've done your first-principles calculation: your electronic structure code gives you these, and then you evolve, at every k-point, these scalar products with those unitary matrices U. This will lead to an evolution of the position operator and the
square of the position operator. And what you want is to find a dynamics in which these unitary rotations evolve, bringing you to the minimum of the localization functional. So if we are going to rotate the periodic parts of the Bloch orbitals, what we really want are unitary matrices, but we write them in terms of infinitesimal rotations, generated by anti-Hermitian matrices, and we take the gradient, the derivative, of the functional with respect to our unitary rotations. Once we have the gradient of the functional, we know how to minimize it: we just take small steepest-descent or conjugate-gradient steps in the direction of the gradient until we get to the minimum. It's a lot of algebra that I happily skip, but if you want, it's all here: we have the localization functional; we understand how it can be calculated starting from the overlaps between the periodic parts of the Bloch orbitals; we can calculate the position operator and its square; we can calculate how they change if we rotate the u_mk at every k-point with the gauge freedom; and we keep rotating until we get to the minimum. And if we do that, take silicon here, the four valence bands of silicon, we put all this machinery in place and, lo and behold, we get the maximally localized Wannier functions for silicon: we had four bands in the primitive cell, and this transformation gives four Wannier functions. If instead of silicon we have gallium arsenide, we basically get the same thing, a little bit more polarized towards the arsenic. If instead of crystalline silicon we have amorphous silicon, we get something that most of the time looks like a slightly deformed silicon bond, but sometimes looks more interesting, as in this bottom corner; this was actually done by Marco Fornari, Maria Peressi, and Alfonso Baldereschi, also here in the department, 20 years ago. And even though we have used the periodic-boundary-condition language, if we were to deal
with, say, an isolated system, the mathematics of the position operator would work seamlessly: we would have just the gamma point, and we would calculate the localized orbitals, recovering what the chemists had found in the 1960s; these would be the so-called Foster-Boys localized orbitals. The criteria developed in chemistry were the Foster-Boys criterion, which is the same spread around the center; the Edmiston-Ruedenberg criterion, the maximization of the self-interaction; and the Pipek-Mezey criterion: there are several different ways. Okay, so, how much time do I have, Antimo? Roughly 10 minutes? Okay. So, yes, I wanted to go slowly here, because even if two-thirds of you know this very well, it was important to make sure we weren't losing anyone. But once we have this capability of transforming Bloch orbitals into localized Wannier functions, the question is what to do with it. A shopping list of what to do next I have taken from our Reviews of Modern Physics article from 2012; a lot will be discussed in the next talks, I presume: Raffaele and David will talk about polarization, magnetization, topological properties. I'll just give you a little bit of the flavor of two or three of the first applications, some of those that are dearer to me. Of course, the first thing that came naturally was using this to analyze chemical bonding, and out came a collaboration with Michele Parrinello, who at the time was exploring very complex liquid and amorphous systems with Car-Parrinello molecular dynamics, in this case generating representative samples of amorphous silicon. The idea of Michele and Pier Luigi Silvestrelli was that we could use the centers of the Wannier functions almost as a chemical species, and build a pair correlation function not of atoms alone, but of atoms and Wannier
function centers: somehow a physical representation of where the electrons should be, of where the bonds should be. So, in addition, if you want, to a silicon-silicon pair correlation function that you could study in experiment or in a simulation, here we have as a solid line a silicon-Wannier-center pair correlation function, and that tells us that there can be very interesting lone-pair states that would actually look very trivial if you were to look not at the electrons but at the ionic coordination: it would look like a very normal fourfold coordination for silicon, but the Wannier-function center pinpoints the really electronic nature of the defect. Or, another example from the same group, looking at water, with the Wannier functions really pointing out the lone pairs in the hydrogen-bonding network.

The last step, I would say, in this formulation of localized orbitals for solids came once Ivo Souza joined David at Rutgers; I had moved to work with Roberto Car in Princeton. This second step was the so-called disentanglement of bands, that is, trying to build a localized representation of orbitals not just for the occupied bands of an insulator, but for a more general set of orbitals that would span both the occupied and the conduction bands, where there is no separation by a gap. Of course, when doing that, one loses the formal connection, maybe the intuitive connection, to chemical bonding, and in particular the formal connection of the Wannier-function centers to the electrical polarization.

So the question here is how you obtain a localized representation in cases like this, the case of copper, where we have, say, the d bands that are all mixed up with, in this case, the parabolic s band. How do you extract from this manifold, where at every k-point there is a variable number of states, a subspace of well-defined dimensionality, here a subspace of dimension five, that can now be transformed with the
Wannierization algorithms into localized orbitals? So the additional step that we had in this case was how to extract, how to disentangle, from this spaghetti a subset of spaghetti that was as meaningful as possible, and for us meaningful means that it gives rise to localized orbitals. Again, in order to properly Wannierize these states, what we wanted was that, after a five-by-five unitary transformation, they would be as smooth as possible as we span the Brillouin zone. For this we used the concept of spillage, which had probably been used in many other circumstances, but had been introduced by Daniel Sánchez-Portal, Emilio Artacho, and José Soler in the context of the SIESTA project.

So, our goal: suppose that we have copper. At every k-point in the Brillouin zone we might have the five d states, and we might have the s state, and we might have much more. So in general, at every k-point in the Brillouin zone we have a lot of states, and maybe we want to extract a subspace of dimensionality five. How do we extract a subspace, blue here, of dimensionality five, that is as smooth as possible? Well, we want to make sure that this subspace, as we surf across the Brillouin zone, changes character as little as possible, so that we have maximum overlap between S of k and the subspaces at all the nearby k-points. Again, this is stating the problem and solving it at the same time, because then it's basically just numerics: finding an algorithm that selects, out of any possible mixing of nine, ten, fifteen states, a group of five states that are going to be as smooth as possible as we surf around. Of course, keep in mind that this will mean that we cannot represent our original band structure perfectly everywhere in the Brillouin zone, because we are now limited by our dimensionality of five, and when there are six states mixed all together we are not going to represent all those bands perfectly. But we have the tool to do this. So we could take, say,
silicon, and look at the bottom of the conduction bands, and say: rather than constructing the Wannier transformation of the valence bands, which would give rise to the valence bonding orbitals, let's try to get from a certain energy window four states, out of sometimes four, sometimes five, sometimes six, the four states that give rise to the smoothest possible manifold as we surf the Brillouin zone. If we then Wannierize those states, we get something that looks like an antibonding orbital; and if we put everything together, the eight bands, we would now get linear combinations of these that are as localized as possible, so we get the sp3 orbitals. Or we could play, say, with copper: as I said, we can decide what to mix, and if we mix states that are just in this window here, we get something that, once Wannierized with a target of five, looks exactly like the d bands of copper. Of course, if we were to mix in more of the blue states in order to get a manifold of dimensionality five, the result would be more localized, but less physical. So we can make our subspace smoother in k-space, because we mix in higher states, but we actually make our Wannier functions less physical. And this gave us the chance to play around with energy windows in many different ways, and create localized states capturing whatever parts in energy of the Brillouin zone we wanted.

Okay, let me conclude. I had three examples; I'll skip two of them because I think there is just no time, I guess I was very optimistic for today. Let me actually just mention something that, again, is very dear to me: the work that we did very early on with Alfonso Baldereschi, also here in Trieste and in Lausanne, interfacing the early Wannier code with their own FLAPW code, which gave rise to this concept of an interface for the Wannier code to post-process the results of any arbitrary electronic-structure formulation. But let me just give you one recent example, so I'll skip
this example of using Wannier functions as building blocks for the electronic-structure problem, and let me give you a later example. This is work that Junfeng Qiao will present in more detail later, but it is something we have been very happy with, and it is based on a very simple idea: substituting the concept of using an energy window to decide what to keep and what to disentangle with a projectability window, deciding what to keep and what to disentangle by projecting the Kohn-Sham states onto localized orbitals. If the projectability onto those localized orbitals is very large, we should really keep those states as they are in our Wannierization; and if the projectability is in between, you know, some percent but not 100 percent, we should work with them. That gives a natural recipe, in systems like graphene where you have bands that come from antibonding combinations of sp3 orbitals, for really separating them from free-electron-like bands that have nothing to do with them. So this color coding is a different definition, a projectability window rather than an energy window, of what to throw into the disentanglement recipe. And it can work very well, because we can even do a systematic sweep of the thresholds, in projectability, of what used to be the frozen window and the outer window, finding for every material the optimal projectability thresholds. With this recipe Junfeng has actually calculated 1.2 million maximally localized Wannier functions for 18,000 inorganic materials; they all look very well localized, and I think we'll use this a lot going forward.

So let me skip also the last topic, and let me go to the acknowledgements. Of course, all of this has really been made possible by David Vanderbilt, by his ideas and his drive, in letting me pursue my dream of linear scaling as a postdoc at Rutgers. Very
soon Ivo Souza was involved in the work, and then, just a few years later, Arash Mostofi and Jonathan Yates came from Cambridge to MIT and to Berkeley respectively. In later years Giovanni Pizzi in Lausanne has really given a drive to keep pushing the code, and of course there is an entire new generation of researchers, who are all here today, and who have really made the code into a community effort that they are driving. The acknowledgements here actually go to the early stages of some of the applications that we ourselves did. Of course there is a website, and by now there are review papers and Wannier-code papers. So, I think I've gone very, very long; I thank you all for your patience, and I leave you with some of these nicely colored Wannier functions. Thank you, everyone.

Okay, we have five to ten minutes for questions. Let's start with the questions in presence, and then maybe the directors can check if there are questions on Zoom, and we will unmute them from here.

Can you hear? Yeah, you can hear me. Is there any way to check that the physicality of the Wannier functions is actually respected? Because, how do we know that maximally localizing these functions actually corresponds to reality?

Yeah, this is the million, billion dollar question, and there are different answers. First of all, if we want a connection to the real physical world, to observables, we need to stay on the side of constructing Wannier functions from the occupied bands of an insulator. When we start constructing, let's say, Wannier functions in a metal, or mixing together occupied and empty states, if we were to calculate any expectation values on those orbitals it wouldn't be meaningful, because we would be calculating an expectation value on something that has an arbitrary number of mixed-up empty states in it. It can be very useful to mix the occupied and empty states, again, if we want to construct a basis set, that is, if we want
to construct the Wannier functions like, I don't know, this is not happening on my screen, but I hope someone can click that. Yeah, there is probably a presenter trying to present, and we should tell that person not to present. Oh, thanks. So, you see these objects: this is graphene, or in this case it's a nanotube, and these objects are fantastic to represent the electronic structure of the nanotube. In particular, these purple, mushroom-like, pz-like orbitals sitting on every carbon atom are going to be very useful to represent the pi manifold of graphene or carbon nanotubes, but you wouldn't want to calculate expectation values on them.

If instead you take an insulator and you calculate maximally localized Wannier functions, then there are two important exact statements. First, they are just a unitary transformation, so you are still calculating the correct expectation values. Second, there is the connection made by David: the sum of the centers of the Wannier functions corresponds exactly to the polarization of the system. Now, these two statements are valid, to be honest, no matter what the localization recipe is. So in reality it's the localization recipe that gives you some kind of heuristic chemical interpretation, some kind of heuristic physical interpretation of the bonding and of the local dielectric properties; but even that is completely heuristic. It makes a lot of sense, but no one is telling you there is anything exact once you decompose a property of the entire occupied manifold into properties of individual Wannier functions.

That is actually what drove the chemists in the 1960s to explore localization criteria other than the spread around the center that Foster and Boys had introduced: if you were to study CO2 and look at the Wannier functions of CO2, you would see a triple bond on the C side towards the
oxygen, sorry, on C towards one oxygen and on C towards the other oxygen, which the chemists didn't like at all; and the Edmiston-Ruedenberg criterion of instead maximizing the Coulomb self-interaction was giving more chemically intuitive concepts. But of course, chemically intuitive is not an expectation value, it's not an observable, in the same way that oxidation states, to a large extent, are not observables, in the same sense that atomic charges, you know, what is the charge around an atom, is not a well-defined concept. But they work very well, so we are happy to close an eye, or even two, sometimes.

If you are connected on Zoom, please raise your hand and then we'll unmute you. In the meantime...

Okay, I guess there are no problems. I just wanted to follow up with a comment, which Nicola knows very well. There is a history of calculating the dipole moment of individual water molecules in liquid water, by summing the Wannier-center positions on the molecule and then tracking that during molecular dynamics; this is the Parrinello group and others. And that's an example where it became controversial at a certain point, because you're calculating something which is just not, in principle, an experimental measurable: there is no experiment, in principle, that can measure the positions of Wannier functions, or that can measure the dipole moment in the way it was defined there. But nevertheless it seems to be very useful in terms of understanding what's happening in liquid water. So we just agree that it's a useful concept, and go ahead.

Perhaps this is an extension of the previous question, but I just want to make sure I have it clear for myself. If you have an insulator, then the Wannier functions of the valence bands give you the average positions of the electrons, right? And that has a connection to the electron density. But if I look at the Wannier functions themselves, do they somehow connect to the electron density, or are they just some cloud that tells you
they are somewhere here, but their shape is not really meaningful? Or does it depend on the way you do the localization?

Yeah. So, the total charge density is an observable, and you can get it exactly in two different ways: in the Bloch representation, summing over all the occupied bands and integrating over all the k-points, or in the Wannier representation, where you can sum over all the Wannier functions, all four in silicon, say, and all their periodic images, and you will get the same physical object, the physical charge density. The Wannier functions, no matter if they are maximally localized, almost maximally localized, localized in different ways, or poorly localized, are still a unitary transformation of your states; so when you sum over everything in an insulator you get exactly your expectation value, your total charge density. They give rise to the exact total charge density that you had in your calculation, so they are an exact mapping, if you want, in that respect.

I see, okay, thank you very much.

Yes, there is time for one last quick question, there in the back.

I want to ask how important you consider it to build the Wannier functions for the s states, for the sigma bonds, in carbon nanotubes, in this example, to describe the transport region, to study transport properties in these systems.

So, why did I choose that? Sorry, what was the beginning of the question?

Whether it's enough to describe the pz bonds in these systems, to describe the transport region.

Ah, wonderful. So let me use exactly this example. Our choice was to disentangle a set that had, if you want, for each unit cell, I think, again in graphene, three covalent bonds per unit cell and one pz orbital for each carbon, so we wanted five objects for each unit cell. And the beauty of this: the target here is five, we construct these Wannier functions, and this is what we
get as a basis set. For this system, because there really is a clear separation by symmetry from the antibonding orbitals, this minimal set of Wannier functions works perfectly: with just those orbitals that you have seen before, you can now diagonalize the Hamiltonian in that basis, and you get the black lines that basically exactly reproduce, you see, both for a metallic nanotube and for a semiconducting nanotube, the band structure that you would obtain diagonalizing the Hamiltonian in an infinite basis set of, you know, plane waves. And because the transport properties that matter involve only one or two electron-volts around the Fermi energy, you don't really need to describe these parabolic bands here, which are actually either given by antibonding combinations of sp3 orbitals, or are just free-electron bands, like the interlayer state of graphite that, again, was discussed by Michel Posternak and Alfonso Baldereschi 40 years ago.

Okay, I think it's time to move to the next speaker. Let's thank Nicola again for the wonderful talk.
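The minimization described at the start of the talk, taking small steepest-descent steps along the gradient of the localization functional over unitary rotations, can be illustrated in one dimension with two orbitals and a single rotation angle. This is only a sketch: the grid, the Gaussian orbitals, and the step size are invented illustrative choices, not anything from the talk or from any actual Wannier code.

```python
import numpy as np

# 1D toy of "maximal localization": two orthonormal orbitals are mixed by a
# single unitary parameter (a 2x2 rotation by theta), and we minimize the
# total quadratic spread  sum_n ( <x^2>_n - <x>_n^2 )  by steepest descent.

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def normalize(f):
    return f / np.sqrt(np.sum(f * f) * dx)

# Two well-localized reference orbitals (Gaussians at -2 and +2)...
g1 = normalize(np.exp(-(x + 2.0) ** 2))
g2 = normalize(np.exp(-(x - 2.0) ** 2))
# ...scrambled by an arbitrary gauge rotation: the delocalized starting point.
a = 0.6
p1 = np.cos(a) * g1 + np.sin(a) * g2
p2 = -np.sin(a) * g1 + np.cos(a) * g2

def spread(theta):
    """Total quadratic spread of the pair after a counter-rotation by theta."""
    w1 = np.cos(theta) * p1 + np.sin(theta) * p2
    w2 = -np.sin(theta) * p1 + np.cos(theta) * p2
    total = 0.0
    for w in (w1, w2):
        rho = w * w * dx                      # probability density times dx
        total += np.sum(x * x * rho) - np.sum(x * rho) ** 2
    return total

theta, step, h = 0.0, 0.01, 1e-5
for _ in range(2000):                         # steepest descent, numeric gradient
    grad = (spread(theta + h) - spread(theta - h)) / (2.0 * h)
    theta -= step * grad

print(spread(0.0), "->", spread(theta))       # spread drops to roughly 0.5
```

In the real algorithm the rotation is a set of unitary matrices at every k-point and the gradient is taken analytically from the overlaps of the periodic parts of the Bloch orbitals; the one-angle version above only shows the descent idea.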
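The projectability-window idea described near the end of the talk can be sketched in a few lines. The thresholds, the projectability values, and the function name below are invented for illustration; they are not the parameters or API of any actual code.

```python
import numpy as np

# Sketch of a projectability window (hypothetical thresholds and data):
# p_nk = sum_I |<g_I|psi_nk>|^2 measures how much a Kohn-Sham state
# projects onto a chosen set of localized trial orbitals g_I.
P_FROZEN, P_OUTER = 0.95, 0.01   # playing the role of frozen / outer windows

def classify(projectabilities):
    """Return 'keep' / 'mix' / 'drop' for each state at one k-point."""
    labels = []
    for p in projectabilities:
        if p >= P_FROZEN:
            labels.append("keep")    # kept as-is in the Wannierization
        elif p > P_OUTER:
            labels.append("mix")     # enters the disentanglement
        else:
            labels.append("drop")    # e.g. free-electron-like bands
    return labels

# Invented projectabilities for five states at one k-point:
print(classify([0.99, 0.97, 0.40, 0.15, 0.001]))
# -> ['keep', 'keep', 'mix', 'mix', 'drop']
```

The point of the recipe is that these thresholds replace the energy-window choices, and can be swept systematically to find optimal values per material.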
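The water-dipole discussion in the Q&A can be made concrete with a small sketch: each ion contributes its valence charge at the ionic position, and each doubly occupied Wannier function contributes a charge of minus two at its center. The geometry and center positions below are invented, roughly water-like numbers, not output of any actual calculation.

```python
import numpy as np

# Heuristic molecular dipole from Wannier centers (atomic units):
# ions carry +Z_valence at their positions, each doubly occupied
# Wannier function carries -2e at its center.

ions = [(6.0, np.array([0.00, 0.00, 0.00])),   # O, valence charge +6
        (1.0, np.array([1.43, 1.11, 0.00])),   # H
        (1.0, np.array([-1.43, 1.11, 0.00]))]  # H

wannier_centers = [np.array([0.50, 0.40, 0.00]),    # O-H bond pair
                   np.array([-0.50, 0.40, 0.00]),   # O-H bond pair
                   np.array([0.00, -0.30, 0.50]),   # lone pair
                   np.array([0.00, -0.30, -0.50])]  # lone pair

# Total charge is neutral (6 + 1 + 1 - 4*2 = 0), so the dipole is
# independent of the choice of origin.
dipole = sum(z * r for z, r in ions) + sum(-2.0 * r for r in wannier_centers)
print(dipole)  # points along +y, towards the hydrogens, for this made-up geometry
```

As the Q&A stresses, this per-molecule dipole is a useful heuristic rather than an observable: only the total polarization of the occupied manifold is exact.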
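The last answer, diagonalizing the Hamiltonian in the Wannier basis to reproduce the band structure, can be sketched for a one-band model. The on-site and hopping values are invented, and the single-band 1D chain stands in for the five-orbital nanotube basis of the talk.

```python
import numpy as np

# One-band Wannier-basis band structure: real-space matrix elements
# H(R) = <w_0|H|w_R> are Fourier-summed to H(k), which is then
# diagonalized (trivially 1x1 here). Values in eV are made up.

hoppings = {0: 0.0, 1: -1.0, -1: -1.0}   # nearest-neighbor 1D chain

def interpolated_bands(kpts):
    """Return the band energy at each k from the Wannier-basis Hamiltonian."""
    eps = []
    for k in kpts:
        hk = sum(t * np.exp(1j * k * R) for R, t in hoppings.items())
        eps.append(hk.real)               # H(k) is Hermitian; 1x1, so real
    return np.array(eps)

kpts = np.linspace(-np.pi, np.pi, 101)
bands = interpolated_bands(kpts)
# This model's exact dispersion is e(k) = -2 cos(k):
assert np.allclose(bands, -2.0 * np.cos(kpts))
```

With several Wannier functions per cell, H(k) becomes a small matrix whose eigenvalues give the interpolated bands at arbitrary k, which is why the black lines in the nanotube example reproduce the plane-wave band structure so cheaply.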