we can start. Yeah, the two words. Exactly, perfect. Okay, so we start the next session, where Davide will talk about general implementations and common approximations in the Bethe-Salpeter equation. Okay, so thank you, Andrea. The idea of this lecture is to go a bit more into the details of the implementation and how to run Bethe-Salpeter simulations. But let me start by stressing again this point, which you have already seen in the lecture of Fulvio. This was the end of my linear-response lecture: we had this Dyson equation for the response function within TDDFT, with the TDDFT kernel, which fails completely in reproducing the optical absorption of silicon, and even worse in the case of solid argon. And the reason, we now know, is that this approach misses excitonic effects. The approach was based on the idea that we take the response function, we start from a DFT calculation, from that we go to G-space, we go through this macroscopic averaging procedure, and we get a macroscopic dielectric function. So this is not a good scheme. And then we ended up with a more refined scheme where you start from DFT, you get this response function, but you use it just for the screening. So we have this RPA screening. And this is again the starting point to build, on one side, the GW self-energy with the quasi-particle corrections, and on the other side the kernel of the BSE, so the electron-hole interaction. And I stress again that in the GW case you take a frequency-dependent screening, while in the BSE case you take a static screening. So you put all this together — the quasi-particle energies and the kernel — inside a new Dyson equation, and you end up with a new description of the macroscopic screening which captures excitonic effects. Okay, you have already seen that, but let me comment again, in particular on the case of solid argon. The red dots are the experimental data: you have this very sharp peak, which we now know is an excitonic peak.
It's completely missing at the RPA level, the blue line, or even at the TDDFT level within the adiabatic LDA. But when you add this piece of the kernel, the electron-hole interaction, you get this very sharp excitonic peak. So we start from this concept. The standard way to derive the Bethe-Salpeter equation — or let's say historically one of the first ways — was through Hedin's equations. I'm not going to discuss that, because you saw the alternative derivation from Fulvio. Then I will jump a bit more into the description of how the spin enters the Hamiltonian. After some long discussion on that, I will show a few examples of optical absorption, and then I will also discuss other physical properties you can get from the BSE. The focus, or the standard way of speaking about the BSE, is absorption, but indeed there are many more experiments that you can capture once you solve the BSE. At the end, if I have time, I will show you the connection between the YAMBO input file and the BSE; if not, I will show that at the beginning of the tutorials in the afternoon. So the Bethe-Salpeter equation is this equation. Fulvio already derived it. Let me stress this small "4" that you see on the top left of this polarizability. It is a four-point polarizability, which means even the independent-particle one is written like that: you have here four indices. And the deeper reason for that is that you have this kernel, v minus W, whose two pieces enter in two different ways. v connects the two independent-particle Green's functions in this direction, while W connects the two independent-particle Green's functions in this direction. This means that you cannot just take a closed two-point equation — which would mean contracting the four indices in pairs and writing a two-point Dyson equation — because this W forces you to keep the bubble, let's say the independent-particle term, open.
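Schematically, the four-point Dyson equation discussed here can be written as follows (a sketch reconstructed from the discussion; the delta-function structure of the kernel follows one common index convention, and the numbered arguments collect space, time and spin coordinates):

```latex
{}^{4}P(1,2;3,4) = {}^{4}P_0(1,2;3,4)
+ \int \! d5\,d6\,d7\,d8 \;\; {}^{4}P_0(1,2;5,6)\,
\Big[ \delta(5,6)\,\delta(7,8)\, v(5,7) \;-\; \delta(5,7)\,\delta(6,8)\, W(5,6) \Big]\,
{}^{4}P(7,8;3,4)
```

The two delta structures make explicit why v and W "connect the Green's functions in different directions": v contracts each electron-hole pair at a single point, while W keeps the pair open, so the equation cannot be closed into a two-point one.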
So these two terms are indeed the electron-hole exchange, this one, and the electron-hole interaction. Let me also comment on the fact that this one is called the electron-hole exchange, and it comes from the functional derivative of the other term, the Hartree term. The Hartree term is the direct interaction, the classical term, in the electron language; when you shift to the electron-hole language, it becomes the exchange term. And instead this W comes from the derivative of the screened exchange: the Fock term, or the screened Fock, is the exchange term in the electron language, and when we move to the electron-hole language it becomes the direct term for the electron-hole pairs. So we said that we statically screen W. W in general is a quantity which depends on two times, and instead we take this approximation, which means omega equal to zero. In the, let's say, Feynman-diagram representation, it means that you have the same time at the two ends of the electron-hole interaction. Now, it is a four-point equation, so it is not convenient anymore to solve it in G-space, because in G-space you would have four G-vectors, and it would be too demanding — you would have a really huge matrix. It is much more convenient to move to the transition-space representation. You still have four indices, n1, n2, n3 and n4, but the advantage is that you just need a few bands to converge, so you can keep the size of the matrix reasonable. The tricky point, usually, in extended systems is that there is a momentum index attached to each of these, and the convergence with the momentum sampling is, let's say, the most demanding part. So, okay, this is the form of the Hamiltonian: you have here your quasi-particle-corrected energies, here you have the electron-hole exchange and the screened electron-hole interaction, and you have these occupation factors weighting the kernel.
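To make the transition-space form concrete, here is a minimal toy sketch of the resonant BSE Hamiltonian, H_{vc,v'c'} = (E_c − E_v) δ_{vv'} δ_{cc'} + 2V_{vc,v'c'} − W_{vc,v'c'} (the singlet form, whose factor of 2 is derived later in the lecture). All matrix elements are random placeholders, not real Kohn-Sham or GW quantities:

```python
import numpy as np

# Toy resonant BSE Hamiltonian in transition space.
# Diagonal: quasi-particle transition energies E_c - E_v.
# Kernel: 2*V (electron-hole exchange) - W (screened electron-hole interaction).
rng = np.random.default_rng(0)
nv, nc = 2, 3                       # valence / conduction bands kept
nt = nv * nc                        # number of v -> c transitions

E_v = np.array([-1.0, -0.5])        # toy quasi-particle valence energies
E_c = np.array([1.0, 1.5, 2.0])     # toy quasi-particle conduction energies

# transition energies on the diagonal, ordered as (v, c) pairs
diag = np.array([E_c[c] - E_v[v] for v in range(nv) for c in range(nc)])

def herm(n):
    """Random Hermitian placeholder for a kernel block."""
    a = rng.normal(size=(n, n)) * 0.05
    return (a + a.T) / 2

V, W = herm(nt), herm(nt)
H = np.diag(diag) + 2.0 * V - W     # resonant (Tamm-Dancoff) Hamiltonian

exc_energies = np.linalg.eigvalsh(H)   # excitonic energies
```

In a real calculation each of the indices also carries a k-point label, which is what makes the convergence demanding.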
And if you split these n1 and n2 into valence-conduction and conduction-valence indices, you end up with a matrix like this. Now, what we are interested in are excitons built from valence-to-conduction transitions. This is this block, and we can forget about the rest, also because, I mean, this one is zero due to these occupation factors, and this one is not important. Indeed, if you want, you can also try to rewrite these in a more symmetric way with respect to the occupations, with the square root of the occupations on the left and on the right, and you can prove that this term also goes to zero; then this part becomes independent, and we just focus on this. So, when we solve the Bethe-Salpeter equation, we solve it in this space, valence to conduction. Now, we have both the valence-to-conduction transitions, so an electron moving up, but also the conduction-to-valence ones, which give minus the complex conjugate of this valence-to-conduction term. So here you see valence-conduction again, and then you have this coupling between transitions going up and transitions going down. This one is much more intuitive physically: you say, I have the ground state where all the electrons are in the valence bands, and I can excite them up into the conduction bands. But in reality your ground state, in this representation — which is just a mathematical representation in your chosen basis — may not have all the electrons in the valence bands; it could be a more correlated one, for which you would need, let's say, a multi-determinant description. And then it means that, in principle, you could also have excitations from a partially occupied conduction state going down. This part is less intuitive — this one is the most intuitive — and the two can couple. So this is the structure of the matrix.
And in many cases the Tamm-Dancoff approximation is very good, so you can neglect the coupling between excitations and de-excitations, and you can just focus on this term, basically, to describe the excitons. Indeed, within Tamm-Dancoff, if you just want the absorption, you can really do a resonant-only calculation, and this is what we often do in practice. I will show examples where this is not a good idea, but in general it is a good approximation. So, let me focus on spin. I will discuss spin at length — it will take a bit of time during this lecture — just to give you all the details of how the spin enters. The first simple point is that we can have different levels of calculation in the ground state. We can do a collinear calculation for a non-spin-polarized system, where both S squared, the total spin, and Sz are good quantum numbers for your system. You can have a magnetic system, but still without spin-orbit coupling: Sz is still a good quantum number, though you will see that S squared is a bit less of a good quantum number. And then you can have the non-collinear case, where you have spin-orbit coupling and even Sz is not a good quantum number. In your DFT calculation, this shows up in the kind of wave function you have. In the collinear case, your wave function can be represented as a spinor, but then you forget that it is a spinor and you just take either the up or the down component. In the non-collinear case, you deal with a full spinor. And the distinction between non-spin-polarized and spin-polarized is that in the non-spin-polarized case the up and down components are the same, so you can even forget about the spin index and you have just a single wave function. When instead you deal with a spinor, you have a band index for the spinorial state and for the spinorial energy, and inside there is also the spin index.
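The block structure just described — resonant block plus excitation/de-excitation coupling, with Tamm-Dancoff keeping only the resonant part — can be sketched like this (R and C below are random placeholders for the resonant and coupling blocks, not real BSE kernels):

```python
import numpy as np

# Toy sketch of the full excitonic matrix in the (v->c, c->v) basis:
#     H_full = [  R    C  ]
#              [ -C*  -R* ]
# R couples excitations among themselves (Hermitian); C couples excitations
# to de-excitations (symmetric). Tamm-Dancoff keeps only R.
rng = np.random.default_rng(1)
nt = 4                                     # number of v -> c transitions

R = np.diag([1.0, 1.2, 1.5, 1.9]) + 0.05 * rng.normal(size=(nt, nt))
R = (R + R.conj().T) / 2                   # resonant block is Hermitian
C = 0.02 * rng.normal(size=(nt, nt))
C = (C + C.T) / 2                          # coupling block is symmetric

H_full = np.block([[R, C],
                   [-C.conj(), -R.conj()]])   # full, non-Hermitian problem
H_tda = R                                     # Tamm-Dancoff approximation

full_eigs = np.linalg.eigvals(H_full)      # come in +/- pairs
tda_eigs = np.linalg.eigvalsh(H_tda)
```

The full matrix is pseudo-Hermitian rather than Hermitian, which is why its eigenvectors are no longer orthogonal — a point that comes back later in the spectrum formula beyond Tamm-Dancoff.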
So, in the many-body language, when you compute the Green's function — I recall that this is a representation of the Green's function — in the collinear case you can assign a specific spin state to the Green's function, while in the non-collinear case you just have a representation of, let's say, my spinor. And this means that when I do all the derivations with my Feynman diagrams and Green's functions, formally nothing changes. This, for example, is a vertex, and it is the same in the collinear and non-collinear cases. It is just that in the collinear case I can explicitly put a spin index, and I know that, for example, the interaction conserves the spin, so I need to have the same spin on the left and on the right. While instead, in the non-collinear case, the spin index enters in the integration of this vertex, so I have a sum over the spin index, and I cannot say much anymore about the spin of the i state or of the j state. So why is this important? Let's take my BSE, and I start with the collinear case. I can label all my states with the spin indices, and I can say what happens to the three terms of the Hamiltonian. The diagonal part, of course, is diagonal, so I have conservation of everything: same v v', same c c', and the same spin for the conduction and valence states. And instead, what happens to the exchange and to the correlation terms? First of all, I recall that the exchange can be represented in this way: you have this matrix element between the states, with the bare interaction in one case and with the screened interaction in the other, and of course here we have a double G sum, so this is the demanding part of the BSE calculation. So what happens to the spin? Since I have the spin indices explicitly, and I said that at the vertex the spin must be conserved, in the exchange term I have this kind of conservation.
So I have an initial electron-hole pair, which couples via the electron-hole exchange to another pair, and I need the spin of the conduction and the valence states to be the same, both in the initial and in the final pair. The conservation is instead different in the case of the screened interaction. In this case, in the initial electron-hole pair the valence and the conduction states can have different spins, but then the spin must be conserved when I scatter towards the final valence-conduction pair. So you see that the conservation this time is between c c' and v v', while before it was between c v and c' v'. And this has an impact on the structure of the matrix. Let me go very slowly: say I take a two-level system with just two electrons, non-spin-polarized, and I start to consider the possible transitions. For the spin-up transition, the spin is conserved and I have all the terms. Instead, if I consider this spin-flip transition, the down-up transition, I see that the spins of the valence and conduction states are different, and then the exchange term is no longer there — there is just the electron-hole interaction. The same holds for the opposite spin-flip transition. And finally, for the down-down transition I again have all the terms. But then pay attention to this term here, the one relating the up-up to the down-down transition: here you just have the exchange term. And this, again, is because of these spin rules. So the matrix has this structure. I'm starting from a ground state which is a singlet, so no spin, and you see that I can immediately block-diagonalize a part of the matrix: this one is completely independent, it has zeros all around, and it goes on this side. This is the spin-flip matrix — I will briefly come back to it at the end — which contains the spin-flip transitions; usually we don't consider them in the BSE.
And this other one contains the spin-conserving transitions, so Sz is conserved, and here again the matrix has a nice structure. I can block it further: into a triplet state, where Sz is conserved but the total spin changes, and into another state, the singlet, where even the total spin is conserved. These are the two eigenvectors which I can use to do this blocking. So one of the excitations is this one, a triplet — and indeed it is the same as this one — and the other one is the singlet. And since the singlet is the only one which is dipole-allowed, usually for non-spin-polarized systems you use this expression. This is for one single state; then you have the dependence on k and on the other states, and you have a matrix with all these terms. So this is the reason why you usually see this factor of two in front of the electron-hole exchange. This is what happens for a non-spin-polarized system. What happens for a spin-polarized system? Say I complicate my model a bit: I put in one extra electron, so now the energies of the up and down levels are different, and so are the wave functions. Then I consider this state frozen, and I just look at the transitions from here to here, so I still have a four-by-four matrix. The structure remains pretty much the same, but now I have to take into account that this v can be up-up, down-down, and so on. I can still block my matrix into spin-flip transitions and spin-conserving transitions, but now, at this step, I cannot block it any further. And the reason is, somehow, that by having these different wave functions in the up and down channels, I'm breaking the total spin of the system. So let me explain a bit more what I mean by that. I'm playing with this model: I have these two electrons here and this one electron here, and I'm considering transitions from i to k. And I try to write down the possible excited states which I can get by exciting either the spin-up electron here or the spin-down electron here.
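The singlet/triplet blocking and the origin of the factor of two can be checked on the two-by-two spin-conserving block itself. In the (up-up, down-down) transition basis, the diagonal carries dE + V − W while the off-diagonal up-up/down-down coupling carries only the exchange V, exactly as stated above (the numbers are illustrative placeholders):

```python
import numpy as np

# 2x2 spin-conserving block for a non-spin-polarized two-level system.
# Basis: (up-up transition, down-down transition).
dE, V, W = 2.0, 0.3, 0.5

H = np.array([[dE + V - W, V],
              [V, dE + V - W]])

evals, evecs = np.linalg.eigh(H)

# (1,-1)/sqrt(2) combination -> triplet:  dE - W       (dark)
# (1, 1)/sqrt(2) combination -> singlet:  dE + 2V - W  (dipole-allowed)
triplet, singlet = evals[0], evals[1]
```

The singlet eigenvalue dE + 2V − W is where the factor of two in front of the electron-hole exchange comes from.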
I can try to take linear combinations, and I end up with these two states. Now, one of these two states is a good eigenstate of the spin, a doublet. I'm starting from a ground state which is a doublet, this one, and I have an excited state which is a doublet, which is reasonable. Instead, this other state is a triplet, and it is not a good state, because it is not reasonable to get a triplet from a doublet. One would expect either a doublet, conserving the total spin, or a quadruplet, changing the total spin by one. And the reason is that to get the correct, let's say, spin structure — my doublet and also my quadruplet — I need an extra state, this one, which I can reach only with two excitations. This is what is called a double excitation in the quantum chemistry community. This problem is very well known, and these are references from the quantum chemistry literature. In the extended-systems community we usually don't care much about that, and the reason is that the total spin of the system is usually not such an important quantum number, especially because magnetic materials are usually metals, and then you cannot really define the total spin. But one should keep that in mind, especially if you study, for example, defects in extended systems, where you can have some kind of isolated-system physics, or if you use the BSE for molecules and you have a magnetic molecule — then this is an issue. So why do we miss double excitations? Because, I remind you, we take the BSE with a static kernel, which just mixes single-particle transitions. If we want extra transitions — double, triple excitations — we need a dynamical kernel, and this is the same as in the GW physics: in GW we have a dynamical self-energy, and this dynamical part takes into account all the other excitations, which here we are neglecting. Okay, so let me switch instead to the non-collinear case.
As I said, in the non-collinear case we still have the same terms, but now there is no spin conservation at the vertex. So when you do the integrals, you have a sum over the spin index. This is true for the exchange, and the same for the correlation. You have a sum, and then instead of having this very nice matrix, we end up with this huge matrix where all the elements are different from zero. This is what you do when you do a BSE calculation starting from a ground-state simulation with spinors. And one message is that you cannot say much about the Sz quantum number either: you are mixing the spin-conserving transitions, which we had here and here in the two blocks, with the spin-flip transitions. So in a sense you are mixing the transitions which give excitons with these spin-flip transitions, which are the source of magnons. You have a whole matrix which somehow mixes excitons and magnons. Of course, usually the spin-orbit coupling is weak, and this mixing is weak, but in general, when you solve the BSE, you have all the poles. And since you cannot block the matrix anymore, its size is four times the size of the matrix you would have in the non-spin-polarized case without spinors, and also twice the size of the matrix in the spin-polarized case. If you remember, in the non-spin-polarized case we did the double blocking, so we ended up with a single term. In the spin-polarized case we did the first blocking, but then we could not block further, so twice the size. Here we cannot say anything a priori, so four times the size. Okay, so this was pretty much everything on the spin part. Then, in general — let's forget for a moment about spin — we have this big matrix, and we have to solve it. In the YAMBO code we have different strategies to solve this excitonic problem. One way is that you just take this matrix — I mean, this expression; as you see, it's an inversion.
And then you directly perform the inversion of this expression. Of course, the inversion is demanding, because you have to do it for each frequency. One advantage is that, in principle, here you could also easily add a frequency-dependent kernel. We cannot do that yet — or at least we are not going to use it — but this is a scheme where you invert frequency by frequency, so you could eventually include dynamical kernels. It gives you just the spectrum, and you can do it through the LAPACK libraries. It easily becomes very demanding, but you can also do the inversion in a more perturbative way, which is a bit more efficient. An alternative is to use what is called the Lanczos-Haydock scheme. It is an iterative scheme, the most efficient one to solve this excitonic problem. It has good scalability, and it is the scheme you usually use if you want to get the spectrum of a big material. Then, on the other side, we have diagonalization. The big advantage of doing a real diagonalization of this excitonic matrix is that you get the excitonic eigenvectors — the excitonic eigenvectors here are represented by these capital A with indices n1 and n2. Again, you can do a brute-force diagonalization: you will have a huge matrix in many cases, and you can call LAPACK, or ScaLAPACK in a parallel environment. You will get all the eigenvectors and eigenvalues. But that is very demanding, and in general it's not really what we need, because if we are interested in excitons and excitonic wave functions, usually we want just a few bound states. And then in YAMBO you have an alternative approach via the SLEPc/PETSc libraries, which are exactly meant to take this kind of eigenvalue problem and solve it iteratively. It is an iterative scheme which still gives you the excitonic states and the excitonic energies, but you select just a few. So you tell it, for example, that you just want the first 100 states.
You will see that in the YAMBO input file. You will have just a portion of the spectrum, but you will know everything about that portion. So let me comment on the general expression of the dielectric function from diagonalization, either full diagonalization or SLEPc. It has this shape: you have here this matrix element, and since you are taking the q-going-to-zero limit, this becomes the dipoles; you have the excitonic wave functions, and then you have the excitonic poles. You also have this overlap matrix, in case you go beyond the Tamm-Dancoff approximation, because in that case the full Hamiltonian is not Hermitian anymore, the eigenvectors are not orthogonal anymore, and you have to take that into account. This is rather technical — in the Tamm-Dancoff approximation you can forget about it. So if you make the Tamm-Dancoff approximation, and in particular you consider the resonant block, the expression is much simpler, and you see that you end up with something which looks pretty much like a Fermi golden rule. The difference is that here you have the excitonic energy, and here you have this object, which you can call the excitonic dipole. So, under all these approximations, especially Tamm-Dancoff, you can really think of your spectrum as a set of transitions from the ground state to the excitonic states, weighted by the excitonic dipoles. Okay, I'm not going through the details of the inversion solver, because I think we are not using it much in this school. Let me instead comment on the Haydock-Lanczos approach, which is the most efficient one. The idea is that you can express your, let's say, linear-response function in this way: you have this operator, which is the excitonic Hamiltonian, and then you have these terms on the left and on the right, which are basically the excitonic dipoles of your system.
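The Fermi-golden-rule-like expression can be sketched numerically: diagonalize a (toy) resonant Hamiltonian, rotate the independent-particle dipoles into excitonic dipoles D_L = Σ_vc d_vc A^L_vc, and sum Lorentzians at the excitonic energies. Everything below is a random placeholder, not real BSE data:

```python
import numpy as np

# Tamm-Dancoff absorption sketch:
#   eps2(w) ~ sum_L |D_L|^2 * eta / ((E_L - w)^2 + eta^2)
rng = np.random.default_rng(2)
nt = 6
H = np.diag(np.linspace(2.0, 4.0, nt)) + 0.05 * rng.normal(size=(nt, nt))
H = (H + H.T) / 2                       # toy resonant BSE Hamiltonian
d = rng.normal(size=nt)                 # toy independent-particle dipoles

E, A = np.linalg.eigh(H)                # excitonic energies / eigenvectors
D = A.T @ d                             # excitonic dipoles D_L

eta = 0.05                              # Lorentzian broadening
omega = np.linspace(1.0, 5.0, 400)
eps2 = np.array([np.sum(np.abs(D)**2 * eta / ((E - w)**2 + eta**2))
                 for w in omega])
```

The spectrum is then literally a set of poles at the excitonic energies, each weighted by its excitonic dipole strength.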
So it's an expression in terms of a fraction, and what you can do is solve it by expressing it as a continued fraction. It means that you first start with an initial simple fraction, which is the zero-order approximation, and then you do an iterative scheme — it's a mathematical strategy. And with far fewer iterations than the total size of the matrix, you can get a reasonable spectrum. So this is how it works. You start iterating the Haydock solver, and after, for example, in this case, 40 iterations, you get this blue spectrum, which is already pretty similar to the exact one, the gray shadow in the background. Then the code compares iteration 41 with the previous one, to get an idea of the error, and depending on how large the error is, it decides whether to go on or not. When this error becomes low enough, according to the threshold you set in the input file, the procedure stops — in this case, after 86 iterations. The scaling of this approach, I think, goes quadratically with the number of electron-hole pairs, so it is still quite fast, and much more efficient than a standard diagonalization, which goes as a power of four, if I'm not wrong. So this is the case of the resonant-only term, the standard Lanczos. It can also be extended to the non-resonant matrices; this was done in this paper, for example, and there is also a PRB, I think, with the details. Now, a couple of examples of optical spectra obtained via the BSE. I'll use them to comment a bit on the physics. Okay, first of all, we have these two spectra, which we have already seen, so I stress again that the main role of the BSE is to capture the bound excitons — both here, in the case of argon, where we have a strongly bound exciton, with a binding energy of about 2 eV,
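The continued-fraction idea can be sketched in a few lines: a plain Lanczos recursion started from the dipole vector produces coefficients a_i, b_i, from which the resolvent matrix element ⟨d|(z − H)⁻¹|d⟩ is evaluated as a continued fraction, without ever diagonalizing H. The matrix and vector below are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
H = np.diag(np.linspace(1.0, 5.0, n)) + 0.01 * rng.normal(size=(n, n))
H = (H + H.T) / 2                    # toy excitonic Hamiltonian
d = rng.normal(size=n)               # toy dipole vector

def lanczos(H, v0, niter):
    """Plain Lanczos recursion; returns diagonal (a) and off-diagonal (b) coefficients."""
    a, b = [], []
    q_prev = np.zeros_like(v0)
    q = v0 / np.linalg.norm(v0)
    beta = 0.0
    for _ in range(niter):
        w = H @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        a.append(alpha)
        b.append(beta)
        q_prev, q = q, w / beta
    return np.array(a), np.array(b)

def cont_frac(a, b, z):
    """Evaluate the continued fraction 1/(z - a0 - b0^2/(z - a1 - ...)) from the bottom up."""
    g = 0.0
    for ai, bi in zip(a[::-1], b[::-1]):
        g = 1.0 / (z - ai - bi**2 * g)
    return g

a, b = lanczos(H, d, 100)            # far fewer iterations than n
z = 3.0 + 0.1j                       # frequency + broadening
g_lanczos = (d @ d) * cont_frac(a, b, z)
g_exact = d @ np.linalg.solve(z * np.eye(n) - H, d)
```

With 100 iterations on a 200-dimensional toy problem the continued fraction already reproduces the exact resolvent closely; in practice one monitors the change between consecutive iterations and stops at a threshold, as described above.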
but also in the case of silicon, which is a simple semiconductor. So this is the independent-particle part — let me recall: you do DFT and you get this spectrum, Kohn-Sham energy differences. Then you do the quasi-particle corrections, so you open the gap and you shift the spectrum, and you get this dashed line. And then you do the BSE. In this case the binding energy is not that large: in bulk silicon, the shift from here to here is a few hundred meV, or even just a few tens of meV. But the shape of the spectrum changes, and in particular there is an enhancement of the first peak. And this is a general behavior: the BSE gives a binding energy, but also an enhancement of the oscillator strengths. Okay, so the approach has been applied to many more materials — silicon dioxide, lithium fluoride, this is carbon — and also to surfaces and to liquid systems. The message is just that it is a very well-established scheme; it works for a wide range of systems. And here is an example with carbon nanotubes, which I also use to give you a message that is, I would say, true in general. Here you see what happens to the binding energy of the first exciton and to the GW gap of a carbon nanotube: you see that they change as a function of the size of the nanotube, so of the curvature. The two are pretty sensitive, and I would say that this is true in general. When you change the system and you play with systems, you will see that the electronic gap can change a lot, and the binding energy can change a lot as well. But in general the two change in opposite directions, and then if you look at the peak of the first exciton, it stays pretty much in the same position.
This is, for example, also true if you take transition metal dichalcogenides: take the bulk version, you have an exciton which is here; take the 2D version, you have an exciton which is more or less in the same place, but the gap and the binding energy are very different. Okay, another point is the convergence. As I said, the convergence with respect to the k-point sampling is particularly important, and it is even more important if you go towards isolated systems — this is the case of a 2D material. It is related to what Alberto was discussing for the GW case: in a 2D material, the exact macroscopic dielectric function has to go to one, but to get that in principle you would need an infinite box, and you cannot put an infinite box in your simulation. So you use a smaller box, and this is what happens to your dielectric function: here we have the isolated limit, here we have the bulk limit, and when you add vacuum you move towards the former, but the approach to the isolated-system limit is very slow. So what you do is put in a Coulomb cutoff, a truncated interaction, and this helps a lot — it helps in GW and also in the BSE. So if you run tutorials on 2D materials, remember to include the truncated interaction; you will see how to put that in the input file. Okay, also a comment on the Tamm-Dancoff approximation. We said this is the structure of the matrix, and in general it's a good idea to neglect the coupling terms in extended systems. Here, in this paper, they study the optical absorption of a 1D system, and the message is: if you look at the spectrum along the periodic direction, then the Tamm-Dancoff approximation is pretty good. But if you start to tilt the direction of the field with respect to the axis of the carbon nanotube, then you see that the Tamm-Dancoff approximation breaks down in some regions of the spectrum.
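To illustrate what the Coulomb cutoff does, here is a small sketch of one common slab truncation (the Ismail-Beigi form) for a system periodic in the xy plane with cell height L along z; the specific numbers are arbitrary, and YAMBO's own implementation details may differ:

```python
import numpy as np

# Bare vs slab-truncated Coulomb interaction in reciprocal space:
#   v_cut(q) = (4*pi/q^2) * [1 - exp(-q_xy * L/2) * cos(q_z * L/2)]
# The truncation removes the spurious interaction between periodic
# replicas of the 2D layer along z.

def v_bare(q):
    return 4.0 * np.pi / np.dot(q, q)

def v_cut(q, L):
    q_xy = np.hypot(q[0], q[1])
    q_z = q[2]
    return (4.0 * np.pi / np.dot(q, q)) * \
        (1.0 - np.exp(-q_xy * L / 2.0) * np.cos(q_z * L / 2.0))

L = 20.0                               # cell height along z (toy value)
q_inplane = np.array([0.1, 0.0, 0.0])  # small in-plane momentum transfer
ratio = v_cut(q_inplane, L) / v_bare(q_inplane)   # truncation reduces v at small q
```

At small in-plane q the factor in brackets goes to zero linearly in q, so the truncated interaction diverges only like 1/q instead of 1/q², which is precisely what tames the slow vacuum-size convergence mentioned above.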
And so the general message from this is that Tamm-Dancoff is good for extended systems and usually bad for isolated molecules, although going beyond Tamm-Dancoff is always risky. I'm not going to say much, but there are many cases in which going beyond Tamm-Dancoff changes the spectrum a lot, but you also get, let's say, some weird results. Okay, a bit of literature. How am I doing on time? I don't have a chairman, I chair myself — okay, I think I still have some time. So here I show you some reference papers which you may find interesting, from the very old ones — the paper of Bethe and Salpeter, where you find the first derivation and application of the Bethe-Salpeter equation — to more and more recent papers; if you are interested, you can have a look at these. In particular, in the, let's say, ab initio community, there is this very nice review, which I wouldn't suggest as a paper to study, because everything is compressed, but if you already know what GW-BSE is and you need to check some details, this is a good reference review. And now let me jump to the last part: more response functions. We have stressed a lot that the BSE is important to get the absorption spectrum and the excitons — you can get the excitonic wave functions, the excitonic peaks — but indeed, once you solve the BSE, you have an instrument, this four-point response function, which you can use for much more. So I start by showing you this: this is usually what we use to compute absorption, so we use the BSE to compute the density-density response function, and we get this longitudinal dielectric function, if you recall from the lecture of the first day. Indeed, I can go beyond that: I can construct what is called the dipole-dipole response function, and from this I don't just have the longitudinal term, but I can get the full dielectric tensor. In some cases this is important; I will show you when.
And then I also recall that from the BSE you can get the current-current response function, which is an alternative way to get the full dielectric tensor. In the TDDFT world, of course, one works with the density-density response function, but the BSE is much more general and you can compute any of these response functions. So let me highlight a bit the difference between the length gauge — going through the density-density or the dipole-dipole response function — and the velocity gauge, where you go through the current-current one. The main message is that the velocity gauge is much more general and is very good for formal derivations: you have the vector potential, so you can describe both longitudinal and transverse fields, you have everything there. But there is a drawback: this approach is very delicate with respect to the omega-going-to-zero limit. Indeed, if you look at the expression here, there is this one-over-omega-squared, and there is also this diamagnetic term, and in a gapped system you don't have a one-over-omega-squared divergence, which means that this term has to cancel exactly this one in the omega-equals-zero limit. This is a sum rule which is very nasty to respect numerically. The length approach, instead, is numerically much more stable. As I said, it is compatible with TDDFT; however, especially within TDDFT, you have to pay attention to the q-equals-zero limit. And this is related to what Fulvio was discussing before: in the q-equals-zero limit you have to pay attention to the direction, whether you are describing a longitudinal or a transverse excitation. Here it is all a bit hidden in the details, because we have a theory which is in principle longitudinal. Besides that, you can use the response function — and then eventually move up to the BSE level, although it's usually not useful — to also describe intraband transitions. So far we have discussed semiconductors and valence-to-conduction transitions.
If you have a metal, you can also capture intraband transitions. In the length-gauge approach, this comes through the Q-equals-zero limit: you take excitations at small Q, you reach the Q-equals-zero limit, and you get these intraband transitions. This term is super nasty to converge with respect to the k-point sampling, and indeed, in the YAMBO code, we use a Drude model to account for it in solids. Usually, I mean, you don't want to go up to the BSE; we discussed that already in answering one of the questions. But in general, if you have that in the zero-order response function, you could also go beyond. And let me also point out a funny thing in the velocity-gauge approach: this one-over-omega-squared divergence is there by default, let's say, and there is a sum rule that removes it for systems with a gap. For metals, the Drude term enters via a breaking of that sum rule. So it is no longer a small-Q transition; it enters in this funny way, and in this case, however, it is super nasty to converge with the number of empty states. Okay, this was just on the Drude term; much more interesting physics can be gained if we switch from, let's say, the standard dielectric function to other properties of materials. Indeed, when you have this four-point response function, ijlm, depending on the dipoles you select you can also construct, for example, this beta tensor, where you have on the right the electric dipole and on the left the magnetic dipole, and you can use that to describe natural circular dichroism, for example. If you take the spin dipoles, this is the S-plus, the dipole which describes a spin-flip transition, you can build the chi-plus-minus response function, and you can describe spin waves. So the message is: you solve the BSE, you have a lot of information, and you can also use it to get interesting physical properties to compare with experiments. And I show again a few examples.
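The Drude-model patch described above can be sketched numerically. The Drude plasma frequency omega_D and the broadening gamma below are made-up inputs, mirroring the fact that YAMBO also expects them from the user rather than computing them:

```python
import numpy as np

# Sketch: add a Drude (intraband, metallic) term on top of an interband
# dielectric function, as done in solids when converging the intraband
# part over k-points is out of reach.  All numbers are illustrative.
w = np.linspace(0.05, 10.0, 2000)                    # frequency grid (eV)
eps_inter = 1.0 + 8.0 / (4.0**2 - (w + 0.05j)**2)    # toy interband part
wD, gamma = 1.5, 0.1                                 # assumed Drude parameters
eps_drude = -wD**2 / (w * (w + 1j * gamma))          # Drude contribution
eps_total = eps_inter + eps_drude

# Metallic signature: Im(eps) blows up as w -> 0 because of the Drude term
print(eps_total[0].imag > eps_inter[0].imag)
```

The analytic Drude form sidesteps the k-point convergence problem entirely, at the price of treating omega_D and gamma as external parameters.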
So we said you can have the full dielectric tensor; then you can define the absorption of right and left circularly polarized light, which is just a combination of the tensor matrix elements. In this case I'm fixing the geometry: the light propagates along the z-axis, I have a magnetic film with magnetization along z, and then I can probe the dichroism, the difference between right and left absorption. This is related to what is called the magneto-optical Kerr effect, and experimentally it is used a lot to detect the magnetization of a material. What they do in practice is arrive with linearly polarized light; the light arrives on the sample, is reflected, and due to this dichroism there is a tilt of the initial polarization, and then you can compute the rotation angle. So if you take a metal, you can do that: you compute all the matrix elements at the independent-particle level and you get a very nice description. But then in principle, if you have a magnetic semiconductor, you could be interested in doing the same at the BSE level. Natural circular dichroism is another example, and it's again the same idea: you want to measure the difference between the right and left absorption of a molecule, and this is called dichroism. In this case the information is not contained in the first-order term which relates polarization and electric field, the one that in the end defines the first-order dielectric function; it has to do with the response of the polarization to the time derivative of a magnetic field. So if you take this extended expression, you have an extended definition of your dielectric function, where you have the standard alpha, the standard dipole-dipole response function, and then there is this extra beta, the magnetic-dipole electric-dipole response function, and this is the one which gives, for example, dichroism in isolated systems.
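The right/left bookkeeping and the Kerr rotation described above can be sketched in a few lines, assuming the standard small-angle polar-Kerr formula theta_K + i*eta_K = -eps_xy / ((eps_xx - 1) * sqrt(eps_xx)) for normal incidence; the tensor values below are illustrative, not from any material:

```python
import numpy as np

# Polar geometry: light along z, magnetization along z.
eps_xx = 5.0 + 1.0j    # diagonal dielectric tensor element (toy value)
eps_xy = 0.1 + 0.02j   # off-diagonal element; vanishes without magnetization

# Circular eigenmodes of the tensor: eps_+/- = eps_xx +/- i*eps_xy.
# Their absorption difference is the (magnetic) circular dichroism.
eps_p = eps_xx + 1j * eps_xy
eps_m = eps_xx - 1j * eps_xy
mcd = eps_p.imag - eps_m.imag   # equals 2*Re(eps_xy)

# Small-angle polar Kerr rotation (real part of the complex Kerr angle).
theta_K = (-eps_xy / ((eps_xx - 1.0) * np.sqrt(eps_xx))).real
print(mcd, theta_K)
```

With eps_xy set to zero both the dichroism and the rotation vanish, which is the statement that the effect tracks the magnetization.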
And this is an example; usually this is computed for molecules, where TD-DFT works very nicely. But if you are interested in evaluating dichroism in an extended system, you may be interested in using the BSE. It's a challenging adventure, especially because here you have these magnetic dipoles, which are not easily defined in periodic boundary conditions. Still, you may want to do that. Another interesting experiment, which is very useful to study surfaces, is what is called reflectance anisotropy spectroscopy; in the same spirit you can also do high-resolution electron-energy-loss anisotropy. Basically the idea is that you probe the system with light polarized in one direction and then the other, and you look at the difference between the two absorptions. It's a very powerful technique, and it can give you a lot of details on the surface. Again, you can do that at the simple independent-particle level, but you can also do it at the BSE level. These are examples of a few applications at the independent-particle level on silicon and gallium arsenide. Here you see you have to take your box with the surface, so a big box with some layers of vacuum, and the same for gallium arsenide. At the independent-particle level you get a reasonable description of the experiment, but at the BSE level, and here you see the difference between BSE and independent particle, you get corrections which can be important, also in this case. And the last one, which is related to the question on magnons. We said that the response function, as I have shown you, has a spin-conserving channel, which gives the excitons, and a spin-flip channel, and the two can be separated into blocks. If you take the spin-flip channel, you can describe magnons. Here is an example of magnons computed at the TD-DFT level; the blue line is the independent-particle response function at different momenta.
Usually for magnons you look for a dispersion in momentum space, so you have to solve for the response function at different momenta. And you see that shifting from the bare spin-flip transitions to the correlated version, you get a big enhancement of a peak at low energy, which is the magnon peak. In this case, for different reasons, TD-DFT works pretty nicely; it's not like the excitonic case where you miss the exciton with TD-DFT: here you do get the magnon. But this has been tested mostly on metals, where excitonic effects are not important. If you think now of a magnetic semiconductor, you may be interested in using the BSE for that. Okay, and that was all. So the final message: you have the BSE, and you can really try to explore many interesting properties. Not much has been done yet, indeed, so it's really an open area for research and new results. And okay, I hope I'm doing well with time, I don't know. Okay, thank you for your attention. Okay, so the session is open for questions. Please. Where is the mic? Oh, yeah, the mic. Where is the mic? I think it's... So this is somewhat related to some technicality in YAMBO. Can we turn off these excitonic effects above a certain energy? I want to compute the absorption spectrum up to, say, 10 electron volts, but I want to consider the excitonic effects only up to 5 or 6 eV, and rely on the independent-particle level after that. Can we do this in YAMBO? So, if I understand correctly, yes, you can. I mean, one way is: you take the excitonic problem, you use the Haydock solver, you get the whole spectrum. If you insist that you also want the excitons themselves, you can use brute-force diagonalization, and you will get all the excitons. Or you can use the SLEPc solver, and you can ask the SLEPc solver: I want the excitonic eigenstates focusing just on this energy range.
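What such an energy-targeted solve does can be mimicked on a toy excitonic Hamiltonian. Here I filter a full diagonalization after the fact, whereas SLEPc reaches the interior eigenvalues iteratively (shift-and-invert) without ever building the whole spectrum; all sizes and numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Hermitian "BSE Hamiltonian" (Tamm-Dancoff-like): a diagonal of
# transition energies plus a small random coupling.
n = 200
energies = np.linspace(2.0, 15.0, n)            # eV, illustrative
V = 0.05 * rng.standard_normal((n, n))
H = np.diag(energies) + (V + V.T) / 2.0

# Full diagonalization (what a direct solver does)...
eigvals, eigvecs = np.linalg.eigh(H)

# ...then keep only the few excitons closest to a chosen target energy,
# which is what one asks an iterative shift-invert solver for.
target, k = 10.0, 10
idx = np.argsort(np.abs(eigvals - target))[:k]
selected = np.sort(eigvals[idx])
print(selected)   # the 10 eigenvalues nearest 10 eV
```

For a real BSE matrix (easily 10^5 or more), avoiding the full diagonalization is exactly the point of the iterative solver.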
So in particular, what you tell SLEPc is: I'm interested in the excitons around, I don't know, 10 eV, and I want the 10 excitons closest to this 10 eV point; and YAMBO will give you that. Okay, so a bit different from that: now I want to consider, suppose, four valence bands and four conduction bands, and above that, say from the fourth up to the tenth conduction band, I don't want to include the excitonic effects. Okay, so you are saying that for the first few bands you want the excitonic effects, and above that you are not interested. Well, what I would do, and you can do it by hand: you just solve the excitonic matrix with few bands, and then of course you should pay attention to convergence, because you select an energy range but maybe the transitions will mix with a higher energy range; so you get the excitonic spectrum there. Then you compute the independent-particle one with many more bands, and then you overlap the two. This could be a recipe. Next: when you include the coupling in the absorption with the Bethe-Salpeter equation, I found that the standard procedure in YAMBO is to add just the exchange part of the kernel for the coupling. Can you comment on that? Yes. Well, we said, first of all, that coupling is important for isolated systems. And in general, if you try, you see that the electron-hole exchange is the most important term for the coupling, while the electron-hole interaction is not. So by default YAMBO will just add the electron-hole exchange. You can also ask the code, there is a flag in the input file, to also add the electron-hole interaction in the coupling. We don't do that by default because it's more demanding and you usually don't gain much. And I think the reason is that coupling has something to do with, let's say, mixing going-up-in-energy and going-down-in-energy excitations, and this is what is usually important for a plasmon. The physics of a plasmon is captured well by the electron-hole exchange.
You don't need the electron-hole interaction for that. And indeed, for example, in this paper the message they give is that along this direction you have excitons, and then you start to tilt the field and you have a kind of mixing between exciton and plasmon; this is the language used. Okay, thank you. Thanks. I'm repeating a question from before about magnons and, more generally, the ability of the BSE, especially in the YAMBO implementation, to take into account also other two-particle interactions, beyond just electrons and holes. Sorry, I didn't get the last point. Well, I know there are some extensions of the BSE to include spin interactions, for example, not necessarily for electrons and holes, but also for other two-particle spin states; some people think about it for Cooper pairs, for example. Okay, so you mean spin-spin interactions? Spin-spin interactions. Okay, so I think that spin-spin interactions are somehow effective interactions that you build up from, let's say, the fundamental Hamiltonian, because you really want to capture the physics of magnons or other kinds of things. I would say that here we are at the basic level, and with the direct electron-hole interaction you somehow already have some effective spin-spin interaction. You don't have everything, because in some cases you may need to go beyond the static approximation, but I think you already have much; you can get a lot. In the literature there is a lot on using TD-DFT for magnons, and usually the issue is not the interaction: the exchange-correlation kernel of TD-DFT is already pretty good. The reason is that, I mean, you have a magnetic system and you have an exchange splitting between the spin-up and spin-down bands. To get a magnon, you need a kernel which is able to close that splitting, so you need a kernel which is exactly the difference between the spin-up and spin-down exchange-correlation potentials, and this is what you get with TD-DFT. And with the BSE, you get pretty much the same.
So you have the spin splitting, which is determined at the GW level or COHSEX level by W, and then you have the W interaction in the kernel, and this closes the gap. The other comment I have is that this balance is very tricky, and it has something to do with the fact that the magnon, like the phonon, has to respect a sum rule, what is called the Goldstone sum rule: the magnon energy at zero momentum should be zero, at least without spin-orbit coupling. This is something you can check is respected by TD-DFT, and by BSE at a certain level, and I would say that respecting this Goldstone sum rule is the most tricky part, because you need many bands too. Thanks. Davide, there is a question from remote by Tom Sayer. Maybe you can unmute yourself. Yeah, sure. Thank you. Can you hear me? Yeah, we can hear you. Okay, right. Thank you, Davide, your talk was delightful. So you had a slide on the intraband transitions, and you brought up Andrea Marini's thesis, and you said it's very, very tricky to converge this problem. Yep. So if I run in YAMBO some system which is slightly metallic, will it try to calculate these things automatically within the BSE, or do I have to tell it that we want to include intraband transitions, et cetera? So, I mean, the first message is: if you have a metal, usually it's not very useful to do BSE, because the metal will screen a lot and you are not interested in the BSE. Okay, well, let's say it's a doped semiconductor. Okay, let's say that you have, I don't know, a semimetal or some weakly metallic system, so the metallic density is very low. If it's a 3D metal, you can use this Drude model; although, I mean, you can try. It will add this Drude term to the response function, and you can include that both in the screening and in the final response function.
If you have some semimetal where the dimensionality of the Fermi surface is lower, well, you can still use it, but this is designed for 3D metals, so I think you would need a better model. For example, for graphene, we were discussing with Alberto: one could in principle extend the approach and insert other models; it's pretty simple. Okay, but it will need additional work. No, I mean, you can try brute force, but in the end it's the convergence which will be a nightmare. So I think it's not a good idea. Thank you. Can I add something? There is an important point: YAMBO cannot calculate that omega-D squared. The omega-D squared is the nasty part; evaluating the rest of the expression is simple, of course. The problem is that omega-D squared, if you work out the expression above, you see that it's a surface integral, an integral restricted to the Fermi surface. So when you do any calculation with Quantum ESPRESSO that you then pass to YAMBO, it is on a regular grid that has at most a finite number of points on the Fermi surface, and this is not enough to integrate and get the Drude frequency. So the Drude frequency must be calculated outside using tricks, and it is a really difficult thing to converge. In that case, it was calculated by doing an integral on a Fermi surface whose width is a little bit increased by adding a fictitious temperature. But the essential information is that omega-D must be provided in the input file from an external calculation, and then YAMBO uses that simple analytical form that holds for 3D materials. Exactly. That omega-D squared is not an output of YAMBO, not because it's not coded, but because on a regular grid it is impossible. I see, right. Well, it's not impossible, but super demanding: you need a really super fine grid; with a regular grid it is impossible.
Because a regular grid will put only a discrete set of points on the Fermi surface, and with discrete points you cannot do the integral. Yeah, I think you can try to use some tetrahedron method to integrate this in k-space. Well, the Fermi surface of a simple metal is a simple surface; already if you increase the complexity of the material, the Fermi surface can be, you know, like art, like a painting. Yeah, it can be just super complicated. One trick you can use, which was also explored here, is to smear the occupations with some temperature, so you somehow broaden the Fermi surface. And then, instead of doing this integral, YAMBO, whenever there is such an integral, replaces it with a sum over k-points, which is a very simple approximation, because we are not focusing much on this kind of integral; let's say it is much more compatible with many-body perturbation expansions. But then you could also try to do that with tetrahedron methods; in 3D they are very sophisticated, and you could converge faster. Just a simple comment: I want to say that we are working on a method to solve this issue. You mean models, or? A way to treat the intraband transitions. Yeah, but through a model system, like the Drude-like one, or? No, I would say, without any parameter. Okay, so maybe in the next release there will be something. So, as you showed, you are constructing the BSE Hamiltonian for every k-point, and you have a dimension of conduction times valence bands. So, for example, if you consider the exciton within a hydrogen model, we only have Nc times Nv excitonic energies here; but can YAMBO compute the excited states of a single exciton? For example, naming this one as the first exciton, then its second excited state, and so on. So, first let me take a moment. The size of the BSE is the number of valence states times the number of conduction states times the number of k-points.
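The size counting just stated can be made concrete with a model Hamiltonian in the transition basis. The matrix below is random, purely to show the bookkeeping; a real BSE kernel is of course not random:

```python
import numpy as np

rng = np.random.default_rng(1)

# BSE dimension: valence bands x conduction bands x k-points.
nv, nc, nk = 4, 4, 8
dim = nv * nc * nk                 # = 128 transitions |vck>

# A model symmetric "BSE Hamiltonian" in this transition basis.
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2.0

# Diagonalizing yields exactly dim excitons; each eigenvector has
# components on transitions at *all* k-points, which is how the k-points
# mix to build up the excitonic (e.g. Rydberg-series) states.
excitons, wfs = np.linalg.eigh(H)
print(len(excitons), wfs.shape)    # 128 excitons, each with 128 components
```

So the 1s, 2s, 2p, ... states of a given exciton are not extra solutions on top of the count: they are all among the dim eigenvectors, built as superpositions of the |vck> transitions.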
Yeah, I mean, I'm talking about for every k-point. Yeah, but the k-points will mix together, and this is the main effect of the BSE. You have a system which is, let's say, infinite, you have many k-points, and the transitions at different k-points will mix a lot to build up your exciton. So if you think of the exciton in terms of the hydrogenic model, your excitonic wave function will be given as a superposition of many k-points. This is the main point. But we have a series of exciton energies. Then you have a series, yeah. So what I want to ask is: we only have Nk times Nv times Nc excitons because we are diagonalizing a matrix of size Nk times Nc times Nv. But we can have many excited states for every exciton, right? But I mean, these Nv, Nc and k's are the basis set used to get all the excitons. One of the features of the BSE is that you capture all kinds of excitons in the Rydberg series: you get the 1s, 2s, 2p excitons, and you really get everything. There is nothing left behind, besides the mixing with higher-order excitations. So you really get all the excitons, each as a well-defined peak; they don't have a lifetime. And in case they mix, I mean, you don't have, as in GW, some satellites; you miss that, but you have all the excitons. What do you think? More questions? Yeah, there is another question from online, by Rashid Khan. Please, Rashid. Can you hear me? Yeah, not very well. Hello, okay. Actually, I want to calculate the emission spectrum of this insulator material. Can I calculate the emission spectrum with the help of the YAMBO code? So you said the emission spectrum and not the absorption, right? Yeah, not absorption. In general, I would say no: YAMBO is focused on absorption, but it shouldn't be too difficult to move from absorption to emission.
Indeed, there was a person who studied emission spectra, that is, photoluminescence, and he did the coding; I think it is not yet available in the GPL release. But I would also say that emission is very similar to absorption, so as a zero-order approximation you can just take the absorption and use it to infer the emission. Zero-order, I mean. Usually, at least I would say, and you can correct me: for extended systems it is a good approximation, unless you have some, let's say, exciton self-trapping, some polarons; in that case the emission can be shifted. But in general, absorption and emission are more or less at the same energies. In isolated systems, instead, you usually get Stokes shifts, but this is more a matter of the atomic positions shifting rather than a different theory. And if you want to add something in one minute, feel free. Okay, so from a physical point of view, absorption and emission are very different. For absorption you don't need to quantize the electromagnetic field; for emission you do need to quantize the electromagnetic field. So it's very different. Within certain approximations the two can be related, like via detailed balance, the kind of relation that holds for molecules: you can say that the peaks correspond, but the weights are different. So to calculate the weights of the luminescence, even within very stringent approximations, you need to take into account the inversion of population. The general message is: don't just take the absorption as a stand-in for the luminescence; try at least to get the general physics right, at least the inversion of population, the fact that transitions start from different states. I mean, this is very general. But it's the same excitonic matrix; then you have extra transitions. If you have a very low density, in general the excitonic poles are really the same.
Yeah, but at the numerator, if you look even at the most simple case: in absorption you go from occupied to unoccupied states; in luminescence you go the other way around, from occupied states above to unoccupied states below. So it's not the same transition; it's the opposite, I mean, minus the... And for the luminescence, at equilibrium there is nothing: the luminescence is zero. So this is just: be careful. It's not as simple as absorption, just be careful. Okay, since I don't like to chair myself, I think it's time to move on to the next speaker. And thank you.