and hear my voice. So I am presenting online here to Trieste. This talk is part two of the finite difference, continued from the previous part by Jae-Mo on translational invariance. My surname is GHIM, but it's the same as KIM; I decided to spell my surname as GIM because KIM is too frequent. Anyway, let's start. We are going to compute the position matrix element calculated by this equation. V is the volume of the unit cell and u_nk is the periodic part of the Bloch function obtained from the coarse-grid calculation. For numerical reasons, we have to calculate this k integral with a finite number of k points, using a discrete k sum. We also calculate the gradients using the first-order finite difference, which is an approximation. Here b are the neighboring vectors from k to k + b, and w_b is the corresponding weight, or coefficient, for the finite difference. These position matrix elements are required for Berry-phase and Berry-curvature-like terms, to calculate conductivity, shift current, and so on. So it is important to make the position converge fast. The reason it is important to consider a higher-order finite difference formula is that the error of position matrix elements arising from maximally localized Wannier functions is known to decrease exponentially with Nk on the Nk x Nk x Nk initial coarse grid, but the error arising from the first-order finite difference is of order Nk^-2. Also, Marzari and Vanderbilt mentioned in their PRB paper that it would be interesting to explore whether use of higher-order finite-difference representations of the gradient with respect to k might improve this convergence, especially for the invariant part of the Wannier spread. So let's start from one dimension. We assume that we are using a centrosymmetric formula for the finite difference. For example, the first-order formula is given here: we have two neighbors, -h and +h, with the error proportional to h squared.
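To make the one-dimensional starting point concrete, the two-neighbor first-order centrosymmetric formula referred to here can be written (my notation, reconstructed from standard finite-difference theory rather than from the slides) as:

```latex
f'(k) \;\approx\; \frac{f(k+h) - f(k-h)}{2h}, \qquad \text{error} \sim \mathcal{O}(h^2)
```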
The second-order formula has four neighbors, with the error proportional to h to the fourth. In this manner, we can extend to the Nth order and find the coefficients and the error dependence. Now let's move on to the general case. The last term in the finite difference formula, this f(k), is added to match the convention of previously published papers, such as Marzari and Vanderbilt in 1997. But this term plays no role because we have assumed centrosymmetry, and since w_b = w_{-b}, we don't need to keep it in mind. The weights w_b are determined from the neighbors by the completeness relation; the condition for the first-order case is given by this equation, symmetric in the Cartesian indices alpha and beta. Now we have to find the analog for the Nth-order case. The general formula can be compared with the one-dimensional case. For the first-order example, the finite difference formula can be rewritten in this form: 1/(2b^2) is the weight, and b is h or -h, so b is the neighboring vector. These w_b and b also satisfy the completeness relation. For the higher-order finite difference, the second-order formula in one dimension can also be rewritten similarly. In three dimensions, we have to choose more b vectors and the corresponding completeness relations and w_b. So the question can be divided into two parts. Question one: how to find b for the higher-order cases? Question two: how to find w_b from the higher-order version of the completeness relation? For the first question: we chose the neighbors b, 2b, 3b in one dimension, but the situation is different in 2D or 3D, because there we have the freedom to choose b, since we can consider more directions than just one. So we came up with two options. The first strategy is 'as near as possible', or in other words, nearest search. Using the first strategy, we search outward from the origin with increasing distance from k.
So we find the nearest b vectors. The next strategy is simple extension: we first find the first-order finite difference b vectors and multiply them by two, three, or N, so we use b, 2b, 3b, ..., Nb with modified weights. Then, after the determination of the b vectors, we have to find the corresponding weights. For the first order we only had to make the first derivative correct, but now we have new terms, such as the second and third derivatives for the second order, so we have to eliminate these terms. Using more terms in the Taylor expansion, we can extract the first derivative. The first derivative is written as a sum over b vectors, and if we insert this Taylor expansion into f(k + b), we get many terms, and we can compare the left-hand side and the right-hand side term by term. The first term on the right-hand side is merely f, so the sum of w_b b_alpha should be zero. The next term is the gradient of f, so to make this term the alpha component of the gradient of f, the sum of w_b b_alpha b_beta should be the Kronecker delta of alpha and beta. Next, the higher terms, the terms with three b's or four b's, should be zero. Now we have four equations in total for the second order, but the first and third equations are automatically satisfied, because we have assumed the b's are centrosymmetrically distributed and those terms contain an odd number of b factors. So they are automatically satisfied. Let's look at the first-order equation. Actually, the number of first-order equations is six, because that is the number of combinations with repetition of two Cartesian indices; they are constructed from xx, xy, yy, yz, zz, and zx. So we can have at most six independent w_b. But, for example, in the cubic cases we have only one w_b: six b vectors for the simple cubic case, eight b vectors for the BCC case, and twelve for the FCC case. For less symmetric cases, such as triclinic, we require up to six independent w_b.
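As a small numerical check of the first-order completeness relation for the simple cubic case, a sketch like the following (variable names are my own, not from the talk) verifies that the single weight w_b = 1/(2h^2) over the six nearest neighbors satisfies the condition that the sum of w_b b_alpha b_beta equals the Kronecker delta:

```python
import numpy as np

# Six nearest neighbors of a simple cubic k-point grid with spacing h.
h = 0.1
bs = h * np.array([[1, 0, 0], [-1, 0, 0],
                   [0, 1, 0], [0, -1, 0],
                   [0, 0, 1], [0, 0, -1]], dtype=float)
w = 1.0 / (2.0 * h**2)  # the single weight allowed by cubic symmetry

# First-order completeness relation: sum_b w_b b_alpha b_beta = delta_ab.
S = sum(w * np.outer(b, b) for b in bs)
print(np.allclose(S, np.eye(3)))  # True

# The odd-order condition sum_b w_b b_alpha = 0 holds by centrosymmetry.
print(np.allclose(sum(w * b for b in bs), 0.0))  # True
```

The same check applies to the BCC and FCC shells mentioned in the talk, with eight or twelve vectors and a different single weight.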
For the second-order equation, we now have combinations with repetition of four Cartesian indices, which is fifteen, so we have twenty-one equations in total. In this manner, the number of equations from first to Nth order grows proportionally to N cubed, while the error of the gradient and the Wannier spread is proportional to b^2N. So let me compare the two methods, nearest search versus simple extension. The first method, nearest search, works by including additional b vectors outward from the origin and checking the conditions repeatedly until they are satisfied. The good point is that, since nearest search uses the nearest b vectors, and because the error is proportional to b^2N, this results in a smaller error relative to simple extension. The bad point is that it has a large number of equations. For simple extension, the weights w_b can be readily found, as introduced on the next page, so finding w_b is very simple. Also, the number of b vectors is relatively small, so the .mmn file, such as the .mmn file from pw2wannier90.x of Quantum ESPRESSO, becomes very small. This is about finding w_b with the simple method. First, assume the completeness relation with the readily found first-order b vectors and w_b. The higher-order versions of the completeness relations are expanded, and using the first-order equation, the final equation is simplified to this matrix equation. Therefore, the weights of the simple extension can be found by Cramer's rule. Cramer's rule is quite expensive to calculate, but the coefficient matrix is similar to the so-called Vandermonde matrix, whose determinant is easy to find. So the determinant of this matrix can be calculated by hand, and the weights are simply found in this form. Now we are ready to calculate Wannier quantities. The first example is polarization, the linear polarization of potassium niobate, whose structure is an elongated perovskite.
Here, the white spheres are oxygens, the gray sphere in the middle is niobium, and the black dots at the corners are potassiums. Polarization has two contributions: the first is the ionic contribution, and the second is the electronic contribution, which can be calculated as the sum of the Wannier centers of the occupied states. Here, the factor of two is inserted because we have done a non-spin-polarized calculation. We expect the error dependence to follow this form. In the left figure, we can see the convergence of the polarization with an increasing number of k points. Here, simple extension and nearest search are not much different, because the dot and cross marks nearly coincide. And the third order is not much faster than the second order, relative to the convergence of the first order. The converged value is found near 0.38, which is the same as the result of Stengel and Spaldin. In the right figure, the x-axis has been changed to be proportional to Nk^-2 to see the error dependence more clearly; the y-intercept at x = 0 is the expected converged value. The next example is the convergence of the Wannier spread of silicon. The same analysis can be applied, because the error behavior of the Wannier spread is the same as the error dependence of the Wannier polarization. The inset in the right figure shows that the b dependence of the error is also correct for the second and third orders. The main bottleneck is the non-self-consistent calculation, so reducing Nk is the most important matter, and the time consumption does not become much longer with the higher-order calculation. So the higher-order calculation is beneficial when we consider the total computational time, because we can reduce the number of k points.
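The b^2N error scaling discussed in these convergence tests can be reproduced in one dimension with a short script. This is my own sketch (function names are hypothetical) of the simple-extension construction: the weights for neighbors b, 2b, ..., Nb are obtained by solving the small Vandermonde-type system, and halving the spacing then reduces the derivative error by roughly 2^2N:

```python
import numpy as np

# Sketch of the talk's "simple extension" idea in one dimension
# (my own construction): neighbors b, 2b, ..., Nb with weights w_j
# chosen so that the finite-difference gradient is exact through
# order 2N in the spacing h.
def simple_extension_weights(N, h):
    # Conditions: 2 * sum_j w_j (j*h)**(2m) = delta_{m,1} for m = 1..N
    # (odd powers vanish by centrosymmetry) -> Vandermonde-type system.
    A = np.array([[(j * h) ** (2 * m) for j in range(1, N + 1)]
                  for m in range(1, N + 1)])
    rhs = np.zeros(N)
    rhs[0] = 0.5
    return np.linalg.solve(A, rhs)

def derivative(f, k, N, h):
    # f'(k) ~ sum_b w_b * b * (f(k + b) - f(k)), with b = +-j*h.
    w = simple_extension_weights(N, h)
    return sum(w[j - 1] * s * j * h * (f(k + s * j * h) - f(k))
               for j in range(1, N + 1) for s in (+1, -1))

# Halving h should divide the error by roughly 2**(2N): 4, 16, 64.
for N in (1, 2, 3):
    e1 = abs(derivative(np.sin, 1.0, N, 0.2) - np.cos(1.0))
    e2 = abs(derivative(np.sin, 1.0, N, 0.1) - np.cos(1.0))
    print(N, round(e1 / e2, 1))
```

For N = 2 this recovers the familiar four-point stencil, since w_1 b = 8/(12h) and w_2 (2b) = -1/(12h).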
Also, Stengel and Spaldin's work can be interpreted as the infinite-order limit of the finite difference, but with only the second order we can obtain quite good convergence without much time consumption. However, we have to be cautious: the higher-order finite difference can be helpful for convergence, but it may not work without the translationally invariant formula introduced by Jae-Mo. This is an illustration of translational invariance. The pink line, the sine function, is the position operator, and the blue line is the position operator with translational invariance. The role of the higher-order finite difference is to bring the position closer to the real position with more Taylor expansion terms, so the position now has a sawtooth shape. However, without translational invariance, the higher-order finite difference may not work, because the wrong location of the sawtooth may cause some error. So I recommend using the higher-order finite difference together with the translationally invariant formulas, especially when the Wannier functions are far from the origin. The conclusion is that we verified that the order of the error is proportional to b^2N, and that the higher-order finite difference requires more completeness relations. We had two methods, nearest search and simple extension, but for the polarization and the Wannier spread they showed no significant difference. Also, just the simple second-order finite difference can enhance calculations a lot. One more thing: without translational invariance, the higher-order correction may be insufficient, so translational invariance should be the default for Wannier90. We have only calculated the Wannier spread and the Wannier polarization; there are other quantities, such as orbital magnetization, which has two gradients of the Bloch states, or spin current, whose matrix element is a gradient combined with spin.
Not only these terms; such quantities should be calculated to test the higher-order finite difference further. So this is the end of the talk. Thank you for your attention. Thank you, Minsu, for the great talk. Is there any question here? In the meantime, if you are online and have a question, please write it in the chat. Thanks very much for a very clear talk, and I hope you feel better soon. Just a quick question: when you go to higher order, you obviously have to calculate more matrix elements, the M_mn(k, b). So there's a trade-off between calculating more of those matrix elements and just using the first order with more k points. Can you comment on the relative computational cost and where you see that balance? Oh, that's a good question. In the process of Wannierization, we first calculate the self-consistent step, next the non-self-consistent step, next pw2wannier90 with the new neighboring vectors from the nnkp file, and finally the Wannierization. The number of k points, Nk, affects the NSCF and pw2wannier90 steps, because the Wannierization speed itself is not boosted much. For the NSCF calculation, it's very important to reduce Nk, because the number of k points is proportional to Nk cubed on the Nk x Nk x Nk initial coarse grid, so the computational time of NSCF is proportional to Nk cubed. And if we assume we use the simple method, the number of neighboring vectors is proportional to N, because we are using b, 2b, ..., Nb; I also recommend using the simple method rather than the nearest search method. So the computational time of pw2wannier90 may be proportional to N times Nk cubed. The total computational time will be reduced with higher order, because the main bottleneck may be the NSCF calculation.
If I could just ask a point of clarification: when you're doing your higher-order finite difference, am I right to think that you're doing that for the construction of the Wannier functions themselves? Because I can imagine a situation in which you apply it as a post-correction. So you could use the first-order formula that we have at the moment to do the minimization of the spread functional, and then you could use the higher-order formula afterwards to get a more accurate representation of the spread. My point being that the Wannier functions themselves, I think, converge quite quickly with respect to the k-point grid; it's simply the representation of the spread that is slower. So I just wanted to get your comment on that. Oh, could you repeat that again, please? So, your higher-order formula for computing the spread: is that what is used for the minimization of the Wannier functions, or are you applying it in a post-minimization step to correct the spread and the centers? I'm sorry, can you hear the question that's being asked from the room? Yeah, I'm thinking. I'm afraid I can't give a good explanation; could you email me later? I can follow up, sure. Okay, so actually Minsu, if I understand correctly, used the corrected formula both in the Wannierization and in calculating the matrix elements. We haven't yet separated the two effects, but he is going to study the separate effects. Yeah, that's a very important question, right? Yes, I guess I'm worried whether, with the higher-order formula, there's anything more complicated in the minimization process; but perhaps it doesn't actually matter, and in testing this you can perhaps show that the Wannier function is fairly invariant to how we choose that spread. Yeah. And just a follow-up comment: what you choose to calculate the gradients affects the symmetry that you end up with in the Wannier functions.
And so it might be easier with a first-order formula to choose the right group of b vectors so that you have the desired symmetry. Because sometimes a lower-accuracy finite difference formula with the right symmetry properties for the gradient, and for the real-space vector that comes from it, is preferable to a more accurate formula that breaks the symmetry. So sometimes it's easier to work with first-order finite differences, and then maybe just do the higher order as a post-processing step. At the same time, because the number of neighbors is quite small, it will be good to use both of them at the same time. Yeah, thanks, Minsoo, and also thanks for the nice comment. Actually, we also found that if we use just the first-order formula, the symmetry is broken, so we need a very fine coarse k-point grid. So we were also planning to look into how this affects the symmetry. Yeah, thanks for the comment. Hi, do you think that using this higher-order formula might also improve the symmetries when using the automated procedure? Because there, too, we found that the automated procedure sometimes generates functions that do not respect the symmetries, the centers. I'm not sure the symmetry is largely related to the higher-order finite difference, but I think I can test it more extensively. That's a good question. Just another quick question: you've been looking at finite difference formulae that work along lines in reciprocal space, but there are also more complicated ones, such as Mehrstellen discretizations, which take account of more than just one direction; they are multidirectional finite difference formulae, so the value at a particular point depends on surrounding points in 3D in a more complicated way. And there's a lot of literature in the engineering field on the accuracy of finite difference formulations, looking at some of these more complicated ones.
It would actually just be interesting to know which is the best for this problem, also accounting for these ones. Is that similar to the nearest search? The nearest search uses not only the Cartesian axes, and this method uses many directions other than the special directions, so I think they may be different. So, is it possible to just avoid doing the finite difference and directly solve for the derivative of the Bloch states? I mean, I think DFPT must compute it at some point. So maybe, is there a way to pull this derivative directly out of ph.x and work somehow with that? So what do you mean by direct? Well, NSCF solves for u of k, but you can write the equation for the derivative of u of k with respect to k, and then compute that and store it somehow. Yeah, so in the centers and the spread, in this matrix element, directly compute the derivative of u. Is that possible somehow? It's not, if you only find u of k. Oh, if you compute it in the Bloch gauge, you could transform back somehow. And the other comment is, I think on slide four you had these higher-order expressions, but they give you smaller errors only if the function you're differentiating is very smooth. If the function is not smooth, then the higher order, I think, actually gives you worse results, but I might not remember this right. The finite difference works if the function is analytic, not having singular points. Yeah, but then is it possible that the band structure sometimes has features which are not smooth? Or maybe that doesn't make sense, maybe it's always smooth, I don't know; I was just thinking. Sorry, maybe I'm just ignorant, but when I calculated the position matrix for several compounds, some of them, when they have a band crossing, showed some numerical errors in the position matrix.
Is there something like this, related to the band crossing, when you do the calculation? Because, maybe I'm just ignorant, but it seems there is some problem in my calculation. Are you aware of such a kind of problem? I'm sorry, I'm not aware of that. Okay, maybe my calculation was not good. Thanks. Any other questions? Okay, if not, we can thank Minsoo again, and now for real this time. This was the last talk of this morning, so now we have a lunch break and we come back here at 2 p.m., okay? Then we have one hour of flash talks. And there is a small change in the schedule; as I told you, keep an eye on the website. Tomorrow morning, we will not have the talk of Ivano Tavernelli on quantum computing and applications in natural science; the talk has been moved to Thursday, same time, 9 a.m. So if there is anyone who is supposed to give a talk at another time and wants to move it earlier, just come to me; otherwise, we will start an hour later. Thank you.