Okay, so our next speaker is Jae-Mo Lihm, and he will talk about the accurate calculation of position matrix elements for Wannier interpolation, so indeed part one. Yeah, thanks for the introduction. The work I'll present is the first part of my collaboration with Minsu Ghim and Cheol-Hwan Park, and I will focus on the translational invariance of position matrix elements; in the next talk, Minsu will talk about higher-order finite difference methods. So the quantity I'll focus on today is the position matrix elements, which are the matrix elements of the position operator in the basis of localized Wannier functions. The diagonal part gives the Wannier centers, and the off-diagonal part gives the optical matrix elements. These matrix elements are used in various places in the Wannier function methodology, and the property of these matrix elements that I will discuss is translational invariance. So let us consider translating the whole system, including the electrons and the lattice, by some constant amount a. Then the Wannier functions and the centers of the Wannier functions will also translate by the same amount, and the off-diagonal matrix elements will not change, because of the orthogonality of the Wannier functions. This property should be satisfied in the ideal case, by an exact calculation of the position matrix elements. But the problem is that these position matrix elements are usually calculated in an approximate manner, and some approximations preserve this translational invariance while others do not. What I want to claim today is that the approximations that preserve translational invariance are more accurate and should be used. For later use, I also note that the overlap matrix element between two Bloch wave functions transforms under translation by a multiplicative phase factor. So let me now explain what kind of approximations are made in practice.
One usually uses the reciprocal-space formulation of the position operator, and since one knows the Bloch wave functions only on a coarse k-point grid, one needs to use a finite-difference expression. The error of the finite-difference expression scales as the square of b, where b is the distance between neighboring k-points, so it is inversely proportional to the square of the number of k-points along each direction. Now, the issue is that one can add arbitrary terms of order b squared or higher to the right-hand side, and that will still be a valid finite-difference formula: it will also converge to the same exact value in the limit of an infinitely dense grid. What differs is the rate of convergence. In the literature, several different formulas have been proposed. The first one is what I would call the naive formula, and it is basically what I explained previously. There is also the formula by Marzari and Vanderbilt in their original paper; they use the complex logarithm of M instead of M itself. Later, Stengel and Spaldin proposed a slightly different formula where the order of the logarithm and the sum over k-points is reversed. In terms of translational invariance, one can easily show that only the latter two formulas are translationally invariant, while the naive one is not. This can be seen from the fact that the multiplicative phase factor in M becomes an additive term thanks to the complex logarithm, but this does not happen in the case of the naive formula. In the Wannier90 code, only the former two formulas are implemented: the naive formula is the default in postw90.x, while the Marzari-Vanderbilt formula is used in wannier90.x. This difference has actually led to some confusion among the users; there are actually two open issues in the Wannier90 repository on this topic. For example, in the first issue it is reported that two sets of data give very different Berry curvature.
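To make the distinction concrete, here is a small numerical sketch. This is a hypothetical 1D toy model of my own, not the Wannier90 implementation: a Gaussian "Wannier function" on a ring of N unit cells, with the center estimated from the overlaps M_k either via Im M_k (the naive formula) or via Im ln M_k (Marzari-Vanderbilt). Shifting the function far from the origin breaks the naive estimate but not the logarithmic one.

```python
import numpy as np

def wannier_center(x0, formula="log", N=6, M=24, sigma=0.3):
    """Toy 1D model: estimate a Wannier center from the overlaps
    M_k = <u_k| e^{-i b x} |u_{k+b}> on an N-point k-grid.
    formula="log"   -> Marzari-Vanderbilt, Im ln M_k (translationally invariant)
    formula="naive" -> Im M_k directly (not translationally invariant)"""
    L = N                                  # ring of N unit cells of length 1
    x = np.arange(N * M) / M               # real-space grid, M points per cell
    d = (x - x0 + L / 2) % L - L / 2       # minimal-image distance to center x0
    w = np.exp(-d**2 / (2 * sigma**2))     # Gaussian "Wannier function"
    w /= np.linalg.norm(w)
    b = 2 * np.pi / N                      # k-point spacing
    psi = []
    for n in range(N):                     # Bloch sums psi_k = sum_R e^{ikR} w(x-R)
        k = n * b
        p = sum(np.exp(1j * k * R) * np.roll(w, R * M) for R in range(N))
        psi.append(p / np.linalg.norm(p))
    center = 0.0
    for n in range(N):
        Mk = np.vdot(psi[n], np.exp(-1j * b * x) * psi[(n + 1) % N])
        term = np.log(Mk).imag if formula == "log" else Mk.imag
        center -= term / (N * b)
    return center
```

With the center near the origin both formulas agree, but with the function shifted to x0 = 2.0 on a 6-cell ring, only the logarithmic formula still recovers the correct center.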
And the reason behind this turns out to be that those two datasets are calculated using two different formulas. So it is highly desirable to settle this issue and to determine which of these formulas is more accurate and should be used. To do that, let me first analyze the finite-difference error of these formulas. First, one can show that the naive formula is equivalent to the expectation value of this exponential operator. From a Taylor expansion, one finds that the leading error is the third moment of the position operator. The Stengel-Spaldin formula is equivalent to the cumulant-generating function, which is the logarithm of the expectation value of this exponential operator, and the properties of the cumulant-generating function tell us that the leading error is the third central moment. So the difference is that the Wannier center is subtracted from the position operator. To compare these two errors, we go to the real-space interpretation, and for simplicity, let us consider a 1D system. Then the naive formula can be understood as approximating the position operator with a sine function, as can be seen from this exponential and the sum over b vectors. And the sine function is always centered at the origin of the unit cell. So if the Wannier function is also near the origin, this is a good approximation. But if the Wannier function is centered far from the origin of the unit cell, the approximation breaks down, regardless of the localization of the Wannier function. In contrast, the Stengel-Spaldin formula can be understood, again, as a sine function, at least to leading order in the error, but now the sine function is centered at the center of the Wannier function. So this approximation is valid whenever the Wannier function is localized enough, regardless of its center position.
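The real-space picture above can be checked with a few lines. This is a toy sketch with made-up numbers: compare the sine approximant to the position operator centered at the origin against the same approximant recentered at a hypothetical Wannier center xc, evaluated only where the Wannier function has weight.

```python
import numpy as np

b = 2 * np.pi / 8     # k-point spacing on an 8-cell Born-von Karman ring

def x_origin(x):
    """Sine approximant to the position operator centered at the origin
    (the real-space picture of the naive formula)."""
    return np.sin(b * x) / b

def x_centered(x, xc):
    """Same approximant recentered at the Wannier center xc
    (the real-space picture of the Stengel-Spaldin formula, to leading order)."""
    return xc + np.sin(b * (x - xc)) / b

xc = 3.0                               # Wannier center far from the origin
xs = xc + np.linspace(-0.3, 0.3, 7)    # region where the Wannier function lives
err_origin = np.max(np.abs(x_origin(xs) - xs))
err_centered = np.max(np.abs(x_centered(xs, xc) - xs))
# err_centered stays ~ (b*0.3)^3/6 regardless of xc; err_origin grows with xc
```

The recentered error is set only by the localization length, while the origin-centered error is set by the distance of the center from the origin, exactly the distinction made in the talk.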
Therefore, this analysis shows us that using the translationally invariant Stengel-Spaldin formula will be much more accurate and robust than the non-translationally-invariant naive formula. So far I have talked only about the Stengel-Spaldin formula; for the Marzari-Vanderbilt formula the result is very similar. The only difference is that we have an additional term that involves the position matrix elements between a Wannier function and its periodic images. This additional term does not break translational invariance, but it does break size consistency. By size consistency, I mean that a primitive-cell calculation with some k-point sampling should give the same result as the corresponding supercell calculation with Gamma-point sampling. However, since only the Marzari-Vanderbilt formula is implemented in Wannier90, and also because our focus is on translational invariance, not size consistency, I will use the Marzari-Vanderbilt formula in the following and leave the Stengel-Spaldin formula for a future study. So now let me show you some results. To demonstrate translational invariance, or the lack of it, in Wannier centers, we calculated the Wannier centers of monolayer germanium sulfide while shifting the monolayer along the vacuum direction. Here, shift 0.0 means that the monolayer is centered at the origin of the unit cell, and shift 0.5 means that it is shifted vertically by half of the unit-cell length, so it is as far as possible from the origin. The results, the colored dots, show that when one uses the naive formula, the results indeed depend on this shift, while the Marzari-Vanderbilt formula gives translationally invariant results. What is more important, or interesting, is the x-axis, NKZ, the k-point sampling along the vacuum direction. Since this is a monolayer calculation, what people usually do is sample the vacuum direction using only a single k-point.
But here you find that if one uses the naive formula, the error can be extremely large, depending on the shift, whereas when one uses the Marzari-Vanderbilt formula, the error is very small and the convergence is really fast. And this affects not only the matrix elements but also the physical quantities calculated from them. For example, here we show the optical conductivity of the same material along one direction. The thick gray lines are the reference results, converged calculations where we used six k-points along the vacuum direction. All the other data, the colored lines, are calculated using a single k-point, which is what people usually do in practice. First, when one uses the naive formula, we find that the results are independent of the shift, for some unknown reason; but the important thing is that both are very different from the converged results. So using the naive formula can lead to a large error in the spectra. When one uses the Marzari-Vanderbilt formula, in other words, if you set the translational invariance flag to true, the blue line shows that if one centers the monolayer at the origin of the unit cell, the spectrum is very accurate. However, this is not the whole story, because if one shifts the monolayer again along the vacuum direction, the spectrum changes a lot and the error is significant, even though we use the Marzari-Vanderbilt formula. The reason is that the Marzari-Vanderbilt formula fixes only the diagonal part of the position matrix elements, while the off-diagonal parts are still calculated in a non-translationally-invariant manner. Therefore, we find that translational invariance should be imposed not only on the diagonal part but also on the off-diagonal part. So now we turn to that issue. In postw90.x, what is currently implemented is the following.
Being off-diagonal means that the Wannier function indices i and j are different, or that the unit-cell index R is different from zero. In postw90, one uses a generalization of the Marzari-Vanderbilt formula if i and j are the same, but if the Wannier function indices i and j are different, one uses the naive formula. This had to be done this way because if i and j are different, the overlap matrix element M converges to zero, not to one as in the former case, so one cannot use the complex logarithm to approximate M itself. And here we find that when one calculates the off-diagonal matrix elements with three different shifts, the results depend on the shift when i and j are different. Therefore, we need to develop a new expression which gives translationally invariant results for the full matrix. To do that, we turn again to the real-space interpretation. For the naive formula, the off-diagonal case can also be understood as approximating the position operator with a sine function. I forgot to mention that the periodicity of this sine function is that of the Born-von Karman supercell, which is the size of the unit cell times the size of the k-point grid. And again, what we can do to improve accuracy is to shift this sine function close to the Wannier functions. Now, since there are two Wannier functions, we place the sine function at the midpoint of the two centers. Numerically, this amounts to multiplying by a small phase factor when calculating r, and the computational overhead of this additional step is almost negligible. But that almost negligible computational overhead gives much more accuracy and robustness in the calculation of position matrix elements.
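A rough numerical sketch of why the midpoint recentering works, with toy numbers of my own rather than the actual implementation: recenter the sine at the midpoint of two hypothetical Wannier centers xi and xj, and check the error over the region where the product of the two Wannier functions has weight.

```python
import numpy as np

b = 2 * np.pi / 8            # k-point spacing of a toy Born-von Karman ring
xi, xj = 2.6, 3.2            # two Wannier centers, both far from the origin
xm = 0.5 * (xi + xj)         # midpoint of the two centers

# region where w_i(x)* w_j(x) has appreciable weight
xs = np.linspace(min(xi, xj) - 0.3, max(xi, xj) + 0.3, 9)

err_midpoint = np.max(np.abs(xm + np.sin(b * (xs - xm)) / b - xs))
err_origin = np.max(np.abs(np.sin(b * xs) / b - xs))
# numerically, the recentering is just an extra phase factor involving
# the two centers, applied to the overlap before taking the imaginary part
```

The midpoint-centered approximant is accurate as long as the two centers are close compared to the supercell, while the origin-centered one fails badly here.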
We implemented this new method in the WannierBerri code, and here we indeed find that, for the three off-diagonal matrix elements I showed before, using the new translationally invariant finite-difference formula, shown in black symbols, the results are independent of the shift; more importantly, they converge much faster and are much more accurate when using only a single k-point along the vacuum direction. And now for the optical conductivity: this is the figure I showed you before, where using only the Marzari-Vanderbilt formula gives shift-dependent results, and the error can be quite large. When one uses our new formula, which is fully translationally invariant, the results are really independent of the shift, so it is really translationally invariant, and more importantly, the results are very similar to the converged results, even though we used only a single k-point along the z-direction. So it is much more accurate and more robust compared to existing methods. Imposing translational invariance is particularly important when the Wannier functions are far from the origin of the unit cell, as I discussed in the real-space interpretation. One example of such a case is the low-dimensional systems I discussed, such as monolayers, where the monolayer or molecule is placed far from the origin of the unit cell. In this case, one might try the ad-hoc solution of translating the system to lie close to the origin. But that kind of ad-hoc solution does not always work, because there are other cases, such as systems with large unit cells containing nanostructures or defects, where one cannot translate the system to make all the Wannier functions lie close to the origin. There will always be some Wannier functions far from the origin, so using a translationally invariant formula is necessary in these latter cases.
So to conclude, we have shown numerically and from the real-space interpretation that using a translationally invariant formula gives much more accurate and robust results compared to non-translationally-invariant ones. And we have also developed a new expression which gives fully translationally invariant matrix elements. Our code is implemented in a branch of the WannierBerri repository, and I plan to merge it into the official development branch. Thank you for your attention. Thank you very much for the very nice talk. Are there any questions, both here in person and from people on Zoom? Yeah, there is one. Thank you for the very clear talk. From the implementation point of view, postw90 currently uses the naive finite-difference formula. Can we use your idea there just by multiplying by e^{-i b . r_i} of the current iteration, and then repeating a few loops to arrive at some converged, correct result? Could you explain what you mean by iteration? Because the problem there is that the origin is at zero. Yes. Maybe one could multiply by e^{i b . r} of the previous iteration and then add that at the end. Maybe one can improve things that way, and if you repeat a few cycles, then without changing the code too much, because postw90 is based on the finite-difference formula, maybe we can improve the code that way. What do you think? So first, to implement the translationally invariant formula, one only needs a simple phase factor, so one does not need to change the code a lot. And also... Sorry, my comment is about the diagonal part. For the diagonal part, yes. For the diagonal part in postw90, there is an option for translational invariance. If you set it to true, then you use the Marzari-Vanderbilt formula, which is translationally invariant.
Okay. Also, I would like to note that the r_i and r_j in this formula are the Wannier centers, and these Wannier centers are calculated in advance using the Marzari-Vanderbilt formula and used as inputs to calculate the remaining off-diagonal matrix elements. So the implementation would be very straightforward. This is really beautiful work, and from what you said, it sounds like the optimal thing to do is to use your new formula for the off-diagonal part and the Stengel-Spaldin formula for the diagonal part. Is that your conclusion? Yeah, I have not studied the Stengel-Spaldin formula numerically or in detail, so I cannot say much about Marzari-Vanderbilt versus Stengel-Spaldin, but I think it's definitely promising, because size consistency is an important criterion, so it would be an interesting future direction. And are you planning to do that study in the near future? I'm aware that there are some groups, for example the group of Anil Damle, also trying to go in this direction, so yeah, I think there are several people working on this. Okay, thank you. Thanks, Jae-Mo, for a beautiful, very clear talk. It sounds like you might be suggesting that we change the default in Wannier90 to the translationally invariant formula, is that right? Yeah, that's right. So actually, this is more of a comment for other people: this workshop is a really good opportunity. If you have suggestions for changes to default parameters, please make a case when you give a talk. We'll keep a note of them, and the next release will be a good opportunity to change some default parameters that need to be changed. Yeah, I definitely agree. I think the translationally invariant formula should be the default algorithm in Wannier90, instead of the non-translationally-invariant ones. Any other questions? Okay, Stefano, wait a second. Just regarding the default: my understanding of W90 is that the translationally invariant option...
It cannot be used when you have additional matrix elements, like for the orbital magnetization and so on. Well, maybe not many people use them. And actually, sorry, my question is: can this procedure be somehow generalized to those matrix elements which have more than one r? Or maybe I missed it. Yeah, this is one of our future works. Ah, okay, sorry. I haven't studied it yet, but I definitely will. Okay, thank you. Thank you, very nice talk. So this (r minus r-bar) cubed term, that comes from the logarithm naturally, right? In the logarithm there is no r minus r-bar; it kind of just magically comes out when you expand the logarithm, right? Right. So is there then a similar formula for the off-diagonal part, where you take the logarithm of the matrix or something? Yeah, that does not work, to my knowledge, because unlike the diagonal case, where M converges to one, here M is not close to one. Not the logarithm of a matrix element, but is there a way to write this formula so that it would work for the entire matrix, say by taking the logarithm of the entire matrix, which is different from the logarithm of each matrix element? Then this r minus Wannier-center term would come out naturally from the formula. I see. I haven't thought about that, but it seems very interesting and promising. I have a comment on this. I was trying to find it, but I couldn't: Pier Luigi Silvestrelli, in the late 90s, wrote out with Michele Parrinello a series of formulas that were all equivalent to the logarithm of M, and one minus M squared, and so on. I forget what they are, but it could be good, exactly in this spirit, to try whether there is something there that can give a unified translationally invariant formula. Yeah, thank you for the comment. Other questions? Anyone on Zoom? Okay. If not, I think we can thank Jae-Mo again. So there is a small change in the schedule.
Now we were supposed to have a talk by Julen Ibañez-Azpiroz about the implementation of nonlinear optical responses, but unfortunately he wrote to us that he cannot deliver his talk, so we stop here a little bit earlier. Yes, please. I have... Okay, that's fine. Yeah, in my mind he was supposed to be here in person, that's why. Okay. So this is going to be the last talk of this morning, then. Minsu, you can share your screen. Yes. Can you see my screen? Yes. I have cold symptoms, as you can hear from my voice, so I am presenting online here in Trieste. Okay. This talk is part two on the finite difference, continuing from the previous part by Jae-Mo on translational invariance. My surname is Ghim; it is the same name as Kim, but I decided to romanize it as Ghim because Kim is too frequent. Anyway, let's start. Okay. So we are going to compute the position matrix elements given by this equation. V is the volume of the unit cell, and u_nk is the ab initio Bloch wave function obtained from the coarse-grid calculation. For numerical reasons, we calculate this k-integral with a finite number of k-points, using a discrete k-sum, and we also calculate the gradients using a first-order finite difference, which is an approximation. Here, b are the neighboring vectors from k to k+b, and w_b is the corresponding weight, or coefficient, for the finite difference. These position matrix elements are required for the Wannier spread and for Berry-curvature-like terms, to calculate optical conductivity, orbital magnetization, shift current, and so on. So it is important to make the position matrix elements converge fast. The reason it is important to consider higher-order finite-difference formulas is that the error of position matrix elements arising from the maximally localized Wannier functions themselves is known to decrease exponentially with N_k on an N_k by N_k by N_k ab initio coarse grid, but the error arising from the first-order finite difference is of order N_k to the minus 2.
Also, Marzari and Vanderbilt mentioned in their PRB paper that it would be interesting to explore whether the use of higher-order finite-difference representations of the k-gradient might improve this convergence, especially for the invariant part of the Wannier spread. So let's start from one dimension. We assume that we are using centrosymmetric formulas for the finite difference. For example, the first-order formula is given as here: we have two neighbors, minus h and plus h, with an error proportional to h squared. The second-order formula has four neighbors, with an error proportional to h to the fourth. In this manner, we can extend to nth order and find the coefficients and the error dependence. Now let's move on to the general case. The last term in the finite-difference formula, adding this f(k), is there to match the convention of previously published papers such as Marzari-Vanderbilt 1997, but this term plays no role because of the centrosymmetric assumption: since w_b equals w_{-b}, we do not need to keep it in mind. The weights w_b are determined from the neighbors by the completeness relation; the condition for the first-order case is given by this equation, symmetric in the Cartesian indices alpha and beta. Now we have to find the analog for the nth-order case. The general formula can be compared with the one-dimensional case. For the first-order example, the finite-difference formula can be rewritten in this form: 1 over 2b squared is the weight, and b is h or minus h, the neighboring vector. These w_b and b also satisfy the completeness relation. For higher-order finite differences, the second-order formula in one dimension can also be rewritten similarly. In three dimensions, we have to choose more b vectors and the corresponding completeness relations and w_b. So the question can be divided into two parts. Question one: how to find the b vectors for the higher-order case.
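The one-dimensional construction above can be sketched in a few lines. This is a hypothetical helper of my own, solving the Taylor-matching conditions for the order-n centrosymmetric stencil:

```python
import numpy as np
from math import factorial

def central_coeffs(n):
    """Coefficients c_1..c_n of the order-n centrosymmetric formula
    f'(0) ~ (1/h) * sum_j c_j * (f(j*h) - f(-j*h)), with error O(h^{2n}).
    Taylor matching: f(jh) - f(-jh) keeps only odd derivatives, so the
    conditions are sum_j c_j * 2*j^{2m-1}/(2m-1)! = delta_{m,1}, m = 1..n."""
    A = np.array([[2 * j**(2 * m - 1) / factorial(2 * m - 1)
                   for j in range(1, n + 1)] for m in range(1, n + 1)])
    rhs = np.zeros(n)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

# n=1 gives the familiar (f(h) - f(-h)) / (2h)
# n=2 gives (8*(f(h)-f(-h)) - (f(2h)-f(-2h))) / (12h)
```

This reproduces the standard central-difference coefficients and makes explicit how each additional shell of neighbors cancels one more odd-order Taylor term.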
Question two: how to find w_b from the higher-order version of the completeness relation. First, the first question. In one dimension we chose the neighbors as b, 2b, 3b, but the situation is different in 2D or 3D, because we have the freedom to choose b in more directions than one. So we can come up with two options. The first strategy is "as near as possible", or in other words, nearest search: we search from the origin outward, with increasing distance from k, and find the nearest b vectors. The second strategy is a simple extension: we first find the first-order finite-difference b vectors and multiply them by 2, 3, up to n, so we use b, 2b, 3b, up to nb, with modified weights. Then, after the determination of the b vectors, we have to find the corresponding weights. For the first order, we only had to make the first derivative correct, but now we have new terms, such as the second and third derivatives at second order, and we have to eliminate these terms. Using more terms in the Taylor expansion, we can extract the first derivative, written as a sum over b vectors. If we insert this Taylor expansion into f(k+b), we get many terms, and we can compare the left-hand side and the right-hand side term by term. The first term on the right-hand side is f itself, so the sum of w_b b_alpha should be zero. The next term is the gradient of f, so to make this term equal the alpha component of the gradient of f, the sum of w_b b_alpha b_beta should be the Kronecker delta of alpha and beta. Next, the third- and fourth-order terms, with three or four b's, should be zero. So we have four equations in total for the second order. But the first and the third equations are automatically satisfied, because we have assumed that the b's are centrosymmetrically distributed, so the terms with an odd number of b's vanish.
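As a concrete check of the first-order condition, here is the simple-cubic example where a single weight w_b = 1/(2 b^2) serves all six neighbors. This is a sketch of my own, not code from Wannier90:

```python
import numpy as np

b = 0.5                                  # k-point spacing of a simple cubic grid
shells = b * np.array([[ 1, 0, 0], [-1, 0, 0],
                       [ 0, 1, 0], [ 0, -1, 0],
                       [ 0, 0, 1], [ 0, 0, -1]], dtype=float)
wb = 1.0 / (2 * b**2)                    # one weight suffices by cubic symmetry

odd = sum(wb * v for v in shells)                  # sum_b w_b b_alpha
even = sum(wb * np.outer(v, v) for v in shells)    # sum_b w_b b_alpha b_beta
# odd vanishes by centrosymmetry; even equals the identity (Kronecker delta)
```

Lower-symmetry lattices need several independent weights precisely because this 3x3 identity condition then imposes up to six independent equations.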
So let's look at the first-order equation. Actually, the number of first-order equations is six, because it is a combination with repetition of two Cartesian indices: xx, xy, yy, yz, zz, and zx. So we can have at most six independent w_b. But in the cubic cases, for example, we have only one w_b: six b vectors for the simple cubic case, eight for the bcc case, and twelve for the fcc case. Less symmetric cases, like triclinic ones, require up to six independent w_b. For the second-order equation, we now have a combination with repetition of four Cartesian indices, which is 15, so we have 21 equations in total. In this manner, for the nth order, the number of equations from first to nth order is proportional to n cubed. As for the error in the gradient and the Wannier spread, it is proportional to b to the 2n. So let me compare the two methods, nearest search versus simple extension. The first method, nearest search, works by including additional b's outward from the origin and checking the conditions repeatedly until they are satisfied. The good point is that, since the nearest search uses the nearest b vectors, and because the error is proportional to b to the 2n, this results in a smaller error relative to the simple extension. The bad point is that it has a large number of equations. For the simple extension, the weights w_b can be readily found, as introduced on the next page, so finding w_b is very simple; also, the number of b's is relatively small, so files such as the .mmn file produced by pw2wannier90.x from Quantum ESPRESSO stay small. This is about finding w_b with the simple method: first, assume the completeness relation with the readily found first-order b vectors and w_b, and then expand the higher-order versions of the completeness relations.
Using the first-order equation, the final equation simplifies to this matrix equation, so the weights of the simple extension can be found by Cramer's rule. Cramer's rule is in general quite expensive, but the coefficient matrix is similar to the so-called Vandermonde matrix, whose determinant is easy to find, so the determinant of this matrix can be calculated by hand, and the weights are simply found by this formula. So we are ready to calculate linear quantities. The first example is the polarization of potassium niobate, whose structure is an elongated perovskite. Here, the white spheres are oxygens, the gray sphere in the middle is niobium, and the black dots at the corners are potassium. Polarization has two contributions: the first is the ionic contribution, and the second is the electronic contribution, which can be calculated as the sum of the Wannier centers over occupied states. The factor 2 is inserted because we have done a non-spin-polarized calculation. We expect the error dependence to follow this formula. In the left figure, we can see the convergence of the polarization with an increasing number of k-points. Here, simple extension and nearest search are not much different, because the dots and the cross marks nearly coincide, and the third order is not much faster than the second order, relative to the convergence of the first order. The converged value is found near 0.38, which is the same as the result of Stengel and Spaldin. In the right figure, the x-axis has been changed to be proportional to N_k to the minus 2n, to see the error dependence more clearly; the y-intercept at x equals 0 is the expected converged value. The next example is the convergence of the Wannier spread of silicon. The same analysis can be applied, because the error behavior of the Wannier spread is the same as the error dependence of the polarization.
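The simple-extension weights described above can be sketched as follows. This uses a convention of my own for illustration: the first-order shell is reused at b, 2b, ..., Nb with weights c_n times w_b, and the Vandermonde-like structure in n squared is what makes the closed-form (Cramer's rule) solution tractable:

```python
import numpy as np

def shell_scale_factors(N):
    """Scale factors c_1..c_N for the simple extension: with shells at
    b, 2b, ..., N*b carrying weights c_n * w_b, the higher-order
    completeness relations reduce (given the first-order relation) to
    sum_n c_n * n^{2m} = delta_{m,1} for m = 1..N, killing all errors
    up to O(b^{2N}). The coefficient matrix is Vandermonde-like in n^2."""
    A = np.array([[float(n)**(2 * m) for n in range(1, N + 1)]
                  for m in range(1, N + 1)])
    rhs = np.zeros(N)
    rhs[0] = 1.0
    return np.linalg.solve(A, rhs)

# N=1 gives [1]; N=2 gives [4/3, -1/12], which combined with w_b = 1/(2b^2)
# reproduces the familiar fourth-order stencil coefficients 2/3 and -1/12
```

Because the system involves only the shell multiples n and not the full set of b directions, its size is N rather than the number of symmetry-resolved conditions needed by the nearest search, which is the practical advantage mentioned in the talk.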
Here, the inset in the right figure shows that the b dependence of the error is also correct for the second order and the third order. The main bottleneck is the non-self-consistent calculation, so reducing N_k is the most important matter, and the time consumed by the higher-order calculation itself is not very long. So the higher-order calculation is beneficial when we consider the total computational time, because we can reduce the number of k-points. Also, the Stengel and Spaldin work can be interpreted as the infinite-order limit of the finite difference, but already with the second order we can obtain quite good convergence without much time consumption. But we have to be cautious: the higher-order finite difference can be helpful for convergence, but it may not work without the translationally invariant formulas introduced by Jae-Mo. This is an illustration of translational invariance: the pink line, the sine function, is the approximate position operator, and the blue line is the position operator with translational invariance. The role of the higher-order finite difference is to bring the approximant closer to the real position with more Taylor-expansion terms, so the approximant now has a sawtooth shape. However, without translational invariance, the higher-order finite difference may not work, because the wrong location of the sawtooth may cause a large error. So I recommend using the higher-order finite difference together with the translationally invariant formulas, especially when the Wannier functions are far from the origin. So the conclusion is that we verified that the order of the error is proportional to b to the 2n, and that the higher-order finite difference requires more completeness relations. We had two methods, nearest search and simple extension, but for the polarization and the Wannier spread they showed no significant difference. And already a simple second-order finite difference can enhance the calculations a lot.
One more thing: without translational invariance, the higher-order correction may be insufficient, so translational invariance should be the default for Wannier90 and so on. We have only calculated the Wannier spread and the polarization; other quantities, such as the orbital magnetization, which involves two k-gradients of the Bloch states, or the spin current, which is a gradient combined with spin, should also be calculated to test the higher-order finite difference further. So this is the end of the talk. Thank you for your attention. Thank you, Minsu, for the great talk. Is there any question here? In the meantime, if you are online and have a question, please write it in the chat. Thanks very much for a very clear talk, and I hope you feel better soon. Just a quick question: when you go to higher order, then obviously you have to calculate more matrix elements, the M_mn(k,b), so there is a sort of trade-off between calculating more of those matrix elements and just using the first order with more k-points. Can you comment on the relative computational cost and where you see that? Oh, that's a good question. In the process of Wannierization, we calculate first the self-consistent step, next the non-self-consistent step, next pw2wannier90 with the new neighboring vectors from the .nnkp file, and finally the Wannierization. The number of k-points affects the nscf step and pw2wannier90, because the Wannierization itself is not the bottleneck. For the nscf calculation, it is very important to reduce N_k, because the number of k-points is proportional to N_k cubed on an N_k by N_k by N_k ab initio coarse grid, so the computational time of the nscf is proportional to N_k cubed. And if we assume that we use the simple method, the number of neighboring vectors is proportional to n, because we are using b, 2b, 3b, up to nb. I also recommend using the simple method rather than the nearest-search method.
So the computational time of pw2wannier90 may be proportional to n times the number of k-points. So the total computational time will be reduced with the higher order, because the main bottleneck may be the NSCF calculation. If I could just ask a point of clarification. When you are doing your higher-order finite difference, am I right to think that you are doing that for the construction of the Wannier functions themselves? Because I can imagine a situation in which you apply it as a post-correction. You could use the first-order formula that we have at the moment in order to do the minimization of the spread functional, and then you could use the higher-order formula afterwards in order to get a more accurate representation of the spread. My point being that the Wannier functions themselves, I think, converge quite quickly with respect to the k-point grid; it is simply the representation of the spread that is slower. So I just wanted a few comments on that. Could you repeat that again, please? Your higher-order formula for computing the spread: is that what is used for the minimization of the Wannier functions, or are you applying it in a post-minimization step to correct the spread and the centers? I'm sorry, can you hear the question that's being asked? I'm afraid I cannot give a good explanation; could you email me later? Okay, so actually Minsu, if I understand correctly, used the correct formula both in the Wannierization and in calculating the matrix elements. We haven't yet separated the two effects, but we are going to study the separate effects. It's a very important question, right? Yes, I guess I'm worried whether with the higher-order formula there's anything more complicated in the minimization process, but perhaps it doesn't actually matter, and perhaps you can prove that the Wannier functions are fairly invariant to how we choose that spread.
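The cost argument in this answer can be put into a back-of-envelope model. Everything below is a sketch with all prefactors set to one, which is my assumption for illustration, not measured timings: the error of the order-2n formula scales as b^(2n) with b proportional to 1/nk, the NSCF cost as nk cubed, and the pw2wannier90 cost as n times nk cubed with the simple b, 2b, ..., nb extension.

```python
import math

def cost(eps, n):
    """Total cost (arbitrary units) to reach target error eps with an
    order-2n finite difference: error ~ (1/nk)**(2n) fixes the required
    grid, then NSCF ~ nk**3 and pw2wannier90 ~ n * nk**3."""
    nk = math.ceil(eps ** (-1.0 / (2 * n)))   # grid needed for error eps
    return nk**3 + n * nk**3

# For a fixed target error, the required nk shrinks so fast with the
# order that the extra neighbor shells are more than paid for.
for n in (1, 2, 3):
    print(n, cost(1e-6, n))
```

With these assumed scalings, first order needs nk of order 1000 for an error of 1e-6, while second order needs only about 32, so the total cost drops by orders of magnitude even though each k-point carries more neighbors.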
Yeah, indeed. Also, as a follow-up comment: what you choose to calculate the gradients affects the symmetry that you end up with in the Wannier functions. So it might be easier with a first-order formula to choose the right group of symmetrized b-vectors so that you have the desired symmetry. Because sometimes you can afford a lower-accuracy finite difference formula with the right symmetry properties for the gradient and the real-space vector that comes from it, rather than a more accurate formula that breaks the symmetry. So sometimes it's easier to work with first-order finite differences and then maybe do the higher order as a post-processing. At the same time, with the symmetrized choice the number of neighbors is quite small, so it would be good to use both of them at the same time. Yeah, let me just add: thanks, Minsu, and also thanks for the nice comment. Actually, we also found that if we use just the first-order formula, the symmetry is broken, so we need a very fine coarse k-point grid. So we are also planning to look into how this affects the symmetry. Yeah, thanks for the comment. Hi, do you think that using this higher-order formula might also improve the symmetries when using the automated procedure? Because there too, we found that the automated procedure sometimes generates Wannier functions that do not respect the symmetries at the center. I'm not sure the symmetry is directly related to the higher-order finite difference, but I think I can test it more extensively. That's a good question. Just another quick question. You've been looking at finite difference formulae that work along lines in reciprocal space, but there are also more complicated ones, like Mehrstellen discretizations, which take account of more than just one direction, the multidirectional finite difference formulae.
So the value at a particular point depends on the points around it in 3D in a more complicated way, and there's actually a lot of literature in the engineering field on the accuracy of finite difference formulations, looking at some of these more complicated ones. It would just be interesting to know which is the best for this problem, even accounting for these ones. Is that similar to the nearest-neighbor search? The nearest-neighbor search uses Cartesian axes, while this method uses many directions other than the special directions, so I think they may be different. So, is it possible to just avoid doing the finite difference and directly solve for the derivative of the Bloch states? I mean, I think DFPT must compute that at some point. So maybe, is there a way to pull this derivative directly out of ph.x and somehow work with that? What do you mean by direct? So, you know, the NSCF gives you u of k, but you can write an equation for the derivative of u of k with respect to k, and then compute that and store it somehow. Yeah, so in this matrix element you would directly compute the derivative of u. Is that possible somehow? Oh, if you compute it in the Bloch gauge, couldn't you transform it back somehow? And the other comment is, I think on slide 4 you had these higher-order expressions, but they give you smaller errors only if the function you're differentiating is very smooth. If the function is not smooth, then the higher order, I think, actually gives you worse results, but I might not remember that right. Generally, to your point, the finite difference works if the function is analytic, without singular points. Yeah, but then band structures sometimes have some features which are not smooth, or, I don't know, maybe that doesn't make sense, maybe it's always smooth. I was just thinking.
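The smoothness caveat raised here is easy to demonstrate numerically. The following is a generic illustration with central differences, my own example unrelated to any Wannier90 data: for a smooth function, the fourth-order stencil beats the second-order one by orders of magnitude, but when a kink sits inside the stencil, going to higher order buys essentially nothing.

```python
import math

def d2(f, x, h):
    # second-order central difference for f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def d4(f, x, h):
    # fourth-order central difference for f'(x)
    return (f(x - 2*h) - 8*f(x - h) + 8*f(x + h) - f(x + 2*h)) / (12 * h)

h, x0 = 0.1, 0.5

# Smooth function: sin, with exact derivative cos(x0).
err2_smooth = abs(d2(math.sin, x0, h) - math.cos(x0))
err4_smooth = abs(d4(math.sin, x0, h) - math.cos(x0))

# Non-smooth function: |x - 0.47| has a kink inside the stencil;
# the derivative at x0 = 0.5 is 1.
g = lambda x: abs(x - 0.47)
err2_kink = abs(d2(g, x0, h) - 1.0)
err4_kink = abs(d4(g, x0, h) - 1.0)

print(err2_smooth / err4_smooth)  # large: higher order pays off
print(err2_kink / err4_kink)      # near 1: higher order does not help
```

The Taylor-expansion error estimates behind the b^(2n) scaling assume the differentiated function has enough continuous derivatives across the whole stencil; a kink breaks that assumption, which is the situation the questioner is worried about near band crossings.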
Sorry, maybe I'm just ignorant, but when I calculated the position matrix for several compounds, some of them, when they have a band crossing, showed some numerical errors in the position matrix. Is there something like this related to the band crossing when you calculate it? Because, I don't know, maybe I'm just ignorant, but it seems there is some problem in my calculation. Are you aware of such a kind of problem? Sorry, I'm not aware of that. Ah, okay. Maybe my calculation was not good. Yeah, thanks. Any other questions? Okay, if not, we can thank Minsu again. And now, for real this time, this was the last talk of the morning. So now we have a lunch break, and we come back here at 2 p.m. for the flash talks; we have one hour of flash talks. And then there is a small change in the schedule, as I told you, so keep an eye on the website. Tomorrow morning we will not have the talk of Ivano Tavernelli on quantum computing and applications in natural science; the talk is being moved to Thursday, same time, 9 a.m. So if there is anyone who is supposed to give a talk at another time and wants to move it earlier, just come to me. Otherwise, we will start an hour later. Thank you.