And now for the quantum 1D XX spin chain and the Bessel kernel. Thank you. First of all, I would like to thank the organizers for giving me the opportunity to visit this very nice place and to give a talk here. Today I am going to talk about the spin current for the quantum one-dimensional XX spin chain and its connection to the Bessel kernel. So my talk is clearly related to random matrix theory, because the Bessel kernel is a well-known object in random matrix theory. But the subject itself, the non-equilibrium properties of one-dimensional quantum systems, may not be so familiar to you, so maybe I should start from the beginning. My talk is based on this paper, which is already on the arXiv and has in fact been published in the Journal of Statistical Mechanics, so if you are interested you can take a closer look. Right. So first let me describe roughly the setup, the model, and the quantity we are considering. The quantity we are interested in is the statistics of the integrated current of up spins at position alpha, which means that we count the number of up spins which crossed that position between time 0 and t. The dynamics of the system is described by the famous Schrodinger equation for the so-called XX quantum spin chain, and the initial condition is taken to be the so-called domain wall initial condition. The Hamiltonian of this quantum XX spin chain is written down here, and the quantity of interest, the integrated current, which is equivalent to counting the number of up spins that moved from the left of position alpha to the right, is written in this operator form. So basically this is the model and the quantity we are interested in. If you understand this, then maybe you can skip the next few slides, but for those who are not so familiar with this kind of subject, let me explain a bit more first. Yes, one question?
Yeah, I will explain; the next few slides are really for that purpose. One question from my side first: are you physicists or mathematicians? Physicists? Almost none, some physicists, I see. Physicists presumably know quantum mechanics well, but I expect there are people here, especially probabilists, who know almost nothing about quantum mechanics, so let me really start from the beginning. In fact, I myself have been working on probabilistic models for a long time, but this time my talk is about quantum mechanics, so let me explain a little. Of course you know that there are quantum systems describing the evolution of atoms, molecules, and so on, and the time evolution of such a quantum system is described by the famous Schrodinger equation, which can be written in this way. For simplicity, at least in the next few slides, we only consider the finite-dimensional case. For a finite-dimensional state space describing the quantum state, this H is just a Hermitian matrix, called the Hamiltonian. And psi is a so-called ket vector, in the famous, or notorious, Dirac notation; it is a vector in this finite-dimensional state space. So this can be considered as the time evolution of a vector under this Hamiltonian, this Hermitian matrix, and the vector represents a quantum state. We are interested in physical quantities, each represented by a Hermitian matrix, say A, and its average at time t is defined to be this object. Here this is a column vector, this is a Hermitian matrix, and this is a row vector, its transpose and complex conjugate, so the whole thing is a number, denoted by this bracket, the average of A at time t. This is the average value of the physical quantity A at time t.
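To make this concrete, here is a minimal numerical sketch of my own (not from the talk): a small random Hermitian Hamiltonian and observable stand in for the abstract H and A, and the quantum average is computed both ways, evolving the state (Schrodinger picture) and evolving the operator (Heisenberg picture).

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch: a random 4x4 Hermitian Hamiltonian H and observable A
# (placeholders for illustration), and the average <A>(t) = <psi(t)|A|psi(t)>.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (M + M.conj().T) / 2                   # Hermitian Hamiltonian
N = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (N + N.conj().T) / 2                   # Hermitian observable

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                              # initial state |psi(0)>

def average(t):
    """Schrodinger picture: evolve the state, keep A fixed."""
    psi_t = expm(-1j * H * t) @ psi0
    return (psi_t.conj() @ A @ psi_t).real  # <psi|A|psi> is real for Hermitian A

def average_heisenberg(t):
    """Heisenberg picture: evolve the operator, A(t) = e^{iHt} A e^{-iHt}."""
    U = expm(-1j * H * t)
    At = U.conj().T @ A @ U
    return (psi0.conj() @ At @ psi0).real

assert np.isclose(average(1.3), average_heisenberg(1.3))
```

The assertion at the end checks exactly the equivalence of the two pictures discussed on the next slide.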
And sorry, this ket psi of t is the state at time t, which according to the Schrodinger equation is given by this, when H is constant; and this is the dual of the ket psi. This can be written in a slightly different way, using this definition. In this representation we think of A as constant, not depending on t, while the state psi depends on t. But by a simple calculation using this, it can be rewritten in a form where psi is constant and does not depend on t, while the physical quantity A depends on t, with A of t defined in this way. This is the relation between the so-called Schrodinger picture and Heisenberg picture. Question? H, yes, at least in this talk we only consider a constant H. Of course one can consider time-dependent situations, and then these formulas become more complicated, but at least in this talk the Hamiltonian H is a constant matrix. So this is the quantity we are interested in in quantum mechanics. If you are more familiar with stochastic processes, you can compare this formalism with them. The time evolution of, for example, a Markov process is described by the Kolmogorov equation; in the forward case it can be written in this way, where L is the usual generator and its adjoint L* appears in the forward equation. If you compare this familiar Kolmogorov equation with the Schrodinger equation, they are almost the same, at least formally. The main difference is that if we regard this as real time, the other is imaginary time: we just replace t by it. So in that sense they are very similar; from an algebraic and formal point of view there are many similarities, and one can do many analogous calculations.
But when it comes to calculating physical quantities, especially averages, there is in fact a big difference between quantum mechanics and, in particular, Markov processes. For a Markov process the average is of course the expectation, defined in this way, with P evolving according to the Kolmogorov equation; this expectation is linear in P. In the quantum case, the quantum average is quadratic in psi. At this level this is just formal, and you may think the difference is small, but technically it makes a huge difference. There are many things one can do for stochastic processes, and when we try to do similar calculations in the quantum case, this difference between the definitions of the averages sometimes gives us headaches about how to proceed. Anyway, that was the explanation of the basics of quantum dynamics. Now comes the notation. We are interested in a particular quantum spin chain called the XX spin chain. First we should introduce notation for the two-by-two matrices, the so-called Pauli matrices sigma x, sigma y, sigma z, defined in this way. Sometimes it is useful to introduce half of them, called the spin matrices s^i = sigma^i / 2, where i is x, y, z. We also introduce vectors, denoted by kets again: the up spin, represented by (1, 0), and the down spin, represented by (0, 1). So we are considering a two-dimensional vector space; these are the basis vectors, and those are matrices acting on it. The Hamiltonian of the XX spin chain, which already appeared on the first slide, is written in this way, where J controls the strength of the interaction between neighboring spins and is called the coupling constant in physics.
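As a concrete illustration, here is a small sketch of my own building the Pauli and spin matrices and an XX-chain Hamiltonian on a few sites; the overall sign and normalization convention of the Hamiltonian is my assumption, since the talk fixes it only up to the coupling constant J.

```python
import numpy as np
from functools import reduce

# Sketch of the XX chain on L sites: spin matrices s^i = sigma^i / 2,
# with each operator acting nontrivially on one site and as the identity
# elsewhere (a tensor product), as described in the talk.  The sign
# convention H = -J sum(...) is an assumption for illustration.
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, n, L):
    """Tensor product: identity on every site except site n."""
    mats = [I2] * L
    mats[n] = op
    return reduce(np.kron, mats)

def xx_hamiltonian(L, J=1.0):
    H = np.zeros((2**L, 2**L), dtype=complex)
    for n in range(L - 1):
        H -= J * (site_op(sx, n, L) @ site_op(sx, n + 1, L)
                  + site_op(sy, n, L) @ site_op(sy, n + 1, L))
    return H

L = 4
H = xx_hamiltonian(L)
Sz_total = sum(site_op(sz, n, L) for n in range(L))
assert np.allclose(H, H.conj().T)            # Hermitian, as any Hamiltonian
assert np.allclose(H @ Sz_total, Sz_total @ H)  # conserves the number of up spins
```

The second assertion is the key structural fact behind the whole talk: the XX dynamics conserves total magnetization, so counting transported up spins is well defined.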
The subscript n here means that this should be considered as a matrix of huge size which acts nontrivially only on the space corresponding to site n, and acts trivially, as the identity matrix, on the spaces of all other sites. So this is basically a tensor product of matrices: s_n^z can be considered as the tensor product of identity matrices on the sites j less than n, the matrix s^z (one half of sigma^z) on site n, and identity matrices again on the remaining sites. Any questions? Then the initial condition can be represented as a vector with this notation. It means that initially we have only up spins to the left of the origin and only down spins to the right. This should also be understood as a tensor product of vectors, and we denote it by dw, because it is called the domain wall initial condition. The probability for the total number of transported spins between time 0 and t at position alpha may be defined by this sum. Here this is the initial domain wall state, this is the time evolution, so this is the quantum state at time t. Then we perform a measurement, which means we look at the configuration of the quantum state at time t in the s^z basis, checking whether there is an up spin or a down spin at each position n. This is one part, and this is its dual. Then we consider this quantity and take a sum over all states satisfying this condition. The eigenvalue s_m^z is just plus or minus one half, so the sum receives a nontrivial contribution, equal to one, only when s_m^z is plus one half. Summing all these quantities gives the number of up spins to the right of position alpha, and we only sum over configurations in which this total is equal to a given n.
So this can be interpreted as the probability that the total number of up spins transported across position alpha between time 0 and t is n. This is the quantum mechanical interpretation of the probability; I hope that is clear. Once we have a formula for the probability, we can of course introduce the moment generating function in this way, denoted chi of lambda, t. At this point there are two notations for this bracket: one from the moment generating function, the other from the quantum average. In fact they are consistent, because we also have this identity. Anyway, this is the quantity we are interested in: the moment generating function of N(t). Is everything clear, the model and the quantity? OK, good. One small remark: even though we defined N(t) using an operator, or using a probability distribution, it can be considered as an integrated current in the following sense. Using the Schrodinger equation, one can write down the time evolution of the time-dependent operator s_alpha^z; it can be calculated in this way and written in this form, where j_alpha is given by this expression and can be regarded as the instantaneous current in operator form. We can then rewrite the integrated current, which was defined as this object, using the instantaneous current in this way. In this sense it really can be considered as an integrated current. That was the remark. Our main result, the theorem of this talk, is an explicit formula for this quantity, the generating function of the integrated current. We found that it can be written as a single Fredholm determinant with the Bessel kernel, which is given by this expression.
This is the usual, well-known continuous Bessel kernel. Some of you probably know that this Bessel kernel describes the statistics of eigenvalues at the edge of the spectrum, specifically at the so-called hard edge, not the soft edge. Because many things are known about the Bessel kernel, once we have this formula we can study various properties of the spin current for the XX spin chain. For example, we could calculate the large deviations, which was our main motivation. Right, so let me explain what kind of large deviations we see for this spin current. But before that, the real basics. Large deviations already appeared this morning, and I was told they should be quite well known here, but still let me start from the basics; for comparison it is useful to recall the very simplest case: the large deviations of a single simple random walk, with no random environment or anything. Let xi_i be iid Bernoulli random variables taking the values +1 and -1 with equal probability one half, and introduce the random variable X_n as the sum xi_1 + ... + xi_n, which can be considered as the position of a random walker at time n. It is easy to calculate the average and variance: 0 and n. This is really standard. From this we see that the typical fluctuation of the walker's position is on the scale of the square root of n, and as you know from the central limit theorem, the distribution on this scale is simply Gaussian, which is depicted here. But we could also be interested in large deviations on the scale of n. Then we consider the probability that X_n equals n times y, which should be very small: it decays like e to the power of minus n times something, and this something, denoted phi of y here, is the rate function, or large deviation function.
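This decay can be checked numerically in a few lines; the following sketch of my own compares the exact binomial probability against the binary-entropy rate function of the simple walk, phi(y) = ((1+y)/2) log(1+y) + ((1-y)/2) log(1-y).

```python
import math

# Check P(X_n = n*y) ~ exp(-n * phi(y)) for the simple +-1 random walk,
# with phi the binary-entropy rate function.
def phi(y):
    return (1 + y) / 2 * math.log(1 + y) + (1 - y) / 2 * math.log(1 - y)

def log_prob(n, y):
    """Exact log P(X_n = n*y); n*y must have the same parity as n."""
    up = (n + int(n * y)) // 2                 # number of +1 steps
    return math.log(math.comb(n, up)) - n * math.log(2)

n, y = 2000, 0.4
rate_estimate = -log_prob(n, y) / n
assert abs(rate_estimate - phi(y)) < 0.01      # agrees up to O(log(n)/n)
```

The residual discrepancy is the usual O(log(n)/n) prefactor correction, which disappears on the large deviation scale.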
In this particular case of the simple random walk, phi can be calculated explicitly: it is given by the binary entropy function, which is also depicted here. In a sense, what we want is to find the analogue of phi of y for our problem of the XX chain integrated current. So, coming back to the spin current for the XX chain: the average and variance were already known, so let me recall them. For the average, in 1999 these people succeeded in calculating it, and for large t they found that it grows like t over pi; so the average grows linearly, with coefficient 1 over pi. That was their result. Almost ten years later, these people, some of whom come from here, succeeded in calculating the variance, and they found that for large t it behaves like this: the variance grows like the logarithm of t, with coefficient 1 over 2 pi squared. They could also compute the next order, which is a constant, expressed as a sum of a few definite integrals, and numerically they evaluated it to be 2.96... So that was the situation. Our result is about the large deviations of this quantity. For the moment we focus only on the case alpha equals 0, which corresponds to the current at the origin. For this particular case, the large deviation at the scale t is given by this expression: the probability decays like e to the power of minus t squared times a function psi of a. In many cases the scale would be t, but somehow in this case we see t squared behavior. We could also calculate this rate function, the large deviation function psi of a, explicitly. That comes now. To describe the result for psi of a, let us recall the definitions of the complete elliptic integrals of the first and second kind, denoted K and E, which are given by these expressions.
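For readers who want to evaluate the rate function below themselves, K and E are available in scipy; a small caveat worth noting in code (my own remark, not from the talk) is that scipy takes the parameter m = k squared rather than the modulus k.

```python
import math
from scipy.special import ellipk, ellipe

# Complete elliptic integrals of the first (K) and second (E) kind.
# NOTE: scipy's convention takes the parameter m = k^2, not the modulus k.
m = 0.5
K, E = ellipk(m), ellipe(m)

# Sanity checks: K(0) = E(0) = pi/2, and the Legendre relation
# E(m) K(1-m) + E(1-m) K(m) - K(m) K(1-m) = pi/2.
assert math.isclose(ellipk(0.0), math.pi / 2)
assert math.isclose(ellipe(0.0), math.pi / 2)
legendre = E * ellipk(1 - m) + ellipe(1 - m) * K - K * ellipk(1 - m)
assert math.isclose(legendre, math.pi / 2, rel_tol=1e-10)
```

The Legendre relation is a convenient cross-check that one has the modulus-versus-parameter convention right before plugging K and E into the parametric formulas for f(r) and a(r).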
In terms of these, we found that the rate function psi of a, which describes the decay of the probability for our XX spin current, is given in parametric form using functions f of r and a of r, where f of r is given this way and a of r is given this way. The form of a of r depends on the value of r: when r is between 0 and 1 it is given by this expression, and when r is bigger than 1 by this one. Using this function a of r, f of r is given in this way, again in terms of K and E. One can check that a of r is a monotonically increasing function, so it has an inverse, denoted r of a, which appears here. Combining these, psi can be considered as a function of a. So this is the basic result of my talk. But I have to say that, for the moment, our argument for this formula still contains one unjustified exchange of limits, so in that sense it is not really a theorem yet; there is no complete proof. Because we have explicit formulas for the functions a, f, and psi, we can easily draw their graphs: f of r looks like this, a of r looks like this, and combining them we can draw the large deviation function psi as a function of a, which is given here. And since this comes from a quantum evolution, a quantum non-equilibrium problem, we also tried to simulate the quantum dynamics using the so-called DMRG method. This is rather easy now: you can go to some website, download a program to simulate the quantum dynamics, and compute such quantities. That is what we did, and we compared with the theoretical prediction; as you see, the agreement was very good, I think. The green curve is our theoretical prediction, and the purple pluses are from the DMRG numerical calculations.
They agree very well, even though the time here is only t equal to eight. Of course, the large deviation behavior is expected to hold when t becomes very large, but even for a relatively small time like t equal to eight, the agreement is this good. Very good, I think. Okay, so these are the main messages I wanted to convey, but in the remaining time let me explain how we arrived at these formulas. First, there is a standard mapping from the XX spin chain to a free-fermion problem, and this is what we do first. The model was described in terms of spin variables, but there is a very nice mapping from spins to fermions, the so-called Jordan-Wigner transformation, which is given here. It is a non-local transformation, but using the commutation relations of the spins one can check that these c_j and c_j-dagger satisfy the so-called canonical anticommutation relations, given here. This means that the c_j and c_j-dagger, defined in terms of the spin variables, can be considered as fermions, and in terms of these fermions we can rewrite the Hamiltonian and our quantities in the fermion language. The Hamiltonian of the XX spin chain written in terms of the fermions looks like this; it is quadratic in the c operators, which means these are so-called free fermions, and one can diagonalize the Hamiltonian in the standard way. The domain wall initial condition can also be written in terms of the fermions, where 0 denotes the vector with no fermions, and our integrated current can likewise be written in terms of the fermions in this way. So everything is now written in terms of fermions. In fact, at the beginning we did not know this, but after searching for a while we noticed that, using the standard machinery of free fermions, one can already write down a determinant formula for our quantity. This had been done by Eisler and Racz several years back.
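The Jordan-Wigner claim can be verified directly on a small chain; here is a sketch of my own (sign conventions for the string vary between references, so the one below is an assumption) checking that the string-attached lowering operators really satisfy the canonical anticommutation relations.

```python
import numpy as np
from functools import reduce

# Jordan-Wigner check on a tiny chain: with a string of sigma^z's attached,
# the spin lowering operators become genuine fermions,
#   {c_i, c_j^dagger} = delta_ij,   {c_i, c_j} = 0.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^- = |down><up|
I2 = np.eye(2, dtype=complex)

def jw_c(j, L):
    """c_j = (prod_{k<j} sigma^z_k) sigma^-_j as a 2^L x 2^L matrix."""
    mats = [sz] * j + [sm] + [I2] * (L - j - 1)
    return reduce(np.kron, mats)

L = 3
c = [jw_c(j, L) for j in range(L)]
anti = lambda a, b: a @ b + b @ a
for i in range(L):
    for j in range(L):
        assert np.allclose(anti(c[i], c[j].conj().T), np.eye(2**L) * (i == j))
        assert np.allclose(anti(c[i], c[j]), 0)
```

Without the sigma^z string the operators on different sites would commute rather than anticommute, which is exactly why the non-local transformation is needed.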
They found a formula for our quantity in terms of a determinant, in this case with the so-called discrete Bessel kernel, which is given here, again in terms of Bessel functions, but with a sum rather than an integral as in the continuous Bessel kernel. Anyway, there was a formula like this. One remark here: the same discrete Bessel kernel also appeared in the analysis of surface growth, in the so-called polynuclear growth (PNG) model, which is known to be in the KPZ class. At first we thought we could simply use this formula to study the large deviations, but for the moment we do not know how to do it; this expression seems not so suitable for the large-t asymptotics needed for the large deviations. Maybe there are experts here who can do it; if you see how, please let me know. For us it took some time to proceed, but at some point we noticed that there is in fact an interesting identity between the Fredholm determinant of the discrete Bessel kernel, found by Eisler and Racz, and the Fredholm determinant of the continuous Bessel kernel, which describes the hard-edge statistics of random matrix theory. Right, once we have this identity, the problem of the spin current for the XX spin chain can simply be mapped to a problem about the continuous Bessel kernel, and as some of you may know, the Bessel kernel also appears in the large-N limit of the Wishart matrix, so one can use that fact. The proof of this identity was not so difficult once you expect it: it can be done just by comparing the traces in the expansions of the Fredholm determinants. The calculations are not completely trivial, but not too difficult either. Anyway, combining the result of Eisler and Racz with this identity, we arrive at our main theorem.
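To give a feeling for how such Fredholm determinants are handled numerically, here is a sketch of my own, not from the talk, using the discrete Bessel kernel in its series form K(x, y) = sum over s of J_{x+s} J_{y+s}. For the Poissonized Plancherel measure (the PNG context mentioned above), the gap probability det(1 - K) on {0, 1, 2, ...} equals exp(-nu), the probability of the empty configuration, which gives an exact value to test against; the particular index conventions below are my assumptions from that setting.

```python
import numpy as np
from scipy.special import jv

# Discrete Bessel kernel in series form,
#   K(x, y) = sum_{s>=1} J_{x+s}(2*sqrt(nu)) * J_{y+s}(2*sqrt(nu)),
# truncated to a finite matrix; the tails decay superexponentially.
nu = 1.0
M, S = 60, 300
s = np.arange(1, S + 1)
# A[x, k] = J_{x + s_k}(2 sqrt(nu)), so that K = A A^T.
A = jv(np.arange(M)[:, None] + s[None, :], 2 * np.sqrt(nu))
K = A @ A.T

# Gap probability of the determinantal process on {0, 1, 2, ...}:
# for Poissonized Plancherel this is P(empty partition) = exp(-nu).
fredholm_det = np.linalg.det(np.eye(M) - K)
assert abs(fredholm_det - np.exp(-nu)) < 1e-6
```

Truncating the index set and the series is harmless here because the Bessel functions J_m(2 sqrt(nu)) decay faster than exponentially in m; this kind of finite truncation is the standard first numerical approach to discrete Fredholm determinants.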
Right, still, one remark: as I said, the continuous Bessel kernel is very well known from the work of Tracy and Widom in 1994, and the discrete Bessel kernel is also pretty well known, having appeared in the PNG analysis around the year 2000. But this identity between them had not been known for some twenty years, which is a bit surprising. If you have a reference which already explains it, please let me know. When I finished this paper, I sent an email to Peter Forrester, who I thought was the best expert to ask this kind of question, and he said he did not know it. After looking at the identity he immediately checked his book and so on, and he let me know that by combining this formula and this formula, at least the case where the parameter equals one can be understood. But at least in this form, the identity had not been written down. Yes? So this depends on alpha here, the integer part; if we expand this, there is a sum over... Yes, sorry, this alpha has to be an integer, that's right. Only for that case, yes, that's right. Thank you. Okay, anyway. Once we have this formula, there are a few remarks. The first is that we can consider the limit lambda goes to minus infinity. In this case the generating function becomes just the probability that N(t) equals zero, which means that the system, starting from the domain wall initial condition, has to come back to that same domain wall initial condition. This can be written in this way, and because of this interpretation it can be considered as a probability, known as the return probability, or Loschmidt echo, in the quantum dynamical context. This return probability is a bit easier to handle.
It had already been calculated exactly by some people: it is given by e to the minus t squared over four for any finite t. This suggests that our rate function at a equals zero, the left edge, should be equal to one quarter, and this can be checked against our formula. One more thing: because of the connection to the Bessel kernel, we can look at the famous paper by Tracy and Widom on the Bessel kernel. In that paper they did some asymptotics in this regime, which seems to suggest that the derivative of our rate function at a equals zero should be minus two, and this can also be checked in our formula. Right, so in the remaining time (I have ten minutes, right?) I will try to explain how we obtained the formula for the large deviations. First we go to the Wishart matrix, which is a kind of generalization of the GUE matrix that appeared in the lectures this morning. First consider a random matrix X, an n by n Gaussian matrix whose elements are complex and basically iid, as in the GUE case. Then consider the n by n matrix W constructed from X in this way; this is called the Wishart matrix. As in the GUE case, the joint eigenvalue probability density function can be written down explicitly, and it is given by this formula, containing the square of the Vandermonde determinant, so it is very similar to the GUE case. But where GUE has a Gaussian factor, for the Wishart matrix it is replaced by x to some power times e to the minus x. It has very similar properties to GUE, and if you put this factor in the exponent, it can be interpreted as coming from a Coulomb interaction, which is why this is also sometimes called a Coulomb gas. Right, anyway. For the correlation functions of the Wishart matrix, we also have something similar to what we saw for GUE this morning.
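A quick sanity sketch of my own for the complex Wishart matrix W = X-dagger X: with iid standard complex Gaussian entries (a normalization I am assuming for illustration), W is Hermitian and positive semidefinite, and the expected trace is n squared.

```python
import numpy as np

# Complex Wishart matrix W = X^dagger X, with X an n x n matrix of iid
# standard complex Gaussians (E|X_ij|^2 = 1 under this normalization).
rng = np.random.default_rng(42)
n, samples = 50, 200
traces = []
for _ in range(samples):
    X = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    W = X.conj().T @ X
    evals = np.linalg.eigvalsh(W)
    assert evals.min() > -1e-8            # eigenvalues are nonnegative
    traces.append(np.trace(W).real)
mean_trace = float(np.mean(traces))
assert abs(mean_trace / n**2 - 1.0) < 0.05   # E[tr W] = n^2 here
```

The hard edge discussed in the talk is precisely the neighborhood of zero in this nonnegative spectrum, which is why the Bessel kernel, rather than the sine or Airy kernel, governs it.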
There is a similar formula for the kernel describing the eigenvalue correlations of the Wishart matrix, written in this way. This morning, for GUE, we had Hermite polynomials; for the Wishart matrix we have to use Laguerre polynomials instead, but they are quite similar. One can also consider the large-n limit, in particular the hard-edge case, where we look at x close to zero. For this particular scaling we get the continuous Bessel kernel. Comparing these formulas, one sees that if we consider the interval of the Wishart spectrum from zero to t squared over four n squared and take n to infinity, this corresponds to our quantity, the XX current between time zero and t. So now our problem has been mapped to a problem about the Wishart matrix, and we are interested in large deviations for the Wishart matrix. This kind of problem has been considered by several groups in the physics literature, for example by Majumdar and his group, and there are also other works; I think these people also worked on this kind of problem. Anyway, there are explicit formulas for the large deviation functions of the Wishart matrix. For example, the number of eigenvalues in a given interval I satisfies a large deviation principle of this form: the probability again decays very fast, like e to the power of minus n squared times the rate function shown here, where n is the linear size of the matrix. (That's right, yes, for the Wishart matrix.) This rate function can be calculated by the so-called Coulomb gas method. Rho star is the most probable density under the condition that the number of eigenvalues in I is fixed at a certain value; of course it must satisfy this normalization condition. Introducing the resolvent in this way, the rate function can be described.
Here I am just borrowing results from previous papers, so I will not explain how one arrives at this point. There is also a constraint coming from the number of eigenvalues in the given interval I. Anyway, using these formulas we can arrive at a formula for the rate function of our problem. From some calculation we expect that this lambda minus, which appears in the description of the large deviation function, should scale like t squared over n squared, and we introduce the parameter r here, which appeared in the description of the large deviation function. For this interval we can see that the rate function for the Wishart matrix behaves in this way for large n. And here comes an exchange of limits: this formula was found for large n, but we want to take t large, and this exchange of limits has not been completely justified yet. Assuming the limits can be exchanged, from this formula we obtain the rate function for our XX spin chain problem, which is what I showed you near the beginning. Right, and once we have a formula for the rate function, one can check, for example, that the most probable value is a equal to one over pi; this is not so difficult. One can also expand around that most probable point, a equal to one over pi, to see the small fluctuations. We find that they are Gaussian, with variance given basically by log t, which is exactly what was obtained in the previous work, as I mentioned. The way we do this is to use the explicit formula for the rate function and perform a small delta r expansion: we get this formula and this formula, and combining them we see that the rate function multiplied by t squared behaves like this, where n is the number of eigenvalues. And of course we should put this in the exponent.
So this is basically the probability, and it behaves like this: e to the power of minus a constant times delta n squared, which is Gaussian, with variance given basically by log t divided by two pi squared. This is consistent with the previous result for the variance. Because we now have a better formula, a Fredholm determinant with the continuous Bessel kernel, we could also do a little more about the variance: we checked that for large t it is given by log t divided by two pi squared plus a constant, and in this way we could see that the constant is in fact very simple: it is given by two log two plus gamma plus one, where gamma is of course Euler's constant. Right, so this is basically what I wanted to say. To summarize: in this talk we considered the integrated spin current for a particular simple model, the quantum XX spin chain, with a very special initial condition, the domain wall initial condition. Our main theorem was an explicit determinantal formula for the generating function in terms of the Bessel kernel. But the main motivation, and in a sense the main result of this talk, was that we could also calculate the large deviation function explicitly by doing asymptotics on this formula; it is written in terms of the complete elliptic integrals. A completely rigorous proof is not there yet, though: we have to treat the exchange of limits more carefully. Our main observation is the equivalence between the discrete Bessel kernel and the continuous Bessel kernel determinants. This came out of some calculation, and I think we have not completely understood how general the correspondence is; that would be very interesting. And of course our model is very special, the quantity is very special, and so on, so I think there should be various generalizations of this work. Okay, that's all. Thank you for your attention.