Thank you very much. My name is Shinji Koshida, from Chuo University, Japan. First of all, I would like to thank the organizers for giving me this opportunity to talk here about my recent work on the Macdonald process. Today's talk is based on my paper on the arXiv. In my talk I write Y for the collection of all partitions, which are in one-to-one correspondence with Young diagrams. The Macdonald process is a stochastic process over this Y; to be precise, the N-step Macdonald process is a probability measure on the N-fold product of Y, and the probability weight for a sequence of partitions is given by this product, where Ψ_{λν} is given by this formula in terms of the Macdonald symmetric functions P and Q labelled by skew partitions. Under this setting, any function on Y is regarded as a random variable, and the problem is to compute the correlator of given random variables under this probability measure, which is naturally defined by this formula. (To answer the question: no, I do not specialize any variables, so this is just a formal sequence without any restriction.) This problem is a very classical subject, and there are many preceding works treating applications to actual stochastic models, for example those listed here. But my point today is rather a formal one: I try to develop a purely algebraic method to compute correlators, in the same spirit as the work by Okounkov and Okounkov–Reshetikhin for the Schur case, the Schur process.
Our point there is to identify the space of symmetric functions with Fock spaces, Fock representations of a Heisenberg algebra, and I also propose a generalization of the Macdonald measure from the viewpoint of the algebraic structure of the so-called Ding–Iohara–Miki algebra. So let's get started. In the first part, let me introduce some basics of the theory of symmetric functions. In my talk I fix the ground field to be F, the field of rational functions of q and t, and I write Λ_n for the ring of symmetric polynomials over F in n variables. The ring of symmetric functions (not just polynomials), denoted Λ, is constructed as the projective limit in the category of graded rings; this is in the category of graded rings, not just rings. By definition, any element of Λ is a symmetric function, and please remember that it depends on infinitely many variables; I write capital X for such a set of infinitely many variables. Let me show a very important family of symmetric functions, the power sums. For a natural number r, the r-th power sum symmetric function is defined as the sum of the variables each raised to the power r, and is denoted p_r(X); for a partition λ in Y, I set the corresponding power sum symmetric function p_λ to be the product of the power sums corresponding to its entries. It is known that the collection of these power sum symmetric functions, where λ runs over all partitions, forms a basis of Λ. I also introduce an inner product on Λ, which is important in the theory of Macdonald symmetric functions.
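As an illustrative sketch (my own code, not from the slides), the power sums truncated to finitely many variables look like this in Python:

```python
from math import prod

def power_sum(r, xs):
    """p_r(x_1, ..., x_n) = sum_i x_i ** r."""
    return sum(x ** r for x in xs)

def p_lambda(lam, xs):
    """p_lambda = p_{lambda_1} * p_{lambda_2} * ... for a partition lam."""
    return prod(power_sum(r, xs) for r in lam)

# Example: p_(2,1)(1, 2) = (1 + 4) * (1 + 2) = 15
print(p_lambda((2, 1), (1, 2)))  # -> 15
```

A genuine symmetric function depends on infinitely many variables; truncating to n variables corresponds to projecting Λ onto Λ_n.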
Because, as I said on the previous slide, the power sums form a basis of Λ, it is convenient to define this inner product in terms of that basis. It reads as follows: the inner product of p_λ and p_μ vanishes unless λ and μ are identical, and if they are identical, the value is given by this formula, where z_λ is the product shown here and m_i(λ) is the multiplicity of the number i in the partition λ. Now we are ready to define Macdonald symmetric functions. The Macdonald symmetric functions, denoted P_λ and labelled by partitions λ, are characterized by these two properties. In the first property I leave some notions undefined: one is the monomial symmetric function m_λ, and the other is the meaning of this inequality in terms of the dominance partial order. I will not define these notions here, but please notice that the transition matrix between the Macdonald symmetric functions and the monomial symmetric functions is triangular with unit diagonal. The second property is very clear: it says that the Macdonald functions form an orthogonal system. We also introduce normalized versions, denoted Q_λ; the collection of Q_λ, with λ running over all partitions, forms the dual basis of the original Macdonald symmetric functions. To define the Macdonald process we also have to define Macdonald symmetric functions corresponding to skew partitions, which is done as follows. Since capital X and capital Y are both sets of infinitely many variables, they can be combined into a single set of infinitely many variables, so P_λ(X, Y) makes perfect sense.
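To make the inner product concrete, here is an illustrative Python sketch (my own code, assuming the standard convention ⟨p_λ, p_μ⟩ = δ_{λμ} z_λ ∏_k (1−q^{λ_k})/(1−t^{λ_k}) with z_λ = ∏_i i^{m_i(λ)} m_i(λ)!). It also checks the same number against an oscillator-style recursion of the kind that appears later in the talk, assuming the commutator [a_m, a_{-m}] = m(1−q^m)/(1−t^m):

```python
from math import factorial
from collections import Counter
from fractions import Fraction

def z_lambda_qt(lam, q, t):
    """<p_lam, p_lam>_{q,t} = prod_i i^{m_i} m_i! * prod_k (1-q^{lam_k})/(1-t^{lam_k})."""
    val = Fraction(1)
    for i, m in Counter(lam).items():
        val *= i ** m * factorial(m)
    for part in lam:
        val *= (1 - q ** part) / (1 - t ** part)
    return val

def pairing(lam, mu, q, t):
    """<0| a_{mu_1}...a_{mu_k} a_{-lam_1}...a_{-lam_l} |0>, assuming
    [a_m, a_{-m}] = m (1-q^m)/(1-t^m) for m > 0 (Wick-type recursion:
    the leading positive mode annihilates the vacuum unless it meets a
    matching negative mode)."""
    if not mu:
        return Fraction(1) if not lam else Fraction(0)
    m, rest = mu[0], mu[1:]
    total = Fraction(0)
    for j, n in enumerate(lam):
        if n == m:
            cm = m * (1 - q ** m) / (1 - t ** m)
            total += cm * pairing(lam[:j] + lam[j + 1:], rest, q, t)
    return total

q, t = Fraction(1, 2), Fraction(1, 3)
# The two computations of <p_lam, p_lam> agree:
print(z_lambda_qt((2, 1), q, t), pairing((2, 1), (2, 1), q, t))  # both 81/64
```

The agreement of the two functions is exactly the compatibility of the Fock pairing with the inner product on Λ that the talk states as a proposition below.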
We then expand this symmetric function in terms of the Macdonald symmetric functions in Y only (you can do this), and the Macdonald symmetric function corresponding to a skew partition appears as a coefficient in this expansion; the skew Macdonald symmetric function Q is defined in a similar way. So we can move on to the next section, which contains the main ingredient of our theory. The first thing to do is to define the relevant algebra, which is a Heisenberg algebra. I define this algebra, denoted U, as the unital associative algebra over F generated by the symbols a_n, where n runs over all integers except 0, subject to the following commutation relation. We can then consider two kinds of Fock representations of the Heisenberg algebra. One, denoted F, is a left representation generated by a single vacuum vector, denoted |0⟩, which is defined to be annihilated by a_n with positive n. This Fock space F admits a basis labelled by partitions: here we write |λ⟩ for the vector obtained by applying the negative modes of the Heisenberg algebra corresponding to the entries of λ to the vacuum vector. On the other hand, we can also define a Fock representation denoted F†, which is a right representation: F is a left representation, but on F† the algebra acts from the right. This space is also generated by a single vector, which should be called the dual vacuum; the dual vacuum vector is annihilated by a_n with negative n. This space again admits a naturally defined basis labelled by partitions, with the bra ⟨λ| defined by this formula. I would like to think of these Fock representations F and F† as dual to
each other, so I define an F-bilinear pairing between F† and F such that the norm of the vacuum vector is unity and, as the second condition, the right action of the Heisenberg algebra is compatible with the left action of the same algebra, for all vectors and all generators. The proposition says that these Fock spaces are isomorphic to the ring of symmetric functions. To be precise, we can consider a linear mapping ι from F to Λ that sends |λ⟩ to the power sum symmetric function of the same partition, and a linear mapping ι† from F† to Λ that sends ⟨λ| to the same symmetric function. That these assignments are isomorphisms is trivial, but moreover they are compatible with the inner product structure on Λ: namely, the pairing of ⟨λ| and |μ⟩ in the Fock space is identical to the inner product of p_λ and p_μ on the space of symmetric functions. Using these facts, we can express the Macdonald functions as matrix elements of certain operators. To that purpose we introduce these objects: you notice there are Γ_+(X) and Γ_−(X), defined by this formula, in which you can find the generators of the Heisenberg algebra as well as the power sum symmetric functions p_n(X). So this is basically an operator on the Fock space, but its coefficients are symmetric functions. What we find is that the application of ι is identical to the computation of the matrix element of the operator Γ_+, and the application of ι† is identical to the computation of the matrix element of Γ_−, for every vector. Using this proposition, the following one
is a direct corollary of the first proposition. If we write |Q_λ⟩ for the vector identified with Q_λ under ι, and ⟨P_λ| for the vector identified with P_λ under ι†, then the Macdonald symmetric function corresponding to a skew partition is identical to this matrix element. I think it is very instructive to show the proof of this proposition, because it illustrates the typical computation involving these kinds of operators, so please let me do it. I show only the left identity; please notice that it suffices to show this identity, namely that this is equal to a power sum symmetric function times this matrix element. Here n is any positive integer and |v⟩ is an arbitrary vector in F. I consider the quantity Γ_+(X)^α a_{−n} Γ_+(X)^{−α}, where α is a complex parameter, so the second factor is the inverse of the first, and I differentiate this quantity with respect to α. By the definition of Γ_+ (yes, exactly, this is just a power), a commutator appears here. By the definition of the Heisenberg algebra, this commutator equals m(1−q^m)/(1−t^m) times δ_{m−n,0} (note that m is positive), so only the term with m = n survives, and these factors cancel. Since Γ_+^{−α} and Γ_+^{α} are inverse to each other, the derivative is just the n-th power sum symmetric function. Looking at this as a differential equation in α, we solve it.
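In formulas, the step just described can be reconstructed as follows (my reconstruction: the (q,t)-normalization of Γ_+ is inferred from the commutator quoted above, and may differ from the paper's convention by an overall factor):

```latex
% assuming [a_m, a_{-n}] = m\,\frac{1-q^m}{1-t^m}\,\delta_{m,n} and
% \Gamma_+(X) = \exp\!\Big(\sum_{m\ge 1} \frac{1-t^m}{1-q^m}\,\frac{p_m(X)}{m}\, a_m\Big):
\frac{d}{d\alpha}\Big(\Gamma_+(X)^{\alpha}\, a_{-n}\, \Gamma_+(X)^{-\alpha}\Big)
  = \Gamma_+(X)^{\alpha}\,\Big[\sum_{m\ge 1}\frac{1-t^m}{1-q^m}\,\frac{p_m(X)}{m}\,a_m,\; a_{-n}\Big]\,\Gamma_+(X)^{-\alpha}
  = p_n(X),
```

a differential equation whose right-hand side is constant in α.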
We integrate in α from zero to one; since at α = 0 the conjugated quantity is just a_{−n}, we obtain that Γ_+(X) a_{−n} Γ_+(X)^{−1} equals a_{−n} plus p_n(X). We can use this as follows: in ⟨0| Γ_+(X) a_{−n} we apply that formula, and by the definition of the dual vacuum vector ⟨0|, which is annihilated by the negative modes of the Heisenberg algebra, the first term vanishes, and the desired result is obtained. This completes the proof of the first property of the proposition, and it is a quite typical computation involving this kind of object; by this kind of object I mean exponentiated operators of Heisenberg generators. So let's move on to the next part, where I show a correspondence between correlators and matrix elements. I write F(Y) for the collection of random variables, which are just functions on Y. I define a mapping O from F(Y) to End(F) which sends a function f to the operator that is diagonalized by the Macdonald vectors with eigenvalues specified by the given function. For a function f I also define this kind of object, which is basically an operator on the Fock space F whose coefficients are symmetric functions. Then the theorem reads as follows: let f_1, ..., f_N be random variables; then their correlator under the N-step Macdonald process is identical to the matrix element of the corresponding operators, up to some normalization. Let's take a quick look at the proof. The first thing is to compute this kind of matrix element. The point is that the identity operator on the Fock space F can be expanded as a sum of projections of this kind, so I insert this identity
operator into the definition of this operator, and we have this. By the definition of O(f), it is diagonalized by the Macdonald vectors, so the corresponding eigenvalues appear here, and the remaining matrix elements of Γ_− and Γ_+ are just Macdonald symmetric functions of skew partitions; so we have this. The next thing is to compute the numerator of the formula. I again insert identity operators between the operators, and we obtain the second line, a sum of products of matrix elements of this kind. I then apply the first result to each of these matrix elements, and we get the third line, where a new sum over partitions ν appears. Taking the summation over ν, this quantity is proportional to the correlator under the Macdonald process, and this kind of computation essentially completes the proof of the theorem. So let's see an application of this theorem. As an example, consider the random variables ε_r whose value at λ is given by this formula, where e_r(X) is the r-th elementary symmetric function, defined by this formula; the value is just the elementary symmetric function specialized at this value. The corresponding operator admits this kind of expression, because this random variable is basically a spectrum of certain Macdonald operators. Anyway, the corresponding operator has this expression, so I have to explain some notions. Here η(z) is an operator on the Fock space defined by this formula, and the colons denote the normally ordered product, in which the positive modes of the Heisenberg algebra are put on the right and the negative modes of the Heisenberg algebra are put on
the left in this normally ordered product. I also have to say that this integral is a rather formal one: it is just a linear functional on the space of formal infinite series that takes the residue, where the residue means the coefficient of z^{−1}. This operator is basically the free field realization of the Macdonald operators, which was obtained by Shiraishi and by Feigin et al., and I found that the same operator admits an expression using a determinant. Now I would like to consider a correlator of these random variables, so the central task is to compute the matrix elements written explicitly in this formula. Here we see the vacuum expectation value of a product of operators, but please notice that this product of operators is not normally ordered. The strategy is to rearrange the product into normally ordered form, because of the very important property that the vacuum expectation value of a normally ordered product of vertex operators is unity. For this purpose, operator product expansions are very convenient: for example, the usual product of η(z) and η(w) is not normally ordered, but if I transform this product into normally ordered form, an extra scalar factor appears in front (this is the standard Wick-type fact that for A and B linear in the Heisenberg generators, e^A e^B = e^{[A_+, B_−]} :e^A e^B:, where A_+ and B_− denote the annihilation and creation parts). The same computation is valid for η times Γ or Γ times η. These computations give the final result, which looks very complicated, but let me remark that in the case when all of r_1, ..., r_N equal 1, an equivalent formula can be found in the work
by Borodin, Corwin, Gorin and Shakirov. In the remaining few slides I show a direction towards a generalization of Macdonald measures. The central ingredient is the algebra called the Ding–Iohara–Miki algebra. The Ding–Iohara–Miki algebra is a unital associative algebra generated by four currents x^+, x^−, ψ^+ and ψ^−, together with an invertible central element γ^{1/2}; they are subject to some relations, which I omit because they are very complicated. The important point is that the Ding–Iohara–Miki algebra is a Hopf algebra, so in particular it is equipped with a coproduct, denoted Δ; for example, the image of the current x^+ under the coproduct is given by this formula. The Ding–Iohara–Miki algebra admits a Fock representation. To be precise, the coefficient field of the Fock space has to be extended in some way, but basically this space is a representation; it is often called the level-one representation, denoted ρ, from U to the endomorphisms of F̃. For example, the current x^+ is represented by the vertex operator η(z), which we already used in the previous sections. It is useful to recall a version of the definition of the Macdonald basis P_λ, where λ runs over all partitions. This family of vectors is uniquely determined by these two properties. The first one is very simple: if the Macdonald function is expanded in terms of the monomial functions, then only monomial functions corresponding to partitions smaller than or equal to λ appear, and the coefficient of m_λ is unity. The second one says that the Macdonald function is an eigenfunction of a certain operator, which is the representation of the zero mode of x^+, with the eigenvalue given by this formula. Actually, this kind of
definition admits a generalization. Let's write bold λ for an m-tuple of partitions, and let's introduce a simple analog of the monomial functions, m_{bold λ}, which is a product of ordinary monomial functions in the m-fold tensor product of F̃. I also write the coproduct action as ρ^{(m)}, which is just the representation of the Ding–Iohara–Miki algebra on this m-fold tensor product of the Fock space. It was shown by Awata et al. that this m-fold tensor product of F̃ admits a Macdonald-type basis satisfying the following two properties. One property is the analog of the triangularity property; please note that the analog of the dominance partial order is defined properly. The second one requires that the Macdonald basis is an eigensystem of a certain operator, namely the zero mode of x^+ represented on the tensor product space. Since each tensor component of the Fock space is identified with the space of symmetric functions, the m-fold tensor product of F̃ is identified with the m-fold tensor product of the space of symmetric functions. I write P_{bold λ} for the function identified with the corresponding member of this Macdonald-type basis; this function is called the level-m generalized Macdonald function corresponding to the tuple of partitions. It is then natural to think of a probability measure on the m-fold product of partitions such that the probability weight of a tuple of partitions is given by a product of level-m generalized Macdonald functions. I don't think it is very hard to figure out how a story similar to the ordinary Macdonald process goes in this generalized case, so I will not say more.
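For reference, the ordinary dominance partial order underlying both triangularity properties can be made concrete; here is an illustrative Python sketch (a standard definition, my own code):

```python
def dominates(lam, mu):
    """True if lam >= mu in dominance order: the partitions have equal size
    and every partial sum of lam is at least the corresponding one of mu
    (partitions given as weakly decreasing tuples)."""
    if sum(lam) != sum(mu):
        return False
    a = b = 0
    for i in range(max(len(lam), len(mu))):
        a += lam[i] if i < len(lam) else 0
        b += mu[i] if i < len(mu) else 0
        if a < b:
            return False
    return True

print(dominates((3, 1), (2, 2)))  # -> True
```

Note that dominance is only a partial order: for example, (3, 1, 1, 1) and (2, 2, 2) are incomparable.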
So let me conclude my talk. I developed an algebraic method to compute correlators of the Macdonald process; it is in the same spirit as the work by Okounkov and Okounkov–Reshetikhin for the Schur process. The relevant correspondence was this kind of formula, which says that a correlator under the Macdonald process is identical to the matrix element of the corresponding operators, up to some normalization; and if these operators admit a useful expression, typically in terms of vertex operators, we can compute this quantity further. We also proposed a kind of generalization of the Macdonald measure by means of the Ding–Iohara–Miki algebra, but this generalized Macdonald measure has to be studied further, especially because very little is known about generalized Macdonald functions. For example, one has to classify the positive specializations of generalized Macdonald functions in order to obtain a genuine probability measure, and it is also important to obtain combinatorial expressions, or expressions for few-variable specializations, of generalized Macdonald functions; this is important for actual applications to stochastic models. Okay, that's it. Thank you very much.