I should say right away that there will be no scaling limits, but plenty of interacting particle systems. And I hope to make one or two remarks that hint at why this might be useful for scaling limits, though to be honest, for me even that is hearsay. Let me first say some words about the background and where I'm coming from with this project. The notion of duality of Markov processes with respect to a function (not with respect to a measure) is a very popular notion. It's used in queueing theory. It's used when you want to relate random walks or diffusions with absorbing or reflecting barriers; indeed, one of the early examples of a duality relation, due to Lévy, is between Brownian motion on the half-line absorbed or reflected at the boundary. It's also used in mathematical population genetics, which is where I first heard about it: in a lecture on mathematical population genetics it was presented as a kind of magic tool. And it's used in interacting particle systems. The basic idea when you want to apply duality is: in order to understand some complicated system, typically consisting of many particles, try to use a good duality function to follow a simpler system backwards in time instead. The way this is typically applied in interacting particle systems is that some of the dualities you have for certain systems allow you to map the study of a k-point correlation function, say a 2-point correlation function, of a many-particle system, possibly with infinitely many particles, to something much easier: following two particles instead. So that is why it might be useful. It is sometimes used to derive things like hydrodynamic limits, because you want to know how the correlation functions evolve, and duality allows you to switch from correlation functions for many particles to a small, finite number of particles. All right. 
Some standard references are a paper by Holley and Stroock, and you will also find a section on duality, with several applications, in Liggett's book on interacting particle systems. This talk is actually not about using or applying any of those dualities; it's more an attempt at translation, to see whether certain things can be translated. What I'm interested in is the following: on the lattice, say you have a Markov process with state space N_0^{Z^d}. There are many systems where you have duality functions that are products of single-site functions, and some of those dualities involve situations where the single-site functions are orthogonal polynomials. The question is: can you do the same, or something similar, when instead of working on Z^d you have particles that move on R^d? When you look at the formula, it's not that immediate, because replacing a product over Z^d with an uncountable product over R^d doesn't look like the right thing to do. Nevertheless, it turns out that you can write something; in the end it turned out to be much easier than I thought. What I liked about the project is that it allows you to sort out concepts a little and to link, on the one hand, things that are done in the discrete case, with, on the other hand, things that sound a lot more complicated: notions from infinite-dimensional analysis, white-noise analysis, and the like. Before I proceed, let me say what the notions of duality and intertwining are, because I will actually not generalize duality, I will generalize intertwining relations. So, the notion of duality, very quickly summarized. The setup is the following: suppose you have two Markov processes, say X_t and Y_t. They may have different state spaces, say E and F. 
For interacting particle systems you might want to think of {0,1}^{Z^d}, but it's a much, much broader thing. Let me call the corresponding Markov semigroups P_t and Q_t. A duality function is a function h depending on two variables, taken from the two different state spaces. The processes are called dual if the following relation holds (I'll write it in several ways, and you pick the one you prefer). I have a function of two entries, so I can let the time evolution act on one entry and keep the other one fixed; obviously I can do that in two ways, letting the first entry run in time or the second one. What you ask is that this gives the same result at all times and for all initial values x and y: E_x[h(X_t, y)] = E_y[h(x, Y_t)]. If you prefer to think in terms of semigroups, this means that (P_t ⊗ id) h = (id ⊗ Q_t) h. And to make it even more elementary: if E and F are finite state spaces, you can think of everything as matrices, and this boils down to the matrix equation P_t H = H Q_t^T, with H(x, y) = h(x, y). A typical example of a duality function in interacting particle systems is an indicator built from a partial order, defined component-wise; I'll let you play with this a little and see how it allows you to translate between correlation functions. Good. There's something else, which is intertwining. For intertwining I start the same way, but now the two processes are going to be related not by a function but by a kernel. So I'm given a kernel from E to F, say K(x, dy), and I say that K intertwines my two processes if P_t K = K Q_t. 
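To make the finite-state matrix form concrete, here is a small numerical sketch of my own (not from the talk): for a walk on three states, the Siegmund duality function h(x, y) = 1{x ≤ y} relates a walk absorbed at the boundary to a dual walk reflected at 0, and duality reads A H = H B^T at generator level, hence P_t H = H Q_t^T for the semigroups.

```python
import numpy as np

# Duality at generator level: A @ H == H @ B.T implies
# expm(tA) @ H == H @ expm(tB).T for the semigroups.

# X: walk on {0,1,2}, absorbed at the boundary states 0 and 2.
A = np.array([[0.0, 0.0, 0.0],
              [0.5, -1.0, 0.5],
              [0.0, 0.0, 0.0]])

# Siegmund duality function h(x, y) = 1{x <= y}.
H = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Y: dual walk, reflected at 0 (rate 1/2 back into the bulk), absorbed at 2.
B = np.array([[-0.5, 0.5, 0.0],
              [0.5, -0.5, 0.0],
              [0.0, 0.0, 0.0]])

assert np.allclose(A @ H, H @ B.T)

# Semigroup level, via a truncated exponential series (numpy only).
def expm(M, terms=30):
    out, power = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        power = power @ M / k
        out = out + power
    return out

t = 0.7
assert np.allclose(expm(t * A) @ H, H @ expm(t * B).T)
```

The generator identity propagates to all powers A^n H = H (B^T)^n, which is why the semigroup identity follows term by term in the exponential series.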
When you compare the equations, it's not hard to see that these two things should be related, and one easy relation is the following. Suppose you have duality, and suppose Q_t has a reversible measure μ; reversibility is the same as detailed balance, or as saying that the semigroup Q_t is self-adjoint in the L^2 space with respect to μ. Then you can simply define a kernel from the duality function and the reversible measure: K(x, dy) = h(x, y) μ(dy). The statement is that duality together with reversibility implies intertwining, and there's also a converse. So this is how these things are related. I will actually not work with duality but with intertwining; the thing to keep in mind is that they are really closely related. Any question on the definition? Question: is there a version of this without self-adjointness? Yes; if I only ask that μ is invariant, then here I get the intertwining relation between P_t and the time reversal of Q_t. Good. So, having introduced duality and intertwining, I will now settle notation and say what configuration space I'm interested in, remind us of two Markov processes that fit into the class of examples I'm looking at, namely free Kawasaki and the symmetric inclusion process, and then formulate two intertwining relations. One is related to the (linear) K-transform; for the other, we aim for an intertwiner that is unitary and defined with orthogonal polynomials. For that I will first need to recall notions of orthogonal polynomials for Lévy random fields. Many of you know Hermite polynomials; for Gaussian fields it's the same kind of thing. All right. 
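The duality-plus-reversibility argument can be checked line by line in the finite-state matrix picture; this is my own illustration, with made-up rates. Detailed balance says D Q = Q^T D for D = diag(μ), so if P H = H Q^T then P (H D) = H Q^T D = H D Q, i.e. K = H D intertwines. For the demo, the dual generator P is simply constructed from an invertible duality matrix H.

```python
import numpy as np

# Q: birth-death generator on {0,1,2}; up-rate 1, down-rate 2.
Q = np.array([[-1.0, 1.0, 0.0],
              [2.0, -3.0, 1.0],
              [0.0, 2.0, -2.0]])

# Reversible measure from detailed balance: mu_x q(x,y) = mu_y q(y,x).
mu = np.array([4.0, 2.0, 1.0]) / 7.0
D = np.diag(mu)
assert np.allclose(D @ Q, Q.T @ D)   # detailed balance

# Any invertible duality function works for the demo; take h(x,y) = 1{x<=y}.
H = np.triu(np.ones((3, 3)))

# Construct a dual generator P from the duality relation P H = H Q^T.
P = H @ Q.T @ np.linalg.inv(H)
assert np.allclose(P @ H, H @ Q.T)

# Intertwining kernel K(x, dy) = h(x, y) mu(dy), here K = H D:
K = H @ D
assert np.allclose(P @ K, K @ Q)     # duality + reversibility => intertwining
```

The last assertion is exactly the chain from the talk: P K = (P H) D = H Q^T D = H D Q = K Q.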
Point configurations — I think I can be fast for this audience. There are several ways of thinking of a point configuration; the most basic one is that I just have a collection of points, to fix ideas in R^d, where the labeling is irrelevant but multiplicities are allowed, so several points may sit on top of each other. If I want to formalize this a bit more, I can describe a configuration by a family of occupation numbers: for each region B in R^d I specify the number N_B of points in that region. Or I can use notation which takes some getting used to (though by now I'm used to it), which is to think of configurations as counting measures: to a collection of points you associate the counting measure which is the sum of the corresponding Dirac measures, η = Σ_i δ_{x_i}, and then the measure of a set B, η(B), is simply the occupation number, the number of points in that region. Good. The configuration space for me, in the biggest generality, is simply the space N of finite or countable sums of Dirac measures, including the empty configuration η = 0. And there are some easier subspaces: the space of locally finite configurations, and the space N_f of finite configurations, where you have just finitely many points. I want to make my life easy today, so I will mostly think of continuous-time Markov processes whose state space is N. Let's write down two examples, in part to get used to the notation. Number one is simply independent random walkers — every point moves independently — which I could also call free Kawasaki. There you have independent Markov processes X^i with state space R^d; they could be Brownian motions, or random walkers jumping according to some kernel. Then you define a process with state space the counting measures simply by letting the individual points evolve. 
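In code, a finite configuration is conveniently a multiset. Here is a minimal sketch of my own, just to fix the notation η(B) and ∫ f dη:

```python
from collections import Counter

# A finite point configuration eta = sum of Dirac measures, as a multiset:
# keys are point locations, values are multiplicities.
eta = Counter({0.5: 2, 1.3: 1})     # two points at 0.5, one point at 1.3

def occupation(eta, B):
    """eta(B): the number of points in the region B (given as a predicate)."""
    return sum(n for x, n in eta.items() if B(x))

def integrate(eta, f):
    """integral of f against eta = sum over points (with multiplicity) of f(x)."""
    return sum(n * f(x) for x, n in eta.items())

assert occupation(eta, lambda x: x < 1.0) == 2   # two points below 1
assert integrate(eta, lambda x: 1.0) == 3.0      # total number of points
```

The point of the counting-measure notation is visible here: η(B) and ∫ f dη are the same loop over points, with f an indicator in the first case.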
Now, if you like the formalism of Feller–Dynkin processes, generators and the like, you can write down directly, on the space of counting measures, what the generator is. The generator acts on functions that map configurations to R, and it is given by (L F)(η) = ∫ η(dx) ∫ K(x, dy) [F(η − δ_x + δ_y) − F(η)]. The integral against the counting measure you should always read as a sum over all points of the configuration. So you sum over every point of your configuration, and what can happen is that this point dies — that's the −δ_x — and it is reborn, if you like, or jumps to a new location y, chosen according to some jump kernel K. This is assuming that the underlying one-particle motions are jump processes. The other process is the symmetric inclusion process, where the formal generator is given by a formula somewhat similar to the one we just had, but instead of the kernel K(x, dy) I now have α(dy) + η(dy). The description is: wait for an exponential time, and when your clock rings, a particle jumps, and it decides where to go in two possible ways. Either it joins an existing particle at an existing location, or it sets up shop somewhere new, chosen according to the measure α. You can add a bit more space dependence by replacing α(dy) + η(dy) with c(x, y) [α(dy) + η(dy)]. To say it more visually: in R^d I have a bunch of particles, possibly sitting on top of each other, and the possible transition is that a particle either joins a spot where there is already somebody, or goes somewhere new chosen according to α. Now, this model falls a little bit from the sky. You can think of it in several ways; one is as a toy model, an interacting particle system with attraction. 
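As a sanity check on the free Kawasaki formula, here is a small sketch of my own that evaluates the generator on a function of a finite configuration, for finitely many sites and a discrete jump kernel (sites and rates are made up for illustration):

```python
from collections import Counter

def kawasaki_generator(f, eta, K):
    """(L f)(eta) = sum over points x of eta, sum over y of
    K(x, y) * [f(eta - delta_x + delta_y) - f(eta)]."""
    total = 0.0
    for x, n in list(eta.items()):
        for y, rate in K[x].items():
            moved = eta.copy()
            moved[x] -= 1       # the point at x dies ...
            moved[y] += 1       # ... and is reborn at y
            total += n * rate * (f(moved) - f(eta))
    return total

# Two sites, unit-rate jumps in both directions.
K = {0: {1: 1.0}, 1: {0: 1.0}}
eta = Counter({0: 2, 1: 1})

# f(eta) = eta({0}), the number of particles at site 0:
# each of the 2 particles at 0 leaves at rate 1, the particle at 1
# arrives at rate 1, so (L f)(eta) = -2 + 1 = -1.
f = lambda cfg: cfg[0]
assert kawasaki_generator(f, eta, K) == -1.0
```

The double loop is exactly the double integral ∫ η(dx) ∫ K(x, dy) [...] read as a sum over points and targets.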
So you have an independent-random-walker part, which comes from the α, and then there's the +η part: you like to join existing particles. You can then play around and ask whether this is a model with a condensation phenomenon or something like that. But it was actually introduced originally, in the lattice version, as a model that is dual to a model for energy transport. This was by Giardinà, Kurchan and Redig, and I think they were inspired by a paper by Kipnis, Marchioro and Presutti. The model also has some relevance, if you change interpretations, in mathematical population genetics — maybe someone has heard of the Moran model. Good, so those are the processes. Now I want to formulate my first intertwining relation, which is actually very easy; what is nice about it is that it brings things together. I want to focus on processes that satisfy a certain consistency property, a slightly weaker version of standard Kolmogorov consistency, which informally says: we call the process consistent if the following two steps can be done in either order with the same outcome. Step number one: remove a particle chosen uniformly at random from your existing particles, and then time-evolve the configuration. Or you do it the other way around: time-evolve and then remove. That is consistency. If you'd like more concrete equations, one way is to introduce an operator, let's say an "annihilation" operator. I put annihilation in quotation marks because if I were to translate everything into a Fock space, it's precisely not something I would call annihilation. It maps a function on finite configurations to another such function; I can write it in terms of counting measures, but actually the nicest equation is the last one over here. 
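To make the joining-versus-fresh-spot dynamics tangible, here is a minimal Gillespie-style sketch of my own of the inclusion process on finitely many sites. The sites, rates, and the convention that the jumper is removed first and its target y is drawn with weight α_y + η_y (counting the other particles) are my assumptions for the illustration, not taken from the talk.

```python
import random

def sip_step(eta, alpha, rng):
    """One jump of a symmetric-inclusion-type process on the sites of alpha.
    Convention (assumed): remove a uniformly chosen particle, then re-insert
    it at site y with probability proportional to alpha[y] + eta[y],
    where eta is the configuration with the jumper removed.
    """
    sites = list(alpha)
    # choose the jumping particle uniformly at random
    particles = [x for x in sites for _ in range(eta[x])]
    x = rng.choice(particles)
    eta[x] -= 1
    # target: join an existing particle (the eta[y] part),
    # or set up shop somewhere new drawn from alpha.
    weights = [alpha[y] + eta[y] for y in sites]
    y = rng.choices(sites, weights=weights)[0]
    eta[y] += 1
    return eta

rng = random.Random(1)
alpha = {0: 0.5, 1: 0.5, 2: 0.5}
eta = {0: 3, 1: 0, 2: 1}
for _ in range(100):
    sip_step(eta, alpha, rng)

assert sum(eta.values()) == 4            # particle number is conserved
assert all(n >= 0 for n in eta.values())
```

The two summands in the target weight are exactly the two mechanisms of the generator: α for the free-walker part, η for the attraction to existing particles.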
If I want to evaluate the new function at a configuration of n points, I sum over every point of the configuration, remove it, and evaluate the original function: (A f)(η) = ∫ η(dx) f(η − δ_x). That is the step of choosing a particle uniformly and removing it, up to normalization. Then you call a process consistent simply if the semigroup commutes with this operation: A P_t = P_t A. When I first saw that, I thought it sounds too restrictive to be satisfied by many interesting models, because it's almost like Kolmogorov consistency; it sounds like the dynamics for infinitely many particles would have to be trivial, like too much to ask for. But it turns out that there are many interesting examples satisfying this property. On the lattice there are symmetric inclusion processes, exclusion processes, and also the dual particle model to the energy transport model of Kipnis, Marchioro and Presutti. There are a bunch of models; in particular the two examples that I gave have this property. And there are a couple of examples in a paper by Le Jan and Raimond, for example sticky Brownian motions, where you have certain interacting diffusions and tune the interaction in a certain way. So there are examples. Question: but here the jumping depends on the particles which are there; that confuses me. If you remove a particle first, that changes how the next particle jumps, doesn't it? Answer: it's a computation to be done; at the end of the day everything works out, but it's been a while since I checked it. Question: I was just confused because when you remove a particle, the dynamics looks a little different, right? The jump probabilities change. Answer: no, the jump rates are chosen in a very specific way, which is precisely why it works out. But it works out. 
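For independent walkers the consistency A P_t = P_t A can be checked directly: with two particles, (A f)(x_1, x_2) = f(x_1) + f(x_2), and the claim is that removing a particle commutes with the tensor-product time evolution. A small numpy sketch of my own, with a made-up one-particle generator:

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential by a truncated power series (numpy only)."""
    out, power = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        power = power @ M / k
        out = out + power
    return out

# One-particle generator: random walk on 3 sites (complete graph).
L = np.array([[-1.0, 0.5, 0.5],
              [0.5, -1.0, 0.5],
              [0.5, 0.5, -1.0]])
t = 0.9
P = expm(t * L)                   # one-particle semigroup P_t
f = np.array([1.0, -2.0, 0.5])    # a one-particle observable

# (A f)(x1, x2) = f(x1) + f(x2): remove one of the two particles, evaluate f.
Af = f[:, None] + f[None, :]

# Two-particle semigroup is the tensor product; apply it to A f:
# sum_{y1,y2} P[x1,y1] P[x2,y2] (Af)(y1,y2).
P2_Af = P @ Af @ P.T

# Consistency: evolving then removing equals removing then evolving.
Pf = P @ f
assert np.allclose(P2_Af, Pf[:, None] + Pf[None, :])
```

The check succeeds because the rows of P_t sum to one, so each particle's contribution evolves on its own; for genuinely interacting examples like the inclusion process the same identity holds, but for a less obvious reason.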
Alright, so now the K-transform. It's a souped-up version of a number of things we have worried about, and correlation functions come back in. The K-transform, written in its full glory, maps a function f of finitely many points (think, for example, of an interaction potential of two points) to a function of many, many points, simply by summing over all corresponding n-tuples. The example to think of is: if your function f lives on two particles and is equal, say, to some interaction potential v, then the K-transform formalizes the map from v(x, y) to Σ_{i<j} v(x_i, x_j), and it does so in such a way that in one go you have a formula into which you could plug a whole sequence of multi-body interactions if you want. This was introduced by Lenard in the 1970s, I think for simple point configurations, to get a systematic grip on the relation between correlation functions and the underlying point process: when you want to go from correlation functions to the distribution of your random point configuration, there is a moment problem involved, and he wanted to study that in a systematic fashion. The relation is that the expected value of the K-transform of a function can be written in terms of correlation functions; no surprise, the expected value of something like Σ_{i<j} v(x_i, x_j) is just an integral against the two-point correlation function, and it's the same relation that is written there in a slightly more complicated way. Now the observation is: if your particle system is consistent in the above sense, then P_t and K commute. It's actually not that difficult to check, and it generalizes some relations that were in the literature before, but if you look at it, it looks like some horribly abstract formula, even though it's nice and short. What it actually tells you is that if you want the time evolution of the k-particle correlation function, all you need to do is apply the k-particle time evolution to the correlation function. That's in words; let me give the formula. If I call α_{n,t} the factorial moment measure, or correlation measure, whichever way you want to call it, at time t, then you can simply obtain it from the time-zero correlation measure by applying the n-particle time evolution. No complicated BBGKY hierarchy; it's really a very simple system, and that's just a consequence of the consistency. What is kind of nice is that this relation can be recast as an intertwining, P_t K = K P_t^{(n)}, where P_t^{(n)} is just a labeled-particle version of my dynamics; for free Kawasaki it would simply be the tensor product of the one-particle dynamics. Alright, so that was relation number one. Now, since I motivated things by saying I want to generalize formulas with a product over lattice sites, can I see how the K-transform is related to a product over lattice sites? It turns out you can, because if I think of the space of point configurations, counting measures on Z^d, that's just the same as N_0^{Z^d}: at each point I count how many particles are there, which gives me one integer per lattice site. Now I can compute the K-transform and do the little combinatorics where I ask for the number of ways of choosing m-tuples of points among my big configuration of n points, and so there is no surprise that a binomial coefficient appears. What you see is that in that case you can write down this linear transformation as a matrix, and the matrix turns out to be a product over lattice sites, with, for each lattice site, a binomial coefficient. And this is how, in this case, 
I can generalize something which has product form over the lattice to something in the continuum that is actually very natural there. So this was the first intertwining relation I wanted to present, and now comes the second one, where we go to orthogonal polynomials. I no longer want binomial coefficients in this product representation; instead of n(n−1)⋯ I now want some orthogonal polynomial in n. The context is, again, that we want to generalize something that has been done for lattice systems. For lattice systems there are a number of models where, even though there is some kind of interaction in the dynamics, the invariant or reversible measures turn out to be of product form. And in those examples you very often have a duality function which turns out to be a product of single-site functions, and that single-site function happens to be given by a discrete orthogonal polynomial, where the orthogonality is with respect to the invariant single-site measure. I'm not sure exactly where this is applied, and whether something has actually been proven with it. The way my co-authors tried to explain it to me (and there are actually some articles in that direction) is that the hope is to use orthogonal polynomials to define higher-order fluctuation fields, and that orthogonality would help in deriving scaling limits. I'm afraid that's all I know about this, but that's a little bit of background. So now the question is again: how would I generalize this to particles in the continuum? Well, for some things it's not that difficult to see how. The first ingredient is a factorized measure, a product measure that is invariant; this product property simply becomes infinite divisibility, or, if you prefer, think of an ideal gas, where for disjoint regions, what happens in region 1 is independent from 
what happens in region 2. That's clearly the analog of the product property, and maybe independence over disjoint regions, rather than infinite divisibility, is the property I should have emphasized; I will come back to that later. The next ingredient: what about those orthogonal polynomials? There, too, it turns out that there are notions around, and here are some references that I found useful. One is the book by Schoutens on orthogonal polynomials and stochastic processes; that's the more down-to-earth one. Then there's an article by Nualart and Schoutens on chaotic and predictable representations for Lévy processes, where you see stochastic analysis and chaos decompositions slowly creeping in. And then there's an article by Lytvynov, building on work by Berezansky and many others, about orthogonal decompositions for Lévy processes, with applications to several examples; this third article comes more from the school, I would say, of infinite-dimensional analysis, and that still feels a little bit esoteric to me. So let me first say what kind of Lévy point process, or probability measure on the space of configurations, I would call a Lévy measure. To define it I need two parameters. One is a measure α on R^d; okay, let's just take it finite. Think of the density of a gas, or, for probabilists, the intensity measure of a Poisson point process. Then there's an additional family of weights indexed by the natural numbers, m(k). And then the Lévy point process would be a random variable with state space N (or N_f) whose Laplace functional is given by a certain formula. The Laplace functional maps a test function f of a single point to a real number, namely to the expected value of e^{−∫ f dη}; and remember, ∫ f dη is really the sum over points x of the configuration of f(x). And you ask that this very concrete formula on the right-hand side holds true. So 
if it doesn't ring a bell, then I think we just leave it at that. But perhaps you've seen, or done, exercises where you had to compute the generating function or Laplace transform of a Poisson random variable; I hope this reminds you of it. Or think of the Lévy–Khinchine formula for Lévy processes; it's really similar. In fact, the simplest example is: if the weight function m(k) is either 1 or 0, then you recover simply the Poisson point process with intensity measure α, or if you prefer, the ideal gas with density α, and this happens to be the stationary measure for the free Kawasaki dynamics. Good. And of course, for the ideal gas you know the property that if I take disjoint regions in space and count how many particles are in each of them, this gives independent random variables, and the number of particles in a given region is a Poisson random variable with parameter the intensity measure of that region. Another example, which is less standard (I don't know if it comes up in any physically relevant system), is sometimes called the Pascal point process. There I choose m(k) in a certain way, a logarithmic distribution, and then it turns out I end up with a process that also has the property that disjoint regions are independent, but now the counting variables for each region are no longer Poisson; instead they are negative binomial random variables. And this measure turns out to be a reversible measure for the symmetric inclusion process that I presented earlier. So these are two examples of Lévy point processes. Now, polynomials. (Sorry, I'm not being very consistent about whether I want to work with locally finite or finite configurations.) When I say orthogonality, I want to work in the Hilbert space of functions on configurations of points, where, in order to define my 
squared norm in the Hilbert space, I integrate with respect to this infinitely divisible distribution P of a Lévy point process. Okay. Now I want some notation. One is η(f), which is just ∫ f(x) η(dx); if you work with distributions, I think that's a familiar notation, and again think of it as a sum over all points of f(x_i), with f a kind of test function. And I write P_n with a bar for the closure, in that Hilbert space, of linear combinations of monomials, a monomial being a product η(f_1) η(f_2) ⋯ η(f_n). If I were to work on the lattice, you see this would really be a polynomial of a certain degree in my single-site occupation numbers. So there is a notion of polynomials of degree at most n, and of degree n, on that space. And now you can simply define the space of degree-n orthogonal polynomials by a Gram–Schmidt procedure: you take the polynomials of degree at most n and project off everything that is not orthogonal to the lower-degree polynomials. Here the quantum notation starts creeping in: I use colons, the Wick-ordering notation, so :η(f_1) ⋯ η(f_n): is the orthogonalized (think Gram–Schmidt) version of the polynomial η(f_1) ⋯ η(f_n). That looks a little abstract, but it turns out that this is precisely what we need in order to generalize the notion of having a product, over the lattice, of single-site orthogonal polynomials in the occupation numbers. Because if you look at the special case where the test functions f_i are chosen as indicators of disjoint regions (in that case I just write η(A) right away instead of η(1_A)), and I ask what is the orthogonalized version of (number of points in A_1) to the power n_1, and so on, up to 
(number of points in A_k) to the power n_k, then this happens to be simply a product of one-variable orthogonal polynomials. In this product, the number of factors equals the number of disjoint regions chosen on the left side, the degrees correspond to the powers, as they should, each factor is a function of the number of points in the corresponding region, and the intensity measure of that region enters as a parameter of the polynomial. The polynomials appearing here are the so-called Poisson–Charlier polynomials; those are discrete orthogonal polynomials, orthogonal with respect to the Poisson distribution. And this is the formula that makes the link with products of single-site objects; this is why this is the generalization. That's the formula you have for the Hilbert space of the ideal gas, the Poisson case. If instead I look at the Hilbert space for the reversible measure of the symmetric inclusion process, which is the Pascal point process, then I have similar formulas, but instead of Poisson–Charlier polynomials I should use Meixner polynomials. And just in case you're more familiar with it: this is like a sibling of relations that you would have in the Gaussian world with Hermite polynomials, just a discrete point process version of it. So we're almost there. These polynomials can now be used to define unitary isometries between different L^2 spaces. Now it starts getting a little heavy on the formulas; I do hope that at the end you see behind the smoke screen of notation and see that it is simple. So again I work with Hilbert spaces, with Fock spaces; we've seen these before, but still let me say how we want to define them here. For the Poisson case you choose more or less the standard Fock space, the space of sequences of symmetric functions f_n, and you use the standard norm, except that the only thing which is maybe slightly non-standard is that I'm 
using a 1/n! in the definition. People in some communities like to put the 1/n! because then, when you go to the definition of the creation and annihilation operators, you get rid of the factors √(n+1) and 1/√n; we like it for that reason. And then comes the theorem, which is also known: if you map a symmetrized tensor product of n one-particle functions (an element of the n-particle sector of the Fock space) to the corresponding orthogonalized polynomial, this actually gives you a unitary isomorphism from the Fock space to the L^2 space with respect to the Poisson measure. The way to think of it is that this unitary maps symmetric functions of n variables to orthogonal polynomials of degree n. Now, this kind of thing appears in various areas, I think with slightly different names. In probability it is very much related to the chaos decomposition, the Wiener chaos decomposition, and one thinks of the corresponding n-particle subspaces as the n-th Wiener chaos; there's a certain vocabulary that comes with it. For the quantum many-body people among you, to make the link with the quantum world: the way to think of the orthogonalized polynomial :η(f_1) ⋯ η(f_n): is that you obtain it as an n-fold iteration of creation operators applied to the vacuum, choosing a representation of the CCR in which the vacuum is the constant function one. So there's a way of matching these things in a nice way. I wanted to write down the formula for the creation operator, but maybe I won't do that now. Now, for the other example that I focused on, the symmetric inclusion process, there's also a similar relation. It's almost the same as before: again you have a unitary isomorphism from a Hilbert space of sequences of symmetric functions to our L^2 space with respect to the reversible measure, but now the difference with the 
Poisson case is that the scalar product on the Fock space is slightly more complicated, and the corresponding space is sometimes called an extended Fock space. I'm not far enough along in my reading to tell you why; I hope to understand it better at some point, but let me just say what the corresponding scalar product is. Again you have a sum as before, but whereas previously you would always integrate a function of n variables against the product measure, dx_1 ⋯ dx_n or dλ^n with some intensity measure, now you choose a more complicated measure, which in full generality is given by a sum over set partitions. Just to see it for n = 2: integrating a function of two variables, there's a first part where I just integrate the two variables out separately with respect to the corresponding intensity measure, and then there's a second contribution where I decide to put the two variables in one basket and integrate them out together. So this was a lot of notation, which I needed to formulate, finally, one relation, stated just for the example of the inclusion process, and which is the unitary intertwining relation that I was aiming for. Namely: suppose you give yourself a finite measure α, which I want to be atom-free because it makes my life easier, you take the distribution of the Pascal process that I introduced earlier (this was one example of a Lévy random field), and you look at the semigroup of the inclusion process, the system of particles hopping around. I also want a labeled-particle version that I think of as acting on the Fock space. Then the result is: observation number one, the Pascal measure P is actually a reversible measure for that process, so you have detailed balance, the semigroup is self-adjoint; and the unitary actually is an intertwiner. Now 
That sounds again a little bit abstract, so to make it perhaps slightly more concrete, one way of thinking about it is the following. Let's say I take a test function of n variables that is symmetric, and I would like to know the expected value of the orthogonalized version of the sum of f_n over n-tuples of particles, which again is easier than it looks. So I would like to know the time evolution of things like that: I sum over all particles at positions in my configuration and look at f_n(x_{i_1}, ..., x_{i_n}). What the result tells you is that if you take this observable, do the orthogonalization procedure, where the summation is over the points at time t, and take the expected value, then you can evaluate it instead as follows: let the n-particle time evolution act on the test function, then do the summation with respect to the initial configuration, and then orthogonalize. That is the second intertwining relation I wanted to present, and this is how you can generalize the duality where the duality function is a product of single-site orthogonal polynomials.

So that is what I wanted to say. The conclusion is that, well, perhaps not dualities, but intertwining relations do indeed generalize very nicely from the lattice to the continuum, and it is actually not difficult; the difficult part is sorting out all the concepts. It takes some reading and getting used to, but that is actually all there is to it. What I liked about it is that you have this conceptual link with the K-transform, and that you bring into the game orthogonal polynomials as they are defined in infinite-dimensional analysis, Wiener chaos decompositions, and so on. I didn't mention it before, but there is also a link to yet another name, multiple stochastic integrals. Of course, what's missing is: can you do anything with it, or is it just abstract things where you
say you have a vocabulary, but it is useless? So this question mark, I don't know. The other thing to be understood, and I haven't said anything about it, though it briefly appeared when I mentioned creation and annihilation operators, is the following. At least in the discrete case it is known that all these things are very much related to representation theory of Lie algebras, or to different representations of commutation relations for creation and annihilation operators, raising and lowering operators, and so on. Let me take a step back and mention one thing: for all these dynamics with a generator, it happens, actually for more models than I would have thought, that you can think of the generator as a many-body Hamiltonian for a quantum system and write it in terms of creation and annihilation operators, as we have seen, for example, earlier [unclear]. Or, instead of creation and annihilation operators from a bosonic algebra, you could also use generators of representations of other Lie algebras; one that plays a role for the symmetric inclusion process would be su(1,1). Then one way to understand duality, sometimes, is to think of it as being related to choosing different representations of commutation relations. From a more algebraic point of view, what is behind all that is a relation between orthogonal polynomials and Lie algebras, which here would become infinite-dimensional. I don't know what current algebras are, but I was told that is the keyword, and there is literature in that direction. Once you start digging you kind of land in the territory of quantum probability; I found a bunch of articles by Accardi, but we haven't read them yet. Thank you.

Thank you, Sabine. Any questions?
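A sketch, in formulas, of the second intertwining relation and of the su(1,1) commutation relations mentioned above; the notation (the orthogonalization map \mathcal{O}, the n-particle semigroup P_t^{(n)}, the generators K^{\pm}, K^0) is my own shorthand, not the talk's:

```latex
% Second intertwining relation (sketch): f_n a symmetric test function
% of n variables, \mathcal{O} the orthogonalization procedure, \eta the
% initial configuration with points X_i(0), P_t^{(n)} the n-particle
% time evolution acting on the test function.
\mathbb{E}_{\eta}\Bigl[\,\mathcal{O}\Bigl(\sum_{i_1 \neq \cdots \neq i_n}
    f_n\bigl(X_{i_1}(t), \dots, X_{i_n}(t)\bigr)\Bigr)\Bigr]
  = \mathcal{O}\Bigl(\sum_{i_1 \neq \cdots \neq i_n}
    \bigl(P_t^{(n)} f_n\bigr)\bigl(X_{i_1}(0), \dots, X_{i_n}(0)\bigr)\Bigr)

% su(1,1) commutation relations (standard): raising and lowering
% operators K^{\pm} and a diagonal operator K^0 satisfying
[K^0, K^{\pm}] = \pm K^{\pm}, \qquad [K^-, K^+] = 2 K^0
```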
Well, one application would be if some computations simply became simpler using the formulas you had on the board. Is there an example where the combinatorics suddenly becomes easier on one side compared to the other?

Probably, at least in the discrete case, but I would like to have a continuum example. In the discrete case there are really many papers that actually apply duality, often without even using the word duality. In the paper by Kipnis, Marchioro and Presutti, I think the word "dual" doesn't even appear, but that is what they use. But I don't know of continuum examples, and I would like to have some.

If you think of equilibrium measures related to your dynamics, do your dualities have anything to do with other dualities in statistical mechanics?

Probably yes, but I don't really feel competent to answer. I know that there is a short note by Haber-Schwarn on stochastic integrability and the KPZ equation, and I think he makes the link with things like vertex models. But I would need to do more reading.

Any other questions? Thank you again for being here.