Okay, welcome everyone to today's TSVP talk. Today we are joined by Travis Scrimshaw, who's currently visiting from Hokkaido University. Travis completed his PhD in 2015 at UC Davis, and has since had positions at the University of Minnesota, the University of Queensland, Osaka Metropolitan University, and now is finally at Hokkaido University, where he's an assistant professor, soon to be an associate professor. And he's been working closely with a few of the members of our unit while visiting. So we're glad to be joined by him today. And the title of his talk is Throwing Balls to Make Microscopic Waves. Oh, thank you. Thank you very much for the introduction and for allowing me to be part of this nice program, this wonderful traveling salesman's visitors problem type solution. But I guess because I'm not talking to a bunch of computer scientists, no one understands that joke and I'll move along. So during my talk today, I'm going to give an introduction to some very classical results from probability theory. Well, not super classical, but more than 30 years old. The model itself is actually about 80 years old now. I'll talk about what happens in the long-term behavior, going from the microscopic level to the macroscopic level. And then I'll talk about a very similar type of model, but one that's deterministic and motivated from a completely different area. And at the end of the day I'll just sketch some open questions, and maybe you'll have some nice applications for whatever problem you're interested in. And the model I'm going to start with is a very, very widely used model in all sorts of mathematical contexts, but also physics, chemistry, biology, traffic congestion, engineering, things like that. It is known as TASEP: the Totally Asymmetric Simple Exclusion Process. So what you should imagine is you have just a single lane of traffic and a bunch of cars on that road.
And just to save time, I'm gonna draw a car as just a dot, and each car on its own gets to decide, well, okay, now it's time for me to move. But of course, you never want any cars to hit each other. So this particle here, this car, it can't move forward because there's a car already in the next space. And so this is our model, and what we have is each particle waits. Well, to start with, let's actually make this a little easier: rather than a stopwatch that decides how long it wants to wait, each particle has a coin with some bias. So I'm going to fix some real number alpha greater than zero, and I'm gonna flip a coin with success probability p equal to alpha over alpha plus one, the success telling the particle to move. It's just a coin flip: you wait until the first time you get a heads, and then you repeat and repeat. At every time step, every car that can move, its driver flips a coin and goes, oh, it's heads, now I move to the next space. If there's a car already in front, well, you're stuck in traffic, and you can call it a fun game of flipping a coin, but you know you're not going to do anything, so you might as well watch a TikTok video. So that's the start of the model. And the probability of the waiting time being t is one minus p to the t, times p: t failures, and then you get a success, and therefore you move. And this here is basically a geometric series. You probably learned about this in high school, and if I take this infinite sum, I get one, because, well, the event should happen for sure. That's probability one.
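This discrete-time picture is easy to sketch in code. Below is a minimal illustration of my own (the road layout, seed, and the choice to sweep from the front of the traffic backwards are all assumptions for the sketch, not from the talk): each car whose next cell is empty flips a coin with success probability alpha/(alpha+1) and moves on heads.

```python
import random

def tasep_step(road, alpha, rng):
    """One discrete time step of TASEP on a finite road (1 = car, 0 = empty).

    Sweep from the front of the traffic (right to left), one common sequential
    update convention: a car moves when the cell ahead is empty and its biased
    coin, with success probability alpha / (alpha + 1), shows heads.
    """
    p = alpha / (alpha + 1.0)
    road = list(road)
    for i in range(len(road) - 2, -1, -1):
        if road[i] == 1 and road[i + 1] == 0 and rng.random() < p:
            road[i], road[i + 1] = 0, 1   # the car hops forward
    return road

rng = random.Random(7)
road = [1, 1, 0, 1, 0, 0]
for _ in range(10):
    road = tasep_step(road, alpha=2.0, rng=rng)
```

The exclusion rule is what makes this interesting: cars are conserved and never share a cell, so all the randomness goes into the traffic-jam structure.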
So this is our discrete time model, but we actually want continuous time, and we want to find a way to sample this. Because, well, if you're picking a random real number between zero and one, the chances of you getting exactly pi over three minus one, say, is zero. There's no way you're gonna get precisely that one real number. There are just too many. So we need to find a way to actually model this. And what we do is we scale the time step and the success probability at the same rate, by some delta t going to zero. And when we do that, this geometric waiting time goes to an exponential waiting time. What that means is I get some rate alpha, and e to the minus alpha t becomes the probability that we are still waiting at time t. And so with this idea of flipping coins at the scale of the time step, in the discrete simulation that you run on a computer, we can model this exponential waiting time. And so now we have a continuous time process, and that more closely reflects what we see in the real world of actual traffic movement. But, you know, if we also want to model electrons moving down a wire, or blood flow in a capillary, anything where things are constrained, it basically looks like a one-dimensional problem where things move with some randomness at some fixed rate, and this model gives you a way to do that. And we now want to understand its macroscopic behavior. What happens after we wait, say, 20 hours, rather than just in the first five seconds? What kind of behavior should we expect? And how we're going to do that is we're actually gonna change our picture altogether. So I'm gonna simplify this a little bit more, and I'm going to start with all of my cars basically at a traffic light. So if you want, you can think of this as some kind of capacitor that is discharging its electrons, which move down the wire.
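The passage from coin flips to exponential waiting times can be checked numerically. In this sketch (my own illustration, with made-up parameter values), the coin is flipped every delta-t time units with success probability alpha times delta-t, and the chance of still waiting at time t converges to e^(-alpha t) as delta-t shrinks:

```python
import math

def survival_discrete(alpha, t, dt):
    """P(still waiting at time t) when a coin with success probability
    alpha*dt is flipped once every dt time units."""
    steps = int(round(t / dt))
    return (1.0 - alpha * dt) ** steps

alpha, t = 1.5, 2.0
exact = math.exp(-alpha * t)   # the exponential waiting-time tail
approx = [survival_discrete(alpha, t, dt) for dt in (0.1, 0.01, 0.001)]
```

Each refinement of the time step brings the discrete survival probability closer to the exponential tail, which is exactly the scaling limit described above.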
And of course, you know, as one particle moves, another one eventually needs to get onto the wire, and things like that. So this is our starting condition. And then, well, after some amount of time, our particles, we have our special zero spot here, and our particles will move along to some points here, here, like this. And what we do is we draw this tilted square grid, and I'm sorry for my very poor drawing skills here. And from that point, what we do is we just say, well, this first row here is how far the first particle has moved. So the first particle here has moved one, two, three, four steps. The second particle has moved two steps. The third particle has moved one step, and then the rest of the particles are still stuck over here. And so I get a figure that looks like this. Maybe what I'll do is I'll color, in another color here, all of the positions of my holes. So basically what it comes down to is that the steps that go down correspond to positions where there's a particle, and the steps that go up correspond to holes. And this shape that I get is known as a partition. And this is good, because now we get something that actually starts looking like a function. It's piecewise linear: it goes one step down, one step up, and we just chain those together. But as we start to scale out bigger and bigger and bigger, and we rescale everything so that after, say, t time steps the things we can see lie between zero and one (and clearly fewer than t particles have moved), then the shape we get looks a little jagged, but it gets closer and closer to an actual smooth curve like this. And what we wanna do is describe what that curve is precisely. So now we have our model, but our model is still just a little bit complicated.
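The bookkeeping from particle positions to a partition can be made concrete. In this sketch (my own encoding of the talk's picture, with hypothetical positions), the particles start packed at -1, -2, -3, ..., and the k-th particle's displacement gives the k-th row of the partition:

```python
def partition_from_positions(positions):
    """Particles start at -1, -2, -3, ...; after some TASEP steps the k-th
    particle from the front sits at positions[k] (0-indexed).  Its total
    displacement positions[k] + (k + 1) is the k-th row of the partition."""
    rows = [pos + k + 1 for k, pos in enumerate(positions)]
    # The exclusion rule guarantees rows are weakly decreasing; drop empty rows.
    return [r for r in rows if r > 0]

# The example from the talk: the front particle has moved 4 steps,
# the next one 2 steps, the next one 1 step, the rest not at all.
lam = partition_from_positions([3, 0, -2, -4, -5])
```

The partition [4, 2, 1] is exactly the staircase shape traced out by the down-steps (particles) and up-steps (holes) of the rotated picture.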
And when we're in continuous time, two particles will basically never move at exactly the same time. Again, the chance of picking a random number between zero and one that's exactly pi over three minus one is zero. So let's actually tweak our model a little bit and say, well, exactly one particle moves at each time step. So we're kind of forgetting about time a little bit, and we're just saying, well, we only care about the sequence in which the particles move. Only one particle moves at each step, and we just record that. And we record it in the boxes of the partition. So again, the partition is cut out from this square grid, and so I get a bunch of boxes. And now I'm gonna fill those boxes with the integers one, two, three, up to however many boxes I have. But I can't do this in an arbitrary order. I basically need to play a big game of very simple Tetris, where one block drops at a time and slides down into a corner. So if I want to do an example, say here's my partition shape. Well, the only particle that can move at the first time step is the very first particle. But now we have a choice: we can move the first particle again or move the second particle. So let's say I move the first, then the second particle moves, then the second particle moves again, first particle moves, first particle moves, then the third particle, then the first particle, then the second particle, and then the fourth particle, like this. And these fillings are known as standard Young tableaux. So now our model is basically that we want to count the number of standard Young tableaux. I'll write that as f lambda: the number of standard Young tableaux of shape lambda. However, it turns out that if I want to properly encode the dynamics of my original particle system with that rate alpha I originally started with, I don't quite want f lambda by itself.
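Counting standard Young tableaux, the f lambda above, can be done by brute force for small shapes: the largest entry always sits in an outer corner, so remove it and recurse. A small sketch of my own, not from the talk:

```python
def count_syt(shape):
    """Number of standard Young tableaux of a partition shape, by removing
    the box holding the largest entry (always an outer corner) and recursing."""
    shape = tuple(s for s in shape if s > 0)
    if not shape:
        return 1
    total = 0
    for i, row in enumerate(shape):
        # an outer corner: the row below (if any) is strictly shorter
        if i + 1 == len(shape) or shape[i + 1] < row:
            total += count_syt(shape[:i] + (row - 1,) + shape[i + 1:])
    return total
```

For the single-column shape there is only one filling, while for (2, 1) there are two, matching the "first particle or second particle" choice in the example.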
I actually want the probability of the particles being at a position lambda to be proportional to f lambda squared, and then I need to normalize that. And there are a few questions involved. First, does this make sense? Can this normalization constant be infinity, so that everything is zero and things go bad? Well, I want to make things just a little bit easier, because it will go to infinity and things are bad. So let's normalize, but only over partitions whose size, the number of boxes, is equal to a fixed value n. And then we take n off to infinity, because n is our time. Maybe I should actually call it t for that matter, because it's how many steps we've done, and we want to take time off to infinity and then rescale. So basically I want to take a sequence of random partitions, each contained inside the next, and that's how I'm kind of recording these tableaux. And from this, I can compute Z_t, because I know I have a finite sum. I want to compute the sum of f lambda squared. This will be my Z_t, where, to throw a little bit of math notation in, I'm just summing over all partitions of t, all ways to place t boxes. And so the claim is that this is equal to t factorial. In other words, this is the number of ways of shuffling t cards. And how do we see that? Well, we have a nice way to do that, and I'll just draw yet another grid. And I think this is square: one, two, three, four, five, six, seven, eight, by one, two, three, four, five, six, seven, eight. And well, for the number of ways of shuffling eight cards, here in this grid I have an eight by eight, a standard chessboard. And I want to shuffle eight cards. How I can encode that is by placing eight rooks on this chessboard such that none of them can attack any of the others. So for instance, I can do something like this. And so why is this a shuffling?
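The claim Z_t = t! can be checked directly for small t: enumerate all partitions of t, count the tableaux of each shape, and sum the squares. In this sketch (my own code; it uses the hook length formula for the count, a standard identity not covered in the talk), the identity holds for every t up to 7:

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def syt_count(shape):
    """Hook length formula: f^lambda = n! divided by the product of hooks."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    hooks = prod(shape[i] - j + cols[j] - i - 1
                 for i in range(len(shape)) for j in range(shape[i]))
    return factorial(sum(shape)) // hooks

# Z_t = sum over partitions of t of (f^lambda)^2, which should equal t!.
z = {t: sum(syt_count(lam) ** 2 for lam in partitions(t)) for t in range(1, 8)}
```

This is exactly the statement that pairs of same-shape standard Young tableaux with t boxes are in bijection with shuffles of t cards.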
Well, I can just write eight, six, three, five, four, one, seven, two. Say this five here: well, it's in the fourth row, fifth column. That's how I can read these off. And now how do I get the standard Young tableaux out? Well, what I do is I draw straight lines going up from every point and to the right of every point, so that they intersect in the first possible way. That's not quite right, that goes up; I'll go like this. And now I record one, two, seven, and one, four, seven, like this. And now the next thing is I look at all of these points where the lines have collided, and I repeat. I'll go up and escape, I'll go like that, and I'll go up. And so now I get three, four, and two, eight. And I repeat yet again: five, and three. And I'm running out of colors. So that's six, and five. And then finally eight, and six, like this. So I started with the red, and I write one, two, seven, and one, four, seven. Then I take the blue: I have three, four, and two, eight. Next was the green, so I have five, and three. The purple: I have six, and five. And finally, for the pink, I have eight, and six. And so what we see is we get a pair of standard Young tableaux out from this process. And this gives us our bijection, because what I started with is a shuffling of eight cards, and I've given you a pair of standard Young tableaux. And now I need to show that I can go in the reverse direction: I can start with a pair of standard Young tableaux and go backwards. Well, how I do that is basically the same sequence done in reverse. I start with a line coming in at eight and a line coming in at six, and that creates a point there. Then I have a line coming in at six and a line coming in at five, and then lines coming out of this pink point, and that gives me all of the purple lines. And you can repeat that over and over again until you're done.
And then the points that you're left with are the red points, and those are precisely your permutation, your shuffling of cards. And it turns out that this construction is also related to something called polynuclear growth, where you have a line and somewhere along that line a point comes into existence. Or imagine a fire breaks out somewhere, and then it spreads linearly, but spontaneously you have additional fires that pop up; you may have a second fire, and you record that. And that's this polynuclear growth model. So basically, what I'm saying with this is, well, I've transformed the picture, and I now have my normalization constant. I now have an actual probability for a random partition with n boxes in it. And I can say a little bit more from this: the shuffle here, if I look at the longest row, that's going to correspond to the longest increasing subsequence of this permutation. So the three, the five, and the seven is an increasing subsequence. And why I'm mentioning this will take me another moment to actually get to, but please bear with me. So we now have a probabilistic model. And it turns out, I won't bore you with some of the details, but essentially, using this idea that they're particles, we can use techniques from mathematical physics to compute the long-term behavior by studying asymptotics and doing some analysis. But the main result is due to Vershik and Kerov, and independently Logan and Shepp, around 1977, and their theorem was that the limit shape is a semicircle. So after a really, really, really long time.
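The shadow-line picture above is equivalent to Robinson–Schensted row insertion, which is easier to code. The sketch below (the standard algorithm, in my own implementation) reproduces the talk's example shuffle and exhibits the longest-row fact: the first row of P has the length of the longest increasing subsequence.

```python
from bisect import bisect_left

def rsk(perm):
    """Robinson-Schensted row insertion: permutation -> pair (P, Q) of
    standard Young tableaux of the same shape."""
    P, Q = [], []
    for step, value in enumerate(perm, start=1):
        row = 0
        while True:
            if row == len(P):                 # fell off the bottom: new row
                P.append([value])
                Q.append([step])
                break
            r = P[row]
            j = bisect_left(r, value)
            if j == len(r):                   # larger than everything: append
                r.append(value)
                Q[row].append(step)
                break
            r[j], value = value, r[j]         # bump the smallest larger entry
            row += 1
    return P, Q

# The shuffle of eight cards from the talk's chessboard example.
P, Q = rsk([8, 6, 3, 5, 4, 1, 7, 2])
```

Running it gives P = [[1,2,7],[3,4],[5],[6],[8]] and Q = [[1,4,7],[2,8],[3],[5],[6]], matching the red, blue, green, purple, and pink rows read off from the shadow lines, and the first row has length 3, the longest increasing subsequence (e.g. 3, 5, 7).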
If I choose a random partition with this distribution, with this weighting, then what I get is something that looks basically like a semicircle. So if I could draw a perfect semicircle that touched properly, and then I sampled a random partition, it would look, if I make n sufficiently big, even something like n equals 100, exactly like the semicircle here. And basically I now know the long-term behavior, essentially the density of particles, as I go to a macroscopic level. But we know things in a bit finer detail, because we know what the limit shape is. We also want to look at how lambda one varies from the limit shape. So we want to look at the little microscopic fluctuations of the first particle compared to where it should be. So the first particle, you know, we know how that should move: it's free to move, there are no constraints, it's flipping a coin. It should basically follow this geometric distribution. Well, we've changed our model a little bit, so things are a little bit different, but essentially that should be very well behaved, with little tiny fluctuations involved. And it turns out that this distribution is equivalent to the distribution of the largest eigenvalue of a random Hermitian matrix where we choose the entries by a Gaussian distribution, the Gaussian Unitary Ensemble. In other words, we take basically the complex analog of the normal distribution, the little bump. Actually, everyone here has, I'm sure, taught a class where your students ask about the standard deviation of the grades; all of that stuff, that's the Gaussian, right?
And so, you know, it's a fairly simple model: you choose half the entries, and the fact that the conjugate transpose has to give you the same matrix gives you all of the entries. And then you look at its largest eigenvalue, and you look at how that fluctuates from another semicircle. This is known as the Wigner semicircle law: as the matrix gets really big, the distribution of the eigenvalues approaches a semicircle. And the fluctuation of the largest eigenvalue is known as the Tracy–Widom distribution. This is not quite a normal distribution. It looks a little skewed: a little bit slower up on one side and sharper down on the other. So why is this important? Well, these random matrices come from the people in high-energy nuclear physics. If you want to model these very heavy atoms and how the nucleus behaves, well, from what I understand, you don't want to do it directly. Instead, you take a random matrix. And essentially what I've done is I've connected particles moving down a wire with nuclear physics. Well, these are classical results, but I'm saying that they're related. And this is due to Baik, Deift, and Johansson; I believe this is the first known proof of this. And so this fairly simple model of particles moving along a line actually has a very, very rich structure to it. Not only that, but it also has other interesting macroscopic behaviors if we add another, light particle. And by this, what I mean is I have another particle that's lighter than the original particles I started with. And if one of those big heavy particles wants to get past it, well, it can just push the light particle out of the way; the heavy particles treat the light particle like a hole. But the light particle, if there's a hole in front of it, it can move, but it can't get past one of the heavy particles. And this light particle models shocks in the system. So if you want to think of it like water or gas, maybe a gas flow, you have two gases.
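The Baik–Deift–Johansson statement can be glimpsed numerically without any random-matrix machinery: the longest increasing subsequence of a random shuffle of n cards concentrates around 2 times the square root of n, with Tracy–Widom fluctuations of order n to the 1/6. A small experiment of my own (the seed, sample sizes, and tolerance are arbitrary choices for the sketch):

```python
import random
from bisect import bisect_left

def lis_length(perm):
    """Longest increasing subsequence length via patience sorting;
    this equals the first-row length of the RSK pair."""
    piles = []
    for x in perm:
        j = bisect_left(piles, x)
        if j == len(piles):
            piles.append(x)
        else:
            piles[j] = x
    return len(piles)

# For n = 2500 cards, 2*sqrt(n) = 100; the sample mean lands a few units
# below it, the Tracy-Widom correction of order n**(1/6).
rng = random.Random(42)
n = 2500
samples = []
for _ in range(200):
    perm = list(range(n))
    rng.shuffle(perm)
    samples.append(lis_length(perm))
mean = sum(samples) / len(samples)
```

The fact that a pure card-shuffling statistic reproduces a random-matrix distribution is exactly the surprise of the theorem.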
One is lighter than the other. In a tube, you remove the thing separating them, and you look at how they mix, say in a current. Well, they start mixing, but what can end up happening, depending upon how you set your parameters, is that suddenly the density changes. I mean, it may start that way initially in the system, but you can also get that same sort of sharp change in density as the system propagates in time, and you want to measure, well, where does that shock actually move? If you want to do this microscopically, this is what this light particle does. So it's just a very slight tweak of the model, but you can actually do quite a bit more with it. It also has spontaneous symmetry breaking. Again, this is kind of about long-term behaviors, but if you start adding different parameters, local inhomogeneities to the system, particles moving with different independent rates, you can see very sudden changes in your macroscopic behavior for small changes. So you get all this behavior without actually having to do too much. Your phase transitions can come from boundary conditions. Usually you see phase transitions when you start changing internal structure, changing parameters, things like that, spontaneous symmetry breaking. But it turns out that if, rather than working on a line, you work on a finite lattice, then your boundary values can actually play a role in how the system behaves and drastically change the macroscopic behaviors of the system. So this model is very, very rich, and what's currently being studied, for example, is the system on a ring with inhomogeneities. So, as I said, the particles can hop with different rates. And there are still a lot of open questions about this model from a mathematical point of view, about being able to rigorously prove things when I just start tweaking my model a little bit: I allow particles to hop over one another.
They hop more than one space, they hop with different rates, all of these different things. You can get very different behaviors, and we want to actually prove them mathematically. You can run simulations, you can see lots of good behaviors, but the rigorous proofs are actually much harder than that. There's a lot of detail hidden in these. And was there anything else I wanted to say with this? Yeah, maybe not. Well, these were also very recently connected with how many lines pass through a generic configuration of lines, or more generally k-dimensional subspaces of n-dimensional space, and how they align with each other, things like that. Understanding how to count these things is a classical subject known as Schubert calculus, and it's really only in the past few years that we've realized, oh yeah, the TASEP is being governed somehow by this geometry. And we go from something that's very, very rigid to something probabilistic, and we don't really understand how to go between these, what the probabilistic statements mean on the geometry side and vice versa. So that's kind of the first half of my talk. Well, less than half, but let's say the first part. Any questions at this point? I'll save my intro for the end. All right. So now, that's the probabilistic side, but what if I want to make this a more deterministic process? How we're going to do this is with something called the box-ball system. This was introduced by Takahashi and Satsuma in 1990, and what you do is you again take the model. Now I'm going to draw it slightly differently, but it's still the same model at heart. I put down a bunch of particles, like so, and then what I do is I scan from left to right, and when I encounter a particle for the first time, I throw it into the next available spot. So I take this, and this particle goes over here.
Now, well, I move from this point to this point, but I've already seen that particle, so I continue on. I throw this particle here, continue, this particle gets thrown here, this gets thrown here, this gets thrown here. And so what I end up with is this configuration. And now, if I run that process again, that particle moves one step, that particle moves one step, and then this collection of three particles moves three steps, but it still stays stuck together. And what we see as we investigate this system is that collections of balls move with speed equal to their size, the number of balls. In particular, this one moves with speed one, this one moves with speed one, and this moves with speed three, provided they're far enough apart. Here they're kind of in this interacting process, mainly these four here interacting with each other, and so at that point we can't really see what's going on. But when they're far apart, they move with speed equal to their size, and they keep their same size after an interaction. And so we call these solitons. Why do we do that? Well, if you like to study waves in thin channels, you have probably come across the Korteweg–de Vries equation. If speaking Dutch names is way too hard for you, it is for me, so I usually call it the KdV equation. And the solutions for this are something you can actually do at home: you get a very long thin channel, say maybe half a bamboo, and you make waves through it, and you'll find that the waves will propagate with speed according to their size. They'll come, they'll interact with each other, and then they will separate precisely back into the same waves that you had before. And so the box-ball system is just a microscopic version of the KdV equation, and these standing waves, or solitary waves, are known as soliton solutions of the KdV equation. And essentially every solution of the KdV equation separates out into solitons, and so this really is a
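The throwing rule is short to implement. A minimal sketch of my own (1 = ball, 0 = empty box, with padding added on the right so every ball has somewhere to land):

```python
def bbs_step(state):
    """One time step of the Takahashi-Satsuma box-ball system: scan left to
    right and throw each not-yet-moved ball into the nearest empty box on
    its right."""
    state = list(state) + [0] * sum(state)   # room for balls leaving the right end
    moved = [False] * len(state)
    for i in range(len(state)):
        if state[i] == 1 and not moved[i]:
            j = i + 1
            while state[j] == 1:             # find the next available spot
                j += 1
            state[i], state[j] = 0, 1
            moved[j] = True
    return state

# A block of three balls stays together and travels with speed three.
state = [1, 1, 1, 0, 0, 0, 0, 0, 0]
state = bbs_step(state)
```

After one step the block of three has moved three boxes to the right, exactly the "speed equals size" behavior of a soliton.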
microscopic version of the KdV equation. So, well, this is good: we can study this, and we can derive explicit formulas for it. But it turns out we can actually do something a little bit more. We can introduce a carrier to describe the dynamics. And what we do is, rather than throwing the balls, I walk along, and every time I see a ball, I pick it up, and every time I see a space, I drop one off. And let's assume I have a fairly large basket, more than the number of balls I would ever carry, so I can always pick up a ball, and I progress like that. With this small change in perspective, it's fairly easy to see you get the same dynamics, but you can model this by basically quantum symmetries. So if I want to be very rigorous for maybe the physicists in here: I'm taking finite-dimensional representations of a Drinfeld–Jimbo quantum group of affine type. A bunch of fancy words; basically I'm doing quantum mechanics, and using stuff from quantum mechanics to describe how these waves move. And this then allows us many generalizations. Sorry, Leeran, I'm writing that with a Z. So because of these quantum symmetries, I can change things around and do things like introduce antiparticles, and describe now how variations of these waves move. And this is all controlled by something called R-matrices. So if you know what an R-matrix is, then you pretty much understand this statement. If you don't, well, I have vector spaces, I have maps that preserve these symmetries, and an R-matrix is basically such a map, and it defines everything here. To be really precise, I'm doing this at a combinatorial level, and it turns out it's just a matching rule: I pair things up, and things that are unpaired move into my carrier or out of my carrier, depending upon how things go. If I had more time, I could very easily describe how this procedure works, but I've more or less already told you: it just, you know, picks up a ball when it can and drops one
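The carrier version is even shorter to code. In this sketch of my own (again 1 = ball, 0 = empty box, with an effectively unbounded basket), walking once left to right reproduces the same time step as throwing the balls:

```python
def carrier_step(state):
    """One box-ball step via the carrier: walk left to right with a big
    basket, pick up every ball you see, and drop one ball at every empty
    box whenever the basket is non-empty."""
    load, out = 0, []
    for box in state:
        if box == 1:
            load += 1          # pick up the ball
            out.append(0)
        elif load > 0:
            load -= 1          # drop one off in the empty box
            out.append(1)
        else:
            out.append(0)
    return out + [1] * load    # empty the basket past the right end
```

For example, carrier_step([1, 1, 0, 1, 0, 0]) gives [0, 0, 1, 0, 1, 1], the same configuration the throwing rule produces, which is the "same dynamics" claim made above.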
off. So this actually gives us a huge amount of power. But there's also a relation with alignments of spins. Basically there are two types of spins: things point one way or they point the other way, and if we chain these two choices together, well, this is known as a spin chain, and then there's a bunch of physics involved. But just to keep things light: every position along my box-ball system had two states, it had a ball or it didn't have a ball, and that gives you the way to realize it with spin chains, with the local energy basically telling you about the interaction of two adjacent spins. And okay, I need to fuse some of the spins together, but essentially the local energy coming from the physics turns out to be related to the shifting of solitons from the interaction. And this shift of positions is a sign of the non-linearity of the system. So just to give you a brief example of what I mean by this: well, if I have two particles and then a particle like this, this jumps here, this jumps here, this jumps here, so here and here, like this. And so this is my interaction: I have this size-two soliton here and the size-one soliton here. And the size-one soliton, if it was moving just by itself, would be at this position, and the size-two soliton moving by itself would be at this position, but instead they've shifted by two from where they normally would be. And this is the non-linearity of this local interaction in the spin chain system. So we get some very nice connections there. And there are still a lot of open questions involved with this, where probably the biggest one is: how do we include supersymmetry into this? There are some very recent results that have made some progress on this for one particular case, but beyond that we have no idea, in large part because the analog of the symmetries, the quantum group and its representations, we don't really understand or even
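The phase shift in the talk's example can be measured in a little simulation (my own setup, with a hypothetical initial configuration): a size-2 soliton starts behind a size-1 soliton, overtakes it, and afterwards both sit two boxes away from where free motion would have put them.

```python
def carrier_step(state):
    """One box-ball time step via the carrier (1 = ball, 0 = empty box)."""
    load, out = 0, []
    for box in state:
        if box == 1:
            load += 1
            out.append(0)
        elif load > 0:
            load -= 1
            out.append(1)
        else:
            out.append(0)
    return out + [1] * load

def soliton_blocks(state):
    """(size, position of first ball) for each maximal block of consecutive balls."""
    blocks, i = [], 0
    while i < len(state):
        if state[i] == 1:
            j = i
            while j < len(state) and state[j] == 1:
                j += 1
            blocks.append((j - i, i))
            i = j
        else:
            i += 1
    return blocks

# Size-2 soliton at boxes 0-1, size-1 soliton at box 4, plenty of room ahead.
state = [1, 1, 0, 0, 1] + [0] * 40
for _ in range(4):
    state = carrier_step(state)
# Free motion would put the size-1 soliton at box 4 + 4 = 8 and the front
# ball of the size-2 soliton at 1 + 2*4 = 9; after the collision the size-1
# soliton sits at box 6 and the size-2 soliton's front ball at box 11.
blocks = soliton_blocks(state)
```

Both sizes survive the collision, and the slow soliton is pushed back by two while the fast one jumps ahead by two: the position shift that the local energies of the spin chain are recording.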
know about. And so within this one seemingly very simple question there's a lot that needs to be done, and there'll almost certainly be a lot of very good mathematics and very good physics that will come from this. And kind of the other major open question is: what happens when we make this system random? So now, in my last few minutes here, let me actually connect the two parts of my talk, because right now it seems like they've been completely separate from each other. It's actually this R-matrix here, this little tiny thing I glossed over. Well, it turns out that for TASEP, you can describe it in a lot of very interesting cases by R-matrices that satisfy a stochasticity condition; in other words, there's a certain sum of the entries involved. And so the way we study all of these, at least those of us in this representation-theoretic, probabilistic area of mathematics, is we use R-matrices to do everything on the TASEP side, and then we have this, which is also the box-ball system. But very few people have done probabilistic versions of the box-ball system, and some recent results have shown that if I introduce an extra probabilistic parameter, I get a very interesting type of object that's related to other affine Lie algebras. If I want to throw words around, I would say Hall–Littlewood polynomials and q-Whittaker polynomials. But it turns out you get other extremely interesting objects, with geometric connections. So yeah, I think that's all for my talk. Thank you very much for your attention. A question I had: earlier, at the end of your TASEP part, you had kind of open questions, or other directions, current studies, there we go, on a ring. The ASEP, or TASEP, maybe one of those, on a ring somehow relates to Macdonald polynomials and some kind of symmetric functions. Does the same thing happen here on a line? Like, should I understand, or be able to understand, why symmetric functions come in here, besides partitions? Well, so why you should
expect symmetric functions is the connection: it's the R-matrix, and you have the Yang–Baxter equation, and the Yang–Baxter equation gives you the symmetric functions. But if you now allow particles to hop back with some additional independent parameter, that's ASEP, that's removing the "totally" part. And there are very recent results of Ayyer, Mandelshtam, and Martin, maybe Sylvie was involved, I forget, that say that the stationary distribution is basically a Macdonald polynomial. And there are modified Macdonald polynomials by using the TAZRP, where now you change the model a little bit and you allow things to stack, and you break the exclusion-process part, but not so badly that things become completely wild. Are you suggesting that, using this R-matrix you point at, it's basically just looking at this as something to do with the Lie algebra, and therefore the symmetric function stuff should fall out? More basically, yeah. Anyone else have any questions? Thank you, Travis, again.