I think that did it. OK, so let's go ahead and get started. Welcome, everybody, to the Graduate Student Seminar Series. Just a couple of quick notes at the beginning: despite graduate students being in the title of the seminar, attendance is open to anybody, and those backgrounds vary a lot, so please be respectful to your friends and colleagues. And please do ask questions during the seminar. The primary goal is for this to be a positive learning experience for everyone here. In the spirit of a usual in-person seminar, if you have any questions or clarifications or anything like that, it's perhaps easiest to just unmute yourself and politely ask out loud. You can also type it in the chat; I'll keep an eye on it and try to relay any questions to the speaker. If you have a question that might involve a longer or more involved answer, we'll also leave a little bit of time at the end if you'd like to ask then. So with that being said, it's my great pleasure to introduce Kimberly Ayers at Cal State San Marcos, and she'll be telling us about stochastic logistic maps and invariant distributions. OK, great. Thanks, Zach. We're a small crowd today, so like Zach said, if you have any questions, feel free to just interrupt me. I have a hard time seeing the chat when I'm screen sharing, so Zach, if you don't mind monitoring the chat and letting me know if there are any questions there. There's a little bit of jargon and stuff that I don't want you to worry too much about, but if you've had some analysis background, I'm hoping this talk should be mostly accessible. Like I said, please feel free to interrupt with any questions at any time. OK, so let me change slides. There we go.
I've started including land acknowledgments in the talks that I've been giving recently, so I just want to open up with a quick one. Right now, I'm currently based in Helena, Montana. Helena, Montana sits on the native land of the Niitsítapi, also known as the Blackfeet people. But I will be starting a new position at Cal State San Marcos in the fall, so that's my current affiliation. That university sits on the land of the Luiseño and the Kumeyaay people. And I know that this is the graduate seminar for the University of Georgia in Athens, so I looked up Athens, Georgia: it's on the native land of the Cherokee, the Yuchi, and the Muscogee peoples. If you are curious about whose native land you are sitting on, there's a website, a land map, and you can find that at native-land.ca. OK, so to start us off: what I'm talking about today are called discrete dynamical systems. A discrete dynamical system is a pair (f, X), where f is a function whose domain and range are the same space X. Typically, at least in my case, I consider functions that have a compact domain and range, so for this talk you can think of these spaces as being compact. Compactness is nice because it gets you limits and convergence, and you know that things can't run off to infinity anywhere, basically, which is kind of nice. Since f has the same domain and range, I can take compositions of f with itself: I can look at successive iterations, f of f of f of f, however many times. Dynamicists like myself are then interested in looking at what I call orbits. These are sequences where, if I have an initial starting value somewhere in that domain X, the orbit of that point is basically what happens if I just hit that point with f over and over again. So I'll look at f of x naught, and then f of f of x naught, and then f of f of f of x naught, and so on and so forth.
So we're creating these sequences. Once I have sequences — and there are going to be maybe two instances of audience participation here — if I hand you a sequence, what are some questions we like to ask about sequences? Convergence. Yeah, right. Do these sequences converge? Any other things that we ask about sequences? If not convergence, boundedness? Boundedness, sure, we can ask about boundedness. Since I am considering compact spaces, we do have boundedness built in already, since we're looking at sequences in compact spaces. But Bolzano–Weierstrass also gives us that bounded sequences have to have convergent subsequences, right? So what types of convergent subsequences do I have if my sequence itself is not convergent? Or do these sequences ever repeat themselves? Do I ever get to a point where things become periodic, where I see things repeating themselves over and over again? And then what happens if a sequence does neither of those things? If it's neither convergent nor periodic, what does that mean about that sequence? What does that tell me about the behavior of this system? OK, so as a real quick example, I'm going to look at what's called the doubling map. This is a function that maps numbers from the unit interval [0, 1] to itself. Basically, all I'm going to do is take a number between 0 and 1 and multiply it by 2. If what I get is bigger than 1, I just take it mod 1, right? Just subtract 1 off. So for instance, 0.75 times 2 is 1.5; 1.5 is bigger than 1, so I subtract 1 off, and I get that 0.75 maps to 0.5. Here — they're both blue — the dark blue is the graph of the doubling map, and the light blue is the identity function y = x. And I can use this to build what are called cobweb maps to look at what orbits look like.
This allows me to visualize orbits. So let's do a quick example together. Let's say I start with a starting value of 0.41 — that's the little red dot down there, if you can see it. Where is 0.41 going to map to under the doubling map? 0.82? Yeah, 0.82, right? OK, we know how functions work: I look up where the y value is 0.82. And now 0.82 is going to be my new input. So what happens to 0.82 under the doubling map? 0.64. 0.64, right? 0.82 times 2 is 1.64, so I subtract off that 1. So what I can do is translate horizontally over to that y = x line — this is where my x and my y are both 0.82 — and now I can look down vertically at where I am, and I get 0.64. Go over horizontally to the y = x line: 0.64 is going to map to 0.28. Translate over horizontally: 0.28 will map to 0.56, 0.56 will map to 0.12, 0.12 maps to 0.24, then 0.48, then 0.96. You get the picture here, right? I'm looking at successive iterates from that one starting value. And eventually, if I were to do this many, many times, I'd get a picture that starts to look like this. Hopefully it's now a little self-explanatory why we call these things cobweb maps — it kind of looks like a spider has stitched up a little cobweb in between there. OK, so there's the sequence that I started building. Obviously, this is only the first handful of terms; we could have done this arbitrarily many times. What do you all think: is the sequence ever going to repeat eventually? It should. What is that? Well, your answer is always going to have two decimal places, so at some point you're going to have to come back to something you landed on before. Yeah, exactly. I'm never going to see more than two decimal places through this process, right? I can't multiply something with two decimal places by two and get something with three decimal places, right?
So there are only finitely many things it is possible for me to reach, right? There are only 100 — or really 50, rather, because after 0.41 these all have to end in an even digit. So the pigeonhole principle says that once I've gone around more than 100 or so times, eventually I have to hit something I've seen before. And once I hit something I've seen before, I know exactly what's going to happen after that, because then it's just going to follow that same pattern over and over and over again, right? OK, so this is a really basic example of a discrete dynamical system. In general, as a dynamicist, what I'm interested in is classifying systems. What sorts of systems are similar to each other? What sorts of systems are qualitatively different? I want to be able to understand a system completely, by which I mean I want to say exactly what is going to happen to points within [0, 1]. I want to be able to make a complete statement about those orbits, and I want to be able to quantify that behavior in some way. Does it seem, quote unquote, random, or predictable, or patterned in any way? OK, but today the focus of my talk is on a very special discrete dynamical system known as the logistic map. The logistic map, again, takes the unit interval [0, 1] to itself, and it's given by the function lambda times x times (1 minus x), where lambda is going to be some value between 1 and 4. Lambda is just a fixed parameter; you can pick what you like, as long as it's a value between 1 and 4. You can let lambda be less than 1, but that's not a terribly interesting system, because everything eventually just converges to 0 in that case. If you let lambda be greater than 4 — does anyone have any ideas why it would be a problem if I let lambda be greater than 4? I'll give you a hint: this picture here is with lambda at 3.9. What do you notice about the range of that function?
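The pigeonhole argument above is easy to check in a few lines of Python. This sketch is not from the talk — the function and variable names are mine — and it tracks the orbit of 0.41 in integer hundredths so the two-decimal-place arithmetic stays exact:

```python
# Iterate the doubling map x -> 2x mod 1 on x0 = 0.41,
# representing each value as an integer number of hundredths
# (41 means 0.41) so no floating-point drift creeps in.
def doubling_map_orbit(x0_hundredths, steps):
    orbit = [x0_hundredths]
    for _ in range(steps):
        orbit.append((orbit[-1] * 2) % 100)  # doubling, mod 1
    return orbit

orbit = doubling_map_orbit(41, 60)  # 0.41, 0.82, 0.64, 0.28, ...

# Pigeonhole: at most 100 possible values, so a repeat must occur.
seen = {}
for i, v in enumerate(orbit):
    if v in seen:
        first, second = seen[v], i
        break
    seen[v] = i

print(first, second, orbit[first])  # the first repeated value and where
```

Running this, the orbit first revisits a value (0.64) after entering a cycle, exactly as the pigeonhole principle predicts.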
If lambda is greater than 4, then there would be values of x naught mapped outside the range. Right, exactly. If lambda is bigger than 4, my parabola here is going to go above y = 1, so I'm mapping the unit interval [0, 1] to numbers outside the unit interval [0, 1], which would be a problem. That's why I'm taking lambda to be a maximum value of 4. You can consider situations with lambda greater than 4; you basically just have to remove the interval that gets mapped to something bigger than 1 each time. And by removing subintervals, what you end up doing is you actually construct what we call an invariant Cantor set — the Cantor set is obtained by removing little subintervals each time. So you end up having dynamics not on the whole interval [0, 1], but on a Cantor set, which is cool in and of itself, but not the focus of what I'm doing today. OK, so when lambda takes values between 1 and 4, this map is always going to have two fixed points. By a fixed point, what I mean is a value of x such that x = f(x). So once I hit it with f, I get back the same point. And I can see visually where those things are by looking at where my function — the green is my logistic map there — intersects the identity map. You can see this is always going to happen at 0, and I'm also going to get an intersection here at 1 minus 1 over lambda. That's going to be another fixed point there. So I always have those two fixed points. OK, if I take values of lambda between 1 and 3 — including 1, but not including 3 — no matter what value I start with, my cobweb map, my sequence, is always going to end up converging to that non-zero fixed point. So if I look at the sequence of that orbit, I'm going to get a sequence that converges to whatever 1 minus 1 over lambda is. In this particular case, I've picked lambda of 1.95.
And I'll see that for this particular starting value, my cobweb, my sequence, converges to that non-zero fixed point. And that's going to happen no matter what starting point you pick between 0 and 1. At 3, something interesting happens. Instead of converging to the fixed point, we actually end up bouncing around the fixed point in what we call an attracting period 2 orbit. It's sort of small, but you can hopefully see that this cobweb diagram is converging to this — it's probably not quite a square — that contains that fixed point. So I don't get arbitrarily close to that fixed point; rather, I get arbitrarily close to this period 2 orbit. This is with lambda equal to 3, and again that same starting value as before of 0.86. One of the reasons the logistic map is so famous is that it's an example of what we call a chaotic system. So if I keep increasing — here this is with lambda equal to 3 — if I keep bumping up that parameter value, looking at 3.1, 3.2, 3.3, I start seeing new attracting orbits show up. Eventually, I'm going to see an attracting period 4 orbit, and then an attracting period 8 orbit, and then an attracting period 16 orbit as I keep increasing that value of lambda. That's what we call a period doubling cascade, because the periods of those attracting periodic orbits are doubling each time. This image here is what we call a bifurcation diagram, and it basically shows you what the stability of my system is going to be. Here you see my attracting fixed point — this is 1 minus 1 over lambda. Ignore the fact that that's an r down there; think of r as lambda. So I have one attracting fixed point up until I get to my lambda value of 3, and then I see a split, because that split represents that now my limit behavior is approaching a period 2 orbit. So now I see two branches here.
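The two behaviors just described — convergence to the fixed point 1 − 1/λ for λ below 3, and an attracting period 2 orbit past the bifurcation — can be sketched numerically. This is my own illustration, not code from the talk; I use λ = 3.2 for the period 2 case (rather than exactly 3, where convergence is very slow) so the alternating pair is numerically clean:

```python
# Iterate the logistic map f(x) = lam * x * (1 - x).
def logistic_orbit(lam, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(lam * xs[-1] * (1 - xs[-1]))
    return xs

# For 1 <= lam < 3, orbits converge to the fixed point 1 - 1/lam.
xs = logistic_orbit(1.95, 0.86, 200)
fixed_point = 1 - 1 / 1.95

# Just past lam = 3 (here 3.2, my choice), orbits settle onto an
# attracting period 2 orbit: iterates alternate between two values.
ys = logistic_orbit(3.2, 0.86, 2000)
print(xs[-1], ys[-2], ys[-1])
```

After enough iterations, `xs[-1]` sits on the fixed point while `ys` alternates between two distinct values straddling it — the two branches visible in the bifurcation diagram.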
And that represents the period 2 orbit that shows up. And then at r somewhere between 3.4 and 3.5, we get a split again, so now I'm going to see an attracting period 4 orbit. And you can see it's obviously going to happen more and more quickly, but eventually I'll see a period 8 orbit, a period 16 orbit — again, that's the period doubling cascade that I talked about. And for almost all lambda values bigger than this 3.56 number, we see what is called chaotic behavior. Now, I haven't told you what chaotic behavior is yet, and I will in a minute, but I do want to point out something very special here. You might notice, slightly bigger than 3.8, this sort of white band. Can anyone tell what the period of the attracting periodic orbit is there? I realize it's small. Can anyone tell? Is it 3? It's 3, yeah. And for reasons I won't get into today, 3 is actually a really, really special number in one-dimensional dynamics. Period 3 orbits are really rare, and they're really special. So it's so cool that this logistic map demonstrates that attracting period 3 orbit, because that's basically the rarest period that you can see. Again, that's a little beyond the scope of this talk — if you're interested, you can ask me about it later — but I just wanted to point that out, because that is very, very cool. OK, so what do I mean by chaotic behavior? Rather than my orbits being attracted to either fixed points or periodic orbits, what I'm going to see, basically for almost any starting value that I can pick between 0 and 1, is that the orbit just kind of fills up that entire interval between 0 and 1. So the orbit — the sequence — is actually going to be a dense set inside [0, 1].
So I see this orbit kind of filling up the entire space, and it doesn't converge to one fixed point or to one periodic orbit, but rather just goes all over the place and fills up the entire space. I've been engaging in a little bit of discourse about this on Twitter, but the notion of chaos mathematically is a little bit chaotic — pun completely intended — because there are many, many different definitions of chaos, and depending on who you ask, you might get different ones. But this dense orbit property — we call this topological transitivity — is one of the hallmark things that you'll see showing up in most of those definitions. OK, so before, I was talking about what I call the deterministic logistic map. That's where lambda takes a fixed value between 1 and 4 and doesn't change: with each of my iterates, lambda is always going to be 3.87 or whatever value of lambda I'm looking at. Now I want to introduce what's called the stochastic logistic map. Instead of looking at the logistic map with a fixed constant value of lambda, what if, with each successive iteration, lambda took on a new value according to some kind of probability distribution? Here I say — and I should probably change that — according to a uniform distribution on a subinterval of [1, 4], but there's no reason that it would have to be uniform. You just want some probability distribution of lambda on some subinterval of [1, 4]. And ideally, for the properties that I've been studying, you would want that PDF, that probability density function, to be absolutely continuous. So I'm not looking at delta distributions, for the most part, just because absolute continuity makes a lot of the math that we end up doing later on much nicer.
So this is a cobweb diagram that I made where lambda is taking on random values, I think according to a uniform distribution on [1, 4]. What you see here is that the blue parabola is the logistic map when lambda is equal to 1 — and I'm questioning whether that's actually the case; I feel like it's not. But the red here is with lambda equal to — I guess that would probably be 2, right? So I'm taking different values of lambda with each generation. So I can no longer have convergence to a fixed point or a periodic orbit, because those things don't exist deterministically anymore, right? I can't have a single fixed point anymore, because that fixed point changed values with each lambda, and now my lambda is changing values each time. So instead, rather than looking at initial starting values of x between 0 and 1, I'm going to consider an initial starting probability density. If I have a probability distribution for my initial value x naught, taking values between 0 and 1, what's the probability distribution of my next iterate, x one, once I've hit it with this stochastic logistic map one time — and of further iterates? So now, instead of orbits that consist of single points between 0 and 1, the entries of the sequence I'm building are probability density functions supported on the interval [0, 1]. And now, instead of finding periodic orbits or fixed points, I'm interested in finding invariant distributions. If I have a distribution of x between 0 and 1, which distributions remain unchanged when I hit them with this stochastic logistic map one time? And what about the stability of these invariant distributions?
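A single sample path of the stochastic logistic map is simple to simulate. The sketch below is my own (not from the talk): a fresh lambda is drawn each generation, uniformly on [1, 4] as in the cobweb picture just described, and because λ ≤ 4 the orbit can never escape the unit interval:

```python
import random

# One random orbit of the stochastic logistic map: each iterate uses
# a freshly drawn lambda from a uniform distribution on [lam_low, lam_high].
def stochastic_logistic_orbit(x0, steps, lam_low=1.0, lam_high=4.0, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    xs = [x0]
    for _ in range(steps):
        lam = rng.uniform(lam_low, lam_high)  # new parameter each generation
        xs.append(lam * xs[-1] * (1 - xs[-1]))
    return xs

orbit = stochastic_logistic_orbit(0.86, 1000)
print(min(orbit), max(orbit))
```

Since the maximum of λx(1 − x) on [0, 1] is λ/4 ≤ 1, every iterate stays in [0, 1] — which is exactly why λ was capped at 4 earlier.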
Do other distributions converge towards this one invariant distribution that I'm seeing? OK, so a little bit of notation here — and I don't want you to worry too much if some of this goes beyond your background; it's basically just the rigorous framework for how we study these things. I'm going to consider K to be a subinterval of the interval [1, 4]. Since I'm studying these things measure-theoretically, I need measures and such, so I'm looking at the Borel sigma algebra of subsets of K — that's the sigma algebra generated by the open and closed intervals. I'm going to have some kind of probability measure on my Borel subsets. And now my logistic map is a function on a product space, right? Because I have two inputs now: what value is my parameter lambda taking, and what value of x between 0 and 1 am I looking at? So this is now a map from K cross [0, 1] into [0, 1], and you can denote it either of these ways. But the point is that I now have to consider two inputs: my lambda value and my x value. OK, so some more notation. With Sigma here, basically what I'm going to do is look at the sequences of possible lambda values. I'm going to consider the universe of all the different sequences that the lambdas could take over my iterations: what are all the possible values lambda could take the first time, the second time around, then the third and the fourth? And I'm looking at those possible sequences there. And again, I need some sort of topological understanding — now I have a space made up of sequences, so I need to be able to look at things like measures on that thing.
So what we're going to do is look at the sigma algebra generated by products of my Borel sets on K, where only finitely many of the factors in the product are proper subsets of K — the others are all the whole set K itself. We call those things cylinder sets. So what I'm doing is building a random dynamical system. Like I said, I now have a measurable product space: I have sigma algebras on this product. And I'm going to look at what's called a skew product map. This here is the formal definition of a skew product map. It looks really intimidating when you first see it, but basically all I'm doing is considering two different spaces: I have the sequence of x values that I'm looking at, and I have the sequence of lambda values. And my lambdas don't pay attention to what's happening to x — the lambdas just get generated each time according to whatever distribution I'm using, like I said, some kind of distribution on K, a subinterval of [1, 4], not [0, 1]. So every time, we generate a new lambda value, and that's totally independent of what's happening in my unit interval [0, 1] with my x values. However, each subsequent x value needs to get information from the lambdas before it can tell you what the next value of x is going to be. So basically, all this is saying is that I have two things running concurrently, one of which is dependent on the other, while that other one is happening independently of anything else. And this here is just formalizing the dynamics of the lambdas. I have a sequence of my lambda values — basically the list of the different values that lambda is going to take as I keep iterating this thing — and now I just take what's called the shift map.
Basically, I just lop off the first entry, and I still have a new infinite sequence. We call it the shift map because you can think of it as taking the whole sequence and shifting it over by one entry, so that first entry just falls off the end. So either way you want to think about it — not to be morbid, but beheading the sequence and getting rid of that first entry, or shifting the whole thing over and having that first entry fall off. OK, so like I said, we have this skew product map now. I have these two things running at the same time: my values in [0, 1] and my sequences of lambdas, running together, where the things in [0, 1] need to look at what's happening to the lambdas each time. That's why in the first coordinate here you see that dependence on lambda, whereas in the second coordinate there's no dependence on the x values at all — that thing just happens completely on its own. So this is what we call a Markov process, which is a type of stochastic process. Like I said, we're interested in probability density functions now: I'm interested in taking some probability density, hitting it with my stochastic logistic map, and seeing what the new distribution looks like after one iterate, after two iterates, after three iterates. So again, quick notation here. If I have a starting value of x and some Borel subset of my unit interval [0, 1], we can ask questions like: what's the probability that if I hit x with the stochastic logistic map, my next iterate is going to be inside that particular set that I'm looking at? So for instance, if my starting x value is 0.8 and the interval G that I'm interested in is, I don't know, the open interval from 0 to 0.1, right?
I can ask: what's the probability that when I hit 0.8 with the stochastic logistic map, the next iterate is somewhere between 0 and 0.1? OK, I give this definition because I want to motivate a new definition of something called Harris irreducibility. Again, you can read the formal rigorous definition for yourself if you'd like, but the way that I really think about Harris irreducibility is this: we call the process Harris irreducible if, no matter where your starting value is between 0 and 1, and no matter what set of positive measure you consider — a subset of [0, 1] of positive measure, and I should say a Borel set — eventually the orbit has a positive probability of getting into that set at some point. So even if I start way over close to 0 — say I start at 0.001 — eventually that orbit has a positive probability of being mapped into, I don't know, the interval from 0.9 to 0.91 or something, because that interval has positive measure. And so if my system is Harris irreducible, I know that eventually it has to get mapped in there with positive probability at some point. So basically, everything is getting all mixed up here: I can start way over on the right and I know that eventually things have to move over to the left, and I can start way over on the left and things eventually have to move way over to the right. So Harris irreducibility is cool in and of itself. To me, it feels a little bit like the stochastic version of chaos. Remember, chaos is when we saw these dense orbits that fill up the entire space. Here, I have stochasticity in play, so I can't talk about dense orbits per se, but I can talk about this kind of mixing quality, where everything gets mixed up everywhere else.
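The example just given — starting at 0.001 and asking whether the orbit can reach the interval (0.9, 0.91) — is easy to probe empirically. This is my own illustrative Monte Carlo check, not part of the talk and certainly not a proof; it just estimates the probability of visiting the target set within a fixed number of steps:

```python
import random

# Does a random orbit started at x0 visit the target interval
# within `steps` iterations of the stochastic logistic map?
def hits_target(x0, target, steps, rng):
    lo, hi = target
    x = x0
    for _ in range(steps):
        x = rng.uniform(1.0, 4.0) * x * (1 - x)  # fresh lambda each step
        if lo < x < hi:
            return True
    return False

rng = random.Random(2)
trials = 2000
hits = sum(hits_target(0.001, (0.9, 0.91), 200, rng) for _ in range(trials))
print(hits / trials)  # empirical probability of reaching the set
```

A positive fraction of trials reach the set, consistent with (though of course not proving) the Harris irreducibility property being described.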
It gets really blended together, like a nice smoothie or something. So Harris irreducibility is cool in and of itself, but we actually care about it for another reason, and the reason is this theorem, which basically says that if a Markov process is Harris irreducible and has some invariant probability measure — and I say invariant under the Perron–Frobenius operator; I highlighted that because I haven't defined what that is yet — then that invariant probability is the only one. It's unique. It's the unique fixed point of this system. And furthermore, other probability measures converge to that invariant distribution. And I should say this norm here doesn't really matter much — I think we've been taking the sup norm for the most part — but basically, however you define the norm in a way that makes sense, you see convergence to that unique probability measure. So no matter what starting distribution of x values you take, eventually it converges to this one unique probability measure. That's what I meant earlier when I said I want to be able to understand the system completely: I want to be able to say that no matter what starting distribution you give me, I know eventually this thing is going to converge to this invariant distribution here. And that distribution tells me information about the underlying dynamics: if it's spread out all over the place, or if I see pockets appearing, that tells me about the dynamics there. OK, so I mentioned I was going to define the Perron–Frobenius operator. Again, I don't want you to get too hung up on it — this is the formal definition.
But basically, it's just an operator on my probability distributions: if I have a starting distribution and I hit it with my stochastic logistic map, what does the distribution of x values look like in the next iterate? That's the formal definition there. OK, so my collaborator and I have numerically figured out that this thing converges to some kind of invariant density. We can see that it's happening numerically: let's say I start with a uniform distribution, so my x values are uniformly taking values between 0 and 1. My first iteration is the light green one here, and then I have other ones — the third iteration, the ninth iteration, the twelfth iteration. Eventually I get convergence to what we call a shark fin density, because it kind of looks like a shark fin. So numerically, we can see that there is some kind of invariant distribution that appears to be stable, or asymptotically stable — other distributions appear to converge to this thing. We're still working out the details of proving this. We have bits and pieces of a proof that the stochastic logistic map is Harris irreducible — which would give us the uniqueness of this invariant distribution and its stability as well — but that's still a work in progress. The next question is: what is the actual function giving this invariant distribution? And here I have to credit my collaborator, Ami Radunskaya, because I have no idea where she pulled this out from. The red here is what, from our simulation, we found the invariant density to be. The blue is her guessed invariant density, given by this function here. I know she did some kind of von Neumann analysis to get this — frankly, I don't really understand that at all.
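The numerical experiment just described — pushing a uniform starting distribution forward and watching the histograms settle — can be sketched as below. This is my reconstruction of the kind of simulation described (the speaker confirms in the Q&A that the plots come from histograms of sampled orbits); the ensemble size, bin count, and lambda range are my assumptions:

```python
import random

# Push an ensemble of points, started uniformly on [0, 1], through the
# stochastic logistic map for n_iter generations, then histogram the
# result as an approximation of the density after n_iter iterates.
def evolve_ensemble(n_points=50_000, n_iter=12, n_bins=50, seed=1):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n_points)]  # uniform start
    for _ in range(n_iter):
        # Each point gets its own freshly drawn lambda each generation.
        xs = [rng.uniform(1.0, 4.0) * x * (1 - x) for x in xs]
    counts = [0] * n_bins
    for x in xs:
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    # Normalize so the histogram integrates to 1 over [0, 1].
    return [c * n_bins / n_points for c in counts]

density = evolve_ensemble()
```

Plotting `density` for increasing `n_iter` would reproduce the qualitative picture from the talk: the successive histograms stop changing and settle into the "shark fin" shape.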
So I just have to give her full credit for that. But this is her best guess at what that function actually is. It's possible it doesn't have any really nice closed form, but we're trying, and that's about as close as she's gotten. So some more open problems — these are things that my collaborators and I are currently working on. We can talk about things like the expected value: what happens to the expected value of the iterates when noise is added? Expected value still makes sense in the deterministic setting, but for some other maps, we've seen that when you add the noise, the stochasticity, the expected value actually changes — it becomes lower or higher than in the deterministic case. And also: how can I stabilize distributions that correspond to underlying periodic orbits? You can see, for instance, that the shark fin is supported on essentially that entire interval [0, 1]. We would really love to see a distribution that is supported on little subintervals. Let's say, for instance, I had a distribution supported on three little subintervals of [0, 1]. What that would tell me is that it's kind of like I'm converging to a period 3 orbit, because instead of my x values in the limit being able to take values anywhere between 0 and 1, they have to take values in these three little neighborhoods. And if they rotate between those, that's what we call a quasi-periodic orbit. So are there ways for me to find those periodic or quasi-periodic orbits? Those are some problems, some projects, that we've been working on. So thank you. Let's go ahead and thank Kimberly for a wonderful talk.
Thank you so much. Thanks, Kimberly. Everyone's muted, but there's like a clap emoji in the chat somewhere. Does anybody have any questions? One thing I was kind of wondering: there was this plot a while back where you started with the uniform distribution and got convergence to sort of the shark fin thing. How do you generate all of those intermediate distributions? I mean, they're not really density functions — these are really generated from histograms, right? So I'm sampling things from a uniform distribution and then looking at the histogram of what's happening to their orbits, keeping track each time I hit them with the stochastic logistic map. So yeah, these are really coming from a numerical simulation here. This is not, unfortunately, coming from — I guess you could try to solve the integral equation that you would get by looking at what happens when you apply the Perron–Frobenius operator, but that's a hard problem. Is this like you're fixing one lambda for a while, running a bunch of numerical simulations with that lambda fixed, and then moving on to the next? So we're not fixing lambda. We're still pulling lambdas from some kind of density on that interval [1, 4]. I think these were generated by taking lambdas from a uniform distribution on [1, 4]: each time, I get a newly generated lambda according to that distribution, use the logistic map with that parameter value, hit x with it, and then keep track as you go and do this many, many times. Would you all expect the same sort of behavior if you take different distributions for your lambda — like instead of uniform, some Gaussian mixture concentrated on...
Yeah, I believe so, because a lot of the results out there for this Harris irreducibility stuff require that your distribution of lambdas be absolutely continuous. So I believe in those cases, as long as your probability density function is absolutely continuous, you should still see these results. It would be a different question if you had some kind of discrete or delta distribution for the lambdas. Let's say, for instance, lambda took the value 3 with probability one half and the value 4 with probability one half — that would be a slightly different situation. Anybody have any other questions? So let's go ahead and thank Kimberly once more. All right, thanks so much, Zach, for inviting me and having me. This has been fun. Yeah, this was wonderful. Everyone's like — let me stop recording here. There we go.