So, let me tell you about the work I've been doing over the past few years with Linan and Prakash. This work aims at generalizing bisimulation from discrete-time processes to continuous-time Markov processes, and we ended up with a family of behavioral equivalences. So the first question is: what do we mean by continuous time? The well-known labelled Markov processes (LMPs) and Markov chains are step-based processes. In the case of LMPs, you're in a certain state, you make a jump at every step, and you end up in a new state with a certain probability. In Markov chains, there is a clock associated to each action, and a global clock that synchronizes all those clocks. When one clock reaches zero, the process jumps and re-initializes the clock associated to that action. These are normally called continuous-time Markov chains; apart from that clock business, they are still step-based. So whenever I say Markov chain in this talk, what I really mean is continuous-time Markov chain. In both cases, then, everything is based on jumps happening at steps. In genuinely continuous time, however, we do not have next steps; instead we have two transition functions that correspond to this notion. The first one is maybe the more familiar one with respect to LMPs: it tells you the probability, in time t, of going from a state x to a set of states C. The other one, which is new here, is P(x, B), where B is a set of trajectories: it tells you the probability that a trajectory starting from x is in the set B. There are, however, key differences. You could think: okay, we're just going to make the time between steps very small, and the distance between states very small, and we'll end up with the same thing. It's not the case. Let me illustrate this with entry times. In discrete time, it's very simple to talk about entry times: here you see that in two steps, my process enters the blue area. In a Markov chain, it's quite similar.
I spend five seconds in the first state, then 25 seconds in the second state, and then I reach the blue area, so I need 30 seconds to reach it. In continuous time, it also seems fairly obvious: I start on the left, I eventually reach the blue area, and that tells me my entry time. In discrete time, it's also very convenient to define second entry times: you just wait for the step where you enter, then the step where you exit, and then the step where you re-enter. So in this example, it takes me five steps to re-enter the blue area: enter, exit, enter. Similarly, with a Markov chain, I spend a certain amount of time in each state; I just add up all those times, and if I'm not mistaken, it takes me one minute and five seconds to re-enter the blue area. In continuous time, in my example, at time t1 I enter the blue area, at time t2 I exit it, and at time t3 I re-enter it. So it seems fairly simple, right? Well, not so fast. There are trajectories for which this is harder. For instance, suppose now that the boundary is not part of the blue area, and at time t2 the trajectory is exactly on the boundary, so it exits and re-enters the blue area right away. What would be the second entry time of this process, of this trajectory, sorry? You could expect it to be t2; however, I would argue that it is t4. Indeed, what you would want to do in that case is to define a black area inside the blue area and a green area outside it, so that you have a thickened border between them, and then count the second entry time when the process goes from the green area to the black area, then from the black area to the green area, and then again into the black area. In this case that gives t4, and not t2. And finally, one more subtlety.
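The green-and-black recipe just described can be sketched as a small function on sampled trajectories. This is a minimal sketch, with the representation and function name of my choosing: a point on the thickened border counts as neither inside nor outside, so merely grazing the boundary does not register as an exit.

```python
def second_entry_time(traj, black, green):
    """traj: list of (time, position) samples along one trajectory.
    black(x): strictly inside the region of interest; green(x): strictly
    outside it.  Points on the thickened border in between count as
    neither, so grazing the boundary is not an exit.  Returns the time
    of the second green-to-black entry, or None if it never happens."""
    entries = 0
    in_black = False
    for t, x in traj:
        if black(x) and not in_black:
            in_black = True
            entries += 1
            if entries == 2:
                return t
        elif green(x):
            in_black = False  # only a genuine excursion resets the flag
    return None
```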
In this last example, you have a trajectory that reaches the blue line at time t1 and then wiggles around the blue line. And here you see that the second entry time would be t1, and the third one the same, and so on. This is not something that you would expect to see in an LMP or a Markov chain. So these examples give us a hint that continuous time doesn't behave quite like discrete time. The second term in the title of my talk is behavioral equivalences. For bisimulation in discrete time, we have two conditions. The first one is based on the fact that we have atomic propositions that distinguish particular areas of the state space: in order to be bisimilar, two states must satisfy the same atomic propositions. And then we have the very familiar induction condition on x and y: if they are bisimilar, then after a step, their probabilities of reaching R-closed sets, that is, unions of equivalence classes, are exactly the same. Let us look at an example: random walks. We have a state space, which is the integers, and a Markov kernel. That Markov kernel tells me that if I'm in the state n, with probability one-half I'll do minus one, and with probability one-half I'll do plus one. So in that example, when are two states n and m bisimilar? Well, zero is singled out, and so I look at the probabilities of reaching zero from n and from m within a given number of steps. If these are not equal, then n and m cannot be bisimilar. Indeed, if |n| is strictly less than |m| and I look at the probability of reaching zero in |n| steps, then from n I have a nonzero probability of reaching zero in that number of steps, while from m I have zero probability. So I end up with: n and m are bisimilar if and only if |n| = |m|. One still has to prove that this relation is a bisimulation, but it's fairly simple. So this is my first example, and we can now move on to continuous time. Why did I give you such an example?
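The reaching-zero argument above can be checked in a few lines of code. A minimal sketch (the function name is mine), computing the probability that the walk started at n visits zero within k steps:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_reach_zero(n, k):
    """Probability that the +/-1 simple random walk started at n
    visits 0 within k steps."""
    if n == 0:
        return 1.0           # already at 0
    if k == 0:
        return 0.0           # out of steps, never reached 0
    # condition on the first step: down with prob 1/2, up with prob 1/2
    return 0.5 * p_reach_zero(n - 1, k - 1) + 0.5 * p_reach_zero(n + 1, k - 1)
```

From 2, the walk can reach 0 in 2 steps; from 3 it cannot, which is exactly why 2 and 3 cannot be bisimilar, while n and -n always agree.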
Well, if I take the random walk and I make the time between each jump and the distance between each state increasingly small, then I'll end up with Brownian motion. So the state space is the real line. Zero still carries an atomic proposition, so it's still singled out. But the dynamics of the system is diffusion-like; in fact, it is exactly a diffusion. After time t, my probability of being in a certain set is given by this density function, which really tells me that my best estimate of the process is still x, but how certain I am of where the process is fades away with time. So, this is the reasoning that showed us that n and m can only be bisimilar if they have the same absolute value. Can we adapt it to Brownian motion? As I said, zero is still singled out, so the initiation condition works exactly as before; we have no reason to change it. But the question is: how do we replace steps? Well, there's a fairly simple candidate: we just let a time t go by. However, there is a big problem with this. If I start in any state x and I ask what my probability is of being at zero after time t, then this is zero: you have zero probability of being precisely at zero at time t starting from x. So we end up with just two equivalence classes: zero, and the non-zero reals. This is indeed a bisimulation if I generalize bisimulation that way, so let's just call this attempt not a success. We cannot just replace steps by time steps. But I could have phrased my question slightly differently. I could have asked: what is the probability of having reached zero between the kth and the (k+1)st step? Or: what is the probability of having reached zero at some point during k steps? And here you start to see entry times appearing in the picture. So the thing is, we need more than just a single time step; we actually need more information. We need trajectories. And we end up with a family of behavioral equivalences.
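The density referred to above is the Gaussian heat kernel, and the two questions (being at zero at time t versus having reached zero by time t) can be contrasted numerically. A minimal sketch, with function names of my choosing; the hitting-time formula uses the classical reflection principle, which is standard but not spelled out in the talk:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_in_interval(x, t, eps):
    """P(|X_t| < eps) for Brownian motion started at x: the Gaussian
    mass near 0, which vanishes as eps -> 0 (no point mass at 0)."""
    s = math.sqrt(t)
    return Phi((eps - x) / s) - Phi((-eps - x) / s)

def p_hit_zero_by(x, t):
    """P(X_s = 0 for some s <= t), via the reflection principle:
    strictly positive for every x, and it depends only on |x|."""
    return 2.0 * (1.0 - Phi(abs(x) / math.sqrt(t)))
```

The point-in-time probability collapses to zero, while the trajectory question (has zero been reached by time t?) stays positive and separates states by |x|, which is exactly why trajectories carry more information than time steps.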
Bisimulation, circled in red here, is the notion that we introduced in 2019. We thought at the time that it was the analog of discrete-time bisimulation. However, we realized later on that we didn't understand well enough what happens to trajectories, and now we realize that it's not quite right. A very exciting notion is also that of groups of symmetries, because most of our examples exploit groups of symmetries. These are really families of functions that leave the dynamics unchanged. So this is pretty exciting. I'm not going to give any details of the formalism in this talk; I would rather refer you to our paper, which is carefully detailed and technical. We used Feller-Dynkin processes, which are a class of continuous-time processes crafted to account for a wide range of continuous-time Markov processes. So if you remember discrete-time bisimulation, I had the initiation condition, which tells me that if x and y are bisimilar, they satisfy the same atomic propositions, and the induction condition, which is step-based. So where are we going to put trajectories in here? Well, if I put trajectories in the induction condition, I get the definition of bisimulation that we gave in 2019, whereas if I put trajectories in the initiation condition, I get temporal equivalence. So first, bisimulation. As I said, we do not change the initiation condition: if x and y are related, then they have the same observables, the same atomic propositions. The induction condition becomes a condition on the trajectories: if I have a set of trajectories that is closed (since it's a set of trajectories, we call it time-R-closed), then the probability that a trajectory starting from x is in that set is exactly the same as the probability that a trajectory starting from y is in that set. For temporal equivalence, we keep the induction condition from discrete time, except that instead of being a step it is a time-t step, but we change the initiation condition, which now amounts to trace equivalence.
So we take a set of states that is time-obs-closed: instead of being R-closed as before, it is a set whose states all satisfy the same observables, and x and y have to agree on that set. And we end up with: a bisimulation is a temporal equivalence, and if two states are temporally equivalent, then they are trace equivalent. Note that we can have several bisimulations and temporal equivalences, but there is just one trace equivalence. Let's illustrate that on an example. The first example is interesting because it gives a counterexample to "trace equivalence implies temporal equivalence", so that arrow cannot be reversed. It's the fork. Here you have the state space, and on that state space I have a deterministic drift process that goes from left to right. When the process reaches a fork, with probability one-half it goes up and with probability one-half it goes down. I have two atomic propositions: the blue one, which is at the end of the upper branch, and the black one, which is at the bottom of the lower branch. And here you see that the two states that I have marked are, intuitively, bisimilar. It makes sense, right? And indeed, if I replace the orange branch by the other orange branch, I won't see any difference, and similarly, if I exchange the purple branch with the other purple branch, I won't see any difference. However, you can see that the point right at the fork in the upper section cannot be bisimilar to any point in the lower section, and this shows that x0 and y0 cannot be bisimilar either. This is actually, if you have recognized it, a generalization of the vending machine example. And why is this interesting? Because x0 and y0 are not bisimilar and not temporally equivalent, but they are trace equivalent. Let us look at another interesting example, back to Brownian motion, but this time the atomic propositions are on the integers.
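As an aside, the discrete-time vending-machine pattern that this fork generalizes can be reproduced with two tiny labelled transition systems (the state names and helper functions are mine): the two systems have exactly the same traces, yet the early-forking system has an a-successor that cannot match the other system's post-a state, so they are not bisimilar.

```python
# X forks after the common prefix; Y forks immediately (vending machine).
X = {'x0': [('a', 'x1')], 'x1': [('b', 'x2'), ('c', 'x3')],
     'x2': [], 'x3': []}
Y = {'y0': [('a', 'y1'), ('a', 'y2')], 'y1': [('b', 'y3')],
     'y2': [('c', 'y4')], 'y3': [], 'y4': []}

def traces(lts, s):
    """All finite action sequences executable from state s."""
    out = {()}
    for a, t in lts[s]:
        out |= {(a,) + w for w in traces(lts, t)}
    return out

def enabled(lts, s):
    """Actions immediately available in state s."""
    return {a for a, _ in lts[s]}
```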
So you're not able to tell which integer you are at, but you are able to say: oh, it's an integer. And here, intuitively, you'll realize that what makes the difference is the distance to the closest integer: two states are bisimilar if and only if they have the same distance to the closest integer. Proving that this relation is a bisimulation is not that tricky; proving that it is the greatest bisimulation is trickier. However, there is something very interesting happening in the computations showing that this is a bisimulation. First, we realized that if we shift everything by an integer, the whole process is unchanged. Similarly, if we apply a symmetry around an integer, this also leaves the process unchanged. And likewise, if I apply a symmetry around a half-integer, the process isn't changed at all. This has led us to the definition of groups of symmetries. A group of symmetries is a group of homeomorphisms on the state space that commutes with the observables, so it does not change the atomic propositions satisfied by a state, and that leaves the dynamics of the system unchanged. So let me advertise our paper a bit. There are still a lot of open questions. For instance, we do not know whether temporal equivalence and bisimulation are equivalent or not; I personally believe they are equivalent for a certain class of processes. But in the paper, we do give a game interpretation for bisimulation and temporal equivalence, we provide a lot more examples, and, maybe more interestingly, we study the relation to discrete time. This is where we actually realized that temporal equivalence is the notion that best extends bisimulation from discrete time to continuous time. There are a number of open questions, the first of which is understanding sets of trajectories: it is really technical and hard. We're also interested in finding relevant metrics: bisimulation metrics, temporal equivalence metrics, and so on.
And also the philosophy of the clock: we think that we can say a lot just by looking at clocks, like when a process enters or exits something, insofar as we can define those notions. However, these are still open questions. Thank you for listening to this talk to the end. This is a very technical subject, as all our reviewers have noticed, but it is very subtle and very, very interesting. So I hope I have managed to share this with you, and that I have made you want to read our paper. Thanks a lot.
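To make the symmetries from the integer-labelled Brownian example concrete, here is a minimal sketch (the map names are mine): integer shifts and reflections about integers and half-integers each preserve the distance to the nearest integer, and hence the atomic proposition "is an integer".

```python
def dist_to_int(x):
    """Distance to the nearest integer: the proposed bisimulation invariant."""
    return abs(x - round(x))

def shift(k):
    """Translation by an integer k."""
    return lambda x: x + k

def reflect(c):
    """Reflection about c, for c an integer or a half-integer."""
    return lambda x: 2 * c - x
```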