Okay, so I'm going to talk about reproducible integration testing in Mir. I'm Sergey Fedorov from ConsensusLab, one of the developers of the Mir framework. Let me quickly recap for completeness: Mir is a framework for implementing distributed protocols, with a focus on consensus protocols. It is made to be modular and flexible. You can find it on GitHub, and it's part of ConsensusLab's Y3 project, which is also called "scalable consensus". The general architecture of Mir is event-centric: there are different modules that can produce and consume events, and the node operates by dispatching events from source modules to destination modules. As with any software, we would like to ensure its stability and correctness through proper testing, and with distributed protocols, especially consensus protocols, this is particularly difficult. A consensus protocol is a critical part of any blockchain or distributed system that uses one. So our goals for integration testing are to ensure stability against different kinds of failures, like crashes, network partitioning, and Byzantine failures, and also to catch implementation bugs. When we think about integration testing of a consensus protocol, it seems like a good idea to make such testing deterministic, so that if we get a test failure in CI, we can take some kind of random seed and reproduce that failure exactly and debug it step by step. To achieve that, we need to control concurrency within each node and between the nodes. We would also like to explore different schedules of execution, using pseudorandomness to run pseudorandom schedules, so that we can catch different bugs. What prevents us from achieving reproducibility is various inherent sources of non-determinism.
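To make the event-centric architecture concrete, here is a minimal Go sketch of the idea: modules consume events and may emit follow-up events, and the node routes each event to its destination module. All names here (`Event`, `Module`, `RunNode`, and so on) are illustrative assumptions for this sketch, not Mir's actual API.

```go
package main

import "fmt"

// Event is a hypothetical event routed between modules.
type Event struct {
	Dest    string // destination module ID
	Payload string
}

// Module is a hypothetical module interface: handle one event,
// possibly emitting follow-up events.
type Module interface {
	ApplyEvent(ev Event) []Event
}

// EchoModule replies to a peer module, or acts as a sink if replyTo is empty.
type EchoModule struct{ replyTo string }

func (m EchoModule) ApplyEvent(ev Event) []Event {
	if m.replyTo == "" {
		return nil // sink: consume the event, emit nothing
	}
	return []Event{{Dest: m.replyTo, Payload: "ack:" + ev.Payload}}
}

// RunNode dispatches events from a queue to their destination modules
// until the queue drains, returning the number of events processed.
func RunNode(modules map[string]Module, queue []Event) int {
	processed := 0
	for len(queue) > 0 {
		ev := queue[0]
		queue = queue[1:]
		processed++
		if m, ok := modules[ev.Dest]; ok {
			queue = append(queue, m.ApplyEvent(ev)...)
		}
	}
	return processed
}

func main() {
	modules := map[string]Module{
		"proto": EchoModule{replyTo: "net"},
		"net":   EchoModule{}, // sink
	}
	n := RunNode(modules, []Event{{Dest: "proto", Payload: "hello"}})
	fmt.Println("events processed:", n) // the initial event plus one reply
}
```

The point of the sketch is only that all inter-module interaction flows through one dispatch loop, which is the place where a test harness can take control.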
The sources of non-determinism in our case come mostly from communication between nodes over the network: unpredictable message delays, unreliable message delivery, or out-of-order message delivery. Besides communication between nodes, we also have non-determinism within a node, because event dispatching between modules happens concurrently. And each node has a local clock that can fire timeouts, which is also non-deterministic. When I talk about integration testing, I mean a scenario where all nodes run on the same machine, even within the same process. So they don't really have to communicate over the network; they can communicate through some stub. But nevertheless, when we run several nodes, we need to run them concurrently, and that concurrency gives us non-determinism that we want to control.

So how can we control that non-determinism? Our first attempt, and a step towards that goal, is a simulation engine that I recently implemented within the Mir framework. The core of this engine is the runtime. It simulates time: it maintains a logical clock and executes actions that are scheduled at specific points in the simulated time. The runtime knows at which instant each action should happen, and the action itself happens as if it were instantaneous. The runtime executes the next scheduled action on the simulated timeline, waits until that action is complete, and, once it knows there is nothing more to execute at the current instant, proceeds to the next scheduled action in virtual time. That action may be scheduled two hours later in virtual time, but in our simulation it happens immediately, as the next step.
This runtime has a notion of processes, which help us control concurrency. A process represents running actions within the runtime. Whenever an action bound to a process is executed, the process becomes runnable: it executes some code and then it should block on some operation or sleep in virtual time, so that the action is complete and the runtime can proceed to the next scheduled action. Processes can also spawn new active processes: one process can effectively fork, spawning a new process, and then both are active; until both go to sleep or block, the simulation runtime does not execute the next action. Processes can also synchronize and communicate with each other, which is achieved by means of channels, the mechanism for synchronizing and passing values between processes.

So how does this help us run Mir nodes in reproducible integration tests? We wrap all the modules of a Mir node: we run the unmodified node core and the unmodified module code, but we wrap the modules so that we gain control over module execution and event dispatching. In our setup, handling of each event happens as if it were instantaneous, but we can also introduce random delays in virtual time to simulate modules taking more time to execute. And since modules are not supposed to communicate with each other directly, only through the Mir node core, this works perfectly: we have full control over event dispatching and can schedule module execution with our simulation framework. The execution of modules is reflected as simulation processes, the processes I mentioned on the previous slide.
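The random virtual-time delays mentioned above are what let the harness explore different schedules while staying reproducible: the delays come from a seeded pseudorandom source, so the same seed replays the same schedule. A minimal illustration of that idea in Go (the function name `randomDelays` and the delay range are assumptions of this sketch, not Mir's code):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// randomDelays draws n pseudorandom virtual-time delays from a seeded
// source. The same seed always yields the same sequence of delays, which
// is what makes a failing CI run reproducible for step-by-step debugging.
func randomDelays(seed int64, n int) []time.Duration {
	rng := rand.New(rand.NewSource(seed))
	delays := make([]time.Duration, n)
	for i := range delays {
		// Pretend each event handler takes up to one second of virtual time.
		delays[i] = time.Duration(rng.Intn(1000)) * time.Millisecond
	}
	return delays
}

func main() {
	a := randomDelays(42, 3)
	b := randomDelays(42, 3)
	fmt.Println("same seed, same schedule:", fmt.Sprint(a) == fmt.Sprint(b))
}
```

Because the delays only exist in virtual time, varying them reorders the simulated schedule without making the test run any slower in real time.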
We track the events generated by the modules and the events consumed by the modules, so we know exactly what to expect from the core and how it dispatches events, and we control concurrency through the simulation runtime. We do have to provide two substitutes; earlier I said that modules are not modified, but there are two exceptions. One module is the transport, which implements communication between nodes. We provide a substitute for that module, called SimTransport, which implements communication over the simulation channels instead of a real network or normal Go channels, so that we can control messages passing between nodes as well as the events within each node. We also provide a substitute for the local timer, because obviously we cannot use the real system timer: if we run in simulated time, we have to use the simulated time. In Mir, modules are not supposed to use the system timer directly; instead, they emit events targeting a timer module, which can install timers and send back a specified event when the timer fires. Our substitute implementation of this timer module is connected to the simulation and uses the simulation runtime to implement timers in virtual time. The implementations of these Mir-specific parts are located in the deploytest package, while the simulation engine itself lives in a separate testing package; it is not really coupled with Mir and is fairly independent. The Mir-specific parts use the simulation runtime to do their work. So, this is the code, and I will just run two integration tests, number 4 and number 15; the difference is that number 4 uses real time and number 15 uses virtual time. The tests run, and at the end we will see the difference in how much time each takes.
Test number 4, with real time, takes 20 seconds, because it has to wait out all the timeouts and all the delays in real time; number 15, with simulated time, runs significantly faster. So it's good for CI, and it also does not depend on how fast the virtual machine running the test is. We had some failures because the virtual machine in CI was sometimes a bit too slow, so real time doesn't really work reliably there, whereas with simulated time it just doesn't matter: the time is simulated. That's all I wanted to show. Thank you.