be a register or a control circuit or any other electronic circuit. This may result in memory corruption, but since the component is not permanently damaged, the software may continue running smoothly until it reads from the part of memory that was corrupted. Evaluations of commonly used CPUs reveal that half of transient faults may cause calculation errors or program crashes. So either the circuit or the program must be protected. In our research, we chose to protect the program, and precisely, the program we are talking about here is the operating system. The approach we chose is to control the operating system's execution through a virtual machine. This leads to an architecture where a hypervisor runs directly on the hardware and allows us to control the guest OS. The hypervisor we are using is a combination of NOVA and the Genode software, with VirtualBox used to control the operating system's execution as a guest machine. The general approach is to rely on existing hardware detection and recovery mechanisms, like the machine-check architecture, memory-protection facilities, and error-correcting codes, and to protect in software what remains uncovered by those hardware mechanisms. For that, we rely on double execution, with comparison, of short processing elements, which last at most 200 microseconds and are executed atomically. The problem now is how we handle the coherency of the system, and the performance impact, while we redundantly execute sequences of instructions in the presence of external interrupts, which are asynchronous events. The software to be protected is divided at runtime into short processing elements. A processing element is, as we said, a sequence of CPU instructions that is delimited by, first, a maximum number of instructions; we detect this by triggering the performance-monitoring interrupt.
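The double-execution idea described above can be sketched as follows. This is a hypothetical simulation under assumed names (`run_processing_element`, `execute_redundantly`, a dictionary standing in for guest CPU state), not the authors' implementation: a short processing element is run twice from the same snapshot, and the two resulting states are compared to detect a transient fault before the result is committed.

```python
# Hedged sketch of double execution with comparison.
# All names and the toy "CPU state" are assumptions for illustration.
import copy

def run_processing_element(state):
    """A processing element: a short, deterministic sequence of work
    (bounded in length; in the real system the bound is enforced by a
    performance-monitoring interrupt)."""
    for reg in state["regs"]:
        state["regs"][reg] = state["regs"][reg] * 2 + 1
    state["ip"] += 200

def execute_redundantly(snapshot):
    """Execute the element twice from the same snapshot; commit only if
    both runs agree, otherwise report a (transient) fault."""
    first = copy.deepcopy(snapshot)
    second = copy.deepcopy(snapshot)
    run_processing_element(first)
    run_processing_element(second)
    if first != second:
        return None          # mismatch: transient fault detected, re-execute
    return first             # both runs agree: commit this state

state = {"regs": {"rax": 1, "rbx": 2}, "ip": 0}
committed = execute_redundantly(state)
```

Determinism of the element is what makes the comparison meaningful, which is exactly why asynchronous external interrupts must be kept out of the element, as discussed next.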
A processing element may also be delimited by a synchronous CPU exception, like page faults, general-protection faults, or invalid-TSS errors, this kind of exception. We also stop the processing element when there is an input/output instruction or a process switch, and later we will also rely on virtual-machine exits to stop the processing element. All these events trigger a processing-element stop, so that we can re-execute it, compare the two executions, and detect whether there were transient faults or not. Okay, so concerning the external interrupts, we distinguish two classes. The first class is performance-monitoring interrupts. This interrupt is used by the hardening module to stop the processing element when a specific number of instructions has been executed by the CPU, and it has to be handled immediately. The second class is all the other external interrupts. These interrupts cannot be part of the processing element, so their handling is delayed: when they arrive, we enqueue them and signal end-of-interrupt, and they stay queued until the processing element is finished. After committing the current processing element, we service all recorded interrupts in first-in, first-out order. Special care must be taken regarding real-time processing: if an interrupt requires immediate servicing and it is proven not to influence the processing element's execution, we may service it directly. But if it cannot satisfy these criteria, we face the case where we cannot handle this kind of interrupt immediately.
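The two interrupt classes and the FIFO delaying scheme described above can be sketched like this. The class names, the `InterruptManager` type, and its methods are assumptions for illustration, not the actual hypervisor code: a performance-monitoring interrupt is serviced immediately and ends the processing element, while every other interrupt is queued and serviced in first-in, first-out order only after the element is committed.

```python
# Hedged sketch of delayed interrupt handling (assumed names throughout).
from collections import deque

PMI = "performance-monitoring"   # the only class handled immediately

class InterruptManager:
    def __init__(self):
        self.pending = deque()   # FIFO queue of delayed interrupts
        self.serviced = []       # order in which interrupts were serviced

    def on_interrupt(self, vector):
        if vector == PMI:
            # PMI delimits the processing element and is handled at once.
            self.serviced.append(vector)
            return "stop-element"
        # Any other interrupt: enqueue, signal end-of-interrupt, defer.
        self.pending.append(vector)
        return "queued"

    def commit_element(self):
        """After the processing element is committed, service all queued
        interrupts in first-in, first-out order."""
        while self.pending:
            self.serviced.append(self.pending.popleft())

mgr = InterruptManager()
mgr.on_interrupt("timer")
mgr.on_interrupt("keyboard")
mgr.on_interrupt(PMI)    # ends the element
mgr.commit_element()     # now "timer", then "keyboard" are serviced
```

The delay an individual interrupt experiences under this scheme is at most the remaining length of the current processing element, which is consistent with the microsecond-scale delays reported in the evaluation below.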
We tested this approach on a Genode system with its microkernel NOVA. During the booting phase, which lasts about 600 CPU giga-cycles, approximately four minutes, with no idle loop, we noticed that 99% of all timer interrupts were delayed, as were all the other interrupts. It is a busy time, and all interrupts triggered during this booting phase were delayed. Timer interrupts were delayed by 18 microseconds on average; all other interrupts, like keyboard or other system interrupts, were delayed by about 40 to 50 microseconds on average. So this was the case while the system was booting. After the system had booted completely, we let it run with an idle user, and the overall result was that only 20% of timer interrupts were delayed, whereas none of the other interrupts were delayed. This makes us quite optimistic about the performance of dual execution of processing elements, because the current implementation is largely improvable, and a system spends most of its time after the booting phase has completed. So in conclusion, we may say that when redundantly executing sequences of instructions of a piece of software, interrupt management is a key aspect of performance. Here, we chose to delay interrupt servicing to ensure that the two executions are identical. We investigated how delaying interrupts impacts the system, and we found that, in general, this will not impede the dual execution of the operating system in normal operation. So that is the conclusion, and that is also the end of our presentation. Thank you for your kind attention. Any questions or suggestions are welcome. Thank you. So we may have some questions here. Yeah, so it sounds like you're executing in the presence of concurrency on an SMP... I didn't hear you correctly. Yeah, maybe it's a long question. Maybe it's better if we... Okay, can you hear me? Yes.
Okay, it sounds to me as if your processing element is a unit that only works on a single-processor system, right? It doesn't work on SMP. I didn't get the question. The question is: does this splitting of the program into processing elements work on an SMP system, where you have concurrency also due to other processors, not just interrupts? I was just going to repeat the question because the sound is not good here. Okay, let's go on. Is your concept working for SMP systems? Okay, so right now it is not yet running on SMP processors. We do not consider multiprocessing for now, so this will be part of future work; all multiprocessor configurations are left for future investigation. Did you get me? Yes, we did. Okay, another question? Oh, yeah, thank you. I think there are critical instruction sequences where you have interrupts disabled, say a small window of 20 instructions where interrupts are disabled and then enabled again... Okay, can you shorten this question? All right, maybe you come to the front and sort it out directly. Or maybe he sends you an email; it's another long question. Okay, other questions? Seems not to be the case, so you two get in contact. Okay. And yeah, thank you. Thank you too, and we'll see you next time. See you later. Thank you.