So, hello everyone. This is a short update on G1. G1 stands for Green 1; don't ask me why. Sorry, sorry, there is some latency. It's Green because on this figure, which is the mother of all figures, this part is green. And what we are interested in here is execution. There are maybe two things in this part, perhaps more. One is to have parallel execution of actors. The other thing, which Marco mentioned to me today, is arbitrary computation over IPFS: Bacalhau, or the Compute-over-Data project, whichever name you prefer. So we have two milestones in this part of Green. Milestone 1, due January 15th, is a review of parallelization technology for virtual machines, for smart contract execution models. And a second milestone, from mid-June to mid-July, is about applying the architecture to these systems. Before continuing, let me say why I talk about the parallelization of smart contract execution models: because these execution models have been studied before, but not in the context of smart contracts. [Question:] But are you following the standardization proposals around parallelization? — No, I'm not going to work at the level of the wasm VM design; you'll see what I was going to say, but I will always stay at a higher level. That said, I played a bit with wasm to understand how it works. There is something close to pthreads and so on; I used that and it actually works. But from what I understood, it's not in the core wasm specification.
It's something that sits next to the specification, that can be integrated within a wasm VM and so on. But at the time I looked at wasm VMs, the choice had not yet been made by the engineering team, and they were considering different wasm engines. One was Wasmer, another one was eWASM, and there was a third one I don't remember. So the choice of wasm is, I think, because wasm is a standard that will be widely deployed. It's, let's say, the common language they want to rely on: higher-level languages will be compiled into actors, and they might also compile other languages into wasm — Solidity to wasm and so on. I think it's because it's popular, and for portability. [Comment:] That's great, but this is what we want to escape, actually. One of the difficulties we have so far is that, in parallel to what we are doing, the engineering team is defining the next architecture. The architecture is not defined yet; it might be by now. But at the time we started thinking about this, several designs were possible: multiple wasm VMs, or a single wasm VM that would be parallelized, and so on. This was not fixed yet. But this is something we will work on; it will be part of the second milestone. Once we know what they do in the VM, we will use their design and base our work on it. But so far there is nothing related to wasm in what I will discuss. So what do we want to do, actually? We want to parallelize smart contract execution. Let's assume you have a block with two transactions, linked within the block. As you were saying, what is very convenient is deterministic, sequential execution: one transaction at a time, which is what most blockchains provide today. So transaction one will be executed before transaction two.
So if they perform actions on a smart contract, like set x to 3 and set x to 20, the miners and the validators will do exactly the same operations in the same order and we are fine: we get a consistent state. We know the story — you recalled yesterday total-order broadcast, consensus, blockchain and so on; this is all on this slide. What we would actually like to have is parallel execution, meaning that miners and validators might use several threads at the same time and perform the transactions' actions in parallel — not in an arbitrary order, but concurrently. The problem, obviously, you know what it is: the set-to-3 and the set-to-20 do not commute. Then the question is: what's the expected benefit? We improve the blockchain throughput. For latency, I'm not so sure whether we improve it or not; I wouldn't bet on it. The point is leveraging the multiple cores available on miners and validators. But the challenge is: how do we ensure the consistency of the blockchain despite this parallel execution? There are not so many papers discussing this. These are a few; there are some more, but I listed the most important ones, ordered here by date, not by conference. This one has no conference: it's a very interesting paper by Herlihy and colleagues which, from what I saw, has only appeared as an arXiv report so far — it's not really published. [Short exchange on whether an arXiv report counts as a publication.] So the goal of my presentation is to tell you quickly what these papers propose, looking at the ideas a bit more closely than this list. The following research ideas have been produced so far. First: speculative concurrent execution using STM-like techniques. This is one line of work.
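The set-x-to-3 / set-x-to-20 example above can be made concrete in a few lines. This is my own tiny illustration, not something from the slides: two transactions that write the same key do not commute, so miners and validators must apply them in the same order to agree on the final state.

```python
# Two conflicting transactions: both write the key "x".
def tx1(state):
    state["x"] = 3

def tx2(state):
    state["x"] = 20

s1 = {}
tx1(s1); tx2(s1)   # order: tx1 then tx2
s2 = {}
tx2(s2); tx1(s2)   # order: tx2 then tx1

# The two orders reach different final states, hence the need
# for a deterministic order (or for conflict handling).
assert s1["x"] == 20
assert s2["x"] == 3
```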
Then there is another line of work leveraging static analysis to detect conflicts a priori. This is no longer speculative: you execute something that you know will produce a consistent result. Then there is what I call multi-future prediction: this is the last paper, which appeared at SOSP just this year. A very complex, very interesting system, but probably a bit over-engineered, I would say — like many SOSP papers. And I'm not saying that because I don't have one; I just have something in the works. And then there are papers discussing off-chain execution, which I don't list here because it's a different model. Basically they say: let's assume smart contracts are executed somewhere else; the blockchain is just there to manage their execution at a high level, but they are not really part of the blockchain. So I don't cover them; if we want to follow that path, they might be of interest. [Question about spawning sub-chains for this purpose.] Basically what these systems typically do is use the blockchain to agree on "let's execute this contract", and maybe to agree on the output: the main blockchain performs this agreement — we agree that the output of this smart contract is that one — rather than really executing the smart contract within the blockchain and having it update the blockchain state. They separate agreement from execution, as was done 20 years ago in the consensus world; they apply that high-level idea to smart contracts. So, let me go back over each idea quickly. Speculative concurrent execution using STMs: there are several papers, but if you read this PODC paper you will get the operating principle, which is fairly straightforward.
Miners execute transactions concurrently, and they use STM techniques to detect conflicts between transactions at runtime. I assume you're familiar with STMs. If you're not: STM techniques allow pinpointing the memory addresses that are accessed, either in read or in write mode. So with an STM runtime you are able to say, at runtime, that this memory address has been accessed by more than one transaction, and at least one access was a write. If all accesses are reads, obviously, there is no issue. This is something that those of you who did the distributed programming labs at EPFL know well; Rachid worked a lot on this topic for years. So what happens to transactions that access the same portion of memory? These transactions fail and are rolled back. It's really like the database world: you either commit or roll back a transaction, and then you re-execute it. The thing is, recall that this execution must then be reproduced by the validators. This means you cannot simply say that validators also run an STM-like runtime and execute the transactions, because they need to end up with the same state as the miners. So once the miners have speculatively executed the transactions using this STM-like runtime, they generate a serializable concurrent schedule: a schedule that they embed in the block, telling you exactly how the replay should be done. This is part of why people say this cannot be integrated, for instance, within Ethereum without forking the blockchain: the blockchain is currently not architected to carry such a schedule. I do not want to enter the Ethereum-or-not debate. But yes, they generate a schedule, and this schedule is a kind of happens-before graph. You see why?
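The conflict rule described above — same address, at least one write — can be sketched as follows. This is a minimal illustration in my own names, not the paper's code: each transaction keeps a read set and a buffered write set, as an STM runtime would, and a predicate decides whether two transactions conflict.

```python
class TxLog:
    """Per-transaction access log, as an STM-like runtime would maintain."""
    def __init__(self, tx_id):
        self.tx_id = tx_id
        self.read_set = set()
        self.write_set = set()
        self.writes = {}          # buffered writes: key -> value

    def read(self, state, key):
        self.read_set.add(key)
        # read-your-own-writes, else fall back to the committed state
        return self.writes.get(key, state.get(key, 0))

    def write(self, key, value):
        self.write_set.add(key)
        self.writes[key] = value

def conflicts(a, b):
    """Two transactions conflict iff they touch a common address
    and at least one of the accesses is a write."""
    return bool((a.write_set & (b.read_set | b.write_set)) or
                (b.write_set & a.read_set))
```

In a real STM the conflicting transaction would be rolled back (its buffered writes discarded) and re-executed; here only the detection step is shown.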
This transaction should happen before that one, which should happen before that one, and so on — happens-before in the Lamport sense that we know. The validators then execute the transactions according to this graph. A block is accepted by the validators if the miner proposed a correct happens-before graph, meaning there are no data conflicts during the replay — while replaying, they observe that transactions executed in parallel do not access the same parts of memory — and there is no difference between the proposed final state and the obtained state. They perform an evaluation in this paper, but using a Java prototype. Why? Because obviously you don't have an Ethereum virtual machine supporting an STM-like runtime, so they basically emulate it to see the throughput gain they might observe. [Question:] Just a question: how big is that graph? — It's at the granularity of a block: the transactions within a block are represented as a graph. — At the granularity of transactions? — I would say yes. This would need to be checked, because you might have it at the granularity of operations within transactions, but I assume it's at the granularity of the transaction. We would need to confirm that. [Question:] Can you verify that the graph is correct? — You don't verify it as such: when you execute it, if you don't access the same portions of memory, it means there is no problem. [Question:] What about front-running? If there's a conflict on some address, a miner can tweak the graph so that the transaction it is interested in touches that memory first. Are they addressing that at all?
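A validator replay following such a happens-before graph can be sketched with a topological schedule (Kahn's algorithm). The function name and the batch representation are mine, not the paper's: transactions whose predecessors have all completed form a batch that could run in parallel.

```python
from collections import deque

def replay_schedule(txs, happens_before):
    """txs: list of transaction ids; happens_before: list of (a, b)
    meaning tx a must execute before tx b. Returns a list of batches;
    all transactions within a batch are mutually unordered, so a
    validator could execute each batch in parallel."""
    preds = {t: 0 for t in txs}
    succs = {t: [] for t in txs}
    for a, b in happens_before:
        preds[b] += 1
        succs[a].append(b)
    ready = deque(t for t in txs if preds[t] == 0)
    batches = []
    while ready:
        batch = list(ready)       # every currently dependency-free tx
        ready.clear()
        batches.append(batch)
        for t in batch:           # releasing a tx may unblock successors
            for s in succs[t]:
                preds[s] -= 1
                if preds[s] == 0:
                    ready.append(s)
    return batches
```

For example, with transactions 1, 2, 3 and edges (1, 3) and (2, 3), transactions 1 and 2 form the first parallel batch and 3 runs after them.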
The miners execute, they build the graph, and that graph should then yield an execution by the validators during which no two parallel transactions access the same portion of the state. [Discussion:] Front-running is still possible, because the validators will just check that they obtain the same final state; how the conflicts are resolved is up to the miner. The validators will see no conflict, but the miner can resolve the conflicts however it wants, which means it can build a graph that benefits whomever it wants. — Yes, yes: the miner decides what the graph will be. — So they don't address front-running? — No, they only address conflicts; you're perfectly right. [Inaudible.] — But now the ordering is constrained, right? — Yes, but you may have a graph, and that graph is only one among many possible ones. The point is that this puts constraints on the ordering, which makes the orderer's job easier. But if we stripped everything away and really only cared about the conflicts, we wouldn't need a total order at all; the orderer's job is just to decide what we agree on. Just so you know, there is another paper refining this STM-based strategy for smart contract execution, at Euromicro PDP 2018-2019 — not a prestigious conference, but anyway. The main difference is the following.
OK, so the miners' speculation is more optimistic: failed transactions are not rolled back, but are retried in a loop until they succeed. So there is no need for the undo logs used in the other approach. And the happens-before graph is shipped to the validators as what they call a block graph. On the validator side, threads called workers extract transactions from this block graph and follow what is called a topological order of the graph. As you said, you have this architecture with a pool of threads that pick the transactions to be executed; they just make sure to use the graph so that they enforce the order the graph encodes. But it's very similar: the same ideas, and you would probably get similar performance. OK, so that was speculative execution by the miners, with the validators reproducing what the miners did; there are only marginal differences between the two papers. [Question:] Why is this not a solution? — It could be a solution. It could be; I'll come back to this in my conclusions. Next: static analysis to detect conflicting transactions a priori. The operating principle, if you want the details, is in this paper. The idea is to use static analysis to detect conflicts between transactions — since you are not executing, a detected conflict is a potential conflict, not necessarily an actual one — and to generate the execution schedule ahead of time. So this is no longer speculative: you analyze the code and you generate the schedule. This matches your earlier question: you have the transactions, and you have the read set and the write set of every transaction.
We see that Tx1 and Tx2 should precede Tx3 because they are linked through x and y, and Tx4 should precede Tx5, but there is no relation between the others. Validators and miners can use this to execute the transactions in parallel while reaching a consistent state. The problem with this approach — and you don't need to be an expert to see it — is that static analysis is not effective on every language. Extracting the conflicts is complicated for languages with references, which is the case of Solidity: with references in your code, in your transaction, you cannot tell statically whether there is a memory conflict between transactions. In other words, Solidity cannot usefully be statically analyzed. So this is not a solution, but here is a remark: Solana uses a kind of static declaration. Here is how it works. Transactions specify a structure of instructions — that's the name they use. Each instruction contains the program to invoke and the list of accounts that the transaction wants to read or write. They say it's like operating-system APIs where you declare your accesses up front; that's exactly what they do. They work at the granularity of accounts: you must specify the accounts that will be accessed by each transaction. This allows executing non-overlapping transactions in parallel. It also enables execution optimizations on specific processors, SIMD instructions; they explain how they leverage this information to get better performance. So it's Solana's own design. But why not? If you are willing to pay the price of declaring the accesses in the application, you can execute more in parallel.
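The declared-access scheduling described above can be sketched as follows. This is a simplified illustration of the idea, not Solana's actual runtime or API: each transaction declares the accounts it reads and writes, and the scheduler greedily packs into a batch every transaction whose declared writes do not overlap another batched transaction's declared accesses.

```python
def schedule_by_accounts(txs):
    """txs: list of (tx_id, reads, writes), reads/writes being sets of
    declared account names. Returns batches of tx ids; transactions in
    the same batch declare disjoint writable state, so they could be
    executed in parallel without any runtime conflict detection."""
    batches = []
    pending = list(txs)
    while pending:
        batch, locked_w, locked_r, rest = [], set(), set(), []
        for tx_id, reads, writes in pending:
            # conflict if this tx writes something already read/written
            # in the batch, or reads something already written
            if (writes & (locked_w | locked_r)) or (reads & locked_w):
                rest.append((tx_id, reads, writes))
            else:
                batch.append(tx_id)
                locked_w |= writes
                locked_r |= reads
        batches.append(batch)
        pending = rest
    return batches
```

For example, two transfers touching disjoint accounts land in the same batch, while a transaction writing an account already written is deferred to the next batch.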
You need a language designed for this: a domain-specific language whose constructs can easily be analyzed. [Comment:] For example, for the EVM there is no static analyzer that can extract this. — Yes: you need a language that is not Turing-complete, in which everything is typed — the application is precisely typed, with typed operations that cannot manipulate accounts without making explicit which accounts are manipulated. If you have that, yes, you can do it. From what I understand, in the PL project the language is a given: it's wasm code. There is not one language, there are several languages, and I'm not sure the engineering team wants to have only one language. It's like the JVM: wasm is the bytecode onto which everything compiles. [Question:] You said you need the accounts written by the transactions? We have a gas meter, so when you have wasm code you already have to count the operations. Maybe in that analysis you could also check what is written, since you are already doing a pass over the bytecode for gas metering. — Maybe; we would have to check. My last slide is there precisely to list what we should discuss. OK, so: multi-future prediction. You will find this idea in this SOSP paper. The idea is to optimize the order-then-execute, deterministic consensus model — the determinism part is important. The observation is that there is a time window between the moment a transaction is known, because it has been disseminated by someone, and the moment it is ordered by consensus and executed.
And in this window between dissemination and consensus, the idea is to precompute the future execution of this transaction. How? [Inaudible.] ...but there is no consensus yet, and as we discussed before, you don't know in which order it will be appended in a block. So you have a multi-future: you know a set of possible transactions that could be included in a block, but the next miner that will win consensus is not known, and it can do whatever it wants. So that's why, basically, they say: OK, let's consider that there can be multiple possible futures; we generate an accelerated program for each of these futures, and when producing this multi-future they encode a set of constraints, so that at runtime they are able to say: OK, now we see that we are in this future and not in that one. So yes, they adapt to the fact that there might be multiple futures, and at runtime they need to pick the right one; and if the future that actually occurs had not been precomputed, they just fall back to a one-at-a-time, serializable execution. This runs on the nodes between the dissemination and the consensus phase.
The nodes know the transactions — they received them, they are waiting for consensus to be reached, and they have plenty of unused cores, at least so the paper says — and they leverage these unused CPU resources. The thing is, traditional speculative execution has one execution context to deal with, whereas here they have multiple execution contexts. This is the overall architecture — no, let's not consider this part. You have the future contexts: there are multiple futures that can occur, depending on which transaction will be ordered before which in the next block. So they produce contexts, do some prediction of what the next block might be — it might be this one, or this one — they perform program specialization, and they use memoization techniques to cache results and avoid memory accesses, etc. This generates accelerated programs (APs) that will then be used at runtime to accelerate the execution of the transactions. So when I say it's over-engineered: it's a bit complex, and you also need plenty of CPUs to leverage, which might not be the case — or you might want to use your CPUs for something else. [Comment:] You won't have many possible futures; if the miner is a nice miner, you might have rules such that you know what the possible futures will be. — And actually, they implement this on top of the Ethereum virtual machine, so it's compatible with it, and they show some improvements. It's a very complex systems paper; this is not a toy. [Question:] So the nodes are the ones that compute the future contexts? — Yes. — But they have all the state they require? — They don't know in which order transactions will be placed in a block. — Right, and they may not even have all the transactions.
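The multi-future mechanism can be sketched at a toy level. This is my own drastic simplification of the SOSP design (no program specialization or memoization, just whole-order precomputation), with hypothetical function names: while waiting for consensus, a node precomputes the final state for several candidate orderings of the pending transactions; when the real order is decided, it reuses the matching result, otherwise it falls back to one-at-a-time execution.

```python
from itertools import permutations

def run(order, txs, state=None):
    """Reference sequential execution: txs maps tx id -> fn(state)."""
    state = dict(state or {})
    for tx_id in order:
        txs[tx_id](state)
    return state

def precompute_futures(txs, max_futures=6):
    """Speculatively execute a few candidate orderings while idle."""
    return {order: run(order, txs)
            for order in list(permutations(txs))[:max_futures]}

def on_consensus(decided_order, txs, futures):
    """Reuse a precomputed result if the decided order was guessed."""
    key = tuple(decided_order)
    if key in futures:
        return futures[key], True          # fast path: predicted future
    return run(decided_order, txs), False  # slow path: serial fallback
```

The real system avoids enumerating full orderings by encoding constraints that let it match a partially observed future, which is precisely where the engineering complexity lies.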
Yes, exactly: they may not even have all the transactions. [Question:] So they compute the futures according to the transactions they've seen in their mempool? — Exactly. And this is why they compute multiple futures, but they might still not have the actual one. And what Matej was saying is that if you have rules on how miners integrate transactions within a block, it might be possible to say which future it should be — if you know all the transactions. If you are missing some transactions, there is no way to guess. But this is based on the idea that, beforehand, you will know all transactions that should be in the next block: not more, not less — the exact set of transactions. [Discussion of message pools:] I can imagine that with a big pool you could say: probably all these transactions, everybody already knows them by now. And miners have an interest in having as much information as possible for the new block in their pool. — But they have no incentive on... — Yeah. I think it's what I call an over-engineered work. Between Solana, which is very basic — you declare the accesses and, based on the declarations, everybody can run an event-based runtime that picks transactions and executes in parallel those that do not access the same state — and this system, there is a huge gap: this is many months of work. So, what should we conclude from all these research ideas? There is something nice to draw a conclusion from: this 2019 paper by Herlihy and colleagues, because it basically answers the following question: which performance gain can be expected? The idea is the following.
It uses historical data from Ethereum and tries to estimate the potential benefit of speculative execution — speculative execution meaning here executing smart contracts in parallel; I should have written "parallel execution". They assume the following execution model, with two phases. First, a concurrent phase where the virtual machine executes smart contracts in parallel; it tracks the reads and writes of each transaction, and it intercepts and buffers the writes. Two transactions conflict if they access the same location and at least one access is a write. In a second phase, all conflicting transactions are replayed in a one-at-a-time order. So they take the transactions of an Ethereum block from the historical data and try to execute all of them in parallel. A set of them work, meaning they commuted — they didn't access the same state; the ones that fail, they re-execute sequentially. And they compare the time this takes with executing all transactions in a serial manner. They cannot perform this study on all blocks — there are too many blocks, too many transactions; it would have taken too much time — so they take transactions occurring from 2016 to December 2017. This is the number of transactions per day on Ethereum; we can discuss this jump here. They studied, each time, a window of one week of transactions, and within that week they execute one tenth of the transactions, just to decrease the size of the data they actually process. This jump is the CryptoKitties thing. It's important because it's something often mentioned in papers: it might bias the way you interpret the data. CryptoKitties was a token-like smart contract: huge interest, many transactions, and obviously many conflicts.
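The two-phase model the study assumes can be sketched end to end. This is an illustrative reconstruction, not the paper's code: phase 1 runs every transaction against the same snapshot with buffered writes and records read/write sets; transactions found to conflict are then replayed one at a time in phase 2.

```python
def two_phase_execute(txs, state):
    """txs: list of (tx_id, fn); each fn takes (get, put) callbacks.
    Returns the final state and the set of conflicting tx ids."""
    logs = []
    for tx_id, fn in txs:                    # phase 1: speculative run
        reads, writes = set(), {}
        def get(k, reads=reads, writes=writes):
            reads.add(k)
            return writes.get(k, state.get(k, 0))
        def put(k, v, writes=writes):
            writes[k] = v                    # writes are buffered
        fn(get, put)
        logs.append((tx_id, reads, set(writes), writes))

    # same location + at least one write => conflict
    conflicting = set()
    for i, (ia, ra, wa, _) in enumerate(logs):
        for ib, rb, wb, _ in logs[i + 1:]:
            if (wa & (rb | wb)) or (wb & ra):
                conflicting |= {ia, ib}

    for tx_id, _r, _w, writes in logs:       # commit non-conflicting txs
        if tx_id not in conflicting:
            state.update(writes)
    for tx_id, fn in txs:                    # phase 2: serial replay
        if tx_id in conflicting:
            fn(lambda k: state.get(k, 0),
               lambda k, v: state.__setitem__(k, v))
    return state, conflicting
```

The measured speedup is then the serial execution time of the whole block divided by the cost of phase 1 (parallel) plus phase 2 (serial replay of the conflicting tail).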
[Question:] Because the parallelization cannot be done between functions of the same smart contract? — It could, it could. And that's one possibility I mention at the end: to use, for instance, conflict-free replicated data types to implement smart contracts. CRDTs, conflict-free replicated data types, are data structures whose operations commute by design. For instance, credit and debit operations on accounts — something blockchains are used for a lot — are the typical kind of object for which you can design operations that commute, as long as you don't overflow or overdraw the account. So no: you can parallelize within a contract in all generality; we just need to decide the path we want to follow. This is Milestone 2, the last slide, which we will discuss. But what's interesting in this "which performance gain" study: these are the numbers of transactions and of blocks in the seven intervals they studied, just for reference. And that's the performance gain. Interestingly — or, let's say, this might be seen as a problem — with the speedup on the y-axis and the seven studied periods on the x-axis, there are more and more conflicting transactions over time. This last period is the CryptoKitties case, but even before it, the speedup was already declining. The three lines are for the numbers of cores used to execute transactions, obviously. And the number of conflicting transactions depends on the number of cores, which decides how many transactions are actually executing in parallel. [Exchange: yes, exactly.] That's the funny story. And so, having a larger number of cores is better, but there is a point after which — at least among the lessons learned —
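The commutativity argument for accounts can be shown in a few lines. This is a minimal sketch of the idea (my own example, not a full CRDT implementation): credits and debits are pure deltas, so every interleaving of the same operations reaches the same balance — with the caveat mentioned above that overdraft or overflow checks, which reject some executions, would break this order-insensitivity.

```python
from itertools import permutations

class CommutativeAccount:
    """An account whose operations are deltas, hence commute."""
    def __init__(self, balance=0):
        self.balance = balance

    def credit(self, amount):
        self.balance += amount

    def debit(self, amount):
        self.balance -= amount

ops = [("credit", 10), ("debit", 4), ("credit", 7)]

def apply_ops(order, initial=100):
    acct = CommutativeAccount(initial)
    for name, amount in order:
        getattr(acct, name)(amount)
    return acct.balance

# every ordering of the same operations reaches the same balance
assert len({apply_ops(p) for p in permutations(ops)}) == 1
```

Because of this property, transactions made only of such operations never need to be serialized against each other, regardless of how many touch the same account.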
They say that beyond 64 cores, they didn't observe any benefit in increasing the number of cores, at least on the data they studied. And this is the conflict rate, the percentage of conflicts they observe: the conflict rate increases and the performance gain decreases. [Question:] I'm curious about the conflict rate, because with CryptoKitties everyone was transacting on basically the same contract, so you had a lot of conflicts. But if this is not all CryptoKitties — that was period 4-5, right? — That was 7. — Oh, it's 7. OK, that makes sense, because you would expect that the more applications... — No, CryptoKitties is this 7th one. These are the 7 periods, and this one is CryptoKitties; this is also why we observe a higher number of transactions and so on. All this is CryptoKitties. [Question:] The speedup is what they gain by having these two phases — the y-axis? — Yes. It's execution time, so throughput, if you will. You have a block; it's the time to execute the transactions within that block. A speedup of 2 means it took 5 seconds rather than 10. So it's the speedup of the execution of transactions within a block: I have a block to execute or re-execute, and this is the performance gain with concurrent execution. When you validate a posteriori, for instance, you have all the blocks and you need to validate the chain again — you catch up, and there you gain a lot. But otherwise, yes, there are other ways to optimize throughput: faster consensus is one, larger blocks is another. Next: this is the number of conflicts per address. We see that most addresses have very few conflicts, and then you have hotspots: some contracts yield a very, very large number of conflicts. These are the so-called hotspots — CryptoKitties and so on.
And they perform an experiment where they remove these hotspot contracts, and obviously the concurrency is better and the speedup improves. So that's also a possibility, for instance with subchains: redirect the very popular hotspots onto dedicated subchains and keep other subchains free of hotspots, or things like that. Everything is open; this would need to be studied carefully. The point is that these hotspots have a huge impact in this study. So, what are the lessons learned from the paper? Over time, speedups decline as transaction traffic increases — what we've seen with this slope. They show that it's important to distinguish between reads and writes: obviously, reads commute and writes do not. They perform two experiments: one in which two transactions touching the same memory address are considered conflicting, and one in which they distinguish writes from reads. This gives some incentive to go deeper in the analysis and really understand which accesses are writes and which are reads. They also studied another strategy with multiple concurrent phases: a first concurrent phase — a set of transactions succeed, perfect — then a second concurrent phase where you retry the failed ones, and at the end you always have a serial execution of the transactions that did not make it in one of the previous phases. They showed that this didn't yield much benefit: basically, the transactions that induced conflicts in the first phase will induce conflicts everywhere, and you don't gain much by trying to re-execute them in parallel.
They show that accurate static conflict analysis yields only a modest benefit. Why? Because what this means is that executing transactions that conflict is not a big issue; you don't lose much by keeping them in the parallel phase. Okay, you wasted some time executing in parallel transactions that turned out to conflict, but not that much; it's not a big factor. So basically: don't spend time on static analysis. Build a runtime that allows concurrent execution, that can detect conflicts at runtime, roll back the transactions that should be rolled back, re-execute them, and so on. But don't spend your time trying to generate a good schedule a priori: be optimistic, that's the idea. But the static analysis probably works? Yes, but it's a conservative approach and you don't gain much: it avoids some conflicts, but conflicts are not actually what slows the thing down. The system is efficient when you can execute things in parallel; even with conflicts, you have executed many things in parallel and you got the benefit. The static analysis part will avoid some conflicts, but not many, and the gain will not be much better. The schedule is in the blockchain? It's part of the blockchain state? Sorry, it's in the block that the miner proposes. It's a different block? Yes, so a Byzantine miner producing two different blocks: the two blocks cannot both win, one will win over the other. So you still need a majority of validators to... yes, I guess, otherwise I don't see how this would work. But in the end this is part of the block, so you only have one schedule, a winning schedule; you agree on the schedule, it is part of the block. Let me just finish, I have one slide left and then we discuss, no problem. So: increasing the number of cores improves the performance, but more than 64 was not useful; and high-contention periods resulted from a very small number of popular contracts, also known as hotspots.
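The optimistic strategy described here (run everything concurrently, detect conflicts, roll back and re-run the losers serially) can be sketched in a toy model. This is not the paper's implementation: it assumes transactions are Python functions over a key/value state, and it simulates the concurrent phase by running every transaction against the same snapshot while recording its read and write sets.

```python
class TrackingState(dict):
    """State snapshot that records which keys a transaction reads/writes."""
    def __init__(self, base):
        super().__init__(base)
        self.reads, self.writes = set(), set()
    def __getitem__(self, key):
        self.reads.add(key)
        return super().__getitem__(key)
    def __setitem__(self, key, value):
        self.writes.add(key)
        super().__setitem__(key, value)

def execute_block(state, txs):
    """Optimistic phase: run every tx on the same snapshot, commit the
    non-conflicting ones in order; serial phase: re-run the rest."""
    runs = []
    for tx in txs:                       # "concurrent" phase (simulated)
        snapshot = TrackingState(state)
        tx(snapshot)
        runs.append(snapshot)
    committed_writes, retry = set(), []
    for tx, s in zip(txs, runs):
        if (s.reads | s.writes) & committed_writes:
            retry.append(tx)             # saw stale data: roll back
        else:
            committed_writes |= s.writes
            state.update({k: s[k] for k in s.writes})
    for tx in retry:                     # serial fallback phase
        s = TrackingState(state)
        tx(s)
        state.update({k: s[k] for k in s.writes})
    return state

def transfer(src, dst, amount):
    """Hypothetical transfer transaction touching two balances."""
    def tx(s):
        s[src] = s[src] - amount
        s[dst] = s[dst] + amount
    return tx

state = {"A": 10, "B": 10, "C": 10, "D": 10}
execute_block(state, [transfer("A", "B", 5),   # disjoint: committed
                      transfer("C", "D", 5),   # disjoint: committed
                      transfer("A", "C", 1)])  # touches A: retried serially
print(state)  # {'A': 4, 'B': 15, 'C': 6, 'D': 15}
```

The point of the lesson above is that this runtime machinery alone is enough: the transactions that would be excluded by an accurate static schedule simply land in the cheap serial phase anyway.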
This is why I interrupted you, Marco, because of what's next. So now we can discuss. For speculative execution we have four choices. Declare conflicts: this requires an appropriate language. Detect conflicts: this requires runtime instrumentation. Statically analyze the code: we need a language designed to be analyzable. Use commutative data types, this was your remark: contracts that use operations that commute, CRDT-like operations. STM is here, Solana is here, STM is here, there is something else. And enable/disable speculative execution: why should it be enforced by design? You have periods of time where you have hotspots and so on; an idea might be, if you don't put hotspots within a subchain, etc., to just observe that the performance is not better with speculative execution and skip it. That just needs to be included in the runtime design, I believe, because it's not difficult to do and I think you can gain a lot. You should not consider that by design speculative execution must be enforced; maybe you will end up with cases where it should not be, and you should just be able to skip it. If you take something like V8, the JavaScript VM, it does a first just-in-time compilation of the code and then a deeper, optimizing compilation, so that you can start fast and then get efficient code. Could this be the case here: a first quick pass over the transactions detects some conflicts, and then, if you see that it's not a hotspot, you do more, or take only the transactions that are in conflict? That's possible, but that means you are in the static analysis case, where you analyze transactions. You seem to like static analysis. No, it's just that compilers are already doing a lot of this work. The thing is, what's interesting from the previous paper is that I'm not sure we gain a lot from
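The "declare conflicts" option mentioned here, the Solana-style approach, can be sketched as a scheduling problem: each transaction declares up front the accounts it will touch, and the scheduler packs transactions with pairwise-disjoint declarations into batches that can run fully in parallel. This is an illustrative sketch, not Solana's actual runtime; the function and transaction names are made up.

```python
def schedule_batches(txs):
    """Pack transactions into batches whose declared account sets are
    pairwise disjoint; batches run one after another, transactions
    inside a batch run in parallel. `txs`: list of (name, accounts)."""
    batches = []  # each: {"txs": [...], "accounts": set}
    for name, accounts in txs:
        # A tx must land in a batch strictly after any batch it
        # conflicts with, so the serial order of conflicts is preserved.
        first_ok = 0
        for i, batch in enumerate(batches):
            if not accounts.isdisjoint(batch["accounts"]):
                first_ok = i + 1
        for batch in batches[first_ok:]:
            if accounts.isdisjoint(batch["accounts"]):
                batch["txs"].append(name)
                batch["accounts"] |= accounts
                break
        else:
            batches.append({"txs": [name], "accounts": set(accounts)})
    return [b["txs"] for b in batches]

txs = [("t1", {"A", "B"}), ("t2", {"B", "C"}),
       ("t3", {"D"}), ("t4", {"A"})]
print(schedule_batches(txs))  # [['t1', 't3'], ['t2', 't4']]
```

Because the declarations are known before execution, no runtime instrumentation or rollback is needed; the cost is shifted onto developers, which is why mis-declaration has to be penalized.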
knowing which transactions will actually yield conflicts. What they show is that if you know this, you extract those transactions, so you don't put them in the concurrent phase, but what you gain from doing this is not that high. So if you spend time, if you spend resources, trying to detect conflicts only to gain 5% in the end, this needs to be carefully weighed. Yes. Ethereum: it's very close to what we have. Plus, what I didn't say, I'm sorry: there is no multithreaded Ethereum VM, this is what the study is based on. So how do they estimate time? Two things: gas estimation, and the number of operations. They show that the two give the same graphs; not exactly the same, because there is no exact map between the gas cost and the operations, but it's very close, they show that it's close. But you know, that's the way they emulate these time measurements. They take the time measurements from STMs? No, no, because you know which memory addresses have been accessed in STM; besides, the whole STM history would be rolled back by design. That's what I'm saying: the runtime has to be instrumented, because by design you must be able to say that these transactions conflict and roll them back, and that can only apply at commit time. And the analysis, do they consider... because you can have a transaction on one smart contract that calls another contract... because you have the transaction and you know the addresses it touches, and the logic... but in that respect, do they apply it like that?
The papers that really do static analysis consider that Ethereum doesn't exist, and they say: we are going to propose a smart contract execution theory. So they say that ERC tokens could not follow this design, but they say: we have a concurrency theory of objects, the smart contract itself should be able to say... But they are not at the level of transactions calling other transactions. And if we go for static analysis, we go down to a level of analysis like accounts or something like that. Besides, it's just a matter of time; plus, we don't have static-analysis experts, so we would have to do part of that work ourselves just to know what is needed to make it work. But yes, what seems most approachable to me is detecting conflicts at runtime. Then, if there is a very simple compilation or static strategy we can imagine, we can try to leverage that information, but we should not say it's the priority. For me it would not be the first idea. It's not the first idea; the priority is... but that's another model, another language, I mean. Sorry: for me it would be detection, or declaration. The Solana approach is very straightforward: you declare up front what you access, and if the declaration is not made properly by the developers, you penalize them. You have several blocks at the same time; that means these several blocks are your context, your new context: you don't do things at the granularity of one block but of a set of blocks. There are questions about BFT here, questions concerning BFT: you have several... there are questions about the application here... but how can this guy know
Normally you can't. You can: I mean, if you just look at the transactions, not at the state changes, you can figure out which ones conflict. But that's static. Yes, static analysis can do that. I think what is missing in the other model is speculative execution. Actually, I don't like the distinction between sequential execution and speculative execution, because sequential execution is just a special case: speculative execution is simply execution, and with sequential execution you just restrict the kind of schedules you can produce. But maybe here, if you have prepared the transaction, there is a static approach, and we come back to... it does not necessarily have to be executed in the program; I can easily imagine tools that could be arbitrary... but that's a static approach, and there can be some interaction. But I propose schedule number 1 and you propose schedule number 5; it's just a number. So how does that not work? No, I would say that you are the leaders of the execution. Ah, okay, okay. The interface is fine, but before that, the leader proposes to execute something... he manipulates the schedule, because he is the one proposing the execution, and while he does that... No, that doesn't work. I don't see how... there is someone who will take care of the block; this block that the execution... so how can he take care of it?
That doesn't work; he has to take care of the block. He does take care of it, yes, because if we put it there, you need access; you need access. Now you have access, and it's the fifth block where I do my fourth block... I can't find it, but there can be a block... To explain it, you have to know it.