Excellent, we are now live on YouTube. So welcome, hello everyone on YouTube. And as with everyone here in the meeting, if you could tell us where you are in the chat, that would be great, and keep the chat going. So Ali was going to run it today, but unfortunately he got pulled into an important meeting. He has some urgent things to deal with, so he can't make it today. So I am happily covering. I'm Julian Gordon, the VP for Hyperledger in Asia Pacific, and I'm pleased to kick off and facilitate today's session. We've got a pretty cool session today with Andrea from the University of Sydney, who is a PhD student, and he's going to be talking about his benchmark suite, which, as you can see, is called Diablo. I have just one or two housekeeping items to mention first. This is a Linux Foundation meeting; Hyperledger is one of the projects within the Linux Foundation, so I have to mention the antitrust policy. If you want to know more about that, please look at our website. Also, we will have a Q&A afterwards, so put any questions in the chat. And with that, and without further ado, I will now pass it straight to Andrea. So Andrea, do you want to kick it off? Welcome, and thank you for presenting at this meetup today.

Thank you. Yes, hello everyone. So today our topic is measuring the performance of blockchains. First of all, we are dealing with the fact that lately there have been a lot of different blockchain protocols announced by different companies, and they all claim impressive performance. For example, Algorand claims a throughput of up to 46,000 transactions per second.
That's in some particular setting. Avalanche, as another example, claim in their blog posts and documentation that they can support more than 4,500 transactions per second of throughput, and Solana claims an even more impressive figure of more than 200,000 transactions per second on their website. The problem is that while they claim this on their websites, we usually have no idea what kind of setup they used for such evaluations or measurements. By setup, I mean we don't know what kind of machines they used in terms of hardware. For example, Solana mention in their documentation that they can leverage GPUs and special processor instructions, which other blockchain protocols might not do. We also have no idea where the machines were actually situated: they could be in a single data center or in multiple data centers across the world. Another point is that we have no idea what the content of the transactions they sent to measure such impressive performance was. So on this slide we see the overview. One of the metrics they claim is throughput, in terms of transactions per second that the network can process, and the other metric is latency: the time between a client sending a transaction to the network and getting a confirmation that it has been committed, so it's in the blockchain and it's not going anywhere. On the left side, we see the claims. However, in our measurements, we observed quite different performance, much less than what they announced. And the idea here is that if we consider an environment much different from the unknown environment behind their claims, we get different results.
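As an aside, the two metrics just defined are easy to state precisely. Below is a small illustrative sketch, not Diablo's actual code; the helper name and the sample timestamps are invented. Throughput is the number of committed transactions divided by the span of the run, and latency is the per-transaction gap between submit and commit time.

```python
# Illustrative sketch (not Diablo's real code): derive throughput and average
# latency from per-transaction submit/commit timestamps, in seconds.

def throughput_and_latency(submits, commits):
    """submits/commits: dicts mapping tx_id -> timestamp in seconds."""
    committed = [tx for tx in submits if tx in commits]
    if not committed:
        return 0.0, None
    duration = max(commits[tx] for tx in committed) - min(submits[tx] for tx in committed)
    tput = len(committed) / duration if duration > 0 else float("inf")
    avg_latency = sum(commits[tx] - submits[tx] for tx in committed) / len(committed)
    return tput, avg_latency

submits = {1: 0.0, 2: 0.5, 3: 1.0}
commits = {1: 2.0, 2: 2.5}   # tx 3 was submitted but never committed
tput, lat = throughput_and_latency(submits, commits)
```

Note that a transaction that is never committed (tx 3 here) lowers the commit ratio but contributes nothing to the latency figure, which is exactly why both numbers are needed.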
All of this leaves us with the problem that it's hard to compare different blockchain protocols, while application developers somehow need to choose which blockchain protocol to use for their particular use case. That's why we came up with a solution called Diablo. It is an open source benchmarking framework written in Go. In contrast to the unknown workloads we mentioned before, we used realistic workloads based on real-world traces from applications, and for our experiments we geographically distributed the machines: we used 10 different regions in AWS, which I will show in detail later. In our work, we evaluated six different protocols. The idea was to test protocols with different consensus properties and different virtual machines. For example, Algorand and Avalanche have probabilistic consensus: when a transaction is considered committed, there is a high probability that it won't disappear from the blockchain. Diem and Quorum have deterministic Byzantine fault-tolerant consensus algorithms, so finality is guaranteed. And we also have Ethereum and Solana with eventual consensus protocols, where we have to wait for several confirmations, for example for several further blocks to be committed, to say with high probability that the transaction won't go anywhere. As you can see, they also have different virtual machines for smart contracts, so for our work we had to implement our workloads in different programming languages to be able to compare all these different blockchain protocols. Here you can see the architecture of Diablo. First of all, it's important to say that the setting we used to evaluate the protocols is realistic; it's not a simulator.
We actually deployed binaries compiled from the source code available on GitHub for those blockchain protocols. Diablo has a primary-secondary architecture, where the primary orchestrates the whole experiment and the secondaries send the actual transactions to the blockchain network. The primary takes two different configuration files. The benchmark configuration tells us how many secondaries there are, how many threads are used to send the transactions, what the load distribution is, and the transaction parameters. And we also have the blockchain configuration: what type of blockchain it is, what keys are used to send the transactions, and other parameters. The idea is that with such a separation, one benchmark configuration can be reused with different blockchain configurations. In general, the workflow is that in the beginning we externally set up a blockchain, a private blockchain network, on the machines we allocate. Then we deploy Diablo, possibly with multiple secondaries in different regions. The primary only holds the descriptions of the transactions to be sent; on the secondaries, we generate the actual transactions. We send them, noting the time when each transaction leaves the client, then wait on the secondary for the confirmation from the blockchain network, which gives us the commit time, and then we send these results to the primary, which aggregates them all. From that, we can derive metrics such as throughput, in transactions per second, and transaction latency. For the performance comparison, as I mentioned, our experiments used a geographically distributed topology: a set of 10 AWS regions in different countries, from the United States and Brazil all the way to Sydney.
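That separation of concerns can be sketched roughly as follows. The field names and values here are invented for illustration and do not match Diablo's real configuration schema; the point is only that one benchmark configuration pairs with any number of blockchain configurations.

```python
# Hypothetical configs; field names and values are invented for illustration
# and do not match Diablo's real schema.

benchmark_config = {
    "secondaries": 4,               # how many secondaries send transactions
    "threads_per_secondary": 8,     # sending threads on each secondary
    "load": [(0, 1000)],            # (time offset in s, target TPS)
    "workload": "native-transfer",
}

# Per-blockchain settings: protocol type, confirmation rule, etc.
ethereum_config = {"type": "ethereum", "confirmations": 12}
quorum_config = {"type": "quorum", "confirmations": 1}

def plan_run(bench, chain):
    # The same benchmark config can be reused with any blockchain config.
    return {**bench, "chain": chain["type"], "confirmations": chain["confirmations"]}

runs = [plan_run(benchmark_config, c) for c in (ethereum_config, quorum_config)]
```

With this shape, rerunning an identical workload against a different protocol is a one-line change to the blockchain configuration, which is what makes the cross-protocol comparison fair.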
With this setting, we wanted to have an idea of what a real network can be: we have different countries and we have varied latency. From these 10 regions, we used different configurations for different experiments. For example, for the datacenter configuration, we used only a single region in the United States, the data center in Ohio, with a powerful machine with a lot of memory and CPUs; this represents the data center of a single company. Then we have the testnet configuration, which is something that protocol developers can use: also a single region, but a much less powerful machine. Then we have the devnet, where we used all 10 regions shown on the previous slide; developers can use this to test under realistic latency, to check how the protocol operates when it's not in a single data center. Then we have the community configuration, which mimics a real-world mainnet in the sense of commodity machines run by actual individuals; here we have 200 blockchain nodes in total, 20 blockchain nodes in each of the 10 regions. And we also have the consortium setting, which mimics companies in different regions, so more powerful machines in all the regions. To describe the realistic workloads that we developed to evaluate these blockchains: the main point is that we took traces of real applications, and you can see their names. Before going into detail, these workloads have different characteristics. For example, NASDAQ shows how a stock exchange works at the beginning of the day: we have a high peak when trading starts and then a much lower load throughout the day. This characterizes a bursty load, so we can test how the blockchain handles bursts.
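The bursty NASDAQ-style shape can be pictured as a simple per-second rate schedule: a short, high peak followed by a lower steady rate. The numbers below are invented for illustration; they are not the actual trace values.

```python
# Toy burst-load schedule in the spirit of the NASDAQ workload: a high peak
# when "trading opens", then a lower steady rate. All numbers are invented.

def burst_schedule(peak_tps, steady_tps, peak_secs, total_secs):
    """Return the target sending rate (TPS) for each second of the run."""
    return [peak_tps if t < peak_secs else steady_tps for t in range(total_secs)]

schedule = burst_schedule(peak_tps=4000, steady_tps=400, peak_secs=10, total_secs=60)
```

A schedule like this is what distinguishes a trace-driven benchmark from a constant-rate one: the system under test must absorb the opening burst without stalling for the rest of the run.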
Then, for example, Uber, which is about running a taxi service: here we have to calculate distances in the city, so the characteristic is compute-intensive work. We have to make a lot of calculations, but the load itself is more or less static. For FIFA, the idea is that we have a service with high contention, so everyone accesses the same memory slot. We also have Dota 2, with a very high sending rate of around 13,000 transactions per second, and YouTube, with an even higher sending rate of around 40,000 transactions per second. Now, to start showing the results: first we want an overview across all the different workloads and protocols. In the columns you can see the different smart contracts we used, and you can also see the average workload in terms of transactions per second on top, ranging from 168 transactions per second up to almost 38,000. In the rows, we have the average throughput measured during the whole experiment, the average latency, and the ratio of committed transactions to submitted transactions. There are a couple of important points to note here. First, none of the blockchains managed to commit all the transactions we submitted: as you can see in the bottom row, none of them reaches a ratio of 1.0. It's also important to mention that in some cases there are no results at all for the ratio or the throughput. It might be the case that the blockchain didn't handle the workload at all. For example, Uber is quite a computationally intensive workload, and some blockchains limit what is usually called gas, that is, the number of computations that can be done in a single transaction.
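The gas cap just mentioned can be illustrated with a toy check. The limit and the costs below are invented and do not correspond to any specific chain: the point is only that if a single call needs more computation than the per-transaction cap allows, it can never execute, regardless of the fee offered.

```python
# Toy illustration of a per-transaction computation ("gas") cap; the limit and
# the costs below are invented and do not correspond to any specific chain.

GAS_LIMIT = 30_000_000  # assumed hard cap per transaction, for illustration

def executable(gas_needed, limit=GAS_LIMIT):
    # A transaction fits only if its computation is within the cap.
    return gas_needed <= limit

ok = executable(8_000_000)        # a modest contract call fits
too_big = executable(45_000_000)  # a compute-heavy, Uber-style call does not
```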
So in some cases we couldn't get the blockchain to execute a transaction at all; there can be different reasons for having no results for a particular blockchain experiment. Going to the individual results, we can see that for NASDAQ, all of the blockchains managed to commit something and maintain some throughput, with Quorum having the highest average throughput throughout the experiment. The idea here is that the workload is not high, and Quorum has a deterministic consensus algorithm. By design, deterministic consensus algorithms aim to commit for all of the clients, so they try not to drop any transactions and not to throttle. In this case, Quorum can handle the workload, and that's why we see results for it. However, if you look at Dota 2 and YouTube, we have no results for Quorum, because when the workload is constant and high, the transaction queues overfill and we cannot measure any results. So that's the story for deterministic consensus algorithms. We can also note, for example, Ethereum and Avalanche, which have quite high latency. These two protocols have a fixed block rate, so they do not optimize for latency but still maintain some throughput, so as to remain available in general. That was the overview of the results; now we can go into the details of the individual experiments. For example, here we have the average throughput and latency for a constant workload of 1,000 TPS of plain transfers, not smart contracts. What we wanted to test here is how the blockchain scales in terms of clients and deployment. In the different columns, you can see the different topologies I mentioned before.
That is, either one region or 10 regions, and different hardware. As we can see, the blockchains perform more or less the same when we scale from commodity hardware up to powerful hardware, that is, from testnet to datacenter. Again we see the relatively low throughput and high latency of Ethereum and Avalanche, which can also be explained by a consensus algorithm with a fixed block rate. However, what is interesting to note is that if we go from one data center to 10 geographically distributed data centers, the throughput of Diem drops drastically, from almost 1,000 TPS to around 200 TPS, or even lower if we add more machines. The conclusion we can draw is that Diem was optimized for low-latency networks and probably wasn't optimized for a geographically distributed network. We can also see that Solana is, let's say, the only blockchain that manages to sustain its throughput across the different topologies. The idea here is that Solana has a verifiable delay function, which helps it not to depend on the number of machines. For example, if we increase the number of machines from 10 to 200, for some blockchains we see an increase in latency and a decrease in throughput; for Solana, that's not the case, and both the throughput and the latency stay more or less the same. Next, in this particular experiment, we compare constant workloads of 1,000 TPS and 10,000 TPS, and what we want to test here is how robust the blockchain is against something like a denial-of-service attack: how it handles such a high workload if someone were to run such an attack. In the different columns, we have the different blockchain protocols in the configurations in which they perform best.
And we also have the average throughput and latency. What we can note here is that Diem doesn't crash; however, its throughput goes down almost 10 times, from 1,000 TPS to 100 TPS, and the latency goes up. Still, it doesn't crash: its transaction queues help it avoid crashing and still commit some transactions. For Solana, we see that the latency increases, so it takes more time to process a transaction, but the network doesn't stall either. In the case of Quorum, however, we see that with the high workload it crashes and we cannot get any results; it cannot handle what we mimic as a denial-of-service attack. For Avalanche and Algorand, it's hard to give more details, but they still manage to commit some transactions, with Ethereum having lower latency under the high workload but lower throughput. A possible explanation, as mentioned previously, is that having a fixed block rate helps maintain availability. Moving on to universality and distributed application execution: here we want to show how different blockchains handle the same workload. As before, the columns show the different blockchains with average throughput and latency. The smart contract and workload we used here is Uber, which is compute-intensive. For the three protocols where we say the blockchain is unable to execute the smart contract, the reason is that the amount of gas, that is, the number of computations allowed in a single transaction, is limited, so we couldn't execute this workload. However, the three other blockchains all use the Ethereum Virtual Machine.
In their case, there is no hard upper limit on the number of computations that can be done in a transaction, so we have results for the three blockchains that use the Ethereum Virtual Machine. Again, with Quorum, because of the deterministic consensus algorithm and the fact that it tries to commit for all of its clients, we see that it maintains a throughput of 600 transactions per second with a relatively acceptable latency of 25 seconds. For Ethereum and Avalanche, we don't see performance as good, which can also be explained by throttling in the transaction pool and by the block rate configured for the blockchain. In the next experiment, we test the NASDAQ workload, which has a peak in the beginning and then flattens out during the rest of the experiment. Here you can see the cumulative distribution functions for latency: what percentage of transactions were committed within a given latency. We test three different workloads. They all use the same NASDAQ smart contract, but with different load shapes: Google has an 800 TPS peak load, Apple has 10,000, the highest in this case, and Microsoft has a 4,000 transactions per second peak load in the beginning. Here we can see that Quorum manages to commit all of the transactions, so the CDF goes up to one, and it commits them in under 10 seconds, thanks to the deterministic consensus algorithm described before. If we look at the other blockchain with a deterministic consensus algorithm, Diem, it also manages to commit all of the transactions for Google and Microsoft, the 800 and 4,000 TPS peaks, with quite low latency.
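The CDF plots just described have a simple definition: for a latency threshold x, the fraction of committed transactions whose latency is at most x. Here is a minimal sketch with invented sample latencies (in seconds), not measured data.

```python
# Empirical latency CDF: fraction of committed transactions whose latency is
# at most x seconds. The sample latencies below are invented for illustration.

def latency_cdf(latencies, x):
    return sum(1 for l in latencies if l <= x) / len(latencies)

lat = [1.2, 3.4, 5.0, 7.8, 9.9, 12.0, 48.0, 70.0]
frac_under_10 = latency_cdf(lat, 10.0)  # fraction committed within 10 s
```

Reading a statement like "Quorum commits all transactions under 10 seconds" off such a plot just means the CDF reaches 1.0 before x = 10.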
However, if we scale the load up to a 10,000 TPS peak, it only manages to commit around 75% of the transactions, with latency under 50 seconds. If we look at the other blockchains, for example Ethereum, we can see that with the low workload it manages to commit almost all of the transactions, but with quite high latency, around 75 seconds. What we can say here is that while the transaction pool might not be limited, we still have the fixed block rate, and that's why it takes a long time to commit all of the transactions. We can also note similar behavior for Ethereum in the other experiments. Technically, if we had waited long enough, we could have seen more transactions committed; however, we limited our experiments to the period when transactions were actually being sent. That said, the previous results might look quite pessimistic: in the first result slides, we showed that none of the blockchains managed to commit all of the transactions, so the ratio of committed to submitted transactions was less than one. A colleague of mine, another student at the University of Sydney, used Diablo to evaluate the Smart Red Belly Blockchain. First there was the Red Belly Blockchain, and the Smart Red Belly Blockchain integrates the Ethereum Virtual Machine, which is why we can now test smart contracts on it alongside the other blockchains. In this particular experiment, the colleague used the GameStop workload: it's also the NASDAQ smart contract with a peak, but the peak load here is even higher, around 20,000 transactions per second. What we can see is that the Smart Red Belly Blockchain manages to commit all the transactions in under 20 seconds, and during the peak we have quite high throughput, around 3,000 transactions per second.
So we still have hope, but there is still work to do to be able to execute real workloads and realistic applications on a blockchain. In addition to the experiments in the geographically distributed AWS setting, with realistic latencies between regions, we also ran experiments in a local setting. We had machines with the configuration shown, arranged in several aisles. In general, what we wanted to show here is how to reproduce the geographically distributed scenario in a local setting. On the one hand, in the AWS setting you have a real network, so the latencies can fluctuate during the day; on the other hand, the experiments can be quite costly to set up. For example, if you scale up to 200 machines in total, 20 in each region, that adds up. You may still want to mimic a comparable workload in a local setting by adding artificial latency to the network links. So here, in each aisle, just as in AWS, we have Diablo and the blockchain nodes, and in the different experiments we add artificial latency; the actual results are on the following slides. First of all, we also wanted to test scalability, now not in terms of clients but in terms of adding machines. In our case, each aisle had five blockchain nodes, and we go from five blockchain nodes up to 35 blockchain nodes. For now we don't add any latency, so this can be understood as a single data center. What we can see is that with a 100 TPS workload, all of the blockchain protocols scale: the number of machines doesn't affect performance, and we have constant measured throughput and relatively constant measured latency as we add machines.
Now, if we increase the workload from 100 to 1,000 TPS, we can still see that the measured throughput stays relatively the same for all of the protocols. In terms of latency, however, we can note a difference, and what is quite important to look at is Quorum. Here we have not the average latency but a box plot, where the top shows the maximum measured latency over all the committed transactions. If we scale from five nodes to 35 nodes, the maximum latency increases from 30 to 120 seconds. This can also be explained by the deterministic consensus algorithm: a majority of the nodes has to participate in consensus, and it takes more time to commit a block and get a transaction committed. For the other protocols, we see an almost constant average latency. Moreover, if we increase the workload to 10,000 TPS, we can see that only Solana manages to handle it, with almost 8,000 TPS of measured throughput and more or less constant latency. With Quorum, while we can still see some measured latency for some topologies, there is essentially no visible throughput, so it doesn't manage to handle this workload, as we also saw in the previous AWS experiments. For the other blockchains, we see how the average and maximum latency now increase to almost 100 seconds, which is the duration of our experiment. Now we can look at the results where we emulate latency. Here we use all seven aisles, that's 35 blockchain nodes, and we vary the delay from zero milliseconds, the original setting, up to 300 milliseconds between each pair of aisles.
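On Linux, artificial inter-aisle delay like this is typically injected with tc/netem. The sketch below only builds the command strings rather than running them; the interface name `eth0` is an assumption for illustration, not the setup actually used in these experiments.

```python
# Build tc/netem commands that add a fixed egress delay on a network
# interface; "eth0" is an assumed interface name for illustration.

def netem_cmd(iface, delay_ms):
    return f"tc qdisc add dev {iface} root netem delay {delay_ms}ms"

# One command per emulated delay step, mirroring the 100/200/300 ms settings.
cmds = [netem_cmd("eth0", d) for d in (100, 200, 300)]
```

In practice such commands are run with root privileges on each node before the experiment and removed afterwards with `tc qdisc del dev eth0 root`.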
One aisle consists of five blockchain nodes. In the first experiment, with a 100 transactions per second workload of native transfers, we can already see how, for Diem for example, the gap between the maximum and minimum measured latency widens, as does the average measured latency, together with a corresponding drop in throughput. This confirms our earlier observation that Diem was optimized for low latency, and adding delay between the nodes negatively affects its performance. All of the other protocols, however, manage to maintain more or less constant throughput and latency across the different delays with this workload. Then we increased the workload to 1,000 TPS, and here we already see different behavior in terms of throughput. With Diem, we see the same pattern, but the effect on performance is stronger: a much larger drop in throughput and a higher increase in latency. Now Ethereum, Avalanche, and Diem all reach a maximum measured latency of almost 120 seconds. With Quorum, interestingly, we also see a drop: with no delay, it tries to commit all the transactions, but when we increase the delay even slightly, the throughput almost halves. It's also worth mentioning that Algorand and Solana handle this workload and latency fairly well. Lastly, for the 10,000 transactions per second workload, as before, only Solana manages to handle it, so we can say it's fairly well optimized for this. However, as we saw previously, when Solana was handling smart contracts it didn't commit all the transactions even with a relatively low workload.
For native token transfers, though, we see much better performance, which suggests that Solana handles native transfers better than smart contracts, possibly because of a lower instruction count and other optimizations built into the protocol. As before, we also see that Algorand manages to commit some of the transactions, as does Diem, but much less than the workload we actually sent to the network. It's also important to mention that this is not the first attempt to measure the performance of blockchains; other benchmarking frameworks were proposed before. There is an active project from Hyperledger called Hyperledger Caliper, with which you can also measure the performance of Hyperledger Fabric. However, it doesn't feature any realistic workloads out of the box, only synthetic ones. With synthetic workloads you can measure some performance, as we have seen for the constant workloads with native token transfers, but you cannot account for, or compare, performance under peaks or heavy computation, as in the NASDAQ or Uber workloads we developed. There is also Blockbench, which was published earlier; it features synthetic workloads adapted from databases, which are important, but again it doesn't reflect real-world workloads the way our work does. And there is Chainhammer, which is specialized for Ethereum; it only features a continuous high load, so you cannot vary the workload during the experiment, for example by having peaks and so on. Lastly, as I mentioned in the beginning, Diablo is an open source project, so all of the code is available on GitHub.
Contributions are very welcome; we would like to support protocols other than those we tested, so that we can have more results and a fair comparison across all the different blockchain protocols. You can access the webpage at the link now displayed on the slide. On that website, we have all of the instructions for how you can either repeat the same experiments we did in the paper or run your own experiments locally, and we also provide a virtual machine image, so you can even try running some experiments on your own machine without doing any complicated setup or configuration. So that's it from my side. Thank you very much, and I would gladly answer your questions.

Thank you, Andrea, that's excellent. That was a lot of information, very cool. Do you want to drop the presentation so we can all see each other? So we have a bunch of questions, right? Can you see the questions in the chat? The normal way we like to do this is for people to put their hand up, and then maybe you can answer the questions, or maybe I'll leave it to you, Andrea, to go through the questions. But if anyone has a question, put your hand up. There's a first hand up. Shall we pass it to Nishant? Is that okay, Andrea? Nishant, to ask a few questions?

Sure.

Thank you. Thank you so much, Andrea, for giving me this opportunity. I would just like to know about the Hyperledger part. Have you done any sort of testing on the Hyperledger side?

Nishant, could you explain who you are first, maybe?

I'm sorry, I'm sorry. My name is Nishant Kiri, and I've been working with Microsoft technologies for the last 18 years. I've recently moved to the blockchain platform and am learning a lot of things, currently focusing mainly on enterprise applications.
And that's the reason why I asked about the Hyperledger part, because I haven't found good blogs and good articles on Hyperledger. I don't know why that is. When I go to the Ethereum side, I can see a lot of things, a lot of contributions by different developers, but Hyperledger is maybe still not mature enough, to be honest. That's my limited understanding. So my question to you is: what is your observation about testing the Hyperledger part?

So by testing Hyperledger, do you mean executing the experiments on a Hyperledger protocol? Yeah, as I mentioned, we didn't test, for example, Hyperledger Fabric or Hyperledger Sawtooth or Iroha. There are different reasons for that. For example, Hyperledger Fabric involves quite a sophisticated deployment. We would really like to support it as well, to measure its performance and compare it with the others, but for now we don't support those three Hyperledger blockchain protocols, though we would really like to have them.

So it's in the future, right? I think, just to also answer your query there, Nishant: definitely, if you can't find Hyperledger material, there is a lot of it, and it is mature, right? It's been around a long time. Have you looked at the Hyperledger wiki at all, wiki.hyperledger.org?

I'm just looking into IBM Hyperledger Fabric. There is plenty of information there, but to be honest I still haven't found myself comfortable with it so far.

Yeah, and when you say that, there isn't such a thing as IBM Hyperledger Fabric; it's an open source project, right? Where are you based, Nishant?

I'm from India.

Okay, I'll tell you what: send me a link and I can connect you with the Indian chapter. We have a whole community that can help you in India, and globally, right?
So also, another thing we did mention: Caliper. We have Caliper, which already does benchmarking for Hyperledger Fabric, but Hyperledger has got many, many projects. So I'm happy to connect you. I'm on LinkedIn, Julian Gordon, and I'll connect you to the right people. Thank you. Thank you so much. It's a pleasure. It's a pleasure. Sorry, any other questions? Thank you, Nishant. I think we have some questions in the chat. Attila, do you want to go through those questions, or do you want to go through them with Andrei one by one? Attila. Hi, hi, everyone. Thanks for the great presentation. I imagine a lot of work went into this, just by the sheer number of plots you showed. So congratulations on the good work. Hi, everyone, I'm Attila Klenik, a researcher at the Budapest University of Technology and also a maintainer of Hyperledger Caliper. So I think there is an interesting overlap here. Yeah, I posted four questions in the chat in order of importance. My first would be: how did you choose these exact workloads to be blockchain-ified? Do you think these are relevant for the blockchain domain? Because as I saw, they were taken from traditional centralized workloads. Yes, so the reason is, as you mentioned, that we wanted to see how we can use real centralized applications in a decentralized setting. And just to mention that Vincent is also on this call, and if he has any additional points for the answer, yeah, please do. But yes, that was the idea: we wanted to see how decentralized applications work in terms of different aspects, like having contention, or having a high workload, or a high amount of computation. And on the figures you showed, there were a lot of transaction failures, so not many of them actually got committed. What were the main transaction failure types that you observed? Were they timeouts, out-of-resource errors, or something like this? What was your experience?
Yeah, so there are technically a couple of reasons. One of them, and I think the major one I mentioned, was out of resources. Some blockchains put a hard limit on the number of instructions, let's say on how much computation you can do in a single transaction, possibly to limit the execution time of a single transaction, for when, for example, you deploy your own smart contract and you want the whole network to execute it. It might be the case that on some machines the virtual machine will take a lot of time to execute it, so some blockchains put a hard limit on it, which you cannot even change in the configuration, for example in the genesis block or somewhere else. So that's the one thing. And the second thing I'd like to mention is the transaction pool limit. In some cases, a lot of transactions were dropped because of this. This can also sometimes be a hard limit on the number of pending transactions, and when the workload is high enough, the blockchain can drop the transactions, as we have actually seen for Solana, where the transactions were just getting dropped because of the high workload. Okay, thank you for the answers. I think I'll pass along the torch now. Okay, any other questions? Yes, so we have a friend, Fritz. Yeah, hi. Thanks for the presentation, Andrei, really. This brings back my time at the University of New South Wales, just because I'm a little bit jealous of where you're sitting right now. But here's a question about... oh, okay, my background: I'm a professor of computer science, of programming languages, at the University of Copenhagen. I'm also head of research at a company, Deon Digital, which is working on capital market infrastructure based on DLT. So the first question is, and correct me if I'm wrong here, is this right?
None of the systems actually has sharding; they're basically all solving the problem of putting all the transactions, the messages that you're submitting, into a single sequence. Is that correct? I mean, essentially, right? Like a blockchain. They're all solving the consensus problem in the sense of total event-order consensus. Yes. They're not sharded or kept separate or whatever. Yes, there is no DAG, a directed acyclic graph structure, or something like that. So yes, here it's all put into the same sequence. It's totally ordered. So, about the NASDAQ experiment, I'm just curious what it means to have a transaction there. Is it an order submission, a confirmation? Can you tell a little bit about what the experiment is for the NASDAQ workload? Sure. So for the NASDAQ workload, the idea is that in the smart contract you have a single key-value pair corresponding to a ticker, to a particular stock, and you have the transactions which interact with that key-value pair. And the transaction would be an order, either a bid or a sale order or a cancellation, or what? Yes. So here, for example, we can see that it's a buy, so it's a matter of, let's say, decreasing the value of that key-value pair. Okay. So I'm a little bit unsure what this means in terms of... this is trading data you're talking about, right? Yes. So the idea is that we have quite a simple smart contract, and the workload trace, for example having a peak value at a particular time point in the experiment, is gathered from a real trace. So we took a single day on NASDAQ, the beginning of the day, and mapped those values into the experiment. So at a particular time point, we send a particular number of transactions. And the Apple, Google, and Microsoft, this was for the stocks, right?
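As a rough illustration of that mapping, from a recorded trading day to per-second transaction send rates, here is a small sketch in Go, the language Diablo is written in. The type and function names are hypothetical, chosen for illustration, and are not taken from the Diablo code base.

```go
package main

import "fmt"

// TraceEvent is one order event taken from a recorded trading day.
// The fields here are an assumption for illustration purposes.
type TraceEvent struct {
	Second int    // offset from the start of trading, in seconds
	Ticker string // e.g. "AAPL"
}

// SendRates replays the trace as a send schedule: for each second of
// the experiment it counts how many transactions to inject, so bursts
// in the real trace become bursts in the benchmark workload.
func SendRates(events []TraceEvent, duration int) []int {
	rates := make([]int, duration)
	for _, e := range events {
		if e.Second >= 0 && e.Second < duration {
			rates[e.Second]++
		}
	}
	return rates
}

func main() {
	trace := []TraceEvent{
		{0, "AAPL"}, {0, "MSFT"},
		{1, "AAPL"}, {1, "GOOG"}, {1, "AAPL"},
	}
	fmt.Println(SendRates(trace, 4)) // [2 3 0 0]
}
```

Replaying all tickers at once, as Fritz asks about next, would simply sum these per-second counts across stocks, which is why the peaks add up.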
And you did that separately. Yes. So here's the question: what would happen if you ran a workload with all of the stocks, Apple, Google, Microsoft, and all the other ones, running from nine to nine-ten in the morning on NASDAQ? What would happen in these blockchain systems with the same experiment? So I would say that, first of all, all of those transactions will add up, so, as you might expect, the peak will go higher, and all of the other time points will be higher too, because all of them will add up. However, we might expect some blockchains to handle the contention a bit more easily in this case, because the transactions might be accessing different key-value pairs in the map. So the blockchain will still have to handle the high peak workloads, the bursts, but we might also see how it behaves when accessing different key-value pairs. Okay. Thanks, Andrei. I appreciated your presentation as well; it was great. Thank you. Thanks a lot, thanks. Yeah, thank you, Fritz, for those questions. And you're in Copenhagen, that's a pretty nice place too, right? I know Sydney's brilliant, but... Well, if you're sitting in Sydney right now, we can swap if you want to. Well, it's a seasonal thing. It's a seasonal thing. Yes. All right, I think we've still got a bunch of questions in the chat. So everyone, put your hands up if you want. A lot of people are saying what a great presentation, thank you. And a number of people are asking whether you are going to try other networks. Jerome's asking: did you try Polygon and Cardano too, Andrei? Yeah, so I see, yes, the question about Polygon and Cardano.
Yeah, so we didn't test those two protocols. Let me give some more details, in terms of APIs, on the implementation of Diablo and the particular experimental setting. The idea is that in Diablo we have an extensible API, so you can just write a module. You implement a simple interface of practically four main methods: creating the transaction, encoding some values into a protocol-specific format, and sending it over the network to the blockchain network with the specific protocol used. In these terms, we have implemented the interaction using the Web3 interface, which is used by Ethereum, also used by Avalanche in the C-Chain, and also used by Quorum. All of them reuse the Ethereum virtual machine code base in this sense, so we can interact with them through a single API. Technically, if you want to test some blockchain which already supports the Web3 API, you can basically plug it in right away and only write your deployment scripts, or deploy your own private network as you want, then just point Diablo at it, and you will be able to run the same workload. And if you would like to add support for other protocols, there are several examples in the code base: we have the Web3 interface that I mentioned, we have the interface for Diem, and interfaces for Solana and Algorand. So you can look at those different implementations and implement one for the protocol that you want to test. But yes, specifically to answer those questions: no, we tried neither Polygon nor Cardano, but of course we would really appreciate it if you could contribute to Diablo and add support for those protocols. So, in the meanwhile, since there are no hands up, if you want to ask any question, please do.
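To give a feel for what such a module looks like, here is a hedged sketch in Go of the four-method shape Andrei describes: build a transaction, encode it into the protocol's wire format, send it, and wait for the commit. The interface and method names below are illustrative assumptions, not the literal Diablo API.

```go
package main

import "fmt"

// BlockchainClient is an illustrative version of the kind of small
// interface a protocol module implements. A Web3-backed module, a
// Solana module, and an Algorand module would each satisfy it.
type BlockchainClient interface {
	CreateTransaction(from, to string, amount uint64) (interface{}, error)
	EncodeTransaction(tx interface{}) ([]byte, error)
	SendTransaction(raw []byte) error
	WaitCommit(raw []byte) error
}

// mockTx is a toy transaction type for the in-memory example below.
type mockTx struct {
	from, to string
	amount   uint64
}

// mockClient is a trivial in-memory implementation, standing in for
// a real protocol module.
type mockClient struct{ sent int }

func (c *mockClient) CreateTransaction(from, to string, amount uint64) (interface{}, error) {
	return mockTx{from, to, amount}, nil
}

func (c *mockClient) EncodeTransaction(tx interface{}) ([]byte, error) {
	t := tx.(mockTx)
	return []byte(fmt.Sprintf("%s->%s:%d", t.from, t.to, t.amount)), nil
}

func (c *mockClient) SendTransaction(raw []byte) error { c.sent++; return nil }

func (c *mockClient) WaitCommit(raw []byte) error { return nil }

func main() {
	var client BlockchainClient = &mockClient{}
	tx, _ := client.CreateTransaction("alice", "bob", 10)
	raw, _ := client.EncodeTransaction(tx)
	_ = client.SendTransaction(raw)
	_ = client.WaitCommit(raw)
	fmt.Println(string(raw)) // alice->bob:10
}
```

The point of this shape is that the benchmark core never sees protocol details: it drives any chain through the same interface, which is why a new Web3-compatible network can be plugged in with only deployment scripts.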
In the meanwhile, I can just go through the chat in order. Okay, that'd be good. So, the AWS cost for the experiments: unfortunately, I personally don't have the exact figures in terms of US dollars for that. But how long were the experiments? As you might have seen in the different figures, there was an upper limit on latency of 120 seconds, and that was the setup for the constant workloads with native transactions, like 100, 1,000, and 10,000, so the experiment duration was 120 seconds. For the smart contracts it's different; I think the experiment duration ranged from around 180 seconds to 300. And for average latency, we have taken into account all of the transactions that were sent during that period of time. So now, going to the questions from Attila. The reason for the workloads: yes, that's what we answered; the idea was to show different features of centralized applications and how blockchains can handle them. Failure types: yes, that's what we also mentioned. Tuning the networks: yes, that is a good question, and it's important to mention this as a limitation of the experiment. What we did is take the default configuration that is provided by the protocol, and also what they recommend. So as not to make any custom tuning from our side, we just went with the default configuration they provide, to make sure that we compare the different protocols on the same ground, as fairly as possible. But as you say, is it possible to tune it to achieve higher performance? Yes. For example, in some settings, say a private network of not so many nodes in the same data center, you can probably take some protocols like Ethereum and Avalanche and configure the block time to a value lower than 13 seconds.
And in this case, yes, you might see an improvement in the performance. As for the scalability of workload generation with a single primary instance: going into the implementation here, it's important to say that the primary, the single primary instance, only distributes the configurations to the secondaries, and these are minimalistic descriptions without any key pairs, so they don't contain any fully signed transactions, and the bandwidth used is quite low. In this sense, the workload is generated on the secondaries, either during the experiment or before the experiment. And the scalability totally depends on the hardware that you use: how many vCPUs the machine has, how much memory, and also how many secondaries you want to have, so how many machines you can run. All of these different parameters affect the scalability. In our experiments, we used 10 secondaries where possible. It wasn't possible to do that in the single data center setting, for example; however, in the data center setting we already had quite a powerful machine, so even fewer secondaries were enough, and we have seen that they were not fully saturated. For the other scenarios, having only 10 secondaries in the whole network, so a single secondary in each of the regions, we have also seen that it was enough to support our workload. Yes, so as Nishant asked before about Hyperledger: we would really appreciate your contributions to add support for it. Why didn't we evaluate it? Because of the higher complexity in terms of deployment, and we also wanted to focus more on the protocols that had mainnets, so which were more like public blockchains. For Polygon and Cardano, yes, I answered that. Yes, Hyperledger has multiple blockchain protocols aiming at different use cases.
But yes, of course, we would like to support them as well. And as Gerald mentioned, in general, if different blockchains provide the Web3 API, that helps a lot. In this sense, we don't have to re-implement the smart contracts in different programming languages, so we can have a fair setting, because different programming languages actually introduce different limitations. For example, Solidity and the EVM might have one set of instructions, while Move, and the languages used in Algorand, have their own limitations. So having a Web3 interface is great. There is a question about learning to create a Hyperledger network from scratch: I would highly recommend, first of all, getting an idea of which particular Hyperledger blockchain you want to use, like Fabric or Sawtooth or something else, and then going to the corresponding wiki page and documentation. And Julian answered that, so thank you. And again, thanks to everyone who attended the meeting, and thank you for your attention. Thanks for that, that was great. I just have a general question: this is open-sourced, Diablo, right? What are the plans for that? How do people get involved? You know, we've got Caliper, there's all kinds of tools; what are the thoughts moving forward for the platform? Yes, so for this, I've been trying to have a discussion and keep in touch with the Performance and Scale Working Group.
So we had a meeting in January, where I introduced a little bit what Diablo is, and I think the idea would be that, within this working group, we can collaborate and figure out what kind of metrics we can use, or, for example, how we can reuse the same API, the same module, the same interface for sending transactions, so that if you implement blockchain support only once, you can use it with both Diablo and Caliper. But in this sense, we would just like to have these two different projects, each with their own contributors, and keep them open source. That's great, this variety is good, right? Yeah, exactly. And ultimately some of these things merge, and all kinds of things happen in the open source community. So yeah, that's great. I think we've had a lot of questions and lots of compliments to you, Andrei. That was a great presentation, right? Thank you, great material. Obviously a lot of effort went into that, and it's a really good outcome. So I'd like to say thank you to you and to everyone listening here. This will be on YouTube, so people will watch it again and again, and I will try to put some of your links below, so people watching on YouTube will get those. And maybe you want to say a few last words, Andrei, and then I think we can wrap. Sure. So yes, of course, I think it's important to mention all the co-authors of the paper. We also have Vincent here, who is my supervisor and who put a lot of effort into this work as well. Also Gauthier, who is a postdoc at EPFL; he also made significant contributions to the implementation and the experiments. And we can say thank you to Christopher Natoli, who implemented the initial Diablo prototype; the technical report is also available on the website as well.
And yes, that's what I would like to say. So again, thanks everyone for joining in. And if people want to contact you or get involved, is LinkedIn the best way to do that? Yes, I am available on LinkedIn, and probably also on the Hyperledger Discord. Previously I was a maintainer for Hyperledger Iroha, so I think I'm still listed there as a maintainer. And probably the surest way to communicate is email: in the Diablo paper that is on the website there is my email, and people can use that as well. That's great. I didn't realize you were on Iroha as well. Iroha's been incredibly successful, right? Like the Cambodia case with central bank digital currency. Yeah, that was quite a lot of work. Yeah, it's still going, and I'm very much in contact with them. That's excellent. So actually, just a question: how do you think people should contribute, or become maintainers? Have you got any advice for people who want to contribute back to these open source programs? Do you mean for Diablo, or just in general? In general. Oh, for open source in general, of course. I mean, first of all, I think it's important that people try to run the software, locally or somehow, just to interact with the program: try to compile it, try to get the whole thing running. That's the first step. And then for contributions, even if there are feature requests or issues listed in the GitHub project, it's still important to contact the maintainers, just to make sure that nobody else is working on the same thing, and so that the contributed implementation adheres to the programming guidelines that the team is already using.
So yeah, first try to run the thing yourself, and then, if you already have some improvement points in mind, you can write to the maintainers. Communication here is important. Okay, thank you for that. Any other questions? I think we'll give it a couple more seconds. Any other hands? I think, yeah, it's been a great presentation. Thank you. Thank you, Andrei, again. Okay. So I think... oh, yes, claps, I see a few. So thank you everybody, and take care. All right. Thank you everybody. Cheers. We'll wrap it up here. Thank you.