Hi, everyone. Welcome to the meetup. Today we have with us Sam Yuan, who is an active member of the Performance and Scale Working Group and a board member of the Technical Working Group China. He has been actively presenting on various topics in the blockchain space, including telemetry support and pluggable crypto service. Today he'll be doing a deep dive session on the performance sandbox, which is part of Hyperledger Labs. I'm very excited to learn from it and understand it better. So without further ado, over to you, Sam.

Okay, thank you. And yeah, as you said, this is a community session, so I put the antitrust policy here. And as you can see on my camera, I'm wearing a Hyperledger Fabric five-year t-shirt to attend. So let's start. For today's session: I'm Sam Yuan, a developer at IBM, but today I come here on behalf of two working groups in Hyperledger, to start our journey toward blockchain observability together with you, with our new Labs project, the performance sandbox, as a deep dive session. This project was announced in March of this year, and personally I hope it can have its first 0.1.0 release at the end of this year. I prepared QR codes here for my personal GitHub and for this project. Let's move on. Here's the agenda, the steps for our journey; the session has two halves. In the first part, we'll start with some background: the paper the Performance and Scale Working Group published years ago, and some concepts around monitoring and observability, as background and modeling. Second, from that, we'll see the targets for the performance sandbox project, then move on to design, implementation, current achievements, and some ongoing discussions. Then we'll have a few minutes as a break, and the second part will be the hands-on session.
I'm going to use the performance sandbox with Fabric, with the basic asset transfer chaincode, to run an online workshop here with you. After that there will be a Q&A session covering all of today's session. Okay, so moving on to the first part: the paper. You can see the links; I put the links at the bottom of this slide, and the QR code, if I'm correct, maps to the PDF version. In this white paper from years ago, the Performance and Scale Working Group defined a model for blockchain performance testing. From this picture here, we can see there are five nodes with a star of network connections. The nodes in purple or blue represent a blockchain network, and this is called the system under test. Blocks get generated, consensus is reached and confirmed among the nodes, and the traffic, the business transactions inside the blocks, happens there. At the left side of this picture, in this model, there is the test harness, which has two parts. The load generating client, the performance test client, just takes some parameters and keeps sending transactions to the system, no matter whether it's a blockchain or something else. So the load generating client just keeps sending traffic to the system under test, and an observer client, one or many, monitors the system under test to see what's happening in it. The white paper defines two kinds of operations, read and write (or read and write transactions), and two metrics, latency and throughput. Okay, so let's adapt this to the Fabric workflow with its four steps, and look at the read operation and the transaction, or write, operation.
In the paper, a read transaction, if we take Alice and Bob doing a transfer, means Alice queries her account via a client application. In this case, we don't need to create a new block in the system; we just need to know Alice's current account balance, so a new block does not need to be generated. But consider a transaction, or write operation: if Alice is going to transfer some coins or assets to Bob, we obviously need a block recording this transaction, and this block and the transaction need to be confirmed across the network. Adapting this to Fabric, there are four steps in the transaction workflow. First, the client sends the request to peers for endorsement. Second, the client waits for the peers' responses with the endorsement results. Third, the client sends the endorsed transaction to the ordering service and waits for the ordering service to generate a block. The final step is that the ordering service sends the blocks to the peers; the peers validate and commit the blocks and send a notification back to the client. So, in short, there are four steps in a Fabric transaction workflow. For a read operation, we only need the first two steps: for a query in Hyperledger Fabric, we don't need a block to be generated. For a transaction, or write operation, we obviously need to wait for the block to be generated and confirmed by a number of peers, with a notification back to the client. And we define the metrics: latency as the time difference, or duration, between start and stop, and throughput as how many operations complete in the network during a period of time. Okay, moving on. Let's talk a bit about monitoring and observability.
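The two metrics just defined (latency as the duration between start and stop, throughput as operations completed over a time window) can be written down in a few lines. A minimal sketch in Python; the function names are my own illustration, not part of the sandbox:

```python
from datetime import datetime, timedelta

def latency(start: datetime, stop: datetime) -> timedelta:
    # Latency: duration between the start of an operation and its completion.
    return stop - start

def throughput(completion_times: list[datetime], window_start: datetime,
               window_end: datetime) -> float:
    # Throughput: operations completed per second during an observation window.
    done = [t for t in completion_times if window_start <= t <= window_end]
    return len(done) / (window_end - window_start).total_seconds()
```

For example, 500 transactions completing evenly across a 5-second window would give a throughput of 100 tx/s.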
Long before today — nowadays we have AI, blockchain, cognitive services, and many different complex services — long before that, we already had monitoring systems, and they worked somehow in this way: maybe a cron job running a shell script to monitor a virtual machine or a specific hardware instance for CPU usage or workload on that instance, and if some peak happens, or a value goes over a threshold, the shell script automatically triggers a mail alert to the operations team's mailbox, and the mail goes to someone on duty, who comes online to check what's happening. That's the traditional way we do monitoring with metrics: maybe just CPU and memory usage, or, going further for a specific application, database metrics, or for Java the JVM metrics, et cetera. Nowadays we have microservices and distributed systems such as blockchain, so we have the concept of observability, which builds on metrics by adding distributed tracing and log collection across the systems, or containers, to collect data from different points of view. Those data collections give us different kinds of views and make our work more focused and easier; today's monitoring puts humans at the top level, with, for example, ChatOps and runbooks. Those kinds of things come together on top of observability, by adding data collection through the different views. Take distributed tracing as an example, and how distributed tracing makes our work easier, for example with Jaeger. For now, I'll just show you this picture, which I took from the book Mastering Distributed Tracing. And we have the transaction workflow model here.
When we do performance testing targeting a distributed system such as blockchain, we're dealing with distributed concurrency, because many transactions happen at the same time, as this picture shows. Following the workflow chart here, we start from a single client application and one transaction, the rectangle and the lines here. Then, as a second step, the transaction moves to the peer components for endorsement, then on to the ordering service to generate the block, and back to the peers for validation, right? So, through all those steps, how many components do we need to check in this distributed system? As I said, we can have a single client application as our load generating client. And we'd want two peers, playing the roles of two different organizations, to endorse this single transaction, so ideally we have two peer containers or components there. Then for the orderer here, five means a Raft cluster of five nodes. Then we move back to those two peers for the validation and commit phase logic. So, in this case, we have at least eight components' logs to trace: if this specific transaction has very large latency and takes a lot of time, we need to check the logs to see what's happening, which means going through eight components' logs and tracing through them. And consider that this is an assumption, an experimental setup, not your production or real-life case. As your network scope, your set of nodes, your system under test scales up more and more, you will have more and more components to trace. Okay, so we took some minutes to go through some background and the model. Let's see our targets.
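To make the "eight components' logs to trace" point concrete: once distributed tracing is in place, each hop contributes a timed span, and finding the slow hop becomes a single lookup instead of reading eight logs. A toy sketch in Python, with made-up span names and durations, not real sandbox output:

```python
# Per-span durations (ms) for one transaction, as a trace might report them.
# Component names and numbers here are illustrative only.
spans = {
    "client.send_proposal": 2.0,
    "peer0.org1.endorse": 15.0,
    "peer0.org2.endorse": 18.0,
    "orderer.consensus": 240.0,
    "peer0.org1.validate_commit": 30.0,
    "peer0.org2.validate_commit": 28.0,
}

def slowest_span(spans: dict[str, float]) -> str:
    # One max() over span durations replaces manually tracing eight logs.
    return max(spans, key=spans.get)
```

Here the trace would point straight at the ordering phase as the place to investigate.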
Okay, so when we talk about our targets: we decided to build this sandbox project, for observability-related things for blockchain, on top of Kubernetes, basically because Kubernetes is popular, and for two reasons. First, thanks to the fabric-samples team, there is a sample called test-network-k8s; and second, the observability-related tools, for example Jaeger, Prometheus, Grafana, etc., have operators supporting Kubernetes. So we can easily deploy and integrate the whole monitoring, or observability, system all in one in Kubernetes, and upgrade from traditional monitoring to observability: collecting more data to help us, with metrics, as I said, plus distributed tracing and log collection, with all those kinds of data being collected. The point is that people who use this Labs project to do monitoring get data to support them in making decisions, doing analysis, etc. Those are our targets. So, in color on the targets slide: we do a sample in the Kubernetes area and support human insight, for example runbooks. You can build your runbooks on top of the metrics, for example with some ChatOps. If you do ops, or observability-driven development, and need to do things in this kind of area, you can use the sandbox as your local development environment, to support developing your own scripts, alert settings, things like that, on your local laptop. So the first target is helping you do things with operator runbooks, the human-insight side. And the second is bottleneck analysis.
For example, metrics and distributed tracing: there are two areas, or ways, to it. For the first: as we all know, blockchain involves crypto-related operations, so it's a CPU-heavy workload. We can have CPU usage on one side, and the latency or throughput metrics on the other side. In that case we have an overview that easily brings different metrics together, to see what's happening, whether hardware, CPU usage, or memory usage influences our system performance. The second point of view is for a specific transaction: we can use distributed tracing to figure out, if there is a long-latency transaction, the different time usage — which phase is the bottleneck, which duration or phase costs the most time — so we can focus on that phase to make effective enhancements. So, yeah, our major target is to give the users of this Labs project things in the Kubernetes area to do monitoring and analysis. Moving on, next we're going to see our design, implementation, and achievements. First, some design and flexibility considerations for this project. First, you can bring your own Kubernetes. Because we use Kubernetes as the infrastructure — personally I tried kind, and also minikube, but recently it seems not stable — and we use operator technology to deploy our components on Kubernetes, then thanks to Kubernetes conformance across your options, you can easily migrate from kind to your own Kubernetes cluster, and then from your pre-production to production. Something like this has been considered: you can bring your own Kubernetes and adopt this project.
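One rough way to do the first kind of analysis — checking whether CPU usage moves together with latency — is a correlation coefficient over two metric series sampled at the same instants. A sketch with hypothetical samples; this is my illustration, not sandbox code:

```python
def pearson(xs: list[float], ys: list[float]) -> float:
    # Pearson correlation: +1 means the two series rise and fall together,
    # near 0 means no linear relation. Assumes equal-length series sampled
    # at the same timestamps.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical samples: CPU % and p99 latency (ms) at the same timestamps.
cpu = [20.0, 45.0, 70.0, 95.0]
latency_ms = [110.0, 150.0, 260.0, 900.0]
```

A strongly positive value would support the "crypto workload saturating CPU drives latency" hypothesis; a weak one would point elsewhere.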
Also, you can bring your own images: for example, from a research point of view, if you change some code for the blockchain components, you need to deploy it once and then see the difference, so you can bring your own images there. Second, you can define your own system-under-test scope. If you want to change your network setup, you can bring your own system under test, and the size of the system under test can be scaled up or scaled down, under your own control. For this point, we are starting an investigation together with another Labs project, the fabric-operator, to make it more flexible and easier to use. And third, you can bring your own chaincode. If you go to look at the project's CI at GitHub, you will find a sample with an ERC-721 contract, an NFT contract, running on the system under test and doing some performance-related things. So you can simply bring your own chaincode there to do the testing. Okay. As for achievements, we have published some research and some things related to those four metrics at GitHub, and for now I'll just take latency as the example. In the middle of this slide there are two charts, and I'm going to go into detail on them. On the right side, we use Tape, and the QR code, if I'm correct, points to the project homepage. Here we can see the project workflow: it directly follows the Fabric workflow. It starts from a transaction with a random value, sends it to peers for endorsement, then sends it to the orderer and waits for the orderer to generate the block; the block goes to the peers, and after the peers validate and commit, a notification is sent back to the observer client of Tape itself. And then in Jaeger, the distributed tracing, we can see the first step and the second step pointed out there.
So, let's do the distributed tracing of this transaction workflow. At the very beginning we had the distributed concurrency picture; I'll bring that up again. First, we can see the client starts the transaction — we have a label, transaction start — then sends the proposals to the peers and waits for the peers to come back with the endorsement results, so the endorsement phase shows up here. Then we can see the ordering process, where ordering runs consensus and takes some time, and then it goes back to the peers again for the transaction validation and commit phases. And then, moving on, here is our metric for latency, from Tape. This is just at the beginning of the performance test, so we can see the latency keeps increasing. The latency values are, in fact, just Golang duration values converted to numbers, and the different colors of lines represent read latency and transaction latency at the 50%, 90%, and 99% percentiles.

Okay, that completes our first half, and we're going to start the hands-on after maybe five minutes. During that time, I'm going to switch to this slide here. If you want to follow along, you can prepare your own environment — you'll need Docker and kind — and I personally suggest you check the QR code here with the image list, to pre-download the images to your local disk. When we start up the system, it will try to find the images and start downloading them, which can take a lot of time depending on your network. So, let me check the chat. It seems we don't have questions in the chat room, right? Okay. Let's start our five-minute break, and see you later. Okay, thanks for handling the recording, and here we're back from the break.
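The percentile latency lines (50%, 90%, 99%) mentioned just now are percentiles over the observed durations. A minimal nearest-rank percentile in Python, purely illustrative — in the sandbox these values come from the dashboards:

```python
def percentile(samples: list[float], p: float) -> float:
    # Nearest-rank percentile: the smallest value such that at least
    # p percent of the samples are less than or equal to it.
    s = sorted(samples)
    k = max(1, int(round(p / 100.0 * len(s))))  # 1-based rank
    return s[k - 1]
```

With the 99% line, one slow transaction in a hundred is enough to pull the reported value up, which is why it sits far above the 50% line under load.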
Before I continue the session, I'm going to briefly introduce what will happen in the following hands-on session. First, we will start Kubernetes based on kind. Second, we will start the operators — based on operator technology — to start the observability systems for metrics and distributed tracing. Third, we will start a Fabric network; currently we just do some modification based on test-network-k8s, so we will start a system under test that is essentially the same as test-network-k8s. Then we are going to use Tape to send traffic to the system under test, and check the metrics on the Grafana UI, and Jaeger for distributed tracing. Okay, let me share my screen. I suppose you can see my screen with VS Code, right?

Yes, we can. Sam, sorry to interrupt you, there are some questions in the chat box. Can you please take them?

Okay, so my original plan was to have the Q&A at the end of the hands-on, to give people some time to download the images. But okay, let's do some. "How are the metrics defined, and what about the consensus mechanism?" First, this project is not focused on consensus mechanisms. It uses shell, Kubernetes, et cetera, commands to help you integrate tools together and do your research from the UI. It's not focused on consensus mechanisms or algorithms at that level; it sits at the top level, focused on the blockchain system, and the metrics just reuse what the blockchain system provides. For example, in the coming hands-on demo, I will show you some dashboards based on Fabric metrics, which are provided by Fabric itself. So that's the answer to this one. Next: "When the number of nodes increases, how does the graph show the various latencies?" Well, for this question, it depends on two things.
One is the latency for a specific transaction, considering the number of nodes that confirm the transaction — that's the first thing. And the second interpretation is the traditional one: when the number of nodes increases, how does that impact the system? So which does the question point at, what do you mean? Yeah, you asked about the number of nodes versus latency — do you mean the confirmation latency, or latency generally speaking, as the network grows? Okay, I have to say there are some different definitions behind this question. First, I suggest you take a look at that white paper; there is some description about it, and you can find that some metrics show different aspects there. I don't quite get your question, or why you're asking it this way, so I'm going to move on and show you something with Jaeger and Grafana.

First, let me do some cleanup and start the infrastructure: I'm starting Kubernetes with kind, with a local registry. This is the first step. Let's come back to the latency-related question with Jaeger later. Okay, so the cluster has started, and we have a node there. No pods found yet, and now we're going to load the images. To load the images, you'd better write a hard-coded file, an image list — I put the image list behind the QR code, which I showed during the break — and just copy-paste all the images there, making kind load the images from your local Docker environment into kind itself, to save time. Okay, it takes some time to load them.

"Network size?" So, different network sizes behave differently, right? Do you mean the network size itself, or how it affects the whole system? Okay.
So, in fact, you want the network size fixed, and you want to know how the transaction throughput changes as the number of nodes required to confirm a specific transaction increases, right? Okay, just wait some minutes. Currently, you know, we don't — that's a question about baselines, which is a different kind of thing. Currently this project is just at a beta phase, with some infrastructure built, and it doesn't yet fully support adjusting everything — such as, as you ask in the chat room, the network size — in an automated way. I have had some discussions with the Fabric maintainers and the fabric-operator maintainers, so as a next step we are going to do some investigation and integration; after that, the network size can be adjusted in a flexible and easy way. So far, we just use a fixed network for the development of this project. Okay.

So here I'm starting Jaeger — okay, there's some error. Give me a second; I need to redo this. I'm going to restart, because I made a mistake here. Wait a minute, I'm going to rerun this. My fault. Yeah, you know, I was focusing on the chat and answering questions, and ran a wrong step of the scripts, so I'll just clean up and rerun the scripts. I'm going to ignore the questions in the chat room for now, to complete the workshop and demo first. Could you please help record the general Q&A questions, so I can answer them after I've completed the demo?

Okay, so you mean you want me to record all the questions so that we can answer them toward the end?

Yeah, I want to answer all the questions at the end, because when I do things in parallel I make mistakes, as you can see from the screen now.

Sure, makes sense. Definitely, I'll do that.
I'll record all the questions and I'll share them.

Okay, thank you. So, here we go: restarting the kind cluster. Then, second, loading the images. Okay, loading our Fabric tools. To start with, you have to clone the performance sandbox in Hyperledger Labs from GitHub, right? So you need to get the code from GitHub, and be careful, as there are some submodules there; you need to also check out the submodule, the kube-prometheus operator. Okay. Yeah, so the images are loading. Thanks.

So what kind of a cluster do we need? Like, how much compute is required for this cluster?

Currently we use Kubernetes in Docker, kind, and my personal MacBook supports it just fine. For Kubernetes, you can bring your own devices and setup. And I can show you the CI running here: as a GitHub Action, it's able to run all the steps on GitHub's runners. So I suppose it's not so heavy with the current network scope: basically a two-organization network, run easily with the operators, plus a performance test of about 500 transactions. Okay.

Okay, so all these steps are, you know, listed in the performance sandbox in the runs, is it?

Pardon?

These steps, you know, to set up — just in the meanwhile, while the network comes up, everything comes up, we wanted to check.

Okay. Great. I think we've made progress. What I'm doing is the same, or a similar way, as the basic CI steps with Tape here; you can also check the steps from the CI there. So currently, the previous mistaken run is skipped this way, and we start with Jaeger.
There were some errors there, because this phase builds up a namespace for monitoring and deploys the observability things, and the previous run had already created that namespace. That was my fault in the first round, which I'm redoing here. After this, from here, we can see it's going to deploy Prometheus, Grafana, and Jaeger, with the Jaeger all-in-one sample. It starts an observability namespace, with the metrics this way and the distributed tracing here. Then it port-forwards some ports so that you can access them; here, after the port forwarding, we have Grafana and Jaeger at their URLs. Next, I'm going to import some dashboards into Grafana from the dashboard folder. Here is the dashboard with tracing, a common one, and one more for Tape. Okay, we have some dashboards imported, and we have Jaeger ready, waiting for some service to send spans as an application. Then let's move on to deploy Fabric. Let's start; hope it works well. It should go the same, as we just completed those steps. So next we build up a Fabric network from the script there and do some enrollments, etc., to create the MSPs, here for the orderers. It's a long script to get here; the channel phase is next. Wait a minute. So, yeah, here we're creating the orderers on my local machine, and here the peers with CouchDB. Orderers, then peers, then waiting for ready. The next step is to create the channel. After the channel is created, we will deploy some chaincode and start the test. So, correct me if I'm wrong — yeah, we can already see some metrics being collected, like the Fabric version for the peers, with the pod name here. We don't have a lot, because we haven't created the channel yet. And here we've completed the Fabric network.
So we can see the Fabric network running here, with the orderers, with a peer with CouchDB, and a peer from the other organization. And here we go for the channel. Okay, we can see the channel being created. So, yeah, at the dashboard we see the channel and block information starting to appear. This is the orderer, the consensus; its information is coming up here. And in the common dashboards, yeah, we start to have this channel, and the block height at the first block of the channel. Some stream durations and connections are happening. Looks good. Let's move on to deploy one single chaincode sample. Okay, the chaincode is deployed; let's do a single query, and a single invoke to create a basic asset, Tom. Okay, back to the dashboard; let's refresh it, and we can see the new block was created. You can see some peers and orderers have the new block, and some not yet. Everything looks good. We can see some transaction latencies and the different breakdowns: chaincode execution time, endorsement phases, chaincode CouchDB time, etc. Next, we'll start Tape for a 500-transaction test. And here we go, Tape starts, and yeah, we can see from here: okay, all the transactions succeeded, completed in three seconds. So we're going to move over here to Jaeger. And yeah, here we have Tape; good to see. Okay, it's not fully ready in Jaeger yet, it's loading slowly. For example, with this, we try to figure out which transactions take the longest steps. So currently, we're just watching a single peer node here. If you want to monitor two or more, you can add the peers here, or here. That would be the answer to the earlier question about latency increasing with the number of peers, or the committing peers' throughput.
With a new peer added here, you will see a new peer-level span here, so you can see the difference in times. Jaeger's find takes some time, so let's switch back here. Thanks. Latency: yeah, you can also try, from here, to see the differences. Or you can try the throughput from the block height rates there, because here we list some peers, so you can just select a couple of peers, for example, to see the speed and the rates there. So that covers latency and throughput. Okay, let's see the questions.

In terms of questions, there was only one question that was put up afterwards: you mentioned blockchain work is a function of transaction throughput and network size. So, were there any test metrics to baseline this?

I think you can, for example, refer to here: just check the ledger blockchain height metrics here. Let's see — yeah, it took me some effort to find out which lines are those metrics on my local setup. So, when you consider things with the committing nodes, you can see here we have lines representing the blockchain height for each node. So for a given time duration, you take a start and a stop here, count how many blocks were confirmed and committed, and divide by the duration, something like that. So far, the basic capability of the sandbox is just to display all the Fabric metrics in the dashboard there; you can base your own calculations on that.

One more question has come up, which we have shared. So there are no more questions here. If anyone has any questions, please feel free; now is the time to ask. "What is the green area?"

Some nodes, maybe. Oh, here, it just depends; you can see each line represents a component in Fabric.
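The back-of-the-envelope calculation described in that answer — sample a committing peer's block-height line at a start and a stop, then divide by the duration — can be sketched like this. The numbers are made up; in the real dashboard the source would be Fabric's `ledger_blockchain_height` metric:

```python
def commit_throughput(height_start: int, height_stop: int,
                      avg_txs_per_block: float, seconds: float) -> float:
    # Blocks committed between the two samples, times the average number of
    # transactions per block, divided by the window length, gives an
    # estimated committed-transaction rate (tx/s) for that peer.
    return (height_stop - height_start) * avg_txs_per_block / seconds
```

For example, a peer moving from height 10 to height 60 over 5 seconds, at roughly 10 transactions per block, commits about 100 tx/s.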
Because currently this is just a screenshot, I don't know which particular node or Docker container it represents. Okay. And here I can show you a sample with Tape. For example, if you test with, let's say, two peers, your latency will look like this: for this query here, if you query a single peer, it starts here and ends here; but if we do the endorsement with two peers, it takes a bit longer, with some network duration, etc. It all depends. Currently we don't provide an all-in-one solution, so you need to do the calculation yourself, based on the distributed tracing or the metrics in Grafana, and you're welcome to contribute your metrics back. Okay.

Great, awesome. Loved the session. If anyone else has any questions... Okay. So, looks like we're all good. Shall I stop the recording? I suppose if everyone is good, we can stop the recording, as there are no further questions. Have a good rest of your evening, and a great weekend.