Let's take just a couple more seconds. Yeah, take your time. All right, you're live on YouTube. You can take it away. I'll drop the link in the chat as well. Thank you, David. All right, then. Hello, good day, good morning, good afternoon, everyone. Welcome to the Hyperledger monthly event. My name is Laszlo Sandor, and I'm one of the co-organizers of this group. Today we have an excellent speaker and a very interesting topic, which is about Hyperledger Fabric scaling in a cross-cloud environment. Today's speaker is Zlatan Georgievich. He is a co-founder of a company called Senofi, where his main focus is product management, product development, and the development of consortia. Even before he co-founded Senofi, he had been in blockchain and had experience implementing Hyperledger Fabric networks in multiple industries like healthcare, insurance, and so on. He brings 20 years of enterprise software engineering experience. Personally, I met him several years ago, when he was with his previous company and they ran a Hyperledger Fabric training in Toronto, and I liked it a lot. At that time many companies had tried to offer courses on blockchain and Hyperledger Fabric, but I found his material and his classes very useful, and I used them for a long time after. So before I hand it over to Zlatan to take it away, I'd like to make a very quick announcement, and I hope you will like it. I'm just preparing here for a sec. So, in the second part of the... hello, if everyone except Zlatan could mute your microphone, that would be great. So, today I'm going to do a draw, and I have a prize for you, which I'm going to show you: a real, original, unused, unpacked pair of IBM Blockchain socks. One size, made in the USA. So if not for anything else, for that alone it's a very unique gift. And it says IBM Blockchain.
Unfortunately, this time you can't truly show off with them in person, not even at the Global Forum, which, as David mentioned, will be in June and will be virtual. So if you would like to have these socks which I just showed you, send a question either in the Zoom chat or on YouTube; I will keep an eye out for them and mention your city. At the end, I will draw someone, and we will figure out how I can mail this to you. Without any further ado, I'd like to invite Zlatan on stage. Please take it away.

Thank you so much. Thanks for the nice words. And yeah, let's dive into it. You said a lot about me already; thanks again for the nice words. Let me see how I can flip the slides. Just a bit more on what I'm doing professionally: we are focused on Hyperledger Fabric as our technology stack, and our main product, apart from the services we provide for projects, is Consortia.io. It is a software-as-a-service, you can think about it like that, to accelerate the management and deployment of Hyperledger Fabric networks and applications. We fill a gap that has been there since the beginning: managing those networks across clouds and across the business partners that are in the consortium. That is a very essential part, because it brings a lot of complexity, and even though the steps to do that management are pretty well described and clear from a documentation and community point of view, they're all manual. So that's why we built Consortia: to automate all of those steps and make it easier for end users to manage Hyperledger Fabric based networks and applications. So, let's now move to the agenda. Why scalability? Scalability is an extremely important topic, and not just for Hyperledger Fabric, of course, or for blockchain in general. We know scalability is a big topic for public blockchains too, and actually it's a valid concern for any software. No, it's not simple.
It's even more complex when you put it in the context of distributed technologies like Hyperledger Fabric. So in today's session I will be focusing on Hyperledger Fabric in particular, but keep in mind that the concepts I'll be talking about are valid for almost any ledger out there, because this is in the context of private and permissioned blockchains, not public blockchains like Ethereum or Bitcoin. I just want to draw that line: the way you manage scalability on public networks is extremely different, and keep in mind that permissioned, private networks are more or less networks that a particular set of partners have control over. So, that's what we will be talking about today. The agenda: first, I'll quickly talk about what scalability is in general, just to set the context right, so you know what areas will be discussed. Then I will focus on what scalability means for distributed ledger technologies and Hyperledger Fabric in particular. It's not just about software scalability in general; it's more about the most essential topics and constraints around scaling private and permissioned networks. And of course we will talk about how to do that across clouds. You can do it on one cloud, of course, but keep in mind those networks are heavily decentralized and distributed, so it is very realistic to expect that they will run on different clouds, or different infrastructure; that's why the cross-cloud topic is essential. And I'll leave 25 minutes, hopefully, for questions and answers. I want to take probably half an hour to go through the content and then have an open discussion, make it more interesting, answer questions, and have a brainstorm on it. So, in general, what is scalability? It is a very broad topic, to be honest. I'm not claiming I'll cover everything.
Don't get me wrong, it's extremely technical and complex; there is a lot of theory behind it. I just put a couple of dimensions here that are essential. First, administrative scalability: that's about increasing the number of organizations or users that access the system. That is handled very well by Hyperledger Fabric, because it's in the nature of the technology, in terms of scaling your channels by adding more organizations and more peers on the channel; this is given more or less out of the box. Then you have, of course, functional scalability. In short, it is about enhancing the system by adding new functionality. Again, very well supported in the context of Hyperledger Fabric. It's not just about chaincodes; I'm talking even about the so-called plugins, where you can adjust the network behavior at the peer level by implementing system chaincodes, for example, to change the default behavior of how transactions are verified when they're committed by the peers. Geographic scalability, again, is a very essential part. We're talking about heavily decentralized, distributed systems, so having your peers running in different areas, even if on the same cloud provider, and reducing risk by distributing the nodes into different geographical locations, is essential for high availability. And of course load scalability: think of it, in layman's terms, as scaling your system up or down depending on the load you have at a particular point in time. This is extremely complex in Hyperledger Fabric if you look at the network as a whole, and we'll go through those things in a minute or two. And of course horizontal and vertical scaling; I think those are the most talked about.
In the industry, horizontal scaling essentially translates technically into adding new peers to the network to handle a higher volume of users. And this is a very interesting thing in Hyperledger Fabric, because it can be handled separately by every organization that has joined the network. Vertical scaling, again, is handled by every organization individually, and it is about adding new computational resources, for example increasing the number of CPUs. And here, of course, you could do dynamic scaling, which is not simple: scaling up or down depending on the load by dynamically adding CPUs, memory, or storage. Before we start talking about those concepts in the context of Hyperledger Fabric, let's talk about what a Hyperledger Fabric deployment usually looks like. First of all, I just want to mention that with distributed and decentralized networks, the topology is not a one-time thing; it is not set in stone. Those networks are designed, and supposed, to grow in size: some business partners will join the network, some may leave, new peers will be added, peers will be removed. So the deployment that you usually start with changes significantly over time. The first thing you usually do is organize your infrastructure: think of the machines or computers that are going to run your nodes, your peers and orderers, and all of that. This is where you decide what infrastructure to use, and usually people go with different cloud providers, or maybe the same cloud provider in different geographic locations. The reason why that's important has already been explained.
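As a concrete illustration of vertical scaling within a single organization, here is a sketch of a Kubernetes manifest fragment raising the CPU and memory limits of one peer. The deployment and container names are hypothetical, and each partner would apply something like this only to their own nodes:

```yaml
# Illustrative only: vertically scaling one organization's peer on
# Kubernetes by raising its resource limits. Names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: org1-peer0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: org1-peer0
  template:
    metadata:
      labels:
        app: org1-peer0
    spec:
      containers:
        - name: peer
          image: hyperledger/fabric-peer:2.2
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
            limits:
              cpu: "4"      # raised to absorb a load peak
              memory: 8Gi
```

Dynamic scaling would mean adjusting these limits (or adding peers) automatically in response to load, which, as noted above, is considerably harder to do safely.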
Then you deploy your organizations and nodes. From a Fabric perspective it is a bit different, because an organization in Fabric means a membership service provider (MSP), while an organization from a business point of view may be a real corporation or company. So one company or corporation may of course have multiple membership service providers, which from a Fabric perspective are different organizations. So you deploy your nodes, and you deploy your applications; those are usually the chaincodes. And probably you're going to want to expose that functionality through some web APIs, and expose some UIs to your consumers. Those consumers could be your clients, or maybe internal clients of your solution, depending on the business case. And the last thing, which is extremely important and complex to manage, is the network governance. There have been efforts to streamline the governance process across providers, but so far it is something that is out of scope from a Hyperledger Fabric perspective. So, what is scaling about for Hyperledger Fabric networks? First is scaling the consortium. Here it is not about adding new nodes, I want to make a note of that; it is about adding new partners that come with their own nodes. The partners that join your consortium usually should be bringing their own nodes. And what is important to consider is that onboarding business partners onto a consortium could actually be onboarding onto a preset network. What I mean by that is that those business partners may not necessarily bring nodes with them; it could be a process where someone, or the consortium, provisions nodes for those partners and onboards them.
The other option, of course, is when the partners that are joining already run nodes, and they just want to add their nodes onto the network and start taking part in the transactions. I have a nice picture, which I hope you're going to like, coming up in a little while to make this more visual, because I want to put it in a context. In my case it's a supply chain context: very easy to explain, very intuitive to understand. So, scaling your nodes, orderers and peers: as I mentioned, vertical and horizontal scaling. What is tricky here is that when the business partners in a consortium host and manage their own peers, orderers, and nodes, you don't have central control of the scaling mechanisms. In most cases (in some you do, but usually you don't) there is no central party managing the nodes of every participant on the network. So when we're talking about scaling, you have to think of it, in most cases, as steps that need to be taken by every participant on the network individually. As soon as you put the whole network with a single cloud provider, on a single Kubernetes cluster, to streamline and simplify the scaling, you're essentially centralizing everything. In some cases that might make sense, but in many cases it defeats the purpose of using distributed ledger technology, because even though the nodes are distributed, you're still centralizing them on the same infrastructure. And vertical scaling from a Hyperledger Fabric perspective is essentially adding more computational resources; nothing strange here, it's regular vertical scaling for the purpose of handling more load at a particular point in time.
Keep in mind that transaction endorsement in Hyperledger Fabric may require endorsements by different participants on the network, so the overall transaction throughput and handling of the load is going to depend on the performance of peers managed by different partners. Let's take an example. Say two partners are hosting peers, and one of the partners is able to add, let's say, new CPUs on their infrastructure to handle the transaction load during lunchtime, because their users do most of their daily transactions during lunchtime, and they want to keep the system responsive. Now let's assume as well that every transaction is endorsed by the peers of both partners: for a transaction to be processed, it needs to be endorsed by the peers of the two partners. If one of the partners scales their nodes up vertically to speed up transaction endorsement, but the other partner doesn't do the same, of course that will reflect on the overall time to commit the transaction. So they need to work together and measure that performance as a whole if they want to have a positive impact on transaction throughput. What I mean to say is that it's unfortunately extremely hard to synchronize those efforts in reality. Let's be realistic: imagine a network of 10 partners; how are you going to synchronize the efforts to scale up and increase transaction processing? That's why it's very important that those efforts are planned and agreed between the partners. I've actually seen many cases where the businesses let those efforts be managed, or performed, by skilled engineers who know what they're doing.
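The lunchtime example above can be sketched in a few lines of Python. The timings are purely illustrative, not Fabric benchmarks; the point is only that when a policy requires endorsements from several partners, the round is gated by the slowest one:

```python
# Sketch: why one partner scaling up does not raise overall throughput
# when the endorsement policy requires signatures from several partners.
# All timings below are illustrative, not Fabric measurements.

def endorsement_latency_ms(endorser_latencies: dict) -> float:
    """The client must collect every required endorsement, so the
    endorsement round is gated by the slowest required peer."""
    return max(endorser_latencies.values())

before = {"PartnerA": 120.0, "PartnerB": 300.0}  # ms per endorsement
after = {"PartnerA": 40.0, "PartnerB": 300.0}    # A scaled up; B unchanged

print(endorsement_latency_ms(before))  # 300.0
print(endorsement_latency_ms(after))   # 300.0 -- B is still the bottleneck
```

Partner A tripling its capacity changes nothing end to end until Partner B scales as well, which is exactly why the efforts have to be coordinated.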
But if you're a small partner, let's say a small business without even an IT department, how are you going to do that? So, essentially, when you scale those permissioned networks across multiple partners, the more partners you onboard onto the network, the more complex it becomes to handle scalability properly. When people talk about transaction throughput and say, hey, how many transactions can Hyperledger Fabric, or whatever technology X, process on chain, the measurements they come up with are usually taken from a test system where everything runs in one box. That's the best-case scenario, but it's not realistic. So when you actually try to scale, think about those things. I would say the transaction throughput is a function of the performance of the peers, the nodes, of all participants on the network. You cannot expect that one of the partners will scale up their peers and the transaction throughput will fly through the roof; that usually won't happen. Of course, it all depends on the endorsement policies. There are cases where endorsement is required from just a single partner, and in those cases a transaction can be endorsed by a node that's scaled properly; then we are talking about a single party controlling the transaction throughput, which is much easier to manage. Okay, so this is a picture of how we add a new partner to the network. In my example, as you see, I have in the circle an existing consortium of a processor, a farmer, and a retailer.
Think of it as these companies tracking lettuce on the network, from the farmer to the retailer's shelves, and in this case they want to scale their network by adding the regulator, so the regulator can jump in with their nodes and be part of real-time transaction processing. Keep in mind that adding the regulator to the network doesn't necessarily mean you're going to have lower transaction throughput, because the regulator may not endorse transactions; it could be just a committer or a reader. So here we were talking about scaling your network not in terms of adding peers but in terms of adding a new participant to the consortium. Now, about the supply chain example: network resilience in supply chain is generally a very tricky topic. Why? Because there are many businesses out there that use many suppliers or vendors, and they may not even understand what's going on further down the chain. They may be working with a vendor, and that vendor may work with contractors, smaller companies, smaller vendors, and what not. So when you're talking about creating a network between the suppliers of your supply chain, and having your suppliers or vendors be part of the network, you should expect that this will happen across different cloud providers. Think of a company A that works with company B, and company B works with company C. All of those companies may have different relationships between them; they may not even know that they work with each other. So the onboarding and deployment of their Hyperledger Fabric peers (assuming they use Hyperledger Fabric, of course) will be driven and handled by each company individually. You cannot expect that they will all bring their nodes to the same cloud provider.
What you need to achieve with multi-cloud deployments is decentralization of the infrastructure, so you have high availability. The same goes for geographical distribution. If a zone goes offline, you don't want to risk too much by hosting all your peers in the same data center, because in that case, even if you have 10 peers, if the data center goes down you're going to lose all of them. That's why you want to distribute them geographically. And that's not the only reason: think about international consortia. You may have partners working together across continents, one in North America, one in Asia, and they may have their peers pretty far apart. Here we are talking about network latency. When they talk to each other, depending on where the client performing the transaction is, the network latency is going to play a big role in your transaction throughput. That's why different companies, even though they might be located in Asia, may have to provision peers, not just on different cloud providers, or maybe the same cloud provider, but in different data centers, in different regions. Because if a transaction runs from North America and the company in Asia has peers in North America, the network latency will be lower and therefore the transaction will go much faster. So it is a very sensitive thing, how to distribute your nodes properly for your particular network or consortium to have the best transaction throughput and network availability. Another thing to consider, of course, is partner peer availability. As I mentioned before, you may have use cases where multiple partners need to endorse transactions.
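The geography argument can be sketched numerically. The round-trip times and region names below are made up for illustration (Fabric clients choose which peers to send endorsement requests to, so here we assume the client picks the partner's closest peer):

```python
# Sketch: effect of geographic peer placement on endorsement latency.
# Region names and round-trip times are illustrative, not measurements.

RTT_MS = {  # (client_region, peer_region) -> round-trip time in ms
    ("us-east", "us-east"): 10,
    ("us-east", "eu-west"): 80,
    ("us-east", "ap-east"): 220,
}

def best_endorsement_rtt(client_region, partner_peer_regions):
    """Assume the client sends its endorsement request to the
    partner's closest available peer."""
    return min(RTT_MS[(client_region, r)] for r in partner_peer_regions)

# An Asian partner with peers only in Asia, vs. an extra peer in the US:
print(best_endorsement_rtt("us-east", ["ap-east"]))             # 220
print(best_endorsement_rtt("us-east", ["ap-east", "us-east"]))  # 10
```

Provisioning one extra peer near the clients collapses the endorsement round trip, which is the point being made about the Asian company placing peers in North America.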
If one of those partners, one of those endorsers, doesn't set their nodes up properly, say they decide to go with a single node and that node goes down, then none of the transactions will go through. So it is very important to have the proper infrastructure and setup, not only within the context of a single partner, but across partners. Okay, so what might a single participant's network look like? Here I have three data centers, and as you see, they provision different nodes in different data centers. The nodes communicate with each other over the internet, which is a good thing, so you can not only provision them in different data centers but also use different technologies to manage the network and the nodes: you may use Kubernetes or Docker, or just run them on physical machines. So it is essentially one network, but built and running on top of different infrastructures, in different geographic locations. And here is how a network of a consortium of different partners looks. They may be part of the same channel, and each of their nodes may be running in a different bubble; here I mentioned Docker, but it could be any variation or combination of running the peers and nodes on different infrastructures and technologies. Of course, when a transaction goes through, as I mentioned before, it is important that every one of the endorsers of the transaction has their nodes available and their availability managed properly. Because if one goes down, and endorsement is required from all of them (it all depends on the endorsement policy you're using; that's a very important thing), then it will affect the availability of transactions as a whole on the tracking channel.
There are cases, though, where you may want to have a single endorser for the transaction, while the others are just committers, or just read the output of the transactions, the data that has been recorded on the ledger, put it that way; they won't actively participate in the endorsement process. By having the endorsement done by one particular partner, the whole transaction throughput will mainly depend on the speed and processing power of that particular participant's nodes. So it's extremely important to define your endorsement policy. There might be cases where the endorsement policy is driven by the business, because the fewer partners you have endorsing the transaction, the higher the risk of completely losing the capability of processing transactions in case that partner decides to leave, without telling you. That's why you may want to go with an endorsement policy that requires endorsement by a certain number or percentage of the participants on the channel, instead of naming them specifically: you could ask for a majority, or for a subset of the participants on the channel, to endorse the transaction. And of course, if you're running transactions with good throughput and tomorrow you decide to change the endorsement policy and add another endorsing partner, and that partner does not fit the bill, endorsing properly or at the right speed, that may have a side effect on the general throughput of your transactions on the network, on that channel in particular. Okay. So that is the last slide. I think I fit within the 30-minute timeline, so I would like to open it up for questions, if there are any.
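For reference, Fabric's endorsement policy language can express each of the options described above directly. The MSP names below are hypothetical, borrowed from the talk's supply chain example:

```
AND('FarmerMSP.peer', 'ProcessorMSP.peer')
    # both orgs must endorse; the slower org gates latency

OR('FarmerMSP.peer', 'ProcessorMSP.peer')
    # any one org is enough; a single well-scaled org controls throughput

OutOf(2, 'FarmerMSP.peer', 'ProcessorMSP.peer', 'RetailerMSP.peer')
    # any 2 of 3 -- the "subset/percentage" style policy mentioned above
```

In Fabric 2.x the default chaincode endorsement policy is the implicit "majority of channel members" rule, which adjusts automatically as organizations join or leave, avoiding the brittleness of naming specific partners.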
I don't know, Laszlo, if we have something in the chat. I see we do. Yes, we have RGS asking about a peer in an Azure data center. Okay, so wait, what is the question? Let me just read it aloud: "I think IBM is going to integrate a Hyperledger Fabric network in its Cloud Foundry to provide a global secure network. Am I right?"

Well, let me be honest: "Hyperledger" is a bit misleading here, and I want to keep repeating this every time. IBM is not Hyperledger and Hyperledger is not IBM; they're different. Hyperledger is an open-source community, part of the Linux Foundation; it's Linux, essentially. So even though a partner might be part of the IBM blockchain network, I don't think businesses have to be limited or constrained by the vendor of their nodes in their decision about which businesses to work with. Many of the businesses out there use different cloud providers and different infrastructures. What I'm saying is that the communication between those nodes works regardless of which cloud they run on, be it IBM's Hyperledger Fabric offering, whatever it's called, or Azure's Blockchain as a Service with Hyperledger Fabric, or AWS's Hyperledger Fabric as a service. It doesn't matter; those cloud providers are not supposed to constrain a network or a consortium that is combining nodes run on different cloud providers. It's counterintuitive from their business point of view; they don't want you to do that, so they want your partners to run their nodes on your cloud provider. The reason, of course, is revenue. But I don't think that makes much sense, because you cannot ask a single partner to provision nodes on 10 cloud providers just because each consortium can run only on a particular cloud provider. That doesn't make sense, because it's so costly.
Forget about justifying running an application based on such a network; you cannot invest millions of dollars in maintaining nodes on different cloud providers just because you're constrained. Thank you.

Let me ask a question from the YouTube chat. Ali Reza was asking: do you have any kind of Helm chart or sample peer deployment publicly available? That's what Dan G was also trying to find out.

Yeah, right. So for Hyperledger Fabric deployment, network deployment and peer deployment, there are a couple of projects under Hyperledger already; there is one under Hyperledger Labs. It's maintained by the community; it's a set of Helm charts, and it's not just supporting Hyperledger Fabric, it also supports most of the other blockchain networks, even Ethereum. So I would go and check that open-source project first, if you want to do it manually, because those are Helm charts and they require extensive knowledge of Kubernetes to use. I hope I answered that one. For the orderers, yes, the ordering service, I didn't talk about it, so let me say a few words. We know that with 2.0 and up, the Hyperledger Fabric community recommends running an ordering service based on Raft. They documented well what that means, because you cannot scale Raft indefinitely; there are limitations in terms of what makes sense. Don't get me wrong, "limitation" here really means the recommended setup of Raft; of course you could try to bring up, say, a hundred orderers to serve a particular channel, but that doesn't make much sense. There is usually a bit of misunderstanding here.
When I talk to different people: Hyperledger Fabric is channel-based, so we're not talking about a single ledger that is used by everyone in the consortium; we're talking about channels, and each channel is a different instance, a different ledger. Since 2.3 they changed the ordering service so that every orderer may serve different channels, and different channels can be served by different orderers, so you can add or remove orderers that are part of a particular channel. That gives you the flexibility to manage an ordering node that orders transactions for different channels, different ledgers, that may not necessarily know about each other. Let me just continue what I was about to say a minute ago: there is a limit to the efficiency of the ordering service based on how many ordering nodes you run. I had it somewhere, let me look it up, but I believe they recommend, for a single channel, keep that in mind, having up to about 11 nodes for Raft to work. Now, one orderer can be part of different Raft clusters: think of a Raft cluster serving a channel; one orderer can be part of several such clusters. The community recommends up to 11, to maintain a quorum, of course, for the Raft protocol to work. Adding more nodes unnecessarily brings complexity and doesn't add any value. So, the ordering service: keep it small and slim, and distribute it between a few partners if you want, just for the sake of reducing the risk of one of the partners going offline and leaving your channel unserved. But essentially it should be kept to a handful of nodes.

"How do you monitor the availability of the partners and the overall Hyperledger network status?"
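The "keep the ordering service small" advice comes down to Raft quorum arithmetic, which is easy to sketch: with n ordering nodes, a majority of them must be reachable for the channel to order transactions, so extra nodes buy surprisingly little fault tolerance.

```python
# Sketch: Raft quorum arithmetic behind the "keep the ordering service
# small and slim" advice. With n ordering nodes, a majority quorum of
# n//2 + 1 must be reachable, so the cluster survives (n-1)//2 crashes.

def quorum(n: int) -> int:
    return n // 2 + 1

def crash_tolerance(n: int) -> int:
    return (n - 1) // 2

for n in (1, 3, 5, 7, 11):
    print(f"{n} orderers: quorum {quorum(n)}, tolerates {crash_tolerance(n)} down")
# Going from 5 to 7 nodes buys only one extra tolerated failure while
# adding replication traffic for every block -- hence: keep it slim.
```

Three nodes already tolerate one failure and five tolerate two, which is why a handful of orderers spread across a few partners is usually enough.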
That's a question from Farhad. So, essentially, think about every partner (that's the way I prefer to think about it) as an endpoint, or a black box. It's not about the number of peers on the network; it's about the number of endorsers on the network. Those endorsers are technically peers, but at an MSP or partner level. And it's a permissioned, private network, don't forget that. So if you want to know the metrics of a particular node, for example how many transactions that particular node has processed, there is a metrics endpoint for each peer, and you can use Prometheus, for example, to collect the metrics of the peer. That's the only thing that comes out of the community. What I mean to say is that every partner may set up the metrics for their peers and have a holistic view of what's going on on their own peers. They may not necessarily know the metrics of their partners' peers. What they do know, through service discovery, is the topology of the network. But they don't know, for example, what their partners' peers are running on, how many CPUs, what infrastructure, or how loaded they've been, because those peers may be serving other channels that you're not part of. So you have a view of the topology through service discovery, technically speaking, and you have the metrics endpoint on each node that you can use to monitor your own peers.

"If the infrastructure is geographically located in different places where latency is an issue or a constraint, are there any measurements or suggested KPIs beyond which the network can't support the transactions going through, or is it just the way the network is configured?"
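The metrics endpoint mentioned here is Fabric's operations service, which is enabled in the peer's core.yaml. A minimal fragment looks roughly like this (the listen address is just an example; each organization chooses its own):

```yaml
# Fragment of a peer's core.yaml: enable the operations service and
# expose metrics in Prometheus format on its /metrics path.
operations:
  listenAddress: 127.0.0.1:9443   # example address; pick your own
  tls:
    enabled: false                # enable TLS outside of local testing
metrics:
  provider: prometheus            # alternatives: statsd, disabled
```

A Prometheus server can then scrape each peer's operations address, but, as noted above, only for the peers your own organization runs; partners' peers expose their metrics only to their own operators.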
I think that question goes along with what another person asked, too. It is all about how the network is configured. You can achieve high transaction throughput; the real challenge is synchronizing those efforts between the partners on the network, and you may not always have the chance to do that. When you accept partners into your network, you have to ask yourself how important transaction throughput is for you: the more distributed the nodes and partners, the harder it is to achieve high throughput. It's a trade-off that shifts as the network grows.

But I think it also goes back to designing and planning the network — what sort of data you want to store on it — because the business might not need to store every single transaction; maybe just a mirror of the transactions and some of the collection data on the ledger. So it goes back to the planning and design you start with. And as I mentioned already, even though a business might be located in North America, for example, they may provision their nodes in two different geographic locations to manage that latency issue. When you endorse a transaction, you want to keep your endorsing peers close to each other, and close to the client, because the client drives endorsement. If you do endorsement with a client that runs in the USA while the peer is located in China, say, then of course you can expect some slowness. But if the company in China provisions peers in the US and the client is in the US, it will be much, much faster. It's all about endorsement — I'm not talking about committing the transaction on different peers. As soon as you have, say, ten peers geographically distributed,
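One way to act on the "keep endorsing peers close to the client" advice is to measure latency from the client side and prefer the nearest endorsers. A minimal sketch — in a real application the peer addresses would come from service discovery, and the TCP connect time used here is only a crude stand-in for endorsement latency:

```python
# Sketch: rank candidate endorsing peers by measured network latency.
import socket
import time

def measure_rtt(host: str, port: int, timeout: float = 2.0) -> float:
    """TCP connect time as a rough round-trip proxy; inf if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def closest_peers(candidates, k: int = 2):
    """Return the k (host, port) pairs with the smallest measured RTT."""
    return sorted(candidates, key=lambda hp: measure_rtt(*hp))[:k]
```

A client could run this once at startup and send endorsement proposals to the peers it ranks first, falling back to the rest if those fail.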
you are really interested in making endorsement fast so you can quickly send the transaction to the ordering service. Your committing peers can be distributed, and the commit phase is asynchronous anyway, so it can happen later; some peers might briefly be out of date, but they catch up relatively quickly after the transaction is done.

Someone in the channel — I think referring to the diagram you have up right now, showing different participants of a Fabric network — asks specifically about small businesses. I can imagine farmers here, although farmers are not always small businesses. How do you see small businesses with very low-tech infrastructure or limited resources joining a Hyperledger network? I think the question is: how easy or difficult is it, from a technical point of view, to join an existing network?

Yeah — and that's what my company is working on. As of now, the cost to onboard, to provision the network and the peers, is high, and it's not determined so much by the complexity as by the skills of the people who perform the tasks. It's highly technical, let's be honest. You go and take a Helm chart, for example — many people don't even know what Helm is, or have never heard of Kubernetes. You cannot assume that every business partner in a supply chain, or every farmer, will have an IT department that happens to know about Kubernetes. So it must be easy, and until that gap is closed by easy-to-use software — self-managed, or managed by a software vendor at low cost — you cannot expect all of those small partners to justify being part of a network. It just doesn't make sense for them. It's about money, right? If they don't get back what they invest — and some of them simply don't care.
So reducing the cost and making it user-friendly and easy is essential for the success of the whole technology. That's how I see it, and that's what we're doing at my company: trying to reduce that cost.

That's great, thank you. But essentially, as of now — in the short term — most of the businesses that join a network will go with a service provider or software vendor, right? Maybe small companies too; it depends on what they can afford. Do-it-yourself is still a few years away, until everything is simplified to the point where a person with regular IT experience can do it by themselves, and running and setting up the network is automated to a degree.

Right. It will come — I'm pretty sure it's not just us working on it; the community is also aware, so it will just take time. But I think it will never be like Ethereum or Bitcoin, where you download a dump, start up your node, and start making transactions on the network. One of the essential benefits of a permissioned network is that you create a consortium, you create the collaboration, you create rules on the network: how do you transact, what are the rules, how do you change them, how do you create trust between different entities? So I think there will always be a level of customization that is needed or required.

I think there's time for one or two more questions. One is from Maria Munaro: what impact does the number of channels and the use of private data have on scalability?

The number of channels — as I explained, since 2.3 this has changed. When you have a consortium, the whole point was how many channels you may have between a particular set of organizations.
The more channels you have, the more load you bring on the ordering service. That was the key design issue — or not even a design decision, more of an implementation detail. Since 2.3, that's what changed: an ordering node can now serve any number of channels, so you may have the same orderers or different orderers serving different channels. You don't need one ordering service to serve thousands of channels; you can break them down so that a particular set of orderers — up to 9 or 11, as I explained, to maintain a well-performing Raft cluster — serves a particular set of channels. I think the guideline is — let me cross-check that — around 50 channels per ordering node. If you need to grow beyond that, you bring up another ordering node to serve the rest of the channels. So that has been solved, more or less, since 2.3: you add ordering nodes to serve your additional channels. And if you're part of hundreds of channels, managing those channels is a challenge in itself — I would be more concerned about how you're going to manage the permissions and the deployments on those channels than about scaling the ordering service or adding an ordering node.

Okay, we're just one minute past the hour. Would you mind addressing one more question — do you have the time?

Yeah, I'm fine.

Okay, one more question then. While you answer it, I will make a random selection and draw today's precious prize — the original IBM blockchain socks — among everyone who entered a question. The question is from David: can you talk about adding MSP organizations to a network when they have their own certificate authority?

Well, every MSP has its own certificate authority — that's how it works. MSPs are logical entities defined, and backed up, by their certificates.
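The capacity arithmetic behind that answer is straightforward. A tiny sketch — the 50-channels-per-orderer figure is the speaker's recollection of a community guideline, not a hard limit, so treat it as a tunable assumption:

```python
# Sketch: how many ordering nodes for a given channel count,
# assuming a per-orderer channel guideline (the default of 50 is
# illustrative, not an official Fabric limit).
import math

def orderers_needed(num_channels: int, channels_per_orderer: int = 50) -> int:
    """Smallest number of ordering nodes covering all channels."""
    return math.ceil(num_channels / channels_per_orderer)

print(orderers_needed(120))  # 120 channels at 50/node -> 3 ordering nodes
```

This counts distinct ordering nodes, not Raft clusters; each channel would still be served by its own (possibly overlapping) Raft consenter set.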
So when you talk about a membership service provider, you're talking about a logical entity defined by its certificates. Which certificate authority sits behind it and issues those certificates is an open question — it could be any certificate authority. It could be a Fabric CA hosted and managed by the partner, or it could be their own existing certificate authority; I know of deployments where the company already had their own certificate authority and used it to define their MSP. The MSP, essentially, as I mentioned, is a set of certificates and an ID.

Thank you. And with that said, let's conclude this meeting. I'd like to announce that the lucky winner is [inaudible] — if you can send me a private message with your email address, or I can put my email address in the chat here and you can email me if you're interested in the socks, I'm more than happy to mail them to you. For the rest of you, I appreciate your time today, wherever you called in from — whether it was lunchtime for you on the East Coast or a morning session on the West Coast.

I always enjoy listening to you and the knowledge you share, now as in the past. I really appreciate your time and your contribution to this community. It's great to see this knowledge spread, and I'm sure a lot of people will leverage what you just shared with us. I wish all the best for your business as well. If there is anything else you would like to share, please feel free; otherwise I just want to say goodbye, and I will come back with one last note.

Thank you, Laszlo, and thank you for the opportunity to be part of this meetup — very well organized. Thanks for your time, thanks to everyone who joined the meeting, and I hope you got something out of it.
I'm pretty sure they did. And everyone, please remember: the Hyperledger Global Forum is going to be June 8 to 10. It will be virtual this year, so remember to register and participate — last year was amazing, with really good presentations and speakers, so I'm sure it will be the same this year, and you can join from the comfort of your home. With that said, goodbye everyone, have a good rest of the week, and keep an eye on our Hyperledger Toronto meetup group for announcements of future events. Thank you and have a wonderful day.

Thank you. Have a great day.