And now we are back on YouTube as well. We are live. Yes, we are live. Yeah, awesome. So I think we can get back to that. We were talking about network deployment, and you're going through some of the same networks that she was going through. We'll quickly bring our screen on and go through it. Sure. I think it should be visible by now. Yeah, we can see it. Maybe we can just jump into the other elements, keeping close to the time. Yeah, sure. So we were talking about the network we already deployed here. When you look at a network you have already deployed, you'll see the services in that network: your peer services, your CA services, your orderers, along with the region and the organization each service was deployed in. And after you have created your network, you'll want to use it with an application SDK or a smart contract. For that, you can download the artifacts that Zeeve generates for you. I'm sorry, I think your voice is sort of breaking. Maybe you can stop your video feed to help out. Is it better now? Yeah, I think so. It's better, yeah. So after you've created your network, you have your services. We can see the screen. Yeah, go ahead. Sorry about that. We've lost you again, Lakshay. Is it visible? Am I audible now? You keep breaking up in between. Can you hear me properly, Lakshay? Yeah, I can hear you properly. I think you can continue. Right, so after you've created your Zeeve network and your services are deployed, you'll want to download your crypto artifacts to connect your application SDKs with the network services you have deployed. For that, you can click the Download Crypto Artifacts button, which downloads the crypto artifacts that Zeeve created for you.
And in there, you'll see the cluster authorization data that Zeeve created just for you to access your Kubernetes cluster, all the channel artifacts that have been created, and all the identities and certificates Zeeve generated for you. You can also download your connection profile to help your application SDKs connect with the network services. Building that by hand requires TLS certificates, network URLs, and service URLs, which is a tedious task whether you're creating a dev, staging, or production network, so you save time and effort on these small activities. Other than this, Zeeve also helps you with analytics and monitoring, because we have analytics and monitoring enabled on our network; if I click that, it should be ready in a few minutes. We also have pipelines that have already run on this network. These are the jobs and pipelines that were used to successfully package a chaincode, install it, and then commit it onto my network. For any pipeline I have run, I just click this action button and I'll see the jobs that ran inside that pipeline, and if I click a job, I'll see the output produced by that particular job. You can imagine this works exactly like Jenkins and other CI/CD tools, which help you automate your application workloads; here it automates your smart contract and chaincode workloads. We'll see how we can run a pipeline on our own in a few minutes. This is Zeeve analytics, which helps you understand your cluster utilization in terms of different health aspects, for example the CPU and memory utilization of your application workloads and network services workloads.
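The downloaded connection profile is what an application SDK consumes to reach the network. A minimal sketch of reading one with only the standard library, assuming the standard Fabric connection-profile layout (the exact contents of the platform's export may differ, and the peer names here are illustrative):

```python
import json

# Illustrative connection-profile fragment in the standard Fabric schema.
# A real application would load the downloaded file instead:
#   profile = json.load(open("connection-profile.json"))
sample_profile = {
    "name": "demo-network",
    "peers": {
        "peer0.org1.example.com": {
            "url": "grpcs://peer0.org1.example.com:7051",
            "tlsCACerts": {"pem": "-----BEGIN CERTIFICATE-----\n...\n"},
        }
    },
}

def peer_endpoints(profile: dict) -> dict:
    """Map each peer name to its gRPC URL from a connection profile."""
    return {name: info["url"] for name, info in profile.get("peers", {}).items()}

print(peer_endpoints(sample_profile))
```

The SDK would pair each URL with the TLS CA certificate from the same entry when opening the gRPC connection.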
You get to see CPU utilization for all the workloads that have been deployed onto your cluster, and you can divide these stats namespace-wise or service-wise. And if you have created a production network, we also deploy a dedicated setup for you to analyze your logs and to configure alerts and notifications for fatal conditions on your services and the nodes running inside your cluster. All right, I think we can come to our CI/CD pipelines now. This is the exact network that I have opened in my browser UI. Let's say I want to see the services running inside this network, inside this Kubernetes cluster. I can check the pods running inside the cluster, and in parallel I can open up the Zeeve CLI for Fabric chaincodes. If I run that, I'll see the operations supported by the Zeeve CLI. So these are all the network services deployed inside my Kubernetes cluster. Behind it, you'll see the different blockchain namespaces created just for your organizations, and the reverse proxy we're using, an NGINX ingress. And if I want to run a quick pipeline on my network, I should be able to do that with the Zeeve CLI too. There are a few steps to follow before you come to the Zeeve pipeline section, so I'll show you a quick script that I've created for this purpose. You can imagine running your CI/CD pipelines as a shell script, inside a GitLab runner, a Jenkins job, or a GitHub workflow. I'll show you a GitHub workflow that we have created for this particular network already. The first step is to log in with the access token we created in the API connection section. So then, my first thing... I think your voice is...
Yeah, your voice is, I think, having issues. It's still breaking. So this is basically the tool you can plug in anywhere, whether it's Jenkins or GitHub workflows, which we'll just be going through in a bit. What it allows you to do is automate your chaincode deployments and leave the node interactions to the Zeeve platform, so you don't have to worry about them. You just work on creating the chaincode itself, the CI aspect of it, while we are the ones who interact with the nodes and make sure everything is done accordingly. This feature also helps a lot with other protocols like R3 Corda, where there's a certain procedure to all of this: you have to drain the node, run the migration, then bring it up again. Zeeve does all of that sensibly, per the standard given by the protocol. We'll quickly jump to our GitHub workflows to show you what kind of CI/CD setup we've got there. Actually, if you can jump to that screen. Yeah, thank you. Your voice is still off. What else is there in Zeeve: we support a lot of public protocols as well, apart from these permissioned protocols. You can deploy your own node on your own infrastructure, or use a managed cloud as a service. You can also create endpoints, so if you don't want a full node running for yourself, you can go ahead and create an endpoint, and we already support a lot of protocols for our endpoint services. Other than that, something worth mentioning about Fabric is that we are working on a lot of amazing features with some of our big enterprise customers, and because of their needs we are getting a lot of maturity into our feature set. One of those features is building your consortium with other accounts on the Zeeve platform, so that nobody single-handedly governs the network for you.
You have access to your own network; you hold the key to your own network. You share your network profiles with others so that other people can see the invite and get onboarded without having any knowledge of how Fabric works or how to write DevOps scripts for Fabric. Without all that knowledge, you can keep onboarding your parties with a simple authorization of their cloud account onto the platform, which is a very big thing, because onboarding is one of the toughest parts of blockchain adoption; it's one of the toughest things to solve as of now. So that's more or less the platform. I do see that we are very low on time. I'm sure Lakshay wanted to show a bunch more, but I guess you'll just have to wait for our next webinar for all those details, which will go into a lot more depth. We have an IPFS service as well that we're excited to show, but there are too many things to cover in one demonstration. So if there are any questions, we can go through them in the interest of time. Yeah, we do have some questions from our audience. And if you need more time, we can also prolong it a bit because we got disconnected, so it's absolutely fine if you want to do a couple of minutes more. Yeah. Maybe, Lakshay, if you can bring your screen back up, we can go through it; in the meantime, we can continue answering the questions. I think that's a good idea. Sure. So we have a question from Alex Solaru. Alex, would you like to come off mute and ask the question? Otherwise, I can just read it. He's asking: for persisted volumes in AWS, what storage do you use, EFS, EBS, et cetera? And is it possible to configure the number of IOPS? So yes, for volumes we usually use EFS, because it has better high availability in the case of AWS.
And yeah, you can manage a lot of things when you have EFS as the storage backing your network. Thank you, Sankalp. Another question from Alex: for CouchDB setups, are you creating one CouchDB node per peer or a cluster of multiple nodes? Is the CouchDB console available for each CouchDB cluster, and can indexes be set up for CouchDB? Yeah. So we are deploying one instance of CouchDB for every peer node, again for the purpose of more decentralization, as well as ownership of the nodes, which really has to be kept in mind when you are creating multi-account deployments. You can enable your own indexes, you can interact with the nodes, and you can run your chaincodes against CouchDB as well. All of that is somewhat of a standard; CouchDB is usually the preference, and we've got a lot of customers already using it as you described. Thank you. And now Lilith is asking: Lakshay downloaded some artifact zip file, where will that be used? So your applications on top of Fabric often need to create connections with the nodes in a secure way, and for that the certificates of those nodes and networks, and the connection profile describing how to interact with them, are quite important. That comes directly out of our platform, and you can consume it within your applications to enroll new users, do transactions, and whatever else you would do with a fully flexible network set up by yourself; you can do it with this as well. Thank you very much, Sankalp. And Mayank has now asked to come off mute and ask his question. Go ahead, Mayank. Mayank, you can talk now. Okay, Mayank, you can just jump in later if you can't do it now. But we have another... oh, I think it's connected to audio now. No, okay.
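On the CouchDB indexes mentioned in the answer: Fabric picks up index definitions that are packaged with the chaincode as JSON files under `META-INF/statedb/couchdb/indexes/`. A sketch of one such definition, with illustrative field names:

```python
import json

# A CouchDB index definition as Fabric expects it, shipped inside the
# chaincode package (e.g. META-INF/statedb/couchdb/indexes/indexOwner.json).
# The indexed fields below are illustrative, not from the demo network.
index_def = {
    "index": {"fields": ["docType", "owner"]},  # fields rich queries filter on
    "ddoc": "indexOwnerDoc",                    # CouchDB design document name
    "name": "indexOwner",
    "type": "json",
}

print(json.dumps(index_def, indent=2))
```

When the chaincode is installed and committed, the peer deploys the index into each channel's CouchDB state database automatically.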
We have another question from Alex: what is the largest size, in number of orderers, peers, and orgs, for clients that have production Fabric workloads running on Zeeve? And what is the approximate total disk space of the CouchDB/LevelDB instances for these deployments? So this again depends upon your choices. Zeeve allows you to have as many instances and nodes as you want, up to a certain threshold beyond which it would just be unreasonable for a Fabric network. You can basically configure as many as you want, because you also get to choose the kind of configuration those nodes run on. Of course, while architecting the solution you have to keep in mind what scalability is offered: how many orderers will have what kind of impact on the transaction workloads, how much effort goes into syncing, and the network I/O it requires. All those things considered, you would want to keep the size of your Fabric network relatively simple, not too huge, unless very much required. But you basically get to choose the node sizes, the firepower, and the number of nodes and networks. That's great, thank you very much, Sankalp. I think Mayank also wanted to come off mute, but I think there are some audio issues for him as well. So hopefully, Mayank, you can reach out to Zeeve or us later and we can address the question. Perhaps we can address one last question from Alex. He said: I guess I'm curious at what kind of size levels you start seeing significant performance degradation. Right. So it depends upon the protocol, the topology that you choose, and of course the infrastructure powering it. One simple way to understand performance degradation is the size of the network that the block has to be pushed forward to.
But it also depends upon the kind of peers you want signatures from, speaking particularly about Hyperledger Fabric. There's a whole transaction lifecycle in Fabric, where a block full of transactions is transmitted to the different peers, and based on the endorsement policy you get the signatures. So the whole throughput of transactions and the dissemination time of the block itself all depend upon your chosen topology and your chosen infrastructure. And then there are the good measures that people take while configuring the nodes: making sure your instances are sized right for the kind of workload they'll be running, and taking care of management of the databases. For the CouchDB and LevelDB part, I think there was a question there, but I can't see it anymore. You can go back to the questions, Sankalp, and they are there. Okay, got it. So, one CouchDB node per peer, this is already answered. But Alex asked earlier... also, our experts are available for such requirements, which can be quite custom to your solution; if you don't want the standard setup, of course, there is some amount of customization that we can help you out with. What is the approximate disk space? Again, for CouchDB you can already configure this where you define your nodes, add your peers, and add more organizations; you can always configure the CouchDB that goes inside it. Yeah, and I think we're back here with this screen full of GitHub workflows, and we can just walk through one of the workflows to show you how Zeeve deploys a pipeline. Pipeline? Yeah. So I have a complex application which includes a lot of microservices here.
So let's say you want to run CI/CD pipelines for the chaincode running as a microservice inside here. I'll simply go into the directory where I have set it up locally, and I'll show you the workflow that we have set up for this application. You can imagine running the Zeeve CLI as your Jenkins pipeline or your GitLab runner pipeline. In here, there are a few steps, and I can describe them better on this screen. The first step is to package your chaincode, then install the chaincode onto the different organizations participating in your network, then approve the chaincode for your organizations, and then deploy it before you commit the chaincode. After your chaincode services have been deployed onto your organizations, the last step is to commit it with the required arguments and the init method required for your chaincode commit. For that, I think we can visualize it with this step. I have this pipeline configured such that every time I push a new release, the pipeline triggers. I don't have any release yet, so let's say I want to release version two. All I have to do is create a version-two release and push it onto my repo. This is going to run all of the steps, which include logging into my CLI account, then packaging my chaincode, then installing it onto my organizations, and then approving it. So let's visualize it better. I'll go to my Actions tab and see the pipeline that is running. If I click this job here, I'll see the steps that are going to run for my pipeline, and meanwhile I can set up a watch on my pods. These pods should refresh on their own, and we'll see them terminating and then recreating again; these are my chaincode application pods running inside my cluster.
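The package, install, approve, commit sequence described above maps onto Fabric's standard `peer lifecycle chaincode` subcommands. A sketch of the ordered steps such a pipeline automates (flags are trimmed to the essentials; a real invocation also needs orderer, channel, and TLS options, and the names here are placeholders):

```python
def lifecycle_commands(cc_name: str, version: str, sequence: int) -> list:
    """Ordered `peer lifecycle chaincode` steps a release pipeline runs."""
    pkg = f"{cc_name}.tar.gz"
    return [
        f"peer lifecycle chaincode package {pkg} --label {cc_name}_{version}",
        f"peer lifecycle chaincode install {pkg}",  # repeated on each org's peers
        f"peer lifecycle chaincode approveformyorg --name {cc_name} "
        f"--version {version} --sequence {sequence}",  # repeated once per org
        f"peer lifecycle chaincode commit --name {cc_name} "
        f"--version {version} --sequence {sequence}",  # once the policy is met
    ]

for step in lifecycle_commands("bank", "2.0", 2):
    print(step)
```

Incrementing the sequence number on each new release is what lets the same chaincode name be upgraded in place.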
Other than this, you can imagine creating a workflow for your own application that you've deployed. This is a small bank application that I have deployed onto this network, because the platform gives you full access to your cluster. Let's say I log in to my application; here I'll see all the blocks and transactions that have been done so far on this application. It's a small bank application which helps you manage custodians for your ATMs, generate new ATMs in different locations as a bank administrator, and generate the ATM OTPs. Right, so this application is the one using those crypto artifacts to make connections to the nodes, do transactions, and view all the kinds of data you see here. Okay. And one of the questions I saw was whether we can add an organization onto an existing network. Yes, we can do that, and we can also add more peers onto an existing organization. Let's say you want to add an organization onto your network. All you have to do is come into the Actions section here and click Add Peer or Add Organization according to your need. So let's say I want to add an organization here. Now I have to fill out the details about the third or fourth organization that I'm going to add onto my network: the same details we saw while creating the network, my admin CA details, my admin CA passwords, and the same level of detail for peers and orderers. You finally hit Create, and this starts creating your third organization's workloads on your network. Yeah, so it's basically the same set of steps we were doing at the time of deployment of the network. Maybe we can show some public protocol bits as well, around some of the existing nodes, and show their interface on Zeeve. Sure. So I think this pipeline ran successfully; we can see that one of our application chaincode pods has been recreated here, and it is running fine now.
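Watching the chaincode pods terminate and recreate, as in the demo, can be scripted around standard `kubectl` flags. A small sketch that just builds the command; the per-organization namespace name is a hypothetical example, not taken from the demo cluster:

```python
def kubectl_get_pods(namespace: str, watch: bool = False) -> list:
    """Build a `kubectl get pods` invocation for one org's namespace."""
    cmd = ["kubectl", "get", "pods", "-n", namespace]
    if watch:
        cmd.append("--watch")  # stream pod restarts during a pipeline run
    return cmd

# subprocess.run(kubectl_get_pods("org1-blockchain", watch=True)) would
# stream the org's pods as the pipeline replaces them.
print(" ".join(kubectl_get_pods("org1-blockchain", watch=True)))
```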
All right, let's jump on to public protocol nodes. I have an Ethereum node running on a simple Ethereum network. Can you see it? Yeah. So you'll see the nodes you've deployed for your Ethereum network. With these nodes, you get to see the cloud each node was deployed in, the node name you gave it in this network, the region it was deployed in, and the operations you can perform on it. I only have one node here, and if I want to connect to it and see what its endpoint is, I can see the connection endpoint here. Zeeve automatically associates your public protocol nodes with an endpoint served through our services, and you can copy these node endpoints and start using them with the help of this quick code snippet, which can help you integrate with web3. It basically provides the particular kind of configuration each protocol needs when we're dealing with them. We also provide IPFS as a service, which a lot of solutions in the blockchain space require. We can go to that screen as well, which will show you how you can manage ZDFS, which is basically our IPFS service for our customers. So recently... yep. Oh, go ahead, Sankalp. I just wanted to say that we also have a question from the audience, but you can continue and we can take it later as well. Okay. Yeah. So ZDFS is basically an offering of IPFS as a service. You can choose to utilize it as an endpoint for your application, where ZDFS is an endpoint for your IPFS nodes, or you can use it as an API with a dedicated node for yourself. In case you want better performance and the dedicated-node benefits that come alongside it, you can have that as well. We keep adding features for IPFS as a service, and this is thoroughly documented.
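Under the hood, a managed Ethereum endpoint like the one shown speaks standard JSON-RPC. A minimal standard-library sketch of building such a request; the endpoint URL itself is a placeholder for the connection endpoint copied from the node's details:

```python
import json

def rpc_request(method: str, params=None, req_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 request body for an Ethereum endpoint."""
    return json.dumps(
        {"jsonrpc": "2.0", "method": method, "params": params or [], "id": req_id}
    )

body = rpc_request("eth_blockNumber")
# POSTing `body` to the endpoint with Content-Type: application/json
# (e.g. via urllib.request) returns the latest block number as a hex
# string in the response's "result" field.
print(body)
```

Libraries like web3.py wrap exactly this exchange, so the copied endpoint URL can be dropped straight into their HTTP provider.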
We won't go into much detail here, but basically it can manage the node for you, so you don't have to worry about the intricacies that come with it. Yeah, I think we can go to the question. I can see it. Mayank said that he will unmute and ask the question, so please go ahead. Hello, am I audible now? Yes, you are. Yeah, thank you, Thomas. Hi Sankalp, Mayank Kumar this side. So my first question is whether the product you have showcased can also be used at any point of time for smart contract development. Yeah, that's correct. While you have a lot of choices for how you write your chaincode, for the time being you can create your chaincodes very easily and then deploy them. But what is important is that you should have a base environment where you can actually do the development and test your chaincode. This particularly allows you to spawn your networks in a couple of minutes; a simple dev network is something you can make use of, and you have the endpoints and the connection profiles to be able to quickly do your testing on a running environment. You can disable some of the features, like volumes, because they may be unnecessary for your usage at the time and just an expense otherwise. But you can do all of that development and testing together. So it would be like, we can bring out a package of a chaincode or something like that, and we can utilize this product for end-to-end infrastructure deployment, right? Yeah. So one more thing: is it limited to cloud infrastructure, or can we also use it for local deployment, or use VMs or other private servers for deploying the network? So we have helped a lot of our customers with on-premise deployments as well, though we don't like to move away from standard deployments; let's say Hyperledger Fabric is more suited to Kubernetes.
We'd like to keep it with that, but we can do bare-metal deployments; we've done it quite a few times. It's not something new; it's something very reasonable for a lot of firms. So on-premise deployment is there, but it would require our enterprise support to be able to help you out with that. So we can say the product supports it, but it would need some support from you guys, right? Yeah, it would require some on-premise setup, based on what kind of... Yeah, I understand that. So the other thing is about the screen I can see: for the peers or the organizations joining a Hyperledger network, would they have the same replica of this screen, or their own view of it, with the transaction history, something like Hyperledger Explorer? Right, so Hyperledger Explorer and similar use cases are more like applications that you can deploy over your network. You can choose to do it on the existing cluster, or have it on separate infrastructure that just connects to this cluster; those are the options we allow. So yes, you can deploy that application with it. We are not, as of now, providing a dedicated Explorer for Hyperledger, but you can see that. And for different account users, yes, I'll only see the networks that I'm allowed to access, per the workspace access that I've got; we can't see each other's networks unless it's required. Correct. So suppose a network is created with a channel. Can we use the same product for creating a subnet inside the network? Yeah, so you can create your own channels or use the existing ones. At the time of creation, you must have seen Lakshay had a checkbox to create a channel and automatically join all the peers. You can do that if that serves you.
Otherwise you can use those same artifacts again: you can create as many channels as you want and join the peers you choose to them. No, I was asking: suppose there is a channel inside a Hyperledger network, like the one Lakshay created, and some peers have joined that channel. Since Hyperledger supports it, we can create another channel in the same network, which we could call a subnet, with only the limited set of peers required to join that particular application channel. Can we use that functionality from this product, creating another channel and joining only the peers necessary for that private channel, like a subnet? Yeah, you can do that. You can create as many as you want using those same artifacts. And if you bring your own cloud, you again have full flexibility over your cloud account, and you can add more to your VPC; you can do those kinds of things as well, although we advise a certain amount of caution when you're dealing with pre-deployed workloads. So who would be administrating, who would be controlling all those activities around the network? Suppose if we... Mayank, excuse me for interrupting, but we are already a quarter past the hour, so I will need to wrap this up. I'm sorry about that. If there are any questions, please feel free to reach out to us or the Zeeve team, and we will address them. Yeah, thank you very much, and pardon the interruption. Thank you. Sankalp, would you like to add anything to wrap up, or shall we just... otherwise, I'll just wrap up. No, I think that's very good for today. We'll be having a lot of interesting sessions.
I'm sure in the future we'll be very excited to show you what's next on our platform. And yeah, feel free to sign up and try our platform out. If you have any problems, any issues, any interest, feel free to reach out to us; we'll be all yours. So thanks for a great session. We'd like to stay close to our community, as we have been since the early days. And yeah, very happy to be here. Thank you. Thank you very much, Sankalp, as well as Lakshay and Gan, and to all the participants, sincere apologies for the technical issues; I hope you've still gotten a lot out of it. Thank you again to the panelists; I got some very positive comments both on YouTube and here. And thank you to all of the attendees; it's very nice to see that you are so engaged. Unfortunately we didn't manage to answer all of your questions, but feel free to reach out to the Zeeve team, Gan has shared some contact information there, or to us, and we will pass it along to the Zeeve team. So Sankalp, Lakshay, and Gan, thanks again for this really fascinating presentation. A couple more things before we wrap up. I would like to invite you to join the Hyperledger Discord. We have a Discord community, so please join it; you can post your questions there, and you can also see how to participate in and contribute to our community. We also have some other upcoming Hyperledger Foundation member webinars, in-depth webinars with our member companies like the one we saw today, where they discuss the products and services they are building. Next week, you're welcome to join Antoine from Splunk, who will talk about observability and blockchain and provide a deep dive into Fabric and Besu. Please join us there, and go to Hyperledger Events to register. Last but not least, Hyperledger Global Forum is happening again, live from September 12th to September 14th in Dublin, Ireland. It will be a great chance to meet each other in person again.
Thank you again to the panelists and to the attendees. This recording will be available both in our webinar library and on YouTube, so you can always come back and revisit the information shared today. And apologies again for the technical difficulties. We look forward to seeing you at future webinars. Thank you, Ruan. Thank you, Thomas. Thank you, Gan. And thank you again, panelists, and also our participants for some great questions.