I will quickly share my screen and, okay, Aditya, could you just stop recording, stop sharing? Yeah, thank you. So let me know if you guys can see. All right, I'll start with, first of all, a welcome and a thank you for coming in here. This session is organized by Hyperledger Hyderabad, and we started our journey in 2017. I'm sure a lot of you might already have attended our sessions; we are at 43 live events and still counting, one of the largest communities, at least in India, with around 1,300-plus members across India and the world. We are also available on WhatsApp groups, so if you'd like us to add you to the group, you can mail us; the email IDs are here, and we'll also share them in the chat later. And since we are a community that looks forward to learning and expanding, in case you want to be the next presenter, you're most welcome; you can reach out to me or Karthike. We both run this chapter. And I can't stop talking without saying a word of thanks to Vikram and Kamlesh, who have been extremely supportive in arranging these discussions and helping us everywhere; because of them we are able to present this time as well. Now I'm just stopping my screen share and quickly want to introduce Aditya. I'm sure a lot of you might already be seeing the kind of work he's doing in the community. He's a trainer and a coach at Udemy, so you might already be seeing a lot of sessions arranged by him there. He's a very active maintainer of a Hyperledger Labs project, with half a decade of experience in blockchain development, and he is a blockchain engineer at Walmart. I heartily welcome you, Aditya, for a very fun ride and an informative learning session. The floor is all yours. Thank you. Thanks, Ritu. So I will share my screen; please let me know if you are able to see it.
So my screen is visible to you all? Yeah, we can see it. Thank you, Ritu. So the topic for today's meetup is running a Hyperledger Fabric network on Kubernetes. This time we are using a tool called the HLF Operator, and I will give more context about this tool, what it does and why it was created to solve some specific use cases. Before that, I would like to introduce myself. I am Aditya, currently working as a blockchain engineer at Walmart. I'm also a certified Hyperledger Fabric administrator and a certified Kubernetes developer. Apart from that, I am an instructor at Udemy, where I have published a couple of courses in the space of blockchain and Kubernetes. And I am a maintainer of the HLF Operator project under Hyperledger Labs, which is the operator we are going to see today. You can find me on this link; if you just Google it, you will find all the ways to reach out to me. So let's discuss what the HLF Operator is, and then why it was created. We all know that Kubernetes and Hyperledger Fabric are both distributed systems, and managing two distributed systems under one roof becomes very complicated: they are both standalone in their own right, and bringing them together is genuinely challenging. The HLF Operator is a Kubernetes operator specifically designed for Hyperledger Fabric; it solves some of the use cases around Fabric and is going to make your life much easier. The project is currently under Hyperledger Labs.
Just last month we moved this project under Hyperledger Labs. All the operations you will see in today's demo, or that you would run to set it up on your own, go through a kubectl HLF plugin that comes along with the operator; every command you will see goes through that plugin. I just want to throw some light on why this operator was created. We already have a couple of solutions in the open-source market: we have Bevel, we have Argo-based Helm charts, and a couple of other open-source projects around this. This operator solves some specific use cases. Say there are organizations that know Kubernetes well but are not very familiar with Fabric or the ways to deploy it, and day to day they deal with Kubernetes manifests, those YAML files. This operator provides a declarative way of creating the Fabric components: here as well you are dealing with YAML files and declarative definitions, and the components (peers, ordering services, channels) are created for you with the help of one command. It provides an abstraction over the initial bootstrapping of the nodes. If you have tried to set up a network on your local machine with the fabric-samples scripts, you have seen how much happens when you run them: generating the certificates, creating the genesis block, joining the channel. A lot goes on there, but this operator abstracts most of it, and the commands are very straightforward and imperative. Say you want to create a channel: you just fire that command and the channel is created for you.
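To make the declarative point concrete, here is a rough sketch of how the operator's custom resources could be inspected once it is installed. The CRD kind names below are assumptions based on the hlf-operator project and may differ by version, so the sketch starts by listing what is actually registered on your cluster.

```shell
# Discover which Fabric-related CRDs the operator registered on this cluster.
kubectl api-resources | grep -i fabric

# Each component is then just a custom resource you can read and edit
# declaratively (resource kind and peer name here are hypothetical):
kubectl get fabricpeers --all-namespaces
kubectl edit fabricpeer org1-peer1 -n fabric
```

Editing the resource is the declarative path; the plugin commands shown later in the demo are the imperative shortcut over the same objects.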
So you don't have to do a lot of initial bootstrapping yourself; it is done by the operator. It is based on Kubernetes, so it can work in any Kubernetes environment, whether on-prem or in the cloud: all you need is a Kubernetes cluster and you are good to go. It is highly customizable for specific use cases. Say you want to renew certificates every year or every six months: you can do that. You want to increase the storage that your peers, CA servers, or ordering nodes are using: you can do that too. And if you have any other specific use case, you can customize the plugin and the operator; that is very straightforward. Underneath, it uses the Operator SDK, which I think is from Red Hat. So you are interacting with a Kubernetes operator, and it creates the components for you. Now I would like to discuss the basic functionality of a Kubernetes operator, and then we will see a high-level component diagram of the HLF Operator. On the left side we have a user, and the user interacts with custom resources, which come along with the operator: when you install the operator, the custom resource definitions are created for you, and you create custom resources from those definitions. In our case the custom resources are peers, orderers, certificate authorities, and chaincodes. And we have the operator itself, which is just a Go program running on the Kubernetes cluster, with a reconciliation loop inside it. What it does is subscribe to, or track, events on the custom resources, and for each change event it receives, it takes decisions: in essence, it tries to match your desired state with the current state of the Kubernetes cluster.
So what do I mean by desired state and current state? Say you define a custom resource saying you want two peers, but somehow your network has only one: maybe one is failing, or the Kubernetes cluster does not have enough resources in terms of memory, CPU, or disk, so the pod could be stuck in a pending state. The operator will try to reconcile the two. Our desired state was two peers in the network, but the current state is one peer, so the operator will try to recreate the missing peer, or take whatever actions are required to bring it up and running. So that was a high-level overview of how an operator works, and it applies to any kind of operator, not just this one: an operator tracks the custom resources that you create, subscribes to their events, takes the necessary decisions based on those events, and tries to bring the current state of the cluster to your desired state. Now let's discuss the HLF Operator itself. On the right side you can see a user machine; this represents a user trying to interact with the cluster, and through it, with the operator. The user can interact with the cluster either using the kubectl HLF plugin that we will see in today's demo, or using the Kubernetes API: Kubernetes exposes REST APIs, and you can call those directly or use one of the Kubernetes client SDKs, like the client-go library, or the Java client; libraries are available in pretty much all the popular languages. So you can either use those APIs or use this kubectl HLF plugin.
And just to give you context, the kubectl HLF plugin is itself built on top of the client-go library, which gives you a way to interact with the cluster. So in the future, if you don't want to use this plugin, you are free to write a custom application on top of client-go and interact with the cluster directly. This kubectl HLF plugin, or your API calls, interact with the custom resources, which can be a peer, an orderer, a CA, or a chaincode. Our operator has subscribed to those custom resource updates, so for any update happening on these custom resources, the HLF operator is tracking them and taking the necessary actions. In our case, the action could be a create action: say you create a custom resource; the operator receives that event and tries to create a deployment with the details you provided in the custom resource. The same goes for the orderer and the certificate authority as well. Any questions so far? Hello? I haven't got anything on the chat, by the way. Yeah. So, just to summarize this slide: you can use the kubectl HLF plugin, or client libraries, or the raw APIs to interact with the Kubernetes cluster, and you can write your own application on top of them. For this operator we wrote the kubectl HLF plugin, which is basically a client, a layer over client-go, and we use it to interact with the cluster; you could equally build your own application and interact with the cluster directly. Now I would like to discuss some of the features of this operator. It helps you in creating the certificate authorities, so you don't need to manage much there.
Pretty much all you would be dealing with is registering and enrolling identities; the management of the certificate authorities is done by the operator. The same is done for the peers and the ordering service: creating the ordering service or the peers is handled by the operator; you just fire a command and it provisions those resources for you. And you don't have to manually provision the cryptographic material: with a single command you can provision the crypto material, the certificates or any artifact you need, without a lot of preparation beforehand. The operator also supports domain routing with the help of Istio, so you can give complete DNS names to your peers, your ordering service, and your certificate authorities, and expose them outside the Kubernetes cluster. It supports external chaincode as well, and we will see this in today's demo: I will be installing the chaincode as an external chaincode. Currently the operator supports Fabric version 2.2 and above; in today's demo I'm going to show you the latest version, 2.4, and all of our peers and ordering service will be running on version 2.4. It also supports certificate renewal: you just run a command and it renews the certificate for you. That works well for the peer certificates, but when you are renewing an orderer certificate there is some manual intervention required. And I would just like to add one more point here: in the operator we are using the channel participation API, which I think was released in Fabric version 2.2.
So you don't need a system-channel genesis block in this setup; we will be starting directly with our application channel, and both the orderers and the peers will join the channel using that participation API. Yeah, I see a couple of questions in the chat. The first: does this support Hyperledger Fabric 1.4? No, it does not support 1.4. We started with version 2, and we don't have any plan to support 1.4 in future either, because things are quite different in 1.4. The idea is that we started with version 2 and we will keep moving forward with whatever the latest Fabric version is, maybe 2.5 or version 3. Absolutely. And the second question: how do you handle backup and recovery in case of a disaster? Okay, a pretty interesting question, and I think a valid use case. This operator is not designed to handle those use cases; the idea of this operator is to focus only on the Hyperledger Fabric part, and backup and recovery are out of its scope. But if you want to set up backup and recovery, there are solutions available, both open source and paid. You can use Velero: I have tried Velero for backing up a Fabric cluster running on this operator, and it worked pretty well. I did a backup and then a cloud migration, where my cluster was running on cloud A and I migrated it to cloud B with the help of Velero. That is an open-source solution, and there are a couple of paid solutions too; I think there is one called Kasten. You can use those as well, but they come with some cost. So, can you use Velero along with this operator?
So the idea is, the whole intention of this operator is to focus on the HLF part, not on backup and recovery. Maybe in the future we can think about it, but backup and recovery is a very vast topic and I think it is too big a lift for this operator to incorporate. Thanks. Okay, I hope that answers your question. And we have one more. Aditya, do you want to take it now, or maybe after your demo we can park it? So, after this I have only the demo, so let me just answer this one more question I see in the chat. Yes, it does support unjoining peers from channels, because we are using the channel participation API, and that API handles both the joining and unjoining part. This feature was not available in prior versions of Fabric; only if you are using the channel participation API can you use the join and unjoin feature. So now let's jump into the demo. Before that, I would like to tell you the prerequisites you need if you want to run this operator. Basically you need a Kubernetes cluster where the operator will run. Then you need the kubectl command line, which is what you use to interact with your Kubernetes cluster. You need Helm: Helm is a package manager that helps you install various Helm charts; it is similar to npm in Node.js or, I think, Maven in Java. And you need Krew, which is basically a kubectl plugin repository; in the previous slides we saw the kubectl HLF plugin, and using Krew we are going to install that plugin. Now I want to discuss the high-level component diagram of what we are going to build in today's demo. By the end of this demo, we will have pretty much this kind of setup: we will have two namespaces.
The first namespace is the default namespace, and the second is going to be the fabric namespace. In the default namespace I am planning to have the HLF operator; my operator will reside there. In the fabric namespace I will have all my organizations, both the peer organizations and the ordering organization. So I will have two peer organizations, org1 and org2, and one ordering organization. Inside each peer organization I will have two peers: org1 will have org1-peer1 and org1-peer2, and similarly org2 will have org2-peer1 and org2-peer2. I will have one anchor peer in each organization; you see the block marked with an A, which means it is going to be an anchor peer, one per organization. I will have only one ordering node, but you can have N ordering nodes as per your requirement; the name of that ordering node will be ord-node1. We will have three CAs: the first will be the org1 CA, the second the org2 CA, and the third the orderer CA. These CAs will be responsible for issuing the certificates for their respective organizations. Then I have one channel: one application channel, mychannel, and all the peer organizations and the ordering organization are going to use that channel to make transactions. We will be deploying one chaincode, mycc, and it will be an external chaincode. For this demonstration I have prepared the Node.js version, so we are going to see a Node.js chaincode, and I will come back later to why I chose Node.js for the chaincode part. So now I will share my terminal window, and from there we can take it forward.
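Before the terminal walkthrough, the four prerequisites mentioned earlier (a cluster, kubectl, Helm, Krew) can be sanity-checked like this; the exact output will of course vary by machine:

```shell
# Verify the prerequisite tooling is installed and reachable.
kubectl version --client   # kubectl CLI
helm version --short       # Helm package manager
kubectl krew version       # Krew, which runs as a kubectl plugin
kubectl cluster-info       # confirms kubectl can actually reach a cluster
```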
So before that, I would like to show you that I have already created a Kubernetes cluster on DigitalOcean, because that was the fastest way for me to create one; it is a two-node cluster. And let me just download the kubeconfig so that I can interact with the cluster. Yeah, it is done. Now let me move to the terminal; I hope my terminal is visible to you all and the font size is readable. Yeah, it is. So I'm in a directory I have already created for this demo, and it has just one folder, the fabcar folder; this is the chaincode part, and I will come back to this folder later when we actually deal with the chaincode. Apart from this, I have nothing in this folder. The first thing I'll do is set up my kubectl so that I am able to connect to my cluster. So let me export the... Sorry guys, I think... Right. Somehow my system got rebooted, so let me just go back to the same folder. Yeah. So the last thing we did, we were trying to export the KUBECONFIG variable so that I can use it to interact with the cluster. Let me export it again. It got exported. Now let's see if we can connect to the Kubernetes cluster: let me just run kubectl get nodes to make sure we are connected. Yeah, you can see we are connected to this cluster, and I have two nodes up and running. Now, the first thing we have to do is install the operator, and before installing the operator, we have to add its Helm chart repository. So let me write the command for that. Using this command, you add the operator's Helm chart repository to your local repository list, and after that we can install the operator. Now let me install the operator.
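Roughly, the operator install and setup steps look like the following. The repository URL and chart version here are assumptions based on the hlf-operator docs, so check the project README for the current values:

```shell
# Add the operator's Helm chart repo and install the operator (default namespace).
helm repo add kfs https://kfsoftware.github.io/hlf-helm-charts
helm install hlf-operator kfs/hlf-operator --version=1.8.0

# Install the kubectl-hlf plugin via Krew.
kubectl krew install hlf

# Capture the cluster's storage class into SC for the later create commands.
SC=$(kubectl get storageclass -o jsonpath='{.items[0].metadata.name}')
echo "$SC"   # e.g. do-block-storage on DigitalOcean
```

Pass --namespace on the helm install if you want the operator somewhere other than the default namespace.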
So this command is helm install, where we specify the version that we want to install, and this is the chart. Let me run this. This is going to create the operator in the default namespace; if you want to install it in a different namespace, you can pass the --namespace flag with your namespace. It got installed; let's see if it is there or not. We can see we have the operator here, and it will take some time to come up, but meanwhile we can move to the next step, which is installing the kubectl HLF plugin; that plugin we get from the Krew repository. So let me install it. The command is kubectl krew install hlf; you give the name of the plugin, which is hlf, and it installs the plugin on your local machine. In my case it is already installed, but if it is not available on your machine, it will get installed. Now I think our operator is up and running; let's verify. Yes, the operator is up and running. Next, we grab the storage class into an environment variable, because the operator needs a storage class when it provisions resources for you: it needs to know on which storage class to create the persistent volumes for the peers, orderers, and CAs. Let's see what storage classes we have. We have only one, do-block-storage; this depends on the type of cluster and the cloud provider you are using. I just run this command, which gets the storage class and sets it into an environment variable named SC. If I do echo SC... okay, it should be dollar SC... it shows me that storage class. I think we are good till here. Now we can create a namespace, because all of our Fabric components, the peers, the CAs, and the chaincode as well...
...they are all going to live in this fabric namespace and run in that namespace only. Yes, the namespace also got created. Now the first thing we have to do is create the CAs, because in any Fabric network the first thing you need is the certificates; only then can you start the peers or orderers or do any operation in the network. So I will put up the command for creating a CA. This is the command which will create the CA; let me walk you through it. This is the plugin that we installed with the help of Krew, and here we are specifying that we want to create a CA. We specify the storage class; if your Kubernetes cluster has multiple storage classes, you can pass the appropriate one as per your requirement, and remember, we just set the value of this variable in the last step. Then we specify the capacity, how much storage this CA will get when it is bootstrapped, and the name of the CA, which I'm giving as org1-ca. Then I give the initial, or bootstrap, identity: if you remember, when you start a CA in your local Docker network using fabric-samples, you see that a bootstrap identity is required; this is that same identity. Then we specify its password, and then the namespace in which we want to create the CA. This should create a CA. Now let me create the CAs for org2 and the orderer as well; I will run both commands at the same time, just to save some time. So just to summarize: the first command creates the CA for org2, as you can see here, and the second command creates the CA for the orderer. Yeah, now we can check whether we have all the CAs up in the fabric namespace.
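For reference, the three CA-creation commands just described take roughly this shape. Flag names follow the hlf-operator docs but may differ across plugin versions, and the enroll ID/password values are placeholders:

```shell
# One CA per organization, all in the fabric namespace, using the $SC storage class.
kubectl hlf ca create --storage-class=$SC --capacity=1Gi --name=org1-ca \
  --enroll-id=enroll --enroll-pw=enrollpw --namespace=fabric
kubectl hlf ca create --storage-class=$SC --capacity=1Gi --name=org2-ca \
  --enroll-id=enroll --enroll-pw=enrollpw --namespace=fabric
kubectl hlf ca create --storage-class=$SC --capacity=1Gi --name=ord-ca \
  --enroll-id=enroll --enroll-pw=enrollpw --namespace=fabric
```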
So you can see they are still in the creation phase, because the operator is provisioning the persistent volumes, basically the underlying storage, for them. We can check that storage as well: if I do kubectl get pv across all namespaces, you can see it has provisioned these persistent volumes for those CAs. The orderer CA is bound to this persistent volume, org1's to this one, and org2's to this one. These are the storage sizes we specified in the command, and this is the storage class. Just to confirm, let's see if the CAs are up. Yeah, you can see the CAs are up now. Okay, so we are done with the CA setup; next we have to issue certificates for the components, and we'll see that in a couple of minutes. Now let me export some environment variables; these variables reference the peer image that we want to use and its version. As I told you initially, we are going to use version 2.4, so I'm using the peer image at 2.4.3, and the orderer version is also 2.4.3. We will pass these environment variables in the next commands, whenever we create a peer or an orderer. Okay, I can take one question. Okay, so there is one question. You can take the questions at the end. Okay, carry on with your flow. Okay. So now we are done with the CA creation, but we need the identities, like the peer identities: as we saw, in Fabric everything is permissioned, and you need certificates to access the resources. So let's create an identity for the peer. If you look at this command, here we are registering a peer, and here we are specifying the name of the CA which is going to issue it.
Then we specify the username for that identity, and the type of identity: in Fabric we have a few different types, I think peer, admin, orderer, and client. Then we have the enrollment secret of the CA from which we are going to receive the certificates, and I'm passing the namespace in which the CA is running. You can see the registration is successful. Now, in the same way, I will register all four peers so they get their identities. This one is for org1-peer2, then let me do it for org2-peer1, you can see that one, and the last one is org2-peer2. Okay, so all four peers got registered, and now we can create the actual peers. Let me put up the command for that. This is the command which is going to create a peer for us. Here we specify the enrollment identity, the one that we just registered, the MSP, the peer's password, and the capacity for that peer. Then we specify the peer name, and the CA against which we got this identity. One more thing: here we specify the state database as CouchDB; the default value for this is LevelDB, so if you are not specific it will take LevelDB, but we specify CouchDB, and it is going to create a CouchDB container alongside the peer as well. Here we specify the peer image and version: these are the same environment variables that we exported a few minutes back. I hope everyone is able to follow me here. Yes, definitely. I will quickly use the same command to create org1-peer2 and the remaining peers as well; this command is for org2-peer1. And I will show you the pods as well: once we are done with the peer creation, I will show you that we actually have these pods.
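Collecting the steps above, registering and creating a single peer looks roughly like this. The image name, flag spellings, and secrets are assumptions based on the hlf-operator docs and may differ by version:

```shell
# Image/version variables for Fabric 2.4.3 (image name is an assumption; use
# whatever peer image your operator version documents).
export PEER_IMAGE=quay.io/kfsoftware/fabric-peer
export PEER_VERSION=2.4.3

# Register an identity of type "peer" with org1's CA.
kubectl hlf ca register --name=org1-ca --namespace=fabric \
  --user=peer --secret=peerpw --type=peer \
  --enroll-id=enroll --enroll-secret=enrollpw --mspid=Org1MSP

# Create the peer itself, with CouchDB as the state database.
kubectl hlf peer create --namespace=fabric --storage-class=$SC --capacity=5Gi \
  --name=org1-peer1 --enroll-id=peer --enroll-pw=peerpw \
  --mspid=Org1MSP --ca-name=org1-ca.fabric --statedb=couchdb \
  --image=$PEER_IMAGE --version=$PEER_VERSION
```

The same pair of commands, with the CA name and MSP swapped to org2-ca and Org2MSP, covers the other three peers.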
So we can check if we have all the pods. Our pods are just coming up: we have a pod for org1-peer1, a pod for org1-peer2, and a pod for org2-peer1, and in a couple of seconds we should have the pod for org2-peer2 as well. It will take some time, but meanwhile I can show you that underneath, this is just a deployment created by the operator with all the values we passed through the command line. If I show you the deployments, you can see these are all the deployments we have created so far. And you can see org2-peer2 is also coming up. We have covered the CA setup and the peer setup; now let's create the admin identity so that we can actually do some transactions, whether that is joining the channel or installing the chaincode: all those operations we can do with the help of that admin identity. First we will register the identity, and then we will enroll it. This is the command to register an identity; we are registering it against this CA, so we will get the identity from the org1 CA. This is the username for the identity, this is the admin password, and this is the type of identity we want to create: an admin identity. Remember, when we created the identities for the peers, at that time we used peer as the type. This is the enrollment detail of the CA, and then we specify the MSP under which these identities are going to be issued. So this registers the admin identity with the org1 CA. Once the registration is done, we can enroll it: in enrollment, we actually get the certificates of that identity. I will show you how we get the certificates and how we are going to use them.
So using this command, which is kubectl hlf ca enroll, we enroll the identity, and here we pass pretty much the same details. There is one difference in this command, which is the CA name. One thing I want to add here: when we create a CA, internally the operator creates two CAs inside it, a signing CA and a TLS CA. Here we are going to get the signing certificate, which is why I specified the CA name as ca. In the orderer section, you will see that we do the enrollment twice, once for the signing certificate and once for the TLS certificate. And this is the output file where the certificates are going to be stored on our local machine; these are the MSP details and the namespace in which we have the components. If I do ls here, you should see an org1-peer file: you can see I have org1-peer.yaml, and it contains the certificate with the private key and the signing certificate of that identity. Let me do a cat of it: you can see I have the certificate and the private key as well. Okay, now let me do the same thing for org2: the registration and then the enrollment of the admin identity, but this time against org2. I'm running both commands at the same time. The first is the registration command; it is pretty much the same, but the major change is in the names: the identity will be registered with the org2 CA, and the MSP ID is Org2MSP. The enrollment command is also the same, but the output file and the details are different. If I do ls here, you should be able to see org2-peer.yaml. You can see we have this file as well, and again it contains a certificate for an admin registered under Org2MSP with the org2 CA.
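The admin register/enroll pair for org1 sketches out roughly as below; --ca-name=ca selects the signing CA inside the CA deployment. Flag names follow the hlf-operator docs and may vary by plugin version:

```shell
# Register an admin identity with org1's CA.
kubectl hlf ca register --name=org1-ca --namespace=fabric \
  --user=admin --secret=adminpw --type=admin \
  --enroll-id=enroll --enroll-secret=enrollpw --mspid=Org1MSP

# Enroll it against the signing CA ("ca") and write the key + cert to a file.
kubectl hlf ca enroll --name=org1-ca --namespace=fabric \
  --user=admin --secret=adminpw --mspid=Org1MSP \
  --ca-name=ca --output=org1-peer.yaml
```

The same two commands with org2-ca and Org2MSP produce the org2 admin credential file.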
Okay, so now we are pretty much done with this part, and we can do the orderer setup. Just like we did for the peer, for the orderer we first have to register the orderer identity, and then we can create the ordering node. It's the same pattern we followed for the peers: we first registered the peer so that we had the certificate for it, and then we actually created the peer nodes. So using this command, we can register an orderer, and it is the same command we used for the peers. A couple of things changed: the orderer name, the username, the secret obviously, and the type of identity. Remember I was telling you that we have four kinds of identities: peer, orderer, admin, and client. Then we pass the enrollment credentials, the orderer MSP, and obviously the namespace. So this registers the orderer identity, and then we can create the orderer using this identity. Here, using this command, we are actually going to create an orderer. We specify the storage class, and we specify the enroll ID, the same ID we used in the very last command, the one we did the registration with, plus the MSP details, the enrollment password, and the capacity for this orderer. You can customize all these parameters as per your requirements; let's say you feel that 2GB is not enough for your scale, you can increase it to 5GB or 10GB of storage. Then we give the name of the ordering node and the CA name from which it is going to pull that identity, and we specify the image name and the image version that we exported as environment variables. So this should create an ordering service for us. The ordering service got created; let's do a get pods and see whether this orderer is coming up.
So we can see it is creating, and meanwhile we can move to the next step: we have to get the orderer admin as well. In the previous commands we registered an admin identity for both organizations, but now we have to do the same thing for the ordering organization. So let me clear this, and now we can register an admin. This is pretty much the same as what we did for the organization admins, like the org1 admin or org2 admin; just the names and parameters change. We are getting the identity from the orderer CA, and the MSP ID is also different, but the rest is the same. So now we have registered an admin identity, but we need to get the signing certs as well as the TLS certs. To get the signing certs, we have to enroll the user; only at enrollment time do we receive the certificates. Here you can see I'm passing the CA name as ca, because in this command we are getting the signing certificates, and the output is stored in this file, admin-ordservice.yaml. Let me run this, and if I do ls here, you should see the admin-ordservice.yaml file. Yes, we have it. Similarly, we will get the TLS certificates for this admin. If you compare this command with the one above, they are pretty much the same; the only difference is the CA name. This time we are trying to get the TLS certificate, so we pass the CA name as tlsca, whereas in the previous command we were getting the signing certificates, and the output goes to a different file, admin-tls-ordservice.yaml. So if I do ls, you can see we have the admin TLS ordering service file.
Okay, once we are done with this, we can quickly grab the connection profile so that we can interact with the network. This is the command, and here we specify the organizations we want to include in the connection profile; only these organizations will be in it. If I do a cat of networkConfig, which is my output file, you should see the connection profile. You can see we have these three organizations, and this is the standard connection profile that we use with our applications to connect to the Fabric network. Okay, now we will do one more thing: we will add the admin user that we created in the previous commands to this connection profile. For that, I would like to open VS Code here. This is my VS Code, and this is the network connection profile. You can see that in the organizations section we have a users property, but right now it is empty. We are going to populate this field from the admin certificates that we created earlier: this org1-peer.yaml, and similarly org2-peer.yaml under the org2 organization, so that we can make transactions. For that, there is a command in the kubectl plugin as well; you can either do this manually or use the command line. Let me clear this and run that command. It is basically going to add this file, which holds the admin identity for org1, into the networkConfig under the name admin and for this MSP. To show you what I mean: if I open my network config, in the organizations section, under Org1MSP, you can see the users section is now populated with the admin certificate and its private key.
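To make the adduser step concrete, here is a minimal, hand-written sketch of the shape the organizations section takes once the users field is populated. The key names are approximate and the real profile generated by the plugin is much larger; this just shows where the admin certificate and private key end up.

```shell
# Hand-written miniature of a connection profile after the adduser
# step. Key names are assumptions based on the structure described
# in the demo, not verbatim plugin output.
cat > networkConfig.yaml <<'EOF'
organizations:
  Org1MSP:
    mspid: Org1MSP
    users:
      admin:
        cert:
          pem: |
            -----BEGIN CERTIFICATE-----
            ...admin signing certificate...
            -----END CERTIFICATE-----
        key:
          pem: |
            -----BEGIN PRIVATE KEY-----
            ...admin private key...
            -----END PRIVATE KEY-----
EOF

# The users section is no longer empty:
grep -A1 "users:" networkConfig.yaml
```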
Similarly, we will do the same thing for org2. Right now you can see the org2 users section is empty, but we will populate it as well. It should be populated now; you can see the org2 section also got filled in. Okay, so we are done with the connection profile, and we can leverage it for channel creation and for joining organizations. So let's create a channel. This is the command, which creates a channel with the output mychannel.block, and the name of the channel is mychannel. These are the organizations that are going to be part of the channel. Remember, if you have ever set up a network on your local machine, you used a configtx.yaml file where you specified all the members that were part of the channel or the consortium. This is the same thing we are doing, but you don't have to write that configtx.yaml; you just run this command and it does the underlying work for you. So the channel got created, and we should have a mychannel.block file here as well; you can see this file. Now we can make our orderer join this channel. Remember, when we registered the admin identity for the orderer, we did two kinds of enrollment: the first against the signing CA, and the second to get the TLS certificates. The next command I'm going to run will use that admin TLS identity to join the channel. Here you can see the identity we are passing is the admin TLS one, the TLS certificates rather than the normal signing certificates. I answered that question in the chat as well. Okay, so the orderer has joined the channel, and I would like to show its logs. Let me get the pods: we have the ordering node here, so let's get its logs.
kubectl logs, then the pod name, and then the namespace, which is fabric. Yeah, so you can see we have some blocks here, and you can see it joined the channel mychannel. The consenter count is one; why did we get one consenter? Because we have only one ordering node. And you can see the orderer has started in Raft mode as well, so it has pretty much started listening on mychannel. Now I can clear this. Okay, now I can make my peers join the channel as well. The command is this: we are joining the channel, so we pass the channel name, we specify the network config, we specify the user who is going to perform this activity (in our case, the admin identity we just added to the network config), and the target peer which is going to join the channel. So org1-peer1 should join the channel. It has joined, and now I'm going to run the same command for the remaining three peers. Let me run the commands in one go. So the remaining peers are joining the channel: org1-peer2, then org2-peer1, and org2-peer2 as well. You can see all three have joined, and we could look at their logs too, but I think we are limited on time, so I'm not going to show the logs now; maybe at the end if we have time. So, so far we have done the CA part, the peer part where we brought up the peers, and the ordering part where we got the ordering service. Then we did the channel creation, and the channel join is also done. The next step is to add our peers as anchor peers. Remember the diagram I showed: org1-peer1 and org2-peer2 are going to be the anchor peers. We were not able to see that diagram, if you're showing a diagram.
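Joining the remaining peers "in one go" can be scripted as a small loop. The flag names mirror the kubectl-hlf channel join command shown in the demo but should be treated as approximate, and the peer addresses follow the deployment-name.namespace convention used here; echo is used so the sketch prints the commands instead of needing a live cluster (drop the echo to actually run them).

```shell
# Print (rather than run) a channel-join command for each remaining peer.
# Flag and peer names are assumptions; adjust to your own setup.
CHANNEL=mychannel
for PEER in org1-peer2.fabric org2-peer1.fabric org2-peer2.fabric; do
  echo kubectl hlf channel join \
    --name="$CHANNEL" \
    --config=networkConfig.yaml \
    --user=admin \
    --peer="$PEER"
done
```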
Okay, okay, let me show you again. So now, can you see? Yeah, it is visible now. So this is the diagram I was talking about: we have org1-peer1 and org2-peer2, and they will be acting as anchor peers in our network. So far all of our peers have joined the channel, and the next thing we do is make org1-peer1 and org2-peer2 the anchor peers. This is the command. It adds org1-peer1 as an anchor peer on this channel, mychannel, and we pass the network config and the identity that we registered and added to the network config. So this should add org1-peer1 as an anchor peer, and similarly we do the same for org2's peer. It's up to you; you can make both peers of an organization anchor peers, but to keep it simple, I am using one anchor peer per organization. Both of them were already part of the channel, but now they are anchor peers as well. So we are done with the anchor peer setup, and we can do the chaincode setup. We are deploying the chaincode as an external chaincode, so the steps you see might be different from what you are used to; if you have done an external chaincode setup in the past, you will recognize them. I'm exporting the chaincode name, keeping it as mycc, and then I will create a metadata.json. This is specific to external chaincode. Okay, let me run this command again and clear the screen. Here you should see this metadata.json; it just uses the mycc environment variable that we exported, and it's a simple two-line JSON file. Now we have to create a connection.json as well. This will hold our chaincode address, the chaincode service address that the peers are going to use to connect to this chaincode.
Here also, if you see, we got this file, connection.json, and if I cat connection.json, you can see it's a three-line JSON where we specify our chaincode address and the port over which this chaincode is running. Now we have to create some tar archives, which will be required when we are actually installing the chaincode. The first thing we do is take that connection.json file and put it into a code.tar.gz file. So tar archives that file and produces code.tar.gz as output; as you can see, we have this file. Next, we run a tar command again, but this time we take the metadata.json along with the code.tar.gz and put those two files into one archive, which I'll call mycc-external.tgz. This is the standard process for external chaincode, nothing fancy. And if I show you, we have mycc-external.tgz, which contains the metadata.json and the code.tar.gz. Now let's calculate the package ID, because we will need it in the next commands. This comes from the plugin itself: we use the calculate-package-id function available in the plugin, and we specify our chaincode package, the language (I told you initially that I am going to use Node.js for my chaincode), and then the label, your chaincode name. So if I echo PACKAGE_ID, we should have the package ID; you can see we got it. Now we can actually install the chaincode, and when installing, we install this archive, the one we just created, mycc-external.tgz.
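The packaging steps just described are plain tar commands and can be reproduced end to end without a cluster. The metadata type value and the service port here are assumptions (the type has to match the external builder configured on your peers; fabric-samples uses ccaas), and the final line mirrors what the plugin's calculate-package-id helper computes, since Fabric defines the package ID as label:sha256 of the package bytes.

```shell
# Runnable sketch of the standard external-chaincode packaging flow.
export CHAINCODE_NAME=mycc

# metadata.json: tells the peer this package describes an external service.
# The "type" must match the external builder on your peers (assumption: ccaas).
cat > metadata.json <<EOF
{"type": "ccaas", "label": "${CHAINCODE_NAME}"}
EOF

# connection.json: where the peers will reach the chaincode server.
# The address/port are demo assumptions.
cat > connection.json <<EOF
{
  "address": "${CHAINCODE_NAME}:7052",
  "dial_timeout": "10s",
  "tls_required": false
}
EOF

# inner archive holds connection.json; outer archive adds metadata.json
tar czf code.tar.gz connection.json
tar czf "${CHAINCODE_NAME}-external.tgz" metadata.json code.tar.gz

# Fabric computes the package ID as label:sha256(package bytes);
# this is the equivalent of the plugin's calculate-package-id helper.
PACKAGE_ID="${CHAINCODE_NAME}:$(sha256sum "${CHAINCODE_NAME}-external.tgz" | cut -d' ' -f1)"
echo "$PACKAGE_ID"
```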
For the demo, I am going to install only on one peer of each organization. This command installs the archive; here I'm giving the path, the network config file, and the language. This is the chaincode label, this is the user who is going to make the transaction, and this is the target peer on which I want to install the chaincode. But note that this is not actually installing the chaincode itself; we are not installing the chaincode's code here. Rather, we are giving the peers the definition of the chaincode: that in the near future they will receive a chaincode with these parameters, with this name and label. So it got installed. Now let me install it on the second peer, which is org2-peer1; you can see the target here is org2-peer1. Okay. Now I just want to tell you why I chose Node.js for this demonstration. The reason I chose Node.js for this external chaincode meetup is that in fabric-samples we only have an example for Go chaincode, and on the internet as well there are very few examples available for Node.js external chaincode. I have received this query quite a lot on LinkedIn and through other channels, from people who have no idea how to set up Node.js chaincode as an external chaincode. That's why I'm showing it here. Let me open my VS Code to show you what exactly I have in the chaincode. Remember, in the initial folder here, this is the fabcar chaincode from fabric-samples, nothing fancy; it's the sample chaincode that I copy-pasted, same structure. I made a few modifications to it. The first is this Dockerfile, because of the way the chaincode has to run.
As an external chaincode, it needs to be dockerized, so I created a Dockerfile for it. The second change is in the package.json file: in the start script, I made it start as a chaincode server, instead of the normal legacy method used when we are not running external chaincode. So only two modifications: I changed the start script and added the Dockerfile. This fabcar chaincode is what we are going to deploy in the demo. I have already dockerized it and published the image to my Docker repository, so I'm not going to build and push the chaincode here, because I think we are already running short on time. So now I can deploy the chaincode. Here in the deploy step I pass my chaincode name. This is the external chaincode command, which deploys the chaincode, and I'm specifying my chaincode image. When you are building your own chaincode, you dockerize it, build the image, push it to Docker Hub, and then reference your image here. If the image is in a private Docker repository, or any private registry, you can pass the image pull secret along with this command; there is a flag for that as well. We specify the chaincode name, the namespace in which we want to install the chaincode, and the package ID that we exported as an environment variable. And the rest is the replicas, how many replicas of this chaincode we want; I need only one. I will take questions at the end, so you can post them, but I'm going to pick them up at the end.
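The two modifications just described can be sketched as the files they produce. The start script follows fabric-chaincode-node's server command; treat the exact flag names, ports, versions, and base image as assumptions for illustration, not the speaker's exact files.

```shell
# Sketch of the two changes to the fabcar sample for external chaincode.
# All concrete values (port 7052, node:16-alpine, dependency versions)
# are assumptions.

# package.json: the start script launches a chaincode *server* instead
# of the legacy peer-launched mode
cat > package.json <<'EOF'
{
  "name": "fabcar-external",
  "version": "1.0.0",
  "scripts": {
    "start": "fabric-chaincode-node server --chaincode-address=0.0.0.0:7052 --chaincode-id=$CHAINCODE_ID"
  },
  "dependencies": {
    "fabric-contract-api": "^2.2.0",
    "fabric-shim": "^2.2.0"
  }
}
EOF

# Dockerfile: containerize the chaincode so the operator can run it
# as its own deployment
cat > Dockerfile <<'EOF'
FROM node:16-alpine
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 7052
CMD ["npm", "start"]
EOF
```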
So now, if I do a get pods in the fabric namespace, we should have a mycc pod, this one, the chaincode running as a container, as a pod. Okay, so we have installed the chaincode. Now it's time to approve it; each organization approves the chaincode individually, so we need one approval from each of the two organizations. This is the command to approve the chaincode: we pass the network config file, the identity that is going to take the action, the target peer, and the package ID, and then you have to pass the version and the sequence number. Any time in Fabric when you install or approve a chaincode, you specify the version and the sequence number, which is very helpful when you are upgrading the chaincode. Then we specify the policy; these are pretty much the standard policies we use, and here I am keeping OR as the endorsement policy, plus the channel on which this chaincode is going to run. This should approve the chaincode from the first organization, org1, and then we get the approval from the second organization as well; this time the target is org2's peer. So approval is done. Now we are left with the commit part, and then we can simply invoke and query the chaincode. Let me put in the command to commit. This is the command, and as I told you, only one organization needs to commit the chaincode. Here we pass the MSP ID, the user, the version and the sequence number, the chaincode name, the policy that we specified in the approval process, and the channel name. With this command we should be able to commit the chaincode successfully, and then we can simply invoke and query it.
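The approve-then-commit pair can be sketched as below. The flag names follow the kubectl-hlf plugin used in the demo but should be treated as approximate; echo prints the commands so the sketch runs without a cluster (remove the echo to actually execute them).

```shell
# Approve once per organization, then commit once. Version and sequence
# must match across the approvals and the commit; bump the sequence on
# every upgrade of the chaincode definition. Names are assumptions.
VERSION="1.0"; SEQUENCE=1
POLICY="OR('Org1MSP.member','Org2MSP.member')"
for ORG_PEER in org1-peer1.fabric org2-peer1.fabric; do
  echo kubectl hlf chaincode approveformyorg \
    --config=networkConfig.yaml --user=admin --peer="$ORG_PEER" \
    --package-id="$PACKAGE_ID" --version="$VERSION" --sequence="$SEQUENCE" \
    --name=mycc --policy="$POLICY" --channel=mychannel
done
echo kubectl hlf chaincode commit \
  --config=networkConfig.yaml --user=admin --mspid=Org1MSP \
  --version="$VERSION" --sequence="$SEQUENCE" \
  --name=mycc --policy="$POLICY" --channel=mychannel
```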
Let me do the invocation first and then the query. This is the command, kubectl hlf chaincode invoke: you pass the connection profile, the admin (user) identity, the target peer that will perform the operation, and you specify the chaincode name and the channel name. After that we pass the chaincode details: the function you want to invoke and the arguments. Using the -a flag you specify the arguments; the fabcar chaincode I installed has a createCar function that needs five parameters. I think the first is the ID, then the make, the model, the color, and the owner. So I'm passing all five parameters here, and this should invoke the chaincode and return a transaction ID. The transaction got initiated and we got the transaction ID. Now let's query the chaincode. The difference here is that it's kubectl hlf chaincode query instead of invoke; the rest of the parameters are the same, and you pass the chaincode function and the arguments. One thing to note: I'm passing an empty argument here, because this plugin expects you to pass at least one, since the flag is mandatory, so you have to pass it anyway. That's why I'm passing an empty string as a default; it has no significance for this function. This should show me the record we just inserted, and yes, we got the record: we got the key, the ID was 100, and we got the color, the make, the model, and the owner. Let's run one more function; I think it was queryCar, I'm not sure. Here let me pass the car ID. Yeah, so we got just one record.
In the previous query we were getting an array in return, but this time it is a single object. It is working pretty much fine now. I think I have covered most of the material: we have seen the CA setup part, the peer setup, the orderer setup, and then the channel part where we created and joined the channel and installed the chaincode, and we did the query and invocation part as well. So I'm pretty much done here. Just one more thing I want to show you: there are some additional commands that you might not need frequently, but they are good to know. This is the command that shows you the ledger height, that is, the number of blocks each peer has; the command is channel top. You specify the channel name, the admin user, and the peer through which we are going to query the ledger. It should show you all the peers with their channel heights. You can see we have a height of seven, so all of them have seven blocks. Let me clear this. One more thing I want to show, and I think this next command will be pretty useful for you, is mostly needed when you are doing some modification to the channel, a channel update process. This command inspects your channel; it uses the connection profile, and I'm writing the response to this mychannel.json file, and we'll see what mychannel.json contains. It holds your channel details, basically the channel config block, which is required when you are doing channel updates. So let me show you mychannel.json.
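Before looking at the real output, here is an illustrative, heavily truncated sketch of the decoded channel config that the inspect command writes out. The key names follow the standard Fabric config-block structure, but the actual file is far larger and the concrete host and port values here are assumptions.

```shell
# Miniature mock of a decoded channel config block, for orientation only.
cat > mychannel.json <<'EOF'
{
  "channel_group": {
    "groups": {
      "Application": {
        "groups": {
          "Org1MSP": {"values": {"MSP": "...certs..."}},
          "Org2MSP": {"values": {"MSP": "...certs..."}}
        }
      },
      "Orderer": {
        "values": {
          "ConsensusType": {
            "value": {
              "metadata": {
                "consenters": [
                  {
                    "host": "ord-node1.fabric",
                    "port": 7050,
                    "client_tls_cert": "...",
                    "server_tls_cert": "..."
                  }
                ]
              }
            }
          }
        }
      }
    }
  }
}
EOF

# One consenter, because the demo runs a single ordering node:
grep -c '"host"' mychannel.json
```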
This is the same JSON we get when we fetch the config block, the channel block. You can see we have the channel group, and then we have Org1MSP, and similarly Org2MSP, with their certificates. We should also see the consenters, which relate to the ordering service. If I search for "consenter", here you can see it, and we have only one consenter because we have only one ordering node. You can see its client certificate, its server certificate, the host on which this orderer is running, and its port as well. So I think I am done with the demo. Let me do one more thing and show the CouchDB UI, so we can see that the data is actually persisting in CouchDB. If I do get pods in the fabric namespace: the CouchDB container runs inside the peer pod, so we have to go through it to get the data. If I do a port-forward, I give the pod name, the one running in the fabric namespace, and forward port 5984, which is the port for CouchDB. Then in the browser, and I have to add /_utils to the URL, and log in with the CouchDB credentials. Yeah, so this is the CouchDB interface, and here you can see the one record that we inserted into the database. Okay, I think I'm done from my side, and I can take questions now if there are any. Yeah, there are a couple of them, you can see them. Okay, so the first question I see is: why do we need to enroll the orderer twice, with ca and tlsca? So, we needed both the TLS and signing certificates for the orderer, because when we were joining the channel, at that time you need
the TLS certificates as well. The channel participation API requires a TLS connection when you are communicating with the different components, and that's why we need the TLS certificate. Okay, the next question: do we have any UI instead of the CLI? There is one open-source project we are working on, though we don't have big plans for it; we tried it as a kind of POC. Right now that tool basically gives you a view of the state, so you can see all the components, but it does not have the capability of actually creating the network from the UI. If my screen is visible, let me try to show you; once I find it, I will post the UI link in the chat. But that is not our primary area of focus. The CLI that we provide as part of this is pretty customizable, and you can adapt it to your needs. The UI that we are planning to build might not fit your use cases; if you want a different kind of user interface, maybe to provision something automatically or to give extra options to your users, that is up to you. Okay, any other questions? How do we delete test data from the network? What do you mean by test data? I'm not sure what "data" means here, so if you can tell me what you mean, I can probably help. We cannot really delete anything from here; we can delete temporarily, but when you restart CouchDB you will get the data back, if I'm not wrong. So, regarding deletion: if I show you my terminal window, you can see that all of our resources are using PVCs; they are bound to PersistentVolumeClaims.
So even if I delete any pod, say it gets deleted for some reason, out-of-memory or anything like that, it is automatically going to reattach to the same persistent volume. You are never losing the data; even if you accidentally delete something, you will still have it, because it lives on a persistent volume, not on the pod's own filesystem. Okay, there is one more question: how can we connect one node from Azure and one from AWS? I have made a video on this on my YouTube channel, and that would be a much better way for you to understand it, but yes, this is definitely possible. You can have two different Kubernetes clusters and make them join a single channel; you can get more details from that video. I have one more slide as well, with some links to show you. This is the link for the repository; feel free to create a feature request, or star the project if you find it worthwhile. And you can find me on these links; using this link you can pretty much find me anywhere, Twitter and so on, and you can book a call with me as well. For this meetup I have created a discount coupon; it is applicable to all my courses, so you can use it to get them at a discounted price. Okay, next question: "at the time of deployment, we check the chaincode with some sample data". I'm not quite able to understand what this means. Okay, I think this connects to the earlier question about deleting test data; it means the test data created at deployment time.
Yeah, so on the chaincode side, if you see, I created just one transaction, and only at the moment I had to demonstrate a transaction; before that we had no data. If you look at my CouchDB here as well, I have only one record, and I created it just to showcase the invocation part. Apart from that, we don't have any test data anywhere in the chaincode, and this pretty much depends on how you structure your chaincode. If you want your chaincode to have some test data, say you want it to write some data to the ledger as soon as it starts, you can do that, but that belongs to the chaincode logic, not to this operator. Okay, one more thing I want to show you: there are client integration examples as well. This is the operator repository, and here there are examples for two applications, one in Go and one in Node.js. You can use either the Go library for Fabric or the Node.js library for Fabric to interact with your cluster. You can register identities, enroll identities, invoke chaincode, pretty much anything, with the same connection profile that we saw in today's demo; you just have to plug in your connection profile. "How do you maintain a backup for PVCs and the application?" The backup question is out of scope for the hlf-operator; this operator focuses only on the Fabric part, and backup and disaster recovery are outside its scope, because they apply not just to this operator but to your whole Kubernetes cluster. I would suggest you explore tools like Velero.
Velero is an open-source and widely used tool, and I also have a video about it on my YouTube channel, so you can check it out. In that video I did a cluster migration, I think from DigitalOcean to Azure, so I migrated my entire data between those two clusters, and I have used Velero with this operator as well in one use case. Let me put the link in the chat. You can use Velero for backup and recovery, and you can do scheduled backups with it as well. Okay, great, I think if there are no further questions we can close this. Thank you so much Aditya, we can't thank you enough for spending time and helping all of us understand this. Thank you. If anyone needs it, they can take a screenshot of this; the discount coupon and the links are here, so feel free to check out the repository and star it if you find it helpful. Hi Aditya, I'm Hars, and this is the last thing I wanted to check: this session is recorded, right? Where can we get it? I think it will be posted on the YouTube channel, so you can get it from there. Yeah, we'll be sharing that, maybe on the Meetup page itself. We'll be posting it on our LinkedIn, and it will be uploaded on the official Hyperledger YouTube channel; please follow our LinkedIn page, and it will be available later. Yeah, sure.