Hello, everyone. Thank you all for coming to today's session on Deploying Hyperledger Fabric with the Kubernetes Operator Framework. Let's get started with a quick introduction of myself. My name is Manang Bhatni, and I'm a blockchain and full-stack developer from India. I've been working on different blockchain applications for the past three years, and I have experience with multiple blockchain platforms like Ethereum, Hyperledger Fabric, and Algorand. I was also one of the Hyperledger interns in last year's Hyperledger Internship Program, where I contributed to the Hyperledger Chalo project. I'm pretty active on Twitter and Telegram, so if you have any feedback or questions about this session, or anything else, feel free to reach out to me.

For those of you who are not familiar with blockchain technology, let me give you a quick introduction. A blockchain is essentially a digital list of the changes that have happened to an application's state. This list of transactions is called a ledger, and to make a new change, one has to create a transaction, which is then appended to the ledger. For example, if a user wants to change their profile, that counts as a transaction and is appended to the ledger. Anyone can go back in time and see what change was made and by whom. This ledger is an important piece of a blockchain application, and it is distributed across a group of computers on a network, each holding a copy, so that the application state, the list of changes made to the application, is always verifiable and auditable. The ledger is also completely immutable: you can change the state of the application by creating a new transaction, but once a transaction is committed, it cannot be reverted.
These nodes also decide the next transaction to be added to the ledger, and no single node can do that alone; all of the nodes have to come to an agreement on the new state of the ledger. A node proposes a change to the ledger, which is distributed to all of the other nodes, and a consensus algorithm is then used to make sure the transaction is correct. These consensus algorithms come in different types, for example proof of work, proof of stake, and delegated proof of stake, and each uses different logic to find the next valid transaction.

A blockchain is actually a subtype of distributed ledger technology. A blockchain uses blocks to form the digital ledger, and the blocks act as a security mechanism for maintaining the shared information, the state of the blockchain. Every block contains a list of transactions as well as a link to the previous block. That's why a chain forms, and that also increases the security of the blockchain.

Blockchains can also be classified by their accessibility and transparency. A public blockchain is completely transparent, and everyone can see the data, whereas a permissioned blockchain is partially transparent: some members can be allowed to see the data while others are restricted. Private blockchains are usually used for development purposes only. Ethereum and Bitcoin are great examples of public blockchains, where all of the data is completely available to the public; you can go on the internet and browse every transaction ever made on these blockchains. Users are also treated equally, meaning all of them have the same privileges no matter who they are, and identity is anonymous: if you create two different accounts, neither can be traced back to you or to each other.
The underlying technology of such blockchains was great, but they could not really be used for applications like supply chains, asset registries, banking, or finance. Enterprises loved the idea of having immutability, trustlessness, and auditability in their applications, but they also needed a blockchain where membership could be controlled, where they could know who the members were, and where transactions could be kept confidential. Hyperledger Fabric came out of this need.

Hyperledger Fabric is a framework for creating permissioned blockchain networks. It is a project under Hyperledger, maintained by the Linux Foundation, with contributors throughout the globe. It is open source, and you can find the documentation at hyperledger-fabric.readthedocs.io. Hyperledger Fabric has a great list of features that make it popular among developers. You can write chaincode in Go, Java, and Node.js. The ledger also has SQL-like query capabilities. Privacy is a big feature of Fabric, because you can create channels where only certain members of the consortium can transact among themselves. There are also membership services, where you can create identities or revoke them. The consensus algorithms are flexible and scalable, and throughput is high. The whole architecture of Fabric is completely modular, so you can swap in different membership services and different consensus algorithms. These features make it a really popular choice for enterprises, and it also has great support from the community.

Now, these are the components of Fabric. There's the ledger, which contains a list of all the transactions.
There is chaincode, which is software running on a peer that is responsible for changing the state of the application, or the blockchain. Peers commit transactions and keep a copy of the ledger; they are also the endpoints through which applications interact with the blockchain network. And there are orderers, which decide the order of the transactions: peers send transactions to the orderers, the orderers bundle them, decide their order, and send them back to the peers to commit. Channels are separate spaces for members, and every channel has a separate ledger; if a consortium member is not a member of a channel, they cannot see its ledger. There are also MSP services, which authenticate and manage identities on the network; wallets, which are used for securely managing a user's credentials; and a certificate authority, which is used for registering and revoking identities. There's also the world state, which holds all of the current data of the blockchain and its applications, and consensus algorithms, which are used for deciding the valid blocks.

So these are the nodes required for creating a Fabric network: the certificate authority, used for registering identities and renewing or revoking certificates; the orderer, used to manage the order of the transactions; and the peer, which works as an endpoint for applications as well as storing the ledger and committing blocks to it. The certificate authority is completely modular: you can use either Fabric's own certificate authority or your own. The new version of Hyperledger Fabric, version 2.0, introduced some great new features like decentralized governance for chaincode, private data enhancements, and the ability to use external chaincode launchers.
Earlier, the chaincode, which is a very essential part of the blockchain network, had to be deployed inside Docker containers, even if you were using Kubernetes. But now you can use external services, host them anywhere, and use them as chaincode for your blockchain network. We're going to do this practically later in this session.

So what is Kubernetes? Kubernetes, at its most basic, is a system for running and coordinating containerized applications across a cluster of machines. It is a platform designed to completely manage the lifecycle of Docker containers, or any containers. It takes care of scaling and failover for your applications and provides deployment patterns like StatefulSets and ReplicaSets, so you can self-heal, scale, group, and deploy your applications and containers. Kubernetes is really good at managing containerized applications for the most part, but if you're looking to manage complex stateful applications on top of Kubernetes itself, you can look into Kubernetes operators, which are great for that. They make it easy to manage these complex stateful applications by providing custom controllers for custom resources. They are clients of the Kubernetes API, and they allow any level of customization of the deployment process.

The operator pattern aims to capture the key work of a human operator: a human operator has deep knowledge of how the system should behave in response to certain events, and a Kubernetes operator can be coded in a similar way to get the desired result out of the deployments or applications. A custom resource contains all of the application configuration, and the controller contains all of the business logic.
As soon as a custom resource is deployed, the controller takes charge and deploys all of the other required Kubernetes objects, like StatefulSets, ConfigMaps, and Services, for that custom resource. To create these operators, I used a framework called the Operator SDK, which makes building operators easier. It has a lot of tools for code generation and scaffolding, which really helps bootstrap a project quickly, so you don't have to waste time writing generic controller code. You can test the operator locally and see if it works correctly, and then, for a production-level system, you deploy it as a separate Deployment in the same cluster, where it keeps watching for the custom resources and performs the tasks as you intended.

So let's get started with the code. I'll share my screen so we can go through the process of creating these operators, as well as configuring them to create a deployment for Hyperledger Fabric. First of all, these are the few things we need to run this operator. Make sure you have both of these repositories cloned on your system: the first one is the operator itself, while the second one contains the Fabric-specific files and scripts. We also need the Fabric binaries, which we'll use for some Fabric-specific operations, and the Operator SDK, to deploy the operator on our Kubernetes cluster. You can also use the Docker image to deploy the operator, but for today's demonstration I'll be running it locally. I already have the repository cloned on my system, so let me give you a quick look through the code of the operator. There are two important folders in this code: an api folder and a controllers folder. The controllers folder contains all the controllers.
For example, there's a CA controller and a peer controller. A controller is created for each of the custom resources we define, and an API file is created for each as well. The controller contains all of the business logic, whereas the API defines what the custom resource is. Let me show you a custom resource API. Here you can see there are two fields in our Peer struct: one is the spec and the other is the status. The spec contains all of the specifications of our peer. All of this code is automatically generated by the Operator SDK, so you don't need to think much about how it is written; the important part is that you specify the spec as well as the status. The status holds the state of our deployed resource. Say you deploy a StatefulSet for your custom resource and attach a Service to it: you might need to know what its access point is, and you can store that kind of information in the status, whereas the spec contains things like the image name, the configuration parameters, the ports, the resources, and so on. Here you can see the common spec has three fields: MSP, TLS, and NodeSpec. As I said, NodeSpec holds all the node-specific information, whereas MSP and TLS contain the certificate files. These need to be converted to base64 before you put them here, and they are required by all three node types, not just the peer, for authentication and for connecting with each other in the network.

After that, let me show you how the peer controller works. As you can see, this is the Reconcile method. The first function to run is SetupWithManager, which sets up a new controller; the For() parameter here names the primary resource this controller will watch.
Our controller then runs this Reconcile function, which executes as soon as there is a change or a custom resource gets created. As soon as our custom resource is created, the controller detects it and creates a Secret and a Service for it. Once the Service is allocated, I check whether it has any ports, and if it does, I set the status of our custom resource to the Service's access point. Then I also create a StatefulSet, which contains the main peer container. The Secret contains all of the certificates the peer needs; as you can see, I have the TLS certificates as well as the MSP certificates here. The Service is pretty basic. I attach the Secret as a volume to the StatefulSet later on: as you can see, I give a volume to the StatefulSet, and the container has this command, which starts the peer. All the environment variables get set, and the volumes the container needs are also given here. This is pretty basic, and as soon as this happens, you'll see that our peer is deployed and you'll be able to interact with it.

This shows that you can write any kind of code here; it can be highly customized code that you add to the Reconcile method, and you can configure it in other ways too. You can do pretty much anything you want with this container: log details to some server, fetch things from other servers, all kinds of stuff, because this is just generic Go code. Once you make a change to these APIs, you need to run two make commands, make manifests and make generate. These create the custom resource definition files and the APIs required for our controller to work. You'll find the custom resource definitions in the config/crd/bases folder.
This is the CA custom resource definition, and you need to install it on your Kubernetes cluster before you can deploy a resource of this type. Now let me quickly run the operator. This is the command to run the operator locally; you can also run it using make deploy with the image name, in my case make deploy with my operator image, and it will automatically deploy the whole operator for you as a Kubernetes Deployment. Now our operator is running and listening for the peer, orderer, and CA custom resources in our namespace, which is the default namespace.

I have some sample files that can be deployed to stand up our Fabric components. Here they are; let me show you what the peer looks like. This is the peer custom resource, and you can see these are the certificates this peer will have. We'll generate these with the cryptogen command for now, or you can also get them from the Fabric CA; all you need to do is convert them to base64 and put them here. We also need the core peer configuration location as well as the binary files for the chaincode builder, which we'll talk about later. You can put all the configuration parameters in the config params section, and the image is the Hyperledger Fabric 2.2.1 peer image. Now I've deployed this custom resource, and you can see the operator says it has successfully reconciled a resource of kind Peer with name peer0-org1. Let's check the status of the deployment, and you can see here is a StatefulSet, peer0-org1. Since I'm using Minikube, I need to mount this directory as one of Minikube's directories so that my peer can access these files, so I'll just mount it to the /home/fabric location. Let's check the status: the container has started, and you can see our peer is running successfully. Now let's deploy the other nodes too. I'll deploy the second peer as well as the orderer.
Before this, we need to generate some Fabric-specific files, like the certificates and the genesis block. I've already done that, which is why I'm not showing it here, but you can use the Kubernetes Fabric Network repository to do it as well. The certificate creation script will create certificates for org1 and org2 as well as the orderer, and the create-genesis script will create a genesis block and put it in the orderer files folder. This folder is mounted as a volume into our pod so it can read the genesis block file.

As you can see, orderer0, peer0 of org1, and peer0 of org2 have all started. Our orderer is giving an error about not finding a genesis block; since we generated it just now, let's just restart it. Delete it, then create the orderer again and wait for it to start; as you can see, the container is in the waiting stage. The last one we need to deploy is the CA, so let's quickly deploy that too, and it has been successfully reconciled. Our orderer has also started: as you can see, the orderer is running and the system channel has been created from the genesis block.

Now what we can do is create a channel. First of all, create the channel artifacts for a channel named mychannel; this creates a channel configuration file and two anchor-peer transactions as well. Since all of our channel artifacts have been created, let's create a peer CLI pod, connect to it, and create the channel. If you look, there is a peercli.yaml file right here, so you can just create a CLI pod from that. As you can see, the peer CLI pod has been created, so let's go into it; all of the files are here. Now let's look at the create-channel file. Let's put in the channel name, mychannel, export these two paths, and set the peer's details, like its configuration files and its address, which will be peer0-org1:7051.
Once all of this is done, let's send the command for creating a channel. Our orderer is at this host, and the rest is all pretty much okay. There's an error coming back, something unsupported; let me put in the channel name once more, and now you can see the channel has been created and we have received block 0. Now let's join the channel from the peers as well. You can copy and paste the commands, or just run the script; let's run the script, and you can see that the channel has been joined. If you do peer channel list, you can see the peer is part of mychannel. So we can now say our Fabric network has stood up, and we can run all the commands we could run against a normal Fabric network. We haven't deployed CouchDB for the peers yet, but that is also pretty straightforward; I'll be adding it to the operators in the future and pushing those changes to the repository as well.

Another thing I wanted to show is the chaincode-as-an-external-service feature introduced in Fabric version 2.0, which lets you run the chaincode as an external service. Previously the chaincode had to be deployed in Docker containers even if you were using Kubernetes; now it can be deployed as an external service on any host, and the peer can execute it right from there. To do that, you have to change a few different files here and there, and I'll show you what they are. First of all, you need to change the core.yaml file: go to the externalBuilders key and add the location of the external builder. This path, builder/external, will contain the builder binaries, three files that I've taken from the external chaincode folder of the fabric-samples repository. They are pretty basic, and I haven't changed anything about them; all you need to do is mount them at this path and set a name for the external builder.

The chaincode itself has to be modified a little to make it compatible with this new feature. The first thing you have to change is metadata.json, where the type has to be set to external. The other thing is that a connection.json file has to be present; if you have TLS enabled, you can put the TLS certificate here as well. The important thing here is the address: our peer will look up this address to connect to the chaincode, so to connect your peer to this chaincode you have to set a hostname here. So let's do that, and deploy one chaincode for the first organization on port 8999.

Now we need to go to the chaincode external folder in the Kubernetes Fabric Network repository and package this chaincode as a chaincode package, which will then be installed on the peer. The code itself does not need to be present in the package, because it is running as an external service; all we need are the metadata.json and connection.json files. Once our chaincode has been packaged, we can install it on our peer. Let's go to the peer CLI and then to the chaincode folder, first export peer one's configuration parameters, then go to the chaincode external folder and install the chaincode. You can see the chaincode has been installed, and we can query the package identifier for this chaincode. Now let's do the same thing for organization 2. This time we are setting a different hostname, so we need to package it again. That's packaged; now let's switch the peer CLI to the second organization and install the chaincode again. We need to export the variables for the second organization and then install it, and you can see this has also been installed.

Now that the chaincode has been installed on the peers, all we need to do is start it as an external service, and then we can invoke it. If you go to the samples folder again in our fabric v2 operator repository, you will see there are two files, asset-transfer-basic for org1 and org2. All you need to do is change the chaincode ID here to the one you got after installing the chaincode; this one is for peer 1, so let's set it accordingly. Let me also quickly show you the chaincode. It's pretty basic, almost the same as the sample, except for the main function at the end, which reads the server configuration: the server address and the chaincode ID it has to listen for. The chaincode server address is the service itself, whereas the chaincode ID is the one we provide from here; let me quickly check that I set it correctly, and let's change it for the second organization too. This chaincode ID is supplied to the chaincode itself through the environment, and it will listen for any request that comes from the peer and execute it.

Now these are set, so let's start up our chaincode. Go to the samples CRD folder, organization 1 and organization 2. One thing to note here is that the image is of the chaincode itself: this chaincode has a Dockerfile from which you need to create an image, and then either push it or build it locally; that's where I got these images. Let's see if these are deployed; they are, and if we describe them, they are running perfectly, and the chaincode ID and the chaincode server address are being provided.

Now let's approve the chaincode from both organizations and see what happens. Let's do it from peer 1 first. One thing you need to do is change the package ID here; let's take it from the asset-transfer-basic package, copy it into the config file, and go back to the previous folder. Now it should approve: the transaction has been committed, and the status is valid. Let's do it for the second organization too; change the package ID to the chaincode that was installed here, export the package ID, and approve it, and the transaction has been committed. Now let's finally commit the chaincode definition to the channel, and this is also done; it is done from both org1 and org2, as you can see.

Now we can invoke this chaincode and see what happens. Again, let's switch to organization 1 and initialize the ledger; you can see the chaincode invoke has been successful and the result is 200. Do the same from the second organization, and we have got all the results back. Now if we go back and query from the first organization (that was the wrong command; from the first organization), we can see the chaincode has been working perfectly: both of the peers get the same records, and the chaincode is working as an external service, outside the Docker container it usually had to run in.

So yeah, that's pretty much it. If you need to change any of these configuration files, all you need to do is go to the fabric v2 operator and change the APIs, and if you need to change the logic, you can change the controllers as well. The repositories are open source, so you can change the code as you wish; the sample CRD files are also here, so you can change these too and use them in your deployments. The CouchDB deployment is still not done; I'll be doing it in the coming days and pushing it to the repository as well. Thank you all for staying throughout the session. I hope you learned something today, and if you have any questions or feedback, please fill in this form, or you can also reach out to me on Twitter and Telegram. Have a great day.
thank you