Okay, perfect. Well, thank you everyone for joining. In this meetup we're going to see how to get started with Hyperledger Fabric, but from the operations point of view. This is a yearly meetup that I've been doing — this is the third year — so this is the third year that I've been doing this workshop, where we deploy a Hyperledger Fabric network. We do it with bevel-operator-fabric, and we are the main maintainers of that.

A bit about me: I'm David Viejo, I work at Kung Fu Software, and we specialize in Hyperledger Fabric. We have been working with Hyperledger Fabric for three or four years, so we dedicate ourselves only to Hyperledger Fabric projects, and we are the main maintainers of the Hyperledger Fabric operator, which is what this workshop is about: deploying Fabric networks on Kubernetes efficiently and fast. And this is my LinkedIn in case anyone wants to reach out to me — feel free to do it.

So let's jump into some context about Hyperledger Fabric. For people that don't know it, Hyperledger Fabric is a blockchain platform that is permissioned. This means that not everyone can use the network; we need to give participants some kind of access. In this network there is a consensus, and in Hyperledger Fabric there is pluggable consensus, as we can see here, which allows for different methods of reaching consensus. I did a meetup on Hyperledger Fabric 3.0 — right now there is only one consensus protocol, which is Raft, but in Hyperledger Fabric 3.0 there will be a BFT, Byzantine fault tolerant, consensus. So this pluggable consensus will make sense in the next major version, which will be 3.0.

Hyperledger Fabric supports smart contracts, but they are called chaincodes. They are regular programs that are written in Go, JavaScript or Java.
And this makes it very easy for developers coming from outside blockchain who have been developing with mainstream languages, because usually the language for blockchain networks — in Ethereum — is Solidity, but in this case there is no need to learn another language; you can use the regular language that you are used to using on a daily basis.

It allows the creation of private channels. Imagine that we want to make a network between three organizations: channels make it possible that only these three organizations will be able to access this private blockchain network and interact with the smart contracts. And also imagine that, within those three organizations, we want to share data among only a subset of the participants — a sub-channel, let's say for now. This is also possible, with private data collections. These are concepts that are hard to grasp in just an hour and a half; private data collections are something you will understand when you use them.

So, the idea so far: we have a permissioned network that only authorized participants can see and access; in order to modify the data we use smart contracts; and we have channels in order to form consortiums — agreements to have a network, deploy smart contracts and exchange data. That is what Hyperledger Fabric is about. Of course, this is open source, so it is highly configurable and has many parameters.

Now, the context for this meetup: when we started these meetups we were on Hyperledger Fabric 2.3; right now we are on Hyperledger Fabric 2.5 LTS, which is the long-term supported version.
And we will use the channel participation API. To give you some context: when I started, in Hyperledger Fabric 2.2, there was this system channel, and in order to create a channel you needed to create the system channel first and then create the application channel. This introduced an extra step, because creating a channel is not easy. So the channel participation API was introduced in 2.3 and is now the preferred way to manage networks. So 2.5 LTS will be used.

All of the operations will use the kubectl plugin from the Hyperledger Fabric operator. This is a plugin that is specifically built for kubectl and has specific logic to connect to and interact with Hyperledger Fabric networks. Then chaincode-as-a-service will be executed using Kubernetes: this basically means that we will have the smart contract deployed as an external service — the chaincode is just a regular server — and the peer will connect to the chaincode. We will see the peers and orderers in the coming slides. The code can be found in the link that I sent in the chat, or by scanning this code. And we're not using cryptogen: everything that we will do will be automated using the operator. But in order to understand the automation, we need to understand the concepts.

So these are the components — let's start here and then come back. We have three major components in a Hyperledger Fabric network: the peer, the orderer and the chaincode. Everything else... well, we also have the SDK client, which is the component that connects to the network, but the network itself is composed of these three components. Okay, I pasted the link to the repository in the chat. So we have the peer, and the peer belongs to a peer organization, and the orderer belongs to an orderer organization. There can be organizations that have multiple nodes of each type, so there can be organizations that participate in
the consensus — and the orderer nodes are the ones that participate in the consensus — and there can be organizations that execute the chaincodes but do not take part in the consensus. Usually, and this is to have high availability, we have three orderer nodes, because we're using the Raft consensus protocol. In this meetup we will create three orderer nodes, and we will have one orderer organization, which we will see later. And the peers will be the ones that execute — that connect to — the chaincode. So each peer organization will have one or more peers, and they will need to deploy the chaincode. And the chaincode is the component that contains the smart contract logic: whenever you see "chaincode", you can translate it to "smart contract". And each of these components has TLS certificates and signing certificates.

So this is the global picture. Let's come back to the peer to see its characteristics. Maybe I'm going too fast, but if you have any question about the concepts or anything, just drop it in the chat — I have it side by side, so I can see any questions in Zoom.

So the peer is a central part of the execution of the smart contract: whenever there is a transaction, the peer needs to be involved in the transaction flow. A question from YouTube: does the chaincode support other languages? External chaincode supports Go, Java and Node.js, so you can use JavaScript or TypeScript — I think TypeScript is better because it gives you typing and everything. "The SDK is now the Fabric Gateway" — yeah, we can explain that; it depends on the level of the audience, and we can go deeper later, but you can use both. What the gateway did is move this flow — where you need to go to one peer or multiple peers to collect endorsements and then submit the transaction to the consensus, to the ordering service — into the peer. These two arrows are the ones that are replaced by the gateway service, which runs in the peer.
Because what happened is that the SDK client existed in Java, in JavaScript and also in Go, and I'm sure there were inconsistencies, or a bug fixed in Go needed to be ported to Java. So what they did is say: okay, let's migrate all of that functionality and make a service that can collect endorsements and submit the transactions to the ordering service. And this was included in the peer. So right now the SDK client just contacts the peer, and the peer is able to endorse the transactions, invoke the chaincodes, etc., and then submit the transactions for the orderer to finalize the block. That is the idea — this was for Sergio, who asked about it.

So let's go to the peer. As I said, the peer participates in the execution of the smart contract — participates in the transaction flow. A peer can act as an endorser: this means that if the smart contract is installed on this peer, then it will be able to endorse transactions and execute the smart contract. If not, then the peer, if it's part of the network, will just receive the blocks, and that's it: it will maintain a copy of the ledger, which is a valid option. Imagine that you have a network with multiple peers and you want a separate peer as a backup database in case something goes wrong — this is a valid use case. That peer will only receive blocks from the ordering service but will not execute any chaincode.

Peers also maintain a copy of the current state of the ledger and can update the state based on valid transactions. This is the concept of the state database. Maybe I can do a diagram of this — one moment. So a peer has two parts: it has the state database, which can be LevelDB or CouchDB, and it has the block storage, which just stores the blocks raw, on the file system. And in the Hyperledger Fabric documentation, this — the combination of the state database and the block storage — is called the ledger.
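The split just described — append-only block storage plus a state database that keeps only the latest value per key — can be sketched in a few lines of TypeScript. This is a toy simulation of the semantics, not Fabric code; `MiniLedger`, `get` and `history` are illustrative names (`history` plays the role that `GetHistoryForKey` plays in chaincode):

```typescript
// Simplified model of a peer's ledger: an append-only block log
// plus a state database that keeps only the latest value per key.
type Tx = { key: string; value: string };

class MiniLedger {
  private blocks: Tx[] = [];                  // block storage (append-only)
  private state = new Map<string, string>();  // state DB (the LevelDB/CouchDB role)

  commit(tx: Tx): void {
    this.blocks.push(tx);             // every transaction is kept forever
    this.state.set(tx.key, tx.value); // the state DB overwrites with the latest value
  }

  // Normal query: latest value only, served by the state DB.
  get(key: string): string | undefined {
    return this.state.get(key);
  }

  // History query: replays the block log.
  history(key: string): string[] {
    return this.blocks.filter(tx => tx.key === key).map(tx => tx.value);
  }
}

const ledger = new MiniLedger();
ledger.commit({ key: "K1", value: "V1" });
ledger.commit({ key: "K1", value: "V2" });

console.log(ledger.get("K1"));     // "V2" — only the latest value in the state DB
console.log(ledger.history("K1")); // ["V1", "V2"] — full history from the blocks
```

This is also why the state database stays much smaller than the block storage: it holds one entry per key, while the blocks hold every write ever made.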
So imagine that we write K1 = V1 to the ledger, and later there is another transaction, K1 = V2. In the blocks there are two transactions: the one that sets K1 to V1 and the one that sets K1 to V2. But in the state database only the latest one is stored, so K1 = V2. You can imagine that the state database will be much smaller than the block storage. And why do we have the state database? Because for simple queries, most likely we want the latest value of a key; we don't care about the history. But if we do care about the history, there is another database, the history database — which is in LevelDB for sure; I don't know whether, if you use CouchDB, it would be in CouchDB — that basically indexes all of these transactions, all of the modifications. So in the smart contract you can query for the history of a certain key, and this will tell you that the key had the value V2, which is the latest, and before that the value V1. This is also stored with the transaction timestamp and, I think, the block number, so you have some extra metadata. So these are the storages that live on the peer, and they exist whether or not a chaincode is installed — this part is static. So let's continue.

Okay. Peers can be configured to operate in different roles, such as anchor peers, which are used to anchor the organization to a channel. We will understand this better when we see the channels: anchor peers are the peers of an organization that are well known to the channel. If we go back to the diagram, we have here a channel — let's call it channel "demo" — and then we have peer0 of organization 1 and peer0 of organization 2. In the channel configuration, the data for the peers that need to be well known is stored.
So when peer0-org1 joins this channel, there is a reference in the anchor peers saying that organization 1 has peer0-org1, and then there is another record for peer0-org2. This is how peers know about each other. Then there is a discovery service, where basically all the peers start to talk to each other, and a view is formed: if organization 1 had another peer, peer1, then through the anchor peer the others would learn about it, and organization 2 would be able to discover that peer too. This is called the discovery service. This is more advanced — I just wanted to give you a quick view of the channels and of why anchor peers are mentioned here, because many times most of the problems happen because we don't have the right anchor peers set. So keep that in mind. Let's continue.

So, one of the questions that you could ask: why a Kubernetes operator? When I started with Fabric in 2020, there were — I'm sure you know about them — the fabric-samples, which are a great way to get started, because with one shell script you can spin up a network in Docker and start developing easily. But when I launched my first network, it was hard for me to understand everything. It was very hard to add organizations, to add peers, to add orderers, to add chaincodes, to add channels. And since all of our networks were deployed on Kubernetes, I discovered this concept of a Kubernetes operator, and I found that there was a way to abstract most of the logic that happens behind the scenes of deploying Hyperledger Fabric peers and orderers, and of managing channels too. So what the Kubernetes operator I created does is abstract the logic of creating HLF components: just like in Kubernetes you can create a Deployment, a Service, an Ingress, etc., you just declare what you want — I want a peer with an image pointing to this Docker registry, with these resources, from this organization, etc.
And then the operator takes care of deploying this for you: it deploys a Deployment and a Service, it creates a gateway so that you can access it from outside, it creates the certificates, etc. The idea is to make it easier to deploy networks and to expand them. There is an abstraction over the initial bootstrapping of the node, and what is good about it working on Kubernetes is that you can deploy it on premises, on cloud, etc. — as long as you have an operational Kubernetes, you can use this Hyperledger Fabric operator and it will work. And it's customizable for specific use cases, for example the certificate renewal, which was implemented in the latest versions: you can configure the operator to renew the certificates automatically, which is a task that is often overlooked, and there are many networks that end up with expired certificates. So this is a great use case.

We saw this before, but let's explain it again. We have the peer, which has these two sets of certificates: the TLS certificates and the signing certificates. For more context, the signing certificates are used to endorse the transactions, and the TLS certificates are used to communicate with other peers. The peer uses the TLS certificates to spin up its server, which speaks gRPC, so that other peers can communicate with it. That is why we need both TLS certificates and signing certificates. And the same happens for the orderers: we have the TLS certificates, and then the signing certificates, which are used to sign data — which can be finalizing and signing the block, or signing a transaction — in order to be able to verify it later. And then we have the SDK client, which only has the signing certificates. These signing certificates are the certificate of the user, and this certificate and the private key are what will be used in order to submit the transaction.
This is using the SDK client, but if we go through the gateway service, which Sergio mentioned earlier, we will also need this signing certificate in order to sign the data of the transaction that we want to execute.

The other component that is very important is the orderer. We need to have multiple orderers for the network to be highly available — usually we have three — and the consensus that we will use is Raft. What the orderer does is receive transactions from the clients and order them into a block. There is some logic that happens there; for example, we can configure, for a specific channel, the maximum number of transactions that can be bundled into a block. By default this is 10, but we will see that we can increase this number. Then the orderer broadcasts the block to all of the peers in the network. Before, there was a gossip protocol for propagating state, which was deprecated not long ago; right now what the peers do is subscribe to the orderers in order to get the latest blocks. That means the orderers need to hold the full copy of the ledger, and this ensures that all of the peers have the same copy. The ordering service is also responsible for maintaining the consistency and the security of the ledger, so it will validate that the block and its data are valid, basically. And the ordering service can be either a single node or a group of nodes. If you spin up a Hyperledger Fabric network with one node for a short period of time, it can work, but if you want the network to be scalable, then you need at least three, because — in my experience — if you start with one, you cannot easily scale to three orderer nodes later. That has been my experience.

And we have the Fabric CA. The Fabric CA is a server that was developed by Hyperledger Fabric and is responsible for issuing and managing the certificates that are used.
In a Hyperledger Fabric network, yes — but more specifically, there will be a Fabric CA, a Fabric certificate authority, per organization. So how does this... one moment, I need to drink water. How does this Fabric CA work? First, we initialize the Fabric CA, and we initialize it with two certificate authorities: the signing certificate authority and the TLS certificate authority.

There is a question: can cert-manager be used instead of Fabric CA? At the moment, no, it cannot be used, but you can raise an issue on the Hyperledger Fabric operator repository. In the latest version, though, you can bootstrap the Fabric CA with custom certificates, so in theory you could do a custom integration: generate the certificate authority with cert-manager and then include it in the Fabric CA.

So basically, we initialize the Fabric CA with two certificate authorities, and then, based on that, we can register users. We register these users using the initial user and password, which are called the enroll user and enroll password, and then we create another user. This user can be a client, a peer, or an orderer — these are the three types of users — and there is also the admin type of user, in order to manage the network. So we register these users and then we can enroll the user. What enroll means is that the client generates a private key, and then there is a certificate signing request (CSR). This CSR is sent to the Fabric CA; the Fabric CA signs this CSR and then returns it.

So this is how it looks. We have the Fabric CA. It has a database, which can be SQLite. There is an initial enroll user and enroll password. Then someone needs to register a user — they go to the Fabric CA and register a user, say "client1" with password "clientpw". And then the user creates — so there is a creation of — a CSR, a certificate signing request. And then this CSR is sent to the Fabric CA.
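This register-and-enroll flow can be simulated end to end with Node's built-in crypto. It is a deliberately simplified sketch — a real Fabric CA issues X.509 certificates from a PKCS#10 CSR, while here the "certificate" is just the CA's signature over the client's identity and public key — but the key property is the same: the private key never leaves the client.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// 1. The CA has its own keypair (created when we initialize the Fabric CA).
const ca = generateKeyPairSync("ec", { namedCurve: "P-256" });

// 2. Register: the CA records an enrollment id and secret.
const registered = new Map([["client1", "clientpw"]]);

// 3. Enroll: the client generates its private key locally...
const client = generateKeyPairSync("ec", { namedCurve: "P-256" });

// ...and sends a CSR-like payload: identity plus public key, never the private key.
const csr = Buffer.from(
  "client1:" + client.publicKey.export({ type: "spki", format: "pem" })
);

// 4. The CA checks the credentials and signs the request.
if (registered.get("client1") !== "clientpw") throw new Error("unknown identity");
const certSignature = sign("sha256", csr, ca.privateKey);

// 5. Anyone holding the CA's public key can verify the resulting "certificate".
console.log(verify("sha256", csr, ca.publicKey, certSignature)); // true
```

The enroll user and password in step 2, and the "client1"/"clientpw" pair, mirror the example from the talk; everything else is illustrative.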
And then the Fabric CA signs it and returns the certificate to the user. So this is the flow. The Fabric CA is deployed in the Kubernetes cluster, and SQLite is the default database, but you could also use Postgres. So you can choose between Postgres and SQLite — I'm not sure if you can use any other; these are the two databases that I have tried.

There is a question: "You mentioned that peers subscribe to the orderer to receive updates. Does this subscription to the orderer by peers replace the gossip protocol for propagating updates to the chain?" No, because the subscription is only for the blocks — this is a question from Jeff Brousel — it is only for the blocks. If you have private data collections... let me illustrate the private data collections. We have the channel, channel "demo", and let's say that we have two organizations, each with one peer: organization 1 and organization 2. For private data collections, if we check the block structure, we will see that the private data is not saved into the block: only the hash of the private data is saved into the block, and the key is hashed as well. So if we have a private data write K1 = V1, what will be written to the ledger is hash(K1) = hash(V1). And there is a private store in each peer. So what ends up happening is that this private data, when we execute the smart contract, is stored in the private data store of each of those peers. And these two peers — well, basically each peer will start contacting the other peers, and if they are missing some private data, it will be fetched from the other peer. So that is the main use case that I have seen for the gossip protocol today — sorry, with private data collections the peer fetches the missing private data, while only the hash is written into the chain itself, not the private data. Okay, so that was the flow of the Fabric CA, and this is the flow for the private data collections.
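To make the "only the hash goes on-chain" point concrete, here is a small self-contained sketch. Fabric actually hashes the serialized private write set, so this is a simplification, but the privacy property it shows is the real one: the block reveals hashes, never the plaintext key or value.

```typescript
import { createHash } from "node:crypto";

// Hex-encoded SHA-256, standing in for the hashing Fabric applies
// to private data collection writes.
const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

// The actual write stays in the peers' private data stores...
const privateWrite = { key: "K1", value: "V1" };

// ...while the block only carries the hashes.
const onChain = {
  keyHash: sha256(privateWrite.key),
  valueHash: sha256(privateWrite.value),
};
console.log(onChain); // two 64-char hex digests; the plaintext never hits the block

// A peer that holds the private data can still prove it matches the block:
console.log(sha256("V1") === onChain.valueHash); // true
```

This is also why peers that are missing a private data write can verify what they later fetch from another peer: the fetched plaintext must hash to the value already committed in the block.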
Let's continue. So the CA node provides an enrollment service that allows entities to request a digital certificate — this is what we just saw. The CA node also provides a revocation service: it has a CRL, a certificate revocation list, that can be used, but then you need to replicate it through all of the nodes. And I think it's very complex — I mean, if you have a breach it would be recommended, but I haven't seen any network that uses this, because you need to restart almost all the nodes... or I'm not sure if you need to restart them, but you need to propagate the CRL through the whole network. And there may be nodes that are not hosted by your organization, so they need to be updated by other organizations that have other policies. So revocation is complicated, but the service is supported.

Now we're getting into the Kubernetes operator — into the HLF operator resources. There are two kinds of resources. There are the physical ones, which end up creating an actual deployment in Kubernetes — for example, the peer, the orderer, the certificate authority; these are all components that get deployed. And there are the logical resources, which are used to automate some operations. We have the FabricMainChannel, which is used to create a channel. We have the FabricFollowerChannel, which is used for peer organizations to join a specific channel. We also have the FabricNetworkConfig, which is used to create network configurations for Fabric based on the current organizations, peers, orderers, etc. — so instead of having to build the network config manually, we can use the FabricNetworkConfig; it also has support for external peers, external orderer organizations, orderer nodes, etc. And we also have the FabricIdentity, which automates all of the enrollment flow. So if we move this — let's move this.
So what the FabricIdentity automates is this flow: first, the operator registers the user — this is optional — and then it enrolls the certificate, and this is stored as a Kubernetes secret. So instead of the user doing it, the operator gets the certificate and then creates a secret. And not only that, it also manages the automatic certificate renewal. We built this because expiring certificates were a common problem. The user certificates by default expire in one year, but what the operator will do — we can see this in the workshop — is, 30 days before expiry, automatically renew the certificate and re-create the secret. Also, the FabricNetworkConfig can reference an identity created by the operator, and the network config is refreshed every minute, generating another secret, which is the network config secret. So by using this FabricNetworkConfig from the operator, you will be able to have an up-to-date network config with no expired certificates. This is really great. And these are the two types of resources.

We support the operator UI, which is a UI that we have developed, specific to Fabric, and it will show the channels and the data for the channels: this includes the anchor peers, the channel configuration, the orderer organizations, the peer organizations, and also the blocks and the data inside the blocks. And we also support the Fabric Operations Console. There is a tutorial — the reason the tutorial might not be up to date is that I haven't used it; I implemented it because it was useful, but it is not really 100% tested. The Fabric Operations Console was developed by IBM. And well, if you only want to see the network — the blocks, et cetera — I think the operator UI is very easy to deploy, because it also integrates with the FabricNetworkConfig, while the Fabric Operations Console is for more complex use cases.
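The 30-day renewal rule described above is easy to sketch. The function and field names here are illustrative, not the operator's actual code or CRD fields; only the one-year default lifetime and the 30-day window come from the talk.

```typescript
// Sketch of the auto-renewal decision: renew once the certificate is
// within 30 days of its expiry date.
const RENEWAL_WINDOW_DAYS = 30;
const DAY_MS = 24 * 60 * 60 * 1000;

function shouldRenew(notAfter: Date, now: Date): boolean {
  return notAfter.getTime() - now.getTime() <= RENEWAL_WINDOW_DAYS * DAY_MS;
}

// Certificate issued for one year, as the default user certificates are.
const issued = new Date("2024-01-01T00:00:00Z");
const notAfter = new Date(issued.getTime() + 365 * DAY_MS);

console.log(shouldRenew(notAfter, new Date("2024-06-01T00:00:00Z"))); // false: far from expiry
console.log(shouldRenew(notAfter, new Date("2024-12-15T00:00:00Z"))); // true: inside the 30-day window
```

A controller loop would evaluate something like this on each reconcile and, when it returns true, re-enroll the identity and rewrite the Kubernetes secret.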
So we have a question: can we use the HLF operator to deploy on a cloud solution like AWS or DigitalOcean? Yes — as long as you have a Kubernetes cluster. You will need to configure different parameters, the storage class for example, and your DNS, which is the most critical part, but you can deploy it on any cloud provider where you can provision a Kubernetes cluster. This was a question from George.

And this was an answer for Martin: yes, but "there is currently an issue I'm facing where you can't use private subnet-based clusters". I'm not aware of this issue — I mean, I have been able to deploy it on Azure, AWS and on premise. If there is an issue deploying it in private subnet-based clusters, it is probably because of the network configuration, because, as we can see in this picture, what the HLF operator does is deploy these components — the peers, orderers, Fabric CAs — and if the network configuration for these components is not right, then it won't work: when creating the FabricMainChannel, the operator will try to connect to the peers through their domains, and then it basically won't be able to reach the peers, nor the orderers, nor the Fabric CA.

We have another question: "We can deploy it with Hyperledger Bevel using the HLF operator. Are there any issues with deploying it this way?" So, when I talk about the HLF operator, I'm talking about bevel-operator-fabric, and I'm maintaining the operator — I don't maintain Bevel itself. In theory, yes: with Bevel 1.0.0 you can deploy it using Hyperledger Bevel, which can also be used to deploy other networks, as far as I'm aware. I'm sure the Bevel team has done an incredible job, so I think it will be supported.
"The only way to deploy resources with the HLF operator" — this is another question — "is by using kubectl hlf. Is there a way to use the API?" Yeah, in fact I have a post on LinkedIn where I explain this — one moment, basically this one; I'll put it in the chat. You can use Kubernetes as the API, and you have the source code here. So here is the operator client, and remember this slide: all of these resources that you can see here are just YAMLs, and these YAMLs are the declarative way that we were talking about. There is a YAML for creating a peer, a YAML for creating an orderer — what kubectl hlf does is just make this very easy, because almost all of the parameters have defaults and some of them are computed, but in the end kubectl hlf uses this operator client. In this example — this is Go, by the way — we first load the kubeconfig, then we get the HLF client, and then we list the peers. And the same way you can list the peers, you can create a peer, delete a peer, list orderers, et cetera. So it's something that is supported, but you will need to deal with the complexity of doing it this way.

Okay, so let's continue. Any other questions? Maybe from YouTube — I'm not looking at YouTube, so maybe there's a question that I'm not seeing. Let me check... okay, there are no questions, so let's keep going.
So, recently — and this was two days ago — we released 1.10. This introduced new resources and also some bug fixes, but the major ones: for the FabricIdentity, the automatic certificate renewal can now be configured. Then the FabricNetworkConfig has been expanded, because now we can add external peers and external orderers; this is useful for companies that are using the operator to manage the network config but have multiple clusters, so the configuration of some peers cannot be fetched from the current Kubernetes cluster, because they live in other Kubernetes clusters. There was a question from YouTube — which version of Hyperledger Fabric does it support? It supports 2.5. Also, the enrollment and the Fabric CA now support external secrets, so you can initialize a Fabric CA with a certificate authority that has been generated offline — this was also a feature that was needed. And many of the bug fixes — almost all of them came from me testing it — were small bugs that happened primarily in the FabricMainChannel and FabricFollowerChannel; these have been addressed. So this is the new version.
So, what are the skills needed? We are getting close to the workshop. We need to know about cryptography, because, as we saw, every peer, every orderer — every component in a blockchain network — has cryptography involved; by cryptography I mean signing data, encrypting data, etc., and in this case signing data is the main use case. We need to know about Kubernetes — if you don't know Kubernetes, I recommend watching a video or doing a mini course on it, because deploying the network is 10%, but understanding what we have created, and seeing what can go wrong and what can be improved, is the 90%. Then we will need Docker in order to deploy the chaincode, and Node.js with TypeScript in order to develop the chaincode — I don't know if we will have time; I will try to focus mostly on the operations point of view. And basic networking concepts: DNS, TLS, just the communication. This is very useful because, usually, when we deploy a Hyperledger Fabric network, one of the critical parts is how the users will access it.

In this case we're using Istio. If we go to the architecture diagram — I don't know if I have a picture here; well, let's draw it — the architecture that we will build in the workshop is this one. We have ourselves, the user, and we connect to the Kubernetes cluster. The entry point of the Kubernetes cluster is Istio, which in Kubernetes acts as an ingress controller — it's our ingress, the entry point when we try to access a service on the inside. So Istio is the load balancer, and behind it we will have peers, we will have orderers, and we can have anything we want in the Kubernetes cluster — we can have APIs that you deploy in order to connect to the orderers and the peers. Istio will route to each of them, and there are rules that can be set in Istio, so it's highly configurable. So if this breaks — which most
likely is what happened to Martin, who had problems with private subnets — if any of this communication is not working, then the network will not work. Apart from these components being deployed, we will also have the HLF operator, which is deployed inside the cluster, we will have the chaincode deployed, etc., and the peer will be able to communicate with the chaincode via the networking inside the cluster. So this is the core architecture. We could add monitoring and more, but that is not really for this meetup — there are courses for that — so it is not covered in this workshop. But this is the idea.

What we want to do in this workshop is create two organizations. First we will deploy one organization along with OrdererMSP, which will be the orderer organization hosting the three orderer nodes, and then we will add the second organization — because most of the problems that arise are: "okay, I finished the workshop as it was last year, I have a network with two peer organizations and one orderer organization; now, how do I expand it?" So let's first create one organization and then add the second one. Each peer organization will have a chaincode, and we have two peers, and then we'll have the SDK client — with TypeScript or with the kubectl plugin, it doesn't matter — which will execute transactions. So, demo time — but let's answer some questions first and then jump in.

A question from YouTube: which version of Hyperledger Fabric does it support? This was answered — it is 2.5, but it also supports 2.4 and 2.3, although I don't see a reason to deploy lower versions. "Instead of Istio, could we use nginx?" Correct — I haven't tested it, and in fact one of the major features that I haven't mentioned is the Traefik support; it's not documented as of now, but
right now you can use Istio and Traefik. nginx I don't think is supported; you will need to do your own research on this, because nginx's support for SNI, Server Name Indication, is quite poor, and I don't think the nginx ingress controller has much support for it. (Ah, that was fixed in a release? Weird, okay, thanks for that.) Also, thanks Martin for the question: the chaincode deployed on the peer, should it be deployed as a service, or does it not matter? The chaincode should be deployed as a service, and this is the recommended way. What happened before is that, first of all, the peers were deployed in Docker, and the peer ended up deploying the chaincode containers behind the scenes: chaincode 0, and this could be another chaincode, chaincode 1. That had problems. First, Kubernetes dropped support for Docker, and the runtime that is supported now is containerd, so most cloud providers don't support Docker anymore. Second, if the peer deploys the chaincode on the same node, and on Kubernetes of course it does, then there is no observability: you basically cannot know if a chaincode is up or down, or how many resources it is consuming. So the chaincode should always be deployed as a service, and what this means is that there is a port, 7052 here, and when we install it we tell the peer which server it needs to connect to, and that's it. In theory the chaincode could be developed in any language, in Rust, in Python, but only Java, Go, and Node.js are supported. Okay, there are no more questions, so let's move to the workshop; I will paste the repository link here. There's another question: could the chaincode be deployed as a separate Docker service? Yes, and you can even run it locally without Docker, with the right parameters.
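As context for the chaincode-as-a-service model above: when the chaincode is packaged for the peer, the package carries a small connection file that tells the peer where the chaincode server listens. A minimal sketch, assuming a hypothetical in-cluster service name (the shape follows Fabric's external chaincode `connection.json`):

```json
{
  "address": "org1-chaincode.default:7052",
  "dial_timeout": "10s",
  "tls_required": false
}
```

The peer dials this address; the chaincode server never needs a reference back to the peer.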
Okay, so everything needed to run this is in the README, so feel free to start there, and give the repo a star. All of the steps are numbered, so we just copy them; I will execute these commands and, at the same time, explain what they do. I will also paste the link on YouTube just to make sure.

Okay, perfect. With the repository cloned locally I have some leftover files, which I will remove; they're not needed, so let's start in an empty folder. The first thing that we need is the Kubernetes cluster. We have two options for this, k3d and kind. Lately I've been using k3d, which is just a wrapper around k3s, but you can use any Kubernetes distribution that you want. One thing that we need to know is that we need to expose two ports: 80 and 443 are the ports on the host, and they map to two node ports inside Kubernetes, 30949 and 30950. Why do we need this? Because we want to be able to access the network locally. If we deploy a peer as peer0-org1.localho.st, which is the domain that we will use, then public DNS resolves localho.st to the loopback IP, 127.0.0.1. So when we type that domain plus port 443, we want to reach Istio, which will have those two node ports open, and the ports on the host are wired to them; that is why we need the mapping. If you don't do this mapping, if you provision the cluster with, I don't know, k0s, and you don't map the ports, then this workshop will not work. The reason we do it this way is that we need a fully qualified domain; without one we would need a bunch of workarounds that would not be close to a live environment like AWS or Azure. So this is the first thing that we need to understand.

Let's create the cluster; this will take some time. What it does is create a k3d cluster on Docker with two agents. If you are using kind, the same is needed: the extraPortMappings for 30949 and the same for the TLS port. Right now we can do kubectl get nodes and we have the three nodes ready, and we can see that the current context is the cluster that was just created. We can do k3d kubeconfig get with the name of the cluster as we specified it, hlf, and save it into a kubeconfig file. Earlier someone asked how to interact with the operator using Go: you can use this kubeconfig file in order to interact with the Kubernetes cluster. There is a question: can we use minikube, or Kubernetes on Docker Desktop, instead of k3d? I haven't tested it; locally I use k3d or kind. It should work, but you will need to check that these port mappings are available. If they are, then I think there will be no problem; you will need to change some parameters that I will mention during the workshop, but it should be fine.
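The cluster-creation steps above can be sketched like this; the cluster name `hlf` and the exact node ports are assumptions taken from the workshop repo's defaults, so check the README if they differ (k3d v5 syntax):

```bash
# Host ports 80/443 are wired to the node ports Istio will listen on (30949/30950)
k3d cluster create hlf --agents 2 \
  -p "80:30949@agent:0" \
  -p "443:30950@agent:0"

kubectl get nodes                    # should show one server + two agents, all Ready
k3d kubeconfig get hlf > kubeconfig  # save the kubeconfig for external clients
export KUBECONFIG="$PWD/kubeconfig"
```

With kind, the equivalent is an `extraPortMappings` section in the cluster config, mapping container ports 30949 and 30950 to host ports 80 and 443.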
The next step is installing Istio. Going back to the diagram, the Kubernetes cluster is provisioned, so that part is done, and now we need to provision Istio. To do that we download Istio and add it to the path; this is the folder that I removed at the start of the workshop. The binary is istioctl, and we will use this program to initialize the Istio operator, because Kubernetes operators are a common pattern: if you have been working with Kubernetes for a long time, this will sound familiar. So first we create the istio-system namespace, and then, in order to install the Istio operator, we execute this command. To visualize the cluster I use Lens, a tool that you can download from lens.dev; highly recommended, you can explore any Kubernetes cluster and it's very fast. This is what we're going to use to see what has been created in the cluster and to troubleshoot problems if they arise. (Connection refused; okay, let me retry, and we are in.) There is a namespace that has been created, istio-operator, and the Istio operator is already running there. At the left we have the custom resource definitions, which are the definitions used to, quote-unquote, extend Kubernetes; we will see the same thing when we install the HLF operator. What this Istio operator has created is the IstioOperator resource type, of which we have none right now, and it will be used to install the Istio ingress gateway.

So the next step is to create that resource: the apiVersion is install.istio.io, the kind is IstioOperator, and the metadata namespace is the one we have just created. There are some other components, which I won't get into, because you can install Grafana at the same time that you install the Istio gateway, but the important part is the components section: we're going to have one ingress gateway, enabled, with two replicas for the deployment, and these resources. If you deploy this in a production environment you can tweak the resources and the number of replicas; I recommend at least two. Then the service, and this is where the mapping comes in: you see that the node port is 30949, which is needed for the mapping we declared when we were creating the Kubernetes cluster, where host ports 80 and 443 are redirected to these node ports. So now, creating Istio, we're telling it: you need to listen on these two ports, the first one for HTTP and the second one for HTTPS. The service type is NodePort and the name of the ingress gateway is istio-ingressgateway, with those ports. Then the pilot section: I'm not really sure, because this is Istio's side, but I think this is istiod, which manages the configuration; in this case one replica is enough and the resources are very minimal, so this is fine. Let's execute the kubectl apply. What happens is that a resource is created; right now the status is Reconciling. This is the list of IstioOperators, so you could have multiple gateways, and the Istio operator will deploy the ingress gateway, the same way that the Hyperledger Fabric operator, when we create a kind, in that case not IstioOperator but FabricPeer, FabricOrdererNode, or FabricCA, will deploy it automatically based on the configuration that we select. We just write the YAML, as we have done here for the Istio operator; we will see that later with the Fabric resources.
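The IstioOperator resource described above, condensed to the parts that matter for the workshop; the resource requests and the exact target ports here are illustrative, so treat this as a sketch rather than the repo's exact manifest:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  profile: default
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          replicaCount: 2          # at least two in production
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
          service:
            type: NodePort
            ports:
              - name: http
                port: 80
                targetPort: 8080
                nodePort: 30949    # must match the k3d/kind port mapping
              - name: https
                port: 443
                targetPort: 8443
                nodePort: 30950
    pilot:
      enabled: true
      k8s:
        replicaCount: 1            # istiod; minimal resources are fine here
```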
Checking the diagram, this step is done, so now the next step is to configure the internal DNS. We have the host port mapping, but what happens for the peers that are inside? We have peer0 for organization one and peer0 for organization two, and these run in different pods (a Kubernetes concept), and they need to communicate between them using the same URL: the organization two peer will try to connect to peer0-org1.localho.st. So, to let other containers reach this peer, we tweak the DNS. If we don't specify anything, localho.st will resolve to 127.0.0.1 and the connection will fail, because the pod will try to contact something on its own loopback. Instead we change the configuration so that it goes to Istio: inside the cluster, localho.st names are redirected to Istio, and then Istio acts as a load balancer and redirects to the right peer. That is why we need the internal DNS change. The line that matters is the rewrite: we are rewriting everything that ends with .localho.st to the Istio ingress gateway that we have just created, and the svc.cluster.local suffix indicates that the service lives inside this Kubernetes cluster. If we check that service in the istio-system namespace, we see that it has endpoints, and these endpoints are the pods that are deployed: if we check an IP like 10.42.0.6, one of the gateway pods will have that IP. So the name is routed to this ingress gateway, and the ingress gateway routes it to the right peer, orderer, or whatever we create inside the cluster. Outside the cluster, just going to localho.st is good enough, because we have the port mapping. Let's apply the ConfigMap.
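The ConfigMap being applied looks roughly like this. The one line doing the work is the `rewrite`; the rest mirrors a stock k3s Corefile, so in practice you merge the rewrite line into whatever Corefile your distribution ships rather than copying this verbatim:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # anything ending in .localho.st resolves to the Istio ingress gateway
        rewrite name regex (.*)\.localho\.st istio-ingressgateway.istio-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```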
One common problem: if you restart your PC, the cluster spins back up and Kubernetes is restarted, and the CoreDNS configuration gets overwritten. So if you want to try the network after restarting your computer and it doesn't work, one of the things I would try is applying this ConfigMap again, and then restarting the peers; there is caching of DNS inside the pods, so restarting them helps each node get the latest DNS configuration.

The next step is installing the HLF operator. In this case we are going to use Helm; you can go to the Helm site and just execute their script, and then you will have Helm on your computer. I already have Helm, at this version; I think whatever version greater than 3 will work. We need to add the project's custom chart repository. In my case it has already been added, but I always do this with --force-update, because if there is a new version of the index then you want to get it; if you followed these steps earlier and you don't update the repository, you won't be able to install the latest version. So we have added it, and then we install it. I usually do helm upgrade --install, because then if something changes I can just execute the same command: right now it will be installed, and if I execute it another time, it will be upgraded.
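The Helm steps above, sketched out; the repository URL and chart name are the ones the project publishes at the time of writing, so verify them against the workshop README:

```bash
# install helm first if missing: https://helm.sh/docs/intro/install/
helm repo add kfs https://kfsoftware.github.io/hlf-helm-charts --force-update
helm upgrade --install hlf-operator kfs/hlf-operator
```

`upgrade --install` makes the command idempotent: the first run installs, and any later run upgrades in place.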
In the meantime, there is another question: do we need to deploy Vault as well, for managing the certificates generated by the CA? No; all of the credentials are saved as Kubernetes secrets. If you want to manage the users' credentials in Vault, you will need to build that integration yourself. I think Hyperledger itself has some Vault support, so that side may be handled, but the goal of the operator is to make operations easy and to rely on Kubernetes as much as possible. There is no integration with external services, because the focus is that someone needs to be able to spin up a network quickly, and for that, the fewer dependencies there are, the better; any such integration needs to be custom. That question came from the chat.

So let's check in Lens. We have the hlf-operator-controller-manager deployed, and if we look closely at the custom resources there is a new group here, hlf.kungfusoftware.es (Kung Fu Software being the company that started this project), with a bunch of custom resource definitions. We have one for the FabricCA; another for the FabricChaincode (everything is empty so far); the FabricChaincodeTemplate, which is a new feature in 1.10; FabricExplorer, which is not used; FabricFollowerChannel, used to join the peers of the peer organizations to channels; FabricIdentity, which we will see, used to register and enroll a user in order to get the certificates; FabricMainChannel, which is used to create channels; FabricNetworkConfig, for generating network configurations; FabricOperationsConsole, in order to deploy the Fabric Operations Console that IBM open-sourced; FabricOperatorAPI and FabricOperatorUI, where you can use both, and there is a way, documented, to use only the API, or to deploy both, reducing the complexity compared to setting up an Explorer; FabricOrdererNode, in order to deploy orderer nodes in the cluster; and then FabricPeer, self-explanatory: it deploys peers in the cluster.
Those are all of the resources that are available; each of them has different parameters, and the most complex one is the FabricPeer, which has many. So we have installed the operator, and what we need to do now is install the kubectl plugin. If we check the picture, the HLF operator is now deployed, and the next step is to start deploying peers, orderers, chaincodes, and so on, which we do through the plugin. What we're using to install it is Krew: you execute the command shown if you're on macOS or Linux, but there are instructions for every operating system. Once you have installed Krew, it's just kubectl krew install hlf, and that's it. In this case I have it already installed (I don't know why it's taking long, maybe my connection... "Installing plugin: hlf"... okay). Then kubectl hlf works; I also had a precompiled binary at ~/bin/kubectl-hlf, which I'll remove to avoid any conflict.

The next step is to deploy our first organization. Conceptually, an HLF organization takes three parts. The first thing to deploy is the Fabric CA; once the Fabric CA is deployed, we register the users, which can be a peer user or an orderer user; and then we can deploy peers, deploy orderers, or both, whatever we want. That's it: that is how you deploy an HLF organization. Then, after the peers have been created, we can create a channel, and this is what we're going to do. To create a channel we need the existing organizations and the channel configuration; the FabricMainChannel resource will handle all of that: you describe the configuration and submit it, which means joining the orderers with the genesis block, updating the channel configuration if needed, and so on.
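The plugin installation described above comes down to two commands once Krew itself is installed (see krew.sigs.k8s.io for the per-OS Krew setup):

```bash
kubectl krew install hlf
kubectl hlf --help   # sanity check that the plugin is on the PATH
```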
So, to create a channel, first we need the organizations, and as we can see here we will have three: in the first run we will create only one peer organization plus the orderer organization, and in the second round we will create the other. The first thing to do is set the environment variables; these are for the images and versions, which you can tweak. Then we deploy our certificate authority. In this case the storage class is local-path; to see which storage classes you have available, you can go to Lens, and in the Storage section, under Storage Classes, you can see all of them. This is very different on Azure, on AWS, on minikube, and so on, so keep that in mind: if it's wrong, the CA won't be deployed. We pass it to `ca create`, and if we check the `ca create` source code in the operator (it's Go), what it does is create a FabricCA resource in the cluster with all of these parameters; it just applies the equivalent YAML with some default parameters filled in. There are defaults for the Fabric CA, but we will interact with it through the plugin.

So, side by side: `ca create` takes the image, which we declared earlier; the storage class, local-path; the enroll ID and password, the initial user that is allowed to register other users; the host, in this case org1-ca.localho.st, using the domain that we talked about; and the Istio port, 443. This is for the gateway to know on which port the CA needs to be exposed: we're saying it needs to be accessed through 443, which maps to the node port, so we will be able to reach it at https://org1-ca.localho.st. Let's execute this. In Lens there is a pod being created, org1-ca. To make this faster, I'm also going to create the CA for the orderer organization, which as you can see is the same command but with a different name (ord-ca) and a different host, and I will create the CA for the other peer organization as well.

Right now in Lens all of the CAs should be created, and if we go to the custom resource definitions, to the FabricCA, and edit the one that is running, we see the full configuration, the definition. The Fabric CA has a lot, a lot of configuration; I won't go into detail, but: it has a gateway section, in case you want to use the Gateway resource being developed in Kubernetes (this is not related to the Fabric Gateway); you can set the resources, the common name, and the organization for the CA; and you can set the service, usually a ClusterIP, which you then expose using Istio, or Traefik, or other ingress controllers. Then we have the database, where you can set Postgres, for example, in which case the datasource needs to be a Postgres connection string; this depends on the Go drivers in the Fabric CA. In the `ca` section we have the affiliations (both CAs are using the same database, so the affiliations are the same for both); the CRL, the certificate revocation list, expiring every 24 hours; the CSR section, with the name and hosts that will be used by default if there is an intermediate CA, which in this case we don't have; and then the registry: the identity that will be registered in this CA is `enroll` with password `enrollpw`, and its attributes basically mean that this user will be able to register other users, which is what we want. For the enrollments, if we set -1, this means there is no maximum number of enrollments. Remember that enrollment means the user can generate a certificate: the client generates a CSR, sends it to the Fabric CA, and the Fabric CA returns the signed certificate. And then the subject, which you can change for the CA, of course. The same structure appears for the `tlsca`, because remember that we have two certificate authorities, one for signing and the other for TLS; we will see this when we're deploying the peer.

All of these CAs are running, and we can see the pods running too. We have a verification script, which basically does a curl on the CA's /cainfo endpoint. In fact we can see it in Google Chrome if we want (we need to click Advanced past the TLS warning). The default CA is the one named `ca`, and the response includes the CA chain in base64; if we put a query parameter for the other CA, ?ca=tlsca, then we see the TLS certificate authority instead. This means it's working, and we can access it for organization one, organization two, and the orderer organization. Perfect.

Let's continue. Going back to the diagram, that step is done, so now we need to register the users: peer or orderer, depending on what kind of organization we're deploying. Let's do it for the first organization. By the way, this is a very long README, almost 900 lines, so stay close. Let's register this user.
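The CA deployment and verification steps above can be sketched with the plugin like this; the flag values (names, hosts, capacity) are the workshop's, recalled from the repo's README, so double-check them there:

```bash
kubectl hlf ca create \
  --storage-class=local-path --capacity=1Gi --name=org1-ca \
  --enroll-id=enroll --enroll-pw=enrollpw \
  --hosts=org1-ca.localho.st --istio-port=443

# wait until the operator reports the CA as Running
kubectl wait --timeout=180s --for=condition=Running fabriccas.hlf.kungfusoftware.es --all

# verification: the CA info endpoint, through Istio
curl -k https://org1-ca.localho.st/cainfo
curl -k "https://org1-ca.localho.st/cainfo?ca=tlsca"
```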
We are registering a user in the org1-ca certificate authority: the user that will be created is `peer` with password `peerpw`, and the type of the user registered will be `peer`. To register it, we pass the enroll ID and password of the CA admin, and the MSP ID; the MSP ID is the ID of the organization and will be used later in the channel configuration. Let's register it, and then let's also register the peer user for organization two and the orderer user against the orderer CA. Right now these are registered, and if we run the command again we get an error: already registered. So this step is also done.

Now we need to deploy the peers and the orderers. Let's start with the peers for the first organization. How do we deploy a peer? Using `kubectl hlf peer create`. To spin up the peer faster we will use LevelDB as the state database, with the image that we configured earlier; the version is the peer's Docker tag, 2.5.5; the storage class is the one that we defined; the enroll ID is `peer` and the MSP ID is Org1MSP. We need to tell the operator which user to use to enroll and get the certificates for the peer, because we need two kinds of certificates: the sign certificate and the TLS certificate, and these are the credentials for that. The capacity will be five gigabytes; the name of the peer will be org1-peer0; the --ca-name, which is a combination of the CA name and the namespace, will be org1-ca.default; and then the host, similar to the one we defined for the Fabric CA: in this case it will be peer0-org1.localho.st, and this is the address the peer will be reachable on. We're creating two peers with almost the same configuration; the only things that change are the
name and the host. Everything else is the same, because the CA that we use to enroll the user is the same, the enroll user and password are the same, the storage class is the same, and the image is the same. So let's create them, and in Lens we will see them being deployed. In the meantime we'll also deploy the peers for the second organization, which are the same commands but with the org2 CA name, in order to get the certificates for Org2MSP (I don't know if I registered that user; yeah, I think so), and of course a different host, peer0-org2 instead of peer0-org1, and different peer names. So let's create both peers, and these should also deploy. As you can see, when the peers don't have CouchDB, the memory footprint is very low; right now there is no heavy data, but with CouchDB it would be about half a gigabyte basically without doing anything, so you can imagine. Let's wait until these peers are created. Right now they seem to be created; in the FabricPeer custom resource view, well, some show as Failed, I don't know why, but as you can see the pods are created. One thing that I do to force reconciliation is to add an annotation, and maybe this fixes the peer; here it is, Running, and the rest will be running in no time, it's just a matter of time. So we have the peers, and if we go to the diagram, this part is done. Next, the orderers: as we saw in the presentation, we're going to deploy three orderer nodes.
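The registration and peer deployment above, as commands; again a sketch following the workshop repo's conventions (the names and passwords are the demo's, not anything you should reuse):

```bash
# register an identity of type peer against the org1 CA
kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw \
  --type=peer --enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP

# deploy the first peer; LevelDB keeps the footprint small for the demo
kubectl hlf peer create --statedb=leveldb \
  --storage-class=local-path --capacity=5Gi \
  --enroll-id=peer --enroll-pw=peerpw --mspid=Org1MSP \
  --name=org1-peer0 --ca-name=org1-ca.default \
  --hosts=peer0-org1.localho.st --istio-port=443
```

The second peer is the same command with a different `--name` and `--hosts`.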
If we go to the deploy-orderer-organization step, the fifth step, we have this command. The `ordnode create` command is very similar to the peer one: we have the image, the version, the storage class that we defined earlier; the enroll ID, which in this case is `orderer`; the MSP ID, which changes for this organization (OrdererMSP); the password, which is the one we registered earlier; the capacity; the name of the orderer; the --ca-name to use in order to get the sign and TLS certificates; and then the hosts. We also have the admin host, which is the host for the channel participation API, which is why it's needed, and the Istio port so the node can be accessed. Let's deploy these three nodes. Okay, the three have been created; coming back, they are running, and in the FabricOrdererNode view we should see three... we don't yet... okay, right now we do, and they are being deployed, as we can see here. So you can see how easy it is to deploy orderers and peers.

If we want to check the connectivity, there is an openssl command: `openssl s_client -connect` against that host and port, run locally. It returns the server certificate, and if we take this server certificate to an online certificate decoder, we can see its details. In this case the issuer needs to be, as it is here, the TLS CA, because this is a TLS certificate, and we have the SANs, the subject alternative names: localho.st, the peer and orderer hosts, and so on. There are a lot of subject alternative names, and we can use any of those domains to communicate; the certificate will be valid for them. So right now we have all of the nodes, and you can see the memory is very, very low; I'm also on ARM, on a Mac M1, which helps keep the resources down. We can do this openssl verification for all of the peers, and if you deploy a peer and you cannot access it from your local machine, it means something is wrong in Istio or in the configuration used when deploying the peers; that is the first thing that you should check.
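The orderer deployment and the connectivity check can be sketched as follows; the `--admin-hosts` flag for the channel participation API and the exact host names are recalled from the repo and may differ in your version:

```bash
kubectl hlf ordnode create \
  --storage-class=local-path --capacity=2Gi \
  --enroll-id=orderer --enroll-pw=ordererpw --mspid=OrdererMSP \
  --name=ord-node1 --ca-name=ord-ca.default \
  --hosts=orderer0-ord.localho.st \
  --admin-hosts=admin-orderer0-ord.localho.st \
  --istio-port=443

# connectivity check: fetch the TLS certificate presented through Istio
openssl s_client -connect orderer0-ord.localho.st:443 </dev/null
```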
Let's go back to the diagram: the orderer deployment is done. Perfect. Now we need to create the channel; right now we have the existing organizations. By the way, if there is a question, let me know, because I haven't seen many questions for a long time and I want to make sure this is being understood.

For the channel, we need, for each peer organization, the admin sign certificate, and for the orderer organization we need two certificates: the admin TLS certificate and the admin sign certificate. We saw earlier that there are four types of identities (admin, client, peer, orderer), and the admin is the one that is able to create channels, update channels, and so on. The orderer admin sign certificate is used to modify the channel configuration, the orderer part of it, and the admin TLS certificate is used to join the orderer nodes using the channel participation API that we saw earlier. So what do we need to do now? Generate three different identities; well, four, because for each peer organization we need to create an admin sign certificate, and for the orderer organization, of which we only have one, we need to create two. Four in total.

There is a question: do we have to specify all these components in the chaincode, the smart contract, again? I think there is a misconception there. The chaincode is independent: the chaincode is a server, and it doesn't hold any reference to the peer. The peer connects to the chaincode, not the chaincode to the peer; that is the flow.

So, we have deployed the orderers already; let's go to step six, which is creating the channel. As we saw in the diagram, we need to register the admin identity first. I am executing the commands in the right terminal. Let's create the two identities for the
orderer now: one means a sign certificate and the other a TLS certificate, and we're going to do that using the FabricIdentity custom resource definition; in fact, all of the identities in this workshop will use FabricIdentity. Let's create these two identities on the right, and to check that they have been created correctly, we can do kubectl get fabricidentities: both are Running. The secret name is the same as the name of the identity, so we can get the secret orderer-admin-sign and see that it has been created; this is the one that we will reference when we create the channel. If we go to Lens, just to show the contents, we have these two secrets, and for each secret we have cert.pem, the certificate that was signed by the CA; key.pem, the private key, which was generated on the client side; root.pem, the root certificate authority for this certificate; and user.yaml, an extra artifact created by the operator. All of the secrets that FabricIdentity creates have the same four entries.

So we have created the identities for the OrdererMSP; now let's do the same for Org1MSP. In this case we register the admin user for organization one, and once it's registered, we create the identity. The parameters for the identity are: the name of the identity (this is for the CRD) and the namespace; the CA name used to enroll the user, and the CA namespace, which here is default; the --ca flag, where `ca` means a sign certificate will be generated; the enroll ID and password, admin and adminpw, to be able to obtain the certificate that we saw; and the MSP ID.
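The identity registration and creation above, as commands; taken from the plugin's documented usage, with the demo's names:

```bash
# register the org1 admin with the CA
kubectl hlf ca register --name=org1-ca --user=admin --secret=adminpw \
  --type=admin --enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP

# enroll it via the FabricIdentity CRD; --ca ca means a sign certificate
kubectl hlf identity create --name org1-admin --namespace default \
  --ca-name org1-ca --ca-namespace default \
  --ca ca --mspid Org1MSP --enroll-id admin --enroll-secret adminpw

kubectl get fabricidentities
kubectl get secret org1-admin -o yaml   # cert.pem, key.pem, root.pem, user.yaml
```

For the orderer's TLS identity, the same `identity create` with `--ca tlsca` produces the TLS certificate instead.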
The same applies to the orderer: we have the orderer admin sign identity, which used the CA, and the orderer admin TLS identity, which used the TLS CA. That's it; right now we need no more identities. We will need the Org2MSP identity after we create the channel.

The channel contains a lot of configuration. Here we have the FabricMainChannel: the metadata name is "demo" (this is for the custom resource definition), and the spec describes the channel as it will be represented in Hyperledger Fabric. We have the admin orderer organizations and the admin peer organizations. This comes from real use cases and from experience: usually, when we have a channel, there is one organization that knows Hyperledger Fabric, which we can call the founder, and this is the one that manages the channel configuration, the peer organizations, the orderer config, the consensus configuration and so on. That is what we mean by admin orderer organizations and admin peer organizations.

The admin orderer organizations are the ones that need to sign a configuration update in order to change the orderer part, which includes the batch size: maxMessageCount means how many transactions will be bundled into a block, and batchTimeout means how long the consensus should wait before creating a block when we have, say, one transaction instead of ten; in this case it is two seconds. Then we have the orderer configuration, which can be updated with only this MSP's signature. The admin peer organizations are the ones that can update the application part, including the peer organizations, the members of the channel.

There is also a section for the orderer organizations: if we need to add another orderer organization, the identity of the orderer MSP alone will be sufficient.
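The batching parameters described above would sit in the orderer section of the FabricMainChannel spec; here is a minimal fragment, with key names assumed from the standard Fabric configtx layout rather than copied from the workshop files:

```yaml
# A block is cut when either condition is met:
# - maxMessageCount transactions are queued, or
# - batchTimeout elapses since the first queued transaction.
channelConfig:
  orderer:
    batchSize:
      maxMessageCount: 10          # up to 10 transactions per block
      absoluteMaxBytes: 1048576    # illustrative values
      preferredMaxBytes: 524288
    batchTimeout: 2s               # wait at most 2 seconds before cutting a block
```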
Imagine we have 20 organizations in the channel: this setup speeds up the process of adding and removing organizations, especially in test networks, when you are testing configuration or want easy onboarding to try out your network. If you are in production, you may need to tweak it accordingly.

So we have the FabricMainChannel with these admin organizations and this channel configuration, and then we have the peer organizations; later we will add Org2MSP here. Then come the identities that will be used: in this case we only need three; the fourth one will be used to join Org2MSP, which we haven't created yet. Then the external peer organizations and the orderer organizations: for each orderer organization we need the external orderers to join, giving the host and the port of the channel participation API, so the orderers can be joined to the channel. Then we have the MSP ID and the orderer endpoints (orderer node one, node two, node three) with their ports.

Finally, the consenters: for each consenter we need to specify its TLS certificate, and that is why we have this environment variable. In this channel we have three consenters, orderer node one, orderer node two and orderer node three, and their TLS certificates are fetched from the FabricOrdererNode CRD. If we execute this and then echo the variable, this is the TLS certificate for orderer zero. To mark a node as a consenter, it needs to be in the channel configuration and it needs to have a valid certificate.

We can now apply this custom resource definition; in this case we're not using the kubectl hlf plugin, because the configuration is complex. Let's create it on the right: FabricMainChannel created. Then we can get the FabricMainChannel, and the channel has already been created.
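For reference, the consenter entries just described pair each orderer's endpoint with its TLS certificate; this is a hedged fragment of that part of the spec, with hostnames and field names as illustrative assumptions:

```yaml
# One entry per consenter; the TLS cert is the one fetched from the
# FabricOrdererNode CRD and exported into an environment variable.
consenters:
  - host: orderer0.default
    port: 7050
    tlsCert: |
      -----BEGIN CERTIFICATE-----
      (TLS certificate of orderer node 0)
      -----END CERTIFICATE-----
  - host: orderer1.default
    port: 7050
    tlsCert: |
      -----BEGIN CERTIFICATE-----
      (TLS certificate of orderer node 1)
      -----END CERTIFICATE-----
```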
This is good. If we go to Lens we can check the orderers for log activity: yes, "writing block two", so there is activity. Not in the organization two peers, though; the organization two peers show nothing yet, because we haven't joined the peers to the channel. That is the next step.

We need to join the peers from organization one to the channel. For this we need to get the orderer zero TLS certificate, add it here, and then create a FabricFollowerChannel. The convention I use here is the channel name plus the organization, so there will be no duplicates. In this follower channel we set the anchor peers, which are the peers that are well known to the channel, so that when organization two joins it will be able to find the organization one peers. Then the hlf identity, the one we created earlier, which is used to join the peers to the channel. To get the genesis block we need access to at least one orderer, and one is good enough, but to connect securely to that orderer we need the TLS certificate of orderer zero. Then, of course, the MSP ID, which is Org1MSP, and the channel to join: this channel is the one that will be requested from the orderer, and the orderer will only reply with the genesis block if this identity is authorized on the channel. So there is security involved; we cannot just join any organization.

Let's execute this and then get the FabricFollowerChannels: we can see that it is running. And if we go to Lens we see "creating ledger demo" and "committed block five", so there is activity in organization one peer zero and also in organization one peer one.
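The FabricFollowerChannel pieces just walked through (anchor peers, the hlf identity, one orderer with its TLS certificate, the MSP ID and the channel name) can be sketched as a single resource; names and field casing are assumptions for illustration:

```yaml
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricFollowerChannel
metadata:
  name: demo-org1msp              # convention: <channel>-<org>, avoids duplicates
spec:
  name: demo                      # channel to join
  mspId: Org1MSP
  anchorPeers:                    # peers advertised to other orgs for discovery
    - host: org1-peer0.default
      port: 7051
  hlfIdentity:                    # admin identity created earlier (assumed secret name)
    secretName: org1-admin
    secretNamespace: default
    secretKey: user.yaml
  orderers:                       # one orderer is enough to fetch the genesis block
    - url: grpcs://orderer0.default:7050
      certificate: |
        -----BEGIN CERTIFICATE-----
        (orderer0 TLS certificate)
        -----END CERTIFICATE-----
  peersToJoin:
    - name: org1-peer0
      namespace: default
```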
Perfect. In organization two there is still no activity, so let's do organization two now. To join organization two we only need to add one item to the peer organizations: instead of Org1MSP it will be Org2MSP, and instead of the organization one CA, the organization two CA. That's it, because the CA namespace is the same. So let's re-apply this FabricMainChannel: FabricMainChannel configured. If there is an error in the configuration, the FabricMainChannel will switch its state from running to failed, but in this case it is running, which means the change has been applied. If we want, we have here the demo config, which is the latest channel configuration in JSON, and here Org2MSP is present, so we can now be sure that the operation has been correctly processed.

As we have joined the second organization, we can now join its peers. The first thing we need to do in order to join the peers from organization two is to register the admin in the organization two certificate authority and then enroll it, creating the FabricIdentity. Once the identity is created we can go to Lens, see the FabricIdentity, see that it is running, and see that the secret has been generated, as we saw earlier. Now we can create the FabricFollowerChannel: instead of the anchor peers being the ones for organization one, these are the ones for organization two; the orderer is the same; and the peers to join are the ones for organization two. So let's execute this command.

There is a question: what is the follower channel and what is the purpose of creating it? The purpose of creating the follower channel is to join the peers to the channel and to configure the anchor peers in the channel. Right now, if we go to the config maps, there is a new one that has
been created: the demo Org2MSP follower config. This is the channel configuration, in JSON, and here we have the anchor peers for organization one and now also the anchor peers for organization two. This is what the FabricFollowerChannel does. Based on my experience, if you don't set anchor peers you won't be able to approve or commit the chaincode, and if you don't create the FabricFollowerChannel at all, the peers won't be joined to the channel; that is its purpose. Right now, if we check organization two peer one, we see some activity: "committed block 11".

If we go back to the diagram, this would be another box, "create FabricFollowerChannel", which means joining the org peers to the channel and configuring the anchor peers. These two are green now. So what is left? We have created the organizations and we have created the channel, so what is left is deploying a smart contract.

To do that, there are three steps: we install the chaincode; we approve the chaincode definition for each peer organization (not all of the organizations, only the peer organizations); and finally we commit the chaincode definition once a majority of organizations have approved. If only one of our organizations had approved, we would not be able to commit the chaincode definition.

We will start with installing the chaincode on all of the peers, so let's go back to the workshop. To install a chaincode we need to prepare the connection profile, and this is where FabricNetworkConfig comes into play. We will need to create two identities, because we will create two network configs, one for organization one and the other for organization two. The FabricIdentity will also register the user; I won't go into detail on this because we're running late. Right now these two identities are already created, so there is no need to redo that.
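The three lifecycle steps just outlined map onto three kubectl hlf subcommands; this is a sketch from memory, so check every flag name against the plugin's help before using it:

```shell
# 1. Install the chaincode package on each peer (repeat per peer)
kubectl hlf chaincode install --path ./asset-chaincode.tgz \
  --config org1.yaml --user admin --peer org1-peer0.default

# 2. Approve the definition once per peer organization
kubectl hlf chaincode approveformyorg --config org1.yaml --user admin \
  --peer org1-peer0.default --package-id "$PACKAGE_ID" \
  --version 1.0 --sequence 1 --name asset --channel demo \
  --policy "AND('Org1MSP.member','Org2MSP.member')"

# 3. Commit once a majority of organizations have approved; note there is
#    no --package-id here, because the committed definition is global
kubectl hlf chaincode commit --config org1.yaml --user admin \
  --mspid Org1MSP --version 1.0 --sequence 1 --name asset --channel demo \
  --policy "AND('Org1MSP.member','Org2MSP.member')"
```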
But we do need to create the two network configs, so let's create them and then go to Lens, to FabricNetworkConfig: we have these two network configs. If we check one, there is a property which is the secret name where the connection profile is saved. Let's go to the secrets: these are the two secrets for the network configs, and this is the connection profile, which contains information about all of the organizations and the peers; in this case, for organization one, it also contains the users. What we will do is fetch the connection profile from the secrets, so let's download it using this command; basically what we are getting is the config YAML. Right now we have org1.yaml locally, and we can see it better here than in Lens: we have the organizations (well, not organization two, only organization one), then the orderers, then the peers, then the certificate authorities, and then the channels, which declare which orderers and which peer will be used. There is actually no need to add the peers from the other organization to this connection profile, because the discovery service takes care of that. So this is fine: we have fetched the network configs, and that step is done.

Now it is time to install the chaincode. I will execute this and we'll come back to it in a moment; there, it says installed. What this created is the chaincode package. If we extract it, we see that it has two files. One is the code, which contains the connection.json; let's look at that in Visual Studio Code, where we can see it better. This connection.json is the address the peer will try to connect to in order to reach the chaincode we are installing, but for this connection to work we need to deploy a FabricChaincode into the Kubernetes cluster.
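For reference, a connection.json for chaincode-as-a-service is a small file of this shape; the keys below follow the standard Fabric ccaas layout, while the address is an illustrative assumption:

```json
{
  "address": "asset:7052",
  "dial_timeout": "10s",
  "tls_required": false
}
```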
Then we have some properties: the dial timeout when connecting, and whether it needs TLS; since this is local communication inside Kubernetes, we don't need TLS. Then we have the metadata. In this case the type is "chaincode as a service", which means the peer will try to connect to this address, and this is why we need to specify both the address and this metadata. The label is just metadata for referencing the package: here in the chaincode install output we have the label, "asset", then "asset", a colon, and the hash of the chaincode package. If we run sha256sum we will see that it matches: it starts with df and ends with 3b.

So that is the first step. The second step is approving, but first we can check whether the chaincode is installed. This check takes the connection profile that we saw earlier, the user (the one present in the connection profile) and the peer we want to check the chaincode on. In this case we can see that the package ID is present, and it should return something like this. By the way, for the people that are here: there is a course that I launched where we explain all of this in detail, so if you are really interested and want to take it to the next level, make sure to check the course out; it has a discount for the first people. I won't say anything more about it.

Okay, the next step is deploying the chaincode container on the cluster; this is needed for the connection.json address, asset on port 7052, to work. We need to deploy the external chaincode: the chaincode name is the one we declared earlier, which is "asset", as we can see here; then the image, which is an image containing a chaincode that I published earlier; and the package ID.
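That package ID is simply the label, a colon, and the SHA-256 of the package bytes, so the check can be reproduced locally; the file below is a stand-in, not the real workshop package:

```shell
# Recompute a chaincode package ID: <label>:<sha256 of the .tgz bytes>.
# /tmp/asset.tgz here is a stand-in file, not the real chaincode package.
printf 'stand-in package bytes' > /tmp/asset.tgz
HASH=$(sha256sum /tmp/asset.tgz | cut -d' ' -f1)
PACKAGE_ID="asset:${HASH}"
echo "$PACKAGE_ID"
```

Comparing this value against the package ID reported by the peer confirms you installed the package you think you installed.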
The package ID is very important. So let's deploy it: "external chaincode asset created". This is also a resource from the operator, in this case FabricChaincode. It will deploy the asset chaincode, and in the services we will have an "asset" service on port 7052, which is the one the peer will use to connect to the chaincode and execute a transaction. The chaincode is already running, which is perfect; I think this one is written in Go, so there are no logs, but we will see that it is working.

Next we need to approve the chaincode, and we need to approve it for both organizations. In this case let's set the endorsement policy to both, meaning both organizations need to endorse, and we approve once per organization. This endorsement policy means that both organizations must endorse the transaction. If I used OR instead, it would mean either Org1MSP or Org2MSP: if Org2MSP signs, the transaction is good, and if Org1MSP signs, the transaction is good, so there would be no need to execute the chaincode in both organizations. This is the policy that is passed here in the policy flag. The version and sequence start at one, but if we want to change the endorsement policy later, we need to increase the sequence. Then we have the channel and the name of the chaincode we are approving, and at the start we configure the connection profile in order to connect to the peer, then the user, and then the peer.

Okay, let's execute this. First I need to declare the variables; now let's execute it. Approving for the first organization: "chaincode approved", and this is the transaction, because for every approval and commit there is a transaction involved. Then we approve the chaincode for organization two as well.
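Written out in the standard Fabric endorsement-policy syntax, the two options discussed are:

```
# Both organizations must endorse every transaction:
AND('Org1MSP.member', 'Org2MSP.member')

# One endorsement from either organization is enough:
OR('Org1MSP.member', 'Org2MSP.member')
```

Remember that switching between them later is a definition change, so the sequence has to be incremented.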
So now the chaincode has been approved: at this point we have installed and approved for each organization, so let's fill these in green. Once a majority of organizations have approved (we have two organizations, so this is met), we can commit the chaincode definition. Let's commit it. Hmm, we had a problem there, I don't know what, but right now it is committed; there must have been some problem with the connectivity. The chaincode commit is almost the same as the approval, but the package ID is per organization: every organization could have a different chaincode, and the package ID, as we saw earlier, contains the hash of the chaincode package, so every organization can have its own package ID. When it comes to committing the chaincode definition, however, the global definition doesn't contain the package ID, which is why it is not included here.

Now we can invoke a transaction in the channel. We can do it as organization one or as organization two; we will see both. Let's execute this command: kubectl hlf chaincode invoke. This is the connection profile, org1.yaml; this is the user that will sign the transaction proposal; this is the peer that will be used for endorsement, although others could be used as well; and then the chaincode we want to execute, in this case "asset". If we set another name, this will fail; it is a common error, and the peer will basically say it doesn't know this chaincode, so it won't work. As you can see it works, and we can do the same as organization two; the transaction ID is the same. Right now the ledger is initialized.

There are two types of operations here: the invoke, which generates a transaction and can therefore modify data in the blockchain, and the query. As you can see, the invoke takes a long time, because it needs to wait until the block is finalized.
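The invoke and the query share the same flags apart from the function being called; here is a hedged sketch, where the function names and arguments belong to a typical asset-transfer chaincode and are assumptions, not the workshop's exact calls:

```shell
# Invoke: creates a transaction; returns only after the block is cut,
# so with a 2s batch timeout this takes about two seconds.
kubectl hlf chaincode invoke --config org1.yaml --user admin \
  --peer org1-peer0.default --chaincode asset --channel demo \
  --fcn CreateAsset -a asset7 -a blue -a "5"

# Query: read-only, answered directly by the peer, returns in milliseconds.
kubectl hlf chaincode query --config org1.yaml --user admin \
  --peer org1-peer0.default --chaincode asset --channel demo \
  --fcn ReadAsset -a asset7
```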
Since the maximum number of transactions in a block is ten and we are executing just one, it waits two seconds, which is the batch timeout we configured earlier in the FabricMainChannel; this is why it takes so long. The query, on the other hand, only goes to the peer and answers immediately: a query took five milliseconds, so it is very fast. What does the query do? We have the SDK client, the Fabric peer and the chaincode; the query only goes from the client to the peer to the chaincode, then the chaincode returns the data and the peer returns the data. The difference is that when we invoke a transaction and want the data to be stored, it has to go to the ordering service: if we are using the Fabric Gateway, the gateway submits the transaction, or the SDK client can also submit it itself. That is the catch: when you submit a transaction, the SDK client needs to wait until the transaction is committed in a block. That is why the invoke waits and we get a transaction ID, while for the query we only get the returned value. So query is used to read data from the blockchain, and invoke is used to create or update data.

Then we can create an asset: okay, two seconds, and we have the transaction ID. Then we can query that asset. And if we query an asset that doesn't exist, an error is thrown, and this error comes from the chaincode itself: the asset does not exist. So that is the complete flow.

Right now I want to go through the questions that you have, so let's start. "After deploying to Kubernetes, in order to access the blockchain through, let's say, an API, do we have to deploy that API in Kubernetes as well and use Istio to get access to the endpoints?" It is recommended to use the same Istio, the same ingress controller: if you use Istio for the network, then it
makes sense to use Istio for the API too, because if you are on Azure or on AWS you will want to unify the entry point into Kubernetes; having both Istio and nginx seems like a waste of resources, to be honest.

A question from YouTube: "After all the orgs approve the chaincode, if you want to update the chaincode, do they need to approve again?" This is interesting, because with the external chaincode, chaincode as a service, you can just update the image: say you have a chaincode image, you update it, you redeploy it, and that's it. But usually the organizations are separated, so each organization has its own chaincode, especially if the network is decentralized, and when it's truly decentralized an update usually involves a new approval. If you want to update it in this workshop, though, just compile a valid chaincode, create a Docker image of it, and run this command again with the image changed. You don't need to install it again, because the installation is done and the address for the chaincode is already set; you just update the image, it is redeployed, and the new code is available.

From Carlos: "Is the data stored in the chaincode?" No, the chaincode is stateless; it relies on the state database. This is an in-depth conversation, but the data is stored in the peer, and this is what we saw at the start of the meetup: in each peer we have the state database, the block storage and the history database. The data lives in the blocks: if we check the Lens logs for organization two, we see "received block 36", so this is where the data is stored, but when a block is received, the state database is updated. That is the catch: the data is always stored in the peer.
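The update-without-reinstall flow from the YouTube answer boils down to re-applying the FabricChaincode resource with a new image tag; the field names here are assumptions sketched from the earlier deployment step, and the image name is hypothetical:

```yaml
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricChaincode
metadata:
  name: asset
spec:
  image: example/asset-chaincode:v2   # bump the tag and re-apply to redeploy
  packageId: asset:df...3b            # unchanged; must still match the installed package
  # the service name and port stay the same, so peers keep connecting
```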
There is another question: "When you deploy a certificate authority, in step one, the host is set as organization one CA; what if we install the HLF network on a virtual machine in the cloud and we want to connect with an SDK and create an API?" Well, to do that, the SDK on the virtual machine basically needs to be able to connect to the Fabric CA in Kubernetes. You can see it here in the diagram: this user, which would be your VM, will need to have connectivity to Istio. Deploying in a cloud is quite different, though, and there is no single unified way. I have done this, and my plan is to upgrade it and bring workshops like this one to the course; I'm the creator of that course, so if you purchase it you have a way to ask me anything, including the in-depth questions from the workshop.

From George: "FabricMainChannel, FabricFollowerChannel: is that the same as the system channel versus an application channel?" No. The system channel is not supported in the operator; we support only the channel participation API, so the system channel is not used. The FabricMainChannel creates the channel, and the FabricFollowerChannel, as we said, joins the organization's peers to the channel; it doesn't modify the channel configuration. Well, it modifies the channel configuration only for the anchor peers, but those anchor peers are meant to be configured per organization, because if each peer organization needed to agree with the other organizations on them, it would be a mess; the anchor peers only need the current peer organization's signature in order to be updated. The FabricMainChannel, on the other hand, is the one that creates the genesis block, joins the orderers, and then updates the channel config if needed: adding or removing a peer organization or an orderer organization, managing the
consenters, and so on; basically, everything in the configuration that you can see inside the FabricMainChannel spec. So no, it is not like the system channel, George. Any other questions that you may have, anyone? No? Okay, David, thank you. Yeah, thank you, David, so you're all set? Yes, I'm all set; there are no additional questions. Thank you everyone for being here, and you can find me on LinkedIn. Thank you, David; I've sent those resources out to everybody who signed up but wasn't able to join today. Perfect, thank you. Great, well, thanks everyone, we'll see you in the next one. Bye.