Okay, we are live. Hello everyone, welcome. It's a pleasure to see you at the last session of the workshop series on Hyperledger Bevel with Sownak Roy. You can find the previous recordings on our YouTube; I will put a link in the chat. A quick word about Sownak: Sownak is a technical architect at Accenture and a maintainer of Hyperledger Bevel. I'll also put more information about Hyperledger Bevel in the chat. Thank you everyone for showing up. And Sownak, could you please start the presentation? Yeah. Hello. Hi, everyone. So hopefully my screen is visible. Right. Before we start, we have this antitrust policy notice. And of course, as you noticed, this meeting is being recorded and live streamed, so if you don't want to be recorded or live streamed, please exit and watch the recording later. Right. Thanks for the introduction, Igor. Again, a quick recap: Hyperledger Bevel is a tool that falls under the tools section of the Hyperledger umbrella. It's the newest project under the Hyperledger Foundation, and it gives developers and operators an accelerator for deploying a production-ready, or production-worthy, DLT network on different cloud providers. So this was a series of workshops; today is the third and final one. It's a bit longer, but we may not use the whole time. Let's see. In the first workshop we learned how to deploy a managed Kubernetes cluster; I hope everyone has a managed Kubernetes cluster by now. In the second workshop we learned how to deploy HashiCorp Vault on top of the Kubernetes cluster. As I explained, you can run HashiCorp Vault on a separate VM or in any other configuration, but for this workshop we are going to run it on Kubernetes. And then today is finally, as we discussed, the last workshop, where we will be doing Fabric.
I know, because at most of the Hyperledger meetups that I do, people ask for Fabric; it is the most used Hyperledger product. So yeah, we'll do Hyperledger Fabric today. And a question for you: what are the other platforms that are supported by Hyperledger Bevel? Can you answer in the chat? Yeah, most of you answered correctly. So Corda, both Corda open source and Corda Enterprise, then Fabric of course, which is what we are trying to deploy today, and Indy and Besu as well. These are the other platforms that are also supported by Hyperledger Bevel for deployment. Okay. So, prerequisites. First, there is a question about whether it will support Sawtooth. That is up to the community: if someone wants to contribute the Sawtooth support and make it available in open source, then of course it will support Sawtooth; it is, after all, an open source product. Right. So the prerequisites for today's session: you must have the Kubernetes cluster from the first session; you must have the HashiCorp Vault from the second session; you should also have Docker, or a machine that can run Docker commands; and of course a laptop or desktop, which will be your Ansible controller, from where you are going to run the Docker commands. Those are the four prerequisites for today's workshop. Again, it's a hands-on workshop, so we encourage people to actually do things. I will also do them, because this will be my first time deploying to GCP in at least two years, so we'll see how the current version works on GCP. Right. Any questions on the prerequisites, or do you have your prerequisites ready? If you have restarted your Vault, it may need to be unsealed, so please use your unseal keys to unseal the Vault, and then it should work.
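For anyone stuck on the unseal step mentioned above, it can be done from the CLI; this is a sketch that assumes `VAULT_ADDR` is already exported and that you saved the unseal keys from the previous session:

```shell
# Check whether the Vault is sealed ("Sealed: true/false" in the output)
vault status

# Repeat with different unseal keys until the threshold (typically 3 of 5) is met
vault operator unseal <unseal-key-1>
vault operator unseal <unseal-key-2>
vault operator unseal <unseal-key-3>
```

Once `vault status` reports `Sealed: false`, the deployment can talk to Vault again.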
Otherwise, if it is still sealed, then we'll get errors when we work. Okay, no questions, or there is a delay. Right. So these are the steps we'll start with. Let's follow this; I'll paste the link, and then we'll do it together. So this is the link. Let me stop this presentation and share the link. I hope everyone is on this link now. Just to reiterate, this was last week's session, so you should have your Vault unsealed. It shows here unsealed and logged in; actually there's no need to be logged in, the important part is the unsealing. And you should already have the secrets engine, secretsv2. The cluster should be up and running; for GKE, you can run this command to get the latest, correct kubeconfig file. And then let's follow these instructions. Yes, this workshop is streaming on YouTube, and the link is a few comments above. Right. First of all, let's read through what we are trying to do. I'm reiterating some extra prerequisites: you should have a host machine where you can run the Docker commands, you have one Kubernetes cluster, and you have one HashiCorp Vault server. And you should have read/write access to the Git repo, either using a private key or an HTTPS token. So let's start. First, fork the Git repo — okay, I've already done this one, so I'm not going to do it again — fork this repo into your own personal GitHub account. Then, there are two options to push and pull a Git repo: HTTPS or SSH. If you are using the HTTPS option, you should generate a personal access token which has write access to the repo.
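For GKE, the kubeconfig retrieval command referred to above looks like the following; the cluster name, zone, and project ID here are placeholders you would replace with your own:

```shell
# Writes/merges credentials for the cluster into ~/.kube/config
gcloud container clusters get-credentials my-bevel-cluster \
  --zone europe-west1-b \
  --project my-gcp-project
```

After this, `kubectl` commands on the host will target that cluster.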
And if you are using SSH, then create an SSH key pair: keep the private key in its place on your machine, and upload the corresponding public key to the SSH keys section of your GitHub account. I'm going to use SSH, so I have already added my SSH key to my GitHub account. Then you clone your Git repo to your host machine, and we'll call this the project folder. This is the command. Okay, I am using VS Code because it's easy to edit files as well as run the commands; you can use VS Code too, or any other favorite editor and terminal. I have already downloaded the code; the only difference is that the name of my folder is BAF demo rather than bevel. BAF was the old name: Blockchain Automation Framework. So I've just downloaded it, and the rest should look similar. Okay, so depending on your platform choice, this will be the configuration; you can read through these at leisure, but today we'll do Hyperledger Fabric. So let's look into the Hyperledger Fabric configuration. This image may be a little bit old, but in general it is a single network YAML file; we call it the network.yaml. It is basically a config file which starts with `network`, and it has a `type` and a `version`, where type means the platform, like fabric or corda or besu, and version is the platform version; we'll do Fabric 2.2.2, while Besu and Corda would have different versions. Then `frontend` — this is for the supply chain example app context, so we can ignore it for now. And then the network has these child items: you have one environment section, you have the docker section, and then you can have the orderers section.
Then for Fabric you have the channels section, and then each organization. The concept, or the principle, behind Bevel's configuration file is that each organization represents a party in the blockchain network. That means each organization will have its own cloud provider, its own URL suffix, its own Kubernetes cluster, and its own AWS access if you're using AWS. That is why these are children of the organization, which makes it easier if you are going to deploy into a multi-cloud environment. When you are starting the deployment as a network operator, you will need access to all the environments; but once the network has been set up, that access can be revoked and the network will still work, because after setup you will not need access unless you are going to make changes. So if you are working as a network operator in a consortium, where the job of the consortium group is to set up the initial network and then give each party full control of their own environment, that's how you do it: you set up from a common, kind of master operator, and then you hand over each of those environments to the respective organizations. That means in a real-life scenario you will have multiple Kubernetes clusters, multiple Vaults, one per organization, and multiple Git repositories as well. All of these can be separately configured under each organization. But of course, for our demo or test environment, everything is the same, because we have one cluster and one Vault.
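As an illustration of the layout described above, here is a trimmed sketch of a network.yaml; the exact keys vary between Bevel versions, and the values (names, URLs, tokens) are placeholders, so treat it as a shape rather than a copy-paste config:

```yaml
network:
  type: fabric
  version: 2.2.2             # platform version; Besu/Corda use their own versions
  env:
    type: demo-fabric        # just a tag for this environment
    proxy: none              # none for single-cluster; haproxy for multi-cluster Fabric
    retry_count: 15          # how many times to retry waiting for pods/certificates
  docker:
    url: "example-registry/hyperledger"   # registry holding the platform images
  organizations:
    # each organization is one party, with its OWN cloud, k8s, vault and gitops
    - organization:
        name: supplychain
        cloud_provider: gcp
        k8s:
          context: "my-gke-context"
          config_file: "/home/bevel/build/config"
        vault:
          url: "http://my-vault:8200"
          root_token: "s.xxxxxxxx"
        services:
          ca: {}             # Fabric CA (not recommended for production)
          orderers: []
          peers: []
```

The point of the per-organization nesting is exactly what is described above: each party's cloud, cluster, Vault, and Git settings can differ without touching the others.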
So we'll keep all the details the same, but it is possible to have a different username, different ways to authenticate, a different Vault, and a different Kubernetes per organization. Right. Then each organization will have different services; these are specific to Fabric. You will have the CA server and the CA service, you will have the orderer service, the consensus part, as well as all the peers. Just another note: it is not recommended, even by Hyperledger Fabric, to use the Fabric CA for production use cases. Okay. Now let's see what we'll do. The sample that we'll be using is this one, network-proxy-none.yaml. "What exactly is K3s?" We are not using K3s, by the way; we are using plain Kubernetes, hence K8s. So, I'll also explain the folder structure, because most of you will be new here. At the top level, the dot folders are the defaults: you have .github with the workflows and so on. In automation, there is a sample Jenkinsfile; internally we use Jenkins for the deployment automation, so there is a sample Jenkinsfile if you want to use it in your existing Jenkins pipeline. build is a temporary folder; as you see, it's in .gitignore. docs: all the wiki docs that you are accessing right now are basically in here. Then we have examples; right now we have two: identity-app is for the Indy example, and supplychain-app is for the rest, the Corda and Fabric examples. And then we have platforms, and as you can see, we have all these different platforms, one, two, three, four, five, six, including Corda Enterprise, so six platforms are supported. All the code related to a particular platform is under its folder. There is a shared platform, which is common code shared across all the platforms.
Just a tip there: if you are only going to use one platform, you can actually remove the other platform folders from your Git repo, so that you get less confused, and keep only shared and your respective platform. Don't delete shared, because then you'll have problems. Right. Then all the rest are the contributing guide and the core files that we use, the CODEOWNERS and so on; we also have the network schema, which validates the network.yaml. Then, coming to one platform, we have these folders. charts is self-explanatory: it contains all the Helm charts that are used for that platform. configuration contains all the Ansible roles and configurations used for that platform. images would contain the Dockerfiles if there were any custom ones; for Fabric we are not using any custom Dockerfiles, so it is empty. Then releases is where you will store the GitOps release files. And scripts is in case you want to keep some sample shell scripts. So that's the overall structure. Right, so on to the real test. We have all these different samples under configuration/samples; these are sample configuration files with dummy values, which you'll have to edit before you use them. Today we are using network-proxy-none.yaml; I will paste this. And then you'll edit it as per your own configuration. I have already done part of it, and I think it's in the instructions as well: you copy these into a build folder. So here it says: create a build folder under the project folder, and then it will have the following files. The first one is the Kubernetes config file, which we'll rename to config; then the network specification file, which is the network.yaml; and, if you're using GitOps over SSH, the private key, the PEM file, under build.
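A sketch of those build-folder steps from the project root; all the paths and key names here are examples (my kubeconfig and SSH key locations will differ from yours):

```shell
cd ~/bevel                      # your project folder (mine is called baf-demo)
mkdir -p build
# kubeconfig, renamed to just "config"
cp ~/.kube/config build/config
# the sample network spec, which we will edit
cp platforms/hyperledger-fabric/configuration/samples/network-proxy-none.yaml build/network.yaml
# the GitOps private key (only needed for the SSH option)
cp ~/.ssh/gitops build/gitops
```

Because build is in .gitignore, none of these secrets get committed to the repo.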
I have many more things here, but these are the minimum things that you need. Any questions so far? You then create another file called network.yaml, which I have done here; it is a copy of network-proxy-none.yaml with some changes. I will do a comparison so that it is clear what the changes are. All right, let's start. These first lines can be ignored. The first change here is that it is called demo fabric; this is just a tag, so you can keep it as local if you like. Then our proxy is none; that is the same, so there's no need to change that. The ambassador ports can be ignored, because for Fabric we don't use Ambassador anyway, we use HAProxy. Then comes the retry count, which is 15 in our case; the retry count is how many times the system will retry, i.e. how long it will wait, for the pods or the certificates to get created. I've not added the annotations part, because this is a new feature, but in case you want to add custom annotations for your services, deployments, or PVCs, you can add this section as well. Then the docker section: I've just removed hyperledgerlabs — I think this is the old one where we had hyperledgerlabs, but now it is just hyperledger. And then I think the rest of the things are almost the same, except the comments, so you can just keep them the same. Yep, that is same, that is same, that is same. The main changes will be in cloud_provider: I am using GCP, so if you're using AWS, use aws here, and in that case you have to pass the access key and the secret key. I'm using GCP, so I'm keeping it as gcp. Right. Then we have the cluster context, which I have not completed yet. For the config file I think we can use the same name, config, because as instructed it should be called config.
The only thing left is to find the cluster context. So let's change everything: this file has three organizations, which is why, if I search, there are three occurrences; I'll point them all to config. Now for the cluster context I'll have to look it up. Okay, if this is too fast, I can slow down. Let's do one thing: take some time to go through the network-proxy-none.yaml file; ten minutes is too much, so we'll take five minutes to go through it, and then I'll start again. Okay. The main thing to do right now is copy your network-proxy-none.yaml into a build folder: create the build folder under bevel. So cd — I'm using baf-demo — then ls -ltr. If you don't have the build folder here, create a folder called build, and then cd into build. Then you run the copy command; I've given the path, and you most likely have to use an absolute path, copying to network.yaml. And then we are comparing and editing the network.yaml. If you don't have a compare tool, you can just follow how we are editing. So far we have not edited anything until line number 110 or 118, depending on the file. Because my cluster is on GCP, I'm setting cloud_provider to gcp here. If you have AWS, you should be using aws; if you have Azure, you should be using azure. If you are using a local Minikube — I think Minikube is more or less supported, but I'm not entirely sure for Fabric. Let me check. Yeah, you can use minikube as the provider.
How I checked: the main difference between all of these provider values is the storage class. There is a role which creates the storage class, and this is where the templates are; these are the supported ones: AWS, Azure, Minikube, DigitalOcean, OKE. Since GCP is not there, I'll have to create a GCP storage class as well in the meantime. Okay, so you have a review and I'll create the storage class. The question is: can we use the network.yaml under platforms as it is, or do we have to copy it into the build folder? The answer is that you have to make your specific changes, because that's what you are doing right now; I am explaining the changes that you need to make, because the network configuration is specific to your environment. You cannot use another network.yaml blindly without making changes, and that's what we are doing right now. Okay. Jorad had a question: what is the default structure of the organizations — one org, how many peers? For example, a project where we have a main organization and another organization with four peers to host the chaincode, where we put the participants of the network. The answer is that it will depend on your project. As you said, you have a project with one organization and another organization with four peers; in that case you will have two organizations, the first one, and a second organization with four peers. But if you have a situation where only one organization holds the channels and the chaincode, then I think you have to rethink the design, and maybe blockchain is not a good choice in that situation, because blockchain is supposed to work across multiple organizations.
If you only have one organization where everything is happening, then you don't actually need blockchain; you can just use a database. Okay. So as I said, I'll have to create a storage class. Can I ask how many of you are working with GCP? You can say yes. Okay, so just one. In that case I'll make these changes; if you're on Azure or AWS, you don't have to. For someone who is using an unmanaged, DIY K8s cluster: in that case you just choose a name. As I said, the difference will be the SC, the storage class, and that's what we're doing here: adding a new storage class. As you see on this screen, these are the ones that are already supported: AWS, Azure, Minikube, DigitalOcean, and OKE. I'm adding a GCP one; similarly, you would add one for your own K8s cluster — you can call it whatever you like, and use that same name in the cloud_provider section. So whoever is working with GCP can work along with me. This is the file; I'll copy the relative file path that we are changing right now. Here I have basically copy-pasted the OKE entries and renamed everything to GCP. Correspondingly, we have to create the template, because these are the storage class template files. You can do the same thing: copy the OKE template and rename it to GCP. This name and that name should match, it goes without saying. Then you write the storage class definition here, and the only variable you're using is the SC name, which is passed in. The rest of the things are the same; only the GCP-specific parts are different. Okay.
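For reference, a GCP storage class template along the lines described above might look like this; the `sc_name` variable name and the `pd-standard` disk type are assumptions on my part, while `kubernetes.io/gce-pd` is the standard GKE provisioner:

```yaml
# gcp_sc.tpl -- templated StorageClass for GKE (sketch)
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ sc_name }}          # the only templated variable, passed in by the role
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard            # or pd-ssd for faster disks
reclaimPolicy: Delete
```

The same pattern applies for any other provider: swap the provisioner and parameters, keep the templated name.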
So whoever is using a Longhorn storage class, you can just add those details, the provisioner section and so on, here. Okay, I'll paste this in the chat. Yes, you have to create it. I think for Longhorn you have a provisioner class name, right? That will be your class name here under provisioner. This one is the GCP file; I'll just paste it. Please ensure that the spacing is correct, here and after parameters. Okay. So that's a bonus: that's how you add a new storage class. And if you want to change an existing storage class — for example, on AWS you don't want to use EBS, you want to use something else — just come into this folder and change it here. But this is specific only to Fabric; for Corda and all the others there will be similar files, but they may not be in the same path, they will be inside the corda or besu folders and so on. Okay. Right. So far so good; any questions before we go back to editing? Okay, there are no comments, so I'll continue. Just to repeat: this cloud_provider value should match this part; that's what I'm saying, the cloud provider should match there. I'm using gcp. If you're using AWS, please ensure you have the access key and the secret key populated. Then the config file: I will copy my current kubeconfig, the one which connects to my GCP Kubernetes, which is this one. I'll copy this to a file called config, as I said, and that's what is mentioned here. Then we have to update the context as well. Just do kubectl config get-contexts — I hope the command is right; maybe not.
Also do kubectl config view. If I do the config view, the current-context is this one, so this is the context that I'll use here; replace all occurrences with the correct one. So far so good. Okay. The next configuration we'll be changing is vault. Of course, you have your own Vault URL; I have mine, which is this one. So find all occurrences of the vault URL, replace them with yours, and remove the trailing slash. And you should have your token, which is the root token, the one that starts with "s.". Replace the root token with that. And from the day before yesterday's session, if you created a different secret path other than secretsv2, update it here; I created secretsv2, so I'm using secretsv2. Patrick, you have a question: can we do this with only one cluster and use the same context for the three instances? Yes, that's what I'm doing — if you noticed, I'm replacing all occurrences. But in a multi-organization, multi-cluster setup, you will use different ones. Jose, your question is whether, for a production multi-organization network, each organization will have their own network.yaml. Not to start with, especially for Fabric, because Fabric has the concept of a genesis network; at genesis, all the orderers, or at least the genesis organizations, should be there. Once that is ready, each organization can add their own peers and so on, and in that case they will have their own network.yaml. It is much easier with Besu and Quorum: there you can start with a validator network only and then gradually add new organizations, in which case each organization will have their own network.yaml. Right. Next, another important part — everything is important, but this is one of the most important ones, and the one that most often goes wrong.
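The vault section being edited above looks roughly like this; the URL and token are obviously placeholders, and the key names are from the sample file as best I recall, so double-check against your copy:

```yaml
# vault section of one organization in network.yaml (sketch)
vault:
  url: "http://vault.example.com:8200"   # your Vault URL, no trailing slash
  root_token: "s.aBcDeFgH12345"          # root token, starts with "s."
  secret_path: "secretsv2"               # only change if you used a different path
```

Because this demo uses one Vault for all organizations, the same values are repeated in every organization block.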
So I'm using SSH, hence the protocol is ssh. If you're using HTTPS, no worries, you can use https, and the corresponding URL is updated accordingly. Remember, if you are using SSH, you should have the git@ format of URL; if you're using HTTPS, you just have the normal URL, which starts with https. You can get the actual URL from git as well: git remote -v. If you see, this is my origin, so this is what I am using: git@github.com — the only thing I changed is replacing the colon with a slash — sownak/fulcrum.git. This is my old repository, so I'm reusing it. In your case, if you're using HTTPS, you just change this to your own username. Let me check my branch: I'm on fabric already. So you will use your own branch: whatever branch you are on, use that here. I would generally suggest moving away from the develop branch, because develop is where ongoing work lands; on your own branch you will still be able to pull in all the latest upstream merges easily. Here in the example, yes, you do have to use your username and a password. No — the password is a token, sorry. You don't use your HTTPS account password; the key is just called password, but the value is a token. Okay. Then git_repo: this is kind of a duplicate, but it makes things easier in our Ansible code; otherwise it gets complex. It is basically the repo without the https or ssh prefix, just the repo path starting from the hostname, github.com/... And on your question, I think I answered it: there is support for a token, because right now, if you use an account password, at least with github.com, it will not work; they have rejected password authentication.
You'll get a rejection. Right, going back again: email is your git email address, so I'm using mine. And then, because I'm using SSH, my private key is needed; if you're using HTTPS, the private key is not needed, you can just leave it as empty double quotes. Because I'm using SSH, the private key is needed, and I've updated it to the right name; mine here under the build folder is called gitops, so I've used that. And no, the https://token@ form and all of that will be formed automatically; if you're using HTTPS, you will just use https://github.com/username/bevel and, here, github.com/username/bevel.git. That's all; the token will be added automatically later, so you just add the token in your password field. Is the Fabric recommendation to have only one network.yaml for the genesis network? Yes; then you should have separate network.yamls for additions, when you add new channels and so on, or add another participant. That is why, if you look at our samples, there are so many different examples. Okay. So that's the GitOps changes, and that should be enough: if you searched all occurrences and updated everything, you should not need any more changes. I'll just review again; for the other organizations I did only one so far. Yeah, it looks like mine is fine; everything else is default. Okay. If you are going to do the chaincode as well: go to the second organization, which is manufacturer in this case; there is a chaincode section at the end. This chaincode section deploys the chaincode as well, so if we are going to do that too, then here you also need to add the git repository and the git token.
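Putting the GitOps discussion together, the section for one organization looks roughly like this; the field names follow the sample file as best I recall, and every value is a placeholder, so treat it as a shape rather than a verified config:

```yaml
# gitops section of one organization in network.yaml (sketch)
gitops:
  git_protocol: "ssh"                               # "ssh" or "https"
  git_url: "git@github.com/<username>/bevel.git"    # https://github.com/<username>/bevel.git for HTTPS
  branch: "fabric"                                  # the branch you are actually on
  release_dir: "platforms/hyperledger-fabric/releases/dev"
  git_repo: "github.com/<username>/bevel.git"       # repo path without the protocol prefix
  username: "<git-username>"
  password: "<personal-access-token>"               # a token, NOT your account password
  email: "<git-email>"
  private_key: "/home/bevel/build/gitops"           # leave "" for HTTPS
```

As noted above, the token@ URL form is assembled automatically from these fields; you never write the token into the URL yourself.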
So this one, and then the GitHub details that you want to change. I don't think we will reach the chaincode deployment today, but the reason this is a separate section is so that you can deploy chaincode from a different repository altogether; it doesn't have to be the same repository, in which case this part will be different. And the branch in the git section, which is the branch it will download from, should be the same as your current branch name. As I said, my branch is fabric, so I'm using fabric; the default here is local, in which case you should create a new branch called local after you have downloaded the code. It should be the same branch you are on when you do git branch, which is fabric in my case. All right. We'll have another five-minute wait so that everyone catches up. Oh yeah, the endorsement is specific to that chaincode. The question here is: there is an endorsement attribute in the chaincode section and there is an endorsers attribute in the channel section. Endorsement will depend on your chaincode, but endorsers depend on the channel configuration, I think, if I'm not wrong — because in the channel you define how many endorsers are needed and so on. Jorell had a question: in this network, how many organizations are there, and peers? There are three organizations, one orderer, and two peers. Next question: if I'm using any other cloud apart from GCP, AWS, or Azure, what should I configure as the provider? That will depend on what you want to call your provider. If you want to call it xyz, you can call it xyz; just ensure that there is a template file called xyz_sc.tpl and that you have mapped xyz to xyz_sc.tpl.
Those two things. VK had a question about the reference links for the past sessions; they are on YouTube, under the Hyperledger channel, so you should be able to find them. I'll take it a bit slowly, but in the meantime, whoever is done: just ensure that you have these three files — if you're using SSH you will have three files; sorry, if you're using HTTPS you will have only two files. Right. And this is what we're doing right now: ensure the configuration file network.yaml has been updated for the DLT network that you want to deploy today. Okay, should we progress, or wait another four minutes? What's the status, guys? All right. I'll just recap a little for everyone else, and for the recording. I have copied network-proxy-none.yaml and changed the important parameters: mainly the cloud provider, then the Kubernetes section with the correct context, then the vault section with the Vault URL and the root token that I'm using, and then the GitOps section corresponding to my Git repository. Any other questions in the meantime? Someone had asked whether it is too fast. Another question: is there any K8s mechanism on GCP, for example, to achieve communication among different clusters? See, here I have one single cluster; in a real setup each organization has its own cluster, and that is why Bevel deploys either Ambassador or HAProxy to achieve that inter-cluster communication. That is why you need Ambassador or HAProxy, which basically acts as an ingress controller; that is how you open your cluster to the world outside of the cluster. In this example we are using proxy: none because we are not connecting to other clusters. But for Fabric inter-cluster communication you'll have to use HAProxy, and for all the other platforms you'll have to use Ambassador.
And that means you will also need a domain name for each of those services, because they will be addressed by domain name. On chaincode: the chaincode repository can stay as it is, but if you are changing branches, change all the branches to the same branch, including the one under the chaincode service. So let's run this. I've gone back to my baf-demo folder (in your case it may be your bevel folder), up from the build folder. I'm running the second command from the document, because I expect there will be failures we'll need to debug, and it's better to run it this way: otherwise, if the Docker container fails, you have to run the whole thing again, which is too much. So this command puts me inside the container. For anyone using Windows it will be a little different, especially the $(pwd) part. Carlos asked about configuration that allows a connection to GCP with a bastion. I'm not exactly sure what is meant, because that is your GCP cluster configuration, which has nothing to do with Bevel. But if you want to do it in a secure manner, create a dev or Ansible machine inside the same VPC as the GKE cluster, connect to it from your bastion host, and run these commands there. Right, so I'm in the container and it should have mounted bevel. Yep. Let's go to bevel; I'll just check that my build folder has the files. So I'm now in /home/bevel, and I'm running the ansible-playbook command; I'll paste the command in the chat as well.
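The two steps just described, entering the build container and launching the playbook, look roughly like this. This is a hedged sketch: the image name and mount path are assumptions based on the workshop docs and may differ in your Bevel version:

```shell
# Hedged sketch -- image name and paths are assumptions; check your Bevel docs.
cd ~/bevel                      # your cloned repository (folder name may differ)

# Run the build container interactively so failures can be debugged
# without restarting the container each time:
docker run -it -v $(pwd):/home/bevel/ hyperledgerlabs/bevel-build:latest bash

# Inside the container:
cd /home/bevel
ansible-playbook platforms/shared/configuration/site.yaml \
  --extra-vars "@./build/network.yaml"
```

Running the container interactively (rather than letting it run the playbook and exit) is what lets you rerun individual playbooks after a failure, as demonstrated later in the session.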
And now we wait; get some tea or coffee and hopefully it doesn't fail in the middle. What it actually does: Bevel has different playbooks, and we use Ansible mainly for the automation. Ansible here is more of a glorified shell script; we don't use a lot of its advanced features. Okay, there is a problem on my side: I have to install gcloud in my controller container; it isn't there. Someone else got an error. That is because you're using AWS, right? I think the AWS token you are using is fine, but your Kubernetes cluster is forbidding that resource access: the error says forbidden for system:anonymous. (On the Vault version question: 1.7.4 is the latest version we have tested and support.) So your token for setting up the cluster is fine, but the cluster is not allowing system:anonymous access for some reason. I'm not entirely sure of the exact fix offhand, but if you search for it, it's an RBAC-related issue; yes, exactly the same thing, John. You have to add a role-based access control binding. I don't have the exact link right now, but it's quite a common problem, so you should be able to resolve it; most likely your cluster's default RBAC rules are denying that access. Okay, I'm spending time installing gcloud, so I'll make a note to create an issue on our board to install Google Cloud SDK in the container image as well. Oh, and I still have to authenticate. There's a question: at which level do I need to check roles?
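The RBAC fix alluded to above is usually a one-line binding. This is a hedged, dev-only workaround: granting cluster-admin to anonymous users is insecure, so use it only on a throwaway workshop cluster and prefer a narrowly scoped role in any real setup (the binding name here is arbitrary):

```shell
# Dev-only workaround for "system:anonymous ... forbidden" errors.
# Insecure by design -- only for disposable workshop clusters.
kubectl create clusterrolebinding anonymous-dev-access \
  --clusterrole=cluster-admin \
  --user=system:anonymous
```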
That question is about the Kubernetes roles. There is no problem with creating the account; it is more to do with access from different systems. Are you able to do kubectl get pods? And are you using an AWS profile? Yeah, I think that's the thing. Because you are now running from the Docker container, and as you saw just now, even for Google I had the same problem: I had to authenticate again from within the container, and then it works. If you want to run the same kubectl commands I did, set the kubeconfig to that config file, run kubectl get pods, and if it gives an authentication error, authenticate using your cloud CLI (for me, the Google Cloud commands) and then it works. Most likely you just have to set the same AWS profile inside the Docker container; the AWS CLI is already installed there, and then it will work. Right. One more thing: because I'm using HTTPS, when it tried to connect to GitHub it asked me about SSH host key checking, so I just typed yes. And to explain what Bevel is doing here: it uses GitOps, which means using a Git repository as the configuration store. All the files for the Kubernetes cluster are generated by Ansible using its templating engine, which is quite good, and then saved into the Git repository that you've given in the gitops section.
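Re-authenticating inside the container, as described above, can be sketched like this. Cloud CLIs keep credentials in your home directory, which the container does not share, so you must log in again inside it; the kubeconfig path is an assumption based on the build-folder layout used in the session:

```shell
# GKE: authenticate and fetch cluster credentials inside the container
gcloud auth login
gcloud container clusters get-credentials <cluster-name> --zone <zone>

# EKS equivalent (profile name is a placeholder):
export AWS_PROFILE=<your-profile>
aws eks update-kubeconfig --name <cluster-name> --region <region>

# Point kubectl at the same kubeconfig the playbook uses (path assumed):
export KUBECONFIG=/home/bevel/build/config
kubectl get pods --all-namespaces
```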
Then we have something called Flux that runs in the cluster. Flux connects to the Git repository using SSH or HTTPS, downloads everything, and creates the corresponding resources on the Kubernetes cluster. If you are having problems with SSH, note that many networks don't allow SSH to all ports, so you can use HTTPS instead. This fatal message, I think, can be ignored, because everything else is more or less working. Meanwhile I'm doing the debugging side as well: I'm connected to the cluster using Lens (along with many other clusters). In the namespaces you will see one called flux-<environment-name>, depending on what you've given as your environment name; if you used local you'd see flux-local, and since I've given demofabric I have flux-demofabric. In that namespace there are three pods: the main Flux pod, the Helm operator pod, and a memcached pod. If you tail the logs of the Flux pod with kubectl logs -f, you can see what Flux is doing. What I noticed was that at some point it kept repeating that the Git repo is not ready, which meant my Git repo was not getting cloned. The problem was that my cluster is on Google's network and port 22 was blocked, so because I'm using SSH it was not able to connect. I fixed the firewall, and now it is able to download the code.
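The log check just described can be done from the command line as well as from Lens. A hedged sketch, assuming the Flux v1 layout Bevel deploys and an environment named demofabric (the deployment name `flux` is an assumption; list the pods first if it differs):

```shell
# Inspect the Flux sync status (namespace name depends on your environment name)
kubectl get pods -n flux-demofabric          # expect flux, helm-operator, memcached
kubectl logs -n flux-demofabric -f deploy/flux | grep -i "repo"
# A healthy sync stops repeating "git repo not ready" and starts emitting
# "kubectl apply" style events instead.
```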
When you see messages like kubectl apply in the Flux logs, that means it is working. And hence, if I go to namespaces, you can see these two namespaces were created recently, and the labels show they were created by Flux CD. It's now creating the third set of namespaces, and I'm keeping an eye on the Flux logs as well. Importantly, part of the debugging is another page, the verify/debug guide, which I'll share in the chat; it describes the same steps I'm doing right now. First you check the logs of the Flux pod; then, after all the namespaces have been created, you check the logs of the Helm operator pod, because if there is something wrong with your Helm releases, it will show up in the Helm operator. If I show you now: the Helm operator has nothing yet, because no HelmRelease has been created, but this is in progress. Going back here, we've now moved on and it is creating the Vault Kubernetes files, and it is waiting for the supply chain job. I used to be confused by these messages earlier, thinking it had failed and was retrying, but this is not actually a failure: as soon as you see "retrying", it means it is waiting for that job, or whatever task, to finish, and it will retry up to the 50 times that we have configured. On Bruno's question about the Flux operator attempting to pull images from other namespaces: if it shows refreshing-image errors on the Flux operator, that you can ignore. The rate-limit one is also fine,
well, not fine in the sense that there is a Docker Hub rate-limit issue, but the real question is whether it is actually unable to pull an image you need. Okay, thanks for confirming; that should also be fine, because your ~/.aws directory has the configuration, profiles and all those details. This job is taking a bit of time, so let's see. Guys, mention in the comments where you have reached. In the logs, if you see "git push has failed", check the second error message. That task is set to be ignored, but it is actually a problem even if it is ignored. The actual reason the git push failed will be in the messages just above, in green (I hope it is green for you as well). If there is an error there, then there is a real problem. For me it does say "not a git directory", but it still pushes; I think that's because I have mounted from my local folder, which is why it says it is not a git directory and yet still pushes to my fabric branch. If you're stuck at the cluster role binding, still waiting: please check your Flux logs; most likely Flux is not syncing and that is the problem. Okay, looks like I've made a mistake on the Docker side, so I may have to redo it. I think I've given the wrong Docker username and password, and that's why it's not working, so let's stop it. Sorry. Patrick, what is the issue you're seeing? In your Flux pod logs, if the Flux pod itself is working, it should be fine, unless there is another issue like "git repo not cloned" or similar; if you search the logs like I did, you should be able to see the error.
Most likely either you're using HTTPS or there is something blocking access to your Git repo. Okay, in that case you have a newer version of Kubernetes where v1beta1 doesn't exist anymore, and that is the problem. What you have to do is update your release files. That is a very new cluster version; Bevel's Fabric support doesn't work on anything above Kubernetes 1.19, because there is a problem with the Fabric chaincode setup itself on newer versions. So if you're on such a version, you basically have to search for whatever apiVersion is giving the error in your releases folder and replace it; I think they have changed it to v1, so replace it with just /v1. And no, Jose, pip3 install kubernetes will only update the client side; it is not going to automatically change these files. Because they are generated from templates, you have to update the template as well as the release files and check them in manually. You have to stop the run anyway, I think. I'm also having a problem. Okay, I think I found it and I'm running again: I had used the wrong Docker Hub name. Right, any comments on how far you've got? I don't know if we'll take the whole extra hour or not, but I'm stuck at running the Vault Kubernetes job, as you can see. Okay, Jose is already creating the channel; you're going forward super fast. On GitHub access issues: it may be that some scope needs to be enabled on the token you have for GitHub; maybe that is the problem. Thankfully, my run has progressed.
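The apiVersion replacement described above can be scripted. A hedged sketch, assuming the deprecated group is `apiextensions.k8s.io/v1beta1` (the exact group differs per resource, so review each match, and fix the generating templates too, as noted above, or the next run regenerates the old version):

```shell
# Patch deprecated apiVersions in generated release files for newer clusters.
# Review matches before committing; repeat for other deprecated groups
# (e.g. rbac.authorization.k8s.io/v1beta1).
grep -rl "apiextensions.k8s.io/v1beta1" platforms/hyperledger-fabric/releases/ \
  | xargs sed -i 's#apiextensions.k8s.io/v1beta1#apiextensions.k8s.io/v1#g'
```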
Okay, coming back to my issue, I'll show you: please don't use index.docker.io/hyperledger; use index.docker.io/hyperledgerlabs instead, because it seems we have a reference to an alpine image which is not under hyperledger but under hyperledgerlabs. I know my Docker token is visible here, but I'll revoke it after the session; it's a free account anyway, so it's nothing. So if we look at the cluster now, we can see that the supply chain Vault Kubernetes job has completed; "succeeded" means the job is complete. And as I told you I would show: because the job is complete, this Vault policy has been created, and if I go to the auth section, this access role has also been created. You should have both policies now, because the other job has also succeeded. So we'll see; it is on the CA section now. And for Patrick: yes, we return to that step. We have it in our roadmap to upgrade the supported Kubernetes version; I think the latest we'll do first is 1.21. Everything else is fine; only Fabric has a dependency on the Docker-in-Docker setup, but with external chaincode and so on it should be okay. On the debugging side, I am watching what is happening on my cluster in the background, following the logs of the Helm operator and the main Flux pod. After Flux has done its first successful sync, the subsequent ones will keep working; it's only an initial-setup problem we have with Flux, mainly because of the Git access issues.
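The registry fix above lives in the docker block of network.yaml. A hedged sketch (field names follow the Bevel samples; values are placeholders):

```yaml
# Hedged sketch of the docker registry block in network.yaml.
# Note the path: the helper (alpine) images live under hyperledgerlabs,
# not hyperledger.
docker:
  url: "index.docker.io/hyperledgerlabs"
  username: "<docker-username>"
  password: "<docker-password-or-token>"
```

A wrong username or password here is exactly the failure shown earlier in the session: the Vault Kubernetes job stalls because the image-pull secret it creates is invalid.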
Sometimes a network likes SSH, sometimes it doesn't; it depends on your network, not on GitHub as such. You can also have a look at the Helm releases: HelmRelease is a custom resource created for Flux, under the API group helm.fluxcd.io, kind HelmRelease. Once Flux and the Helm operator succeed, you will see these as successful as well, and once a HelmRelease has succeeded, the corresponding job or pod will appear, as you see here; they are all related. Another update on Bevel: we are almost finished with the Flux v2 upgrade. (And yes, that's what I said about the token: that token is also used to update a tag.) The changes for Flux v2 are substantial, but it will make things simpler and reduce the memory load on the Kubernetes cluster. As you see, Flux syncs with the Git repository; we have the interval set to two minutes, but once your network is already running you don't need a sync every two minutes, and you cannot change the interval once Flux v1 has been installed. With Flux v2 all those features are there, and it's a somewhat different way to link to GitHub, or any Git repository for that matter. Okay, let's see if our storage class is getting created. Yes, it got created. Any other updates anyone would like to share, or questions, while we wait for the second storage class to get created? Some of you will wonder why it is creating the same storage class again and again.
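The HelmRelease inspection just mentioned can be done directly with kubectl. A hedged sketch for the Flux v1 helm-operator layout Bevel uses (release and namespace names are placeholders):

```shell
# List the HelmRelease custom resources the helm-operator reconciles
kubectl get helmreleases.helm.fluxcd.io --all-namespaces

# Inspect one release: its status/events explain chart install failures
kubectl describe helmrelease <release-name> -n <org-namespace>
```

A succeeded/deployed status here means the chart installed; only then does the corresponding job or pod appear in the organization's namespace.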
That's because we do it per organization. In some of our examples the same storage class is used by everyone in the cluster; that's also possible. But Fabric is one of our oldest codebases, and that's why a storage class gets created for each organization. Okay, I've checked YouTube for comments; it's the same story there. Question: once the network is up, how do we deploy chaincode updates; do we rerun the YAML file? There is a separate chaincode operations playbook which you can use for chaincode updates; I think chaincode upgrade and so on are also included in it. So the answer is that you do rerun, but it's the playbook that changes; the network.yaml may remain the same apart from, of course, your chaincode version and similar fields. With Fabric 2.2 it is easier, with chaincode packaging and so on. Bevel also supports the Fabric Operations Console; there is a tag or key called something like operations-console enabled, true or false, and then you can do these operations via the console as well. If you have not explored Fabric Operations Console, it is a Hyperledger Labs project; those of you using Fabric should explore it. It gives you the front end that many people want for operating a Fabric network, because you can create channels and everything, once all the settings are in place.
I think there is a video on our wiki as well, which shows how to set up Fabric Operations Console on Hyperledger Bevel and use it to create channels and package chaincode. Right, so the storage classes have been created; now we are creating the CA certificates. Question: what is the way to clean the cluster? There are several ways, depending on what you want. If you want to remove just the deployment made by Bevel, there's a playbook called reset-network (it should really be called clean-network, but it's reset-network.yaml); run that Ansible playbook with the same network.yaml, and it will delete everything in your cluster that was created by Bevel. If you want to delete the whole cluster, just delete it from the cloud console. On the token question: yes, I think you did need the token to be updated, and in that case you may have to uninstall and redeploy, because your Flux was already set up with the older token; if you regenerated a new token, it is not supplied to Flux unless you rerun the code. You have to uninstall Flux first, because it will not update automatically. If you have the namespace called flux-<something>, for me flux-demofabric, I think the easiest is to just delete that namespace; it deletes everything in it, and when you run the playbook again it will recreate it. Yes, that namespace you can delete. Meanwhile, one of the CA certs jobs has executed, so if I show the Vault: under the orderer organization's path, the CA certificates have now been stored. And yes, that is why an IP address is not a good choice; you should always be using domain names.
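The cleanup paths described above can be sketched as follows. This is hedged: the reset playbook name follows the speaker's description and its exact path varies by Bevel version, and the namespace name depends on your environment name:

```shell
# Remove everything Bevel deployed, using the same network.yaml
# (playbook name/path per the session; verify against your Bevel version):
ansible-playbook platforms/shared/configuration/reset-network.yaml \
  --extra-vars "@./build/network.yaml"

# If Flux was installed with a stale git token, delete its namespace so
# the next playbook run reinstalls Flux with the new token:
kubectl delete namespace flux-demofabric
```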
If the Vault service's IP changes, it's a big problem, because all the access from Kubernetes is also based on that Vault address; hence you should use domain names all the time and not IPs. So we are at the CA certs job, which creates the certificates for the CA server. Who else has progressed beyond the token reviewer binding, or has their CA certs? I know someone said he's already at the chaincode part. Any other questions in the meantime? Okay, that is complete; now we wait for the certificates. There are settings you could add to backdate the certificate start time, say by five minutes, but we have not got to it because we're busy with other things; if anyone wants to contribute that, it would be a good and fairly easy configuration contribution. The reason it matters is that if we proceed too quickly, then when we run the actual network, Fabric complains that the certificates are not yet valid. Carlos asked whether Bevel can be used with Vault in the cloud, that is, a managed Vault service. I think there was a question on Discord about this as well. We have not tested it yet, and I think it would be quite difficult to integrate. Okay, let me see if the pods are up and running. The CA pods are running for the supply chain and the main orderer organization; one has already been running for five minutes and the other server has just started, so we'll wait a few more minutes. The only disadvantage of online workshops is that you cannot actually see whether everyone is progressing at the same pace. We did organize an in-person one in London, but it had much less participation than this one,
though the engagement was better there, because we had other team members who could help people; that is harder here. VK asked: how much does it cost in the cloud to run this lab? It depends on how long you run it, but we run our dev environment on AWS, which is a bit costly because of additional security rules and so on, and a lot of people work on the same cluster; we create at least two or three blockchain networks at the same time. For that we spend about $600 a month. It will be much cheaper if you're running only one cluster, and as I said, Google is a bit cheaper because Google doesn't charge for the Kubernetes control plane, whereas AWS charges for the master as well. Thanks, Michael. Yes, as you said, it will work with Vault in the cloud, and you can automate the infrastructure as well. We do have infrastructure-as-code internally within Accenture, which creates the cluster as well as the Vault, hosted on an internal VM with bastion hosts to connect to it. But everyone has their own way to do infrastructure, and ours is geared to the many different things we do for clients, which is why that part is not open source. Question: where does the distinction between Indy and Fabric come in for deployment? There are many differences, but the main one is that for Indy you will need a static IP address. When you are doing a proper deployment for clients or for a public network, Bevel uses Ambassador for Indy (HAProxy for Fabric), and you need a static IP address for Ambassador because Indy works with IP addresses.
If you're running locally, on minikube, or even entirely within one cluster, then it's fine: you will mostly use Indy's NodePort service, and when you access it, you will mostly use port forwarding. Done that way it is simpler, and it won't take as long as this either, because Indy doesn't have a CA server and all of that. Okay, so we've now passed the five-minute wait. Now it will generate the certificates by running the fabric-ca-client commands against the CA server; again, this happens automatically once the pod comes up. One more point: here we are doing everything from one place, but if the organizations are using separate Vaults, they will not share each other's Vault information. Each company, when they eventually get their own cluster and Vault, will only hold their own details and will not be able to see the other organizations' details. That's the advantage. And I can see the CA tools pod is coming up now. Question: how can Bevel be used to maintain, update or upgrade a running network? One of the current mentorship projects is exploring, or at least documenting, how to upgrade Fabric 1.4 to 2.2 using Bevel, for a network which was created with Bevel. Bevel can certainly be used to maintain a network it deployed, because you have all the operations: you can add a new Fabric node, delete a node, add a new orderer, add peers, add a peer CLI. And I think there are items in the backlog about refreshing the certificates and so on.
So that is how you can use Bevel to maintain a running network. For Indy, we have not worked on many network upgrades as such, but yes, it's correct that it's mainly a config transaction, so you would just run that on the CLI; it's not really automated. On Indy, we do have the flow for joining an existing Indy network in Bevel. Jose asked: can we have multiple chaincodes per organization? Yes, you can, but in the current format of Bevel you have to change the network.yaml and rerun afterwards, because the current format doesn't support it in one pass. Okay, this step is done. If you look now, the certificates for the orderers, both the MSP and the TLS ones, have been created by the CA tools pod, and the admin user has also been created. Let me answer the questions one by one. First, Jose asked if there is any source to check the new format for multiple chaincodes: no, it's open to the community; we are not working on that. I think it's in the backlog, and you'll have to find the issue there. We are not currently progressing on having multiple chaincodes per organization on the same channel, but you can pick it up if you want to work on it. Wade, you are still stuck at the cluster role binding; my question is, has the Flux pod come up again, and is it working? Mahesh asked: so essentially Vault acts as a wallet, which in fabric-samples is typically on the file system, right? Yes, that's right; that's the advantage of Vault. It's not only acting as a wallet, of course, but it does act as a central wallet where all the certificate files are stored,
and when a pod comes up, it connects to Vault and downloads the files it needs. That's how it works. If you're using fabric-samples, the file-system wallet only works for that setup, and if you delete the file system, you lose all the certificates. Okay, Wade, you're saying Flux is running, good; but are the logs fine, is it working? I have a new problem on my side: even though the certificates are generated, it's not able to fetch them. Wade, I can't really help without seeing the actual thing, but check what the logs are saying in Flux; I don't think that message is causing any issue, there should be another one like "git repo not found" or similar if you search the logs. Sorry, Wade. Okay, I found my problem: I had another firewall blocking access to Vault, so it was stuck at that check. And yes, syncing the Git repo sometimes takes a little time; maybe that's the problem. It's progressing. Patrick asked: is Hyperledger Bevel mainly catered to deploying networks, or can users deploy agents separately, like Aries agents? It is mainly for networks. If you have ideas about deploying Aries agents, contributions are welcome. On our supply chain example app we do deploy a form of the Aries ACA-Py agent, but it is specific to that identity example app, as provided by the Aries project; it is not configurable in a similar way. You edit it, you build the Docker images, and then you deploy it; it's very traditional.
So it looks like the pods are coming up now. There is an "insufficient CPU" event, so a new Kubernetes node is coming up; let's see. I'm still waiting; this takes a lot of resources. Okay, the orderer is working; yes, both are running. So we now have one peer pod up and running, and we're waiting for the next peer; as I showed you, the other peer is already running, and the new pod is coming up. Still "insufficient CPU": I think my GKE cluster was created with rather few resources, hence the problem. Patrick, if you have a CrashLoopBackOff, you have to check which container in the pod is crashing, which is difficult to see from kubectl alone. In your case, I think the issue is that you have used HTTPS for Vault: have you provided the CA certificate? I think there is an option to provide the Vault CA certificate, because otherwise Kubernetes will not be able to connect to Vault over HTTPS without trusting its certificate; I think that's why you're getting that error. Meanwhile, the third CA pod is also up, and now we are creating the channel. On the Vault certificate question: a certificate generated by Let's Encrypt is fine in itself, but when Kubernetes tries to connect (internally we use curl or something similar), it is not completely unauthenticated: over HTTPS it needs the certificate, or at least the CA, to be trusted in order to connect to your HTTPS endpoint. I think that is the problem, from what I can see without analysing in detail, because all this while I have used HTTP for Vault. You can use HTTPS as well, but for that I think there is a setting called vault TLS; I don't remember exactly, let me search. Yes, there is a Vault certificate setting there, correct.
Thanks, Bruno. What I'm saying is that there should not be an issue with the certificate itself. But what is happening internally is this: when we create this access, I'll show you. This is the access by which Kubernetes authenticates with Vault and vice versa. For this access, when Kubernetes has to connect, you are giving the HTTPS Vault address as the path to Kubernetes. Now, when Kubernetes tries to do a curl or a POST, if it is HTTPS it will fail, because the certificate's public key is not in Kubernetes's certificate store. The certificate has to be accepted, but because it is command line, there is no one there to accept it. So that's when the authentication fails, and I think that is why it is showing that error. Right. So we think it has completed up to the generation; the joining of the channel is still pending and in progress. But here, as you see, we did the channel creation, and now we are joining the channel. That was me running deploy-network, and that is where it ends. Of course, if you're running site.yaml, it will also deploy the chaincode, which we haven't done yet. I'm not sure if anyone has reached this point yet, but as you see here, I have successfully completed this deploy-network run. So this is another way you can run it. After your initial setup, which is when you run using platforms/shared/configuration/site.yaml, if it fails, as it did for many of us the first time, you can always run again. The second time, because all those files from the initial setup are already there on the Docker container, or on your machine if you're running directly from your machine, you can directly run the playbooks from this configuration directory; for example, I ran the fabric deploy-network. So I'll just explain here.
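The failure mode described above can be reproduced with plain curl. The Vault address below is hypothetical; this only illustrates why the certificate matters when the connection is HTTPS.

```shell
# Without the CA certificate in the client's trust store, the TLS
# handshake fails before any Vault API call is made:
curl https://vault.example.com:8200/v1/sys/health
# typically: curl: (60) SSL certificate problem: unable to get local issuer certificate

# Supplying the CA certificate explicitly lets the request through:
curl --cacert ./vault-ca.pem https://vault.example.com:8200/v1/sys/health
```

This is the same handshake the in-cluster jobs perform, which is why an HTTPS Vault needs its CA certificate made available to Kubernetes.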
Sorry, I ran deploy-network. I was explaining about the Fabric Operations Console, right? If you want to just deploy the Fabric Operations Console after your network is set up, you can directly run that playbook. The command remains the same: as you saw, ansible-playbook, then the path to the playbook; the network.yaml also remains the same. Then there is create-and-join-channel, which we have completed. Then there is one for chaincode upgrade, I think someone was asking about how you can upgrade, and one for chaincode ops, which is the next step. And then there are playbooks for adding a new channel, adding a new orderer organization, adding a new peer organization, adding a new orderer to an existing organization, and adding a new peer to an existing organization. So these are all the operations you can do to maintain your fabric network. Now, about the URL you provided: "we do not provide non-HTTPS connections, so if I understand well, from within Kubernetes the Vault URL should be this one, right?" Okay, if that is the case, then for the error "root entry not found", maybe it is not able to resolve your address. You cannot use different Vault URLs from within the network and outside the network. Whatever Vault URL you have specified in your network.yaml will be used even internally by Kubernetes, because Kubernetes doesn't know by itself that it has to connect to vault.vault.svc.cluster.local. If you want to use that URL, you have to specify the same URL in the network.yaml as well. I think there was a discussion on Discord about this same thing; I gave the same answer earlier, when people were not able to see the certificates getting created, or the jobs not running, with one error or another on Vault.
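As a sketch, re-running one of these maintenance playbooks after the initial site.yaml run might look like this. The exact playbook file names and paths vary between Bevel releases, so treat these as illustrative and check platforms/hyperledger-fabric/configuration in your own clone.

```shell
# Run a single Fabric operation playbook against the same network.yaml that
# was used for the initial site.yaml run (the "@file" syntax loads the YAML
# file as extra variables).
ansible-playbook platforms/hyperledger-fabric/configuration/deploy-network.yaml \
  --extra-vars "@./build/network.yaml"
```

The same pattern applies to the other playbooks mentioned above (channel creation, chaincode upgrade, adding organizations); only the playbook path changes.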
But that is mainly because you have to use the same URL from within the Ansible controller, which in this case is my Docker container, as well as from Kubernetes; Vault should be reachable from both. Now, if you want to use the internal vault.vault.svc.cluster.local address, then use that address in the network.yaml, and map that domain name to the external IP address in the /etc/hosts file of your Ansible controller, if you're using a redirection or something. So you map this domain name to the IP address in your /etc/hosts file for that to work. Okay, right, I think that's pretty much it for today. I hope I've explained enough on how to deploy using Bevel, what the prerequisites are, and so on. Okay, sorry, this link that was posted, that was Bruno, right? Bruno, I just clicked and it seems your URL is accessible over the internet. But you have the wrong version of Vault: it is 1.10.3, which is much further ahead. As I said in our last session, we are going to use 1.7.3 from the charts. So maybe that's the problem, because the API will be different for 1.10.3. If you see mine, it is 1.7.3, and I'll link this as well. I think Joel, or someone, asked about the version: the maximum we have tested and support right now is 1.7.4. I think there has been a major upgrade between 1.7 and 1.10, and that is why we see a lot of issues; there was a similar issue in the Discord channel as well. And then there was one client who was using a much older version of Vault; we had to upgrade it to 1.7 because of all those "claim is not found" errors, which is a known issue with the Kubernetes-Vault connection, the authorization. Yeah, about the supply chain job: as I was trying to explain, check in the background in the Kubernetes cluster whether the Helm release for that job is getting created or not. You can filter by namespace.
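For the /etc/hosts mapping just described, a minimal sketch (the IP address and hostname are illustrative; use your Vault's externally reachable address and whatever hostname you put in network.yaml):

```
# /etc/hosts on the Ansible controller (or inside the Docker container):
# resolve the in-cluster Vault service name to the externally reachable IP
203.0.113.10    vault.vault.svc.cluster.local
```

With this entry in place, the same Vault URL in network.yaml resolves correctly both from inside the cluster and from the controller.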
So this is the Helm release, and if you go to the pods, it will appear here. One thing with jobs is that if you leave them for long and they fail, you will not see them here anymore, because a job will retry six times and then disappear. So if you want to rerun the job, go to the Helm release and delete that particular Helm release; then it will rerun the job. Okay, going back to the logs, as you see, the anchor peer jobs also got completed. Sorry, that's the wrong thing. So the anchor peer jobs got completed, and that's why we now have these additional peers here. Basically, that means the manufacturer peer is able to see the carrier peer, and the carrier peer is able to see the manufacturer peer. The orderer itself is a single-node Raft orderer, so there is only the one node, but you can see that it knows there is a new channel called allchannel. Right. And also, the CLI is working, so you can log in. I've enabled the CLI only for the manufacturer; that's why there's no CLI for the carrier. This you can control, again. So for the manufacturer, I can log into the CLI, and if I want to deploy chaincode manually, I can do it from this CLI as well; you can run the peer commands. You can see peer channel list shows allchannel. The other commands are the peer lifecycle chaincode ones, which I don't think we'll run here, because we have not installed any lifecycle-based chaincode. Yeah, so this tool is Lens, Kubernetes Lens. It is a very nice tool, which you can run so that you don't have to remember all the kubectl commands. You can see everything in a much nicer way and do all the operations: delete, update, basically everything. This is Google, though, so you cannot see the node metrics, you know, the Prometheus things; there is some problem with the Google attachment.
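The rerun-a-failed-job step described above can be sketched like this (release and namespace names are placeholders; use the organization namespace your network.yaml defines):

```shell
# Find the Helm release that owns the failed job in that namespace.
helm ls -n <org-namespace>

# Deleting the release makes Flux reconcile and re-install it on the next
# sync, which re-creates and re-runs the job from scratch.
helm uninstall <release-name> -n <org-namespace>
```

This works because Bevel drives the deployments through GitOps: the release definition still exists in the Git repo, so removing the installed release triggers Flux to apply it again.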
But if you're using it on AWS, you can even do whole-cluster monitoring using Lens, because it comes with the Prometheus operator, and you can see how much CPU is being used and so on. On GKE, for some reason, it doesn't work; I've not explored why. It also works nicely when you show Kubernetes to someone who is not a technical person; they like to see these things. Yeah, thanks, Patrick. We'll take that for the next session. But as I said, it's much easier when we do an in-person one, because you can just go and do these things, and there will be other people as well, not only me talking. Yeah. And if you're only in the terminal: when we started, we also used only the terminal, and we got used to running the Kubernetes commands. Now those are kind of by heart; we know kubectl logs, and that if there are multiple containers you have to give the container name as well, and the Helm releases and all, because we used them regularly. Only recently have we moved to this tool, but it just makes things a little bit easier. Okay. Thanks, everyone. And as you have the code now, because you've all cloned it, the issues board is there; it has all the issues and the backlog. If you see something that looks simple enough for you, please do it and complete the task. I've seen that some people have created issues as well; that is also a good way to contribute, if you find a problem. And even if we come back and say, no, it is actually not a problem, you are doing it wrong, that's fine; you can always get that clarified by creating an issue. Of course, if you think that maybe you are doing it wrong, always ask on Discord first.
We have a lot of other people talking about all sorts of things on Discord, and you can see the chat history as well. Yeah. All the best with contributing, and we'll see you soon. There will be more events planned; this month was fully taken up by this series, and of course we don't want to do the same thing again right away, so maybe we'll do something a little later. We also have two mentorships running, so I'll have some dedicated time on those as well. One mentorship is about, as I said, upgrading Fabric 1.4 to 2.2.2 using Bevel, with automation involved, hopefully. The other one is about demonstrating Hyperledger Cactus using Bevel, which means you can actually deploy Cactus using Bevel as well, so that you can deploy the interoperability solution in an easier way. Yeah. Right, that's all for today. Hope you enjoyed it, and thanks, everyone, for joining. Thank you.