And we are live. All right. Thanks, David, and welcome, everyone, to the Hyperledger Bevel workshop. This session is three hours long and will be demonstrating and discussing the deployment of Hyperledger Fabric on Kubernetes using Bevel and the newly added bevel-operator-fabric. The integration of bevel-operator-fabric has been an important milestone for us, as it introduces Kubernetes operators to achieve DLT automation and deployment. Throughout the workshop, as David already pointed out, feel free to ask questions, and also use our Discord channel and keep that open; we'll be posting links and code snippets there to help you with the workshop. With that, let's have some introductions from the speakers for the session. We have David Viejo. David, over to you for a quick introduction. Thank you, everyone, for coming. I'm David Viejo. I'm the CEO of Kubernetes Aware, which helps companies run Hyperledger Fabric networks and complete Hyperledger projects. And I'm one of the main contributors to bevel-operator-fabric, which helps to set up a network in far less time than the traditional way on Kubernetes. Yep. Thank you, David. We have Sownak. Sownak, a quick introduction. Hello. Hi, everyone. Yes, Sownak Roy. I'm joining from Manchester. I work for Accenture, and I'm the technical product owner of Hyperledger Bevel. I used to work for other companies like Capgemini and Infosys in my earlier days. And those are my LinkedIn and Twitter handles. I mainly work as a distributed systems architect right now in Accenture, focusing on blockchain and web3 technologies and also on DevOps architecture. Yeah, over to you, Suvajit. Thanks, Sownak. And myself, Suvajit Sarkar: I have around a decade of experience in software engineering and technology management. I work with Accenture's Metaverse Continuum Business Group, and I'm also a maintainer of Hyperledger Bevel.
My previous work experience was with Oracle and SAP in global markets. You can find me on LinkedIn; my ID is Sarkar1604. With that, I believe we can start with Hyperledger Bevel. Sownak, do you want to take that, or should I go ahead? You can do it. Yes. OK. So, Hyperledger Bevel is an automation framework for rapidly and consistently deploying production-ready DLT platforms. It is not a DLT platform itself; it is a tool which does the automation. There's always some confusion when people start with it, but the important point to understand is that it's a deployment tool. In a nutshell, if you see the diagram here, it starts with a developer or an operator configuring a single configuration file. We refer to it as the network YAML file. That single configuration file holds the information about the DLT platform or network that the user wants to deploy. For example, it would consist of the different participating organizations of the network, the choice of DLT platform that Bevel supports, and various configurations regarding, for example, the consensus mechanism of the platform, in the case of Fabric the number of orderers, and various other details like channels and chaincode, and so on. That single configuration is then consumed by the framework, and, as you see on the right-hand side, the framework then deploys the platform of choice onto the cloud provider of choice, which is abstracted through Kubernetes. Now, there are some guiding principles based on which the Bevel solution has been created, and it largely conforms to the reference architecture; we'll talk about the reference architecture a bit in the later slides. The other guiding principles are the infrastructure-independent part of it, which is done through the abstraction on Kubernetes, allowing it to be deployed on any cloud provider of choice or on on-premises infrastructure as well. Most of the components in Bevel are modular in design.
So you are free to plug and play with them and have your own desired components. The Bevel solution is designed for security, so it uses best key-management practices; none of the keys or credentials are stored in the source or configuration file. We'll also talk about this in more detail when we actually go through the workshop demonstration. And, of course, it is open-sourced under the Apache 2.0 license and is contributed to Hyperledger. Moving to the next slide: the key benefits of using Bevel. I would say the key benefits of Bevel are three aspects. Firstly, it provides a secure environment for your deployments by utilizing, as I was saying, best key-management practices, which are available by default in the solution. Secondly, it is a truly scalable solution, allowing the platforms to be used for early POCs and pilots and then scaled up to run in a true production environment. Third, it is an accelerator that provides a proven architectural pattern for your DLT deployment. With Bevel, you can create a dev or test environment in under an hour and cut the development time from weeks to hours. Some of the other key benefits are, as you see here, the reference documentation. We have worked extensively on our documentation; in fact, we are also running a mentorship program to enhance it further. The whole solution uses generic tools, which allows quick adoption by DevOps teams, and the automation can be easily plugged into other continuous-integration tools such as Jenkins or GitHub Actions, et cetera. The whole solution uses containerized assets, and Bevel itself, the source code, can also be run from a containerized environment.
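To make the idea of the single configuration file concrete, here is a heavily trimmed, hypothetical sketch of the top of a Fabric network YAML. The key names follow the style of Bevel's Fabric samples, but the exact schema and all values here are illustrative assumptions, not a verbatim sample:

```shell
# Hypothetical, heavily trimmed network.yaml for a local Fabric deployment.
# Key names follow the style of Bevel's Fabric samples; values are
# illustrative only -- consult the samples in the repo for the real schema.
cat > network.yaml <<'EOF'
network:
  type: fabric
  version: 2.2.2            # the LTS version the workshop was tested on
  env:
    type: dev
    proxy: none             # local single-cluster example, no HAProxy
    retry_count: 20
  consensus:
    name: raft
EOF

# The framework would then consume this file via the shared site.yaml
# playbook (requires Ansible, a cluster, and Vault -- not run here):
#   ansible-playbook platforms/shared/configuration/site.yaml \
#     --extra-vars "@./network.yaml"
grep -c "type: fabric" network.yaml   # prints 1
```

The commented invocation shows the general shape of running the main playbook against the configuration file, as described later in the session.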
And you'll also see, while you do the workshop, that most of the configurations, as we initially talked about with the network YAML file and the other DLT-network-related configurations, are created as code. So all the policies and configurations can be configured through the code itself. Now, moving to a bit of the architecture that we initially referred to as one of the guiding principles. So I can take this one, Suvajit. Yeah. So, just in case people have joined now: you can ask questions here, but as David said, it's better to ask on the Discord channel, because the channel is always there, while this meeting chat will go away after the meeting is finished. Okay, coming back to the slides: we have the physical architecture strawman. We talked about the DLT reference architecture that is followed by Bevel, and this is the overall architecture. This is a format adapted from the TOGAF architecture, actually an Accenture format which we use internally, but this part of the DLT reference architecture was also open-sourced before Bevel was open-sourced. On the left-hand side of the screen, you have the security services and the DevOps services, which are more on the DevOps side of the architecture. And then the purple boxes on the right side denote the runtime or execution architecture. So if we go, sorry, bottom-up first: you have the infrastructure services at the base. The cloud providers are there, and then we use the container services, and that's how we achieve infrastructure agnosticism, because it's not tied to a specific cloud provider, since we're using Kubernetes.
And we always use the managed Kubernetes services provided by each of the cloud providers, so that you don't end up spending time managing Kubernetes itself rather than actually deploying DLT networks. Then, on top of that, we have the distributed data platforms. At some point we had the distributed databases; I think no one has actually worked on this yet, but it's still there. But on the ledger side, we support all of these: Fabric, Indy, Quorum, Besu, and Corda, both Enterprise and open source. We support all of them. Then comes the next layer where we integrate, which includes the application integrations as well as the DLT integrations. For DLT integration, we mainly use Ambassador for most of the DLTs, but specifically for Fabric we use HAProxy. The other application integrations will depend on whatever application you are developing on the DLT. And then, of course, you have the presentation services, which is mainly the front end; that will, of course, be according to your application. So whatever is pictured here is provided by Bevel, and the ones you can see in the green boxes are the prerequisites; that means Bevel does not do them, you have to have them before you try to use Bevel. On the security services, the main one is the vault, which should be green on your screen. So it's HashiCorp Vault, which is the key management, where all the keys and certificates and even secrets are stored. Then all the policies are done in Git. Policies means who has access and all that, because we use GitOps, so all the policy management is taken care of by Git. The other ones, like IAM and certificate authorities, are determined by the project or, again, will depend on the application, because your application will have a different way to access the front end, for example.
Then, coming to the DevOps services, if we again go bottom-up, we have the application lifecycle management, which is the delivery management. We ourselves use GitHub and Read the Docs. So all the documentation is on Read the Docs, and GitHub is where we do all the sprint management, sprint planning, and release management. Then, on the build, test, and artifact management, we have Jenkins as well, which we have not used for a long time, but it's there; we have now mainly moved to GitHub Actions. So all the build management and the actual releases happen via GitHub Actions. All the samples are there on GitHub, under the .github folder. You have the infrastructure as code, which is not part of Bevel, because, as I said, the cloud infrastructure is a prerequisite. Then we have the Kubernetes deployment: we use Helm, of course, mainly via the Helm operator and the Flux operator, which again uses Helm. Then Prometheus and Grafana are still in the backlog; we need some help there. So if anyone here is able to help on the issues related to deploying Prometheus and Grafana for any of the DLT networks, it would be great. Then configuration management is via Ansible. Again, as we discussed, our main configuration file is just one network YAML. We don't do whole server management via Ansible, because that is anyway done separately by the cloud provider and we are using managed Kubernetes solutions, so we don't need to configure Kubernetes via Ansible; we are mainly using it for templating out the Helm release files, which get deployed via GitOps. And that's where we have version management via Git and GitOps. Because we're using GitOps, you can have different branches for different Kubernetes deployments and different environments on the same Git repository.
Yeah, so that's the overview. Next slide. So now we have the setting up of a Fabric network using Bevel as a demonstration. Or, I mean, this was supposed to be a workshop, so I'm not sure if people have come ready for the workshop or if it is going to be more of a demonstration. Can we do some kind of hand raising? Yeah, hi, Suvajit. Yeah, hello. Hi, I have a quick question. Yes, please. With respect to the previous slide on the architecture: instead of using managed Kubernetes, could self-managed Kubernetes provide more powerful options to implement the solution? Would it be more advantageous to use self-managed Kubernetes instead of managed Kubernetes? Could you shed more light on that? Yeah, so Sownak was actually presenting these slides, so I'll let Sownak reply to that. Yeah, if you go back there, yeah. So I'll answer that. I don't think, from a Bevel point of view or from a DLT point of view, it matters whether you're using a self-managed Kubernetes or a managed Kubernetes. I don't see a positive or a negative; it will depend on your company's policies or your client's policies. If they want to use self-managed Kubernetes and they have people to manage the Kubernetes, then fine; I don't think there is any advantage or disadvantage in using a self-managed Kubernetes. The only difference that you would notice, and that's with any Kubernetes, even if you move from AWS to Google, is the change that you need to make on the Bevel side: setting the correct storage class so that the PVCs, the persistent volumes, get created correctly. Sure, thank you. Yeah, okay, we have another question about alternative tools like Argo CD, et cetera. Yes, why not? I mean, Jenkins and GitHub Actions are not a part of Bevel; they are examples that we provide for someone who wants to automate the whole solution.
The release management and all, those are examples. GitHub Actions we use for ourselves, for Bevel as an open-source project. If someone is using Bevel, they don't have to use GitHub Actions; they can just run it manually. So yes, you can always use Argo CD or anything else, or even GitLab, to automate. From our experience, we have even used AWS DevOps tools and Azure DevOps tools to automate the whole process, because from the Bevel side it's mainly running Ansible commands. I think people are still asking questions here, but I'll answer before moving on. How is Bevel different from Fabric Operator? I don't fully understand the question, because are we talking about Bevel's bevel-operator-fabric or some other Fabric operator? If the question is the former, then bevel-operator-fabric is part of Bevel; it's just that we have not integrated it yet. In general, Suvajit, you can post the blog post about the differences, but in summary: Bevel, the original one, was created for more production-grade solutions, because it is production-worthy and you run it using configuration management and GitOps, which all comes with it, even the HashiCorp Vault; you need HashiCorp Vault to use Bevel, so the security and all is integrated into it. When you're using bevel-operator-fabric, you have to run those parts yourself, and I think that's the second part of this demonstration or workshop, where David will explain what bevel-operator-fabric is. Then I think I'll close the questions with another two short ones. Is external chain code currently there on HLF? Yes, the external chain code example is there now, which was released last sprint, right, Suvajit? It is available on the develop branch; it has not yet been merged and released to main, but it is available on the develop branch.
Documentation for hosting on Google Cloud: the storage configurations are all there, under the storage class, so we don't do separate documentation for Google, Azure, and AWS; it is all generic. So that's all. All right. Sownak, I think we should continue with the demonstration; I'll try to answer on the chat. Yeah, that's fine. As I said, it's better to move on. Just checking, there are no questions there. Better to move on to the Discord channel, but anyway, I'll share my screen. So, can we? Hello, sorry. Is there any plan to integrate FireFly into the integration layer? What do you mean by integration of FireFly? So, can this be integrated with FireFly? I mean, what do you want to do? What is the end goal of that? Okay, it's for the installation, and FireFly provides the integration of chain codes with external applications. All right, yeah, I can hear you. We don't have any plans right now, but if there is enough interest, you can create an issue, and we have a bigger community nowadays, so if people have interest, they can pick up the stories and work on them, right? Okay, so from the demo point of view: for the demo we are not creating a separate network or, sorry, a Kubernetes cluster now. So if you guys are working along and want to do it with me, you'll have to have your own Kubernetes. Is anyone here who is actually ready for the workshop, with Kubernetes installed or a Kubernetes cluster ready? I cannot see if that is happening. Okay, so while people are creating the cluster: Suvajit, can you share the proxy-none network YAML on the Discord channel? Yep, I have shared it. It's with the name network.yaml; it's in the Bevel Workshop channel. Yeah, we can install the vault as well if needed, but I'll start with the description first.
So, as we talked about, I can give you a brief overview of how this works in Bevel. I have Bevel here; I have cloned it, like git clone bevel. It's just that it's named BAF demo, my older folder; it's from an old project, for the demos. So I have cloned it, and I am here. So git remote, sorry, git branch: I am on the fabric branch. This is the latest that I've branched from develop, but if you want to work on main, that is fine; you can do git checkout main and then branch from there. What I have done is git checkout develop and then branched off develop to a new branch called fabric, because this example is for Fabric. So yeah, I'll check it out again and give you an example. The folder structure can be a little confusing, because we have so many platforms; we actually have Substrate now as well, as an example. All the code is under platforms, one folder for each of the different platforms, and shared is common to all of them; this part is used by all the platforms, so we mainly work on the shared code. Then we have a sample network schema, which provides, sorry, a network validation schema, which helps to validate the network YAML so that you enter the values correctly. And then you have all the platforms here, like Besu, Fabric, Indy, Quorum, Corda and Corda Enterprise, and Substrate. We're working on Fabric, so we'll go into the fabric folder now. Fabric has these folders: you have charts, which has all the Helm charts; then configuration contains all the playbooks and roles, because we are using Ansible. Images is mostly empty, but in general, wherever custom Docker images or containers are required, the Dockerfiles will be in this images folder. Releases is where the releases will go, because we're using GitOps. And then scripts: here are some sample scripts for all the crypto generation and so on.
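The branch setup just described can be sketched as follows. The clone command is shown for reference only; so that the sketch runs offline, a throwaway local repository stands in for the real Bevel checkout:

```shell
# The workflow described above (clone shown for reference; here we
# simulate with a local repo so the sketch runs offline):
#   git clone https://github.com/hyperledger/bevel.git && cd bevel
git init -q bevel-demo && cd bevel-demo
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial commit"   # stand-in for the clone
git checkout -q -b develop     # Bevel's development branch
git checkout -q -b fabric      # local working branch for this deployment
git branch --show-current      # prints: fabric
```

The point is simply that "fabric" is a local working-branch name you create yourself; it does not exist in the upstream repository.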
But then, for the external chain code, we have the chain code server and the peer chain code certificate generation scripts here. Basically, any shell scripts that you will need for that particular platform will be in here. I'll explain the network YAML, which is the configuration file we were talking about. If we go to configuration, all the playbooks are here. Operators will be using just the playbooks here when they're specifically working on Hyperledger Fabric, but in general, for the shared, common playbooks, we go to shared, and these playbooks. These are the common playbooks, like checking all the requirements, setting up the environments, or validating the schema. And generally all our documentation refers to this site.yaml playbook, which is kind of like the main playbook. But within fabric you also have additional sub-playbooks: you want to add a new peer, you want to add a specific CLI, you want to deploy just the Fabric console. And, as I said, you have the external chain code. So these are the separate playbooks that we've provided, but we'll mainly be using the main one. Now, playbooks need an input, which is the network file, which we call network.yaml, or the configuration file. For this demo or workshop we're using this one, and I think Suvajit has provided one, a better one, but I'll explain it anyway regardless. All networks in Hyperledger Bevel have similar-looking network.yaml files, and they are all in these configuration samples. You can see there are many different samples for Fabric, because Fabric is the most used platform in Bevel, and we have a lot of how-to-do-this and how-to-do-that kinds of examples here. We've moved on from Kafka, and we're only supporting Raft now. Sownak, apologies for interrupting, but I see a request in the chat.
Is it possible to increase the font size? Oh, okay, yeah, I'll do that; I didn't see that. Is this better? Hope this is better. Yeah, it looks bigger, yes, thanks. All right. And I can minimize this; it's easier to see. So yeah, under samples you have all these examples, as I was telling you, because Fabric supports more operations than any other platform right now, Fabric being the most used platform in Bevel. You have different samples for the others also; there are minimal samples, but generally everyone will have a main network YAML. So let's try the network YAML now. All network files start with network. If you click on this, it's network, and then under that you have the type, which, because we're deploying Fabric, is fabric; for Corda it's corda, for Besu it's besu. Then you give the version. We're still working on, I think, 2.5, which we just started this sprint, but all of this is currently tested on 2.2.2, because that was the previous LTS. Then we have an env section, which is mainly a common section for all the environments. Then we have a Docker section, which is mainly used when you have private Docker images; if you're not using a private Docker registry, you can actually comment out the username and password. Then you have the consensus part, which for Fabric is raft here, because we're using that. Then we have the orderers section, which is again a common section for all the organizations within the Fabric network. So you have orderer, and then it has the orderer name, the organization name, the URI, and so on. In this example I will be using a local setup, with the proxy as none; that's why the organization has a local name. And then the environment type basically just creates a tag for the Flux deployment.
If you have multiple deployments, we generally encourage you to use a different tag for each of the environments. Then there is the retry count: within Ansible, we check whether the network has reached a certain state, so it retries that many times. If you have a slower network, make it a higher number; if your network is fast enough, I think 20 is also fine. We are not using a proxy; that is why we don't need external DNS. If we had external DNS, it would automatically update the external DNS paths, et cetera. Any additional annotations can be provided here. In this example it's an empty annotation, but this is what a valid annotation looks like: any annotation for the service, and you can add additional names and labels, the same for the deployments or the PVCs. The only thing is that I have the v2 here, the old one. The original one is what we generally use for cross-cluster deployments; that's when you use HAProxy. We don't actually have Ambassador for Fabric; we had problems with Ambassador for Fabric, which is why we never used it. So you can use HAProxy, but in this case we are using none, because it will be local: we are deploying in only one cluster, not multiple clusters. All right. So, because we are doing it locally, the URI is local, orderer1, whatever, supplychain, and the port number; for a public setup using HAProxy, you use your full FQDN with, say, 443 or 8443 for HTTPS. Then we have the channel configuration. You can provide multiple channels here; in this example there is only one channel, called allchannel. Then any chain code for that channel you can provide here. Whichever orderers are part of this channel are here, which is basically the organization name of the orderer, and then all the participants of the channel. Then you also have the endorsers for the channel. All these details are there.
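The channel section just walked through might look roughly like this. The names (allchannel, supplychain, manufacturer, carrier) mirror the spoken example and Bevel's supply-chain samples, but the exact key names and nesting are an assumption; check your sample file for the real schema:

```shell
# Hypothetical channels fragment of the network YAML, mirroring the
# spoken walkthrough (one channel, orderer org, participants, endorsers).
# Key names and nesting are illustrative, not a verbatim Bevel sample.
cat > channel-fragment.yaml <<'EOF'
channels:
- channel:
    channel_name: allchannel
    orderers:
    - supplychain            # organization name of the orderer
    participants:
    - manufacturer
    - carrier
    endorsers:
    - manufacturer
    genesis:
      name: OrdererGenesis   # unique genesis name for this channel
EOF
grep -c "channel_name: allchannel" channel-fragment.yaml   # prints 1
```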
We're not going into complete detail for each of them; I'm just pointing out the key things here. Then you have to provide the genesis name, which is a unique name for that particular channel's genesis block. Then comes the section for the organizations. Now, Bevel was originally designed for production use, and originally designed so that only one organization is part of one Kubernetes cluster; for another organization, you would use another Kubernetes cluster. But of course, when we do example deployments, you cannot have multiple Kubernetes clusters for a small example, so we are using a single cluster for all the organizations. In general, organizations are like Fabric members, right? So those are the members. In this example, we have the first organization, which is an orderer organization; that is why the type is orderer. Then all the details of the CA server and the AWS keys, et cetera, we provide. Then, in the k8s section, you have to provide the Kubernetes config file, which you use to connect via kubectl. Then you provide the vault, where all the secrets will be stored. And then we have the gitops section, where all the Git details are. I think most of you have used this; it's already provided in all the examples, and you can update it. Then, after the gitops section, and this is again per organization, each organization has the same keys, you have the services, where you have the CA and then the orderers. So this is for the orderer organization.
Now, if we go to the manufacturer, which is a member or peer organization, of course the CA and these details are common. The only difference you can see is that you have the users. Bevel will generate a user1 automatically, so even if this section is missing, it will generate a user1; but if you want to add more new users, you can add them here. And you can give attributes like revoker and endorser via this attributes key. So you can add more users for your application. Then you have the CA service, because each organization has a Fabric CA. And after that, you have the peers. In this example we have only one peer, but you can add multiple peers by just adding them here. For each peer, you of course have the peer type, anchor or non-anchor. The gossip peer address, in this case, is the same as its own address; but if there were two peers, then of course you would have, say, peer1 here as the gossip peer address, or with multiple peers you can have another peer as the gossip peer address. Then the peer address is its own address. Then this is the certificate, which is basically the peer's public key. Then we have the CLI part, which is enabled or disabled; it will create a CLI. Then, we recently released the Cactus connector, so you can set that to enabled; the main playbook will not deploy the Cactus connector, there is a separate playbook for deploying it. Then all the ports; you can find all the ports used here. The REST server and Express API ports are part of the application deployment, so they are here just as an example. After that, you can pass the chain codes. Now you can pass multiple chain codes; earlier we used to support only one chain code, but now you can pass multiple chain codes as an array, like this.
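A hedged sketch of the peer-organization pieces just walked through: users with attributes, a single anchor peer pointing its gossip address at itself, the CLI toggle, and chaincodes as an array. All names, ports, and key spellings here are illustrative assumptions, not a verbatim Bevel sample:

```shell
# Hypothetical peer-organization fragment (illustrative names/keys only).
cat > peer-fragment.yaml <<'EOF'
organizations:
- organization:
    name: manufacturer
    type: peer
    services:
      users:
      - user:
          identity: user1            # generated automatically if omitted
          attributes:
          - key: "hf.Revoker"        # revoker/endorser-style attributes
            value: "true"
      peers:
      - peer:
          name: peer0
          type: anchor                                   # anchor or non-anchor
          gossippeeraddress: peer0.manufacturer-net:7051 # itself (single peer)
          peerAddress: peer0.manufacturer-net:7051
          cli: disabled
          chaincodes:                # now an array, not a single entry
          - name: supplychain
            version: "1"
EOF
grep -c "type: anchor" peer-fragment.yaml   # prints 1
```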
So in this example we are passing this chain code, and all the details of how to do the chain codes are here. Then, I think the metrics option was added recently, so this will enable the metrics server. It's the same for the final, third organization; just different names, and the certificate parts and so on are different, but otherwise it's generally the same kind of configuration. So that's about the network YAML. We do have a lot of questions, I think. Any specific question, Suvajit, that we need to answer? Yeah, there was a question around whether Bevel can deploy orgs per cluster. I think there's some confusion here, so just to clear it up: you can configure the organizations to be on different clusters. That's what is suggested for a production architecture, where each organization has its own cluster. But, as Sownak was mentioning, for test environments or for quick development, we can have all the organizations deployed on a single cluster. Yeah, and to zoom in on that: how we do that is via this section. Each organization has a k8s section, and of course, if you are using AWS, each organization has a different AWS section. So if you're using AWS keys and values, you can give different AWS details for each organization. And then, under the k8s section, you will have the different region, the different context of the cluster, and the different config file. It will not be the same kube config; or even if it is the same config file, the cluster will be different, because you will have a different context in that same config file. So that is how you can deploy from one source, from this machine, for example, to multiple clusters; but the main prerequisite there is that you should have access to all the clusters from the machine where you are running Bevel. Great, thanks for the clarification.
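The per-organization k8s section described above is what points each organization at its own cluster. A hypothetical two-organization sketch, with illustrative contexts, regions, and kubeconfig paths (the key names approximate Bevel's style and are assumptions):

```shell
# Hypothetical multi-cluster fragment: each organization carries its own
# cluster coordinates, so one run from a single machine can reach several
# clusters. All values (regions, contexts, paths) are illustrative.
cat > multicluster-fragment.yaml <<'EOF'
organizations:
- organization:
    name: supplychain
    k8s:
      region: eu-west-1
      context: cluster-org1                      # context in the kubeconfig
      config_file: /home/ops/.kube/config-org1
- organization:
    name: manufacturer
    k8s:
      region: us-east-1
      context: cluster-org2
      config_file: /home/ops/.kube/config-org2
EOF
grep -c "context:" multicluster-fragment.yaml    # prints 2
```

Even with a single shared kubeconfig, distinct context values per organization would achieve the same separation, as mentioned above.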
So when we say multiple organizations in different clusters, the data distribution will be happening across those organizations, right? With respect to the architecture, correct? Yeah, yeah. And of course, in that case, if you use these kinds of URIs it is not going to work, because these are local cluster URIs and it is on different clusters, right? So you would use this kind of method, where you own a domain, called blockchaincloudpoc.com here, and you have created proper paths or DNS routes so that your orderer1.org1 Ambassador address is pointing correctly. Yeah, yes, of course. When we say another organization, we should have the domain address. Yes, correct. Thank you. Okay, I see a few questions; I don't know why the chat window is so small for me. Okay, I can increase it now. So there was a question about whether we can have one organization with both peers and orderers. Yes, we can, and that is where you have this branch. I mean, that branch is actually not kept updated, because I think it adds unnecessary confusion, but you can use the code from this mixed-organization branch. As you can see, it was last merged on November 9th. So you can use this branch when you are using both peers and orderers from the same organization. Okay, network architecture: I'm not entirely sure what the network architecture would look like, because this is just a generic example. Can Bevel use an existing organization as an organization in the configs, so that it doesn't deploy it? No, we don't do hybrid deployments with existing organizations. You have to deploy the organization using Bevel, because if you have an existing organization, do you have everything in the vault and all that? All the other operations will need the values or the certificates to be there in the vault; for example, if you want to add a new chain code.
It will try to get the user certificates or the admin certificates from that particular vault, so if you don't have those already, it's not going to work. So I don't think putting an existing organization in the config and then running other operations is going to work. We have not tested it; it may work, but you would have to put everything correctly into the vault. On the question about the gRPC ports: yeah, you can modify them. I mean, if I go back to this, that's the whole point of this file. If you want to change all these ports to seven-zero-whatever, or 9592, you can modify that and Bevel will work. Just ensure that you have changed it everywhere, because this is a local example, right? So you will have to modify all the paths correctly for a different port. Yeah, I'll show how the secrets are managed. So generally, secrets are managed via Vault. For all the secrets, you will need the Vault URL and the Vault root token to be passed here. Just another important thing: because you will have a lot of secrets in this file, and you are not going to check in this file, please, whenever you are using or updating it, put it in a folder called build, or put it outside of the bevel folder, in this case my BAF demo folder, so that it doesn't get accidentally checked in. Ideally it should be inside a build folder, which you just create with mkdir build, and put it there. You can also have a separate private repository, for example, if you want to store your network configurations and share them with your team members, but it should never be a public repository. So, the final question before we move on to Vault. Shavujit, in the meantime, do you want to do the Vault deployment? On one cluster, basically. I don't have those ready, Shannak. I think if you see the Hyperledger Fabric demo from the...
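The "keep it in a build folder" advice above can be enforced with a .gitignore entry. A minimal sketch; the file name network-local.yaml is hypothetical, standing in for your secret-laden network configuration:

```shell
# Keep secret-laden network files out of version control:
mkdir -p build
# place your network configuration inside build/ (hypothetical file name)
touch build/network-local.yaml
# ignore the whole build directory, adding the entry only once
grep -qxF 'build/' .gitignore 2>/dev/null || echo 'build/' >> .gitignore
cat .gitignore
```

With this in place, `git status` will never show anything under build/, so the file cannot be committed by accident.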
Yeah, you want me to do the Bevel setup and the Vault setup on our cluster, is it? Yeah. Okay, just give me a minute, I need to set that up. So from the HLF global forum... Okay, yeah, the links are there. In the meantime, I'll answer the questions. Can other Hyperledger tools like Caliper or Explorer be connected and launched by Bevel? I'm not sure what the goal is here, but if you want to, it can be. It is not supported right now. As we keep trying to mention, Bevel is a deployment tool. We have added other features, of course, but the main goal of Bevel is to deploy, to make it simpler for you to get your DLT network up and running, be it Fabric or Besu or Quorum. What is the maximum number of cluster organizations it can support? I would say 10. We have not tested beyond that because it would take a long time, and I don't think it would be suitable for test cases either. So I would say 10 for now, because of course it takes a long time to run. Any other question that I have missed? Oh, there was a question: where is the fabric branch in this repository? There is no fabric branch in the repository; you have to create the branch. Because you want to keep your deployment separate from the develop branch, you just create the branch, like git checkout -b fabric. That's all you have to do. Can you work with an external DNS and connect with a gateway for an end-user application? Yes, if you have a separate way to connect or expose your applications, that's fine. You don't have to make the whole Fabric network a public network, right? If you're running the whole Fabric network in a single Kubernetes cluster, then you can just expose the application, either the API, the REST server, or the front end, for example, via whatever other method you want to expose it through.
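The branch step mentioned above, spelled out as a sketch using a throwaway repository; in practice you would run the checkout inside your fork of bevel:

```shell
# Create a repo, then a 'fabric' branch to keep your deployment
# separate from the develop branch, exactly as described above.
git init -q demo-bevel
git -C demo-bevel -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "base commit"
git -C demo-bevel checkout -q -b fabric
git -C demo-bevel branch --show-current    # prints: fabric
```

From then on, your network configuration and release files live on the fabric branch, and pulling upstream changes into develop does not disturb your deployment.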
But yeah, if you are not deploying anything separately in a different environment than the cluster, there is no need to use an external DNS or HAProxy. Okay, there's a question about how latency between clusters is handled, as latency is one key concern. So again, cluster-to-cluster communication is out of scope for Bevel; the cluster itself is a prerequisite for Bevel. Bevel cannot guarantee whether the clusters are connected or not. If connections take a long time, I think there are some connection parameters in the Fabric REST server that you can modify, but otherwise Bevel is just deploying the DLT network. So in this context, as we increase the organizations or clusters, could this become a performance concern? Because we are all using managed Kubernetes, no? Yeah, if you're using managed Kubernetes, the communication between the managed clusters falls under that provider's purview. Okay. Yeah, Eduardo, your question about external DNS: if you're planning to have various organizations in different clusters, that is right. You will need external DNS and the domain DNS configured, so that, for example, peer1 in cluster one can talk to peer1 in cluster two, from a different organization. Yes, you will need the external DNS. Can we distribute orderers between clusters using Bevel? Going back again to the other branch that we showed you: if you're using the mixed-organization branch, then each organization can have both orderers and peers. In that case, you can distribute the orderers between different clusters using Bevel. Or, if you have a multiple-orderer scenario, Bevel also supports multiple orderers and multiple orderer organizations.
Yeah, going back to the question about the minimum number of orderers: this is an example, which is why we use only one Raft orderer, but ideally, as with all Raft setups, for quorum you need 2N+1 nodes to tolerate N failures. So basically, if you want to tolerate one failure, you need three orderers in Raft; and if you need more, it should ideally be an odd number of orderers. This example is a minimal one; even in our local examples, if I show the Fabric v2 example, which is our main external example, we use three orderers. And, keeping the time in consideration, I'm ready with the Vault demo. Yeah, so let's start that. Okay, I'll share my screen. So, for anyone who doesn't have Vault, you can follow this example that Shavujit is going to show, to deploy Vault on the cluster itself. I hope whoever is following along has a cluster. Let me know if you are able to see my VS Code. I think you need to increase the font size by hitting Ctrl+Plus or Ctrl+Shift+Plus. Does this increase the size? It does for me, but they don't see it yet. Ctrl+Shift+Plus, yeah. Yeah, now it's fine. One more, I think. Right, so if you have the Discord channel open, I'm going to share the link. We'll start with adding the Helm repo; we'll be using the Helm repo for HashiCorp Vault. Let me just put the link here. The command is helm repo add hashicorp, and then the link to the repository itself. Once the repo is added, you can search the repo to list all the Vault releases under it, using helm search repo and then the name you gave that repository. For the purpose of the demo, we'll be using chart version 0.13.0.
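The quorum rule reached for at the start of this answer is the standard Raft one: a cluster of n nodes tolerates floor((n-1)/2) failures, which is the same as saying 2f+1 nodes tolerate f failures. A quick check:

```shell
# For each cluster size n, tolerated failures f = (n - 1) / 2
# (integer division), so 3 orderers survive 1 failure, 5 survive 2.
for n in 1 3 5 7; do
  echo "orderers=$n tolerates=$(( (n - 1) / 2 )) failure(s)"
done
```

This is why the recommendation is always an odd number of orderers: going from 3 to 4 nodes adds cost without raising the number of tolerated failures.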
And once you have added the repo and confirmed that this particular release is there, you need an override file to override the default values of the Helm chart provided by HashiCorp Vault. So you can create a file called override.yaml, which would look similar to what I have here on the screen. To enable the UI for your Vault, you pass a value called ui and enable it. We'll also use the service type LoadBalancer so that it is externally exposed, and then the port on which you want to run the load balancer. Vault also comes by default with some agent injectors, which we don't require, so you can add this particular value, injector enabled, set to false, right? And then deploying this is quite simple: we just run the Helm install command, which is... Can you paste this in the chat? I think it was there before... yeah, it's there, just scroll up a bit. The injector part wasn't in that one, though, so I'm adding it here now. So for anyone who is a bit confused: you can just create a file called override.yaml inside the build directory; in this case it is build/infra, then override.yaml. I create it under the build directory so that it doesn't get checked in, because there's no point in committing it. Yep. So once you're ready with the file, we are ready to install the Helm chart. Before that, you can create a separate namespace for your Vault to run in. You do that simply by running this kubectl command, kubectl create ns (ns stands for namespace) and the name of the namespace; we wanted the namespace name to be vault. So it has now created the namespace. Now we simply run the Helm install command. There's a question about how to get the list of the version binaries, right?
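The override.yaml being described would look roughly like this. The field names are from the HashiCorp Vault Helm chart as best I recall them, so double-check them against the chart's own values.yaml; and note, as the speakers say later, that this demo setup is not production-hardened:

```yaml
# build/infra/override.yaml: demo overrides for hashicorp/vault 0.13.0
ui:
  enabled: true               # turn the Vault web UI on
  serviceType: LoadBalancer   # expose the UI externally via a cloud load balancer
injector:
  enabled: false              # the Vault agent injector is not needed here
```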
So what was that previous command? Shavujit, the Helm search, right? Yeah, helm search repo, and I'll paste that as well. And then finally the install command, which is helm install. The installation name would be vault, and this is the Helm chart that we have added, so that's the mention of that. Then the namespace in which we want the installation: the namespace we created is vault. The version is the version we picked when we did the search. And finally, we add the override file to override the default values of the Vault Helm chart, so we provide the path to our file, which is build/infra in my case, and the name of the file, override.yaml. I'm going to paste this before I run it. Yeah, just for everyone: this is an example Vault deployment; for production use cases, of course, you will use more secure ways of deploying Vault. Sorry, Shavujit. An example of what those production steps would be? So, for production security, you can use the Consul model, where you have multiple Vaults; you can run Vault in high-availability mode; you can run Vault not in the cluster itself but as a separate VM in your private subnets, and then use it from within the Kubernetes cluster. Those are a few ways of deploying a more production-oriented Vault. You can also seal and unseal with different keys, even with cloud secret keys, and so on. Right, so the installation is successful; it is deployed. You can check it using the kubectl get pods command, and you can see that there's a Vault pod running. You can also check the Kubernetes services using the kubectl get svc command; svc stands for service, po stands for pod, and the namespace is vault.
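Putting the namespace and install steps described above together: the release name, namespace, chart version, and file path all follow the demo. The cluster-touching commands are left commented so you run them deliberately against a cluster you control; the block just writes the invocation to a reviewable script:

```shell
# Assemble the install command described above into a script.
HELM_CMD='helm install vault hashicorp/vault --namespace vault --version 0.13.0 --values build/infra/override.yaml'
echo "$HELM_CMD" > install-vault.sh
cat install-vault.sh
# Against a real cluster you would then run:
# kubectl create namespace vault
# sh install-vault.sh
# kubectl get pods -n vault
```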
So you'll see that the vault-ui service is created with service type LoadBalancer, and AWS, which is the underlying cloud provider in my case, has created an external IP to access this Vault, which is running on the port provided here. I'm going to use this URL to access Vault; I have to stop sharing and re-share my browser. So I'll use the external IP that AWS has created for me, plus the port, and this lets me access the Vault UI. Since this is the first time, it will do the initialization steps. You can also use the Vault CLI to do these steps, but if you have a UI it becomes much easier. For the example, I'm just going to use one key share and one key threshold. When your Vault gets sealed, you use these key shares to unseal it. So this gives me the initial root token and also the key that will be used to unseal the Vault when it gets sealed. You should keep these two keys safe, so I'll download them, and let me keep them copied in a notepad for now. Then I continue by unsealing it with the key that was given on the previous screen. After unsealing, I can use the root token to sign in to Vault. So now you can start using Vault. One of the prerequisites for running Bevel is to have an unsealed Vault with a secret engine created. We'll use a KV secret engine to store our secrets for the network. You can create a secret engine using the button here, enable secret engine; it would be of type key-value, and then you can name it. Let's say, for our example, we have used secretsv2, so let me name it that. You can name it anything you want, but then make sure you have the same path in your network.yaml file under the vault section. So you have to take care of that.
So once that's done, you have your secret engine created with the name secretsv2, right? And now that you have your Vault URL and your root token, you can use those, plus the name of the secret engine, in the vault section of each organization in your network.yaml. Once you've done that, I think we are ready to proceed further. So Shavujit, can you share the Vault URL and the secret with me? Because that's the only thing I'll change. Okay, I'll share it on Discord. No, no, just to me on chat. Via Discord, yeah, that's fine. And I'll stop sharing now. Right. So once you have that, you will update the Vault URL in the vault.url section in your network.yaml, and then the root token. And then this path: because Shavujit has created secretsv2, I'm also using secretsv2, but if you have created something different, use that. Then comes the gitops section, which I'll just explain; it will mostly be the same for you. You can use HTTPS, or the Git protocol as SSH; let's use HTTPS. In that case, your git_url will be https://github.com/, then your username, basically your forked bevel repo. Then this branch: actually I'm using fabric, so you can change it to fabric. Then the release directory, which I explained earlier; you can use this as the releases directory. This is platforms/hyperledger-fabric/releases/dev, and you can use any other name here as well. chart_source you can keep the same, no need to change it. git_repo is basically the shorter version of the git_url; I think we could figure out how to make them the same, but for now it's like this, so you can use github.com,
then your GitHub username. And then replace this git access token with your password, or rather, don't use a password, use an access token, so that you can deactivate it after the deployment is complete, or change it later. Email is your Git email, and because you're using HTTPS, the private key doesn't matter; but if you are using SSH, you will provide the private key for the SSH connection, and in that case the URL will also be the SSH form, so it will look a little different. Yeah, so once that is done: I already have a local network.yaml created, so I'll use that, because it has all my AWS access keys, which I'm not going to share here, but I'll use it internally. So once that is done, you go back to your terminal and just run ansible-playbook, then platforms/shared/configuration/site.yaml, and then pass -e as an extra parameter. In my case, I am passing build/network-local. You have to add the at sign, because it should be read as a file, so "@build/network-local.yaml". That's the single command that we use, and it does a lot of things. So while the playbook is running, I think we can move on to David showing how the Fabric operator works. But this is the command that you'll use, and once I press Enter, it will do all the steps: it will check all the prerequisites, it will install anything missing, say the Vault CLI for example, and then it will start going through all the different stages. We'll come back to it later towards the end, but this is the command that you should be using. So there was a question about where I updated the URIs. Here, in the vault section: vault url is the new Vault URI, and root_token is the root token. And if you have used anything other than secretsv2 as your KV secret engine, then you'll update that here.
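Pulling the pieces from this walkthrough together, the per-organization vault and gitops sections would look roughly like this. The key names follow Bevel's sample network.yaml as best I recall, so verify them against your copy; every placeholder value is made up:

```yaml
# Inside each organization block of network.yaml (sketch):
vault:
  url: "http://<vault-external-ip>:<port>"   # the Vault address from the deployment above
  root_token: "<vault-root-token>"
  secret_path: "secretsv2"                   # must match the KV engine name you created
gitops:
  git_protocol: "https"
  git_url: "https://github.com/<your-username>/bevel.git"   # your fork
  branch: "fabric"                                          # the branch you created earlier
  release_dir: "platforms/hyperledger-fabric/releases/dev"
  chart_source: "platforms/hyperledger-fabric/charts"
  git_repo: "github.com/<your-username>/bevel.git"          # shorter form of git_url
  username: "<git-username>"
  password: "<git-access-token>"   # use a revocable access token, not your real password
  email: "<git-email>"
  private_key: ""                  # only needed when using the SSH protocol
```

With this file saved under build/, the driver command is the one shown above: `ansible-playbook platforms/shared/configuration/site.yaml -e "@build/network-local.yaml"`, where the at sign tells Ansible to read the extra vars from a file.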
So, in the vault section; if you just search for vault, that's it. Right, I think it's time to hand over to David. David, do you want me to share my screen for the deck? Hi, David, are you there? We're not hearing you, David, if you're speaking. Sorry, can you hear me now? Yeah. Okay, perfect, I'm at the microphone now. So, Shavujit, you can share the deck. No, what I was saying before is that I will share my screen, because I've already updated the presentation in order to answer some of the questions that were asked earlier about the versions, the changes, et cetera. So I'll share the screen; you can see it. This is basically, more or less, the presentation that we did in January this year, which was a full workshop of two and a half hours. In order to understand what Bevel operator fabric is, we need to understand what a Kubernetes operator is and what its goal is. The goal of a Kubernetes operator is to automate the deployment of an application. If we look into fabric-samples, for example, there is an example of how to deploy on Kubernetes, but if we check the configuration YAML, it is too static, and there is a large amount of YAML involved. So it is hard to run dynamic networks using that approach. This is where the operator automates all of these parts, all of the creation of these resources in Kubernetes: the certificates, the config maps, the deployments. It abstracts you from the actual deployment of the Hyperledger Fabric network. So, first thing: Bevel operator fabric is a declarative way of creating Hyperledger Fabric components. This includes the peer, the orderer, the chain code, and the channel; the channel is more logical, but it's also supported in the operator. Also, in version 1.9 there is the possibility to create identities, which is a way to enroll a user and get the certificate and private key, in order to use it in applications that connect to the chain code.
This Kubernetes operator is, obviously, based on Kubernetes. You can deploy Kubernetes on premises or in the cloud. In all of the projects we have done, we have worked in the cloud, because managing a Kubernetes cluster yourself, unless you're that kind of company, is not really something that brings much value. And it is customizable for specific use cases, such as certificate renewal, which is already supported. There is an alpha feature, automatic certificate renewal, which renews the certificates for your peers and your orderers one month or 15 days before they expire. This is also a feature that was asked about a lot before. And what is supported apart from the Hyperledger Fabric components? I saw a question in the chat earlier about whether external chain code is supported: external chain code is supported in the operator fabric, along with the creation of components, and also channel configuration as code. This is a really useful feature, because one of the hardest tasks in a Hyperledger Fabric network is managing the governance and the configuration of the channel: for example, how many transactions you want in a block, which organizations will belong to the network, who the consenters are. So we have a resource, which we will see on the next screen, called channel configuration, and there are two parts: the founder of the channel, which is the organization that manages and runs the channel, and the followers, which are the organizations that join the channel. Versions 2.3, 2.4, and 2.5 are supported right now. In fact, this demo will be based on the latest version, Hyperledger Fabric 2.5.3, which I think was released two weeks ago or so. I don't know if there are any questions; I'm not reading the chat right now, so please let me know if there is an important question to clarify. Yeah, there are no questions yet. Okay, and if I'm speaking too fast, let me know. Yeah, okay, I have a question. Can you hear me? Yes.
So you said external chain code is supported. And I know that Bevel supports multiple channels. What if we have a new chain code to be deployed and installed? How is that supported? New chain code deployment and installation is supported, but deploying the chain code is not just deploying it to Kubernetes: there needs to be a chain code approval and a chain code commit. So there are operations involved on the Hyperledger Fabric side, but yes, this is supported, and it is something we will see in the demo, because we will deploy two peer organizations, each with two peers, and then an orderer organization with three orderers. Then we will create a channel between these organizations and deploy a chain code. So this is supported, and the chain code is deployed in Kubernetes. We will show how to deploy a chain code, how to approve it, how to install it, and how to commit it; then you can take this example and polish it for your use case. Not sure if that answers the question. What is the repo for the Bevel operator? I think it is this one. Bevel supports a maximum... I'll bring up the chat, I don't know if you can see it. Okay, so I think there is a question; let me go back. Yeah, so when do you use Bevel and when do you use Bevel operator fabric, versus the operations console? That is a good question. So, as I explained, pure Bevel was designed for more production-oriented systems, so we'll use that. At some point, Bevel will also use Bevel operator fabric to deploy. Until that comes, there are two separate deployment methodologies, but we are working on it, so that we run the same Ansible command and internally Bevel will use Bevel operator fabric. And that will happen for all the other platforms as well.
So again, anyone who is interested in leading the way on a Bevel operator Corda or a Bevel operator Quorum, please feel free to get in touch. Basically, at that point, Bevel will use the GitOps operator for production-oriented workloads and the Bevel operator fabric for more, say, POC workloads, or even production. You can choose what you want to use, the GitOps operator or the Bevel operator fabric, and that's the main difference; so it's basically not a great difference. There is a UI console in Bevel operator fabric, and there is also the Fabric Operations Console, provided and managed by the Fabric team, which you can use to create the channels and so on, right? Yeah, there's a similar one. You can use Bevel right now to deploy the Operations Console as well, and also get all the channel assets you need to manage it, add the keys to the wallet and all that. So that's possible via Bevel; you use the same UI, which is the Operations Console. There was a question about Fabric versions: Bevel supports 2.2, yeah. If you want to use 2.3 or 2.4, you can use Bevel operator fabric. But in general, if you're not using any new features of Fabric 2.5 or anything, then if you just update that top section where I put 2.2.2 to, say, 2.4.2, it will work. Yeah, but channel participation is not supported in Bevel, I think. Can you hear me well? Okay, but Bevel doesn't support the channel participation API right now, I think. Yeah, it supports channel management: you can do separate addition of a new channel, and addition of new members, and it does the approval and all that as well. The channel configuration is the same; in your case, it's done as code.
In our case, it will be in the network.yaml as configuration. Yeah, that's the difference. But not with the channel participation API. Not with the API itself, yeah. Okay, right. And about the user interface: the operator fabric has an explorer, tailored for Fabric, but it's read-only. We will show it at the end of the demo, so we can see it later. I don't know if we want to continue. Yeah, please continue. Can we have a copy of the deck? Yes, we will send it afterwards. So, to continue: the Bevel operator fabric resources that are supported. There are two parts, the physical and the logical. The physical: the peer, the orderer, the certificate authority, the chain code, the operator UI, and the Fabric Operations Console, which is a project that was donated by IBM in order to manage the network. And the logical resources: the FabricMainChannel; the FabricFollowerChannel, which is used by an organization to join a channel that has already been created by another organization; and more logical resources added in the latest version, 1.9, the FabricIdentity and the FabricNetworkConfig. The FabricIdentity enrolls the user and also manages certificate renewal, which is something that commonly becomes a problem in a project after one year, when the certificate has expired. With this, you ensure that the identity you will use to run the application is not expired. The FabricNetworkConfig automates the creation of a network configuration for your application, taking into consideration all the peers and all the orderers in your cluster. So these are the logical resources, which we use after the physical resources are set up. And on the next slide, the operator fabric CRDs and resources. We have the kubectl plugin, which we will use from our developer machine, and we have the Kubernetes cluster.
You can use whatever Kubernetes cluster you want. In this demo we're going to use kind, which we will run locally, but you can use AWS, Azure, whatever cloud provider you want. With this kubectl plugin we create the custom resources; the CRDs themselves are created by installing the Helm chart of the HLF operator. These resources are then picked up by the HLF operator, and the HLF operator creates the peers, the orderers, and the CAs, manages the channel configuration for the channel we will create, updates the channel for the peer organizations, including the anchor peers, and also deploys the visualization components: the operator API, the operator UI, and the Fabric Operations Console. So this is the role of the operator: it abstracts the creation of all the entities we see here. And the main difference between Bevel and Bevel operator fabric is that Bevel covers more networks, such as Corda, the ones Shannak mentioned before, and other blockchain networks, while Bevel operator fabric focuses only on Fabric, and it has tons of functionality for running an HLF network. And that's it. So for the demo, I don't know, Shannak, if you want me to continue with it. Yeah, yeah, please continue. For the demo, we will need to know basic cryptography: what a certificate is, what a public key and a private key are; basic knowledge about Kubernetes and Docker, and when I say Docker, I mean container technology, knowing what a container is. In order to run the API, you will need Node.js with TypeScript, and basic knowledge of shell commands. In this case, I will use a Mac M1, so if you have a Mac M1, it will run more than fine. And basic networking concepts, because we will use a local DNS, so we will need basic networking knowledge to be able to troubleshoot problems later.
And the goal of this workshop is to create a Hyperledger Fabric network of two peer organizations. Each peer organization will have two peers and a chain code, and then there will be an orderer organization with three orderers, and the demo channel will be created between these three organizations. The orderer organization will manage the consensus, the peers will manage the endorsement of the transactions, and then the API, the client, will send the transactions to the ordering service. So if there are no questions, I will proceed to the demo; if you want to ask any questions, now is the time. I don't think there are any questions. Okay, that's good. Oh, sorry, there was a question: are FabricMainChannel and FabricFollowerChannel different from a channel in Hyperledger Fabric? No. The FabricMainChannel is closely tied to the governance, and you may want to do it another way for your use case, but the idea is that usually, in real projects, there is one organization that has the knowledge about Hyperledger Fabric, and that organization ends up managing the channel. Managing the channel means deciding who joins, who leaves, what the configuration of the channel is, et cetera. So the FabricMainChannel usually has a list of consenters, which are the ones that manage the consensus, and the FabricFollowerChannel is for the organizations that know another organization manages the channel and just want to be part of it as a peer organization. That is the goal. But in the end, everything is a channel in Hyperledger Fabric, so there is no difference. It's just that one creates the channel and the other joins its peers to the channel. That's the only difference. I don't know if that answers the question. Yes, it seems like a logical definition and some sort of difference.
One is for the peer organization, which is the FabricFollowerChannel, and the FabricMainChannel is used to create the channel. So let's start, now that we have seen all this. You can follow this demo using the HLF operator repository we have shown here; this is the one whose readme we will follow. The first thing we will need, and this is something I already created in order to speed up the demo, is the Kubernetes cluster. So we already have a Kubernetes cluster, which I can show here. I ran a test just before, so we have some FabricFollowerChannel definitions on the left, and there is a FabricMainChannel, which I will delete in order to start from scratch and not have any problems. So now we don't have any more resources, which is good, and we have an empty Kubernetes cluster. This step you can do on your own; you can also provision a cluster in another cloud, such as Amazon AWS, but you will need to be careful, in this case, with the DNS, because of the local DNS architecture, which I think is nice to see before actually going into the demo. Usually we have a domain name, such as hyperledger.org or something, that can be accessed from the outside. In this case, since we're running locally, we're going to use localho.st, which is a DNS name that just resolves to the loopback address, 127.0.0.1, and we'll access the Hyperledger Fabric network there. So if you want to run this on AWS or Google Cloud, you will need to change this DNS, and you need to make sure that the DNS resolves to the Istio service in this case. That would be the change. So with that said, the next step is to add the repository for the Helm chart, which we have done already, and install the HLF operator, version 1.9.0. In this case, I have already installed it.
I'm not going to reinstall it, but if you go through this step, then different messages will appear, the operator will be installed, and the hlf-operator controller manager will be created here. The other dependency that we have is the kubectl plugin, which will be the client that we will use in order to create peers, create orderers, create certificate authorities, and basically interact with Kubernetes in order to create custom resource definitions. But to do that, we need to install Krew. You follow this step, which is just to go here and then, depending on your operating system, execute the command. So if you are on macOS or Linux, you just execute this command, then add this environment variable to the PATH in your .bashrc or .zshrc, restart your shell, and you are ready to go. After that, you can install the kubectl plugin just like this. So this has been installed; we now have kubectl hlf, so we can get the help of the kubectl-hlf plugin, which has all of these commands. We're not going to use all of them, but it may be good for you to revisit this. After installing the kubectl plugin — and note this is a kubectl plugin installed on the developer machine, not in the cluster — we need to install Istio. You can download Istio with this command, which will create a folder istio-1.16.1 in this directory. Then you export the PATH in order to add the Istio binaries, so you are able to install Istio into this cluster. After this we will have istioctl with all of its commands. The one we will use is operator init, which will create the operator. This I have already run, so make sure that you run it. I also created the istio-system namespace. Istio is what you might call Kubernetes middleware: these are general components that need to be installed for the Fabric network to work on.
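The plugin install described above can be sketched as below; the Krew install itself is per-OS, so the snippet assumes you have already followed the instructions at krew.sigs.k8s.io:

```
# After installing Krew, add it to your PATH in ~/.bashrc or ~/.zshrc,
# restart your shell, then install the hlf plugin:
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
kubectl krew install hlf
kubectl hlf --help   # lists the available subcommands
```

The plugin runs on the developer machine; nothing from this step is installed into the cluster.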
You can have Istio also being used by other applications, not just Hyperledger Fabric. So you will need to install this operator, this CRD for Istio, and install the IstioOperator resource with these components: the ingress gateway with one replica, these resources, this service. So this is basically the configuration for the Istio operator. After this is installed — and we can see in the Kubernetes cluster that the IstioOperator CRD and the Istio gateway are installed and healthy already — if we go to the pods, we see it deployed the Istio ingress gateway, which will be the one that serves the requests. After this is installed, we can start to create the peer organizations. Actually, this difference here is left over from the workshop we ran in January, but we can just remove it, because the images are the same now that Fabric supports ARM, so we don't need different images. Then we can just export them here, and this is important too: configuring the internal DNS. This internal DNS is basically so that, if we look at this screen, the CoreDNS that we see here is modified — we do it by modifying its ConfigMap — so that when something inside the cluster asks for localho.st, it doesn't go to localhost, because localhost would be the current pod and the current container. Instead it will go to the Istio ingress gateway; that will be the difference, and that is why we need to configure the internal DNS. If you don't do this step, then nothing will work, basically, because the operator will not be able to contact the Fabric CA in order to enroll the users. So this is really, really important, and this is the ClusterIP, which is an internal one. Okay, we have a question: does Bevel use HAProxy while bevel-operator-fabric uses Istio? Any plans for the operator to support HAProxy or the Kubernetes Gateway API?
In fact, now that you mention this, if we go to the documentation, there was a pull request — I think it was this guy, RockHits — which introduced the Gateway API. So you have here in the documentation how to get started with the Gateway API, and there is a way to use Traefik, and there is a way also to use Istio through the Kubernetes Gateway API. So they are supported. I'm not sure about Bevel, but bevel-operator-fabric has the Gateway API support; it's just not used in this demo because Istio is what's widely used at the moment. Yeah, so adding to that: Bevel supports HAProxy. So when Bevel uses bevel-operator-fabric underneath, then I think we'll have to see how it works with HAProxy. But right now bevel-operator-fabric uses Istio and, yeah. Yeah, it can also use Traefik or any... Yeah, Traefik, that's a recent change, right? Any load balancer that supports the Gateway API for Kubernetes. So for the DNS, we need to configure this. This IP is the one from the ingress gateway, 10.96.24.71, which is a ClusterIP. So what will happen is that any *.localho.st name will be rewritten to host.ingress.internal, and that will resolve to the ClusterIP of the Istio ingress gateway. That is why we need this mapping. So we just configure it — in my case nothing changes because I configured it earlier. So right now, what we need to do is deploy the organizations. The first step here is to deploy a certificate authority for organization one. The second step is to deploy the two peers for organization one. The third and fourth steps are just to do the same for organization two. The fifth step is to deploy the orderer organization, and then we will create the channel. Then organization one will join its peers to the channel, then organization two will do the same. Okay, I'll increase the font size. Yeah, David, better?
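The CoreDNS change described above can be sketched as a patch to the coredns ConfigMap in kube-system. The ClusterIP shown is the one from this demo; substitute the one your Istio ingress gateway actually has (`kubectl get svc istio-ingressgateway -n istio-system`), and treat the surrounding Corefile directives as illustrative of a typical default:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health { lameduck 5s }
        # Rewrite any *.localho.st name to an internal alias...
        rewrite name regex (.*)\.localho\.st host.ingress.internal
        # ...and resolve that alias to the Istio ingress gateway ClusterIP.
        hosts {
          10.96.24.71 host.ingress.internal
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 30
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```

Without this, in-cluster lookups of *.localho.st resolve to 127.0.0.1 (the pod itself) and the operator cannot reach the Fabric CAs.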
Then we will install the chaincode, for which we will need to prepare the connection string for the peer: the metadata file and the connection file. This is for the external chaincode. Then we build the chaincode Docker image. This is optional, because there is one provided out of the box, but if you want to run your own chaincode, then this is needed, and we have two options: one for ARM and the other for regular AMD64. If you build a local image, you need to push it to a container registry that is accessible by the Kubernetes cluster — it doesn't necessarily need to be a public container registry. Then we will deploy the chaincode, here using this command; this is the external chaincode, which will be deployed in the Kubernetes cluster. Then some checks, approval of the chaincode, committing, and then we will do, in this case, two invocations. And then we will launch the explorer in order to see the peers, the orderers, and the channels that have been deployed, as well as some blocks. So this is a quick walk-through of what we will do, and everything is commands, so you could wrap it all up in a shell script. We will start by deploying the certificate authority. There's a new question: can we use the operator to add a new node to an existing channel or Bevel network? Yeah, I mean, you can create the Fabric nodes using bevel-operator-fabric and then join them to the channel, so that's possible. Yeah, so whether you're using Bevel or not, you can still create nodes and then join them, either using Bevel or using bevel-operator-fabric, yeah. Great. So we will start by creating the CA; we will run this command. In order to be faster, I will create the three CAs at the same time, because I think we will be very, very tight on time. So we will create the CA for the second organization using this command. The commands are very, very similar for the three organizations.
While they are being deployed, we will review the command. Basically, we are specifying the CA image, which we exported before, the version, the storage class, the capacity for the Fabric CA's storage, the name, and the enroll ID and enroll password, which you can change. You can also change the host — this is for Istio — and then the port, which we have configured. This port is the external one, so since we have configured the external port as 443, we will stick to that. And this command, if you are running this in a shell script, you want it to wait until all of the Fabric CAs are in the condition Running. Since we created the three at the same time, for all of them the condition is met. To visualize the Kubernetes cluster I usually use Lens, which I highly recommend. If we go to the pods, we see the three CAs being deployed right now. To verify that they are deployed, we can run this command using localho.st — since this is a wildcard domain, we put the CA host there, with the port — and this will return, in this case, a JSON payload with the CA chain. So this is the certificate authority certificate that was generated by the operator. Then we will register a user for the peers that will be deployed in this organization. This just registers it: it takes the name of the certificate authority that will be used, the user and the secret — the user and password that will be created — and the type of the user. This is very important: we're creating a user of type peer. This peer user will not be used to submit transactions from an API; this is really, really important. And then the credentials needed to be able to register this user, which are the ones we specified while creating the Fabric certificate authority, and the MSP ID, which is organization one's MSP ID.
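The CA deployment and verification described above look roughly like this; the flags follow the bevel-operator-fabric README, and the storage class, host names, and credentials are example values you should adjust to your environment (a live cluster is required):

```
# Deploy a CA for organization one.
kubectl hlf ca create \
  --storage-class=standard --capacity=1Gi --name=org1-ca \
  --enroll-id=enroll --enroll-pw=enrollpw \
  --hosts=org1-ca.localho.st --istio-port=443

# Wait until all Fabric CAs reach the condition Running.
kubectl wait --timeout=180s --for=condition=Running \
  fabriccas.hlf.kungfusoftware.es --all

# Verify the CA is reachable through Istio; /cainfo returns the CA chain.
curl -k https://org1-ca.localho.st:443/cainfo
```

The same command is repeated, with different names and hosts, for the second peer organization and the orderer organization.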
Then we will run this command. It says the peer identity is already registered, because I registered it before; if this is your first time, that message will not appear. Then we will create the peers — two peers at the same time — and, to go faster, we will also deploy the two peers for organization two, which is the same command. We try to register the organization two user for the peers; in this case it has also been registered already, okay. To deploy the two peers we run these commands. This deploys both peers, peer0 and peer1. After you have deployed the peers, you just run this command, which waits for all of the peers to be in the condition Running. In the meantime, you can also watch in Lens to visualize the peers being created. If your connectivity is good, this will take one minute, two minutes maximum. It will also depend on the resources you have in your PC; in this case, I'm running on a Mac Studio with 128 GB of RAM and many CPU cores, so it's very, very fast. As you saw, in about twenty seconds we have a peer running. So these peers have already been created. We can verify using curl against one of them at random. It gives this error, which means the peer is responding. If we do this for peer1, it is the same error, but this means there is connectivity — peer0 of org1, same error; peer0 of org2, same error. If we check this against a peer that doesn't exist, then we get this other error, which means the request is going nowhere: there is no route to that peer. And right now, the only Fabric nodes that we still need to deploy, apart from the chaincode, are the orderer nodes. In this case, we have already created the CA; we need to register the orderer. This is the same as we did with the peer, but the type, instead of peer, is orderer.
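The registration and peer creation steps above can be sketched as follows; flags are as in the README, with names and hosts as example values (requires the cluster and CAs from the previous steps):

```
# Register a peer-type identity against the org1 CA.
kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw \
  --type=peer --enroll-id=enroll --enroll-secret=enrollpw --mspid=Org1MSP

# Create the first peer; repeat with --name=org1-peer1 and its own host.
kubectl hlf peer create --statedb=couchdb --storage-class=standard \
  --enroll-id=peer --enroll-pw=peerpw --mspid=Org1MSP \
  --capacity=5Gi --name=org1-peer0 --ca-name=org1-ca.default \
  --hosts=peer0-org1.localho.st --istio-port=443

# Wait for every peer to reach the condition Running.
kubectl wait --timeout=180s --for=condition=Running \
  fabricpeers.hlf.kungfusoftware.es --all
```

Organization two repeats the same pair of commands with Org2MSP, its own CA, and its own host names.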
And we have the enroll ID; everything is the same, only the ID is different. We will create three orderer nodes, which is the closest we can get to a production network — although in a production network I highly recommend running five orderer nodes, because that gives you more room: if two orderer nodes crash or something happens, you still have three orderer nodes left, which is more than enough to run the consensus and commit blocks. With three orderers, if two of them go down, the network will not be usable. So we will create the three orderers, wait until they are created, and then go to Lens to visualize that the orderers have been created. The three commands are doing the same thing. This will take some time, maybe a minute or two, so I'm going to pause. This is running already, everything is green, and in a few seconds this command will return that the orderers are running. If there is any question, in the meantime, feel free to ask it. You will need to do this a lot of times if you are new to Fabric, in order to understand all the pieces and all of the components, before being able to deploy this kind of network in another cloud provider. One of them didn't seem ready — I don't know what failed — but apparently it's fine now. Okay, there is a question: is Bevel being used in any production environment? I'm sure yes, and I'm sure there are members from those organizations on this call. So yeah, Bevel has been used and is being used in production environments. The second question, I guess, is for the operator fabric, right? Is there a plan to integrate with Vault for certificates and other secrets? So that's for you, David. Right now, no, because there has not been any request, and it would increase the complexity of spinning up a Hyperledger Fabric network, which is one of the main concerns of the project.
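The orderer steps follow the same pattern as the peers; this sketch uses README-style flags with example names and hosts (again requiring the live cluster and orderer CA):

```
# Register an orderer-type identity against the orderer CA.
kubectl hlf ca register --name=ord-ca --user=orderer --secret=ordererpw \
  --type=orderer --enroll-id=enroll --enroll-secret=enrollpw --mspid=OrdererMSP

# Create the first orderer node; repeat for ord-node2 and ord-node3.
kubectl hlf ordnode create --storage-class=standard \
  --enroll-id=orderer --enroll-pw=ordererpw --mspid=OrdererMSP \
  --capacity=2Gi --name=ord-node1 --ca-name=ord-ca.default \
  --hosts=orderer0-ord.localho.st --istio-port=443

# Wait for all orderer nodes to reach the condition Running.
kubectl wait --timeout=180s --for=condition=Running \
  fabricorderernodes.hlf.kungfusoftware.es --all
```

With three Raft orderers the network tolerates one failure; with five, it tolerates two, which is why five is recommended in production.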
So the main problem that we're trying to solve is: how do we spin up a Hyperledger Fabric network in a fast and easy way? Certificates are already handled by Kubernetes — every secret is stored in Kubernetes, the same way as when you deploy Postgres and store the Postgres password in a Secret. So, maybe in the future. Yes, yeah, so I'll just summarize on that one. Yeah, so Corey, if you want it to be supported, maybe you can create an issue on the repo. But as David said, the main problem that operator fabric is solving is how fast you can do it, rather than adding more complexity. But if you want that, you can always use Bevel, which has support for Vault. As I was saying earlier, at some point we'll integrate Bevel and bevel-operator-fabric, so Bevel will use bevel-operator-fabric, but you will have the option to use the original Bevel or the operator fabric. The original Bevel gives you a more production-ready setup where all secrets are managed by Vault, whereas if you are using operator fabric, the secrets are managed in Kubernetes. That is another main difference between them. So in Bevel, all the secrets are stored in Vault; even if you migrate your Kubernetes, or even if your Kubernetes shuts down, it's easier to get back to it. With operator fabric, because everything is inside Kubernetes, you'll have to manage the backups properly. All right, so there was another question: I thought we could use an Ansible playbook to instantiate the setup. So yes — if you're using an Ansible playbook, that's the Bevel setup. What we're showing here is how to use operator fabric to set up a Fabric network. Those are the two different ways of doing it. And the reason the naming is a little confusing is that this was the original company that developed this part.
So it's really hard to change the CRDs. If anyone wants, please create a pull request, but it's really hard to change the CRD names. Okay, so is that the CRD name now? No, not the CRD name — I think it's the API version. Yeah, it's the API version. Ah, okay, yeah, that's... Changing this would be very, very hard, because this project is already being run by multiple companies in production, so migrating would be a big risk. So yeah, to give you the explanation: KFS originally created this and then submitted it as a Hyperledger project. That is why the API group is named the way it is — just like Kubernetes has many different APIs. Okay, so right now the status is, if we look at the picture we have: the goal was to create all of the Fabric components, which, except for the chaincode, we have now done. So we have two peers per peer organization and we have three orderers, but there is no channel yet. We need to create the channel, and we need to create identities in order to interact with the channel. And this is what we're going to do right now. If we see it in Lens, we have here all of the components, one pod per component: four peers, these are the four peers; three CAs, these are the three CAs; and at the top we have the three orderer nodes. So this is the status. Now we're going to create the channel, and for the FabricMainChannel and the FabricFollowerChannel we need to create identities. First, the identity for the orderer MSP. In this case, there is a small difference in the identity created: instead of enrolling against the CA, which is the one used to submit transactions, we enroll against the TLS CA, because with the channel participation API — which is the only way we can create channels in current Hyperledger Fabric — we need to authenticate using TLS certificates.
This is the only difference; we can go more in depth on it after this. So we will register the user, the admin, and we will create the identity, and we will do this for each organization. Okay, it already exists. One moment — because I ran this before, I need to delete the previous identities. Yeah, I need to delete the previous identities, so this will work right now. So first for the orderer MSP — yeah, this is already registered because I ran this twice — then for the first organization and then for the second organization, okay. This will create three identities, all of them Running, and what the operator has done right now is create a Secret. We can see here that there are three Secrets from 20 seconds ago. Each of these Secrets has a cert.pem, a key.pem with the private key, a root CA certificate, and a user.yaml, which will be used by the operator in order to perform the operations. And we have the same for organization one and organization two as well. After this, we can check the identities from the command line to see if the status is Running, which is the case, and we can proceed to create the main channel. I highly recommend you go through each of the properties, because we don't have the time to go into that much depth here. If you want to see it in depth, there is the workshop from January, which goes much deeper into Fabric and explains the commands line by line, so feel free to watch that. But basically, we will get the peer organizations' sign certificate — the root certificate for signing — and the root certificate for the nodes, which is the TLS certificate. We will also get the certificates for the specific orderers; this is for the consenters. So we will export these variables and then kubectl apply this YAML, where the type of the resource is FabricMainChannel and the name of the channel is demo.
These are the admin orderer organizations — basically, the channel will be created so that only the signatures from this organization are needed in order to change the configuration or add another member to the channel. Then the channel config, where you can configure the policies and the ACLs — this is not usually needed, but you can look into modifying it — also the capabilities, and the orderer configuration, such as the batch size. In this case the max message count is 120, which means you can get up to 120 transactions in a block, and the batch timeout, which is how often a block is created, et cetera. Then the peer organizations that belong to the channel, and the identities used to perform the operations of creating and managing the channel. Then the orderer organizations, with the orderers, in order to be able to join them to the channel. And then the orderers that act as consenters — this is why we need the TLS certificate for the consenters. If we had five, we could just create two more orderers and add them here. So let's create this. If there is any specific question, go to the repository and raise it in the chat, and we can answer it later. Basically, we create the CRD, and we can check whether the channel has been created using this command. This is Running, which means the channel has been created and the orderers have been joined to it — but no peers are part of the channel yet. And we can see here, created block 5, so this means that this is working. What we will do after this is join organization one to the channel. Similarly, we will need one orderer TLS certificate, to trust the orderer when fetching the genesis block to join the peers to the channel. We will create a FabricFollowerChannel for organization one. These are the anchor peers, and this is the identity that we will use.
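An abridged sketch of the FabricMainChannel resource being described — the real resource in the repository README is much longer, carrying the organizations' certificates and full channel policies, so the empty fields below only indicate where that material goes:

```yaml
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricMainChannel
metadata:
  name: demo
spec:
  name: demo
  adminOrdererOrganizations:     # only these signatures can change the channel
    - mspID: OrdererMSP
  adminPeerOrganizations:
    - mspID: Org1MSP
  channelConfig:                 # policies, ACLs, capabilities, batch size/timeout
    capabilities: ["V2_0"]
  peerOrganizations: []          # Org1MSP / Org2MSP with their root certificates
  ordererOrganizations: []       # orderer org and the orderer nodes to join
  orderers: []                   # consenters: host, port, and TLS certificate each
  identities: {}                 # admin identities used to sign channel operations
```

Adding two more consenter entries under `orderers` is how you would grow from three to five orderer nodes.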
If you remember, the identity that we created using FabricIdentity has this name, and this is the user.yaml that, as we said, the operator needs in order to join the peers to the channel. So we will create this, and we will do the same for organization two. These YAMLs are something you can keep in Argo CD or in any GitOps system that you want to use; it doesn't necessarily need to be kubectl. So now we have created the FabricFollowerChannel for organization two, and we can see that both of them are Running. To verify that, we go to Lens and look at the logs of one peer at random, and we see that there are blocks here belonging to the demo channel. So this looks good. So right now we need to prepare the connection string for the peer, and since we already have the identities, we can create a FabricNetworkConfig. The operator uses this spec to generate a valid network config to be used by the kubectl-hlf plugin or by any application that you want to develop — for example, in Node.js, Go, Java, et cetera. So we will create this FabricNetworkConfig. We can see some properties here, such as the channels to be added to the network config; whether it is internal or not — this is for the endpoints, and in this case it is not internal; the namespaces we want to filter — in this case, we want all namespaces to be searched for peers and orderers, since we only have one network; the organizations we want to be added; and the secret name, which is the name of the Secret the operator will create. Right after creating this, we can see the network config: we go to Lens and open FabricNetworkConfig.
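The follower channel for one peer organization can be sketched like this; names, ports, and the secret reference are illustrative, and the orderer TLS certificate PEM is elided:

```yaml
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricFollowerChannel
metadata:
  name: demo-org1msp
spec:
  name: demo
  mspId: Org1MSP
  anchorPeers:                    # advertised to other orgs for gossip
    - host: org1-peer0.default
      port: 7051
  peersToJoin:                    # peers this org joins to the channel
    - name: org1-peer0
      namespace: default
    - name: org1-peer1
      namespace: default
  hlfIdentity:
    secretName: org1-admin        # the FabricIdentity Secret created earlier
    secretKey: user.yaml
  orderers:
    - url: grpcs://ord-node1.default:7050
      certificate: |
        # orderer TLS certificate PEM goes here
```

Organization two applies the same resource with its own MSP ID, peers, and identity; being plain YAML, both fit naturally into Argo CD or any other GitOps flow.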
This is Running, and then we can go to the Secret — okay, let me delete this one, because it was something from before; I don't know if there would be any problem running this in a cluster that was previously used for a network. But this is Running, and the Secret will be here. So this is the network config that we will use. I can copy it and paste it wherever I want, and I can also use this command, which is here: kubectl get secret, with a jsonpath that extracts the config.yaml, and we will put it into the resources folder as the network YAML. Then we can open the network YAML and see the MSPs with the users that were added — certificate, private key — and the peers of the organizations. This is a network config that you can use in your application in order to connect to the channel and execute transactions. So right now the status is: we have created the peers, the orderers, and the CAs; we have created the channel; we have joined the peers to the channel; and we have configured anchor peers for each organization. What we want to do now is deploy the chaincode in the Kubernetes cluster, install the chaincode on each of the peers, approve the chaincode for each of the organizations, and commit the chaincode. When all of these steps are finished, we will test the chaincode: submit a transaction and query the network, to interact with the chaincode. This part is highly technical, but this step is to create the metadata file. This is for the external builder type; in this case we will use chaincode-as-a-service, and the label can be whatever you want. We will execute this; it will create the metadata.json and prepare the configuration files. Then we create a connection.json: this chaincode name and this address are what the peers will use in order to connect to the chaincode, and we will not use TLS in this case.
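The FabricNetworkConfig being described can be sketched as below; the resource and secret names are example values:

```yaml
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricNetworkConfig
metadata:
  name: nc
spec:
  channels:
    - demo            # channels to include in the generated network config
  internal: false     # use the external (Istio) endpoints
  namespaces: []      # empty = search all namespaces for peers and orderers
  organizations: []   # empty = include all organizations
  identities: []      # identities to embed as users
  secretName: nc-networkconfig   # Secret the operator writes the config into
```

The operator keeps the named Secret refreshed as identities rotate; extracting it with a jsonpath on its config.yaml key and base64-decoding it yields the network YAML used by the kubectl-hlf plugin and client SDKs.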
And let's execute this. Based on the connection.json, we create a code.tar.gz, and the chaincode package then contains the metadata.json plus the code.tar.gz, which in turn contains the connection.json. This is the structure that Fabric needs in order to install the chaincode. Then we can calculate the package ID using a helper function from the kubectl-hlf plugin, and this will be the package ID. After this we can install the chaincode on each of the peers. We need to execute the install four times, once for each peer, and we can run this by copying and pasting everything. If there is no error, it means everything went well. There is a question that Eduardo Vasquez is asking: in the previous workshop, you created the network config file a different way, with the CLI — which way is better? I recommend this CRD, which was added in 1.9, because you can automate the creation of the network config with the operator, whereas previously you needed to have more tools locally. This is how we're managing the network config in the projects we run right now, and it has the improvement that it is refreshed every minute or two. As the identities are also refreshed, this ensures that you have a network config with valid identities, so the certificates won't expire — it also benefits from the automatic certificate renewal. Well, after answering this question: we have installed the chaincode on all of the peers. Then we need to build the image. In this case, since we're running short of time, the image is already built. These instructions are for the case where you want to build your own chaincode, deploy it, and run your own use case — so this is just for reference. This is what I had to run in order to push this image.
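How the package and its package ID are derived can be shown with a small, locally runnable sketch. The file contents and the `asset` label are assumptions for illustration; in the demo the kubectl-hlf plugin produces the package and computes the ID for you:

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# metadata.json tells the peer this is an external (chaincode-as-a-service) package.
printf '{"type":"ccaas","label":"asset"}' > metadata.json
# connection.json tells the peers where to reach the chaincode service (no TLS here).
printf '{"address":"asset-chaincode:7052","dial_timeout":"10s","tls_required":false}' > connection.json

# Fabric package layout: code.tar.gz (with connection.json) inside the package tar.
tar czf code.tar.gz connection.json
tar czf chaincode.tgz metadata.json code.tar.gz

# Package ID = label + ":" + sha256 of the package bytes.
PACKAGE_ID="asset:$(sha256sum chaincode.tgz | awk '{print $1}')"
echo "$PACKAGE_ID"
```

The resulting `label:hash` string is what the approve, commit, and chaincode-deployment steps all refer to.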
But this is already on Docker Hub, so there is no need to run this docker build, and no need to run this docker push command either. Now we need to deploy this chaincode in the Kubernetes cluster. The reason we had to compute the package ID is that the chaincode needs to run with the package ID for the organization. So we deploy it with the package ID, the replicas, and the name, which will be the chaincode name — asset in this case. The namespace will be default. The image is the one you would need to change if you want to deploy a custom chaincode. So we can go back to Lens, to the pods, and we see that the asset chaincode is already running. Then we can see, on one peer, the chaincodes that are installed; you can run this against any peer, org1-peer0 or org1-peer1, so feel free to adjust these scripts as you need. Then we need to approve the chaincode for organization one. Since we have two organizations and a majority of organizations needs to approve, we need to approve for both organizations; if we had three organizations, at least two would need to approve before committing. So we have approved with the first organization; let's approve with the second organization — and there is the transaction for the approval. Then we can commit. The endorsement policy covers Org1MSP and Org2MSP, and then we can just commit. At this point the chaincode is committed and we can interact with it. There are two main functions: chaincode invoke and chaincode query. An invocation will store a block in the Hyperledger Fabric network, while a query only goes to the chaincode and returns information without leaving any trace on the ledger. So when you want to store any information, you need to use invoke. In this case, we're initializing the ledger, and since we want this data to be persisted, we need to use invoke. So let's do the invoke.
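The deploy/approve/commit/invoke sequence can be sketched as below; `$PACKAGE_ID` is the value computed earlier, and the endorsement policy string, user names, and config path are example values (the transcript only says both MSPs are covered), so treat them as assumptions against a live network:

```
# Deploy the external chaincode service in the cluster.
kubectl hlf externalchaincode sync --image=kfsoftware/chaincode-external:latest \
  --name=asset --namespace=default --package-id="$PACKAGE_ID" \
  --tls-required=false --replicas=1

# Approve for org1; repeat with an org2 peer and identity.
kubectl hlf chaincode approveformyorg --config=resources/network.yaml \
  --user=admin --peer=org1-peer0.default \
  --package-id="$PACKAGE_ID" --version=1.0 --sequence=1 --name=asset \
  --policy="OR('Org1MSP.member','Org2MSP.member')" --channel=demo

# Commit once a majority has approved.
kubectl hlf chaincode commit --config=resources/network.yaml \
  --user=admin --mspid=Org1MSP \
  --version=1.0 --sequence=1 --name=asset \
  --policy="OR('Org1MSP.member','Org2MSP.member')" --channel=demo

# Invoke writes to the ledger; query reads without leaving a trace.
kubectl hlf chaincode invoke --config=resources/network.yaml \
  --user=admin --peer=org1-peer0.default \
  --chaincode=asset --channel=demo --fcn=initLedger -a '[]'
kubectl hlf chaincode query --config=resources/network.yaml \
  --user=admin --peer=org1-peer0.default \
  --chaincode=asset --channel=demo --fcn=GetAllAssets -a '[]'
```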
Yeah, this is the transaction ID, and then we can get all of the assets that were created — these are the assets in this case. And in order to see the network, we can deploy the explorer in the Kubernetes cluster. The API host will be this one, operator-api.localho.st. This will be the network config that is used, and this will be the visualization mode for the explorer: the explorer will be rendered based on the policies that organization one has. The HLF secret key is config.yaml because, if we go to the network config Secret in the Kubernetes cluster, we find that the key is config.yaml — you can create your own Secret with your own network config and adjust it as you want. And the HLF user from this network config, for the Org1MSP organization, is the org1 admin default user. So we can just export these variables and create the operator API. The setup is like this in order to reduce the deployment components: before, the operator UI and the operator API were both needed, but now there is only one component, the operator API, which has the website embedded in the image, so there is no need to deploy the UI separately. So — okay, one moment. There is a problem with the pod labels that are required by the HLF plugin, so we will make a release in order to fix this. So right now the operator API has been created. Let's go to the pods, and we can see that the operator API is running. We just need to get this host, go to the browser, and open operator-api.localho.st over HTTP, because we don't have any certificate. This will load the operator UI, where we will be able to see the peers, the orderers, the certificate authorities — everything that is running. There is basic information if you go to the detail of a peer, for example the TLS and sign certificates, expiring in 11 months.
If you hover over it, you will be able to see the exact time — I don't know if you can see it on Zoom, but in this case it's the 19th of June of 2024. And I think the best screen, the one that gives you the status of the network, is this channels screen, which has the demo channel, the one we have created. Here we can see the channel demo with the height, with the peer organizations, with the orderer organization, and with the channel peers; we can also see the height of each of the peers, which is really useful in order to spot a peer that doesn't have connectivity or has problems catching up with the rest of the peers. We can see that each of the organizations has two peers and one anchor peer, and the orderer MSP has three orderer endpoints. We can go to the detail of organization one and see the sign certificates, which are valid for 10 years, and the TLS certificates — these are the root certificates — and also the anchor peers and the MSP ID. Apart from that, we can see the blocks, where we have multiple types of transactions. The config ones are from when we created the channel, and there is no transaction data in them; the regular transactions are for chaincode operations. If we go to the latest one, we see this block, block number 11 — we need to understand that the blocks start at zero, so even though we have a height of 12, the latest block is 11, because counting starts at zero. This is the hash of the block, this is the date the block was created, and these are all of the keys and values that were written by this transaction. We need to understand that this is a block, so there can be multiple transactions in it. So we have created assets two, three, four, five, six. Let's go back to Visual Studio Code and run the get-all-assets query again — let's copy and paste this in Visual Studio Code.
Then we will see up to asset six. And if I run the init ledger again, the height increases to 13, and I will see another transaction, 10 seconds ago, with the same keys being added. So this is a great way to see the status and the operations that have lately been recorded in blocks. Okay, David, I think we have another half an hour, so we can summarize, but there is a question on the chat if you look at it. Yeah, I mean, right now, feel free to go through the README. This is the end of the tutorial and the demo, so there is nothing more; we can go to the summary and answer any questions. Yeah, I think there was a question that Jeffsen asked about an issue with the fabric follower channel plugin: they had to manually join and define an anchor peer, and then it seems to work. Well, without much information, it's hard to say more. Yeah, I think the summary on that one is that it would be better to test it on environments other than kind. And secondly, there was a question about whether there is support for migrating data from an existing fabric network, for example one running on Docker. That was the question. Migration is a hard topic, to be honest. Yeah, I would agree. From the migration point of view, someone has to write code to read those CA values and create the secrets, right? That's the basic answer. What you want to do is recreate the physical components in the new cluster; or if you are running Docker, create the Kubernetes cluster and then create the physical components there. The peers are the easiest ones: you will need to join the peers to the existing channels in order for them to catch up. This will be the first part, but the hardest part depends on how big your network is. If you have two million blocks, then the peers catching up will take a lot of time. So this is the hardest part.
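The first migration step described above, recreating existing crypto material in the new cluster, might be scripted along these lines. The org names and file paths are hypothetical, and the loop only prints the `kubectl` commands so the plan can be reviewed before anything is run against a cluster:

```shell
# Sketch of the first migration step: turning each organization's existing CA
# material into Kubernetes secrets in the new cluster. Org names and file
# paths are hypothetical; adapt them to your old network's layout.
ORGS="org1 org2"
for ORG in ${ORGS}; do
  # Print the commands instead of executing them, so they can be reviewed first.
  echo "kubectl create secret generic ${ORG}-ca-cert --from-file=ca.crt=./crypto/${ORG}/ca.crt"
  echo "kubectl create secret generic ${ORG}-ca-key --from-file=ca.key=./crypto/${ORG}/ca.key"
done
```

Once the crypto material is in place, the peers are joined to the existing channels and left to catch up, which, as noted above, is the slow part on a long chain.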
Yeah, and I think there was a question about this README on the YouTube channel as well. N.O. Dominic is asking the same thing: do you have documentation on how to set up HLF from start to finish? Is it this README in the repo that you provided? Yes, the README is in the repo; I can paste it again. What we did is just run through the README, and you can do it on your own. Yeah. So the next question was: can we create fabric entities through a UI, like CAs and peers? The answer to that is no. You can do some operations via the Fabric Operations Console, but it doesn't create entities; this UI is read-only, right, David? There is a way, but it was tested a while ago and I don't think it has been integrated. There are some options here, but I doubt it will work, to be honest. Yeah, exactly. Actually, I had a look at it and I saw an option there to create, that's why I asked. Yeah. Okay. In theory, but it will have lots of problems, because there is a lot of configuration involved. So I think it will be easier for you, if you are a developer, to just run and configure the channels instead of going through the UI. Maybe you want to build a UI on your own on top of this. Actually, the people we are building for wanted a UI, so we proposed this, but they wanted it to work from the UI too; I was exploring this, that's why. Yeah. So in summary, you may try to make it work, but we have not tested it; it is not 100% supported yet. How does the UI get all the network data? There is an API, and it can be secured using OpenID Connect. That is an option; for the purpose of this demo we haven't used it, but for securing this data there is an option to integrate with OpenID providers such as Keycloak, Auth0, Amazon Cognito, whatever you want.
And is it foreseen to have a console to execute those actions? Well, this is for the user, I think. Yeah, I think it's a common question, and the answer to that is no. Bevel is mainly aimed at operators and at automation. As soon as you keep adding a console and UI to it, you may add more complexity to the automation tools. So the short answer is no. We have supported the Fabric Operations Console to some extent because Fabric already did it; if the Fabric Operations Console adds those capabilities of creating a peer, et cetera, then it will be available. But from a Bevel point of view, and even from an operator point of view, and I'm just putting words in David's mouth here, I don't think the focus is on a console or a UI. As you saw, all of David's commands were from the back end, from a bash shell, and my commands also used a bash shell, which is basically what operators would use, or like to use. It is also much easier to automate them, because you can just write a script which will run at whatever time, rather than sitting and teaching someone how to browse a console. Yeah, okay. So I'll get the screen share back and open the chat separately. Right, so here you can see that my deployment has completed; I'll show Lens as well. I did not add any chaincode or anything, but just to give an overview of this... Shannak, sorry to interrupt, I don't think we are able to see your screen. David, you might have to stop sharing. Oh, is it? Okay, right now I stopped. I think you can see it now. Oh yeah, now you can see. Yeah, perfect. Okay. Right. Yeah, so if we start from this point, from the vault: this was the Vault that Shubhajit deployed, and it was running. After that, these are the components of the Flux controller. Flux, again, is an operator, as you heard about operators; Flux installs a lot of different operators.
So the same thing has happened here: we have different operators which are running, and then gradually the Bevel systems start running. We first have the Vault Kubernetes job. The main purpose of this job is to create the connection between Kubernetes and Vault, so that you can talk from Kubernetes to Vault and vice versa, because Vault is supposed to be secure and not everyone should have access to it, right? After that, you have the CA certificates. Because we are deploying the CAs, we create a job which will create the CA certificates, and that job then stores the certificates into the Vault. Then the CA servers are running; these are the servers, and as you see, they're green, so that means they're running. And then we have the CA tools. Basically this is a pod we are providing which does all the crypto generation using all the commands that you would use: all the certificates, the user registration, similar to what David showed with the kubectl-hlf commands. The only difference in this case is that once the CA tool generates the certificates, they get stored into Vault, so if you delete the pods, that's also fine. At this point I just added another AWS node, but yeah. Then after that, the peers have started running: the manufacturer peer, the supplychain orderer, and the carrier peer. Once the peers started running, we have the channel creation. As David said, we have the main organization which creates the channel; in this scenario CarrierNet created the channel, and then they both joined the channel through these jobs. So these are jobs which run. We also had the peer CLI for manufacturer set to enabled, so the peer CLI was installed. And then we have the anchor peer setup, which basically means the anchor peer was added.
So if you look at the logs from the orderer: here I can see that allchannel was added. This error is because the peers had not joined yet, but you can see that allchannel was added. Then on peer zero, this is for carrier: after the anchor peers were added, the membership view on allchannel changed, and it was able to know that there is another peer, from ManufacturerNet, on the channel. The log on manufacturer is similar, and manufacturer can see the other peer, which is peer zero. So that's the summary. Let me see if there's any other question here... no. Going back to the other things you can do via Bevel with fabric: you can upgrade the fabric version, upgrading a running fabric network from 1.4.x to 2.2.x. You can add a new organization. You can add a new orderer organization, if you want to add a separate Kubernetes cluster with the orderer organization. You can create a new channel. You can remove an organization, though removing an organization of course means you should have agreement from all the other organizations; I think it's a majority vote kind of thing. Then you can add a new peer to an existing organization, add another orderer to an existing orderer organization, or just add the CLI. You can install chaincode, for both the 1.4 and 2.2 versions, and upgrade a chaincode. As I said, you can deploy the Fabric Operations Console. You can refresh certificates separately with one playbook. And you can deploy the Fabric Cactus connector, though the Cactus connector itself has a defect, so we have not been able to test it successfully, but you can still deploy a version of the Cactus connector. You can also deploy external chaincode using Bevel. Right. So, questions: what if we had set up a network with Bevel and just installed...?
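All of the operations listed above are driven by Ansible playbooks that consume the network.yaml. As a rough sketch of what an invocation looks like (the playbook path and build location here are illustrative; the exact layout is in the Bevel documentation for your release):

```shell
# Illustrative invocation only; check the Bevel docs for the exact playbook
# paths in your release.
NETWORK_YAML="./build/network.yaml"                     # the single configuration file
PLAYBOOK="platforms/shared/configuration/site.yaml"     # assumed entry-point playbook
echo "ansible-playbook ${PLAYBOOK} -e @${NETWORK_YAML}" # print the command rather than run it
```

Because every operation is a playbook plus a configuration file, it is straightforward to script and schedule, which is the automation focus mentioned earlier.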
Yeah, on this one, I think it will work. David, do you have a different opinion on Eno's question? If there are other questions, like the ones from Eduardo, let me know. No, no, the previous question. What if we had set up a network with Bevel and just installed the Bevel operator fabric user interface and explorer? You will not be able to see the peers and orderers, the channel, the blocks, or the participants of the network, because this part relies on the network config. With Bevel, I think you would be able to create a network config and then just plug it into the explorer in order to see the channel data: the blocks, the transactions and everything. So it's possible, but it's not natural, you know. If you want to use the Bevel operator fabric UI, you most likely want to set up the network with Bevel operator fabric as well. Yeah, right. So, going to the second question, about the CA server: yes, the Fabric CA server is not supposed to be used for production workloads. We have not tried ourselves with a generic CA server, such as one provided by AWS, or deploying your own CA server. Basically, the idea is that you should deploy your own CA server, and it should have the MSP kind of configuration, so that it can provide memberships, that is, create users and applications. I think that's the way to do it when you are using it for production, but we are not aware of more details on this. So, if I open Vault: once the Vault is deployed, or once everything is completed, under the secretsv2 engine, which was empty when it was created, you now have all the secrets for the orderer organizations and supplychain. For example, this is the genesis block, and under this, these are the certificates: the CA server's public certificate and key. And then you have the users, the MSP details, and all the admin certificates; all of these are here.
So if you want to copy or download these for an application, especially the admin user, you can use the Vault CLI or the Vault API; Vault provides API access as well to download these certificates into your application. Yeah, so on that question, for a REST server you can use this too: I'm sure there are packages where you can use the Vault server and pass the details of this Vault and the token as some kind of secret, so that it can download these using the APIs. And that's what Bevel itself does anyway. For all these pods which are running, if I do this, this is the certificates-init part. As you can see, it runs and gets these using the APIs; it calls the Vault APIs to get the TLS certificates. Do we have the option to update the certificates? Yeah, I think David already showed how to do it with Bevel operator fabric, and in Bevel it's in the operations here: refresh certificates in Hyperledger Fabric. If you click on this, you can see it's there; there's a separate playbook which will refresh the certificates. And going back to Eno's question, I think David answered that you can do that, but as we already discussed, the UI is not 100% and we don't test it, so it may or may not work. We are more focused on providing and working on the CLI versions of both Bevel and the operator fabric. I pasted the two links which reference Bevel operator fabric. These two links are for when you're using Bevel operator fabric; this is how you renew certificates or even set up auto-renewals. The idea of Bevel operator fabric is to set up the network and to run it on autopilot, more or less, so that the maintenance is minimal. On fabric 2.5: yeah, I think it will be supported; we have started a spike. If anyone wants to help here, they can.
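Concretely, Vault's KV version 2 engine exposes reads over HTTP at `/v1/<mount>/data/<path>` (note the extra `data` segment, which the CLI hides). The mount and secret path below are illustrative placeholders, not the exact layout Bevel writes:

```shell
# KV v2 read endpoint: /v1/<mount>/data/<secret-path>. Mount and path here
# are illustrative; use the ones you see in your Vault UI.
VAULT_ADDR="http://127.0.0.1:8200"
MOUNT="secretsv2"
SECRET_PATH="supplychain/ca"
URL="${VAULT_ADDR}/v1/${MOUNT}/data/${SECRET_PATH}"
echo "${URL}"
# To actually fetch (requires a token with read rights on this path):
#   curl -s -H "X-Vault-Token: ${VAULT_TOKEN}" "${URL}"
# Or with the CLI, which adds the data/ segment for you:
#   vault kv get "${MOUNT}/${SECRET_PATH}"
```

An application can be given the Vault address and a scoped token as its own secret and pull certificates this way, which is essentially what Bevel's certificates-init containers do.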
But we just started a spike where we want to explore what the differences between fabric 2.2 and 2.5 are and how they will impact Bevel, and then we'll create the stories for those differences. If there are no major differences, it will be available sooner rather than later. Okay, that's all then. Any other questions? We have about 10 to 13 minutes. Going back to the fabric version support for Bevel: as I keep saying, fabric is generally backwards compatible, so if you're not using specific features of fabric 2.4 and you just update from 2.2.x to 2.4.x (using real version numbers, not the .x placeholder), it should work. Yeah, the recording is already there on YouTube, so you don't have to wait for it; it will be published as soon as we finish, on the YouTube channel. Yeah, it's the same link as the live stream. Is it currently possible to create an infrastructure with three orderers and Raft consensus? Yes. "No Raft leader", though, I'm not exactly sure what you mean by that. You can definitely create three orderers with Raft consensus; I think David showed how to create that, and in Bevel you just add multiple orderers in the network.yaml, under the orderer organization. If you're seeing no Raft leader, most likely the orderers cannot communicate with each other, so they're not able to elect a leader, or there is some problem in the system channel. There's a question about the Explorer CRD. David, is that supported? This was meant to be supported.
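Going back to the Raft orderers question: in Bevel the extra orderers are declared under the orderer organization in network.yaml, roughly like the fragment below. This is a sketch only; the field names approximate Bevel's schema, so check the Bevel configuration docs for the exact structure in your release.

```yaml
# Illustrative network.yaml fragment: three Raft orderers under one
# orderer organization. Field names approximate Bevel's schema.
organizations:
  - organization:
      name: supplychain
      type: orderer
      services:
        orderers:
          - orderer:
              name: orderer1
              type: orderer
              consensus: raft
          - orderer:
              name: orderer2
              type: orderer
              consensus: raft
          - orderer:
              name: orderer3
              type: orderer
              consensus: raft
```

With three Raft nodes, a leader can only be elected if the orderer endpoints can reach each other, which is why connectivity problems show up as "no Raft leader".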
The reason why we have the custom explorer for fabric is because the Explorer project, and this is my opinion, is great in that it targets multiple blockchain networks, but it's not tailored for fabric. For example, you don't have the option to see the private data in a transaction in the Explorer, and many users rely on private data; this is something that is not in the block, it's in each of the peers. And the operator explorer supports the visualization of the channel as well as the configuration. In the past we missed a lot of features in the fabric explorer; then it went deprecated, and right now I think it's in Hyperledger Labs and it has been revived. So development is taking place, but the focus of Hyperledger Explorer is not fabric specifically. So I don't think that we will support that, but you can always deploy Hyperledger Explorer on the side, not using the operator, or you can use the explorer that we provide. All right, I think that's all. Thanks, everyone. We loved the participation and the questions, definitely; some great questions. I know people didn't post on Discord, but you can always come back on Discord: we have those two separate channels, and also the workshop channel. If you need some help again, please come back on the workshop channel and ask your questions. The advantage of asking on Discord is that it will stay there, so someone else who has the same question will have the answer; it's not a video answer, and you don't have to wait for the whole video to complete to get an answer. But yeah, please feel free to use both of them. And as we already said, for some of the issues, if you think that any feature would be great, you can add it as an issue. We can do the triage, and if we need more details on how to do it, we can triage that issue.
Submitting an issue is also one way of participating in open source; it doesn't always have to be code. Of course, if you are already working on something and you think it is a good fit, you can create the issue and then submit the PR as well; that's the best way anyway. And yeah, that's all. Thanks. Great. Thanks everyone for joining, and thanks for running this; it was nice to have all this information. And as you said, let's meet on Discord if you have any additional questions about Bevel. Great. Thanks everyone. Thank you everyone. Have a nice day. See you.