So, thank you everyone for dialing into today's virtual Hyperledger meetup. We are going to talk about deploying production networks with Hyperledger Bevel Helm charts today. We have Sonak Roy and Suvajit Sarkar, both from Accenture and Bevel maintainers, who are going to be going through this material with you. And again, if you have any questions as we go through this, please do feel free to ask. And I'll keep an eye on the YouTube; we sometimes get questions over there. Sonak and Suvajit, I'll copy over any questions I see on YouTube to the Zoom chat, so you only have to check there. But with that, why don't you take it away? Thanks, David. So welcome everyone to the Hyperledger meetup. As David said, we'll be talking about deploying a DLT network using Helm charts. Before we actually go into the detail of our agenda, let's have a quick round of introductions. Sonak, over to you. Hello. Hi, everyone. I'm Sonak Roy, with 17, 18 years of experience in the software industry, currently working for Accenture and maintaining Bevel since 2019. These are some of the companies I've previously worked for. I mainly work on distributed architecture as well as Bevel, implementing Bevel for blockchain networks for clients. That's all from me. Thank you, Sonak. Myself, Suvajit Sarkar, I have around 12 years of experience in technology and software development. I currently work with Accenture and am part of the maintainers of Hyperledger Bevel. You can find me on LinkedIn, my ID is this, and I'm on Twitter as well. Without further ado, let's move to the agenda for today. The agenda for today is quite simple. We'll first do a small introduction of what Hyperledger Bevel is about, and then we'll dive into the actual code demonstration, where, if you want, you can follow along while we do it.
But I mean, you can use this as a recording and do it in your own time as well. The demonstration is basically in two parts. The first one is deployment on Kubernetes on Docker Desktop, which is a local Kubernetes setup. For the DLT network, we will be using Hyperledger Besu for today's demonstration. The other part of the demo is deployment of Hyperledger Besu again, but this time on a cloud-managed Kubernetes, which is more of a production-ready environment. I'll move to the next slide. So, just a quick introduction on what Hyperledger Bevel is about. It's an automation tool that allows you to consistently and rapidly deploy production-ready DLT platforms. The idea behind Bevel is to allow developers to set up a secure and scalable blockchain solution without worrying about the different aspects of how it will work in production, or about scalability and security, and thus focus mainly on developing the blockchain application. Bevel has been proven to cut the development time from a matter of weeks to a matter of, I would say, hours or days. One of the guiding principles that Bevel follows is the reference architecture. This is the DLT reference architecture that was open sourced by Accenture a couple of years back; Bevel conforms to that, and that's how it gets its production-worthy architecture. The infrastructure that Bevel uses, or the tools and components that we have, are mostly infrastructure independent, which allows you to choose your own cloud provider or set up your infrastructure in whatever way you want. Most of the components in Bevel are modular in design, so you can plug and play these different components and choose different tools as per your requirements and needs. The other aspect is that Bevel is designed for security.
So all the keys and crypto material that are required for any blockchain network are not saved in the source code itself, or in the configuration file, or in the controller environment. Rather, all of these are stored in a secure key vault. The other thing is, as you all know, Bevel is open sourced under the Apache 2.0 license, so all of the components are open sourced and Apache 2.0 licensed. The diagram that you see here talks about how Bevel works in a nutshell. What you have here on the right-hand side is the different DLT platforms that Bevel currently supports: Hyperledger Indy, Corda Open Source, Corda Enterprise, Hyperledger Fabric, Quorum, Hyperledger Besu and Parity Substrate. And the different cloud providers that you can choose and use as infrastructure are any of these which support some managed Kubernetes; a self-managed Kubernetes setup is also possible. I will move to the next part, which is the actual demonstration of deploying Hyperledger Besu on Kubernetes on Docker Desktop. Before I move to the actual demonstration, do we have any questions? No questions, I see. So for this part, these are the tools that I'll be using. The things that are absolutely required are Git and the Hyperledger Bevel repo. We would suggest you get the Hyperledger Bevel repo from the GitHub URL; we'll be posting the URL. You can fork the repository and clone it locally. The Kubernetes environment that I'll be using will be on Docker Desktop, and I'll also be using WSL with Ubuntu on it. For development I'll be using VS Code, and Kubernetes Lens to view the different resources and manifests that spin up on the Kubernetes cluster.
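The tooling prerequisites mentioned above can be sanity-checked from a terminal; this is just a convenience sketch (Kubernetes Lens and Docker Desktop are GUI applications, so they are not checked here):

```shell
# Report whether each required CLI tool is on PATH.
for tool in git helm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```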
So quickly, to talk about the environment that I have: I'm currently running Windows, and as for system requirements, right now I'm using 32 GB of RAM, but I think 16 GB will still work, though Docker Desktop does consume a lot of RAM. I have Docker Desktop installed and running, and if you are new to Docker Desktop, you have to enable the Kubernetes engine. You can go to Settings and then to Kubernetes; here you have to enable Kubernetes, and currently the Kubernetes version that is enabled is 1.29.1. Another important thing that is required is WSL, and I'm running WSL with Ubuntu installed on it. If you are on an Ubuntu machine itself, or if you are on a Mac, you don't have to do that. The idea behind this is that the Docker engine itself uses WSL2 as a backend engine, and the advantage of that is that the Kubernetes cluster and your WSL are using the same network interface. So if you just do a docker ps, right now there is nothing running, but whenever you have some Kubernetes pods running, you'll see Docker containers running for them, and you can access those pods or containers using localhost. That's the advantage of it. And if you have some other applications that you want to test locally, you can still have them on the same network interface in the same environment, so the communication between them is easier to configure. Right. So once you have the Kubernetes engine running, if you run kubectl commands, you should be able to see the Kubernetes nodes or pods, whatever you query. Right now it's running the docker-desktop control plane with just one node. Now I'll quickly switch to my VS Code, where I have already got the Bevel code from my fork repository. So this is the root of the project.
So if I do a git remote -v, I have this upstream, which is the official Bevel repo, and this is my fork repository. It's better to work from a fork of the original repository; that way, if you are planning to contribute something, it's easier to create a pull request. That's the process that we follow. And with that, let's switch. I think it needs to be a bit more zoomed in. Yeah, that's better. Let me try to move the Zoom toolbars aside, which are actually blocking my view. Yeah, that's fine now. Code-structure wise, post the 0.15 release, we have moved our samples out to a separate repository. So now at the root of the repo you'll see just the platforms folder, which contains all the platforms that we currently support (Besu, Fabric, Indy, Quorum, R3 Corda Open Source and Enterprise, and Substrate), and the shared folder, which contains all the code that is reused by the other platforms. Once we go through the deployment, whenever there's a reference to shared, we'll talk about that. But for now, let's look at Besu, because for this session we'll be deploying Besu. Each of these platforms will have subfolders: charts, configuration and releases. Configuration contains the Ansible code that is used for automation. The Ansible automation uses a GitOps process, where the network state, the single configuration file that is used in Bevel, is divided into sub-configurations, which are then pushed as releases to your GitHub repository, and those are translated into Kubernetes instructions. That's how Bevel works.
But this demo will be doing something completely different. This is the first time we are doing this, and it's one of the key things that we decided to start working on since we did the Bevel general availability release 1.0. We are currently working on it and it's still in development; some of the platforms are done. The idea is deploying a particular platform using just Helm charts. We have worked to make our Helm charts standalone, so in this particular demonstration we will not be using Ansible; we will directly be using the Helm charts that we have here in this charts folder to deploy the Bevel Besu network. The charts here are as follows. You have the Besu Cacti connector; this is for setting up a Cacti connector, which we will not be talking about, as it's more for interoperability. Then the other charts are the Besu genesis and Besu node charts. There's an operations chart as well, which is propose-validator; this is used to add new validators, basically proposing an existing node or existing member as a new validator. For private transactions we have the Tessera node chart, which is used to do private transactions. And then we have a chart for TLS certificate generation. This is particularly used when your Besu nodes are behind a proxy and use SSL or TLS for communication; there we need TLS certs, and this chart provides that facility, using OpenSSL to create your TLS certificates. The first part of the demo, as I said, will use local Kubernetes, and I'll be setting up Besu with the bare minimum requirements. We'll not be using any proxy service, which means that all our nodes, the peers and validators, will be using the cluster IP, so all the communication will be within the cluster. Any single-cluster setup would work.
If you have multiple clusters, then we'd definitely need some kind of proxy, some way to make those peers reachable to the peers on a different cluster. For this first part we will not be requiring those things. We'll also not be using HashiCorp Vault, so the crypto material will not be saved in a separate key vault; instead we'll be using Kubernetes secrets to store those credentials, the node keys and the other keys that are required for a Besu network to work. For anyone who wants to use this, the first thing they should look into is the README under the charts folder. If you look at the README, it's quite self-explanatory, so I'll just go through some of the important points here and skip the ones that are required for a different kind of setup. I'll skip the values part that is used when you have HashiCorp Vault as the key vault service; we'll discuss that in the later part. Let's start from the prerequisites. The first thing that we need to do is update the chart dependencies. The way we have written these charts is, let's look at the Besu genesis one, each of these charts will have some requirements, which are its dependencies. And as I was saying, there are a few charts in the shared folder which are used by the other platforms. For example, this one, if you see, is taken from the shared charts; it's the Bevel vault management chart, and the other one is the Bevel scripts chart, which contains utility scripts that are used, say, if you want to access some of the vault-related things. So let's just run the commands now, and let's do the first dependency.
I need to run it from the charts folder, so cd into the repo, then into the hyperledger-besu charts folder, and run the same command. Now, if you see, the dependencies are being downloaded, so in the Besu genesis chart you'll see the .tgz files for the dependency charts. The first dependency is done. The next one is for the Besu node chart, so I'm going to run the other dependency update. Yep, the other dependency is done as well. If you look at the Besu node chart, the dependencies are the Tessera node chart, because each node can have a private transaction manager running with it, so we need that chart, which will spin up the Tessera node. The other ones are the TLS certificate chart and the storage class chart; the node requires some kind of persistent storage to save the ledger data, and that's why we need the storage class here. Once this is done, we are all set to run the first command, which is to install the Besu genesis chart. For that purpose we'll be using our namespace, which is supply-chain-besu. You can give any name to your Helm deployment and have any namespace you want. The values that we'll be using are from the values folder, the no proxy and no vault ones, and since this is the genesis chart, we'll be using the genesis.yaml. You can have a look at these values under the values folder; we have two folders here, one is no proxy and no vault, the other is proxy and vault, and we'll be using no proxy and no vault. The genesis.yaml contains the different configurations that are required for the genesis, so you can change the values here to configure the genesis file as per your need: the chain ID, for example, the consensus mechanism, the gas limit for your network, and then the part where it starts the bare minimum network. Here we are starting four nodes, and these four will be validator nodes.
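The dependency steps described above can be sketched as shell commands; the folder and chart names here are assumptions based on this walkthrough, so adjust them to match your own checkout:

```shell
# Path assumed from this walkthrough; adjust to your Bevel fork's layout.
BESU_CHARTS=platforms/hyperledger-besu/charts

# Pull each chart's declared dependencies (e.g. the shared vault-management
# and scripts charts) down as .tgz files under the chart's charts/ folder.
helm dependency update "$BESU_CHARTS/besu-genesis"
helm dependency update "$BESU_CHARTS/besu-node"
```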
So for a Besu network, more than two-thirds of the validators should always be up, and the minimum number of validators that Besu themselves suggest for a QBFT network is four. That's why we have defaulted to these initial four validators that we'll spin up. You can choose to change this, but as I said, the recommendation from Besu itself is that a QBFT network should have a minimum of four validators. So you can just leave it as it is, and it's going to create the genesis file, which will have the node addresses, the extra data information, of all four validators. It's also going to create the node keys and other node-related crypto material, which I'll show you once I run the code. One more quick thing here: if you look at the genesis chart itself, at the job which is going to create the genesis file, you'll see that there is an option for permissioned networks as well, and there are sections about additional accounts. This allows you to pass in new accounts which will have an initial balance; you can edit this to add your own accounts with whatever balance you want those initial accounts to have. So without further ado, let me just run the code, and then it will be easier to explain what has happened. It's simple: I just run the helm install command with the name that I've given and the chart that I'm calling, the namespace is supply-chain-besu, and I'm telling it to create the namespace, because the namespace was not there before, and then the path to the values file which I was just showing. Once this is done, if I open Lens to see what has happened, you'll see in terms of pods that there's a genesis init job that has succeeded. So it has completed the job, which has actually created a few things. It has created the secrets; the secrets hold the crypto material that is required for the validator nodes.
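A minimal sketch of the install step just described, plus a kubectl check on what the init job produced; the release name, chart path, values path and namespace are assumptions taken from this walkthrough:

```shell
NS=supply-chain-besu

# Install the genesis chart; its init job generates the genesis file plus
# the node keys and account secrets for the four initial validators.
# (Four is the QBFT minimum: BFT protocols tolerate f faulty validators
# out of n = 3f + 1, so four validators tolerate one fault.)
helm install genesis ./besu-genesis \
  --namespace "$NS" --create-namespace \
  --values ./values/noproxy-and-novault/genesis.yaml

# Afterwards, the generated material can be listed without Lens:
kubectl get secrets,configmaps -n "$NS"
```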
So if I just look at this, you'll see it has the account address, the account keystore, account password, account private key, the node address, the node key, the node private key and the node public key. Similarly, it will have these for the other validators as well. Apart from these, it has also created config maps. The config maps contain some files; the Besu genesis is the important one, and there is also the Besu peers one, which contains the static-nodes JSON file that the network has generated. So the crypto material for all four validators is already ready; all we need to do now is actually start the validators. Let me run the other command. The first one just starts validator one: I've given the name validator-1, calling the Besu node chart, and passing the values, again from no proxy and no vault, but this time the validator.yaml. One interesting thing here that I'd like to quickly show you is inside this node chart, which I'm going to run. If you look at the values of this one, as I was saying, this chart has a dependency on the storage class, correct? A storage class is basically a class definition of how your persistent volumes will look, and it depends on the underlying infrastructure that you are running. In my case, I'm running Kubernetes on Docker Desktop, so the storage class definition will change. Let's look at the default storage classes that we support: if I go to shared, to the Bevel storage class chart, and show you, not the YAML file, the helper file, you'll see the provisioners. There is the AWS EBS provisioner for using AWS EBS volumes, the CSI driver for GKE, the Minikube host path for Minikube, and for Azure
you have the Azure disk CSI as the provisioner. Similarly, you have to have a provisioner that allows you to create volumes on your Docker Desktop. To check that, you can just do kubectl get sc; sc stands for storage class. You'll see that there's already a default storage class created, and I'm just checking the provisioner here: the provisioner is the Docker host path one. So if you are not sure what the provisioner is, you can check your own Kubernetes cluster; there will be a default storage class, and you can check its provisioner there. I'm going to use the Docker host path as the provisioner. Now, I could change the provisioner here in the Bevel storage class chart itself, but the beauty, which I just wanted to show you, of how dependencies work in Helm charts is that you don't have to change the base dependency chart; you just change the value in the dependent chart, the main chart. In the node chart, if you look at how the dependency is declared, if I open the requirements.yaml, you'll see that the storage class is aliased as storage. And if I go to its values, you'll see that there's a value section called storage. Any value that you define under this storage section will override the sub-chart, the dependency chart that this main chart depends on. So either you change it there, or you change it in the top-level values that you're going to pass. I'm going to change it in the top-level values that I'm passing. I'll go to values and then validator.yaml; there's a storage section here, so I'll just set the provisioner to the Docker Desktop host path. Correct. I'm just confirming the spelling, or else it will not work. I think that's fine. Once I change this, I am ready to run the command.
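Because the storage class chart is aliased as storage in the node chart's dependencies, the provisioner can be overridden from a small values file instead of editing the sub-chart. A sketch, assuming the key layout shown in this walkthrough; on Docker Desktop the default provisioner is typically docker.io/hostpath, but confirm with kubectl get sc:

```shell
# Write a tiny override file setting the provisioner for the aliased
# "storage" dependency (key names assumed from the walkthrough).
cat > storage-override.yaml <<'EOF'
storage:
  provisioner: docker.io/hostpath
EOF

# It would then be passed alongside the platform values, for example:
#   helm install validator-1 ./besu-node -n supply-chain-besu \
#     -f ./values/noproxy-and-novault/validator.yaml -f storage-override.yaml
cat storage-override.yaml
```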
So I'm going to run this command now. Right, once this is done, quickly looking at Kubernetes again: it has now created a stateful set, and this runs the Besu validator node. If you check its logs, you'll see that it is actually starting the node now. It will take some time, so we'll come back to this again. Just to show you the storage class thing: if you look at the storage classes, it created the storage class for the validator, using the Docker host path provisioner. That's how it is able to create the persistent volume claim, and the persistent volume claim is then able to create a persistent volume. This is the volume that the validator node will be using. Now, without further ado, let me quickly run the other three validators that are required for the network. I'm going to run validator two, then validator three, let it complete, and then validator four. Right, so all four validators are running. Let's check them here in Lens. You see all four validators running; let's check the logs of one of them, the first one. So the Besu validator started running. The way we have defaulted it, it's a zero-gas-fee network, and as you saw in the genesis, the network ID, or the chain ID, is 1337. And the Besu version is 23.10.2. Now, since all the other validators are up, it's able to reach them and connect to the other three peers, the validators, and after some time they will start adding blocks. I'll take a quick pause here. Any questions? No, I don't think there are any questions. Okay, I'll check on YouTube as well. All right. So now we have all four validators running; we can spin up a non-validator node.
So to do that, we'll use the same Besu node chart, but instead of the validator values, we'll be passing the transaction node values yaml. These are the default values that we have provided; you can look into them and make changes according to your needs. I'll just open the transaction node one. Here you see there are additional things like Tessera, which is the private transaction manager. A validator does not actually take part in transactions, so it does not need a Tessera; this particular transaction node, though, will have the Tessera one. And similar to the validator node, I'll need to add the provisioner to the transaction node values as well, so I'm going to add this. Then let's start the member as well: same command, I've just changed the values file. By changing the values, I'm changing the role of that node itself; those ones are validator nodes, and this one is for doing the transactions, a non-validating one. So I have the member up now. If you see, it has created the Besu member node as well as the Tessera node. And in the logs below, for the validator node, you see that it has already started adding blocks; it has already reached the 13th block. With this, you have a Besu network which is ready to be used. You can use this network to do your transactions, and you can deploy smart contracts on top of it. But all of this is set up in a local environment where none of the validators are exposed outside, and the member node is also not exposed externally. Still, you can try out a few things quickly. For example, let's do a quick transaction. To do that, let's look at the services, the Kubernetes services that we have.
So I'm using k as an alias for kubectl; you would run the kubectl command, kubectl get svc, where svc stands for service. If you check the services, you'll see that all these validators and members have ports open. 8545 is the RPC port, so I'm going to use the RPC port to check a few things on the network. The RPC endpoint exposes different APIs; you can use the admin API, or the other APIs, to query things or do admin-related work on your node. To do that, since this is a test network running on your local Kubernetes, you have to do a port forward. You can do a kubectl port-forward, service slash the name of the service. I'm going to port forward the Besu member node's service, forwarding port 9000 to 8545. What that means is that my localhost port 9000 will be forwarded to port 8545 of the Kubernetes service for the Besu node; that way, I can use localhost port 9000 to reach the RPC endpoint of my member node. I have to give the namespace as well. Now you see it's forwarding, so on the local machine I can use localhost, or 127.0.0.1, which stands for localhost itself, to reach the RPC endpoint. I'm just going to use Postman; I have a Besu JSON-RPC collection, so I can use this to check a few things. For example, I can check the list of validators. This collection has the IBFT API, but since we are using QBFT, we'll have to use the QBFT API. There is no QBFT request in the collection, but I can just change the body: instead of ibft, I will say qbft. Let's do this one, get validators by block number. I think I have to change the environment; let me see if it is correctly set. Localhost 9000 is the environment, and let's say the latest block.
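The same query can also be made without Postman, using curl against the forwarded port. The service name, namespace and local port below are assumptions from this walkthrough; qbft_getValidatorsByBlockNumber itself is a standard Besu QBFT API method:

```shell
NS=supply-chain-besu

# Forward local port 9000 to the member service's RPC port 8545
# (run in the background, or in a separate terminal).
kubectl port-forward svc/besu-member-1 9000:8545 -n "$NS" &

# Ask the QBFT API for the validator set at the latest block.
PAYLOAD='{"jsonrpc":"2.0","method":"qbft_getValidatorsByBlockNumber","params":["latest"],"id":1}'
curl -s -X POST -H 'Content-Type: application/json' \
  -d "$PAYLOAD" http://127.0.0.1:9000
```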
I'm going to check the latest block and see what the validators are on the latest block. With this, you'll see that it gives me the node addresses of all four validators that we have. This proves that the network is adding blocks, with the latest block having the four validators, correct? Another thing which would be interesting for you is just looking at how the genesis file looks. For that, you go into this particular folder, because for the additional steps, which we'll be talking about in the next part, we would need the genesis file there; but I'm just using it to view it currently. So I'll run the second command; I don't need the static JSON now. This one will show me the genesis file that is actually used in the network. If you look at it, you see that there are these accounts, which are the accounts that were added for each of the validators. Each validator will have an account added, and that account is given an initial balance. If you look at Lens again, at the secrets where the account information is, and I open validator one: this is the account keystore, this is the account password, and then the account address. If you see, there's a 0x712f..., and if you just go here, you'll find a matching address. That means this validator node has an address, and that address has this balance; that's what the genesis file is saying. So you can simply use your browser, connect MetaMask, and go to Settings, Networks, Add a network. Since there's a port forward happening here, forwarding the RPC endpoint of one of the members, I can use that as the network address. So let me add a new network.
So yes, let's say test-bevel, and the RPC is http://localhost:9000; that's the port. The chain ID is 1337, that's what we saw, and then we can just call the currency ETH and save it. Correct. But I see 0 ETH, because I have not added the account yet; this account does not have any balance. All I need to do is add an account, import account, and use the private key that is here, again in the secret. The secret will have your account private key; this is the private key, and I'm going to import it here. Yes, it got imported. Now you have the initial balance that was defined in the genesis, and you can send or do a transaction. I'm going to send to one of my primary accounts; I'm going to send 10 ETH. By default, MetaMask takes the gas fee from the market value, whatever the Ethereum market value is; it does a gas estimation. But since ours is a gasless network, we can just click next, and here in the advanced settings I have already edited it so that no gas is required, because it's a gasless network; there are no gas fees. Then I can just confirm the transaction. It's spending some time. Yep, the transaction got completed, and in my primary account I see 10 ETH now on the Besu test network. With that, I will take a pause now. Do you have any questions, or are we good to move to the next part? Hello, can you hear me? Yep. I have a question. Why is it called validator one? Why not just validator plus the number of instances; why do you name them one, two, three? Yeah, just for simplicity's sake I'm naming them validator one, two and three. It's up to you how you choose to name them.
So here, when you do a Helm installation, you can give your Helm installation whatever name you wish, and that's the name you'll have for your validators. Actually, no, I don't think that will quite work with Besu, because the genesis refers to them as validator-1; for the genesis validators you have to use the names validator-1, -2, -3, -4. If you add a new validator later, I think you can name it whatever you want, and members you can name anything, yeah. So in that sense, if I change this, there will be a mismatch with the genesis? With the genesis, yes; well, not exactly the genesis, it is with the secrets that are getting created, because the genesis job is going to create those. If you go to the list of secrets in Lens, it's creating the secret names like that, right? Validator-1 keys, validator-2 keys. That is how Besu does it; we are using the hook concept provided by Besu's Kubernetes reference, and that hook creates these secrets as validator-1, -2, -3, yeah. Okay, thank you. For the members you can give any name; that's the only mismatch. If you want to go and rename everything, then, I think, if you rename the secrets as well, it will work. Yeah. But for the members you can name anything, because the member crypto, the account, etc. are not stored in the genesis anyway, right? Thank you. Thank you. Do you have any more questions? Or else I'll hand it over to you, Sonak. Yeah, I'm just checking on questions on YouTube as well. So yeah, we can hand it over. Yep, I'll stop sharing. Do you want to share? Yeah, sure. Okay. You're seeing my screen, right, my charts page? Yep, I can see your browser. Right.
Yeah, so here it's a bit weird that I cannot see what I'm sharing, but anyway. Suvajit already explained the charts for the basic components in detail, and I'm going by this page because it renders well. All this while you saw the basic installation with no proxy and no Vault, just to get a basic network with your choice of consensus protocol, so you can write your smart contracts and deploy them by forwarding the RPC to localhost; the RPC endpoint is always there. For more production-oriented use cases we will definitely not use localhost: we'll use a proper Kubernetes cluster, we'll use Vault for storing our secrets, and we'll use a proxy to access the RPC endpoints. From our point of view, that's what a very basic production structure should look like. Of course, for a full-fledged production network you should have much more advanced security features, plus backups and restores for the Kubernetes cluster itself.

Going by that, the example value files are the same, they'll just have some different values. The important parts are mentioned here, but you can go to each chart's README to see everything you can configure; each of our charts has its own README, so you can set the values as per your requirements. The main things you should take care of: first, the cloud provider. The earlier examples used minikube or AWS, but the example I'll show is on Azure, so you can provide the provider as azure, aws, or minikube. Bevel in general supports AWS, Azure, GCP, minikube, and DigitalOcean I think, not OpenShift. The cloud-native services flag is false: this is a feature we're going to work on this year, where you'll be able to use cloud-native secret managers like Azure Key Vault or AWS KMS instead of Vault. In that case the flag would be true, but right now it's false everywhere because it's not implemented yet; it's for a future release. Then of course you need to provide the Kubernetes URL, which is the Kubernetes API URL. Those are the main things on the Kubernetes side.

Then, as I said, we'll be using Vault as our secrets manager, so we provide the type as hashicorp: we'll be using HashiCorp Vault, whereas what Suvajit used was kubernetes, which means plain Kubernetes secrets rather than a secrets manager. The network is besu, of course, because we're doing Besu. Then you provide the URL address of the Vault, the auth path, which is where the Vault secrets will be created, a secret engine name, the secret prefix, and the role. The role you generally don't have to change, because it's static in Vault now. This is the Vault we're going to use, and this is the secret engine I have enabled, secretsv2; it's a KV engine. The standard Bevel documentation tells you how to enable a secret engine when you set up Vault, and this is the secret engine we created. If your secret engine is called something else, like kv, you would use kv as the secret engine name here. Right now it's empty, since it's a new one.

So we come to this section: Ambassador proxy and Vault. Is this visible, or should I do it like this? This is better? Yep, this is better. So we'll be using the values from the proxy-and-vault examples, and as mentioned in the README, please replace these values. You can replace other values as well, but these are mandatory: the global Vault address, the global cluster URL, and the proxy external URL suffix. Those are the three things you must replace, and I've done it already. For example, in my genesis file you can see the changes from the checked-in version: this is my Vault address; the provider is azure where the checked-in version had aws; and the Kubernetes URL is my Azure AKS cluster's URL. The raw genesis is the same; it still creates four validators and uses QBFT. Those are the same changes I've made in all the files. As you can see, the Kubernetes URL is not required for the nodes, only for the genesis, so it's missing here, but otherwise it's the same. For the validator and transaction nodes the external URL suffix is also updated, because that's how we'll access the RPC endpoints. Same with the validators. I also updated the Besu image version, because the version we initially checked in was giving issues with mining at a zero gas price. Those are the updates I had already done.

Then all you do is create the namespace and create the root-token secret. Why do we need a root-token secret? The initial connection between your Vault and Kubernetes has to be made by someone, and that someone is root, which is why you have to provide the root token. So, kubectl: I've already created this. It's the root token you get when you create, initialize, and unseal Vault, stored as a Kubernetes secret. I'll go to the supplychain namespace now, and you can see the root token is already created. I did a test run, which is why these other things are there, but they'll soon go away.

Now we'll run the first chart, the genesis chart, with the same helm install genesis command you saw before. The only difference is the value file, because my value file now has the Kubernetes URL and the Vault details as well. It's similar to the previous example, except there was no vault management section before; in this example there is, and that's what creates the connection between the Kubernetes cluster and Vault. Once the genesis is complete, if we look at what has happened in Vault, we get a supplychain folder, with the same name you provided here. As we were discussing, the Besu genesis automation hook uses the validator-1 naming convention, so all the secrets are here as well: not only in Kubernetes secrets, but also as a backup copy in Vault, along with the genesis file and the static-nodes file. This means that even if you lose your Kubernetes cluster, you will not lose the account secrets, because you're keeping them separately. So Vault has all the secrets that were created.

Next step. Again the same as before; the only thing I'll point out is that we're passing a proxy P2P port. This is the RLPx connection: with Ambassador, or any other kind of proxy, RLPx is TCP-based, not HTTP, so each service has to have its own port, otherwise they will clash. That's why I'm passing the RLPx P2P proxy port separately; it will be different for each of the validators as well as the member nodes. Back in Lens, you can see we have a pre-install hook which, for each validator node, doesn't really do a lot: it just checks that the secrets are there, and if they're not, it downloads them from Vault if a Vault secret exists. Because we ran the genesis, the Vault secrets have been created. This one is pending because the storage class is being created; while that happens we can do the rest, it doesn't take much time. If you compare validator one and two, the difference is that I've also changed the P2P port. I'll do three, then all of this together, and then we'll look at the logs.

And remember, these files, like the main validator file I'm using here, are given to you as examples. If you want to change anything, edit either this file or the default values within the charts. It's always better to use a separate value file like this, so you know what changes you have made: maybe you want another version, different resource limits, or the removeKeysOnDelete flag, which is false here; I'll show you what that means. All of these can be changed. I'll do the last validator now. You can see that each of the pre-install hooks has run, the nodes are running, and we can see the logs.

Yep, so the logs are here, the same QBFT output. There is some issue with the NAT manager; we're not entirely sure why the Kubernetes NAT manager doesn't work, but because we're using Ambassador it works in the end. You can see that after validator four started, the consensus started and it's creating and importing blocks. And in the secrets, initially there was only the root token, but now you have all the keys, similar to what Suvajit showed. The difference here is that if you delete these keys and then re-run the helm install, it will recreate them from the old keys in Vault rather than generating new ones; your keys in Vault are still safe. That's the production-worthy behaviour we want to provide: if an admin accidentally deletes or changes the enode or node key details (and of course only an admin should be able to), you can still get them back from Vault, and it's automated. You don't have to do it manually; all you have to do is a helm uninstall and a helm install. Any questions? There's no question here, and none on YouTube either. Fine.

Moving on to a setup in another namespace, which mocks a different organization. It involves two things, mainly. If you want another organization to join the network, you have to give them the genesis file, the boot node details, and the peer details: three files you should share, in a secure manner of course. Although in general the boot nodes and the genesis don't contain anything secret; as Suvajit showed you with the genesis, it's all public information for that network.
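One possible way to gather the three shareable files for a joining organisation. This is an assumption, not Bevel's built-in tooling: the demo copies the files by hand from Lens and Vault, and the configmap name besu-peers below is a placeholder. The script prints the kubectl commands rather than running them, since it assumes a live cluster.

```shell
# Source namespace is a placeholder matching the demo's first organisation.
SRC_NS="supplychain-bes"

# The three files a joining organisation needs; none contain secrets.
files="genesis.json static-nodes.json bootnodes.json"

for f in $files; do
  # kubectl jsonpath requires dots inside key names to be escaped.
  esc=$(echo "$f" | sed 's/\./\\./g')
  echo "kubectl -n $SRC_NS get configmap besu-peers -o jsonpath='{.data.$esc}' > $f"
done
```

The exported files would then be placed in the secondary genesis chart's files directory, as shown next, and shared with the joining organisation over a secure channel.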
What we'll do is go to the Besu genesis chart's files section. In Lens I already have this; I'll remove these files first, because they're from my older network. Now I'll fetch each of the three files: static-nodes.json, the genesis, and the boot nodes. So I have all three here in my genesis files. There were no boot nodes; this is the genesis, and as I said, no secret keys are in it, so it's fine to share; and the static-nodes file has the addresses. If you look at the static nodes, the addresses use an internet address, not localhost: validator four is this hostname with its port, and as I said you need different ports, otherwise they will clash, because it's a TCP connection. So this is where the fully qualified addresses of the validators are available, and this is how nodes connect; even the local validators connect using these addresses.

Now I'll go to the charts. For carrier I've also already created the root token; this is the other namespace, and you can see the root token is already created. Now I'll run the genesis secondary chart. The secondary genesis isn't actually creating a genesis; it's just copying these JSON files into the new member's namespace, the different company. If we go back, you can see that for the carrier namespace the genesis is already here. That's done; now I'll create the carrier node. You can see it has a slightly different value set: first of all it's a secondary transaction node, and the proxy ports also have to be different. Why they're different for me is that I'm running on the same Kubernetes cluster; if you're running on a different Kubernetes cluster you can reuse the ports, or use totally different ports depending on that cluster. The other thing I'm passing is the node identity, which we use in our supply-chain example; it just gives an additional identification to the node.

If I go to the pods, you can see this one has the Tessera hook as well as the node hook. The general idea of a member in this production use case is that it also has a Tessera node, which provides the privacy component. You can disable Tessera: in the value file, if you pass Tessera enabled as false, it won't create the Tessera node. And I remember now that for the peer nodes to work this has to be created, so let's create the validator as well. The carrier node is not up yet.

There is a question on YouTube: what is the minimal cost to run this setup? I assume this is about what I'm running. It depends, of course; pricing has changed and all that, but I'm running a Kubernetes cluster on AKS with three nodes, so the cost is basically whatever running your three nodes costs. I think AKS doesn't charge separately for the Kubernetes endpoint itself. Then it varies depending on your cloud provider, and on the node types: these are all general-purpose nodes, so high-CPU or high-memory nodes will be more costly, and there's a limit on how much you can run on each type of node, meaning the VM instances, EC2 or Azure virtual machines. We've found that a production-oriented network generally costs around 700 dollars per month, and that roughly includes the load balancer and networking as well. You can save more with reserved instances and so on; then it will be cheaper.

Right, so the carrier node is up, and you can see that it has imported almost the latest block. We'll check the Tessera node; this is the DB, and Tessera is still taking time. I think Tessera takes a bit of time because I didn't run the supply-chain one, and Tessera only works when it can communicate with the other node, so maybe that's what's causing the issue. Anyway, if you're not testing Tessera it should be fine; the node is up and running.

Now, same as what we did before: I'm already connected to the test network in MetaMask, so I just have to add the account. I can take the private key from my Vault very easily. Same as before, we have 10,000 ETH, and my own account, this test account, doesn't have anything, so let me send some of this large balance to my account. I hadn't set the max fee; I tried to, and I think it should still work. It shows as sending. If I go to the logs, you should see a new kind of log line once the transaction is processed; once that appears, we know the transaction is done. Yes, you can see this is the transaction, because this block shows gas being used, where the earlier blocks were all zero gas. So that's done: the funded account has that much less now, and my own account has the ETH.

That doesn't quite conclude it; I'll just show you the uninstall part as well, mainly to verify this removeKeysOnDelete false flag. When I do an uninstall, if this flag is true it will also delete the secrets; if it's false, it will not. In this case it will not delete the secrets. Let's start with carrier. This is as simple as deleting any other Helm release: just helm uninstall with the namespace and the release name. It does have a pre-delete hook and a cleanup, because we do so many things internally, and we also ensure they're cleaned up. But because secret deletion was false, you can see the carrier keys are still here; they're not deleted. So the next time you run the installation, with the same helm install command, it won't recreate the keys; it will use the same account, the same details, everything. And because we've used volume claim templates in the stateful set, the persistent volumes are also still there, even though they're no longer bound to a pod. If you don't delete these volumes manually, you'll still have the same node and the same connection, so anything you've done on the node, any updates to the databases, any smart contracts you've deployed, will still be there unless you delete them manually. That's the advantage we've provided.

That's pretty much it; the rest of the uninstalls are the same. The only thing we say is: uninstall the genesis last, because it contains important things and connections. Don't uninstall the genesis first; it's the first thing we create, so it should be the last thing uninstalled. That's all I had to show for today. Any other questions? The other thing I'd also say: even if you turn that delete flag to true, the Vault values will still be there; they won't be deleted unless you delete them. And even if you delete them, Vault's KV engine has a version option, so you can actually get that version back from Vault, because Vault keeps versions. That's another advantage. Okay, that's all. Do you want to take over? Yep, sure, Sonak.

Right, so we completed both deployments: one on the local Kubernetes setup on Docker Desktop, and the cloud-managed one using AKS. Now let's move to the part where we want something from you. What we want is for you to be involved with us and with Hyperledger, and to help us shape the different projects we have. To do that, we have regular activities. The one we're on right now is the workshops; these happen around the different releases, or when we come up with new features. If you have any suggestions for the next workshop you'd like us to run, please feel free to suggest them; you can use our Discord channel or send a mail to the Hyperledger Bevel mailing list. The regular activity we do is the bi-weekly sprint planning calls; our sprints run for two weeks, so if you want to be part of our regular development and maintenance work, you're more than welcome to join us on those calls. They happen on Zoom, and you can get the links from the Hyperledger wiki. The other important call we have is the roadmap grooming call, which happens once every six weeks; there we discuss our progress on the roadmap, and which future releases or features would make sense to add to the project. We also have the release, or program increment, demos, which I think also happen once every six weeks. So, as I just said, you have the power to shape the direction these projects move in, and to shape how the project fits your needs.

And with that, thanks. I've just announced a new project called bevel-operator-besu, which is basically a Kubernetes operator for Hyperledger Besu. If you're interested in contributing, we'd be really happy, even at Hyperledger, because right now it's based directly on the Helm charts we saw today, just the Helm charts. With your contributions it would be great if someone could add additional controllers, using Golang or whatever is suitable, for additional features like adding a validator, and the operations you'd generally perform on a Besu network. That would be great. I've posted it on Discord as well; it's under my personal repository for now, but once we see interest we'll go ahead and submit it to the Hyperledger TOC as a project, similar to bevel-operator-fabric.

Awesome. Okay, we'll move to the last part, which is for you: feel free to ask any questions; you can come off mute and ask here, or put a chat message. Do we have any questions? There is a question on YouTube: have you had any problems or issues, any bottlenecks? It's a very general question. Of course there will be issues and problems when you're running a blockchain network, and this is still an emerging-tech domain; there are still so many changes within Besu as well as the whole blockchain space. The problem Bevel is trying to solve is providing you with an easy deployment mechanism, so you don't get stuck on how to deploy Besu in a more secure way; that's already done for you. So the deployment issues we've sorted. Issues with Besu itself belong in the Besu channel, I guess, where there are a lot of great contributors. On the Kubernetes side, that's why we use managed Kubernetes and don't run our own. The cluster I showed you today I created just yesterday, after maybe 10 to 20 minutes of effort, because a managed Kubernetes cluster is very easy to create on the cloud. So that's the easier part that we try to solve. As for the complexities and bottlenecks, there will be some anyway, the same as any other production system.

Any other questions? Nothing in the chat. Okay, I think that's fine then; we can end at the top of the hour. I'll stop here, and thanks everyone for joining. Thanks, everyone. Thank you for presenting. Thank you. Great, thanks everyone. Bye.