Okay, now that you are all live — hello everyone, welcome. This is the first workshop of a series about Hyperledger Bevel, a series of workshops designed to help you learn about blockchain technology, and particularly Hyperledger Bevel. Hyperledger Bevel is an accelerator by which developers can consistently deploy production-ready distributed networks across public and private cloud providers. Let me introduce our speaker today, Sownak Roy. He's a technical architect at Accenture and a maintainer of Hyperledger Bevel. Thank you everyone for joining this session — please enjoy and learn. Sownak, could you start, please? Yeah, hello. Hi everyone, good morning, afternoon or evening depending on where you are joining from. I hope my screen is visible. So I got a question: is this the first session for Hyperledger Bevel? No, this is not the first session for Hyperledger Bevel — we have been doing a lot of sessions, including code walkthrough sessions and demo sessions. But it is the first of this workshop series, and we have two more next week. As Igor just mentioned, Hyperledger Bevel is an accelerator. It is not a blockchain platform itself; it is an accelerator with which you can deploy blockchain platforms or DLT networks in much less time, and it targets production-ready systems as well, so you don't have to re-architect your solution for a separate production system — we have most of the production concerns embedded in Bevel. Of course, like any other production system, blockchain or not, you will still have to do all the other security assessments and so on, which are not covered as part of Bevel. Okay, with that I'll start. This is our antitrust policy notice. I'm not going to read it; it's there, you can read it, and it will be there in the video as well.
We'll wait there for a moment — I think some people are reading it. So, Hyperledger Bevel. This is the first session, and we have two more next week. The first session is about setting up a Kubernetes cluster. This is a generic Kubernetes session, and I know it has nothing to do with Hyperledger as such — Kubernetes is a different platform altogether. But the reason we are doing it is that when we ran earlier workshops, we found that most people do not know about Kubernetes and struggle to start the Kubernetes cluster itself. Of course, many of you here will be experts in Kubernetes, and in that case this session is not for you. We are also not going to go into the various details of Kubernetes, because there are very good courses online for that. We are just going to set up a cluster so that we can use Bevel on it. That's the summary for today's session. It's about one hour in total, so we'll see how far we get. It is a hands-on session, so you will be doing it yourselves and I will try to do it myself as well. It should be possible to create a cluster within an hour, because we will be using managed services. But if someone is stuck — because you don't have a cloud account or some other problem — then you can do it yourself later as well. The next session will be setting up Vault. HashiCorp Vault is an important component for Hyperledger Bevel because it stores all the secrets and keys — the public keys and private keys — in Vault, so that is important. So the second session will be on how to set up HashiCorp Vault. And the last session will be on how to set up a DLT platform using Bevel; that one is a longer session. Any questions so far? If not, I have a question for you: what is a Kubernetes cluster?
If anyone can give a brief answer for someone who doesn't know about Kubernetes, you can type it in the chat. So the question is: what is a Kubernetes cluster? That's the question from my side. Of course, if anyone has questions, we can also talk. I think there is a question from Jose: can we take this setup as a production-ready one? I wouldn't suggest taking this setup directly to production, because, as I said, for a production solution you will have to deal with other considerations as well — mainly security considerations. This one is not fully production-ready; it can be used as a high-level development environment, but not as a production system. Yeah, I think the answers in the chat are more or less right. Someone said cloud-native orchestration platform — no, it is not cloud-native, because you can run Kubernetes on bare metal as well, so it is not a cloud-native solution. "An orchestration framework for containerized platforms" — that is the best one. I would say it is a management platform for containerized assets, where all the images and so on are much easier to orchestrate using Kubernetes, compared to using, say, Docker Compose or anything else, because Kubernetes does a lot of the memory management and CPU management itself. And on top of that, we are going to use a managed Kubernetes in this session — managed meaning one which is provided by a cloud service provider. That is why these are the two prerequisites for this session. You should already have a cloud account — I think Igor sent that email — preferably GCP, because I'll be doing it on GCP; or any other provider, as long as it supports a managed Kubernetes. And you should have a laptop or desktop, of course, from which you are connecting and using the cloud console. So those are the two prerequisites. No further questions, I hope.
Right, so let's get on. I hope everyone has their cloud accounts. I will paste this link in the chat as well. So this is the guide that we will follow today. Okay. There is a question: can you elaborate on Indy and some other terminology — is it different from Bevel? Yes. For that you can look at the Hyperledger umbrella. This session is not about that, but if you look at the main website of the Hyperledger Foundation, you will see the greenhouse diagram, in which Bevel is a tool and Indy is a DLT platform. That's the main difference between Indy and Bevel. From the DLT platform point of view, under Hyperledger you have Hyperledger Indy, Hyperledger Besu, Hyperledger Sawtooth, and Hyperledger Fabric, which is the most popular one. Examples of non-Hyperledger DLT platforms are GoQuorum or R3 Corda. And then Hyperledger Bevel is a tool which is used to deploy these DLT platforms. That means Hyperledger Bevel can be used to deploy a Fabric network, a Quorum network, a Besu network, a Corda network or an Indy network — those are the five supported right now by Hyperledger Bevel. Okay. So we'll start now — before that, any questions? If you are doing the hands-on, please say yes. If you're not doing the hands-on, you can just watch, but I would suggest that would be quite boring. Right, okay. There's another question: does any cloud provider give a Kubernetes cluster for free, and if not, which provides it at minimum cost? That's why we are choosing GCP. As far as I know GCP is not free, but — and could everyone else go on mute, please —
For GCP it is not free, but for a new account you get $300 of credit, and Kubernetes is included in that $300 credit, because Google doesn't charge separately for the Kubernetes cluster management itself, whereas AWS charges for the Kubernetes cluster, and it will not be in the AWS free tier. So I hope you have accessed the link by now and gotten to your console. Let's do the next thing: ideally you create a new project, or you may already have a project — just select the correct project from the top. I'll show you here: you select the project here. I know this spelling is wrong, but this is how the project was created, so that's what I have. So this is my project. Then the next step: on your local machine, you should install kubectl and the gcloud SDK. I think I already have both on mine. So, kubectl — yes, I have kubectl, otherwise it would have given an error. And I have gcloud as well. So please install kubectl and the gcloud SDK, which we'll be using as the main clients. Right, then the next step is to create a service account. You can create the service account as per the details in the guide. Then I am going to create a key for it, and the key will be downloaded to my system. So that's the service account, and that's the service account you should be using to connect. If you already have something like a superuser service account, you can use that as well, but this is an extra layer of security: you use this specific service account and then set the Google credentials to this JSON file, so that whenever you access the Kubernetes cluster you'll be using it. Right, then next is Kubernetes — search for it.
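For reference, the client setup just described can be sketched as shell commands. This is a rough sketch rather than the official guide: the apt-based install assumes the Google Cloud apt repository is already configured on a Debian/Ubuntu machine, and the service-account name, project ID and key path are placeholders you would replace with your own.

```shell
# Install the two clients (Debian/Ubuntu sketch; assumes the Google Cloud
# apt repository has been added — see Google's install docs otherwise).
sudo apt-get install -y kubectl google-cloud-sdk

# Verify both clients respond.
kubectl version --client
gcloud version

# Create and download a key for the service account
# (the account name and project ID below are placeholders).
gcloud iam service-accounts keys create ~/bevel-sa-key.json \
    --iam-account=bevel-sa@my-project.iam.gserviceaccount.com

# Point the Google credentials at the downloaded JSON file.
export GOOGLE_APPLICATION_CREDENTIALS=~/bevel-sa-key.json
```

The `GOOGLE_APPLICATION_CREDENTIALS` export is what ties later cluster access to this specific, limited service account rather than your personal superuser credentials.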
So, Kubernetes Engine — it should be enabled already; if not, you have to enable it. For me, Google Kubernetes Engine is enabled because I've used it already. Then I'll click Create, and I'll use GKE Standard. I know Autopilot looks a bit more enticing, but it will give problems later, I guess, so use GKE Standard — that's what it says in the guide as well. Let's give this a name. As you can read in the document, if you are using this for production-like situations you should use a regional cluster, but otherwise zonal is fine. You can specify the node locations as well — I think this is fine; I'll add two more locations. For the release channel, I think you can just use default or stable; this is where you are selecting the version of Kubernetes. Bevel currently supports only 1.19, which doesn't appear here — that's fine, because for this workshop we'll be using 1.21. The only problem with 1.21 is that at some point there will be some issues with Vault and so on; we'll fix those during the workshop. But for now, choose the stable channel and choose the lowest version, which is 1.21. Okay, next is the default pool. You can keep the name as default-pool, or you can change it. I'm not enabling auto-scaling and all that now, because you can change these later — it's not something you cannot change afterwards. The only thing you may want to do is change the node type in your default pool; if you want, you can choose higher memory or higher CPU. But remember that this is per instance, so if you're using your $300 credit it may be costly to choose a bigger one, because we have already chosen three nodes. That means you will have three e2-medium VMs spun up.
And just for everyone, again: if you're not working on it all the time, what you can do is come back — I'll show you again — and set this number of nodes per zone to zero before you close up, so that it is not billed to your account when you're not using it. Then once you come back tomorrow, or whenever you resume, you can change the number of nodes back to, say, three. It is actually three per zone, which is quite high — that would create nine nodes — so I'll change it to one per zone. It will take some time to bring up the new VMs, but you'll save on costs. Okay, coming back to the nodes: I don't think the boot disk size has to be 100 GB, because for Bevel at least we'll hardly use anything from the default boot disk. Bevel always creates PVCs for any of the new deployments that you'll be doing. PVCs are persistent volume claims — separate volumes that are created on GCP — so they don't take up this boot disk space. So I've just reduced it. Okay. Networking — there is some issue; I don't think I'll have to create a network, but let's go one by one. Security: the Compute Engine service account is fine, and allowing default access is also fine. Okay, metadata — if you want to add any keys or tags, this is where you do it. You can add tags, Kubernetes labels, node taints and so on here. Okay, next, automation. So I think the issue I'm having is that there is no default network, so maybe I'll have to create a network first. Okay. There's a question: what challenges would one face with a Bevel setup if not using a managed Kubernetes service? The challenges that you will face are not with the Bevel setup itself.
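The cost-saving tip above — scaling the node pool to zero when you are not using the cluster — can also be done from the command line. A hedged sketch, where the cluster name, pool name and zone are placeholders:

```shell
# Scale the pool down to zero nodes before you stop for the day
# (cluster, pool and zone names are placeholders).
gcloud container clusters resize bevel-cluster \
    --node-pool default-pool --num-nodes 0 --zone europe-west2-a --quiet

# Scale back to one node per zone when you return; the VMs take
# a few minutes to come up again.
gcloud container clusters resize bevel-cluster \
    --node-pool default-pool --num-nodes 1 --zone europe-west2-a --quiet
```

Note that `--num-nodes` is per zone, so a three-zone cluster resized to 1 still runs three VMs in total.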
It is with setting up the Kubernetes service itself. If you're not using managed Kubernetes, then you will have to maintain those nodes yourself. You have to ensure that you have enough CPU, because otherwise Kubernetes will just stop scheduling. You will not have any automatic scaling — or you may need to write the scale-up and scale-down rules yourself, with all the complexity that comes with that. You will have to manage a separate Kubernetes master node yourself, manage etcd — the Kubernetes data store — and the Kubernetes master data, and ensure that they are backed up, so that if something crashes you can get back to the same state. All of that extra effort is not needed if you're using a managed Kubernetes service. Okay, let's see what else is here. You can choose a private or a public cluster. With a public cluster, as it's written here, the routes are automatically created — and you cannot change this setting after the cluster is created — whereas a private cluster is where the nodes have internal IPs, and you access the control plane using an external IP address, which is what I'm selecting now. You can also select "enable control plane global access", which means you will be able to access the control plane from everywhere. Otherwise you can just give an IP range — you can configure, say, a VPN address or a specific IP address from which you are able to connect to the control plane. That's the option here. I'm enabling global access because we are going to access it from everywhere. Okay.
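For those who prefer the CLI over the console, the settings discussed so far — a zonal cluster on the stable channel at 1.21, one e2-medium node per zone, a smaller boot disk, and private nodes with a globally accessible control plane — roughly translate into a single gcloud command. This is a sketch under assumptions: the names, zone and master CIDR are placeholders, and your region may offer a different 1.21.x patch version.

```shell
# Sketch of the console settings as one gcloud command; the cluster name,
# zone and master CIDR below are placeholders.
gcloud container clusters create bevel-cluster \
    --zone europe-west2-a \
    --release-channel stable \
    --cluster-version 1.21 \
    --num-nodes 1 \
    --machine-type e2-medium \
    --disk-size 50 \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.16.0.0/28 \
    --enable-master-global-access
```

Private nodes require a VPC-native (IP-alias) cluster and a dedicated /28 range for the control plane, which is why the last three flags go together.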
There was a question: any recommendation on minimum hardware to install Hyperledger Bevel? I'm not sure who is asking — it's coming up as "iPhone 2" — but Hyperledger Bevel is not an installation in itself; it's an operations tool. To run Hyperledger Bevel you will need a machine, and we mostly recommend Ubuntu, which is able to run Ansible, so it should have at least a GB of RAM. What Hyperledger Bevel then does is deploy a DLT network, and for the DLT network itself you will of course need hardware, be it cloud or bare metal. That hardware will depend on your DLT network configuration: if it has two nodes, it will need less hardware, and if it has a hundred nodes, it will need much more. And there is a question about Minikube or kind. We did try using Minikube earlier, but most of the time it didn't work, because we were spending a lot of time fixing issues related to Minikube itself and the networking aspects, as you may have seen in the Discord chats. We last used Minikube at least two or three years ago, and not since. I'm not sure what K3s is like, so I don't know if it will work on K3s. We have not tried Bevel on Minikube, kind or MicroK8s — though we do use kind for our tests: when we do testing using Molecule, we use kind there, but that's only for running the Ansible tests. Bevel is targeted at production-ready systems, and hence we don't spend a lot of time trying to make Bevel work on Minikube or MicroK8s, because no one is going to use Minikube or MicroK8s in a production system. Yeah.
So, if the cluster is running, then for this session we'll try, of course, to connect to it. To get connected you can, as I said, use kubectl, or I'll give you another resource: if you prefer a UI, then Kubernetes Lens is a very good tool, which we use as well. You'll see a UI, and you don't have to remember all the kubectl commands — basically it's an abstraction on top of kubectl. So you can use this. I think they have changed things a little bit now — you have to register with GitHub and so on — but it is still a good tool to use. Yes, there is a question about using WSL as a controller when setting up the prerequisites. Yes, you can use WSL as a controller for the prerequisites. For the final session, or whenever you're running kubectl and so on, you can use WSL, because most — not all — of our commands are Unix-based. We work mainly on Unix machines: all our Ansible controllers, the machines we work from, are Unix, Linux or Ubuntu. So it is good to have WSL. The other advantage, of course, is that you can run Docker: when we get to the third session, you'll see that we have a Docker image that you can use as the Ansible controller. And as we know, Docker doesn't run natively on Windows, so you will need WSL — some form of Ubuntu instance — to run Docker on Windows anyway, in which case you can run everything from WSL as well. So hopefully the cluster is starting up; it will take some time, I guess. In the meantime, is there any other question? Is anyone stuck somewhere? Or, if people are using AWS and want to see the same process on AWS, we can do that. We have not used, say, openSUSE Linux as the operating system.
So I don't think most of the commands will work there. For that — in that case, let me see. Igor, can you make Suvajit Sarkar a presenter? Sorry, can you repeat? Suvajit — that's S-U-V-A-J-I-T. Hi everyone, Suvajit here; I'll just put my name there, so Igor, you can add me. As a co-host? Yes, yes — because he will be doing it for AWS EKS, using eksctl. A question from Anand: any dependencies on storage classes, as those may be cloud-specific? Yes, there are definitely dependencies on storage classes, as they are cloud-specific. As a result, we have created storage class templates in Bevel for the cloud platforms that we have tested on, which includes AWS, Azure and Google — and I think there is one for DigitalOcean as well. Many people have contributed additional storage class templates for new or different cloud platforms too. Right. Okay, Suvajit, over to you. Right, let me see if I can share the screen now. As for determining usage, I guess that will come from your cloud provider, because we're using managed Kubernetes, Steven. Right — is my screen visible? Yes. Okay, so I followed the same steps as in the guide to create the Kubernetes cluster, but the cloud provider that I have chosen is AWS, and the best way I found to do that is to use the eksctl tool. Some of the prerequisites to get started are, of course, an AWS cloud subscription and account, and then a couple of binaries are required: for example the AWS CLI, with which you configure your AWS credentials; you would also need aws-iam-authenticator; and then eksctl is, of course, the client required to bootstrap the Kubernetes cluster. Now I'll put up some links quickly — later on you can have a look, or just search for eksctl. This one will give you all the steps for how to install it on various machines.
For now I have just used the installation step for Linux machines here. And — can you paste it into the chat? The eksctl link. And eksctl has a lot of configuration options for how exactly you want your cluster to be: for example the specific region, the name of the cluster, the number of nodes. You can have the auto-scaling group configured as well. It will also let you configure any SSH connection you want to your Kubernetes nodes — basically the worker nodes. So all those setups are there, but if you want to just run it in a basic mode, you can simply give the cluster name, the number of nodes and the kubeconfig file. Basically, that is the local path where you want your kubeconfig YAML to be auto-created, so that later you can use it to access your cluster with the kubectl client. Right — and just a couple of things on the user you have authenticated with on AWS. In terms of the policies required: if admin access is there, that's always useful, but for the EKS creation part you definitely need the EKS cluster policy, the EKS service policy, and the EKS CNI policy. The EKS worker node policy is also required if you want to access the worker node pool. Once you have these policies added in IAM, you can simply run the command here. I have already done it, and it looks like the cluster is ready, so I'll just confirm that — I'll check whether the file has been created properly or not. If I do a list here — one second. It should be in .kube. Oh, is it not there? For some reason it didn't create — did I not put the path in correctly? And you can still get it again using the eksctl command; you have to check that. Yeah — cluster credentials. This one writes it to that path — yeah, this one, the last one: eksctl utils write-kubeconfig. Yep, give the path as well now.
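As a reference, the minimal eksctl run described above might look like the following — a sketch, with the cluster name, region and kubeconfig path as placeholders:

```shell
# Bootstrap a small EKS cluster and write its kubeconfig to a local file
# (cluster name, region and path are placeholders).
eksctl create cluster \
    --name bevel-eks \
    --region us-east-1 \
    --nodes 1 \
    --kubeconfig ./bevel-eks-kubeconfig.yaml

# If the kubeconfig file was not written (or was lost), regenerate it:
eksctl utils write-kubeconfig \
    --cluster bevel-eks \
    --region us-east-1 \
    --kubeconfig ./bevel-eks-kubeconfig.yaml

# Then point kubectl at that file and verify the cluster responds:
export KUBECONFIG=./bevel-eks-kubeconfig.yaml
kubectl get nodes
```

Writing the kubeconfig to a dedicated file, rather than letting it merge into the default `~/.kube/config`, keeps one cluster per file — which matters later when Bevel needs a single-context kubeconfig.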
Okay. While you're at it — what was the question there? "What is the roadmap for Hyperledger? As I can see, many projects are in graduated state and some incubating as well. Are all these projects different from each other in terms of functionality? Which framework shall we learn — Indy, Bevel, et cetera?" To answer that — David, do you want to take that, or Igor? Otherwise I can answer it as well. So the question is about the roadmap for Hyperledger and how the graduated and incubating projects differ in functionality. That's a good question; I cannot answer it at this moment, so really the best thing is to join the Discord. Yeah. So I'll give you my point of view — of course, I'm not a member of the Hyperledger TSC, but Hyperledger is a foundation. As we know, it's called the Hyperledger Foundation; Hyperledger is not a blockchain platform. Under that foundation you have many projects, almost all of them blockchain-related. As I discussed previously, under the umbrella you have a few DLT platforms, some tool sets, and some integration and interoperability tools and platforms as well. All of these are separate projects under Hyperledger. If, in your business area, you have to learn all of the projects, then you will have to learn all of them, but generally people choose one or two related ones. For example, if you're using Fabric, you'll learn mainly Fabric. If you're using Indy, then you have a bit more to learn, because you have the Aries framework as well. If you're using Besu, then again it's simple enough — you just need to understand Besu.
Then the other question is: is there some kind of checklist or documentation regarding security considerations to take into account to deploy a production-ready Bevel network? The answer to that is no, because every client, every implementation, is different, and everyone will have their own security considerations. The general security considerations for any deployment will apply to that specific client; they should already have security practices for running any production network — not just a blockchain production network. The summary is that there is no different or additional security because you are using Bevel or because you are using a blockchain platform: the security profiles and requirements will be the same as for any production project. Yeah, quickly on that part — so the kubeconfig file got written into the default kubeconfig path that I had set. So that's there. I've just exported that as the KUBECONFIG variable, and then with kubectl I can see the pods, and with get nodes it shows me the one node that I had created, and the version as well. By default it's 1.21, I believe — yeah, 1.21. Okay. So, has everyone who is working along with the session got their cluster up? Because mine hasn't — it is still going through health checks. Yeah, I think Suvajit said it takes about 15 minutes on AWS; it takes a bit longer on GCP. Suvajit, if you want, you can have this loaded in Lens as well — maybe that will be useful. Yeah, yeah, you can do that. So I'll just stop sharing — I need to copy the kubeconfig file and then share back. Next question: what are some best practices to follow while creating a Hyperledger Fabric production network via Bevel? Yeah, I think we should publish a white paper or something, because it would be quite a big set of best practices. I've already said something about the security.
The other thing that we have seen — and we are of course learning, and whoever is using Bevel is also coming up with questions — so, for example, the first best practice is not to use the Fabric Certificate Authority; you should use your own separate certificate authority. And even if you do use the Fabric Certificate Authority, you should have the certificate renewal and revocation mechanisms — or processes, I should say — in place before using the Fabric CA servers. I think that would be the major thing I can point out. The rest is all similar: you should have a secure development environment and a secure operations environment, and ensure everything is TLS-encrypted. The Fabric network itself will of course be secure, because it is communicating via secure channels, but then — as you saw when I showed you the Kubernetes cluster — you should have tighter restrictions around access to the Kubernetes cluster, because if someone has access to the Kubernetes cluster they can install something else or delete something. Those are generic best practices for any cloud deployment, I would say. Yeah, I think Suvajit can show the final output. So, I have Lens installed on my local system, so I need to copy the kubeconfig file. I've copied it here, and from the Lens tool I will just add a new cluster, provide the path of the kubeconfig file, select the correct context, and then, I believe, add cluster. Okay — so I think I need to apply the authenticator as well; one second. Okay, any other questions? I mean, Jor-El had a question: what is the final look of the config file?
It will be just like any other kubeconfig file, I would say, with a single entry. As Suvajit was mentioning, if you don't update your KUBECONFIG environment variable, it will keep adding new configurations to the existing file, in which case your kubeconfig file will become very large and you'll have to keep changing contexts. Ideally your kubeconfig file should contain only one context and the details of one cluster, and that's what you will be using in the third session. So every time you run an update-kubeconfig command or the gcloud command, you should use a new file — set KUBECONFIG equal to it, as Suvajit just showed — and it will be written to that new file. The last question was: does Bevel support the certificate renewal feature in Fabric? Right now it doesn't support certificate renewal in Fabric. It is in progress, as you may have seen on our issues board, but it is not being developed by Accenture at the moment, because it is not our priority — for the reason that we do not suggest clients use Fabric CA as a production CA, right? I think that's all, unless there are any questions. Are there any questions on YouTube? Let me see — no questions on YouTube. Okay. Thank you very much. Thank you. Please join us for the next two sessions next week, on 14 June and 16 June — I put a link in the chat. Thank you so much, this was great. And just as a show of hands: how many of you have your Google or AWS cluster up and running? Because we'll be needing that for the next session. For the next session, in which we are deploying Vault, we should already have a running Kubernetes cluster. Okay, I also put a link to the YouTube recording of the session, so you can go through it once again. Okay, thank you everyone, and have a good day ahead. Thank you.