Welcome back, everybody, to the second session of the Blockchain Tech Fest. The festival continues. This is run by the Indian community, and we have the African chapter here today as well; it was put together by Arun, Kamlesh and others, and it was a great event last week. This week we've got even more contributors and maintainers. We've got something from Hyperledger Labs which I've heard a lot of interest in at a few other events: BAF. We also have Caliper, which is a benchmarking tool, Explorer, and of course we have Fabric. So yet again, the message I'm going to give to everybody is: enjoy this. A great thank you to the community for putting this together. Please do get involved; all these projects are looking for people to help and contribute. This is all about developing this ecosystem together for the benefit of everybody here. Once again, welcome back, and now I'm just going to hand over to Arun to take this on. Thank you.

Thank you, Julian. And we welcome the community across Asia Pacific and Europe joining us today; good morning, good afternoon and good evening to you. Up first in today's session we have Sownak and Priyanka joining us from Accenture, and they're going to speak about the Blockchain Automation Framework. This is one of the projects that has been gaining traction within Hyperledger Labs, and it was recently called out; many people describe the Blockchain Automation Framework as defining the way we deploy production-grade blockchain networks for our use cases. So I'll hand it over to Sownak and Priyanka. Thank you.

Thanks, Arun. Thanks, Julian. Welcome, everyone, again. I know it's mid-afternoon, just post lunch, in India.
And I hope we don't bore you with too many details; we'll keep the talk relevant. Please enjoy the session. So I'll share my screen; please let me know once you can see it and we will get going.

Okay, so this is the agenda, and we will give you a Labs intro, because the last time we spoke we understood that a lot of people who are active members of the Hyperledger community don't really understand the concept of Labs. So we'll cover a little of what it means that BAF, the Blockchain Automation Framework, is part of Hyperledger Labs. Then: what is the problem statement, why did we create such a product, why did we want to work on something like this, its architecture, and how we are using it to consistently deploy production-grade networks, agnostic of the distributed ledger technology you're using. If I just take the Hyperledger umbrella, Hyperledger Fabric, Indy or Besu: how does the same framework deploy these different kinds of DLT networks? The next part is how far we have progressed, what more there is to do, how you can engage with us, Q&A, and how to get involved.

With that, you can scan the QR code with your phone if you're interested; it will take you to the detailed Hyperledger Labs link. But just to give you an introduction: the qualified projects you see require a lot of paperwork, technical steering committee review and legal-framework approval to become qualified projects. So Hyperledger came up with the idea of introducing Labs, where projects that are still a little early to become a qualified project can start working their way towards it. With that idea, Hyperledger Labs has a lot of incubatory projects, if I can call them that, BAF being one of them.
So I think we open sourced last year, in 2019, around the September–October timeframe, and it has been one year since we have been in Labs. We have had a lot of guidance, and hence we now strive to become a qualified project.

Okay, so just to show exactly where we stand: this is a view of the Hyperledger greenhouse, the DLTs, the libraries, the tools, and on the bottom right you see Hyperledger Labs, and that's where the Blockchain Automation Framework lies. So again, a QR code that will take you to our GitHub link. I'll just pause here for 10 seconds if anybody wants to scan it while I talk about what exactly it is, in case you have not heard about it before.

Okay, so now I'll tell you a little bit of the story. Accenture really started working on blockchain quite early, I would say way back in 2015–16, even before that in our labs. Then we slowly had a lot of client conversations, and we progressed to a stage in 2017–18 where we started doing POCs or implementations with clients. One of the common challenges we saw was: we have done a POC, the client is really happy, but when we need to scale we have to redo the entire thing, because there are a lot of architectural components that we don't give much attention to when we want to do a quick POC, but which, when we have to scale to production, make the solution literally of no use if missing. For example security: how certificates are stored and how credentials are exchanged are things you might not be thinking about in a POC, but in production they become very important. The second thing was that the network itself is the crux of blockchain or DLT technology, and, let's be frank, it's not easy, right?
The first trouble we have seen developers talk about is that it's really complex to deploy a network. There are a lot of dependencies: this service has to be up, or that service has to be up, and the configuration has to be done properly. These were common murmurs we used to hear from all our developers: that they were spending more time on network deployment than on their own chaincode or application development.

So with that in mind, we thought: why don't we create a bridge that implements what we call a distributed ledger technology reference architecture? It builds upon some principles and patterns. It should be designed for being production-ready, so it's not something you would use just to showcase the power of blockchain; it's something you need when you really want to build something production-worthy. It should support the multi-company, multi-organization concept, because by now we all know that blockchain or DLT is not a single-player technology; it has to solve an ecosystem problem, a multiple-partners problem. So how do we create a technology that supports that kind of business network by default? The next thing was that we did not want it locked to a single kind of cloud provider or infrastructure. We wanted to keep it open: if a particular client is already an AWS shop or a Microsoft shop and they really do not want to change their cloud infrastructure, we didn't want to force a single one on them. Hence we have kept it infrastructure independent. And obviously it's open source, which you all know by now.

So that's the concept we thought of, and what we have built is very simple. There's a single file, which we call the network.yaml file, and all your network details go into it: what's the platform of your choice...
When I say platform, it's the DLT platform, whether it's Fabric, Indy, Besu or the others, because we have also implemented it for Corda, Corda Enterprise and Quorum. So you specify the platform, and your application now becomes independent, right? You can do a little bit of plug and play. You choose your platform, you choose your consensus, you choose the kind of cloud infra, environment details, your ports, external DNS, et cetera; they all go in this single file. The framework then uses some key components. For example, Kubernetes is the crux. Sownak will cover these in detail, but we used Kubernetes because managed Kubernetes on a cloud solves a lot of questions about high availability and disaster recovery, which we then do not have to design in a custom way; they get very deeply embedded in the architecture. Hence Kubernetes is the crux. So it just takes that single file and brings up a network for you. That's it. We don't call it one-touch deployment, we call it one-step deployment. We don't have a fancy UI; it's all through command lines, because we wanted to build it for technical people, for developers and architects. So there's no user interface; you will have to work on this configuration file as it stands now, but just using that one single file we can bring up a network.

A little bit on what is supported right now. We are on the 0.6 release. We have Corda and Corda Enterprise, Besu 1.4.4, and Fabric 2.0; I think we recently did a release to have 2.2 as well. Right, Sownak? Yep. So, 2.2 is still in a feature branch; if someone wants to explore it, they can, but it's not merged to the develop branch yet. But yes, you can see the code if you want 2.2 as well. And Corda 4.4, Indy and Quorum.
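To make the "single file" idea concrete, the top of a network.yaml looks roughly like this. This is an illustrative sketch based on the public sample files in the repository; exact keys may differ between releases, so treat the field names as approximate:

```yaml
# Illustrative network.yaml skeleton; the authoritative samples live under
# platforms/<platform>/configuration/samples in the BAF repository.
network:
  type: fabric          # or indy, besu, quorum, corda
  version: 1.4.4        # platform version to deploy
  env:
    type: dev           # environment name
    proxy: haproxy      # ingress proxy choice
  orderers: []          # orderer endpoints shared with all orgs
  channels: []          # channel definitions
  organizations: []     # one entry per participating organization
```

The point of the single file is that the same top-level shape is reused across every supported DLT; only the platform-specific values change.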
So that's the BAF support matrix for each of these platforms. Now, when I say it just takes a single file and brings up a network, what are the components we have used to bring up the network? It uses Ansible, mainly the Ansible playbooks. We use it as an automation tool, not really as configuration management software; when you Google Ansible you'll see that it is mostly used as a configuration management system, but we are using it primarily as an automation tool. It uses Helm charts; Helm, again, is a package manager. We are using Kubernetes for orchestrating all the workloads and services. We are using GitOps with Flux. One interesting thing here is that once you have deployed the network, your network.yaml is in the repository and Flux is bound to it, listening to it. If any change happens in the network.yaml, it senses it and redeploys the network. Hence the continuous-deployment theme is very easily achieved using GitOps and Flux. We are using HashiCorp Vault, but we have also implemented it for some clients who did not want HashiCorp Vault and wanted, for example, Azure Key Vault. With a little change you can do that; I wouldn't say it's completely pluggable, but it's doable, it's manageable to change the vault. And obviously, as I've mentioned, it can be used on any cloud service.

So with that, before I hand it over to Sownak for a bit more detail... Thanks, Priyanka. I think there is a question, which I'm answering live: if I need my Fabric peers to have users in Indy, does it support this kind of multi-chain architecture? The answer is: it is up to you to deploy Fabric, deploy Indy, and create that multi-chain integration between them.
What BAF will do is give you the option to use the same tool to deploy your Fabric and your Indy network. Because again, BAF is not a solution; it is an automation tool which will make your solutioning, the deployment of any of the networks we support, easier.

Okay. So as we go back, I mean, Priyanka, if you go back, I just want to dwell on that slide, on the number of tools we're using. Just to clarify, we are also using HashiCorp Vault, which is our secret storage and certificate storage solution. We have chosen HashiCorp Vault because, again, we want to be cloud agnostic, so we are not using AWS KMS or Azure Key Vault, which tie you to a cloud. HashiCorp Vault has its own integrations; it's similar to how we are using Kubernetes to achieve cloud agnosticity. HashiCorp Vault can be deployed on any cloud provider, and you can integrate the cloud's KMS or Key Vault with HashiCorp Vault, on Google Cloud as well. That's how we have achieved it, and that's how we claim to support any cloud, including on-prem. Of course, in our day-to-day development we do not deploy on all the cloud platforms available, because it is cost-prohibitive. We generally test on AWS, but we have tested, and have done projects, on Azure and Google Cloud as well.

Right, okay. So now going on to, not really the architecture, but how BAF internally works. Everything in BAF is to be on Kubernetes; hence all the applications, all the DLT platforms, must have container support, must have proper Docker images. I mean, Fabric, Besu and Indy are all container native anyway; we always run them on containers. That's one very good thing about Hyperledger Fabric and all these tools.
So we have the images; the Docker containers for the Fabric peer, orderer and so on are all taken from official Fabric releases, we don't rebuild them in any way. And then our main code is the Ansible. As Priyanka said, it's automation, not really configuration management, because Ansible is not deploying onto any machine directly: everything is always deployed on a Kubernetes cluster, so Ansible doesn't deploy to hosts. Of course, your cluster is running on some virtual machines, but Ansible doesn't control them. Ansible here is more of an automation tool, more of a templating tool; that's how we have used it. Within Ansible we have different playbooks and roles. People who are familiar with Ansible will understand: playbooks are basically a sequence of steps that you perform, and roles are basically your functions, which you call from playbooks. So we have playbooks and roles which can install a chaincode, create certificates, create channels, join a channel, set up orderers, and create a single-cluster network or a multiple-cluster network. And then the Helm charts: from a BAF point of view, the Helm charts are also available open source. Now we are trying to set up a Helm repository so that you can use just the Helm charts, but right now you can already take the Helm charts and use them in your own way; you don't have to use Ansible to deploy the network. Again, among our Helm charts we have join-channel charts, update-chaincode charts, creation of Raft or Kafka orderer charts, member MSP creation, peer nodes and CLIs; all those charts are available.
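The playbook-calls-roles split described above can be sketched like this. The role names below are invented for illustration (the real ones live under `platforms/*/configuration/roles` in the repository), so treat this as a shape, not the actual BAF code:

```yaml
# Illustrative Ansible playbook in the BAF style: the playbook sequences
# the steps, and each step delegates the actual work to a role.
- hosts: ansible_provisioners
  gather_facts: no
  tasks:
  - name: Create crypto material for the organisation
    include_role:
      name: create/crypto/peer        # hypothetical role name
  - name: Create the channel artefacts
    include_role:
      name: create/channel_artifacts  # hypothetical role name
  - name: Have the peer join the channel
    include_role:
      name: create/channels_join      # hypothetical role name
```

Because the roles only template Kubernetes/Helm resources rather than SSH into hosts, the playbook runs against the local provisioner, which is what makes this "automation and templating" rather than classic configuration management.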
So what Ansible does is: the developer gives that one single configuration file, Ansible reads the configuration file and creates the Helm value files, which are then deployed onto Kubernetes using Flux. Flux gives us the continuous deployment, as Priyanka said. It will create all the deployments in sequence as we run them. The first time you execute the playbook, of course, it will wait for things to happen: it will wait for the orderer to come up, then the peers come up, then the installation happens, and all that. But once your network is set up, then if you want to change a little thing, say you just want to update the Fabric peer image, all you need to do is check in an updated Helm value file to the Git repo, and that will be deployed automatically.

Okay, next one. Or do we want to take some questions on Fabric specifically? So, Sownak, one or two questions from Kartikey; I've answered, but yeah, please. One is whether two peers can be on two different cloud providers. Yeah, definitely it can be, as long as you have access; see, I'll show you the network.yaml as well. All organizations can be deployed separately, and we actually encourage people to deploy different organizations in different Kubernetes clusters and different cloud providers. Yes. And the other one I'll answer later, maybe. So yeah, I think we can go ahead and I'll answer these questions.

Okay, so next slide, please. Yeah, so that's the Indy side. It is more or less the same, and I'll actually share my screen later and show the network.yaml so it's clearer. So yeah, Indy again: similar concept.
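As a sketch of that update flow: changing the peer image is a one-line edit to the generated Helm value file in the GitOps repository, which Flux picks up on the next sync. The file layout and value keys here are illustrative, not the exact BAF schema (the real value files are generated by the Ansible roles):

```yaml
# Illustrative fragment of a Flux-watched Helm value file for a Fabric peer.
# Committing a change to only the image tag is enough to trigger a rollout.
metadata:
  namespace: carrier-net        # hypothetical org namespace
peer:
  name: peer0
image:
  peer: hyperledger/fabric-peer:1.4.4   # bump this tag and push to redeploy
```

This is the practical payoff of the GitOps design: day-2 operations become Git commits rather than re-runs of the full playbook.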
The only thing is that the Indy node, CLI and key-management containers are built by us, following the instructions in the Hyperledger Indy documentation on how to create those containers, because, as I said, BAF deploys everything on Kubernetes. And again, we have similar Ansible roles and playbooks which do the DID management, credential definition and schema definition creation, crypto generation and the actual Indy node setup, and then single-cluster or multiple-cluster. The only complexity with Indy is that it needs static IPs, because Indy by design doesn't work on DNS names; DNS names are going to change, and that's how Indy is designed, right? So when you deploy Indy, you must have a few static IPs in your cloud account, whatever you need. And from the Helm charts side, we have the key-management group of charts, the cluster-management group of charts, and also the organization charts, where you can update the credentials or add a DID document. And the genesis file and all that: it's very simple, because the genesis gets stored into the Vault as well. So if you want to share it, someone who has access to the HashiCorp Vault can share it, because for Indy nodes to connect you need to share the genesis and the schemas, and all that information is easily available when we deploy Indy.

So, Sownak, one question, if you can answer live: how can we enable TLS on a peer node deployed on Kubernetes? Yeah, all our peer nodes are TLS enabled. Everything in BAF, be it Corda or Quorum or Besu or Fabric, wherever the TLS options exist, is already TLS enabled. We strictly enforce TLS 1.2 as well; that's one of the Accenture policies, we cannot use 1.1, anything below 1.2 is forbidden. So all communication is 1.2, and the peer nodes in a network you create using BAF are TLS enabled by default.
So I don't know if that answers your question, Gaurang, because his question was how can we enable TLS. It is already enabled; if you check the code as well, it is always TLS enabled. And our chaincodes as well: when you install or deploy or create a channel, it always uses the certificates.

Okay, so Sownak, at this point do you want to go on and cover this for Besu as well? Yeah, let's cover this for Besu, then I'll go back, depending on the questions, maybe, or the interest. So Besu is much simpler. As I guess we'll share later, we need more contributors, especially for Besu. For Besu we have just used the IBFT consensus and Orion as the private transaction manager; that's all we have. And it's much simpler because Besu, or Quorum, by default is not as complicated as Fabric or Corda. So we have the crypto generation and the transaction managers, and again, just to reiterate, all transaction managers and everything use TLS, even though they are deployed within the same cluster. And we have not integrated, say, a PostgreSQL DB or any other DB; we are still using the file-system DB for Besu. But the concept is similar; that's what we have tried for. Basically, if you are using BAF for Fabric and tomorrow you want to also try out Besu, it is very easy, because the concepts are exactly the same. You don't have to learn a new technology; you already know how the Ansible and the Helm charts work, right? It's almost like how Terraform operates: Terraform is the same tool you use for all the supported cloud providers, and of course you have to rewrite some code for different cloud providers, but you don't have to re-learn any new technology.
So that's the same concept. If you're going to use Besu later, it's all exactly the same configuration file. Of course the value contents of the configuration file will be different, as we'll see now, but the concept, and the configuration you put in it, is quite similar.

Thank you, Sownak. We do not have any open Q&A, but I think it will be good to take folks through the network.yaml. Yeah. Okay, so I'll stop sharing then, Sownak. Yeah, let me share. So yeah, please feel free to ask any questions in the Q&A section. Right. Okay. Hope you can see it, or is it too dark? Yeah, it's fine. Okay.

So we have many network.yaml samples, as you can see. Some folks have already gone to our code repository, so I'll show you how the repository is generally organized. In the main BAF repo we have a platforms folder, and inside it we have all the supported platforms. Under each of them there is a folder called configuration, which contains... Hey, Sownak, sorry to stop you, but there's a request to increase the font a bit. Okay, let me just see if Ctrl-plus works. Yeah, that works. Okay, I think this is good. Let me close this.

So under configuration you have samples, and in samples you have different network.yaml samples (not playbook samples). I'll just go through this Fabric Raft sample; again, they're almost all similar. You have the network, and that's where you put the type, fabric, and then the version. We also support 2.0.0, but for 2.0.0 we did not implement the new channel and chaincode lifecycle, and that creates a few problems in our application, which is a supply-chain application that comes free with BAF. That's why we are migrating directly to 2.2, so you'll see 2.2 here soon. And then we have an environment section.
So this is generally for developer environments, when you are running multiple tests in multiple environments in the same cluster; you will not always use this. If you are deploying for production, the type will just say production. Then we have the proxy. Right now we only use HAProxy for Fabric; for everything else we use Ambassador, like here: this is Indy and this is Besu. Why we used HAProxy was for the TLS (SSL) passthrough, because, as I said, our Fabric network itself is TLS enabled. If you're using Ambassador, the TLS packets were opened at Ambassador, and it was quite complicated to configure passthrough. I think they have done it now, but we didn't have the time to upgrade Ambassador, so we continue with HAProxy, where SSL passthrough is easy to do. With HAProxy as the proxy, we don't open the packets; they just pass through to the peers or the orderer, and the orderer opens the packets. And hence this one here is not used; it's just there for completion. Then we have the retry count and external DNS. Again, these are for your cluster: if your cluster is a bit slow, you can adjust this, because when the playbook runs, it waits for events to happen; basically it will retry 20 times waiting for an event. If the event does not happen within 20 retries, then of course something is wrong, and the playbook will fail. Then you have the Docker section. You don't generally need it if you're using public Docker Hub images, but if you have a private Docker repo, this is beneficial, because this is how we create the Kubernetes secret to download images. Then you have the orderer section, where you define the orderers.
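The environment and Docker sections being walked through map to a fragment like the following. Values are placeholders and key names follow the public samples, so check the repository's sample files for the authoritative schema:

```yaml
# Illustrative env/docker fragment of network.yaml.
network:
  # ...
  env:
    type: dev                 # "production" for production deployments
    proxy: haproxy            # HAProxy for Fabric (SSL passthrough);
                              # ambassador is used for Indy/Besu
    retry_count: 20           # playbook retries while waiting for an event
    external_dns: enabled
  docker:
    url: "index.docker.io/hyperledgerlabs"
    username: "docker_username"   # only needed for a private registry;
    password: "docker_password"   # used to create the image-pull secret
```

The retry_count is the knob mentioned for slow clusters: each wait loop in the playbook polls up to that many times before failing.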
The orderer section is common because all the non-orderer peer organizations will need the orderer certificates; they all use TLS to connect to the orderer, and hence the orderer section is common. Then you can define channels here, and I saw there was a question about multiple channels: yes, you can define multiple channels in this section. In this example we have one channel, with all the participating organizations of that channel; out of those organizations, only one will be the creator and the others will be joiners. The creator organization actually creates the channel, because, as we know, channel creation only needs to happen from one organization. And the genesis name as well, we can provide it here.

Okay. Then comes the organizations part. As we said, BAF is designed for multi-party systems, so it is assumed that there will be different people, different organizations, joining the consortium and joining the blockchain network, and hence we have all this separation. Except for the orderer and channel sections, where you have the common things (you have to know the channel name you are going to join, and you have to get the public certificates for the orderers), all the other organizations are independent. Which means, in the example I'm giving, that if you are running one organization in Azure and another organization in AWS, you will only have your own organization in your network.yaml. This first one is the orderer type of organization. These are common fields; you can go and read them, I won't dwell on each one, but the important parts are the Kubernetes and Vault sections. That's where you specify a different Kubernetes cluster, because each organization has its own k8s section.
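A hedged sketch of the channel section just described, with invented organization names, showing the creator-versus-joiner split (the real keys are in the Fabric samples under the repository's configuration/samples folder):

```yaml
# Illustrative channels fragment of network.yaml.
network:
  # ...
  channels:
  - channel:
      consortium: SupplyChainConsortium
      channel_name: AllChannel
      orderer:
        name: supplychain       # orderer org from the common orderers section
      participants:
      - organization:
          name: carrier
          type: creator         # exactly one org creates the channel
      - organization:
          name: store
          type: joiner          # every other participant joins it
```

Defining a second entry under `channels:` is how the multiple-channels question from the audience is handled, with each channel listing its own participants.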
So that means you can deploy on multiple Kubernetes clusters from the same network.yaml, given access to both clusters. Same with Vault: each organization should have their own Vault, so they would have a different Vault address and a Vault root token. About root tokens: this is very much an operator framework. It's not for general developers to run, because you will generally run it only once, when you are doing the production or test environment deployment, right? But even then, the Vault root token, as well as the AWS access keys, may be visible in the logs, so we always suggest that once you have deployed the components, you change the access keys as well as the root token. Then we have the GitOps section. This is where the GitOps part comes in, basically our operations via Git: all the new files created during the deployment process will be checked in to this folder, whatever path you have given, in this Git repository. And then the services part. For orderers you will have the CA service, then a consensus, and then the actual orderer definitions; that's all the orderer organization has. And then this one is a peer organization, in which case everything else, the common parts, is the same, but from the services point of view the peers host a CA as well as the peers themselves, and you can have multiple peers. This example is a Raft one, so you can see we have three Raft orderers. And for the peers, we can do an anchor-peer definition, because ideally at least one anchor peer should be there, right? But it can be more; you can have multiple anchor peers in your organization as well. You also have the CLI option now. Sownak, sorry, in the interest of time, do you want to just cover the rest quickly? Yeah, I think I'll just cover that.
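The per-organization k8s, vault and gitops sections look roughly like this. All values are placeholders, and the field names are taken from the public samples, so treat them as approximate rather than the exact schema:

```yaml
# Illustrative organizations fragment of network.yaml.
network:
  # ...
  organizations:
  - organization:
      name: carrier
      type: peer
      external_url_suffix: org3ambassador.blockchaincloudpoc.com  # sample domain
      cloud_provider: aws
      k8s:                       # each org points at its own cluster...
        region: "eu-west-1"
        context: "cluster_context"
        config_file: "path/to/kubeconfig"
      vault:                     # ...and runs its own Vault
        url: "http://vault.carrier.example.com:8200"
        root_token: "vault_root_token"   # rotate this after deployment
      gitops:
        git_url: "https://github.com/<username>/blockchain-automation-framework.git"
        branch: develop
        release_dir: "platforms/hyperledger-fabric/releases/dev"  # generated files land here
```

Because each organization block carries its own cluster, Vault and Git settings, two orgs in the same file can genuinely live on different clouds, which is the multi-cloud answer given earlier.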
So, I mean, again, these are all documented; I don't have to go through each of them. Just showing the similarities: as I was saying, for Indy and Besu it's exactly the same format. You have an environment, and you can define extra ports that you want to use, in the case of Ambassador. This one is valid because Ambassador is acting as your general API gateway for the Kubernetes cluster, so you may want to configure Ambassador with multiple other ports as well, which is possible.

Somebody asks: please show us the terminal commands that consume this YAML. The site.yaml playbook consumes this. Right. Okay. It's all documented; I don't know if that was meant to be part of this demo, because we haven't planned an actual run of a network today. We just wanted to take you through the components and how this has been architected. There's another session coming up, I believe on 21st November, where we plan to show you a 15-minute demo for each of these platforms. So please look out for the announcement, and if you really want to see how a network gets deployed, please log in on that particular day as well.

Sownak, one more question: for a multi-organization setup, how does each org get connected to the others if they use their own separate cloud infra? Is there any invitation process involved for bringing everyone onto the same network? No. Basically, when multiple organizations are deployed, there's no invitation process. When you deploy, you will just be sharing a public endpoint to reach, which is defined by this external_url_suffix.
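(On the terminal-command question raised above: per the BAF documentation, the deployment is kicked off by running the shared site.yaml playbook against the network.yaml. The paths assume a checkout of the repository, so treat this as a sketch rather than a guaranteed CLI:)

```shell
# Run from the root of the blockchain-automation-framework checkout
ansible-playbook platforms/shared/configuration/site.yaml \
    --extra-vars "@path/to/network.yaml"
```

This is the "one step" referred to earlier: one playbook invocation consuming the one configuration file.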
So if you're going to reach a peer, you will have to have the address of the peer, which will be something like <name>.org2ambassador.blockchaincloudpoc.com; this is just an example. And then you will have to use the actual node name, which is defined by the subject, to reach that particular organization. So it's all via the public internet: as long as you know the address and the name in the subject, you can reach each other.

Okay. So yeah, Sownak, do you want me to share the screen now and go through the rest of the material? Because there are already a lot of questions coming in on the same things. Yeah, sure. Okay, I'll share my screen again. What we are presenting to you now covers what we are supporting, what we have already built or implemented, and where we really need your help, a lot of community help, to keep doing this with the same vigour. So let me go to the next one. Yes, Sownak, after you.

Yeah, sure. So on the implementations we have already done, which are on the left-hand side of your screen: we have multi-cluster node deployment for Fabric. We support 1.4.4, and 2.2.0, which is in the feature branch; just to remind you, the feature branch is also public, so you can pull from it if you really want to use it. Then we have the multi-cluster setup supporting Kafka for 1.4.4, and Raft. We did test the Kafka one for 2.0.0, but 2.2 is still, as I said, in progress. Credentials and certificate management are included. And you can optionally create peer CLIs, as I showed: you enable the CLI when you create the peer, and you can choose that not all your peers have a CLI; you can have only one CLI.
Then we can do addition of peers to an existing organization, as well as addition of a whole new organization to an existing network. Of course, adding a whole new organization to an existing network is more process-intensive — the playbook is quite big — but all of those playbooks are there, and the documentation is also available. You can add an orderer (for Raft, of course), you can add channels, and you can remove an organization — removal of an organization is not live yet, I think, because it's in this sprint. We run our backlog as two-week sprints — we do quite strict Scrum — but all the planning is open; we have planning on Mondays, if you are interested in joining to see what is happening right now. And we have the integration with the supply chain reference application; that's already there. On the roadmap we have the multi-organization orderer, which is not yet implemented. These roadmap items are where we need community help, and that's why we want your contributions — we have seen quite some contributions already, and it's very encouraging. These are open stories, or open issues, which anyone can take up. For example: supporting standalone CouchDB images; upgrading to, say, Fabric 1.4.8 — because we would like to maintain 1.4.x and 2.2.x, and it should be quite easy, actually, just to test that 1.4.8 works; I don't think any code change is needed. Then testing and documenting multiple-chaincode deployment — we have not tested that yet, because our reference application is very specific and only has one chaincode. And of course removal of channels, or deactivation of channels.
Those are the things that are not supported right now. And if I can add something here: we also want to know, if you are working with clients or customers, what they are asking for, because we want to develop what is being demanded. We want to hear from you, so please connect with us if you have something in mind — if you feel we haven't built something that is required. There is a question: in the case of adding a new org, do we need access to their Vault and control plane? The answer is yes, because otherwise anyone could directly add an organization to the network. Basically, when you add an organization — take any supply chain example — say I am a supplier and I want to join an existing network. I'll have to host my own Vault and my own Kubernetes cluster, and then I will request the consortium: please provide me your certificates so that I can join the orderers. They will provide them, and then you will be able to join that channel, organization, or consortium. "Okay. I think a few more people are asking about failover mechanisms — for example, what if the network or a node goes down? Maybe you want to address that live." Yeah, sure. If a node goes down, all the failover is managed by Kubernetes — that is why we are using Kubernetes, and we are not taking on that maintenance ourselves; it should all be done by Kubernetes, because the orderers and the peers (sorry, the pods) are all Kubernetes deployments. So if a node fails on your Amazon EKS, that is handled there; and one level up, we are using managed Kubernetes.
We always encourage people to use managed Kubernetes, so that if the whole Kubernetes cluster fails, bringing it back up is the cloud provider's headache. But the advantage we provide, of course, is that all our configuration — the Helm value files as well as the charts — is stored in your Git repository. If both the Amazon region and the whole of GitHub are down, that is a super rare scenario — it would mean all our lives are in danger — so that will happen very rarely. You can then actually spin the whole thing up again, though of course data may be missing if your whole network went down; that's where any user should have their own backup and restore strategy, because that will again be client-dependent. "Okay, a question here: is there any track of how many production-grade networks have been deployed using BAF? And another thing: how do you see the difference between BAF and similar tools?" On the first question, on production deployments: there have already been production deployments — in the range of around four for BAF — and more are in progress. "Are those from Accenture only, or from the open source community?" We only know about the ones from Accenture; if things are happening in open source, we don't know. I would request anyone who is listening: if you are using it, please say so — it's very important for the community to know. "For any open source project, how do you know how many are in production and how people are using it?" So, I would like to add something here. Open source is sometimes a black box. The reason being:
Our documentation may be pretty clear, and somebody may be using it, but if they haven't contacted us on Rocket.Chat, we would simply not know. This has actually happened to us: one of our clients — without naming them — who were involved with us not for blockchain but for some other implementation, contacted the account team saying, "Hey, we are using BAF, which is built by you." They inquired and reached back to us; they had just gone ahead, and we got the information quite late. Hence the request to the community: please go ahead, try it, use it, take the value out of it — and let us know, because we only have information on what we have done within Accenture. That is also why we encourage people to join our open planning and open PI demos, where you can say, "I'm using it at XYZ client," or, even without naming the client, "I'm using it for our project; we are trying to do this, and it doesn't do that," or, "I have a problem." Those kinds of inputs are why we encourage people to join our community calls as well. One question: can we handle multiple applications using BAF? I think you mean front-end applications. Yeah, exactly — that is how you design your application. When you are using BAF, as I said, we have already open-sourced the supply chain reference application, and for Indy we have the implementation of the Alice and Faber university example, which uses the Aries components. So you have a reference application showing how we would suggest you design your applications, but you can always go ahead and do it in some other format. The sample that BAF provides is the supply chain application, which is a microservice design: it has separate API components and a separate REST server component, all designed on microservices principles.
So, about designing an application that uses the network: the application is not going to use BAF as such; it is going to use the Fabric or Besu network that you have created. How you design the application on top of that is up to you. "Shaanak, we are almost out of time. One last question from Kartike, which you can see in the Q&A: does a Kubernetes cluster consist of peer, orderer, and CouchDB together, or are peers separate?" Right, this again is how you design it. I think there was a question on Rocket.Chat as well about whether you can have an orderer and a peer in the same organization. Right now, out of the box, BAF doesn't support an orderer and a peer in the same organization: an organization is either an orderer organization or a peer organization. If it's an orderer organization, all the orderer components will be in the same cluster; if it's a peer organization, all the peer components will be in the same cluster — and the clusters can be different from each other. And as many of you know from Kubernetes concepts, we create a namespace for each organization anyway, so even if you deploy all of them in the same cluster, they are logically in different namespaces. Is that what you wanted to ask? If not, please ask again. "So, Shaanak, I think we'll have to hand it back now." Yeah — so once again, this is the screen: please come and collaborate. We have a wiki page where you will find all the details, including the good first issues that you can pick up to contribute. You can view our roadmap, and please contact us on our Rocket.Chat. "This will be shared with all the participants, right, Arun?" Yes, we'll share the slides and any other material on our wiki page.
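For reference, the orderer/peer organization split described above shows up in the BAF network.yaml as an organization type. The fragment below is a hedged sketch — field names are recalled from BAF's sample configuration and may not match your release exactly:

```yaml
# network.yaml (fragment) -- illustrative sketch of the organization split
organizations:
  - organization:
      name: ordererorg
      type: orderer                 # all orderer components land in this org's cluster
      external_url_suffix: org1ambassador.blockchaincloudpoc.com
  - organization:
      name: supplychain
      type: peer                    # all peer components land in this org's cluster
      external_url_suffix: org2ambassador.blockchaincloudpoc.com
# Each organization gets its own Kubernetes namespace, so even when deployed
# on a shared cluster the organizations stay logically separated.
```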
And we'll also send out a communication with a link to the recorded version of this session. Thank you, Priyanka and Shaanak — this was a great session today. We will have a continued version of the Blockchain Automation Framework session on 21st November; Priyanka has accepted our invite to join again, and we'll have a detailed demo of Fabric shown on 21st November. Once again, thank you, Shaanak and Priyanka. So, continuing our sessions for today: up next we have a great session from Attila, who is a PhD student at Budapest University, and he is going to present Hyperledger Caliper — a tool which we can use to benchmark our blockchain deployments. Over to you, Attila. Thank you, Arun. I will share my screen now; hopefully you can see the full-screen slides. Yes. Okay, thank you. This presentation will be a high-level introduction to the current version of Hyperledger Caliper, since we have seen in the forums and channels that there are still some general misunderstandings about Caliper. My name is Attila Klenik, and I will give this presentation. I am a research assistant at the Budapest University of Technology, and a maintainer of Caliper along with the other maintainers, Nick Lincoln and David Kelsey from IBM UK. Just to put Caliper into scope: Caliper is a workload generator tool which can target complex distributed systems, and it is a performance measurement and benchmarking tool. This is an important distinction. Caliper cannot deploy the SUT — that is left for the previous presentation — and the data analysts are not out of a job yet: Caliper cannot perform the actual performance analysis or evaluation; it just gives you the results of the workload generation. So I think it is important to distinguish these concerns: what Caliper can do, and what it won't ever do.
This presentation is built around the three main traits of Caliper — its flexibility, scalability, and extensibility — and in the upcoming minutes I will talk about each of these a little more. From a real bird's-eye view, we have Hyperledger Caliper as a tool and the system under test (SUT), which can currently be four types of blockchain platform: Hyperledger Besu, Ethereum networks, Hyperledger Fabric, and FISCO BCOS. You will see a lot of boxes during this presentation; I don't want to go into too much detail, so you won't see much text — I just want to focus on the architecture and components of Caliper and their responsibilities. Starting from really far out: Caliper generates workload towards the system under test and measures the responses. That is the main purpose of Caliper. From now on we will focus on Caliper and leave the SUTs out of the equation. Let's start with the flexibility of Caliper and what we mean by it. As a tool, you give Caliper some configuration and measurement artifacts, and it generates a performance report for the given SUT. What we mean by flexibility is that you have numerous options to configure how Caliper interacts with the backend system, and how the benchmark is performed and structured. You have three main configuration options for Caliper: the benchmark configuration, the network configuration, and the runtime configuration. As you would guess, the network configuration describes the system under test: where its endpoints can be found, who the participants are, and what kinds of identities are present in the system, such as users or clients. Caliper uses this information to connect to a network with essentially arbitrary topology. The other important piece is the benchmark configuration file, which describes the structure of your performance benchmark; namely, you can separate your benchmark into different rounds.
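A round-based benchmark configuration of the kind just described might look like the sketch below. It follows the YAML layout in Caliper's documentation, but the labels, counts, and module names are illustrative:

```yaml
# benchmark configuration (sketch) -- round structure per Caliper's docs
test:
  name: sample-benchmark
  description: Two rounds against the same contract at different rates
  workers:
    number: 2                      # how many Caliper worker services to use
  rounds:
    - label: create-asset
      txNumber: 500                # stop after a fixed number of transactions
      rateControl:
        type: fixed-rate
        opts:
          tps: 50
      workload:
        module: workload/create-asset.js   # user-supplied workload module
    - label: query-asset
      txDuration: 60               # or stop after a fixed duration (seconds)
      rateControl:
        type: linear-rate
        opts:
          startingTps: 10
          finishingTps: 100
      workload:
        module: workload/query-asset.js
```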
For example, rounds targeting different smart contracts, or targeting the same smart contract with different kinds of parameters, or at different rates. The runtime configuration I will skip over; you can find its detailed description in the documentation, but it can affect the runtime behavior of Caliper — for example, performing or skipping some task, or other intricacies you might want to control. Since enterprise DLTs have a strong selling point when it comes to performance, the workload generator side also has to match that kind of scalability. Caliper tackles scalability by using worker services to actually perform the rounds. So now we split the big blue box into two main component types: a Caliper manager service and multiple Caliper worker services. The manager service orchestrates the workers and maintains a means of communication among these services — we will see the options for this soon — and the workers are the services that actually communicate with the system under test and send the specified requests. Now, "orchestration" and "communication" are just general descriptions of what happens between those services; there are several options for actually implementing or configuring this. Option number one is when the manager service automatically spawns child Caliper worker processes and communicates with them through inter-process communication. This is kind of a sandbox mode: it's easy to use, you don't have to concern yourself with the Caliper workers, and it just works out of the box — you call the manager service and everything else is handled for you. You can see a dashed line around the manager and worker services: that marks the host machine boundary. When you spawn processes and use IPC, every service of course runs on the same host, which can limit scalability a lot.
A middle step towards the real scenario is when you switch the in-process communication to a third-party communication method; currently Caliper supports the MQTT protocol, so you can use MQTT brokers as the means of communication between the manager and the worker processes. In this scenario the Caliper manager still spawns and handles the workers automatically, so we are still within the same host boundary as before. And now the last deployment scenario — the real-deal, scalable version of Caliper — where the Caliper manager doesn't manage the workers anymore: you start every service separately and manually, and provide them a common MQTT broker through which they can communicate. In this case you can deploy your services however you want; you can scale out horizontally as far as your credit card can reach — really, there is no limitation here. When I say you have to manage the workers manually, that's a bit of a stretch, because container management platforms like Docker Swarm or Kubernetes allow you to simply request a number of replicas of a service. You can say, "I want 100 Caliper workers," and they will be created for you; there is no manual labor involved, only setting up the communication between the services. So this is the backbone of Caliper's scalability: the fully distributed scenario, with services communicating through an MQTT broker. The other selling point of Caliper is its extensibility — let's see what we mean by this. As we discussed, the workers are the actual heavy lifters of Caliper: they perform the workload generation towards the system under test, instructed by the manager to perform tasks. So let's dig into the workers a little, because that's where the magic — or so-called magic — happens. Suppose the manager service tells a worker: "Please execute the first round now, with the given parameters." How is that actually done? The worker logic is really simple: it is a main workload generation loop.
You have a rate controller component and a workload module component. The rate controller component is kind of a delay switch — or simply a circuit breaker, if you are coming from digital circuit design. When it's time for the next transaction or the next request, the rate controller simply gives up control, and then it's the workload module's turn to actually assemble the transaction. And this repeats and repeats until some criterion is met — for example, you have submitted 100 transactions, or some time-based criterion. So this is the backbone, the hot path, of the Caliper workers: a rate control mechanism that schedules the transactions, and the workload module, which is supplied by the user — by you. This can be any arbitrary module, whatever you can code in JavaScript; there are no restrictions. Your only job is, when you receive control in your module, to fill out the parameters of the transaction and send it to the system under test. Okay, but how do you send it to the system under test? Do you have to anticipate every kind of platform, or how is it handled by Caliper? Luckily, most of the details are hidden from your workload module with a simple, non-magical design pattern: the connector pattern. Workload modules see an abstract interface of the system under test through which they can easily submit a request or a batch of requests. Every other detail, every bit of communication, is hidden from you by Caliper and the SUT connectors implemented by the contributors. So when I say that Caliper can support Hyperledger Besu, Ethereum, Fabric, and FISCO BCOS, I mean that there are four different connectors implemented in the Caliper code base that can handle communication with these services. And as you will see, you can easily add your own if you want. So there is no magic involved, just a simple abstraction for your workload module.
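To show the workload-module pattern just described, here is a minimal self-contained sketch. A real module would extend `WorkloadModuleBase` from `@hyperledger/caliper-core` and receive the connector from the framework; here a stub adapter stands in for the SUT connector so the shape of the interaction is visible. Method names follow Caliper's documented API (`submitTransaction`, `sutAdapter.sendRequests`), but the stub, contract name, and arguments are our own illustrative assumptions.

```javascript
// Stub standing in for a Caliper SUT connector: it just records the requests
// a workload module hands to it instead of talking to a real network.
class StubSutAdapter {
    constructor() { this.sent = []; }
    async sendRequests(request) {
        this.sent.push(request);
        return { status: 'success' };
    }
}

// Sketch of a user-supplied workload module. Its only job is to assemble the
// next transaction whenever the worker loop gives it control.
class CreateAssetWorkload {
    constructor(sutAdapter) {
        this.sutAdapter = sutAdapter;
        this.txIndex = 0;
    }
    // Invoked each time the rate controller yields control.
    async submitTransaction() {
        this.txIndex++;
        return this.sutAdapter.sendRequests({
            contractId: 'asset-contract',          // hypothetical contract name
            contractFunction: 'createAsset',
            contractArguments: [`asset-${this.txIndex}`, 'blue'],
            readOnly: false
        });
    }
}

// Simplified worker loop: a fixed transaction count stands in for the
// rate-controller/termination-criteria machinery described above.
async function main() {
    const adapter = new StubSutAdapter();
    const workload = new CreateAssetWorkload(adapter);
    for (let i = 0; i < 3; i++) {
        await workload.submitTransaction();
    }
    console.log(`${adapter.sent.length} requests assembled`); // prints "3 requests assembled"
}

main();
```

The point of the design is visible even in the stub: the module never knows whether the request goes to Fabric, Besu, or Ethereum — swapping the connector swaps the target platform.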
I have mentioned a lot of components, and all of them have some predefined implementations and some configurability points, but the third selling point of Caliper is extensibility, and that wouldn't be whole without allowing you to bring your own component to the dance. Let me summarize the main extension points of Caliper. There are the resource monitors, which I didn't cover until now because they are not the main backbone of Caliper, but basically those are monitors that can look out for anomalies or track the data provided by specific sources. You can track the CPU utilization or other metrics of local processes, or of local or remote Docker containers, or you can pull such data from a general Prometheus server and include the results in the report — or you can bring your own monitor, because it is a pluggable component; you could, for example, pull this data from your InfluxDB server or wherever you want. The other extension points are the transaction monitors, which reside inside the worker processes and receive every kind of event about transactions: a transaction was submitted, a transaction finished with these results — and you can do whatever you want with this data. For example, we have an internal transaction monitor that actually computes the performance characteristics of the benchmark. We have a Prometheus transaction monitor that publishes Prometheus time-series data about the submitted, executed, and failed transactions and their timings. We have a simple monitor that just drops the results as a log to standard output, to be collected by other pipelines, and you could create a simple transaction monitor to connect with your favorite database backend, a CSV exporter, or whatever you want. There are many rate controllers provided by Caliper.
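To make the rate-controller idea concrete before listing the built-in ones, here is a minimal fixed-rate controller sketch. This is not Caliper's implementation — the real controllers live in caliper-core and differ in detail — but the scheduling rule is the essential idea: transaction i must not start before `startTime + i / tps`.

```javascript
// Minimal fixed-rate controller sketch: given a target TPS and the round's
// start time, compute how long to wait before releasing transaction `txIndex`.
class FixedRateController {
    constructor(tps) {
        this.tps = tps;
        this.startTime = Date.now();
    }

    // Pure scheduling rule, kept separate so it is easy to test:
    // transaction i should not start before startTime + i * (1000 / tps) ms.
    delayBeforeTx(txIndex, now) {
        const targetTime = this.startTime + (txIndex * 1000) / this.tps;
        return Math.max(0, targetTime - now);
    }

    // The worker loop would await this before each submitTransaction() call.
    async applyRateControl(txIndex) {
        const delay = this.delayBeforeTx(txIndex, Date.now());
        if (delay > 0) {
            await new Promise(resolve => setTimeout(resolve, delay));
        }
    }
}

// At 50 TPS, transaction #10 is scheduled 200 ms after the round starts.
const rc = new FixedRateController(50);
console.log(rc.delayBeforeTx(10, rc.startTime)); // prints 200
```

A linearly increasing controller would make `tps` a function of elapsed time, and a maximum-rate controller would adjust it based on observed backlog — same hook, different scheduling rule.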
I just listed the main ones: a fixed-rate controller, a linearly increasing rate, and a controller that tries to maintain the maximum rate without overloading the SUT — so, as you can see, the complexity of these controllers can vary, and of course this is a pluggable component too, so you can bring your own implementation. The heaviest components in Caliper are the SUT connectors: we have four of them currently, and we have a detailed documentation page about how to write your own connector for your own blockchain platform — or any kind of platform, really — so don't hesitate to contribute your own platform to the Caliper ecosystem. I have tried to give you a high-level overview of the Caliper components without too much distracting detail, but you can find everything documented on the Caliper documentation site if you scan this QR code, or you can navigate there from the GitHub page. And now I think I saw some Q&A notifications pop up, so I will start answering them. Okay, question: how many transactions can Fabric support? As I mentioned, there is a maximum-rate controller which tries to gradually increase the transaction rate of Caliper as long as your SUT can handle it. Using this rate controller, you can watch the rate slowly climb, and it will stop at some point because it detects that Fabric, for example, is overloaded; that way you can roughly estimate the maximum rate Fabric can handle. Of course, this depends on a lot of configuration options on the Fabric side as well, and currently we can't explore these configurations automatically for you — finding the best configuration to reach the maximum throughput is still a task for designers and data analysts. But with this you can roughly figure out where the limits of the system are. Or, if you use a linear rate controller, you keep increasing the rate continuously, and at some point you will see your system go down or get overloaded — and then you have found your limit.
I hope this answered the question: you need to come up with your own strategies, but we provide some basic components on which you can build. On multi-chain architecture: no, currently Caliper can only interact with a single SUT type during a single benchmark. It depends on your scenario whether you can somehow circumvent this — possibly you don't need that access inside Caliper at all, or you can put the access into the workload implementation, which is a kind of hacky workaround, but it could work. Caliper itself currently supports a single SUT per run. Are there any more questions? Ah, these are the previous questions, sorry. Any more Caliper-related questions? If not, then head to the documentation site if you are interested in the details, or contact us and the contributors on Caliper's channel — especially if you want to add your own platform to the Caliper repertoire, then definitely contact us. And thank you for your attention. "Thanks, Attila. There is one more question: is a demo possible?" Yeah — I deliberately didn't prepare a demo for today; live demos are kind of a ghost for presentations. There is a detailed tutorial for Caliper available from the documentation page, so you can head there. But in the upcoming release, or after that, we plan to do a detailed introduction presentation to Caliper, since we will have some major updates then, and that will also include a demo. I didn't want to derail the high-level introduction with low-level code, so I didn't prepare that for today, but the documentation covers it. "Thanks, Attila, this was a great session. Attendees, please feel free to ask all your questions — Attila will be available with us to answer more, and you may also reach out on Caliper's channel, where all your questions will be answered. You could also join the contributors' meeting, whose details you will find in Hyperledger's public"
calendar of invites. Thanks, Attila — this was a great session. Up next we have a talk on Hyperledger Explorer, and to give it we have two of the maintainers with us, Atsushi and Shiva. I'll hand it over to them. "Hello everyone. Good morning, good afternoon, or good evening, based on your location. I hope everyone is safe and doing well. I am Shiva, from DTCC Chennai, and with me is Atsushi, from Australia; together we are going to introduce Hyperledger Explorer through a presentation. We have been contributing to this project for the last two years. We will have an interactive session at the end of the presentation, when you can post your questions, and we will try our best to answer them. We are going to go through the following topics in this session: an overview of Hyperledger Explorer; terminology — the common terms we use in Hyperledger Explorer; then its features; an architecture overview; then deployment patterns; and then we will see a demo. As I hope everyone knows, Hyperledger Explorer is a tool for visualizing the blockchain operations of the Hyperledger Fabric platform. It is the first blockchain explorer for a permissioned ledger: it allows the members to explore the distributed ledger created by Hyperledger members from the inside, but without compromising their privacy — it is permissioned, so only members of the network can access the ledger data. So let us now talk about the overview of Hyperledger Explorer. Hyperledger Explorer was initially proposed by DTCC, Intel, and IBM as a way to visualize the data stored on the ledger. The proposal was approved by the TSC (Technical Steering Committee) in August 2016, and development of the tool started in September 2016. After many minor releases, the first major release, version 1.0.0, was made in April 2020; in July 2020 we released version 1.1.0 with many improvements, and migrated from JavaScript to
TypeScript. Currently Hyperledger Explorer supports Fabric, and we are working on supporting other DLT platforms as well — hopefully they will come in the next releases. Hyperledger Explorer was developed using current technologies such as ReactJS with Google's Material-UI, Node.js, WebSocket, PostgreSQL, and Azure Pipelines. As most of you will know, ReactJS is the front-end framework for the client, and Node.js is the back-end framework for implementing the server-side components. WebSocket is used to push information from the server to the client. PostgreSQL is used to store the information about blocks, transactions, and channels. Azure DevOps is used to automate the builds and run the tests checking our code coverage — after we raise a PR, a job defined by the Azure Pipeline is triggered automatically. So these are the technologies we have used in Hyperledger Explorer. Now let's look at the keywords commonly used in Hyperledger Explorer; before the architecture view and features, going through the terminology will make the rest of the material easier to follow. A channel is a private subnet of communication between two or more network members. A peer is a node — a computer connected to the network. A transaction is an invoke or instantiate result that is submitted for ordering, validation, and committing. A block is a set of transactions cryptographically linked to the preceding block — that is why it is called a blockchain. Chaincode is where we write our business
logic: the chaincode is written in one of the supported languages, either Go or Java, and is installed and instantiated through the SDK or CLI onto a network of Hyperledger Fabric peer nodes, enabling interaction with the network's shared ledger. So these are the terms we use in Hyperledger Explorer. Now let's look at the features we have integrated into Hyperledger Explorer. The tool provides a user-friendly web application for Hyperledger, to view and query blocks, transactions, and associated data. We also display the network information — names, status, list of nodes — and chaincode and transaction families: you can view transactions, invoke transactions, and deploy or query them. Those features are integrated into Hyperledger Explorer, along with any other relevant information stored in the ledger. It can also be used to search and filter transactions by date range and channel, dynamically discover new channels, and switch the data presentation between channels. You also get real-time notification of new blocks: if anything happens in Fabric — a new transaction — it automatically triggers Hyperledger Explorer, and we get a real-time notification. Later we integrated the user management function: this module allows you to create and manage users, with roles defined in the default security realm. You must be logged in as a member of the administrator role to add and delete users — if you want to add or delete a user, you must be logged in as an administrator. Now Atsushi will take over. "Atsushi, we can see your screen. Atsushi, you are on mute — we can't hear you. Maybe your mic is on mute; maybe you need to change the source mic." We did test this feature just before the call; maybe Atsushi is facing
some issues — we'll give it a couple of minutes. "Can you hear me?" Yes, now it is better. Okay, sorry for the inconvenience. In the next several slides I will give you an introduction to the architecture of Explorer, show some deployment patterns of Explorer, and give a demonstration showing how to deploy the Fabric test network and Hyperledger Explorer; at the end I will also share our next development plan. In this slide I am going to explain an overview of the architecture. As you can see, Explorer has a typical three-tier architecture consisting of a front end, a server back end, and a data back end. In the front-end layer there is a React/Redux single-page application, and this application accesses the web API server to get data from the underlying Fabric network. In the server back-end layer there are two processes running in parallel: one provides the data-access API, and the other — on the left-hand side — collects block data from the blockchain network and stores it in the database; we call this the synchronizer process. As you can see in the diagram, the web API server basically does not access the Fabric network: it always looks at the database, and the database is updated by the synchronizer process. The synchronizer interacts with the Fabric network via the Fabric SDK for Node.js, and for this interaction the administrator of the Explorer application needs to specify two pieces of information: the connection profile of the Fabric network, and the crypto artifacts which are generated when you start the Fabric network. These two materials are used for accessing the Fabric network through the SDK. The synchronizer periodically watches the Fabric network, through the service discovery function, to see if there is any change on the network; once the synchronizer gets new information, such as new block data or new peer nodes and so
on the information is stored into possible database and web API server basically use these records to showing for showing each graph on the dashboard and synchronizer process does not only query to fabric network but also receive block event from fabric network when a new block data is generated on the network and this event is passed to the web API server through the inter-process communication using the message and ultimately is displayed on the dashboard of your web socket now the key takeaway from this slide is the architecture is quite simple so of course we know there are some complexity in our code base and we still have some code cleanup to be done but I think most of the technology stack we are using in this project are quite popular and easy to use so I believe that most of people can easily start small contribution in this project Atoshi, one question is it possible to add only the front end without replicating the data in the Postgres PBR query from our own TV sorry so they have raised the QA is it possible to add only the front end without replicating the data in the Postgres TV or query from our own TV I think sorry I think possible to replicate the front end without modifying the back end so yeah it's possible I think the question is to check if we can use any other database if not Postgres can we use any other database with Explorer at this moment you cannot use any other database but that's the answer so far we are using some abstraction layer for accessing the database so with some code changes it should be possible I believe hopefully that's okay yeah that answers thanks and the next I will take an introduction of the deployment pattern for Explorer the first one is basic deployment for those who are using Explorer first time in this deployment pattern all related components are located on single machine as a container and native application Fabric network is running on the virtual network on the Docker machine and Explorer is running outside of 
the Docker container as a native application so Explorer needs some host name translation to communicate with Fabric nodo endpoint this host name translation is automatically done by Fabric SDK if enables the option of this translation in the setting of Explorer in this case all endpoints on the Docker container are exposing port number to the outside of Docker machine like this so when you configure your connection profile you need to specify an endpoint with local host and same port number like this and one more thing in this case in this deployment pattern you need to install some software such as Node.js and also great database in your host site in the contrast in the deployment pattern 2 you don't need to install any software because all components are running as a Docker container of course you need to Docker machine to do this deployment but I believe the Docker has already been installed on the machine because Fabric network is already running as a container and in this pattern all components are located within the same virtual network on the Docker machine so they can talk each other directly without any host name translation and what you need to do is two things one in connection profile specify each endpoint is a host name is valid within the Docker machine and also specify path to crypt artifact with a valid path within the Docker machine and this pattern needs less effort to bring up explorer on your environment but sometimes it requires user to understand about the Docker the last one is each component has been deployed in separated machine or VM or POTS if you are targeting a Kubernetes cluster and you can separate the Web API and synchronize the process into defined machine like this and they were still using same Explorer container image but by overriding some run command for the image and configure changing configure now you can bring up synchronize the process without Web API server or bring up only Web API server without synchronize the process 
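To make the difference between deployment patterns 1 and 2 concrete, here is a minimal sketch of the peer section of a Fabric connection profile of the kind Explorer consumes. This is an illustration only: the organization name, host name, port, and certificate path follow common Fabric test-network defaults and are assumptions, not values taken from this talk.

```json
{
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpcs://peer0.org1.example.com:7051",
      "tlsCACerts": {
        "path": "/tmp/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      }
    }
  }
}
```

As written, this matches pattern 2: the URL uses the peer's container host name, and the certificate path must be valid inside the Explorer container. For pattern 1 you would instead point the URL at the published port on the host, for example grpcs://localhost:7051, use a host-side certificate path, and rely on the SDK's host-name-translation option mentioned above.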
And if you want to make the reliability of the web API server much higher, or to distribute the workload on the web API server, then you can also deploy multiple instances of the web API server in your environment. That's all for my three major deployment patterns for Explorer.

Now I'm going to show you the demonstration of bringing up the Hyperledger Fabric test network and the Hyperledger Explorer application. This is the same as deployment pattern 1, that is, all components are located on a single machine and Explorer runs as a native application. Let's get started.

The demo starts with cloning the fabric-samples repository. Once the clone is done, you need to switch the code base to version 2.1.1, and you also need to get the Hyperledger Fabric binaries, which are required to generate the crypto artifacts and to work with the chaincode. After finishing the download of the binaries, you need to add them to your PATH environment variable and make sure that you can call the tools correctly and see the expected version. Now it's all done, so we start the test network by using the script. The test network, as you know, consists of two organizations and one orderer, which is common between these organizations. Once the script finishes, you see three containers running: one is the orderer node, and the other two are the peer nodes for each organization. As you can see, each peer joins the channel, in this case mychannel. After joining the channel, we deploy a chaincode to the network. After completing the deployment, we next need to define some environment variables for executing the chaincode invocation, like this, and now you can invoke the chaincode successfully. That's all for the setup of the Fabric test network.

In the next few steps I will show how to bring up Hyperledger Explorer for the test network. First, clone the repository, and next you need to install the Node package dependencies for each component, that is, the web API server back end and the React.js front end. Now we are installing the dependencies for the client side, which means the front end, and for the front end only there is one more step: building the source code of the React.js application.

In the next step we need to modify the sample connection profile file for running the test network. The sample file is located here, and we can find some paths to certificates and private keys in it; we need to modify these paths to the actual host-side paths. This sample file is prepared by default for the test network, so basically you only need to change a few paths. After finishing modifying the connection profile, we need to set up the database for Explorer on PostgreSQL. This is the last step required to run Explorer, and by using the script we provide, you can easily do this setup. Once the script completes, a database called fabricexplorer is created in PostgreSQL, like this.

Now that's all done, and you can start Explorer with the npm script. Navigate to localhost with port number 8080, and you can log in to the dashboard with the administrator credentials defined in your connection profile. So we could start Explorer with the test network successfully. You can also view each block's data and its transactions in a separate view, like this. And when generating transactions continuously, by using some command line tool to generate continuous traffic on the Fabric network, you can see the transactions in the dashboard, and you can see the dashboard being updated in real time. That's all for the demonstration.

In the final part of this introduction from my side, I will share our next development plans. While we keep up with the latest Fabric development, we are also trying to introduce new features at the same time. Currently we are planning implementation of the following two major interesting features by working together with new contributors. The first one is
adding more information and metrics to Explorer. In the current plan, we are going to show the endorsement policy of each chaincode, to make it easier for users to understand their network; it will support the endorsement policy at the chaincode definition level and at the private data collection level. We are also trying to add metrics for each node on the Fabric network to the dashboard, and we are going to use the metrics exposed from each peer via the Prometheus protocol.

The other major feature is raising Explorer to the next level as a ledger data query platform. The current Explorer, as you could see in the demo section, provides raw data rather than human-readable data for each block and transaction. So this plan includes a tracking feature for the historical operations on any specific asset or state, and also an automatic schema-summarizing feature for the payload of each transaction. By introducing this kind of feature, we expect to provide you deeper data insight and more flexible queries. We believe it will be a fantastic contribution to the community, and we are looking forward to pushing this plan to a release some months later.

That's all for my part. In the end, we'd like to express our appreciation to the great contributors and for this fantastic proposal, and of course I'd like to say thank you all for joining this session. If you are interested in this plan, we always welcome your contribution; please reach out to our Explorer community. I will hand it back to Jeeva.

Yeah, thank you Atsushi for that fantastic demo. I'm sure everyone now has a better understanding of Hyperledger Explorer, and many of you may be thinking about how you could contribute; it would be great if all of you were able to contribute to the Hyperledger Explorer project. You can contribute by reporting issues, making feature requests, updating documentation, and developing code. This is how you can get involved: you can subscribe and join our mailing list. If you are a first-time user, you will be asked to create a Linux Foundation ID. You can also join the Explorer chat with us to start a conversation: whether you are facing any issue or you are willing to contribute to Explorer, you can contact us in the Rocket.Chat channel. We also have a bi-weekly contributors meeting, on Thursdays at 4 pm IST, so we always welcome you to contribute to this project. These are the links for your reference: a link to the documentation site, a link to the code repository, where we have given a brief README you can follow, and a link to Docker Hub, where you can find the image for the deployment. So use these links for reference. Thank you so much. Now, if there are any questions, we can clarify them. Thank you so much, all.

Thank you very much, Jeeva; it was a great presentation on Hyperledger Explorer. Up next we have a talk coming from Ido, on the hype around blockchain: whether our use case is suitable for blockchain or whether we should not go for it, and how we decide that across the life cycle. Over to you, Ido, on the next topic. Ido is joining us from Nigeria, in Africa.

Yes, thank you very much. Can you hear me? Yes, we hear you. Alright, thank you very much; it's a pleasure to be here. I've heard a lot of technical conversations here today: from the Blockchain Automation Framework, which I think is absolutely genius in the sense that it eliminates many of the frustrations and headaches of people trying to deploy production networks, all the way to a tool for measuring workloads, essentially an instrumentation and efficiency, or metrics measurement, tool for your network, and then lastly Explorer, which finally gives us a visual interface to interact with and consume the information that is being emitted out of the network second by second as it keeps on operating, as it keeps on running. I've often said that the future of blockchain adoption is
business blockchain, and you will see that in a few slides to come. Therefore the future of blockchain adoption will depend largely on the development of abstractions, abstraction frameworks. So these three things mentioned today are tools, using that word loosely, that help us human beings better interact with the highly technical, foundational software and applications that are called blockchain frameworks. Essentially, the value of these sessions today, and of mine that I'm about to start, is to help us see how there is value in creating a bridge between that technical base of knowledge that is a blockchain, and the interfaces, tools, and abstractions that enable human beings to interact with all of that technology.

The best analogy may be to use something like a car, because I think the most popular use of the word dashboard is with cars; it's probably the most familiar example that everybody will be able to connect with. When you think about it, the dashboard in your car essentially abstracts away all of the complexity of the engine that is driving that vehicle, that is enabling that vehicle to work. That's essentially the point I'm trying to make.

So without much further ado, our agenda will cover a bit of rudimentary discussion about the foundations, just to set the stage, just to set the context. We'll talk about what blockchain is, and we'll spend a bit of time talking about why blockchain, because I get that question every day: in person, in chats, on LinkedIn, in so many communities, this thing called blockchain. In fact, I remember someone asking me a few days ago, relating to the current ongoing US elections: if blockchains and elections are so revolutionary, why are they not using it in the US? And I had to explain that some of those decisions are not technical decisions; they are political and business decisions. So we'll talk about those human aspects of business blockchain, or rather blockchain adoption. So there's why blockchain, and then there's why blockchain now. As a family of business blockchain tools and frameworks, what is Hyperledger's vision, what are its goals, and what is the market traction; what is the industry saying, how is the industry reacting to Hyperledger? If it is a hype, why is the industry appearing to pick up Hyperledger's tools and frameworks at such a high rate? We'll take a focused look at Hyperledger frameworks as well as the use cases, and then we'll give a call to action; if you want to get involved, there are a number of things to do.

Of course, these are foundational things: blockchains typically include the ledgers, as well as the smart contracts that run on those ledgers; we'll come to definitions of those things. What's a blockchain? At the end of the day, a blockchain is a database, because when you think about it, a blockchain is an iteration, an improvement, an optimization of the traditional ledger. Our traditional relational databases essentially are ledgers that help us track information, in either transactional or relational database formats. At the end of the day, a database is a record-keeping system. Blockchains are also record-keeping systems, but they have certain characteristics, certain attributes, certain properties that help them be more useful in certain scenarios.

So let me quickly run through why blockchains, why distributed ledgers, and you will see three major considerations. Can you see my screen right now? I should be presenting; apologies for that. Okay, alright, thank you. Can you see the presentation? Yes, we see it. Alright, so what are the considerations for business blockchain?

One: blockchains were not designed for individual use. A blockchain is not a sheet of paper; blockchains were designed for community use. In theory, in principle, you need at least two participants, if possible more. So blockchains were designed, by definition, for communities and ecosystems. Going further: is there a community of people who periodically and habitually debate the authenticity of records of information? Is there a community or an ecosystem where the players in that particular space constantly have reasons to doubt and/or distrust the claims made by other players? Those are prime candidates for blockchain. These are particularly useful in situations where members of the community are naturally, mutually distrustful of one another. Take the classic use case of banks. Banks all over the world, not just in Nigeria, not just in Africa, usually compete with one another head to head, red-ocean style, deploying a lot of guerrilla tactics against one another. If you envision some kind of product, service, or innovation where you will require those naturally mutually distrusting competitors to work together and/or collaborate in order to get value from this new product or innovation that you're bringing on board, then a blockchain is a likely candidate for that kind of innovation.

Again, one critical aspect of blockchains is what you call the immutability of the ledger. That brings us to the third consideration: whenever you are considering a blockchain, it's also important to ask, what is the impact of a possible loss of history? What is the impact of a possible loss or tampering of historical data in this particular scenario? Take, for instance, people's bank records, people's financial transaction records, people's employment records, people's immigration records, people's medical histories across different medical service providers: pharmacies, hospitals, and the like. What is the potential impact if, when patient A goes to hospital A today, there is a significant probability that patient A cannot access his medical histories from hospitals B, C, and D? That already sounds like a chaotic situation, right? Because what if there is a medication or a diagnosis that the doctor needs to provide that depends on understanding the history? Therefore, in systems where it is vitally important that the historical log of transactions remains sacrosanct for eternity, or for as long as possible, if that kind of integrity of the historical log is absolutely important, then you want to consider blockchains. They were designed to provide a near-zero probability of tampering, and that near-zero assurance of the integrity of the log is provided by the branch of mathematics called cryptography. That's a bit technical, and as I promised, rather than going technical, I'll be the bridge between the geeks and the users. Alright, so those are the three typical considerations for considering blockchains.

Going further, this point just says it is vitally important to know that your copy of the ledger is identical to everyone else's. So an additional consideration is this: take the medical service example I just mentioned. If you are considering going to hospital A or hospital B, you need to have the assurance at the back of your mind, before you set out of your house, that regardless of which hospital you step into, your private, valid, authentic medical histories will be equally available for access, when you authorize that access, across whatever number of medical service providers. If you cannot guarantee that, it already tells you that there's a broken, quote-unquote, society or ecosystem of healthcare providers, and then you start to think: which of them will give me access, and which of them cannot guarantee me access? What if the one that cannot guarantee you access is the one that provides the better service? So those are some of the considerations, so it's
vitally important to be sure that everybody on the network maintains the same copy of the authentic version of the information, or transactions, as the case may be. This is a very popular example scenario, and it's just an elaboration of what I mentioned about the medical service scenario. Everyone in the room takes a book, and someone calls out numbers, or calls out instructions, and everyone writes those things down; that's step two. Then there's a request that two people should call out the numbers that they have, at the same time. Of course, because they are doing it at the same time, there is already a competition, so there needs to be a referee, an umpire, or a process to determine who to listen to, because they are competing for the same moment. That process is slightly technical, and it's called a consensus algorithm. We won't go into that, because it's technical stuff, it's for the geeks, but at an overview level, that is what a consensus algorithm does: it's essentially an umpire or a referee that determines whose claim, at least for the present, is honored. When all the parties in that particular room agree on the version of information that is being proposed, then everybody records that new version in their private ledgers, and going forward into the future, you can be assured that everybody has the same version of the same information.

Smart contracts: the term smart contract comes from the public blockchain space, where the cryptocurrencies have largely played; I'll talk shortly about the dichotomy, the differences, between private blockchains and public blockchains. There's a lot of talk from people who are looking for alternative investments; I've been hearing people talk a lot about smart contracts, and the context in which people mention the term has sometimes been slightly perplexing, because I wonder, do they really understand what a smart contract is? Regardless of whatever it means to different people, it can sound like just another buzzword to sell or justify cryptocurrencies. But a smart contract is nothing other than a set of instructions that are executed when a set of conditions is fulfilled; that's essentially what a smart contract is. The definition here says it is code, source code, a computer program or code snippet, that is run whenever the set conditions are fulfilled on a blockchain. You'll see a very interesting example shortly.

This example talks about a farmer based in Sacramento, California, who buys an insurance agreement that protects him or her from extreme weather conditions. Obviously the farmer had envisaged: should there be a situation in the future where the weather conditions are not favorable to me, I'd like to insure myself against those conditions ahead of time. The insurer, the insurance company, thought: okay, I'll calculate the risk, the probability of extreme weather conditions happening. I've done my mathematics, and it looks like such weather conditions happen once in 300 years; I will therefore insure you, because my mathematics tells me that I am likely to be liable no more than once in 300 years. These are just theoretical numbers. And then they went ahead and documented that policy and said: if, for any reason, dear farmer, you experience a steady record of 100-degree temperatures for 100 consecutive days, then you are reimbursed with 100,000 US dollars of insurance. That's the agreement.

The current state of the insurance industry, and I mean, any innovator that is listening, this is an opportunity for you to start up: the current state of the insurance industry is that right now, insurance claims are processed manually, even though there was a prior agreement on the conditions that should trigger the release of the insured sum automatically. After every event, insurers typically, habitually, continue to review those conditions again, and you wonder, what's going on? Was there not a prior agreement? Is there no way, with all of the technology in the world today, to code some kind of algorithm that automatically carries out the terms of the contract based on the conditions that have been fulfilled? And that's the value here; that's the gap. With a smart contract in place, sitting on any type of blockchain fabric, so to speak, any of the typical frameworks, you would write a script, a smart contract; again, I explained earlier that a smart contract is a set of instructions that are automatically carried out, or executed, when certain conditions are fulfilled. Now, once the farmer has demonstrated that the conditions have been fulfilled, there's no reason why the insurer should doubt the claim of the farmer. And again, we are saying in this case that if there is such doubt, and we build a blockchain solution to address this particular use case, then it should be automatic: the insurer should be able to automatically disburse, given, of course, that the system also has a way of verifying that the claims are 100% accurate. If that is the case, then the funds are disbursed automatically.

A similar example is what recently happened in my home country. Nigeria recently experienced a wave of protests which got a bit violent, especially in Lagos, where a number of private residents experienced damage to their private property: some were shop owners, retail business owners; some people had their private cars parked in offices or in hazardous, dangerous places, and those cars, those assets, were vandalized. I know about a few of them who only got to learn, after this incident, that the insurance policies they had taken out, even though comprehensive, did not cover this particular case, because there is some clause that says it doesn't cover riots and other civil disturbances and the like. So theoretically, let's say that it was covered; I think that certain insurance companies would still look for opportunities to dispute the claims on a case-by-case basis, and I think that this is a perfect opportunity for a blockchain solution which would automatically verify that the checklist of conditions has all passed, and automatically disburse funds to the claimants. So essentially, that's the concept of a smart contract. I hear the term smart contract being explained in all sorts of contexts, and many times I really don't understand what's going on; this is the authoritative definition and application of a smart contract.

Alright, the second myth I'd like to debunk: blockchain is not cryptocurrency. Rather, blockchain is the foundation. If you think about building a house, blockchain is the foundation and the cryptocurrency is the house. What that means is that there are many types of things that can be built on it; or rather, let me use another example. We all have laptops, we all have mobile devices; think about blockchain as the operating system. Cryptocurrency is just one app that has been developed for, and operates and runs on top of, that operating system. What that means is that, for all intents and purposes, there are many other types of applications that can be built, deployed, operated, and run on the operating system that blockchain is. Cryptocurrency just happens to get a lot of public attention because it was the first publicly propagated, the first publicly renowned, instance or application of blockchain as a general technology. But cryptocurrency is by no means equal to blockchain; if anything at all, cryptocurrency is one member of the family of possible blockchain applications. And you will see in the following slides how people have deployed blockchain creatively to help with integration, identity management, finance,
cross-border remittances all sorts of scenarios just to prove the point that blockchain is not equal to cryptocurrency we've covered that now let's talk about business business blockchains and the word business blockchains implies that the term business blockchains implies that there is a category of blockchains that are not optimized for business essentially again if you think about the history of how we got to where we are today blockchains started the first blockchain the first publicly known blockchain was a cryptocurrency the bitcoin the most popular on bitcoin and so many times when people speak today they still speak about blockchain in the context of bitcoin however over the years people have studied the characteristics the parameters the attributes of that particular instance the bitcoin and the cryptocurrency family and they've seen that they are setting attributes that we like that could be useful in other scenarios and they are setting other attributes that may not be useful in this scenario so we saw a gradual evolution of the blockchain landscape from purely permissionless public blockchains to something called private blockchains which are closed ecosystems unlike the public open ecosystems of cryptocurrency and then nowadays we have something called a hybrid which is somewhere in between a hybrid solution what that has meant is that when people find like I said earlier when people find situations where those three conditions one a community ecosystem two mutual distrust and three a need to have an authoritative sacrocent immutable log of transactions what has happened is that people have discovered that it is not all the time that it is useful to make all of the information publicly available so take a medical health records system for instance right in any city of the world where you live in it is easy to imagine that it should be easy for you as a patient who wants to walk into any hospital to have access to your or rather to grant access to your 
personal medical records to the attending physician or medical professional if you think about it if there was no way to protect your privacy even though it was on the privacy of your medical records right even though it was on a blockchain of participating medical institutions and organizations you know carrying out medical operations medical and medical operations is that does that therefore mean that blockchain cannot be useful to us because before business blockchains came about the only types of blockchains where blockchains where all of the information is publicly available anybody can join anybody can contribute anybody can create a node anybody can you know you know set up a node and join the network but in this instance now should anybody technically speaking claim to be a hospital and join that network that contains your private medical histories without some kind of scrutiny I don't think so and there and there comes and there lies the point I'm trying to make that there are certain scenarios in our everyday lives talk about medical histories talk about finance talk about identities across borders visa immigration histories and stuff like that there are certain scenarios in our modern societies and our modern lives where it is useful to have the benefits of a blockchain because it gives you an immutable history but it is also useful to have some degree of censorship and or scrutiny to a to a minimal degree not a hundred percent censorship that goes back to the days of centralized database systems but something that helps validate that the players in this particular space are people that are validated and verified to be authentic players imagine that it's the imagine that we built a medical systems blockchain that connects pharmacies hospitals HMOs insurance insurance health insurance providers and all the other players in that space and we made it open to anybody to join it means that over time we will have authentic as well as unauthentic hospitals 
authentic as well as authentic pharmacies and and so on and so forth so that's the that's the point that is being made here for business blockchains so business blockchains came out of the valid realization that they ask they are closed systems that need to be operated as closed systems or want to leverage certain advantages of blockchain and early adopter industries for the financial services supply chain and healthcare financial services we are all familiar with that we need the banks to work together you need to be able to carry your identity from bank to bank all over the world today it sounds like a no-brainer but all over the world today not just in Nigeria not just in Africa whenever you want to open a new bank account with any particular bank or financial services provider you need to do something called KYC you need to start to create your own profile afresh with that new bank despite the fact that you have a 20 year hit banking history with bank B you know and that's and that just sounds and that just sounds like a no-brainer of course there are a few you see in the common slide that there are a few innovators that have started to address this particular problem you know but that's supply chain supply chain one of the biggest challenges with supply chain is provenance the assurance that the good that you are looking at at the retail level at the at the retail end of the chain that started from either the manufacturer or the raw materials producer or supplier is actually what was claimed to have been sent so people have come up also with innovative creative approaches to introduce a degree of assurance into that process that allows people to trust the movement or the journey of goods all the way from sometimes from the manufacturer to the retail shelf or sometimes all the way from the raw materials processing plant or factory all the way through the manufacturer through all the nodes of the chain that go to the final retail shelf and if you think about 
supply chain, there is food supply chain, there is pharmaceutical supply chain, there is the diamonds and minerals supply chain; it is a really, really big industry, and blockchain has proven to be really useful there, and in healthcare as well. I have been using healthcare examples all morning, so the value is clear.

All right, blockchain. Now, I have been asking a question in my LinkedIn posts, which is: what really is the hype around blockchain? Is it really just hype, or is it some kind of revolutionary technology? The answer is that the introduction of blockchain in recent times, in the last five to ten years, has been seen as a close parallel to the introduction of the early web back in the early 90s. And what that means is this: we are all living in 2020 today, and we are all familiar with the current, 2020 version of the World Wide Web. If we had asked ourselves in 1995, can you imagine that the web would be so interwoven into global human society by the year 2020, would any of us have believed that fictional-sounding statement? Most likely many of us would not. And that is a similar scenario to where blockchain is now. A lot of people are saying: how can you claim that blockchain can be so groundbreaking, so revolutionary, that it would become the new business communication protocol, the standard for our protocols, communications and transactions? Essentially the same questions were asked of the early web, but look at what the internet and the web have done to modern human societies and modern human life. Of course it is a long journey. Some of us may be familiar with what is called the hype cycle: along this 20-to-50-year journey to full maturity there will be
some periods of rapid adoption and there will be other troughs along the way, but if you take the overall journey, we will discover that blockchain has come to stay; and as we are going to see with many of the use cases in the coming slides, blockchain is here to add value to so many more sectors, and it can only continue to increase in adoption.

All right, so Hyperledger. Our conversation is gradually narrowing. We started with DLTs; we established that permissionless, or public, blockchains, and cryptocurrencies, the most popular of which is Bitcoin, were useful in that early era of blockchains, when all that was needed was to create an open network where everybody can join and everybody is an equal participant. What has happened nowadays is the introduction of a new category, and Hyperledger sits in this category, of blockchain products and systems where players are unequal in their participation in that space. Because, to use the healthcare or medical-services example again, every participant in that space is not an equal participant: the place of a patient is not equal to the place of a doctor, a physician or a nurse, and it is not equal to the place of a pharmacist or a pharmacy, and so on and so forth. Among the many business blockchain families, or frameworks if I can use that word, Hyperledger is one, and this is where I will add my own personal comment: Hyperledger is my personal favorite, because it is the framework that gives you the shortest time to get up and running with a practical, usable proof of concept. Say you have a start-up idea to solve the insurance claims management problem; you can be up and running in 24 to 48 hours if you come and take one of the many production-status
Hyperledger frameworks, unlike other business blockchain frameworks that are not yet that mature. So Hyperledger offers the entire community around the world two things: one, maturity of framework (and you have seen the depth of technical expertise that many of the earlier speakers demonstrated), maturity in terms of technical depth; and two, the speed of deployment to a quick, demonstrated proof of concept. So you have a new supply chain concept or idea for your particular use case, your country, your continent or your region, and you think: how am I going to get blockchain experts to help me do this? Come over to Hyperledger; it will help you stand up a quick POC in 24 to 48 hours, and you have something that validates your concept. You can use that POC to secure partnerships and raise funds, and then come back and do a proper build of the system. So that is where Hyperledger as a particular organization comes in. It is a consortium; I won't go over all of that too much. The vision, again a 50-year-plus vision, is to change the way business is conducted and the way transactions are carried out across industries.

What is the current momentum? Hyperledger, despite being just four years old, currently has 16 projects (you will see a diagram shortly that puts everything in visual format), and out of those 16 projects, five are currently at production level. One of those five is Hyperledger Fabric; that is the one you have heard everybody mention. Everybody that has presented today, and that presented earlier, at least mentioned Fabric in addition to other frameworks. Fabric is the most mature, because it is the most advanced; it was the first framework to be developed and it has the most community support. Fabric is currently at, I think, 2.2.1, but at least you know that is the 2.x production release. There are a few others that are
chasing hyper ledger hot on its heels so too there's bassoon there's bro 4th one now and they are all also in production they are not in beta they are not in alpha they are in production so they are also useful in production environment some of them are tailored towards specific industries like we know that indie thank you indie has been particularly designed for identity management framework so a country like Nigeria my country where we are still currently challenged with identity management problems hyper ledger indie is a no-brainer for us hyper ledger sawtooth is a no-brainer for anybody that wants to go into the supply chain industry you know fabric is more generic and it can do just as well on any of those situations the training and certification parts there as at this time last year I think we had only one or two training and certification courses relating to hyper ledger this is as of right now we have nine and they are available on the website 16 active working community groups and special interest groups SIG stands for special interest groups so special interest groups are things like the trade finance special interest group where they think about all the problems around the global trade finance industry and how to bring business blockchains especially hyper ledger into that space 170 blocks worldwide and so on and so forth these are thank you this is the jargon that I was talking about earlier so we have distributed ledgers so in this if we think of hyper ledger again like I said as a family of frameworks and tools a community of business blockchain expertise and tools to use to deploy that expertise then there are some major categories or some major buckets right it's a distributed ledger category of buckets where you have like I said fabric is a poster child family there's Indy these are the ledgers these are the main frameworks that operators will interact with on a normal these are the frameworks that will be deployed by each member of the specific 
network, whether it is a medical-services network or a supply chain network. Then there are libraries, which are foundational, reusable, plug-and-play modules utilized by all of these distributed ledgers. It is a common concept to software engineers and architects: when these different distributed ledgers were being built, it turned out that Fabric has certain common modules and functionality that Indy, Iroha and Sawtooth also have. Why not abstract that out, pull it out and create a reusable package, so that whenever Fabric needs it, Fabric can call it; whenever Indy needs it, Indy can call it; whenever Burrow needs it, Burrow can call it? That is what the libraries section is. Then, as has just been explained by the wonderful lady and gentleman that presented on Caliper and Explorer, there are tools that help you manage your instance or your deployment. So if you have a production deployment of Fabric, for instance, Explorer helps you visualize its health parameters: how many nodes do you have, what is the status of each node, all in a visual format rather than having to go through the typical, slightly less friendly command-line format, and so on and so forth.

We can now pick a few and look at them in focus. Hyperledger Fabric: as you can see, the status is active, the code base is passing, currently at the 2.2 release. And, important to know, Fabric has grown in such popularity that, because it is open source and free, it is available as an offering on all of the listed cloud providers: AWS, Google Cloud, Microsoft Azure, Tencent, Baidu, Oracle, SAP, IBM and Huawei; Hitachi Cloud is there too, I know that they do have it. So there are quite a number of cloud providers with an available instance, where anybody can just go and try their hands, deploy a test network or a pilot or concept network, and take it from there. So
Indy is also growing in popularity, with its specific design for identity-management applications. Burrow is designed specifically for the Ethereum virtual machine, so it is something like a bridge between the Hyperledger family and the Ethereum family, which, along with Bitcoin and other public, permissionless frameworks, sits outside the business blockchain space. And then these are the libraries that I mentioned earlier, and these are the tools.

More importantly, the focus of my presentation is not on the technicals but on the use cases, which is what we are about to describe. So, industry use cases. Cross-border payments: there is this collaboration by ANZ, BNP Paribas, BNY Mellon, the popular SWIFT, and Wells Fargo. Essentially, if you think about it, right now if my brother in Chicago wants to send money to me, or I want to send money to my brother in London, I need to worry about a lot of things: what is the current exchange rate, what days are the specific conversion windows open, and all sorts of complexities. So some people have come together to try to smoothen those market frictions by providing a solution that eases the process of transferring money across international borders, reducing the time and money spent in the process, reducing the transaction fees, and reducing the time of delivery.

Healthcare records: I have spoken about that quite a lot, and people are creating healthcare networks, especially around existing institutions. I know that in the United States, for instance, there is a national health insurance scheme that connects many providers together; it is already a natural blockchain ecosystem just waiting for somebody to deploy it. If you come to Nigeria, for instance, we have something similar with the HMOs, where we already have that kind of system; it is a natural no-brainer to do it there. Interstate medical licensing: this is specific to the
US. Seafood supply chain: I mentioned earlier, when I was talking about supply chains, that it is possible to track many things along their journey. In this particular case some have attached IoT sensors, so this is a combination of blockchain and IoT, two emerging technologies, to improve the visibility of fish. So if you are somewhere in Europe, say Paris, and you are conscious about the ethics of how the fish that was just served on your plate arrived on your plate, you can download some kind of app that provides you with traceability of that fish. Similar things have been done for coffee, for dairy products and for many other products; there is a Hyperledger use-case library on the website where you can see many of these use cases. The diamond supply chain again: anyone that is even scantily familiar with the diamond supply chain knows that it needs some kind of innovation, in fact many innovations, blockchain inclusive, so that is also mentioned. And then digital identity: we mentioned this, and I also know that one of our Hyperledger coordinators in Kenya, Mr.
Eddie Kago, has a digital identity product, I think it is called Quilly, that also leverages Hyperledger Indy, I believe, and it works; so that is ready for investment or deployment. Real estate transactions as well. Music and media rights: I am particularly passionate about this one, because I have something going in this direction. If you think about it, how do you establish that someone's claim on a particular piece of musical property is valid or invalid? Right now the process involves combing through unbelievably huge amounts of audio files and audio histories over the past 60 years, and sometimes the laws differ per jurisdiction: music and intellectual property rights may be strong in the US, but they may not be strong in Lagos, Nigeria, or in Bangladesh or somewhere like that. So that is ripe for innovation. Green assets management as well, letters of credit, food trust (similar to the fish supply chain) and digital trade. So those are the global use cases; there are a few others, but I'll just run through them because we are pressed for time.

I think that I have answered the major non-technical questions. One: is Hyperledger just hype? Hyperledger has progressed beyond hype; it is not hype. I think I have answered that. Two: how exactly is it useful for everyday life? If you step into the Hyperledger community you may be lost in the sea of technical jargon, but just relax; there are people like us who can help bridge between the techies and the non-techies, and we can help you see the business value of what you need to deploy. This presentation will also be made available through Mr.
Arun, the convener of this conference, after the conference, so you can review the slides and take away what you need. Explorer was just mentioned; these are some of the projects that are still in Hyperledger Labs, which is our experimentation, quote-unquote incubator, for new and upcoming projects. Each of the current Hyperledger projects that is now at production status was at one time in the Labs, being incubated, and after some time it passed the checklist to graduate to full-blown project status; hopefully these projects will also graduate out of the Labs into the mainstream sometime in the near future. A lot of blockchain showcases: like I said, many of the world's leading organizations are members of the Hyperledger organization, from IBM to Oracle to JP Morgan to Hitachi; you have seen some of them on the screen already.

And that is my last slide. Just as an offshoot of the increasing adoption of Hyperledger across many industries and use cases, the natural consequence is that job openings requiring Hyperledger expertise are everywhere. So that presents opportunities for anyone who would like to cash in on this and get certified. There is a link at the bottom of the screen where you can access training materials and certification information, and there is also something called the Hyperledger study circle, a meetup that meets weekly to review knowledge and examination preparation materials. So if you are interested in becoming a certified Hyperledger professional, there is the Certified Hyperledger Fabric Administrator certification and the Certified Hyperledger Sawtooth Administrator certification, and there are all of these training courses available on the website, where you can access that
information. And that is me; thank you very much.

Thank you. There are actually two questions; would you like to take them up on the Q&A portal? The first question: in the use case which you shared on insurance, was that a production one, or is it still a proposition? Okay, so that particular one was two scenarios. One is a use case that is in proof of concept in Sacramento, California, and then I shared a fictional scenario that is closer to me, and possibly African, because we were just in Lagos a few weeks ago. So the one in Sacramento, California is a proof of concept; the other is the one with the farmer facing extreme weather conditions.

And there is one more question: what are your thoughts on using a time-series database versus blockchain? Do you think blockchain can reach a point where the rate of transactions is comparable to popular databases? It is basically asking about the scalability aspect. Well, I think that with the state of modern technology architecture, it is kind of a no-brainer that scale is no longer a problem, and I will explain. Pretty much every modern application, I could hazard a guess and say 90 percent plus of them, is built on the cloud. What that means is that where traditionally scale was a problem, because it meant that you needed to procure additional physical resources, whether memory, hard drive space, RAM or network capacity, nowadays you do not really worry about those physical constraints any more; you outsource them to the cloud, which gives you the potential to scale any application to practically unlimited levels, and blockchains automatically inherit that ability if they are implemented on the cloud. So if your use case does not prohibit you from implementing it on the cloud (I understand that there are certain use cases, some of them regulatory, that will prevent certain people from hosting certain information on the cloud), then I think that using the cloud would
be a way to ensure that, going into the future, the application itself can always scale. That is the high-level, generic answer to that question. In terms of specifics, I think that, depending on the efficiency of the specific business logic implemented in the smart contracts of individual networks, and depending on the specific algorithms built to handle that business logic, it should be easy to optimize those algorithms for scale. Thank you very much.

Thanks, Ido. Thank you for the presentation, and with this we come to the end of today's session. For all the attendees: next week we won't be having a session, as this is a week of festivals in India, so happy Diwali to everyone in advance. We hope to continue the Tech Fest the following week, on 21st November. So once you celebrate the festival, let's join back and celebrate the Tech Festival again on the 21st. On the 21st we will have a continued session on the blockchain automation framework with a demo, and then a session on minifabric, a tool we can use to quickly set up a Fabric network on our own machine, test things out, write chaincode and do all those kinds of things. Along with that we will also have a topic on identity: how do we write a client, for example, how do I use Hyperledger Aries to build a solution for an identity use case. We hope to welcome you and see you on 21st November. Thank you all.
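[Editor's note: the talk repeatedly leans on the idea that a blockchain's core value is an immutable, tamper-evident history. As a small illustration for non-technical readers, here is a hypothetical, minimal Python sketch (not from the talk, and far simpler than Fabric's actual ledger) showing how chaining each record to the hash of the previous one makes any later tampering detectable.]

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder predecessor hash for the first block

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON encoding so the digest is deterministic.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(records):
    # Each block stores the hash of the previous block, linking history together.
    chain, prev = [], GENESIS
    for data in records:
        block = {"prev_hash": prev, "data": data}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain) -> bool:
    # Recompute every link; any edited block breaks all links after it.
    prev = GENESIS
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["admit patient", "prescribe drug X", "discharge patient"])
assert is_valid(chain)

chain[1]["data"] = "prescribe drug Y"  # tamper with the recorded history
assert not is_valid(chain)  # the tampering is immediately detectable
```

In a real permissioned network like Fabric, many organizations each hold a copy of such a chain, so a tamperer would also have to corrupt a majority of independent peers, which is what gives the "immutable history" its practical force.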