Okay. All right, welcome everyone back to the TechFest. This is the special event that's been organized by the Indian chapter, along with the meetups from around Asia Pacific and the African chapter. So this is the third one. Welcome back. We have a great schedule today. As this event is all about, it's about meeting the maintainers and contributors from the Hyperledger projects. We have people from Cactus and the labs, we have BAF, which I think we're going to have a demo on today, which we went through last session, and we have Aries. The message that we want to give is: please listen to these maintainers and contributors — everyone wants more people to get involved in these projects. It's all about us working together and developing this technology together. So with that, I'm going to hand you over to the person who's arranged and organized this. Arun, please take over from here. Thank you. Thank you, Julian, for the great introduction. In today's session we will have three different topics to cover, and upon request by the audience in the last session, we are going to have a demo of the Blockchain Automation Framework. Our first event for the day will be on Hyperledger Cactus, and thanks to Peter and Jonathan, who are joining us from far away. I know it's very late for you guys; we really appreciate you taking out time for this session. Over to you, Peter. I'll just ask for everyone's forgiveness in case I'm slower than usual. Let me share my slides. Can you see the slides? Yes. Thanks. I'm Peter, I work for Accenture as a technology architect, and I'm here to talk about the project, Hyperledger Cactus. I just wanted to start with a safe-harbor sort of thing, where I state very clearly that the project is in incubation status, so we don't have a 1.0 fixed stable release yet. We're not ready for production.
So everything that I say about the future is a sort of forward-looking statement. With that out of the way: what is Cactus? It is, in very short, an SDK of SDKs for developing applications that have to use distributed ledgers. And we really hope that it is a pluggable sort of framework that will also be enterprise grade, in the sense that you can count on it being available, stable, and maintained for years to come, and also backwards compatible where possible — not everywhere. Then, to answer the question why: mainly to address fragmentation. Blockchain is very popular nowadays, adoption is happening, and there are a lot of different proposals out there for how to actually build a ledger. That makes it very difficult, as a business application builder, to figure out what it is that you can put in your architectures and what you can't. For future-proofing, you really need to somehow have a way out in case the technology that you picked ends up not being so great in the long run, or just gets discontinued. So that's one thing. The other one is just the obvious one for every software framework out there: to save people from having to reinvent the wheel. And the third one, which I kind of described already, is to lower the risk of adoption. If you want to think about how bad the fragmentation is, it's actually much worse than you would imagine at first, because of the number of integrations. If you want to be optimistic, or have high standards, and say that in an ideal world every ledger or blockchain should be able to talk to every other blockchain, then the number of integrations between these different blockchains goes up quadratically with the number of ledgers. Meaning that if you just have 100 different ledgers — and we have thousands right now — but if you just have 100, then you end up requiring close to 5,000 different integration scenarios.
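The quadratic growth Peter mentions is just the number of distinct pairs of ledgers; a tiny illustrative snippet (not from Cactus itself) makes the arithmetic concrete:

```typescript
// Number of distinct pairwise integrations needed so that every ledger
// can talk to every other ledger: n choose 2 = n * (n - 1) / 2.
function pairwiseIntegrations(n: number): number {
  return (n * (n - 1)) / 2;
}

console.log(pairwiseIntegrations(10));  // 45
console.log(pairwiseIntegrations(100)); // 4950 -- the "close to 5,000" from the talk
```

With a shared framework in the middle, each ledger instead needs only one connector, so the work grows linearly with the number of ledgers rather than quadratically.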
And that's not really sustainable, especially if you take into account that there isn't really a framework out there to combine all of these. So if you want any sort of larger-scale enterprise application today, this is your reality. We intend to make that easier. Then, just to clarify our position in the Hyperledger greenhouse: we are right there in this category, but technically I could make arguments for us being in almost any one of those boxes, except for the distributed-ledger box. We are definitely not a ledger. I would say that it is a tool or a library — it really depends on how you're using it and what your persona is within the organization that uses Cactus. If you are an application developer, then for you it's more of a library, and if you are a person in operations, then you could think of Cactus as a tool for integrating ledgers, basically. This is a very, very generic use case, intentionally oversimplified just to demonstrate that we're not a ledger. We are something that gets put between ledgers and a user application, and the user application gets to deliver value. And that's it. I will expand on this much more later, especially if I have time, but this is the very high-level scenario. Then a few quick design principles. The most important one is the plugin architecture, which is all about us trying to remain flexible, because we don't claim to know what the optimal design is. I don't think anyone knows for sure; it's much easier to decide or judge these things after the fact. So our strategy is to keep the design flexible, in a way that if next year or two years down the line we start to see emerging patterns, then we can adapt to them, even if initially we did not think to do that particular design. The second most important — well, I guess I could say it is also most important — is secure by default.
So we really don't want minor things to just spiral out of control security-wise, so this is an important design principle. It seems trivial, but we want to be clear that we don't have insecure defaults — such as, you know, when you deploy some software and it boots up with admin for the username and admin for the password, with complete rights. Then toll-free, which is a little clarification: Cactus is open source and is being developed in the open, and also, if you deploy it, then by default there isn't any sort of payment mechanism in it that would make you have to either collect or pay any sort of fees. It's basically up to you: if you want to implement something where payment is required, you can do so, but nothing like that is baked into the framework. Then low-impact deployment, which I kind of already described, but I want to reiterate that we are separate from the ledgers. In this sense, the ideal scenario is that you already have ledgers that you want to integrate, and you can deploy Cactus to do just that, without actually having to modify the ledgers themselves. There are other design principles that I will also cover in greater detail. Broad support means that we don't just want to support, like, the top 10% of the ledgers; we want to make sure that we actually cover at least 99% of them. The way we try to achieve this is to assume very little about the ledgers themselves: basically, the most we assume about a ledger is that it is a data store that can represent transactions, and maybe blocks, and that it is able to run some sort of arbitrary code in the form of smart contracts. For cross-ledger transactions, we made a sort of pledge for ourselves that, where possible, we want to prevent double spending. This is not always applicable, because we can only do it on ledgers that have guarantees for transaction finality; otherwise things can always go wrong somehow.
And then preserving ledger features, very quickly, just means that if your ledger has some sort of additional feature that distinguishes it from other ledgers — that feature is your pride and joy — then the idea is that you should be able to integrate that ledger with other ledgers through Cactus and still have that feature working the way you wanted it to. So adopting Cactus for integrating your ledgers is not a trade-off in the sense that you would basically lose everything that's unique about your particular ledger. Horizontal scalability is also very important. We need this because it is derived from our performance goal, which states that we never want to be the bottleneck: however fast the ledgers we are connecting are, we want to be able to handle the transaction load that those ledgers can handle. Then a quick plug for our white paper, which is up on GitHub. If you want to read more, there's a big list of use cases there, and also the rest of the design principles, which I won't keep listing here because otherwise I'd run out of time. Getting a little closer to the wire, this is the architecture at a high level. It's not the most up-to-date diagram, but it still works and makes it pretty easy to explain. The bottom line is that you have the ledgers, which can be any supported ledger. Then there are ledger plugins, which are aimed solely at establishing communication with these ledgers. And then you have a business logic plugin, or even an external application, that implements your business logic, and this ends up talking to the ledger plugins. The idea here is that you — or the open-source community — only need to write each ledger plugin once. Then that ledger becomes supported and anybody else can use it, so whether you have two ledgers that you want to integrate, or ten, as long as there are plugins written for them, you can just do so.
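As a rough sketch of the shape just described — ledger connector plugins behind one common interface, with the business logic only ever talking to that interface — consider the following. All names here are hypothetical, invented for illustration; the real Cactus interfaces differ:

```typescript
// Hypothetical sketch of the connector/business-logic split described
// above. Each ledger connector implements one shared interface, so the
// business logic never depends on ledger-specific APIs.
interface LedgerConnector {
  ledgerId: string;
  submitTransaction(payload: string): Promise<string>; // returns a tx id
}

class FabricConnectorStub implements LedgerConnector {
  ledgerId = "fabric-1";
  async submitTransaction(payload: string): Promise<string> {
    return `fabric-tx:${payload.length}`; // stand-in for a real submission
  }
}

class BesuConnectorStub implements LedgerConnector {
  ledgerId = "besu-1";
  async submitTransaction(payload: string): Promise<string> {
    return `besu-tx:${payload.length}`; // stand-in for a real submission
  }
}

// A "business logic plugin" that works against any registered connectors:
// it fans one payload out to every ledger and collects the tx ids.
async function mirrorAcrossLedgers(
  connectors: LedgerConnector[],
  payload: string,
): Promise<string[]> {
  return Promise.all(connectors.map((c) => c.submitTransaction(payload)));
}
```

Because the business logic only sees `LedgerConnector`, supporting a new ledger means writing one new class — exactly the "write each ledger plugin once" idea from the talk.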
And going even closer to the implementation details: one of the architectural decisions we made is to have the code written in TypeScript, bundled with webpack, so that the relevant packages — the ones where it makes sense — can actually be used not just on the back end in a Node.js environment but also in the browser. All of this is grouped into multiple packages in the same main GitHub Cactus repository, and all of that is managed through a tool called Lerna; we just refer to it as the monorepo. And yes, as I mentioned, we have cross-platform packages — for example, the API client package that you can use to conveniently issue requests to the Cactus API works the same way in Node.js and in the browser. The other big decision — or maybe it's just my personal pet peeve — is test automation. We focus very, very heavily on test automation; even the things that are normally not testable in the average software framework, we have covered. A big and important part of that is that we actually write our own custom Docker images, so that we can simulate a completely fresh, clean-slate ledger for each test case if necessary. More on the plugin architecture: it's basically all about us admitting to ourselves that we don't know what the future looks like. We just want to make sure that there is always a possibility to adapt the software by saying, "Oh well, we just need to write a new plugin that implements this in a slightly different way," and it still fits into the system, because there are well-defined interfaces between the plugins and the core software itself. Regarding the governance model, the important thing about the plugins is that if you want to start developing a plugin today, for any ledger that is not supported right now, you can do so, and you never have to ask any of the Cactus maintainers.
In fact, if you want, for whatever reason, you can even keep the code of your plugin private, because at the end of the day Cactus allows you to inject that plugin at configuration time and then just use it. So there's no real difference between the Cactus plugins written by the maintainers, such as myself, and a plugin written by someone else who just uploaded it on npm. And this is definitely a forward-looking statement, big time: we hope to have language-independent plugin development at some point, so that if you don't like TypeScript or JavaScript, or even Node.js or any of that, ideally you can just implement the plugin in your preferred language, which could be Go, C#, Rust, or anything else that can communicate over the network. We've seen a good idea elsewhere where this is implemented with Go plugins, where the plugin just ends up being a gRPC endpoint that the core piece of the software talks to; then it really doesn't matter what language your plugin is written in — it doesn't even matter what server it runs on, as long as there's network connectivity. But this is not something that we aim to support in 1.0, so it is definitely just a long-term goal. This little chart demonstrates my personal thought process if anyone asks me about supporting this or that ledger, or this kind of key management or authentication. All of these should, in the end, be plugins. So if you ask me about supporting something along these lines, first I will ask: is there a plugin? If yes, then just use it. If there isn't, then I will evaluate whether what you're asking about is already a pluggable aspect — and by a pluggable aspect I mean the following.
For example, pluggable aspects are keychains and ledger connectors, meaning you can add ledger connector plugins for different ledgers, and you can add keychain plugins for different keychain implementations, where the private keys or other secrets that you need for your business application can be stored. So if you want to add a new keychain backend — because maybe we support one of the big cloud providers' key management services but not the one from the other cloud provider, which you happen to use — then my answer will be: okay, you can just implement a plugin to support that, and then use that plugin. The only edge case is if it's not a pluggable aspect — for example, if you want to customize something that is currently just hard-coded in the core. Then there will be an extra step before you can customize or support that behavior: first sending a pull request directly to Cactus, where the PR would just make that aspect pluggable. Once that's done, you can implement the plugin and use it to customize the behavior. After all this talk about plugins, I just wanted to calm the waters and lower the expectations: a plugin is super simple. This is, for example, one of the plugins that we have in the code. It takes five seconds to read that code — I'm not going to read it all out loud. It's just an interface with a few methods that you can implement in your code; then you can feed your implementation into Cactus as a configuration parameter, and it will be used. A little more on storage: something I like to clarify — well, I've clarified it before, but I like to clarify it multiple times — is that we are not storing transaction or block data as part of Cactus. We don't intend to come out with a new consensus algorithm either.
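To give a feel for how small such a plugin can be, here is a hypothetical keychain-style interface with a trivial in-memory implementation. The names are invented for illustration — the actual Cactus interface definitions live in the repository — but the spirit matches what Peter describes: a handful of methods, implemented anywhere, injected at configuration time:

```typescript
// Hypothetical keychain plugin interface: an implementation can live in
// any repository, public or private, and be handed to the framework as
// a configuration parameter.
interface IKeychainPlugin {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
  has(key: string): Promise<boolean>;
}

// Trivial in-memory backend -- fine for tests, not for production secrets.
class InMemoryKeychain implements IKeychainPlugin {
  private readonly store = new Map<string, string>();
  async get(key: string) { return this.store.get(key); }
  async set(key: string, value: string) { this.store.set(key, value); }
  async has(key: string) { return this.store.has(key); }
}
```

A backend for a cloud key management service would implement the same three methods, and the rest of the system would not need to change.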
When I say storage for Cactus, I just mean storing operational data or pending transactions — things that you need to implement your business logic, but not the actual data that you end up putting on the ledgers. Then I'll talk a little more about preserving ledger features. The other thing that I always try to make sure we clarify is that we can only do this as much as possible; there's always a limit, and the big one is, for example, privacy. If you have two ledgers and one of them supports private transactions but the other one does not, then most likely you can no longer have a reasonable expectation of your transaction being private, since only one side guarantees it — the other side just says all transactions are public. So, despite the fact that in this scenario Cactus would mechanically support the feature itself — you can specify your transaction as private on the ledger that does support private transactions — in the end this is not very useful for you if the other ledger that you're transacting on just does not support private transactions. I wanted to make this clear because we cannot make miracles happen in this sense, so the expectations have to be clear in this regard. Then the transaction protocol itself — as in, how do you execute a cross-ledger transaction? Well, it's a moving target; the science on this is still being worked out, and there are multiple different algorithms and ways to do it. So we have not settled on anything here yet, and the design is very much a draft. But basically, the flow is that we don't want any unintended consequences, so both sides have to agree to the transaction before it gets finalized. I know this is kind of vague, but this is where we are at with the fundamentals for this particular piece. That said, I just want to reiterate again that there could be surprises if you expect too much from Cactus.
For example, if you forget that on Bitcoin it is possible that the ledger just forks — which is not under your control, or our control, or anybody else's control. So with the transaction protocol we try to mitigate this kind of mistake as much as possible, so that we don't end up with a large number of users who lose funds or otherwise mess up transactions. We also intend to have batch transactions, but only where they're applicable. That's probably not the most widespread use case, but I do see it being necessary for enterprise applications, because I've worked a lot on different database projects — those databases were not ledgers, they were just relational or NoSQL databases — and batch transactions are always something we end up needing. Then, regarding performance: I briefly mentioned this already. What we want here is to make sure that Cactus is never the bottleneck. And we want to have published benchmarks that actually show that we ran a test where multiple ledgers are involved, each running transactions: the throughput of each ledger was X, and the throughput of Cactus was basically the sum of the throughputs of all the different ledgers combined, since it is the component in the middle. Now, a little deeper into one of the use cases: federated validation. There, what we do is basically provide an overlay network where you can obtain an attestation, or a signature — we have not settled on terminology yet — but the point is that you can get a signature, not just from a ledger, that a transaction happened, but also from Cactus itself. That can be good if the entity running Cactus is more trusted by you than, let's say, the ledger, or if you have a specific relationship with the entity that runs that Cactus node. The way this would work is that you deploy Cactus, and then you can manage the validators logically within Cactus.
And then you can ask Cactus itself to verify transaction payloads as they happened on the ledger. This slide is about the technology stack in an actual deployment, the way we imagine it in this use case. You can see that at the bottom of the stack we have the ledgers, the DLT platforms. There are smart contracts deployed on these that can manage the identities, or at least the public keys, of the federated validators. Then there are the SDKs or the APIs of the ledgers, on top of the ledgers themselves. And then Cactus talks to those to make sure that the data that moves around is indeed validated. So, in the end, the code that you have to write mostly just sits on top, but depending on the use case you may also need to write smart contracts for a specific ledger. What actually happens on-chain in this use case is smart contracts managing the public keys — or rather, validating the signatures against the public keys. And what can happen off-chain: your application sits off-chain, outside of the ledgers, and sends requests to the Cactus API either to have a transaction payload verified with the signature of that Cactus node, or to actually export data from one ledger and maybe put it on another ledger, depending on the use case. So in the end you can move data around, and you can obtain cryptographic proof about the data that has moved around. Then the roadmap, which is very, very much subject to change. To finalize the 1.0 design, one recent development is that in January we have a meeting scheduled with the Hyperledger Technical Steering Committee, where we will start gathering feedback about the architecture. We intend to specifically target maintainers of the ledgers that we intend to support in the initial round.
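The attestation idea — a signature over a transaction payload that anyone holding the published public key can later verify — can be sketched with Node's built-in crypto module. This is an illustration of the concept only, not Cactus code:

```typescript
import * as crypto from "crypto";

// A Cactus-node-like signer: holds a key pair and signs payloads.
const { publicKey, privateKey } = crypto.generateKeyPairSync("ed25519");

function signPayload(payload: string): Buffer {
  // Ed25519 signs the raw message; no separate digest step is needed,
  // so the algorithm argument is null.
  return crypto.sign(null, Buffer.from(payload, "utf8"), privateKey);
}

function verifyPayload(payload: string, signature: Buffer): boolean {
  return crypto.verify(null, Buffer.from(payload, "utf8"), publicKey, signature);
}

// Anyone holding the public key can confirm the payload was attested
// by this node and has not been altered since.
const sig = signPayload('{"tx":"move 3 containers"}');
```

Verification succeeds only for the exact payload that was signed; any tampering makes `verifyPayload` return false, which is what makes the exported data trustworthy on the receiving side.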
What we hope to get out of this is people saying things along the lines of, "Cactus is great, but I think this particular use case would just never work because of some shortcoming or an issue with your design." Then we would take that feedback, go back to the drawing board, and basically do it again — make sure that what we have is flexible enough to handle what was brought up. Going into a little more detail on the roadmap: we want to have support for identity management. We're looking into supporting Indy, DIDs, DIF — all these goodies. We already have some support for JSON Web Signatures; it's just that the slide hasn't been updated with that. We will also have consortium management. And we do not yet have plugins in different languages. Basically, this is the roadmap for now. Oh yes, we also intend to have the performance benchmarks published. And that is the presentation; at the end there's just a shameless plug where I invite everyone to come hang out on the Rocket.Chat — you can find the link through the wiki, which is the bottom link — and of course to contribute, which you can also do through the links here. I'm not sure if I should answer questions now or at the end. Thanks, Peter. I guess we should take up questions now. There is a question on the Q&A portal: someone is asking, since plugin development is externally managed, how do you ensure that Cactus will provide a uniform interface for all the underlying ledgers? We do that by having the plugin interfaces defined by the maintainers. So you can develop your plugin anywhere, but the plugin will only work with Cactus if you implement the specific interface definition that we publish. Awesome. If you have more questions, then please feel free to ask them on the Q&A portal. And thanks, Peter, for the excellent session.
We hope to get more contributors from this region joining the project soon. The recording of the session will be posted on Hyperledger's YouTube channel, and we'll send out that video information very soon as well. Up next we have our second session, the most anticipated one, I believe — we heard many people asking for it in our last session, where they wanted to see a live demo of Hyperledger Fabric being deployed through the Blockchain Automation Framework. I'll now hand it over to the Accenture team. Over to you, Shaunak and the team. Thank you. I think it's not my spotlight today; it's Priyanka, Arnold, and Suvajit. Yes, I'm sharing my screen. Thanks, Arun. Thanks, Peter. Okay, so thanks for the overwhelming Q&A last time. What we're going to do today is just a quick recap for folks who did not join us last time, just to have a starting point, and then I will straight away hand it over to the engineering team, the maintainers of the Blockchain Automation Framework. Suvajit and Arnold have joined, and Shaunak is our main architect and product owner. So if there's anything that the team cannot handle, then, yeah, Shaunak. Okay, so with that, this is the agenda. Let me take you through what we are doing a recap of. Last time we mentioned that a lot of you don't know what Hyperledger Labs is, so I would request you to go and have a look at it. It's like an incubation center within the Hyperledger greenhouse, where projects that are not yet ready to go into full-fledged project status can start development in the open source. BAF is right now a project in incubation under Hyperledger Labs. The second thing that we discussed was what problem we are solving — what is the Blockchain Automation Framework.
What we discussed in detail is what we were facing internally in Accenture, where we were doing almost 100 POCs, but we didn't have a consistent way to bring up a network, to make it secure, or to store the keys, and we used to spend a lot of time doing all of this. So why not have an automation framework which actually does it automatically while keeping the architecture consistent and secure? We then discussed the components that we use, and then Shaunak took us through the code structure — how our code is placed in the repository — and also through the network.yaml, which is our main configuration file. It's a single file that this automation framework consumes to spin up the network. On BAF components, if I can just quickly summarize: the one thing that is mandatory is Kubernetes — we work extensively on clusters. We have Ansible, Helm charts, HashiCorp Vault, any given cloud infrastructure, and GitOps with Flux. These are the components used for the complete framework. So with that, without further ado, I'll hand it over to Suvajit. Suvajit will take us a little deeper into how the network is deployed, and then Arnold will take us through the features. Over to you, Suvajit. Thanks, Priyanka. And yeah, welcome everyone to the session. Before we actually move to the technical demo, I'd like to bring your attention to this slide. It talks about how the automation for the Fabric DLT is done using the Blockchain Automation Framework. People who joined the previous session will have seen this slide for other DLTs as well. As BAF promises, all the DLT platforms that BAF automates are automated in a consistent way, so the automation flow remains the same, with changes for DLT specifics.
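For a feel of the single network.yaml configuration file mentioned above, here is a heavily trimmed, illustrative sketch. The field names are abbreviated from memory and simplified; the real file has many more sections (Vault configuration, cloud credentials, CA settings, and so on), so treat the BAF repository docs as the authoritative schema:

```yaml
# Illustrative, heavily trimmed sketch of a BAF network.yaml.
# Consult the hyperledger-labs blockchain-automation-framework
# documentation for the real, complete schema.
network:
  type: fabric
  version: 2.2.0
  docker:
    url: index.docker.io/hyperledgerlabs   # public image repository
  channels:
    - channel_name: AllChannel
  organizations:
    - organization:
        name: supplychain          # orderer org -- hosts orderers, no peers
        type: orderer
    - organization:
        name: manufacturer
        type: peer
        peers:
          - peer:
              name: peer0
```

With a file of this shape in hand, the whole network is brought up with a single Ansible command along the lines of `ansible-playbook platforms/shared/configuration/site.yaml -e "@./network.yaml"` (exact paths per the BAF docs).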
What you see on the screen are some of the main pieces of the BAF automation. The automation basically starts with a developer or an operator creating the main configuration file — the single configuration file which we call the network.yaml. That configuration file is then used by Ansible, which contains playbooks, roles, and tasks. The network.yaml — the network configuration — is used to create further configurations, which are later used by Helm. I'll talk about that quickly now. Once Ansible creates these configurations, Helm comes in: it contains the Helm charts, which contain the various jobs, deployments, and services, and which use the configurations provided by Ansible as values — so we call them value files as well. The charts use those value files and pass them as instructions to Kubernetes, and then Kubernetes does the deployment, whether it's a deployment, a job, or any service that is required. What you also see on the screen is Docker. Docker here is basically a private or a public repository — in our case it's a public repository under Hyperledger Labs — and it contains all the official Fabric images provided by Fabric. Those images are used either by the containers in Kubernetes, or by Ansible to create crypto material and the different artifacts required for the network deployment. With this understanding of the automation flow, I'll move to the next one. What we have is a technical demo, and first we'll talk about its baseline: what we already have is a BAF-deployed Hyperledger Fabric network — a Fabric network with version 2.2. One important note is that this is the first time we are demoing Fabric version 2.2. By the end of the session I'll also provide you with a link to a Fabric network deployment of the older version, 1.4.
For this part of the technical demo, the baseline is that we already have an existing network deployed with Fabric version 2.2. Now, just to talk about the architecture of the network — how the network has been created: the nomenclature, and basically the architecture, is based on a use case, a supply-chain logistics application. We also have a reference app, which you can find in the examples folder of our root directory in GitHub; under that we have the reference app for the supply chain. Briefly, the use case is about logistics in a supply chain, where containers and products created by the manufacturer are shipped between different parties. Here, in our network, the supply-chain organization hosts the orderers — it does not have any peers, it just hosts the orderers. The other participating organizations — the manufacturer, the carrier, and the warehouse — each have a single peer, all communicating on a single channel. With this, I'll move to the operational features which we are going to demo today. Expanding on the use case, what we want to do is add a new organization — a store organization — to the existing channel, and also remove that particular organization. What we're going to show is that the whole complex process of adding and removing can be done using BAF in a consistent way, as we showed in the previous slide. For the addition and removal features, as well as the last feature which we're going to talk about — adding a new peer — we are not going to do a live demo, because, as I said, Fabric 2.2 support is still work in progress; we have a feature branch on which we are still working. So we'll only talk about the last part, adding a new peer, and for addition and removal we'll show a pre-recorded demo.
What we're going to show there is how the whole process has been automated by BAF. With that, I'll move to the first part, which is about adding a new organization, and I'll hand over to Arnold, who will take you through the details of that step. Yes. Thank you for the introduction. I'll go ahead and share my screen. Before we actually start the demo, I'd first like to set the stage again. For the people that didn't join the last meeting, I'll quickly recap what was said in the last meetup. As Suvajit already mentioned, with BAF we want to automate the tedious process of deploying the blockchain, and that can be done by executing one command on a terminal, after which everything is automated. The file that you see here is basically our master configuration file, or master playbook, that is run when setting up the initial network. Our site.yaml is basically a file that spins up specific playbooks based on the network type, and this master playbook takes the network.yaml that Priyanka mentioned before as an input. To also show that network.yaml — I won't open all the specifics, but we have certain basic information in each of the network YAMLs. Depending on your network type, you will have a different type here; in this case we'll be deploying Hyperledger Fabric with version 2.2. You have some environment variables, and the Docker credentials, which will be used to fetch the Docker images that Suvajit mentioned before. In this case, like he mentioned, we will have three orderers, and one channel which will be joined by all the organizations that we have defined below. At the top we always have the orderer organization — in this case it's one organization with three orderers, but this can also be multiple organizations; it just depends on your network setup.
And then we have the three organizations below. So what you can do, by executing that master playbook I showed before with this network.yaml as an input, is deploy a complete network, which will give you this state. I'll zoom in a bit and full-screen it. What you see here is that with the Kubernetes CLI we've fetched all the pods in the Hyperledger deployment. On the left we have a bunch of namespaces associated with the network and the organizations we have defined. Important to note here: normally, in any production environment, we would have multiple clusters, so each organization would have a separate Kubernetes cluster which it can manage itself. For development purposes we deploy on one cluster, because that makes it easier, and we separate the organizations by namespace. So everything within the carrier namespace is for the carrier organization, and likewise for the manufacturer, for the supply chain namespace, which is basically the orderer organization, a kind of network operator, and at the bottom you see the warehouse organization. Every organization has some common pods and jobs that ran during this deployment. For example, you see some pods in the running state; these are the active components that will stay up as long as the network does. Here you see the peer of the carrier organization and its CLI, which we can use to, for example, install chaincodes, invoke chaincodes, things like that. You also see the certificate authority and the certificate authority tools, which I will dive into more deeply, because we will use those extensively when creating a new organization and adding it to the network.
Then the completed pods are not really pods; they are jobs, which spin up pods in the meanwhile to execute some actions and are spun down again when done. For every organization we have a job that joins the organization and its peers to the channel we have created, and then there are some jobs for the chaincodes. For Fabric 2.2 we've implemented the new chaincode lifecycle. Previously you had a different way of installing and instantiating chaincodes on a channel, but with Fabric 2.2 one of the biggest changes is that there is now a lifecycle associated with the chaincodes. There are several steps: each organization installs the chaincode and approves the definition, and then the creator of the channel commits the chaincode definition to the channel, so every organization on the channel has an approved chaincode. All right, now that we've set the stage for the initial baseline of the demo, I will start the first video of the deployment. I apologize if it's still a bit small on your screen, but I will talk you through exactly what we're going to see in this video. I've split my terminal in two pieces. On the left we have our Ansible playbook command; this is the command that will execute the deployment of our Fabric network, and its first input is that configuration file. Important to note: when adding a new organization we have a sort of new master playbook. If I just go to my code first: we have this site.yaml, which is used for the main network deployment, the first time you spin up the network. But now that we are adding a new organization, we have to use a different master playbook; what you see here is that it's named add-organization.
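The Fabric 2.x lifecycle steps just described can be sketched as peer CLI calls. This is a hedged outline, not BAF's actual generated jobs: the channel name, chaincode name, and package file are hypothetical, and running it requires a live Fabric network, so the steps are only wrapped in a function here, not invoked.

```shell
# Sketch of the Fabric 2.x chaincode lifecycle as run from a peer CLI.
# allchannel / supplychain-cc / the package file are illustrative names;
# orderer and TLS flags are omitted for brevity. Defined only, not run.
chaincode_lifecycle() {
  # 1. every organization installs the chaincode package on its peers
  peer lifecycle chaincode install supplychain-cc.tar.gz
  # 2. every organization approves the definition for its own org
  peer lifecycle chaincode approveformyorg -C allchannel -n supplychain-cc \
      --version 1 --sequence 1 --package-id "$PACKAGE_ID"
  # 3. once enough approvals exist, one organization commits the definition
  peer lifecycle chaincode commit -C allchannel -n supplychain-cc \
      --version 1 --sequence 1
}
```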
This playbook has a bunch of roles that are executed in sequence to make sure everything goes right, and I'll link the roles mentioned here to different timestamps in the video later. With adding a new organization we also have a different configuration file, because there's obviously a new organization joining the network, so we need some additional info, and some of the existing values change. Here we have the previous network.yaml, and we switch to a new network.yaml that has the new organization. The first four organizations are still the same organizations we've used before, and at the bottom we add a new one. For the supply chain use case we'll be adding the next participant in the chain, the store the package will eventually be delivered to, so we'll be adding a store organization to our network. That store organization has all the prerequisite information an organization needs: the subject that will be used for the certificates, some information about our AWS, Kubernetes, and Vault setup. But the most important thing when adding a new organization is this organization status. The organization status in our configuration file can have multiple values. In this case a new organization has the status new; if we then open another organization, the warehouse, which we added when we first deployed the network, it has the status existing. Our BAF automation then knows: this is an existing organization, I do not have to spin up anything new for it, I will just use the resources already deployed in the network. In the meanwhile I'll start the video. While on the left our playbook is running with that network.yaml as input, on the right we see the Kubernetes pods coming up as the demo progresses.
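The status flag being described might look roughly like this in the organizations list of the network.yaml; field names are indicative of BAF's schema, and the surrounding sections are elided.

```yaml
# Hypothetical fragment of network.yaml for the add-organization run;
# only the status flag differs between the new and the existing orgs.
organizations:
  - organization:
      name: store
      org_status: new        # new: BAF spins up CA, peers, everything for this org
      # subject, cloud provider, k8s and vault sections omitted here
  - organization:
      name: warehouse
      org_status: existing   # existing: reuse the resources already deployed
```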
Each time a role is completed and a Helm value file is generated, it is pushed to our Git repository and picked up by our Flux pods, and those Helm deployments spin up pods as we go. At first you see the basic infrastructure prerequisites: we test our Kubernetes connection to the clusters and check that all credentials are valid. In this case we're using AWS as our cloud provider, so it downloads the CLI tools and configures them so we can use them in our deployment. If I skip past a couple of minutes, the first minutes are used to set up the prerequisites, and then at about six or seven minutes we see our first main component come up. On the right we have our CA coming up, the certificate authority, which, as I said, is the main component we will use to generate the certificates for the organization. If we then swap to our playbook, we see that it eventually includes the role to create a CA server. Our roles in BAF are named to be self-explanatory: the name of a role tells you exactly what it does without you having to understand the underlying logic. So this spins up the CA server, which takes some inputs configured in your network.yaml, so it's really specific to that organization. The important thing here is again the organization status: the role loops through the organizations defined in your network.yaml, but since we only want to spin up the new organization, we again filter on the status new. So the certificate authority and the certificate authority tools, which is basically the CLI we will use, are spun up in the meanwhile, and we just continue the demo.
Once those two tools are running, I'll skip ahead again; in the meanwhile the value files are generated, pushed to the repository, and synced. Once that is done, we see two pods running, which means at this point in our deployment we have those two components up. One step we take in the middle is adding a pause to our deployment, basically to make sure our certificates are valid: by sleeping for six minutes we ensure the certificates are usable and we can continue the deployment. But before sleeping we do some really critical things for adding a new organization, two important things. Once the CA server and the CA tools are done, we generate some scripts that will be used to add the new organization. The first is a crypto script for the new organization, a script that is filled in based on a template. Here we have the template, which is quite complicated; I won't go into the specifics, but you can see that we are basically passing in values from our network.yaml. The script is thus configured from the specifics of your new organization, so the script is never the same for two organizations. What the script does is generate the configuration block of the Fabric network based on the organizations we already have, add the new organization to that block, and then, once we calculate the difference between the two configuration blocks, we know: this is the part belonging to the new organization. We can add that to the existing block, so that all the organizations in the Fabric network know: we have a new organization joining us, and we all need to be up to date on that.
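The idea of filling a script from a template with organization-specific values can be shown with a tiny self-contained sketch. BAF itself renders Jinja2 templates via Ansible; the placeholder syntax below uses plain sed purely so the idea is runnable anywhere, and all the names are made up.

```shell
# Minimal stand-in for the template-filling step: a "template" with
# placeholders is rendered into a concrete script for one organization.
ORG_NAME=store
CHANNEL=allchannel

# a tiny template with placeholders for org-specific values
cat > add-org.tpl <<'EOF'
echo "fetching config block for channel __CHANNEL__"
echo "computing delta for new organization __ORG__"
EOF

# render the template into a script specific to this organization
sed -e "s/__ORG__/${ORG_NAME}/" -e "s/__CHANNEL__/${CHANNEL}/" \
    add-org.tpl > add-org.sh

cat add-org.sh   # shows the two echo lines with store/allchannel filled in
```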
Once that is created, we save it to the Ansible host, and it is used later to actually push the change to the network. While we wait a couple of minutes for the sleep, we just leave this running. Afterwards, once the network has finished sleeping, we use the previously generated crypto script to create the crypto material for the organization. So again we create the crypto scripts for each organization with the status new, using a different template for that. Let me just check if I have it open: the crypto script is again a template, so the baseline is the same across all organizations, but within those double curly brackets we pass in specific values based on the organization itself. You see some variables being set, and at the bottom it uses the Fabric binaries, in this case the CA client, to generate those certificates. Once we have made a bash script out of this template, we execute that script on the certificate authority. This will already have the existing certificates, which we have fetched from our HashiCorp Vault; it has the key stores and the right binaries. So we can use the certificate authority we spun up before to generate the certificates for the new organization; this is why we first spin up the certificate authority and its CLI, so that we can use them later. Afterwards, once that is done, I'll move to the next step: we create our configtx. Based on the network.yaml, as mentioned here, we create the YAML input file for the configtx binary. Meanwhile we also put some material into Vault, which will be used later.
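The fetch-and-diff flow the generated script implements is the standard Fabric channel-config update flow. A hedged outline using the stock Fabric tools is below; the channel name, org MSP ID, and file names are illustrative, and it needs a live orderer plus the fabric binaries, so the steps are only wrapped in a function here, not invoked.

```shell
# Outline of the standard Fabric config-update flow used to add an org.
# Requires peer, configtxlator and jq plus a running network; defined only.
compute_org_delta() {
  CH=allchannel
  # fetch the current config block and decode it to JSON
  peer channel fetch config config_block.pb -c "$CH" -o "$ORDERER_ADDRESS"
  configtxlator proto_decode --input config_block.pb --type common.Block \
      | jq '.data.data[0].payload.data.config' > config.json
  # merge in the new org's definition (store.json, built from its certs)
  jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups":
      {"storeMSP":.[1]}}}}}' config.json store.json > modified_config.json
  # compute the delta between the old and new configuration
  configtxlator proto_encode --input config.json --type common.Config \
      --output config.pb
  configtxlator proto_encode --input modified_config.json --type common.Config \
      --output modified.pb
  configtxlator compute_update --channel_id "$CH" \
      --original config.pb --updated modified.pb --output update.pb
}
```

That update.pb, once wrapped in an envelope and signed by the required organizations, is what gets submitted back to the channel.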
That file is then used by the configtxgen binary in the next step, where we actually go ahead and create the channel artifacts. Here we have a channel-artifacts role, which creates everything this new organization needs to be able to join the channel. If I skip ahead a bit, we see that our playbook is still running; we have not spun anything up yet, because we are still doing the configuration to make sure this new organization will be able to join our network. In the next step we use that previously generated config block I mentioned, and start the peer CLI for the new organization. We spin that up temporarily, fetch the configuration block, and modify it so we can add that delta, the difference between the old configuration block and the new one. We modify it and then push it back to the organizations once it is signed by them, which I will come to later. Once that is done, we can actually go ahead and spin up the peers for the new organization. Here again you see the create-peers role; this is quite self-explanatory, with inputs from the network.yaml, and in this case we will again be deploying only the new organization. If I go back to my video at about 17 minutes, so the pre-configuration takes about 8 to 9 minutes, we see our peer start to spin up, using the previously generated configuration files. Now that we've done that, we can have the new configuration block signed by all the organizations in the network. We call the sign-and-update role, which fetches the block from the Ansible host, which is basically your local machine or your Docker container, whichever you use to deploy your BAF network, and gets it signed by the organizations' admins.
In this case it loops through the channels, and each organization in the channel has a role. We have one creator of the channel, in this case the administrator, which is responsible for signing the configuration block. Once it has been signed by the creator of the channel, we know it's valid and we can continue with the deployment. Once we have signed and updated it, we can use that configuration block in the final step of the main part of adding the new organization: joining the new peers to the channel. It fetches that block, block zero in this case; we've updated it, so it is now the updated state of the network, and it joins the peers of the new organization to the channel. If we skip ahead in our video to about 19 minutes, a bit ahead, yeah, here on the left you see that our deployment waits for the join-channel job to complete. If we skip ahead, we see that the job is running, and that at about 20 minutes into our deployment we have joined the peer to the channel. The last step is to deploy the chaincode for the new organization. On the channel we have deployed a chaincode we developed as part of the reference application Suvajit mentioned, and we now go ahead with deploying that onto the peer, onto the channel, excuse me. You see again that we execute some jobs: the install-chaincode job is running on the right, and then, with the new lifecycle as I mentioned, there is an approve and invoke part as well, so for the new organization we have the approve-chaincode job and the invoke-chaincode job. So now we're at the end of our deployment: we've deployed a new organization and added it to the existing channel.
What we can do as the last part is use the CLI for that peer to validate that everything has gone right. We take the CLI for the store, execute some commands to access the bash terminal of that peer CLI, and then run some commands that are basically part of the peer CLI to validate that everything is working. The first thing we do is fetch the channels the peer has joined, and you see it has joined the allchannel channel, which is the name we use for our channel in our deployments. The next thing is to validate everything for the chaincodes. We first fetch the installed chaincodes on the peer, using the lifecycle for this, and you see that we have some chaincodes installed. The next thing we do is query the approved, the committed chaincodes, apologies, and we see some committed chaincode definitions: we see the version and the sequence, and we also see the plugins we can use for endorsement. The final thing is to check that the committed chaincodes have also been approved, and we see here the approved chaincode with a package ID. So we see that this new organization has now been correctly added to the existing network. That is, at a high level, our demo for adding a new organization. In the background a lot of complicated things happen, but due to time constraints I have not dived that deep into them. If you have any questions on the inner workings of the demo, please feel free to ask. And yes, as Suvajit and Priyanka mentioned, this Fabric 2.2 feature branch is still in active development.
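The validation steps just described map to a handful of peer CLI commands. A sketch is below; the channel and chaincode names follow this demo but are otherwise assumptions, and the commands need to be run inside a live peer CLI pod, so they are only wrapped in a function here, not invoked.

```shell
# Validation commands run from the new organization's peer CLI pod.
# allchannel / supplychain-cc are illustrative names; defined only, not run.
validate_new_org() {
  peer channel list                                       # channels this peer has joined
  peer lifecycle chaincode queryinstalled                 # installed chaincode packages
  peer lifecycle chaincode querycommitted -C allchannel   # committed definitions: version, sequence, endorsement plugin
  peer lifecycle chaincode queryapproved -C allchannel \
      -n supplychain-cc                                   # this org's approval, with package ID
}
```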
So we're still looking for contributors; we still have some really interesting issues, well, stories and features, that we are developing for this Fabric 2.2 branch, so any contributors are very welcome. Let me just check if there are any questions in the meanwhile that I can answer before I move on to the next part of the operational features. There are two questions. The first one is from Samyak, who asks: where do you fetch the chaincode package from; can it be pulled directly and compiled from a VCS? The chaincode package, as we said, is some chaincode we developed as part of our reference application. If we open the network.yaml quickly... I see that Sownak would like to answer this question, so maybe I'll hand it over to him. Go ahead, Sownak. No, I only jumped in because you were already answering it live. But the answer to that question is that the chaincode package is fetched from a Git repository, which I guess you were getting to. Yes. In our organization we have a chaincode section, which supplies the information we need for that chaincode. Here you see a repository variable inside the chaincode section; in this case we are using the blockchain-automation-framework Git repository, which has the chaincode in our examples supply chain app. But this can be any Git repository; for example, if you want to use the fabric-samples repository, you can use that as well and deploy the Fabcar chaincode. Anything that is in a repository you can pass here as a variable, and we will deploy that chaincode onto the network. So to answer your question: yes, it's fetched from a VCS. And to add to that, it's totally pluggable; as Arun mentioned, we have used it in three different production systems with different use cases and different chaincodes.
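The chaincode section being pointed at might look roughly like the fragment below; the field names are indicative of BAF's schema, the repository URL is a placeholder, and the path is illustrative of "the chaincode in our examples supply chain app."

```yaml
# Hypothetical chaincode section inside an organization's peer definition
# in network.yaml; URL and path are placeholders.
services:
  peers:
    - peer:
        chaincode:
          name: supplychain
          version: 1
          language: golang                  # Go in this demo; Java is also supported
          repository:
            url: "your-git-host/blockchain-automation-framework.git"
            path: "examples/supplychain-app/..."   # path to the chaincode inside the repo
```

Any Git repository can be substituted here, which is what makes the chaincode source pluggable.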
Yes, I believe the chaincode we use is based on Go. I know we've also deployed Java chaincode on it; I don't know for sure if we've deployed JavaScript chaincodes, but then we would have covered the three main supported languages for chaincode on Fabric. Right, are there any more questions? So: can BAF integrate with external CAs for getting crypto material, or is that something planned further down the line? For that I'd like to hand over to Sownak, because I don't know the answer; maybe Sownak can. Yeah, so BAF as of now does not integrate with an external CA, and if you need that feature, you are always welcome to submit a PR. And it is not planned further down the line by the current maintainers of the project. Yes, and I think that's one of the main charms of an open-source project, right; like you mentioned, Sownak, if you want to add it, you can always submit a PR to the repository, and if it's possible then we will surely look into integrating it. Yeah, and the next question is about where it is running; as I think Suvajit also said, it is running on a Kubernetes cluster. Can you just show where the Kubernetes cluster details are mentioned? It's under the k8s, sorry, Kubernetes section: each organization has a section called k8s where you pass the region, if you are running on AWS; you don't need it on others. Then the context, which is the kube context, and the config file, which is your kubeconfig file. And that's all; it can actually be any Kubernetes, it doesn't have to be AWS, this demo just happened to be on AWS. There's a question about the REST API layer in the block diagram in one of the earlier slides. Okay, so that REST API is, I think, at the application level, so it does not have anything to do with...
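The k8s section just described, sketched with indicative field names and placeholder values:

```yaml
# Hypothetical k8s section of an organization in network.yaml.
k8s:
  region: "us-east-1"                  # only needed when running on AWS
  context: "my-cluster-context"        # the kube context for this org's cluster
  config_file: "/path/to/kubeconfig"   # the kubeconfig file to use
```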
...with what we saw today. We have the REST API at the application level, and that is part of the supply chain application we have as a sample. Why do we have a REST API? Because our supply chain front end talks seamlessly to either the Fabric or the Corda back end, and the REST API, the Express API in that case, provides the abstraction. The REST API is only applicable in the case of Fabric, which we have as an example; again, it's part of the sample application. Yes. So let me quickly look at the time we still have. As we mentioned, we also have an operational feature which is still a work in progress, so I will not be showing a demo for that; I'll just go through the same master playbook that we have for the organization. Sorry, just let me check with Arun: do we have... yes, we have 10 more minutes. Okay. Yeah, if we have 10 more minutes, then I'll hand it over for the next live demo and the closing statements by Suvajit. Thank you for checking that. So I'll hand it over to Suvajit; I'll stop sharing my screen and you can take over. Yep, thanks, and thanks for your detailed explanation; it makes my job much easier, so I'll go quickly. The next demo we're going to show is about automation of the removal of an organization. The automation, as we already mentioned, BAF does in a consistent way, and the basis of the flow remains the same. I'll just quickly share my screen. I have a pre-recorded demo, and I'll also take you through our code structure and code flow. Similar to the previous demo, on the left of the screen is the Ansible controller machine where we run the playbook; the playbook in this case is remove-organization.yaml. If you have our latest code from the feature branch, you'll find this under your hyperledger-fabric configuration folder.
We pass the same single configuration file that was used for the addition of an organization; I'll show that in detail, but as we said, we have a single configuration file, and all we do is change some of the configurations there and run the playbook. On the right side you have the deployments on the Kubernetes cluster, including the organization that was just added to the network in the previous demo. With that I'll quickly switch to the code, where I'll talk about the playbook that does the removal of an organization. Is my VS Code visible to everyone? Yes. Thank you. Right. This is the playbook that removes an organization; compared to adding an organization it is a similar process, but simpler. We have automated the whole removal process and divided it into simple roles, or steps, and the roles you see here are the ones used to do that. To start with: first it generates a script that modifies the configuration block to remove the organization we want removed. The next role, as you see, fetches the configuration block, and the one after that, after editing the configuration block, signs and updates it. Once that is done and the configuration block is committed to the channel, the cleanup happens, which is the cleanup of our Kubernetes cluster: not the whole cluster, just the organization's part. That includes removal of the crypto material that was previously generated and put into Vault, and also of the Flux releases, which removes the deployments from our Kubernetes cluster. So this is the base playbook that does the removal; these are roles, which in turn call tasks and sub-roles.
I won't go into much detail, because the functionality was already explained in the previous video and the removal is much the same: updating the configuration block, having it signed by all the peers in the channel, and then committing it. So I'll quickly go to the demo, where you'll see the same automation happening in sequence. Before I play it, I'd also like to talk about the changes made in the configuration file, the network.yaml. I'll just cover the major changes quickly, as we don't have much time. So, this is the same configuration file that was used for adding the store organization. When the org status was discussed earlier, it was mentioned that there would be other values that can be passed there. For the case of removing an org, all we need to do is change the org status to delete; our automation picks that up and knows that this organization needs to be deleted. The organizations that are already there, and that we don't want to do anything with, just keep the org status existing. This status needs to be set in the participants list under the channels as well as in the organizations list; here, for the last one, the store, we set the org status to delete. So this is the single configuration file that will be used for the removal of the organization, and I'll start the playbook. I'll just quickly pause: you see that it picks up the first role; let me go back one step. As you see, it starts with the creation of the delete-org script, and then it moves to the next role, which is about fetching the config block.
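The only change to the same network.yaml is flipping the status flag; sketched below with indicative field names, mirroring the add-organization fragment.

```yaml
# Same network.yaml as the add-organization run; only org_status changes.
organizations:
  - organization:
      name: store
      org_status: delete     # delete: BAF removes this org's artifacts and deployments
  - organization:
      name: warehouse
      org_status: existing   # untouched organizations stay "existing"
```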
In our network configuration file we have one organization under the channels which is tagged as the creator. So first it checks whether the peer CLI for the creator organization is up and running; once it confirms that, it uses the peer CLI to fetch the configuration block. In our case, if we go back, the carrier is the creator organization. Once it fetches the configuration block, it updates and signs it. The signing process follows our policies, where the default we have set is that signing needs to happen by all the participating organizations, so it loops through all the participating organizations, and each signs the updated configuration block. As you see here, it checks the peer CLIs of the participating organizations, connects to them, and signs; the signing happens through the admin account of each organization, so it uses the admin identity to do the signing. Once the block is signed, the creator organization updates the channel with the new configuration block, and that is what is happening here right now, as you see on the left-hand side of your screen. I'll move forward. Once that step is done, the Fabric-related part is finished, and the next step of the automation, as I said, is to clean up the cluster: not a complete cleanup, but cleanup of the organization's artifacts and deployments on the cluster. That is what is happening here: the first task removes the crypto material stored in Vault for that organization. Before doing that, it tests the Kubernetes connection and installs the required CLIs, like the AWS CLI, because we are using AWS EKS in this case.
It deletes, as you see, the Ambassador credentials, it deletes the other crypto material, and next, as you see on the right-hand side, it has started terminating the various deployments of store-net, which is the store organization. So it's cleaning up the store organization. It also deletes the namespace. And as you see, it does not do that directly: it deletes the files, pushes the changes to our repository, and the Flux sync picks that up and applies it on the Kubernetes cluster. With that we see that the deployment is done and the termination has already started for the pods. So with that I'll close, and if you have questions, please ask. Yeah, I think I've already answered quite a few questions, so unless anything is open, I think the next session is pending, right? Thanks. Priyanka, you are speaking on mute. Yes, sorry, the last thing I wanted to cover, in just one or two minutes, is that we still have a lot more to do; there are a lot of operational features on our roadmap. So I would request the community here to be more active and contribute, and if you have evidence from the client conversations you are having that there are more things clients are asking for, we would welcome that as well. Please come to the GitHub, raise issues, raise pull requests, and contribute. Thank you. Awesome, thanks. Thank you, everyone; it was a great session. Up next we have our next session, from the AyanWorks team, on Hyperledger Aries; for that I'll hand it over to Kalyan and Ankita. Yeah, thanks, Arun. I'll request Ankita to share the screen; Ankita, can you do that? Ankita is going to walk us through everything we have put together. But thanks to the Hyperledger India chapter for giving this opportunity to us.
Yeah, can you share the screen please? Yeah, hi everyone, this is Ankita from AyanWorks. Let me share my screen with all of you; let me know when you are able to see it. Yeah, we can see. Thank you. I sincerely thank you for giving us this opportunity, and I am happy, as always, to present what's going on. We will be presenting Hyperledger Aries, all about Aries, today, so let's move on to today's agenda. We'll quickly have a look at who we are at AyanWorks and what we are working on. Then we will learn about Hyperledger Aries: what goes into Aries, what it is, and where it is placed. Then we'll get to know the repositories maintained under Aries and the contributions we are making, and we'll let you know how you can contribute as well. I would like Kalyan to take over for the introduction to AyanWorks. Yeah, okay. Can you go to the next slide? Next one. Yeah, very briefly: AyanWorks is a small boutique firm working specifically in the blockchain space; I think we can skip this slide and get to the next one. Right, so we are about to complete our six-year mark in a couple of months from now, of which I can say five years we have been in the blockchain space. We are a blockchain startup, and for the last two and a half years we have been working in a very focused manner on the Hyperledger Aries stack, which is to do with self-sovereign identity, which is why we believe we can present Hyperledger Aries in a somewhat elaborate manner here in today's tech fest. Over the last five years we have worked on various implementations, starting with Hyperledger Fabric and so on, and then we moved on to the Hyperledger Aries stack of technology while working with some of our enterprise customers on problems they were having in the identity space.
We are also part of various forums working across the globe on self-sovereign or decentralized identity. We are part of the Sovrin network, and we are part of the Trust over IP Foundation as well, which we will of course cover in a subsequent slide. We are also one of the contributing members of the Decentralized Identity Foundation, another foundation working in the identity space. We have a small setup in Ireland, which is focused purely on the business side, but we are entirely based out of India, and we look forward to contributing in every possible way to the community, giving back to the community, leveraging open source, and bringing the expertise and experience we have earned over the last two to three years in the SSI space. Quickly, next slide, Ankita, because I don't want to spend too much time on this slide, just introduce us and then we move to the core part of the presentation. Yeah, quickly, as I said, close to six years now, we work with customers across all seven geographies, with 15-plus countries and customers, and we are a 25 to 30 member team, which is a decent size in its own right. Yeah, that's all about what we are into. I'll take a pause here and hand it back over to Ankita to deep dive into the Aries stack. Over to you, Ankita. Thank you. Yeah, thanks Kalyan for this very quick but brief introduction. Let's move on to what the Hyperledger Aries project under the Hyperledger umbrella is. Before moving on to Hyperledger Aries, let me quickly brief you on what identity is and how it is managed these days. All the identities that we have, right from our birth certificates to email addresses, usernames and passwords, and graduation certificates: these are borrowed identities.
Some organization, the issuer, has given us these identities, and we merely hold those identities, using them to prove ourselves somewhere else. These identities are kept safe with the organization that provided them. This is the current model of identity the world is using. Moving on to the next thing, self-sovereign identity, which says: as a user, I will be the owner of my own identity. All the other organizations are the issuers that give me the identity, but I will own the identity wholly and solely. This is what self-sovereign identity, or SSI, is about. With this decentralized identity concept, I am the driver of my own identity: I know when to share, what to share, how to share and why I want to share with someone else, without the identity provided to me being forged. There will be multiple roles in this ecosystem, mainly three types of participants. One is the issuer, which provides the identity; we call it a verifiable credential because these credentials are digitally verifiable. These credentials are given by the issuer, and the holder acquires, stores and presents the credentials whenever required. The verifier is the role that verifies the credential given to the user. Say I have an email address or a graduation certificate; the organization asking for my identity is the verifier. And of course the blockchain ledger is the single source of truth, which maintains the status of the credentials given by the issuer to the holder. The credential itself actually lies with the holder, within the wallet provided to the holder.
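The issuer/holder/verifier triangle described above can be mimicked in a few lines. This is only a toy model: a real Aries deployment uses AnonCreds/Ursa signatures anchored to a ledger, while here a shared HMAC key stands in for the issuer's signing key so the flow runs without any dependencies.

```python
import hashlib
import hmac
import json

# Toy model of the issuer / holder / verifier triangle. The HMAC key is a
# hypothetical stand-in for the issuer's real signing key; in SSI the
# verifier would instead resolve the issuer's public key via its DID.
ISSUER_KEY = b"university-signing-key"

def issue(claims: dict) -> dict:
    """Issuer signs the claims; the result is handed to the holder's wallet."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify(credential: dict) -> bool:
    """Verifier checks the signature without contacting the issuer."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

cred = issue({"name": "Alice", "degree": "B.Tech"})  # holder stores this in a wallet
assert verify(cred)                                   # untampered credential verifies
cred["claims"]["degree"] = "PhD"                      # a forged claim...
assert not verify(cred)                               # ...fails verification
```

The key property the sketch illustrates is that verification needs no call back to the issuer, only the issuer's (public) key material.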
In this way the holder is solely responsible for his own identity, and the concept of SSI works along the same lines. That was how decentralized identity works: the workflow, the data model and so on. So let's come back to Aries. What is Aries? Aries grew out of the effort the community was building with Hyperledger Indy, the blockchain for distributed identities. The community realized that the client components enabling peer-to-peer interaction between parties, and the protocols, could be made more reusable and extended to other blockchain platforms, like the other blockchain projects we have, in order to use verifiable claims in different contexts. So Aries provides a shared, reusable and interoperable toolkit to create, transmit and store verifiable credentials, which are cryptographically safe, and its key management system uses Hyperledger Ursa for key management and secure secret management. The key characteristics Hyperledger Aries has are these. First, wallet infrastructure, which provides secure storage of cryptographic secrets: my decentralized identifier, my credentials, or any other information used to build the blockchain client that we refer to as an agent. The wallet infrastructure is a secure storage technology. The next characteristic is the blockchain client, which we also refer to as a resolver: an interface for creating and signing transactions on one ledger or another. The next is secure messaging. This feature allows off-ledger interaction between agents. Say there are two agents, one for Alice and one for Bob, and they want to communicate with each other.
The messages that go from Alice to Bob should be secured by one means or another, and they can use multiple transport protocols: HTTP, TCP/IP, Bluetooth, or any other protocol we have. This secure messaging in Aries is the DID communication, which we refer to as DIDComm messaging. The next is API infrastructure: the mechanisms to build higher-level protocols on top of DIDComm, defining what type of data flows, what the message structure is, how VCs are issued, and so on. So this is where Hyperledger Aries sits. It uses Hyperledger Indy for decentralized identity, but it can use any ledger; we have DIDs on Ethereum and other ledgers, and it can use those as well, so it is pluggable. And it uses the cryptographic libraries from Hyperledger Ursa, which provide wallet security, key management, and the signatures and proofs that you present. All the cryptographic libraries provided by Ursa are consumed in Aries. This is an aerial view of what agents look like. All the individuals, the identity owners, like me as the owner of a few of my identities; an organization can have its own identities; any natural thing can have an identity; all the appliances, man-made things, sensors and so on can also have their own identities. All these identity owners will have their own agents, because they will have their own decentralized identity, which we call a DID. That DID is managed inside the wallet. So every identity owner has an agent which manages the wallet, and the agencies, such as the issuing authority, the verifying authority, or any other peer that wants to verify or issue credentials, also have their own agents and wallets.
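A DIDComm message is a JSON document carrying `@type` and `@id` decorators in the Aries message format. The pack step below is deliberately simplified: real agents encrypt the message for the recipient's key (authcrypt via libsodium), while this sketch only base64-encodes so that it stays dependency-free.

```python
import base64
import json
import uuid

# Sketch of an Aries/DIDComm message. The @type URI follows the Aries
# basicmessage protocol; the pack/unpack pair is a simplified stand-in
# for real authcrypt encryption.
msg = {
    "@type": "https://didcomm.org/basicmessage/1.0/message",
    "@id": str(uuid.uuid4()),
    "content": "Hello Bob",
}

def pack(message: dict) -> bytes:
    # Real agents encrypt here; base64 keeps the sketch runnable anywhere.
    return base64.urlsafe_b64encode(json.dumps(message).encode())

def unpack(envelope: bytes) -> dict:
    return json.loads(base64.urlsafe_b64decode(envelope))

envelope = pack(msg)  # travels over HTTP, WebSocket, Bluetooth, ...
assert unpack(envelope)["content"] == "Hello Bob"
```

Because the envelope is opaque to the transport, the same packed message can move over any of the transports mentioned above without change.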
These two agents communicate with each other over a secure messaging system that we refer to as DIDComm. So this is an aerial view of what agents look like and which kinds of agents we can have: cloud agents, edge agents, mobile agents for that matter. These are all agents that Aries provides. Now, there is a saying that seeing is believing. But in this digital era of advanced technology, we can mimic everything and anything about a person, so what you see is not necessarily what you should believe. Aries is about agents that connect and enable trust over the internet; it is about making digital systems more like human systems. There is an Aries RFC that talks about Trust over IP, the trust-over-internet stack, which introduces a complete architecture for bringing technological trust into human interaction. AyanWorks is a contributor in the ToIP Foundation and holds the co-chair of the governance stack working group and the vice-chair of the technology stack working group. There are different layers to this Trust over IP stack. The very basic layer is the DIDs, where every entity is tagged with a decentralized identifier, the unique identifier that defines an entity, provided for example by Hyperledger Indy. The next layer is the DIDComm protocol, which establishes a cryptographic, secure means by which two agents can communicate with each other. The next layer is the data exchange protocols. The first two layers establish cryptographic trust, we can say technical trust, between the peers. The third and fourth layers establish human trust between the organizations, individuals and things that are interacting, like sensors, devices, appliances, etc. The goal here is to standardize all the supported verifiable credential exchange protocols per the standards given by the W3C.
Next, the fourth layer is the application ecosystem layer. Just as an application calls the TCP/IP stack for communication over the internet, an app calls the ToIP stack to register DIDs, make connections, exchange credentials, or engage any of the three layers below it. So this is how you bring trust over the internet, trust into otherwise trustless interaction. That was Aries and where it is placed. Can I quickly add something about Trust over IP as well, very briefly? Yeah? So the Trust over IP Foundation was formed recently, somewhere around March or April, when some of the veterans working in the identity space realized that there is a definite need to bring trust over a trustless network. Just as we have the concept of Voice over IP, the concept of Trust over IP has been brought in, and it emphasizes how we can cryptographically, algorithmically establish trust in the interactions people have with each other over the internet, in any corner of the world. It is gathering a lot of momentum now, and we would really encourage everyone to participate, or be an observer and listen to what is happening, because this is technology in the making; it's the right time to engage and start looking at what is happening, before the deliverables and building blocks start coming out. You can join as an organization, or you can join as an individual as well. Thanks. Right. Kalyan has already shared a few links where you can go through the Trust over IP Foundation and check out the Aries project there, and this RFC, which is one of the RFCs provided by Aries. So, moving on to the business use cases of SSI, the business use cases that Hyperledger Aries can bring in.
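Layer one of the stack above is built on DIDs. Per the W3C DID specification, a DID is a URI of the form `did:<method>:<method-specific-id>`, where the method names the ledger or resolution mechanism. A small parser makes the shape concrete (the example identifier is illustrative):

```python
import re

# A W3C Decentralized Identifier has the shape
#   did:<method>:<method-specific-id>
# e.g. did:sov:... on an Indy/Sovrin ledger, did:ethr:... on Ethereum.
DID_RE = re.compile(r"^did:([a-z0-9]+):(.+)$")

def parse_did(did: str) -> dict:
    """Split a DID into its method and method-specific identifier."""
    m = DID_RE.match(did)
    if not m:
        raise ValueError(f"not a DID: {did}")
    return {"method": m.group(1), "id": m.group(2)}

parsed = parse_did("did:sov:WRfXPg8dantKVubE3HX8pw")  # illustrative identifier
assert parsed["method"] == "sov"
```

The method prefix is what makes the stack ledger-pluggable: a resolver dispatches on `method` to decide which network to query.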
There are plenty of use cases. The very basic one is trusted data transfer: it's not just about sharing data, it's about sharing data safely and securely so that privacy is preserved. Then there are verified CVs and qualifications: checking whether a particular CV in front of me is valid or has been forged, and similarly for graduation certificates and other certificates and qualifications. The KYC process, which exists in multiple sectors, can also be a use case. Cross-border trust, the digital passports we are talking about, can also be a use case of the Aries stack. And also in healthcare: healthcare report management, and securing patient data, can be use cases of decentralized identity or SSI. Apart from those, there are a few use cases we are working on ourselves: transfer of ownership of verifiable credentials for assets, which can be tickets or any assets you own. If a person owns some asset, how can that ownership be a verifiable credential, and how can it be transferred from one user to another? There can also be use cases where IoT devices have their own digital identities. Suppose I want access to the door of the server racks: how can I ensure a non-forged identity, that this is the identity I hold, and that this identity has ownership of, or access to, the particular rack?
DIDs can be attached to devices and to persons, the credential can be provided to the user, the holder, and the device can then verify the credential it is presented with. So the door of the racks, or a bike with an IoT device installed, can verify the credentials or the access granted to the holder. These are a couple of use cases that Hyperledger Aries, or SSI, brings in or leverages. There are different repositories maintained under Hyperledger Aries; we call them frameworks and agents. There are different frameworks and different agents which can serve a person in maintaining their wallet and DIDs, and in using the protocols for verifiable credential exchange and proof verification. One of the frameworks is Aries Cloud Agent Python (ACA-Py). It's an agent written in Python, and it can be used for any non-mobile agent scenario. It's a cloud agent, basically used by issuers and verifiers and for storing credentials; it can be used as an enterprise edge agent as well. The main contributor is BC Gov, and we are contributing to one of the RFCs, one of the protocols needed for deployments, for endorsing transactions. You can go to the GitHub repository of this framework as well. There is another framework being developed in JavaScript. It is also an SSI agent, which leverages DIDComm and DID-based communication, storage of verifiable credentials, and exchange of proofs. It also provides an edge agent; these are all enterprise edge agents for non-mobile agent scenarios. We are contributing implementations of multiple Aries RFCs in the JavaScript framework as well.
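ACA-Py is controlled through a REST admin API alongside its DIDComm endpoint. As a sketch, the helper below composes (but does not send) a call to the admin API's create-invitation endpoint; the endpoint path is from ACA-Py's admin API, but you should confirm it against your agent's Swagger UI at `/api/doc`, and the host/port are hypothetical values you would pass via `--admin` when starting the agent.

```python
import json
import urllib.request

# Hypothetical admin address; set with `--admin 0.0.0.0 8031` at startup.
ADMIN_URL = "http://localhost:8031"

def create_invitation_request(alias: str) -> urllib.request.Request:
    """Build (without sending) the POST that asks an ACA-Py agent to
    create a connection invitation. Endpoint path per ACA-Py's admin API;
    verify against your agent's /api/doc."""
    return urllib.request.Request(
        f"{ADMIN_URL}/connections/create-invitation?alias={alias}",
        data=json.dumps({}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = create_invitation_request("alice")
print(req.full_url)  # send with urllib.request.urlopen(req) against a running agent
```

The returned invitation JSON is what an issuer or verifier would hand to a holder, typically rendered as a QR code.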
There is one more framework, written in .NET, which can also be used for building mobile apps with Xamarin. It can serve as both an edge agent and a mobile agent; the main maintainer here is Trinsic, and we are contributing enhancements there. And there is one mobile agent. All the agents we have discussed so far were cloud agents or enterprise edge agents, but we have a mobile agent as well, and I would like Amit to talk about it because he is the main contributor and maintainer there. Amit, could you please? Yeah. Yeah, thanks, Ankita. Let me start with the basics: what is a wallet? A wallet is a place where you keep all your stuff; in a physical wallet we keep cash, cards, photos and so on. An identity wallet is just the combination of an agent and an Indy wallet, where you store all the credentials that an issuing party or any organization issues to you; you store them in your digital wallet. As Ankita already mentioned, there are three parties: the issuer, the verifier and the holder. In the SSI paradigm they form a triangle, we can say. If we talk about the holder specifically, every holder needs a wallet where they can store their credentials. For example, if a government wants to issue a government ID card, every citizen of that country needs a wallet capable of storing the SSI-based credential, and whenever the user needs to prove that credential, they can easily present it. Mostly a holder will use a mobile app; in today's world mobile makes our lives much easier. So we are developing an Indy Aries mobile agent that we call ARNIMA: an Aries React Native mobile agent.
There are two types of agent here if we are talking about mobile: a cloud agent and an edge agent. With a cloud agent, everything is deployed on the server: your wallet is created on the server, and all the heavy cryptographic libraries run on a server in the cloud. In that scenario you just use the REST APIs: through those you can build a mobile application, or if you have an existing mobile application, you plug in those APIs and create your identity wallet. But using a cloud agent is not a pure SSI-based solution, because when we talk about an SSI solution, the first characteristic that comes up is decentralization. For decentralization we need an edge wallet for every holder. So we, a team of a few members at AyanWorks, created a React Native mobile agent. It is a lightweight mobile agent built specifically for holders, because holders never need to create credential definitions or revocation registries and so on; it's a lightweight agent, we can say. Whenever you want to connect with a counterparty, you just scan the QR code and it executes the Aries connection protocol. We recently created a branch on the repository and added the connection flow. Subsequently we are also working on a Flutter version; our team recently started working on Flutter and on the same base SDK in Flutter as well. As I mentioned, if you have an existing use case in your business, you can easily plug our SDK into your system. For example, right now on mobile, whenever we want to do a transaction, we receive an OTP.
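The QR code scanned in that connection flow typically encodes an invitation URL whose `c_i` query parameter carries the base64url-encoded invitation JSON, per the Aries RFC 0160 convention. A sketch of encoding and decoding one (the endpoint, label and key below are placeholders, not real values):

```python
import base64
import json
import uuid

# What an Aries connection-invitation QR code commonly encodes: a URL whose
# c_i parameter is the base64url invitation JSON (Aries RFC 0160 convention).
invitation = {
    "@type": "https://didcomm.org/connections/1.0/invitation",
    "@id": str(uuid.uuid4()),
    "label": "AyanWorks demo agent",              # placeholder label
    "serviceEndpoint": "https://agent.example.com",  # placeholder endpoint
    "recipientKeys": ["<base58-verkey>"],         # placeholder key
}

def to_invitation_url(inv: dict, base: str) -> str:
    """Encode an invitation into the URL a QR code would carry."""
    encoded = base64.urlsafe_b64encode(json.dumps(inv).encode()).decode()
    return f"{base}?c_i={encoded}"

def from_invitation_url(url: str) -> dict:
    """What the scanning mobile agent does: extract and decode c_i."""
    encoded = url.split("c_i=", 1)[1]
    return json.loads(base64.urlsafe_b64decode(encoded))

url = to_invitation_url(invitation, "https://agent.example.com")
assert from_invitation_url(url)["label"] == "AyanWorks demo agent"
```

After decoding, the mobile agent uses the `serviceEndpoint` and `recipientKeys` to send its connection request back over DIDComm.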
If we are out of the country, it's sometimes difficult to receive the OTP, so in an existing banking application, using this SDK, you could instead implement a pure SSI-based credential verification flow. ARNIMA is an open-source mobile agent for React Native, so if anyone has an existing React Native application and wants to use Aries-based credential flows, they can just take this SDK and achieve those things. If you go to the link below you can find all the information about the SDK: how to use it, which basic packages it needs, and so on. Can you please move forward? Apart from this, a few more agents are available in different languages. There is one agent available in Go. We also have a static agent in Python, with some static issuers, so you can just connect and easily start exploring the SSI things. There are also agents available in Ruby and in Java. So yeah, can you please... Yeah, so if you want to contribute, how can you contribute? Obviously there are questions like: I know Java, how can I contribute? There are links and chat options available. There are weekly Zoom calls for the Hyperledger Aries working group, and there is a wiki page for that, where you can find all the call-related information: what was discussed on past calls and what will be discussed on the next call. Based on your interest, you can join those working group calls. We also have Rocket.Chat, where everyone can share their doubts, and useful links and resources are posted so everyone can easily learn and clear their doubts. Here you can see the Hyperledger wiki page; you can find all those things there. Apart from this...
Yeah, so this is what Aries is all about and how you can contribute to the projects it brings in. Thank you. Yeah, that was everything we intended to present. And in this situation, where we are getting laws like GDPR and the PDP Bill, where data protection is key for any individual and for any organization that collects and stores user data, the right to be forgotten is one of the key things every organization has to keep in mind before storing or collecting details. SSI, or decentralized identity, fits in well with all these laws that are coming. So, that is what we wanted to present today. Thank you for giving us this opportunity. If you have any questions, you can take those up now. No questions? Okay. Thank you. Thank you. I'll stop the recording; this concludes the session.