So, thank you all. Extremely excited to be here and share more about what we've been doing with the Blockchain Automation Framework in Hyperledger Labs. I'll be sharing a bit of the journey, how we got here, and some other pieces about how we view the landscape of the various DLT frameworks. But before we do that, let's do a quick intro. My name is Michael Klein. I lead our blockchain and multi-party systems architecture group globally within Accenture. I'm responsible for the Blockchain Automation Framework as well as Hyperledger Cactus and our contributions there, and a number of other technology assets that we have at Accenture. Shivajit, would you like to give a quick intro?
Yeah, sure. Hi everyone, I'm Shivajit Sarkar. I've been at Accenture for a year, and I've been working on BAF as a maintainer since it was open-sourced. So my role on BAF is maintainer as well as tech lead.
Well, like I said, we're excited to be here. But before I start talking about blockchain and DLT, I want to share a little bit of our point of view from Accenture on how we look at the changing space of data within organizations. First off, when we look at how our clients and customers are making different choices in this era of a pandemic and the need to be more virtual, we're seeing a lot more drive towards digital, virtual and cloud-based services. And we think there's a real opportunity for our clients and customers to think not just about their journey to the cloud and how they become more virtual, but also about how they can take advantage of new ways of sharing data, maximizing their investments in these shifts towards cloud platforms in a way that allows them to better collaborate and tear down the corporate firewalls between organizations.
How can they collaborate better with their partners and change the way in which they're sharing data? What we really mean by this, and this is the foundation of the idea of multi-party systems versus just blockchain, is that there are many different ways in which we can share data, many different patterns, and I think all are valid as we try to break down the data silos that have evolved over the past many decades. So when we look at this idea of multi-party systems, it doesn't start with technology. It really starts with a hyper focus on how we can create shared value within an ecosystem. Being very, very targeted towards that shared value: how do we actually bring multiple organizations together to agree on a shared strategy and governance, a shared operating model and platform? But also, how does that ecosystem grow? How does it sustain itself over time and operate in a way where the shared value can be unlocked for all participants? We often talk as if, yes, we just need to solve this in the technology space. But what we see when we implement this with a number of very large organizations is that this is the hardest part of multi-party systems: actually getting the right governance, operating model and growth model in place so there's a sustainable model to support the need for the technology. So the ecosystem comes first. And blockchain is not automatically the answer. What we really want to do is focus not just on one blockchain platform, but also on other types of technology patterns that can be used to share data between organizations in new and different ways. And so this is the evolution of data sharing, in a very, very simplified slide.
Back in the early days, we would send the data; we'd replicate the physical mail item, sending a letter in the mail, and now we have email, right? That moved us from the physical to the digital world, replicating the same pattern: we say we're sending that data from one organization to the next. Then we became more advanced, applied new techniques, and started keeping a shared record of data between organizations, which is possible with things like distributed databases. So the ability to share data in a common, consistent state is another option. And then what is fundamentally new, and what came with the advent of blockchain, is this idea of sharing assets: having uniquely digital objects that can be shared between multiple custodians and asset holders, where you can prove through the technology that the digital object is unique. That's a unique feature of a distributed ledger, especially as it relates to distributed systems. So when we look at multi-party systems, what we're saying is that all of these patterns are valid. What we want to do is look at the needs of the ecosystem and apply the correct technology pattern to a given ecosystem's problems. Now, we said we'd talk about automating blockchain deployments; that was my intro to multi-party systems and how we look at the overall space, and really how we got to the Blockchain Automation Framework. About four years ago we started working on a reference architecture; the first version is four or five years old now. That's the DLT reference architecture, and we've evolved it a couple of times since then, adding and removing items. Essentially, we looked at the ways in which a DLT or blockchain platform would be implemented, and standardized how it integrates with the rest of the technology stack.
When we look at a full production implementation, what are all the capabilities that would be necessary to deliver a full solution for a customer? So we developed this, and it was quite complete as far as we were concerned; we had a lot of success with it. But our developers and technical people wanted more. They said, well, these are a lot of nice slides, but how do I actually implement this? How do I get consistency? How can we accelerate the deployment and make this easier for everyone? So we said, we want to turn that reference architecture into something physical. We started looking at the landscape and asking: what are the common challenges across all of our different customers and all of our different projects? What are the things we really want to solve? And when we scoured the landscape and talked to a lot of people, we found a few common concerns. Number one was no reuse of assets: we were building something new for every single implementation, and it felt unnecessary. Why do we keep rebuilding from scratch? It's hard, and I think people who have been in the blockchain space for a while understand that developing the applications themselves is not actually the hard part in the technology space. It's administering all the different nodes and getting everything to stand up and talk to one another, especially if you're dealing with firewalls and other complexities where you need to poke holes in the networking, and getting all of your TLS certificates and everything to connect right. It's hard and complicated. The applications tend to be quite a bit easier by comparison, so it's hard to get these things rolled out. Second, there are technology silos and vendor lock-in: we see a lot of solutions in the marketplace where the focus is trying to capture a certain market.
And what that does, in the space where we're trying to build ecosystems, is create pockets of innovation that insist everything should be done in one specific technology, and we wanted to avoid that as much as possible. Then we saw a high risk of selecting the wrong platform; that concern is actually foundational to how we came up with Hyperledger Cactus as well. And we saw that there weren't established best practices, so conformance with the DLT reference architecture was not necessarily easy to achieve; everyone had their own interpretation. So what we wanted to do was ask: what could we do to solve these challenges, actually accelerate the adoption of the technology, and make it easier and more accessible to everyone? And that's the Blockchain Automation Framework. Really, what the Blockchain Automation Framework started as within Accenture, beginning around 2018, is this idea of how we take that high-level reference architecture and make it physical. How can we give everyone in our organization a consistent way to deploy blockchain and DLT platforms? So let's cover what the Blockchain Automation Framework is. What is it?
Yeah, thanks Mike. If I have to define BAF, the Blockchain Automation Framework, in a sentence, I would say: BAF is an automation framework which rapidly, consistently and securely deploys production-ready DLT networks. That's the one-sentence definition. At a high level, in a nutshell, what BAF does, as you see here in the image, is take a single configuration file.
This configuration file consists of various network settings, such as the DLT platform of choice, your consensus mechanism of choice, and the organization details and their configuration. This single configuration file is taken as an input by BAF, the automation kicks into place, and it deploys the DLT network of your choice into the cloud provider of your choice. So in BAF, what we say is that we are platform agnostic; it's fully dependent on what cloud provider you want, and based on your choice BAF deploys the DLT network there. With that, I'll move to some of the principles BAF adheres to. The first one, as Mike already talked about, is the DLT reference architecture: BAF has the methodology and the out-of-the-box tools and assets to keep that architecture and development standard in place. As I pointed out, it is platform agnostic and also infrastructure independent: there is no lock-in with a particular configuration tool or cloud provider. In the later slides we'll also talk about some of the components of the automation and how they are designed in a modular way. Most of the components are modules in BAF, so you can plug and play: you choose your modules and components and use them. We've also designed BAF in a secure way; for example, we don't save credentials in any local place, configuration file or environment variable. We'll talk about those components in detail in the next slides. Lastly, BAF is open source. With the understanding and the concerns we had around the blockchain ecosystem, and the necessity to scale from proof of concept to production environments, we open-sourced BAF with Hyperledger Labs.
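To make that concrete, here is an illustrative sketch of what such a single network configuration file might look like. The field names and values below are approximations for illustration, not BAF's exact schema:

```yaml
# Illustrative sketch only: field names approximate the kind of
# single network configuration BAF consumes; not the exact schema.
network:
  type: fabric                # DLT platform of choice
  version: "2.2"
  env:
    cloud_provider: aws       # deploy to the cloud provider of your choice
  consensus: raft             # consensus mechanism of choice
  organizations:
    - name: manufacturer      # one block per participating organization
      country: US
      vault:
        url: https://vault.example.com:8200   # external secrets store (prerequisite)
      k8s:
        context: manufacturer-cluster         # that org's Kubernetes cluster
    - name: carrier           # further organizations follow the same shape
      country: GB
      vault:
        url: https://vault.carrier.example.com:8200
      k8s:
        context: carrier-cluster
```

The point of the single file is that everything downstream, from Ansible to Helm, is derived from it rather than hand-edited per component.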
What we have open-sourced is our automation deployment components. We have also open-sourced our reference architecture and documentation, as well as the supply chain reference app. It's all available under the link on your screen.
Thanks, Shivajit. I just want to jump in and point out a little more of our open-source rationale. We built this within Accenture, and we were already capturing a lot of value by reusing it across a number of our customers. But one of the biggest rationales for making this open source is the fact that IP concerns, especially when building an ecosystem, are very real, and anyone who has participated in a consortium will probably second that opinion. How do you actually agree on who owns these assets? Like I said at the beginning, the goal of the Blockchain Automation Framework wasn't to lock anyone in; it was to accelerate these discussions. If we got into a conversation about who owns the IP around this type of automation framework, it wouldn't accelerate an implementation; it would actually hinder it, because we'd get stuck in conversations about who owns the IP, and that's the exact opposite of what we wanted. So by open-sourcing the Blockchain Automation Framework, we made that IP concern essentially a moot point. We said, look, this is not something we're even trying to commercialize. We want you to have the same rights to this code that we do, to do with as you see fit, or to just take it and run. What we want is to make this easier and more accessible for everyone, so we can all have confidence in the underlying platform and architecture we'll build upon.
I think that has actually been such a relief, I would say, in general: not having to get into those conversations, and just being able to use the right tools without getting locked up in IP discussions. So I'll talk a little here, and Shivajit, please jump in, about the various components of the Blockchain Automation Framework. We mentioned that it starts with Kubernetes. One of the principles is that we want to be cloud agnostic; we want this to run on every cloud, and Kubernetes is everywhere. It provides the abstraction layer that lets us say we can run this on-prem or in any cloud, and fundamentally it doesn't change our platform much; Kubernetes does most of the abstraction for us, so we don't have to be concerned with where it runs. Next, we wanted to really focus on production. And when we think about production, what does it mean to operate there? We need to change things, and we need to keep track of how things change over time. We're big fans of the GitOps approach: maintaining infrastructure and platforms through code, as a declarative function rather than a procedural one, and doing it all from Git at the get-go, pun intended. Looking at what's out there, there are other ways to do this, but we chose Flux as the Kubernetes operator for effectively making sure that whatever we have in Git as our configuration for these environments is guaranteed to match what's deployed in Kubernetes. In fact, other than looking at logs and checking that things are working, we don't even encourage anyone to use kubectl. The idea is that all configuration for our deployments is done through Git, and Flux automatically applies those changes into the environment, ensuring consistency between what's in Git and what's in Kubernetes.
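A minimal sketch of that GitOps loop, assuming the Flux v1 Helm Operator style of `HelmRelease` that was current at the time; the names, namespace, and repository URL here are hypothetical:

```yaml
# Hypothetical example: Flux watches a Git repository and reconciles the
# cluster to match it, so day-to-day changes go through Git, not kubectl.
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: peer0-manufacturer
  namespace: manufacturer-net
spec:
  releaseName: peer0-manufacturer
  chart:
    git: https://github.com/example-org/dlt-network-config.git  # hypothetical repo
    ref: main
    path: platforms/hyperledger-fabric/charts/peernode
  # The generated Helm value file is committed alongside this manifest;
  # Flux picks up the change and Helm applies it to the cluster.
  values: {}
```

Committing a change to this file (or to its value file) is the whole deployment action; Flux notices the new commit and converges the cluster to match.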
We have Helm as the way in which we do that: Helm is executed via Flux to actually deploy the platforms into the pods and containers running on Kubernetes. So Helm charts are our set of instructions for how we deploy within the Kubernetes environment. And, sorry, I'm jumping around in no particular order on the slide, but we also use Ansible for configuration management. Now, people who are familiar with Ansible may say, well, Ansible can do deployments and a lot of other things, and oftentimes it's used to manage infrastructure and make sure all the infrastructure has the right configuration applied. That's not how we're using it in BAF. BAF uses Ansible purely to take a single configuration file and turn it into many Helm value files. That's really all it's doing: it's used strictly as a configuration management tool, so that instead of users having to hand-create a large number of configuration files across all the different Helm packages and releases (maybe not a hundred, I'm exaggerating), things stay simple. And quite honestly, going back to our modular approach, it could easily be replaced with anything else that can manage those Helm value files. The last thing we have here is HashiCorp Vault. We're using Vault, and there are many ways to do this, but the idea is that we wanted all the secrets to be externalized from the actual deployment. We didn't want anything on a file system anywhere, and if a secret had to be held somewhere, we wanted it in Vault. HashiCorp Vault is, number one, open source, but it also provides some abstraction from the underlying infrastructure if we want to replace it with other key management solutions.
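That "Ansible purely as configuration management" idea can be pictured as a single template-rendering task. This is a hedged sketch, not BAF's actual role code; the variable names, template, and directory layout are hypothetical:

```yaml
# Hypothetical Ansible task: render one section of the single
# network.yaml into a per-component Helm value file. No infrastructure
# is touched; Ansible only fans one config file out into many.
- name: Create a Helm value file for each peer of the organization
  template:
    src: peer-values.yaml.j2            # hypothetical Jinja2 template
    dest: "{{ release_dir }}/{{ org.name }}/values-{{ peer.name }}.yaml"
  loop: "{{ org.services.peers }}"      # peers listed in the single config file
  loop_control:
    loop_var: peer
```

The rendered value files are then committed to Git, where Flux and Helm take over; swapping Ansible for any other templating tool would not change the rest of the pipeline.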
HashiCorp Vault has integrations with many of them, and it can also be fairly easily swapped out for other solutions. Okay, Shivajit, what did I miss? Did I miss anything there?
No, Mike, I think you covered it very well, particularly the cloud agnostic part. How we achieve being cloud agnostic is through Kubernetes, which I think, Mike, you started with.
Great. Okay. And as we jump in here, this goes back to our reference architecture and maps it in: how does BAF actually help implement the reference architecture we showed at the beginning of our slides? Shivajit, do you want to explain how BAF solves, or doesn't solve, certain aspects of the reference architecture?
Yeah, sure. Just to help the audience understand what you're seeing right now, I'll go through the legend. The blue items on the left-hand side are your deployment and operations architecture. The green side is your runtime or execution architecture. All of these components, as already discussed, are modular, with very little or no coupling at all, so they can be replaced with other tools as well. The very important thing to look at is the green circled boxes: these are the prerequisites required for BAF, and they clearly define what BAF does and what BAF doesn't do. To highlight those: the cloud provider and the container services are prerequisites before you deploy BAF. Also HashiCorp Vault, which is for our credentials and crypto management: that is also a prerequisite, and it's not part of the BAF automation.
Quickly going through some of the other services on the operations architecture: there's Git, which is our version management for configuration, and, as Mike described very well, what we use it for is the various value files and configuration; Helm, which is the Kubernetes package manager; and then GitHub with Travis CI and Jenkins for pipeline builds and artifact management. For delivery management we have Read the Docs, which is where we maintain our documentation, and that is also open source. On the execution architecture, the green side, for integration services we have Ambassador. Ambassador is basically our ingress service, used when we want to do multi-cluster communication or communication between various components; Ambassador acts as our ingress there. We are using HAProxy as well, for Fabric, but mainly it's Ambassador for most of the DLT platforms. As for the DLT platforms currently supported, as you see here, we support Corda, we support Hyperledger Besu in addition to Fabric, and Quorum as well. That's the DLT part. And one more thing: if you look at the security services on the left-hand side, you'll see that some of them are dependent on the client or customer options. For example, the certificate authority here: for Fabric we have the Fabric defaults, but in some of our customer implementations we have replaced it with their own CA or their own components. So that's mine.
I'll just add one more thing here. This is a picture of a single organization.
The Blockchain Automation Framework started with what it takes to do a production implementation, and then we worked backwards to the automation. Fundamental to how we designed this is that this picture is actually replicated for every organization, every company, that's participating in the network. I think the key item Shivajit covered was the Ambassador integration: the fact that for Fabric we have already sorted out the TLS integration across multiple organizations. That's really one of the core principles here. We're not trying to make it quick and easy for a developer to run their code; we're trying to make it quick and easy for an ecosystem to deploy a blockchain network across multiple organizations. Those are two different problem spaces, and that's why some people who may have tried BAF in the past as developers felt it was a bit complicated, or that it didn't work well on Minikube. Well, it wasn't designed for that. It was fundamentally designed to operate in production, not for fancy UIs or making things easy for individual developers.
Can I ask a question?
Sure.
So, currently you're supporting Fabric, Besu, Corda and Quorum. Are you planning, or thinking of, supporting other platforms as well, and where did the choice come from to support those?
Yeah, great question. We are currently not focused on adding more platforms. You do see distributed databases grayed out on the slide here; we were looking at that and still continue to keep an eye on it, as I talked about with our multi-party systems strategy and the fact that there are a lot of ways we can solve these problems, but most of it's been driven by customer demand. So we're not actually looking to add more platforms from our point of view at this time, unless we see a need and a rise in other places.
And definitely, we would love others who see value in the Blockchain Automation Framework to contribute their own platforms into the code base as well, so no concerns there. Right now what we really want to do is focus more on the operational components of the ledgers we already have. We have five or six if you count Corda Enterprise, and there's a lot to do to make those more operable. For example, what if we want to do in-place upgrades of Fabric from version 1.4 to 2.2? That is something we want to have support for, and I just threw that one out at random, but we want to make the platforms we have more operable and build in more automation for those specific platforms. That's where our focus is on the roadmap, and we'll go through that in a little bit, but our focus right now is not adding more platforms but getting better at the ones we have. Great question, thank you.
No more questions. Thank you for your answer.
Okay. So now we get to the real technical stuff, right? Let's lift the hood and see how this actually works. We're going to get into the specific platforms, and then we'll talk a little more about what this means. Shivajit, do you want to walk us through how this works for Fabric?
Right, Mike. So, as you said, we are opening up the hood. We talked about how BAF takes a single configuration file and deploys the various components into the cluster. The exact flow happens the same way and is consistent across all the DLTs; what you see on this slide is particularly for Hyperledger Fabric. The automation starts with a developer or an operator committing a single configuration file.
This configuration file is consumed, taken as an input, by our master playbook, which is a playbook in Ansible. Ansible contains all the roles and tasks. As you see in the box, the various roles and tasks cover things like creating the channel artifacts, among various other functions. These tasks and roles take that single configuration file and break it into multiple configuration files. The work of Ansible here, as Mike mentioned in the previous slides, is to break it into multiple configurations which then become inputs, as Helm values, to Helm. This happens, and is managed, via GitOps, or Flux basically. All the value file management and the operations around that are not done directly by Ansible; Flux keeps everything in sync, so whatever new configuration has been approved is applied to Kubernetes via Helm. On the Helm side, you'll see that all the various components the DLT network, in this case Fabric, would require are provided: for example, the various membership service providers, the peer nodes, and the orderer configuration with the various consensus options, as well as channel management, like creation of a channel and joining a channel. All those features are part of the Helm charts, which deploy those components on Kubernetes. Also, a major part of the automation you see at the top is the Docker image repository. In the case of Fabric, we use the official Fabric images, so all the official images provided by Fabric are used as-is.
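The output of that Ansible step, one Helm value file per component, might look roughly like this for a single Fabric peer. The keys below are illustrative guesses at the shape of such a file, not the exact chart schema:

```yaml
# Hypothetical generated Helm value file for one Fabric peer.
# Produced by the Ansible roles from the single network config,
# then committed to Git for Flux/Helm to apply.
metadata:
  namespace: manufacturer-net
peer:
  name: peer0
  gossippeeraddress: peer0.manufacturer-net:7051
  image: hyperledger/fabric-peer:2.2.2    # official Fabric image, used as-is
orderer:
  address: orderer1.supplychain-net:7050  # orderer this peer connects to
vault:
  address: https://vault.example.com:8200 # certs and keys fetched from Vault
  secretpath: secretsv2/manufacturer      # nothing stored on the file system
```

One such file exists per peer, orderer, and CA, which is exactly the fan-out the single configuration file spares the operator from writing by hand.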
These can be used directly from a public repository, or they can be built and kept in a private repository and used in the BAF automation. So, Mike?
Yeah, I think what may be helpful is to talk about our experiences with Fabric and using this a little. Back in 2018, maybe even early 2019, we started implementing this with some customers, and one of our first customers was a large internet technology company who wanted us to implement this on Google Cloud Platform. While our development team had done all of our work on AWS, the first actual implementation was on GCP. What was really interesting about that is we took the source code as it was; we hadn't even open-sourced it yet, and it was still at a very early stage, but the team took what we had and used it as the basis of their implementation with the customer. What was really great is that we got to see how it got used, we were able to incorporate feedback on what worked and what didn't, and we actually started to incorporate some of the things they added back into the core. I think that was really interesting. While we don't test on GCP currently as a dev team, just given the cost of infrastructure, knowing that it worked there almost seamlessly, without too much needing to be done, was a validation, and this was back in the early stages. We've also implemented this at a large media company, and what was interesting there is that this media company was looking to build a loyalty platform.
Right, they wanted to build a loyalty platform across multiple of their media estates, so they could exchange loyalty points across a number of their different partners. And what they wanted was Fabric: they told us it had to be Fabric, and it had to be on Azure. If anyone's familiar with the capabilities of blockchain platforms on Azure today, you'll know that they do not have a current GA version of Fabric in their managed service. So we said, yes, we can do this, and we can make it go faster with the Blockchain Automation Framework. But what we also heard from that media company was: we want to use the Azure services; we're all in on Azure, and we want to use Azure's cloud-native services as much as possible, so we can really take advantage of our investment there and our strategic relationship with Microsoft. And we said, great, we can do that as well. What we did there is we actually replaced HashiCorp Vault with Azure Key Vault, I think I have the right name. Yes, we replaced it with Azure Key Vault, and it worked very well. That change was a week or so of developer effort, I think, to actually make, and a bit more to get through the testing, but overall it was a very simple integration. This has been designed so that, for those familiar with this type of architecture, it is fairly pluggable: you can swap in different images, and you can swap in different Helm charts fairly easily. So overall that validated our approach and gives us more confidence and more ability to say we can do this in other places. I think Fabric has been our biggest platform used with the Blockchain Automation Framework, and we've had really good feedback from the organizations that have used it in their implementations.
So I think most of the features we see here are based on that feedback from our customers, and we are still shaping them based on their requirements as well as community demand.
Yep. So next we're going to talk about Corda Enterprise. Just a reminder: we started with Corda open source, in line with our original principles. We wanted everything to be open source; we didn't want there to be lock-in with IP or getting tied into licenses and commercials. But we also heard from our customers that those who were really serious about production with Corda were not using open source; they were going and talking to R3 and getting a commercial license with them. So we knew we needed to support Enterprise as well and provide that option. There are images, some official and some non-official, from R3 that we wanted to integrate, and the way it is implemented is different. So, Shivajit, do you want to talk through how Corda is different in the way we approach it, given this mix of Enterprise and open source?
Yeah, sure, Mike. In terms of consistency, it happens the same way. But if we look at some of the differences: based on the Corda Enterprise architecture, we had to change the roles and the settings. If I talk about the images, some of them were officially available from Corda, and some of them we had to build and create ourselves: for example, around the architecture of the Corda Enterprise firewall, and how the hierarchy of the services is deployed in the network. Based on that architecture and on how we defined the process, we had to change and design our roles and our charts.
So mostly the differences here, I would say, are in the features of that particular DLT architecture: the way we have to connect our nodes, and the sequencing of the various services. For example, in Corda Enterprise we have CENM, the Corda Enterprise Network Manager. Those things we had to take care of and build in a different way. Yeah, and I'm going to interject and take a quick step back here. What we just covered was, you know, a Hyperledger open source framework, Fabric, and then enterprise software under a commercial license, and they're using the same fundamental structure. If you think about what that means, you could have a single Git repository essentially deploying multiple DLT networks independently, and they would be separate. But think a few years out from now. Consider also what we're doing in Hyperledger Cactus; for those who aren't familiar, it's focused on interoperability. So layer on Hyperledger Cactus in a future state, and that would actually allow you to interoperate across all these different DLT networks as well. You could have one platform to deploy and manage the DLT networks, and then another Hyperledger framework to integrate those different DLT networks. So, if you're not catching on to the theme: the idea here is that we want to bring down those barriers, organizations being worried about choosing the wrong DLT platform or choosing the wrong network, and know that there are consistent ways we can bring these things together. I fully understand there are trade-offs when we do that, and a whole bunch of other things come into play.
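The "single Git repository, multiple DLT networks" idea can be pictured roughly as follows. This is a hypothetical sketch: the file names, layout, and fields are illustrative assumptions rather than BAF's actual repository structure or schema.

```yaml
# Hypothetical repo layout (as comments) plus one network definition.
# Names and fields are illustrative, not BAF's actual schema.
#
#   my-networks/
#     fabric-network.yaml      # Hyperledger Fabric consortium
#     corda-ent-network.yaml   # Corda Enterprise network
#     indy-network.yaml        # Hyperledger Indy identity ledger
#
# Each file would be consumed by the same Ansible-driven pipeline,
# producing an independent network per definition, e.g.:
network:
  type: corda-enterprise
  version: "4.x"
  env:
    type: production
    proxy: ambassador
```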
But if you start to think about how we bring the Hyperledger greenhouse together, and how we bring this consistency of working together across different ecosystems, this is really a big focus of ours: trying to make it easier for everyone to do that and allow the entire marketplace to accelerate and move faster. I want to let you speak a little bit more, Shivajit, about Hyperledger Indy, obviously a very different platform, focused on identity, and what it's intended to do. Were there any challenges or differences as we implemented Indy versus some of these more general-purpose DLT platforms? So for Indy, if I look back, there were a couple of challenges, particularly regarding Indy key management. There was no out-of-the-box solution provided, so we had to manage that ourselves: we created an image, and we are using it for all the key management. There were also challenges in how the communication between the various nodes operates. Indy, as you may know, does not support DNS, so we have to provide static IPs instead. We had to take care of things like that, come up with our own solutions, and in fact had to configure our Ambassador proxy to support them. Overall, in terms of the general flow and the consistent way of deployment, it remains the same, but we had to create some features and charts very specific to the requirements and limitations of that particular DLT network.
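To make the Indy networking point concrete: because Indy pool transactions record raw IP addresses rather than DNS names, node traffic is typically exposed over plain TCP at a fixed, pre-reserved address. A sketch of what that might look like with Ambassador's TCPMapping resource follows; the names, ports, and IPs are assumptions for illustration, not BAF's actual configuration.

```yaml
# Hypothetical Ambassador TCPMapping exposing an Indy node's
# node-to-node port over raw TCP. Names, ports, and IPs are
# illustrative assumptions.
apiVersion: getambassador.io/v2
kind: TCPMapping
metadata:
  name: indy-node-1-node-port
spec:
  port: 9711                  # external port recorded in the pool ledger
  service: indy-node-1:9711   # in-cluster Indy node service
---
# The proxy Service is pinned to a pre-allocated static IP,
# since Indy advertises IPs, not DNS names.
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # pre-reserved static IP (example value)
  ports:
    - port: 9711
      targetPort: 9711
```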
Yeah, and for people who aren't familiar with Indy, it helps to understand that Indy is just a small part of the decentralized identity story. A lot more of the functionality is actually built into the wallet, where people have control over their own identity. So Indy is just a small part of that overall solution, and hopefully people who are familiar with and part of the Indy project would back those statements up. So we're going to talk about one more. It's one of the newer platforms in Hyperledger, but we were very excited to get Besu in as well. I think we got involved in trying to bring Besu in very early, right after it joined Hyperledger. Do you want to talk a little bit about what we were able to do there, how well that went, and getting it on board into BAF? Right. I mean, as you said, Mike, we started with Besu quite early, though it was our latest addition to BAF. What you see here is that it's quite basic right now, and we're still developing those features. We are trying to understand the different requirements and what the community needs, and trying to make it more consistent with the other platforms we have. So we're still working on the base network setup, with things like enabling bootnodes, and also upgrading the images and charts. So Besu is currently, I would say, a work in progress. Great. Thank you. Okay, well, let's take a little snapshot. Again, any questions, please throw them out. We've had great success from an Accenture perspective with BAF; like I said, we're using it with a number of our customers.
And we've had great success, and I think the Hyperledger Labs program overall is an often overlooked capability within Hyperledger. You know, you don't have to be a top-level project in order to have something open-sourced under Hyperledger. Our original intent in open-sourcing this in Labs was to see if there were other people and other organizations that were interested in this type of solution and wanted to contribute. So you can see we personally have had five-plus customer implementations where this is being used in production. We have had great successes in terms of reducing the time it takes to set this up while conforming to a strong reference architecture. And in the open source community we've had a lot of people, though I'd say a lot more experimentation than contribution at this point. So for me, the ultimate success here would be if other people and other organizations think this is valuable as well and want to contribute. The goal is to accelerate the entire ecosystem, not just accelerate things for Accenture, and the best way we can do that is to have other organizations contribute and participate. We have had very active participants in terms of use, I'd say, but we'd love to see more contributors. There are so many features we'd love to do that we can't get to all of them, so we'd love to see more contributors there. And the Hyperledger staff have been really great, even for a Labs project, with the marketing and the events and getting the word out that this even exists as a platform; I think this webinar is a great example of that as well. Next, do you want to talk about Fabric again, the most popular platform that we've seen, especially amongst the Hyperledger community, in terms of use of BAF? Do you want to talk a little bit about what we have in the platform today and where we're going for Fabric?
So, what we have already implemented is a Hyperledger Fabric network with version support for both 1.4.4 and 2.2; 2.2 has been our latest addition for Fabric. This was also based on one of our customer requirements, and because Fabric 2.x brought a lot of new features and changes which we wanted to incorporate looking ahead, and we found value in that. So we have added that new version. We have support for both Kafka and Raft ordering: Raft for both versions, and Kafka for 1.4.4. In terms of network operational features, we have the addition of peers, orderers, and channels, as well as the removal of an organization. We have Hyperledger Fabric integrated with our reference application, our supply chain application, which is a five-organization consortium, and it supports both Go and Java chaincode. If you look at our roadmap, and as Mike also mentioned previously, what we are currently focusing on is how to make the network more operable. How can we add more operational features? Our priority is on that. So if you see, most of the roadmap items concern operational features, like enabling the addition of new peers to an organization, designating an anchor peer, adding organizations to the consortium, and channel management: removing and adding channels, etc. Also, right now in Fabric we have multiple orderers, but all of them are part of a single organization, so we are also looking to enable support for orderers from multiple organizations in the network. So yeah, most of what we have planned going forward is towards operational feature enhancements. So, Mike, I think, on to the next one.
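To make the version and consensus options concrete, a BAF-style Fabric network definition lets you pick the Fabric release and the ordering service per network. The fragment below is a hypothetical sketch; the field names are illustrative assumptions and may not match BAF's actual network.yaml schema exactly.

```yaml
# Hypothetical excerpt of a Fabric network definition.
# Field names are illustrative, not necessarily BAF's exact schema.
network:
  type: fabric
  version: 2.2.0         # or 1.4.4 for the older supported line
  consensus: raft        # raft on both lines; kafka only on 1.4.4
  channels:
    - channel:
        name: supplychainchannel
        chaincodes:
          - name: supplychain
            language: golang   # Go and Java chaincode are both supported
```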
Yeah, and we are keeping this consistent across all the DLTs. Here also, if you see, with the other DLTs our focus lies particularly on upgrading those platforms to the latest supported versions, plus some of the operational features. For Besu, as I said, it's quite new and we are still trying to get it into a more complete network, with things like enabling bootnodes and node discovery, which are available in the latest versions of Besu. Those version upgrades will let us build and enable those features on the network. Also, we are planning to integrate the same sample supply chain reference application with Besu as well. Similarly for Corda, we have operational features planned like supporting multiple notary organizations, supporting multiple nodes from a single organization, and also the removal, or certificate revocation, of a particular organization from the network. With Indy, I guess we are quite stable with the deployment we have right now; what we have planned is to look at different operational or additional features we can provide, such as various language support for the wrappers, additional database support, and version upgrades. So yep, that's where we are on features currently. Great. Thank you, Shivajit. I think that takes us to the end, where we're opening up for questions; we've had a quiet group so far. Are there any questions out there? They don't have to be on Blockchain Automation Framework, but preferably. Well, we have three more minutes, so we don't really have that much time for questions, but: are you planning to submit BAF at some point as a project, just like Cactus, or do you think you'll be staying in Labs for a while?
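On the Besu bootnode point: in Besu, bootnodes are supplied as enode URLs (a node's public key at an IP and port) so that new nodes can discover peers. A hypothetical Helm-values-style fragment for wiring that into a node chart might look like the following; the chart structure is an assumption for illustration, and only the enode URL semantics come from Besu itself.

```yaml
# Hypothetical Helm values fragment for a Besu node chart.
# The structure is assumed; only the bootnode semantics
# (enode URLs of well-known peers) come from Besu itself.
node:
  name: validator-1
  p2p:
    port: 30303
    discovery: true
  bootnodes:
    - "enode://<node1-public-key>@203.0.113.20:30303"
    - "enode://<node2-public-key>@203.0.113.21:30303"
```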
You know, the technical steering committee looks at the longevity of projects and their ability to be maintained, and I think one key aspect that we don't have right now is other organizations contributing. That would be what we need in order to go to a top-level project. I think everything else is pretty much in place, but knowing that this isn't just tied to Accenture is key before it becomes a top-level project. Yeah, well, that makes sense, and I do hope that this webinar and all the efforts you're making right now to get broader community participation will pay off, because it's really worth it. It's a great project. Well, thank you for this. If there are no more questions, I'm going to close out with a couple of announcements, but if you have questions, please pop them in the Q&A or raise your hand; this is the time to ask. In the meantime, the next webinar will be in a month's time, on January 20th, and it will be a different format. We are starting a very new format, an hour or so, this time with SmartBlock Laboratory. We want to have more interactive, guided discussions; you can expect demos, tutorials, and brainstorming. It will hopefully no longer be just one-way communication; we want people to participate. So if you want to take part in our experiment, please sign up for the webinar and join us on January 20th. And please do get involved. As today's presenters said, we are looking for more contributions; we want you to contribute, and that doesn't mean only coding. It can be testing, it can be participating. Whatever it is, I'm sure you have expertise that will help Hyperledger, and you will have fun. So go to our website, go to our wiki, join the mailing lists, and let's see where it takes you. For now, thank you for joining us, and if you have any more questions or want to get in touch with today's presenters,
email membership@hyperledger.org. And thank you both so much for presenting; it was really interesting, and I really appreciate your time. Thanks for having us, this was really fun. Thanks.