Okay, you're live. Perfect. Well, everyone, thanks for joining this great session today. I'm John Carpenter; I run the Hyperledger meetup group, and we're doing this as a network meeting today. We have a great set of guests with us, and we're going to be talking about accelerating the path to production with Fabric topologies. We've got Taylor and Alex, and I'll tell you a little bit about their backgrounds, but they're really experts in Fabric topologies. Alex spent the first half of his career architecting and building mission-critical systems for trading, settlement, and risk management of derivatives at one of the largest options and futures clearing houses, and Taylor has worked in the financial software industry developing cloud services for security modeling, management, and accounting. We're going to have both of them go through a very nice interactive presentation today, and if possible we're also going to run a poll via Zoom, so you can weigh in; we want this to be very interactive. The main thing I'll say is, as we move through the presentation, please feel free to use the chat to post any questions you have, and if we can work them in right within the presentation, we'll do that. Then at the end of the presentation we'll definitely do a nice Q&A as well. At this point I'm going to turn it over to Taylor. Taylor, why don't you walk us through this great presentation?

Thanks, John. Let me share my screen here. Okay, just let me know if everybody can see this.

Yep, looks good.

Okay, great. All right.
Well, thanks everyone for joining. This presentation is about Hyperledger Fabric topologies: experimenting with and testing Fabric features. To give you a little overview of the whole presentation: the first part will be a short summary of Alphaledger and its history with Hyperledger projects, then we'll go into why we built these topologies. After that we'll look at the repository and go into a little detail about how it's structured, and then we'll go through each of the topologies and give a short explanation and some details about each one. Let me just move this over.

So, about Alphaledger. Alphaledger is developing blockchain-based infrastructure for the fixed-income market to originate bonds, loans, and securities. Alphaledger was founded in 2019 with the mission to bring transparency, traceability, and increased access to the bond market. When we started with Hyperledger Fabric, it was on AWS, running on their ECS service. We have a smaller team, and we found there was this other service, Amazon Managed Blockchain, so we switched over to use that for a period of time. That was good for smaller teams, since they manage a lot of the services for you. That was all in TypeScript, compiled to Node.js chaincode. We eventually got better resources, Alex in particular, so we were able to host our own deployment, and we've been transitioning to be fully self-managed on the latest versions. Some of the current things we're working on include adding external organizations, that is, external nodes that are part of organizations running on cloud providers that are not our own, or even in accounts that we don't own.

Okay, so why did we decide to build these Fabric topologies?
The first reason: we were trying to build a production-ready system, so we wanted to address some important points. One is secure communication between the nodes in the network; SOC 2 and the finance industry require encryption at rest and encryption in transit, so we had to make sure that was supported. We also had to look at load balancing and failover to ensure, basically, the five nines of uptime that's one of the goals we're trying to achieve. So we need things like multiple peers and orderers running, and to understand what happens if they fail, things like that. We also wanted to understand how the world state data changes between different organizations, and to leverage the newer features of Fabric, to be able to say: okay, Fabric released a new feature, how do we go about integrating it, or even just, what does this new feature do? Those are the main points for production readiness.

The other point is that there are lots of online tutorials and lots of documentation, but one of the main issues is that they're scattered and they're on different operating systems. Sometimes a blog will be using Mac, sometimes Windows, sometimes Linux. So we said: let's take all these documents and put them together into a single Docker Compose project. That was another reason. And the third point: we wanted working examples that are easy to set up and experiment with.

So I'll hand this slide over to Alex. Alex, I think you're on mute.

Sorry, can you hear me now?

Yeah.

Cool. And you can see my desktop, correct?

Correct.
All right. In terms of the repository for the Fabric topologies, we're going to go over its organization a little bit: how the topologies are organized, what the main components are, what you need to start up a network and tear it down, and also some of the basic, or most important, configurations.

This is the same code base you would find in the GitHub repository; I'll navigate to that in a few minutes too. At the top we have a topologies folder, and each of the topologies is basically self-contained. What that really means is that everything needed for running a topology is located in that topology's folder. The only exceptions are some very basic common components that we use across all topologies. Namely, as part of Docker Compose, we have a network that is configured with a different name for each topology; that's the first Docker Compose base file. Then we have a shell-command service that is used across all topologies. Any commands we need to run, say committing some chaincode, invoking some chaincode, registering users, things of that nature, even those run inside a Docker container. So nothing runs on the host, if you will; everything is Docker-based. The two prerequisites for running the topologies are to have Docker and Docker Compose installed.

Now, inside a topology folder you have two basic scripts: a setup script and a teardown script. The setup basically has all the commands necessary to bring up the network.
In fact, you can see right now that I just brought up one of the networks; topology number four just came up. We'll go a little more into what that means, but in essence, if a topology has started up, you'll see at the end that it completed; you'll get a message of this nature, and it also shows at the end that we did a get command, an invoke, to read the assets from the network. Let me just really quickly start up another one while we go back to the slides.

As I mentioned, there's no need for any parameters to be passed along; just run the setup and it will start the network. The setup covers the network itself, with all the nodes, orderers, peers, and Fabric CAs, things of that nature, but also the deployment of chaincode and then the execution of that chaincode at the end. In essence, we want to make sure the network is running fine, and that's why we deploy and execute chaincode at the end. What we saw at the very end was an invoke command to do a read of what the chaincode wrote to the network.

The network basically has one single channel that all peers join for the different organizations. There is one topology in which orderers also join the same channel; we'll get to that last topology at the end. That's the topology that uses the channel participation API.

Just to come back to some of the other configurations: the setup, again, runs all the scripts necessary to set up the network, and the teardown will basically take everything down for that topology.
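As a quick sketch of what such a session looks like in practice (the folder layout and script names here are assumptions for illustration, not verified against the actual repository):

```shell
# Hypothetical session; topology folder and script names are assumptions.
cd topologies/t4
./setup.sh            # brings up CAs, peers, orderers; deploys and invokes chaincode

# A second topology can run concurrently from another terminal:
cd ../t5
./setup.sh

# Tear one topology down again (pulled Docker images remain cached):
cd ../t4
./teardown.sh
```

The point being made in the talk is that setup takes no parameters and finishes with a chaincode invoke, so a successful run doubles as a smoke test of the whole network.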
The only thing that will remain on your machine is the images that got pulled into your Docker repository, and the reason for that is to allow the next startup to go more quickly. So the only cleanup the teardown doesn't do is removing those images from Docker.

Crypto material for all the components in the network is created in this crypto-material folder, so you can walk through it and see, for each of the components, the MSP folder with the certificates and keys and things of that nature, and the CA certs as well.

In terms of Fabric binaries: we mentioned that we can run multiple topologies at the same time, and those topologies can even run with different Fabric binaries. The way that's achieved is through this ENV configuration file, where for each topology you can adjust the Fabric version you want to run for each of these components. Obviously, if a topology uses a newer feature, for example channel participation, you need to make sure the version you're using actually supports that feature, but other than that, this is the place where you can make changes.

In terms of ports, this is an important one. Just to come back really quickly: when you see a docker compose invocation inside a script, it's actually layering three different Docker Compose files: the base one we mentioned, the one for the shell command, and then the Docker Compose file set up in each individual topology. Inside it, you'll find that the ports are pretty much all commented out. What that means is that, by default, no ports of the running containers are exposed externally to the host.
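A per-topology version file of this kind might look roughly like the following; the variable names and version numbers are assumptions for illustration, not the repository's actual contents:

```shell
# Hypothetical .env fragment: pin Fabric versions per topology.
T1_FABRIC_PEER_VERSION=2.4.9
T1_FABRIC_ORDERER_VERSION=2.4.9
T1_FABRIC_CA_VERSION=1.5.6

# A topology using the channel participation API needs an orderer
# version that supports it (the feature landed in the 2.3/2.4 line).
T9_FABRIC_ORDERER_VERSION=2.4.9
```

Because each topology reads its own variables, two topologies can run side by side on different Fabric releases.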
The reason for that is twofold. One, we didn't want to conflict with any ports that might already be taken on the machine; we wanted to make sure the network starts up without issues due to a port conflict. Secondly, if you want multiple networks running concurrently, it gets a little trickier, because each topology would then have to have its own port, for example for a CA, and things of that nature. So we left it to users to make adjustments as they see fit and open just the ports they need for certain components when they start up a network. In fact, I'll show two examples right now, because they'll come up when we do some testing. In this case, I opened a port for Postgres, because one of the topologies we'll review later deals with storing Fabric CA data in Postgres, and we want to be able to browse some data in Postgres, so we opened that port. And later on, for example, here for OpenLDAP: in one of the topologies we want to be able to access that OpenLDAP server, and therefore we expose its port to the host.

Let me see what else we can cover here. There is a lot of repetition in what the setup script does if you compare one topology to another. The setup script for T3, for example, looks very similar to T1's, with just a few changes. The reason we didn't move these common pieces into higher-level shared components is that we wanted to allow for comparison between topologies. In this case, what I'm looking at here is a comparison between topology T1 and topology T3: T1 is the base one that Taylor will be talking about, and T3 is the one that deals with external chaincode.
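The commented-out ports convention described above might look something like this in a topology's Compose file; the service names, images, and port numbers here are assumptions for illustration:

```yaml
# Hypothetical docker-compose fragment for one topology.
services:
  org1-postgres:
    image: postgres:14
    ports:
      - "5434:5432"      # uncommented on purpose: we want to browse the CA databases

  org2-openldap:
    image: osixia/openldap     # image name is an assumption
    # ports:
    #   - "389:389"      # left commented: not exposed to the host by default
```

Keeping host mappings commented out means concurrent topologies never fight over the same host port; you uncomment only what you need for a given experiment.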
This makes it easy to find out what new files and new configurations had to be set up, or what needed to change, for example in the YAML file for Docker Compose, or in a script, like the setup script I mentioned: what's different between T1 and T3? We think that makes it easier to understand the changes that have to be made, because in some instances there are quite a few, and it makes the differences between topologies easier to see.

So I think we've covered all the basics at this point. As you can see here really quickly in Docker Desktop, if we scroll down and look at the containers: we have topology T4 here, and at the same time we have topology T5 running concurrently. That's what makes it easy to have multiple networks running at the same time.

With this, I'm going to turn it back over to Taylor to start walking through our first set of topologies and explain what's happening inside each one.

Okay, thanks Alex. That was very helpful. Thank you. Okay, how's the screen showing up?

Yep, the screen looks great, Taylor.

Thank you. So the first topology we'll start with is T1.
T1 is considered the base topology; we use it as the root, the starting point, for all the other topologies. Going through these diagrams, on each of the slides you'll see green boxes representing each organization. Within each organization you'll see various blue boxes, which represent the different services running. For example, in Org1 we have an orderer service and a certificate authority service running; in Org2 we have peers and certificate authorities; in Org3 we have a peer and certificate authorities. Within the blue boxes are the individual nodes, the individual services, and each one carries a little more detail: on this peer, for example, you can see it says LevelDB and internal chaincode, and right here it says peer1 CLI. So each of these diagrams is quite detailed.

Under the certificate authorities box you can see two gray boxes: the first says Fabric CA TLS and the second says Fabric CA Identities. The Fabric CA TLS is responsible for issuing the certs for secure communication between the orgs and the member nodes, and the Fabric CA Identities service issues the certificates used for endorsement of chaincode and such. So this is T1.

Next we'll look at T0. T0 is essentially T1 minus one: we've removed Org3, so we've reduced the number of orgs, and we've reduced the number of orderers and peers. We did this most recently just to have an even more simplistic network to start up.

So let me go to the next one; this one is more interesting. In T2 we have additional orderers and peers.
We've made some other improvements, and I'll get to those, but we have additional orderers included under the other org, Org2, and on top of that we've also added NGINX proxies. As you can see, we have orderers in Org1 and Org2, so when you want to reference the orderer, in the chaincode for example, it makes sense to have a proxy rather than having to determine which orderer to send the transactions to. There are also peer proxies: this topology includes an orderers proxy and a peers proxy, so both the orderers and the peers in this topology are being proxied.

Just to show you a little of what that looks like, we can go into the code files here and exit this. So we'll go to T2, into the configuration, and you can see right here there are orderers proxies and peers proxies. For example, here's the orderers proxy configuration for NGINX. We have it configured as a pass-through, and it's using SSL, so it's secure, and it's just passing the communication to whichever orderer we have configured here. We do the same thing for the peers; each org has a proxy for its peers.

Oh, and also, if you want to know how the NGINX servers are configured in this topology, that's under containers, orderers; here's the orderer proxy container.
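An NGINX front end for Fabric orderers, as described above, can be sketched roughly as follows. This is a hedged illustration, not the repository's actual file: the service names, ports, and certificate paths are assumptions, and Fabric traffic is gRPC, which needs HTTP/2:

```nginx
# Illustrative orderer proxy; hostnames, ports, and cert paths are assumptions.
worker_processes auto;
events { worker_connections 1024; }

http {
    upstream orderers {
        server orderer1-org1:7050;
        server orderer2-org1:7050;   # failover target if the first orderer is down
    }

    server {
        listen 7050 ssl;
        http2 on;                              # gRPC requires HTTP/2
        ssl_certificate     /etc/nginx/tls/server.crt;
        ssl_certificate_key /etc/nginx/tls/server.key;

        location / {
            grpc_pass grpcs://orderers;        # re-encrypted onward to the orderer set
        }
    }
}
```

Clients then point at one stable proxy address instead of choosing an orderer themselves, which is the load-balancing and failover goal mentioned earlier.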
So you can see this is where the Docker image is being configured.

Looking at the next topology, T3: this is the T1 topology plus an external chaincode server. As you may remember, in T1 all the chaincode was installed internally, internal chaincode; in this topology, for each of the orgs that have peers, we've installed the chaincode in external chaincode containers. Let's look at that real quick. The only difference in this diagram, as you can see in this box, is that we have the external chaincode right here in orange on each of the Org2 and Org3 peers, and the peers no longer have internal chaincode.

This one I have up in Meld to look at real quick. This is an example of how, using Meld, we compare T1 against T3 when I want to know what external chaincode requires. Well, we can see there's a buildpack; there are some modifications to the chaincode; there are some configurations you have to add for each organization that has the external service. There's a core.yaml file that needs to be updated: if I go into this file, you basically have to point to this buildpack and say that you're using the external builder. Then we have the external chaincode Docker container definitions here. We've modified the peer configuration a little bit, and some of the scripts have changed, so there are, I would say, slight differences in how the chaincode gets installed.

Let me make sure I covered everything. Yeah, I think that's all I wanted to say for T3. Okay, so let me hand it over to Alex for T4.

Let me start sharing again. All right.
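The core.yaml change being described is Fabric's external builder mechanism. A minimal sketch of the relevant fragment follows; the builder name and path are assumptions, not the repository's actual values:

```yaml
# Illustrative core.yaml fragment for external chaincode builders.
chaincode:
  externalBuilders:
    - name: external-builder
      # Directory containing the buildpack's bin/detect, bin/build,
      # and bin/release scripts (path is an assumption).
      path: /opt/hyperledger/external-builder
      propagateEnvironment:
        - CHAINCODE_AS_A_SERVICE_BUILDER_CONFIG
```

The peer then treats a matching chaincode package as a pointer to a running chaincode server (typically via a connection.json giving the server's address and TLS settings) instead of building and launching the chaincode container itself.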
So, T4. This one deals with the configuration of an LDAP store. If you've worked with Fabric CA, you know that, as Taylor mentioned, there are two sets of crypto material being created: some for TLS communication, others for identities for each of the nodes in the network. Now, when you register those identities in Fabric CA, you provide a user ID and a password, and we'll see later that this ends up being stored in a database that Fabric CA works with. That's okay for a good number of environments, but in some places, like larger enterprises, security departments, infosec, they want to use a single store, the enterprise store, for users, principals, service accounts, and things of that nature. That typically tends to be an LDAP-compliant store, and probably the most popular of those is Microsoft Active Directory.

Now, there is support in Fabric CA whereby, instead of having to register these identities with Fabric CA, you can hydrate or seed them, if you will, from an LDAP store; you can configure Fabric CA to connect to an LDAP store. What we wanted to do here is basically see how that works with an LDAP-compliant store. To make deployments easy, we chose OpenLDAP, which is an LDAP-compliant store, and we'll look at it with a browsing tool, an LDAP browser, to see what data gets stored in OpenLDAP.

I'll go over some of the new configurations so you can get a sense of this. If you start up topology T4, as part of the setup you get a container for OpenLDAP, and that container is configured to load, at startup time, organizational units from this LDIF file, and also the accounts that sit under those organizational units.
So this is the base domain, the domain controller, and we'll take a look in a second at what it looks like inside OpenLDAP. When the Fabric CA is configured, there is an LDAP section: if you look at the Fabric CA server configuration, there's a section for turning on LDAP. If you turn it on, you can have the CA connect to an OpenLDAP repository, or any LDAP-compliant store. In this case we basically have a container exposing that port; that's the base domain, and this long string you see is basically a bind user, if you will: the user and the password to connect to that store.

If we look with the LDAP browser, this is basically Org1. We have two organizational units: one for identities, the other for TLS. Identities is probably the more important one, and here you can see the users we loaded from the LDIF file. We don't have a Microsoft Active Directory here, so we're just mimicking something to prove out the concept. Obviously, once you have it working with something like this, it's not that difficult to make a change and connect to an Active Directory.
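The "long string" bind configuration being described lives in the `ldap` section of the Fabric CA server config. A hedged sketch, with placeholder host names, DNs, and password rather than the repository's actual values:

```yaml
# Illustrative fabric-ca-server-config.yaml fragment.
ldap:
  enabled: true
  # scheme://bindDN:bindPassword@host:port/baseDN
  url: ldap://cn=admin,dc=org1,dc=example,dc=com:adminpw@org1-openldap:389/dc=org1,dc=example,dc=com
  userfilter: (uid=%s)     # OpenLDAP convention; Active Directory often uses (sAMAccountName=%s)
  attribute:
    names: ['uid', 'member']
```

With `ldap.enabled: true`, Fabric CA stops accepting its own `register` calls and instead authenticates enrollment requests against the directory, which is exactly the "no more registering users" behavior demonstrated here.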
You'd probably have to adjust the user filter: with OpenLDAP, uid is typically the unique identifier used in the filter, whereas Microsoft Active Directory has sAMAccountName, for example. So there might be some queries and configurations that have to be adjusted, and if you work with your Active Directory administrators, they'll be able to help with that. But the point is, we can see that Fabric CA can indeed connect to this repository, and instead of having to register accounts, the users and passwords actually come from the directory. That can save quite a bit of configuration and make deployments more enterprise-ready. So that's the LDAP configuration. Sorry, I forgot to show here: this is the OpenLDAP that supports both of these organizations.

If we go next to T5, staying with Fabric CAs, let's look at Fabric CA clustering. I mentioned Fabric CA has a relational database to store state, user certificates, and things of that nature; by default that database is SQLite. It can be configured with Postgres, though, and I think MySQL as well. Why would we want to do that? In case we want to provide failover for the Fabric CA component: we can have two Fabric CA nodes working with the same set of users and credentials, using Postgres behind the scenes to store their data. That's not possible with SQLite. With Postgres you can also use a cloud service; you're not limited to SQLite, which you can almost view as an embedded database, not very enterprise-scale. So what we have here is a Postgres server with two databases in it: one that supports TLS, the other that supports identities.
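Switching Fabric CA from SQLite to Postgres is done in the `db` section of its server config. A minimal sketch, with placeholder host, database name, and credentials:

```yaml
# Illustrative fabric-ca-server-config.yaml fragment.
db:
  type: postgres
  datasource: host=org1-postgres port=5432 user=fabric-ca password=capw dbname=fabric_ca_identities sslmode=disable
  tls:
    enabled: false     # enable and add certfiles for an encrypted DB connection
```

Point two CA replicas at the same datasource and they share one set of users and issued certificates, which is what makes the failover setup described here possible.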
I showed earlier how we expose the port for that in T5: we're exposing 5434 for Postgres, and if we go in with a query tool we can see the two databases, Fabric CA TLS and identities. We like to get a sense of what's inside, so you can go and see that there's a users table, with the users set up in it, and there's also a certificates table. This is where the certificates actually get stored: when you do an enroll command and a certificate is issued for an identity, or certificates are issued for TLS, they get sent back as the result, but they also get stored in the database, so they can be organized under a particular CA. That's the data store behind the Fabric CA servers. And I think that's the main point here: again, a running example of how this can be set up.

Moving on to T6: mutual TLS. By default, all the topologies have TLS enabled. However, if you were to have, for example, a network where nodes communicate over the public internet, with no point-to-point VPN connection, in that scenario, and probably many others too, it's strongly recommended that nodes authenticate to each other through client authentication. That's what mutual TLS means: client-authenticated TLS, or two-way TLS, however you want to call it. So what we've done with this one is enable client authentication across all the different components. If I go here and show some examples inside T6: if you look at the scripts, you can see this with the diff against T1.
For example, when we commit chaincode, you see this client-auth parameter. You wouldn't see that in T1, because we don't use client authentication there. What it means is that when this administrator account invokes the command, it connects to peers and orderers and so forth, and it authenticates itself with this parameter plus a certificate and a key file. The certificate gets sent to the server during the TLS handshake, and the server can then verify that it's signed by a certificate authority known to that server. This way we can make sure that only nodes, only entities, that are allowed to connect to those components inside the network can actually do work with them. So again, very important for secure communication across the internet. In fact, in 2.4 I think some of the commands actually require client auth by default, so it's a good thing to learn about anyway.

Next, private data collections. This basically deals with storing data off of the public channel: data that is private to a certain org or a set of organizations. What this topology does is configure some private data collections, and there is some data that gets stored in them. So if we look again at the configurations: when we approve some chaincode, one organization approves this chaincode... sorry, we have to go to T7. In T7, if I look at the scripts again (I forgot to mention: all the scripts get called from the main setup; we just wanted it a little more modularized, which is why they're separated out), in the approve script you can see there is a collections config that gets passed to the approve command.
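The client-auth flags on the peer CLI look roughly like this; the channel, chaincode name, endpoints, and certificate paths are placeholders, not the repository's exact values:

```shell
# Illustrative commit with mutual TLS enabled; all values are placeholders.
peer lifecycle chaincode commit \
  --channelID mychannel --name basic --version 1.0 --sequence 1 \
  -o orderer1-org1:7050 \
  --tls --cafile "$ORDERER_TLS_CA" \
  --clientauth \
  --certfile "$ADMIN_TLS_CERT" \
  --keyfile  "$ADMIN_TLS_KEY" \
  --peerAddresses peer1-org2:7051 --tlsRootCertFiles "$PEER_TLS_CA"
```

The `--clientauth`, `--certfile`, and `--keyfile` trio is what presents the client's TLS certificate during the handshake, so the peer and orderer can reject callers whose certs are not signed by a CA they trust.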
If I'm not mistaken, this is the collections config, so you can see how you define a PDC, a private data collection. In this case it's a PDC that only this member has access to read and write. Then you can see, in the chaincode, and this is the basic chaincode that's part of the Fabric distribution: here we store the assets on the main channel, but we also put data into the private data collection. In this case I guess we're putting the same thing, the asset, into the private data collection; there's a put for private data. So that's how we leverage the private data collection.

All right, let's go next to T8. T8 is a clustered CouchDB configuration; we think this is probably one of the most important ones. As you know, state in Hyperledger Fabric is represented by two types of data storage, if you will. You have the blocks themselves, which get stored with the orderers and can be pulled by the peers and so forth; that's the chain of changes that happen over time. But then you have the world state. The world state represents the current state of any record you've stored on the blockchain network. By default, the database used for storing it is LevelDB, which I again view as more like an embedded database. It works fine if you just want to stand something up and run fast; apparently it may even be a little faster than CouchDB from a performance standpoint. But we like to look at the data inside: maybe balance some numbers, maybe do a query or two.
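A single-member private data collection of the kind described is defined in a collections config JSON passed at approve/commit time. A hedged sketch, with a placeholder collection name and MSP ID:

```json
[
  {
    "name": "Org2PrivateCollection",
    "policy": "OR('Org2MSP.member')",
    "requiredPeerCount": 0,
    "maxPeerCount": 1,
    "blockToLive": 0,
    "memberOnlyRead": true,
    "memberOnlyWrite": true
  }
]
```

Only hashes of the private values go onto the public channel; the values themselves are disseminated peer-to-peer among the collection's members, which is what keeps the data off the shared ledger.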
Nothing fancy in terms of analytics or queries, just getting a sense of what we have as far as data in the blockchain, because otherwise everything can seem like a black box, and it's hard to show a user what's inside it. For our deployments we use CouchDB heavily, and we externalize CouchDB: it runs in its own container. When you talk about Kubernetes and things of that nature, that's very nice, because then you can have dedicated storage, different IOPS; you can treat it more like an enterprise database if you use CouchDB externally. In this case, for peer1, we're using CouchDB instances, and we actually have two of them in a clustering setup. That's quite useful if you want to see how it's done: there are manuals, of course, on how to set up a CouchDB cluster, but this is a working example of one, accessed by peer1 through this CouchDB proxy.

What's nice is that once you have it set up, you can actually go look. I started T8 earlier, and here, let me show really quickly: this is the T8 topology. I mentioned that at the end we do a get-all-assets call; that's what we were seeing at the end, that we got all the assets. Okay, here they are. And now, if I want to see them in the database: CouchDB, we have two nodes, and we remember the ports. Actually, CouchDB is the only one that, even in the repository, already has external ports mapped by default. So these are here: 5987 and 5988. If you log into it, the user ID and password are in the configurations.
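Once the ports are exposed, the world state can be inspected directly over CouchDB's HTTP API. A hedged sketch; the ports, credentials, channel, and chaincode names below are placeholders (Fabric typically names the state database `<channel>_<chaincode>`):

```shell
# Illustrative queries against the two clustered CouchDB nodes; all values
# (ports, credentials, database name) are placeholders.
curl -s -u admin:adminpw http://localhost:5987/_all_dbs

# Mango query against the world-state database on node 1:
curl -s -u admin:adminpw \
  -H 'Content-Type: application/json' \
  -X POST http://localhost:5987/mychannel_basic/_find \
  -d '{"selector": {"docType": "asset"}, "limit": 5}'

# The second cluster node should report the same document count:
curl -s -u admin:adminpw http://localhost:5988/mychannel_basic
```

This is the "look inside the black box" workflow described above: the same assets returned by the chaincode's get-all call appear as JSON documents in the database.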
It's a little bit interesting to figure out how this gets stored: typically the database name is the channel name followed by the chaincode name. Then you can navigate inside and actually see all those documents that we've put. You can add indexes through this management UI, and you can have views. So there are a few things you can do; it's nowhere close to a relational database, but some things can be done in terms of performance tuning and configuration for CouchDB. And of course this is just the same view from the second node. And if you go to the config, you can see it says you're running a cluster with two nodes. For those kinds of settings they don't let you make changes through the UI, so you have to do them through the configuration files. So again, we think this is probably one of the most important topologies in our view. Out of all of them, I'd say mutual TLS and this one are probably the most important for enterprise deployments.

Then going to the next and last one: this is the channel participation API, again a newer feature; I think it was 2.3 where it got fully implemented. What's nice about this configuration is that there's no more need for a system channel. In prior versions you had to have a system channel set up first, all orderers had to join it, and there was additional configuration to perform; a bunch of things that weren't very useful. So it's very nice that the Fabric team came up with this channel participation API approach, which means that just the way a peer joins a channel, an orderer does the same.
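The orderer-side join uses the `osnadmin` CLI against the orderer's admin endpoint; a hedged sketch of what that call looks like, with hostnames, ports, and file paths as placeholders for the ones in the repository:

```shell
# Join an orderer to a channel via the channel participation API
# (no system channel required)
osnadmin channel join \
  --channelID mychannel \
  --config-block ./channel-artifacts/mychannel_genesis.block \
  -o orderer1.org1.example.com:9443 \
  --ca-file ./tls/ca.crt \
  --client-cert ./tls/server.crt \
  --client-key ./tls/server.key

# List the channels this orderer currently participates in
osnadmin channel list \
  -o orderer1.org1.example.com:9443 \
  --ca-file ./tls/ca.crt \
  --client-cert ./tls/server.crt \
  --client-key ./tls/server.key
```

The TLS flags are needed because the admin endpoint is typically protected with mutual TLS, tying this topology back to the mutual TLS one mentioned earlier.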
So you can have an orderer joining two different channels, and it becomes a lot more scalable and in many ways easier to configure. The way you'd see that in what we have here: in org one we have three orderers. So if we go to the org one scripts for joining channels, you might wonder, org one has no peers, so what are we joining? Well, it's the orderers now that are being joined. This is a newer command, osnadmin, the orderer service node admin. And as you can see here, we're joining the same channel that the peers have joined. Again, a working example of how channel participation can be set up.

With that said, I think we have 10 more minutes, so maybe we have questions, Q&A. I'll turn it back over to Taylor and John. They can share, and then we can also answer any questions that come from the audience.

Yeah, that's great, Alex. I think at this time it would probably be best to run the polls first and then take Q&A after that. So David, I don't know if you have those ready to go, but if you do, let's go ahead and do that. Otherwise, over to the Q&A.

Okay, I just launched the poll; we'll give people a minute or two. If you want, I can continue to share, and then for anyone that might not have access to the poll or has difficulty with it, we'll just pop up the questions here as well, and people can answer in the chat if they wish. Is the poll showing up for people? It's showing up on my screen, David, and I can see it. Great, I don't see any votes coming through yet. Looks like we haven't had any, but now we have one person that's voted in the poll.
Yeah, if people would rather leave comments in the Zoom chat or just share their comments by voice, that's fine, certainly. Now we're getting some good responses; seven people have already voted in the poll. I'll flip in a second to the next question on the slide. Okay, we've had nine people participate in the poll so far; let's give it another few minutes, since it seems like people are still voting. Sure, and if we want to do some Q&A while people are voting, feel free; I see some questions in the Zoom chat, so Taylor, let's look at those and try to answer them. If you just want to read them out and then respond to them, that'd be the best way to handle the chat Q&A.

Yeah, Alex, so the first one I saw is from Ramesh; it asks, about the TLS certs, are external CAs allowed, or are only self-signed CAs generated? So in the topologies, everything is, I wouldn't say necessarily self-signed, because the certificates are signed by the CA certificate that comes with Fabric CA. The Fabric CA self-signs its own certificate, and then it uses that CA certificate to sign the certificates for the identities. Now, plugging in an external CA is possible as well; in that case you would have to deploy your own certificates inside the crypto material folder.

And then we have another one from Drago: if the Fabric CA is used, would it still be necessary to have middleware to authenticate? Well, Drago, do you mean nodes authenticating to each other, or an admin account authenticating to a node? Because I can explain; there is no user... Oh, to each other. Okay, so in that case there is basically no user ID and password authentication happening at that point; everything gets done through certificates.
So that's where you would use that client authentication. You enroll, you get the certificates, and then using those certificates the nodes authenticate with each other. By the way, I should mention that when OpenLDAP, or any LDAP-compliant store, is used to hold identities, that removes the need to register identities, but you still need to do enrollments in order to issue the certificates used for communication and to have your MSP folders populated with them. And this is just one approach; I know the Fabric team is actually looking at other, maybe Kubernetes-based, mechanisms to generate certificates, so Fabric CA is not the only way to generate certificates. I want to make that clear, but it's the way that comes by default, and it's a pretty decent way to get started.

Okay, there are a few questions on this one; it's about supporting multiple-host deployments. Multiple-host deployments for which components? It looks like sort of full decentralization, maybe different machines running different components. Yeah, that's what we do in our case. We're actually trying to set up across multiple cloud providers, with network components talking to each other. The purpose of the topologies is to allow users to understand how these nodes communicate with each other and where configurations have to be made. But yeah, it's certainly possible: when you create the initial channel, for example, you have to specify orderers, and those orderers can have their own domains; those nodes can be in different networks, and they can talk to each other over the internet.

And we have a question from Satish.
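To make the register-versus-enroll distinction concrete, here is a rough sketch using the `fabric-ca-client` CLI; the URLs, identity names, and secrets are illustrative:

```shell
# Registration records an identity with the CA; this step is skipped when LDAP
# backs the CA, since the identities already live in the directory
fabric-ca-client register \
  --id.name peer1 --id.secret peer1pw --id.type peer \
  -u https://ca.org1.example.com:7054

# Enrollment is still required either way: it issues the certificates and
# populates the MSP folder the node will use
fabric-ca-client enroll \
  -u https://peer1:peer1pw@ca.org1.example.com:7054 \
  -M ./peer1/msp
```

After enrollment, the resulting MSP folder (signing cert, private key, CA cert chain) is what the node presents when authenticating to other nodes, so no user ID/password flow is involved at runtime.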
I'm not sure I quite understand it: are there any specific advantages to having orderers and peers deployed in a dockerless fashion, without Docker? I mean, even with Kubernetes you still need some Docker-style configuration. But it does mean you can handle the setup with Ansible or Chef and run the nodes on bare metal; there's no requirement to have Docker. What's the advantage? I guess it depends, at the end of the day, on what tools you're familiar with in terms of automation, because that's the biggest thing here: to really get these networks to work, to have large-scale deployments, you need automation and infrastructure as code. There are actually a couple of projects; have you looked at the Hyperledger Fabric Operator? That's a big one that's coming up; it's focused on getting Fabric networks deployed, basically in Kubernetes environments. But if you're familiar with Chef and Ansible and you want to do it on bare metal, there's no problem with doing that.

I think we have maybe two more questions: one about whether there is a structured query language encryption solution, and one about whether there are any production networks currently running with these topologies. So, I don't know of a SQL-based encryption solution. If the question was more about how you query, CouchDB has its own DSL for data queries, but as far as encryption goes, I'm not sure what SQL-based solutions there would be for that. And in terms of the topologies, you can see what we have here. Let's put it this way: we're not aware of anyone using these in production, and they're not really meant to be used directly in production.
They're meant, again, to help people understand how these configurations work through working examples, and from there you take those learnings and deploy as you see fit in your own production settings. But it's important, in our view, to look at some of these features that Fabric comes with; again, we talked about mutual TLS, data storage, and things of that nature. We think those have to be understood well in order to leverage Fabric at its full capacity. Any other questions?

Okay, well, thank you so much. I think we're on this last slide; maybe let's take a look at the poll. Alex, we have the results; on your side, would you like to share them? And Taylor, do you have those results, or I can share them out with the group here. I'm sharing my screen; is it okay to share the screen? So it looks like for number one we have the "adding an organization" steps, and a few other ones here. You know what happens, Taylor; I think this is Zoom, and Zoom itself... Yeah, it's a new one. When I'm telling it to share the results, it says that it's sharing them on my end. What you could do, Taylor, is take a screenshot. Yeah, unfortunately Zoom doesn't share its own interface when screen sharing. Are people not seeing the poll results, though? I told it to share the results. I saw the results. Okay, Brett, can you see it? Yeah, definitely. Okay, good. So Taylor, if you want to read off the overview, or if you need me to, I can do it as well. I've got it now; I can share my screen; I took some screenshots here, so that should do it. Perfect, and I think we're going to need to drop in a minute; another call has this room at the hour. Okay, yeah, if you've got two minutes, Taylor, just run through those quickly; otherwise we're going to have to drop, as David says, or we can also add them as a slide.
Can you see this, Alex, and everyone else? Yeah, we can see it. You might want to just read through it. Yeah, we can share the results over email; Taylor, if you want to send that to me, I can share it out with everybody on the call, and if we want to add it to the presentation, I'll send the updated presentation out to everybody. We can let the room go. Thank you so much to everyone who participated. Yeah, thank you, Alex; thank you, Taylor. Thank you. Everyone have a wonderful day, and thanks for joining a great, informative session. Thank you so much. Bye-bye.