So for people who are just joining, we'll start a few minutes past three. Okay, it looks like we have enough people, so let's get started. Thank you, Virat and David, for being with us again from IBM. Maybe you can introduce yourselves briefly and recap the last session. This is a follow-up AMA to that session, which was a great presentation; people loved the questions and the discussion, so feel free to discuss in this session too. Over to you, Virat.

Perfect, thanks. And thanks for organizing and arranging the session. So this is the AMA session for the console, the Fabric Operations Console. Let me give people an introduction first. We covered it last time, not too deeply but at a reasonable level, in terms of how to set it up and run it. So I'll take about 10 or 15 minutes at the start to recap the console and some of the things we covered last time, so people have context, and then we'll dive into whatever questions you have. I'm Virat Ramamurti, one of the lead architects for the console, and I work for IBM. David, do you want to introduce yourself?

Hey, I'm David Huffman. I work with Virat. I've done a lot of the back-end work and the API sort of stuff for IBP.

Right, thanks, David. Anything API-related you see in the code came from David. So, on the console again, quickly: people currently use Hyperledger Fabric to set up their orderers, peers, CAs and so on, and what the console helps with is managing the Fabric side of things through an easy-to-use interface.
I'm not going to go through each of these bullets, but you can manage your Fabric: create channels, install chaincode, use the 2.0 lifecycle, update channels, and a lot of those Fabric operations. And a question people always ask: can I create components? No, you cannot. That's one of the things we expect to have an interface for at some point, but at the moment it's not something you can do with the open source; there are other ways to do it.

So, a high-level architecture diagram. You've got the browser, and the console is what we're talking about in this context. The console talks to your peers and orderers through a gRPC-web proxy. What the proxy does is translate the request: you can submit it over HTTP, and it talks back to the orderers and peers in gRPC, like in this picture. We also proxy the calls to configtxlator, the config translator. The reason is that you need to have all of it on the same domain, so for local development and a simpler setup you don't need something like nginx running in front; instead the console proxies the call over to configtxlator. And the backing database is CouchDB in this case. So that's the quick architecture diagram.

In terms of running the console, it's fairly straightforward; it's basically a three-step process. The first part is setting up the network itself. In this case we use the test network from Fabric.
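The architecture he just described could be sketched as a Docker Compose file like the fragment below. This is purely illustrative: the service names, images, and ports are my assumptions, not the repo's actual compose file.

```yaml
version: "3"
services:
  console:          # the UI plus the Athena back end
    image: fabric-console:latest       # image name assumed
    ports: ["3000:3000"]
    depends_on: [couchdb]
  couchdb:          # backing database for console data (not wallets)
    image: couchdb:3
  grpcwebproxy:     # gRPC-web proxy sitting in front of peers and orderers
    image: grpc-web-proxy:latest       # image name assumed
  configtxlator:    # channel-config translation; the console proxies calls to it
    image: hyperledger/fabric-tools:2.2
    command: configtxlator start
```

The point of the sketch is just that four processes cooperate: the console itself, its CouchDB, a gRPC-web proxy per Fabric endpoint, and configtxlator on the same domain via the console's proxy.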
So if you look at the setup network script, you'll see all the commands it uses, like cloning fabric-samples, checking out the right version, and provisioning it; it's been fairly consistent. So that sets up a network. The second part is setting up the console, which is the stuff we just saw: it will create the console, CouchDB, the gRPC proxies, and configtxlator, all in Docker Compose. And the third part is the create-assets step. What that does is take the existing running test network and create JSON files that the console can consume, so you can simply import them and the console is tied to the Fabric components. That's the three-step process. Once you do that, you can go through the steps to create certificates, and we'll briefly see some of that.

What I did in advance is run the setup network script, as you can see, to save some time. This is all just going to fabric-samples and creating a network; nothing console-specific. The next step is console up. This goes through Docker Compose to download the console image (it was already downloaded here) and start it up. The last step, the third step, is creating the assets. This connects to the existing test network, massages the files, creates the JSON files, and puts them in a zip so you can simply import them into the console. At this point, that's all it took.
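The three-step flow he walked through can be condensed to something like this. The script names are assumptions based on the walkthrough, not verified paths; check the repo README for the exact commands.

```shell
# 1. clone fabric-samples and bring up the Fabric test network
./scripts/setup_network.sh

# 2. docker compose up: console, CouchDB, gRPC-web proxies, configtxlator
./scripts/console_up.sh

# 3. read the running test network and generate importable JSON files,
#    zipped up as console_assets for the console's import dialog
./scripts/create_assets.sh
```

Only step 1 touches Fabric itself; steps 2 and 3 are console-side and take seconds once the images are pulled.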
Other than creating the network, it should literally take a couple of seconds, depending on whether the image is already downloaded. Once the assets are created, you should be able to simply import them. So you go to localhost:3000; that's where it will be running. The default login is admin and password, and immediately upon login you're required to change the password. One pointer: some people have indicated that, at least for local development, they don't want to go through this process; they want to go straight into the console without changing the password on a fresh install. That's something we'll add in the future, but for now you need to change the password. Once you change it, you're in.

This lets you import an existing peer, orderer, or CA. Again, we talk to the back end using gRPC-web, so as long as you have gRPC-web configured in front of your orderers and peers, you should be able to talk to them. So I'm going to import, and I'll spend maybe four minutes on this. Within the console there's a work area, and in there is console_assets, the zip file we just created. You import it, and that imports the components. If you go to the nodes view, you should see all the components from the test network: the peers, the orderer, the CAs. You can go into the CA and create an identity, which is basically the certificates to talk to the CA. (Oops, incorrect password.) So I'll enroll an identity: org1admin, org1adminpw, the default for the test network. And we'll do the same for the orderer quickly.
I'm not going to go through the second org, but let's connect at least the minimum orderer pieces here so you can see it in context. Again, I'll enroll, creating an identity and certificate for the orderer: ordererAdmin. I'm going through this quickly; it's not super complicated. So now that we've created the identities for the orderer and for org1, we can simply associate them. When you associate, we already know, based on the root cert this CA has, which certificates in your wallet are allowed to be used. And notice the orderer's identity now shows up. That's it: we're now able to connect and talk to the test network. We can see it has mychannel, which was already created, and that, using the 2.0 lifecycle, we have a chaincode installed on the peer, along with the hash of that chaincode and the package ID; you can see all of it from here. Let me associate the orderer too, using the certificate we just created.

So this is a view of your system channel, where you can see that the orderer has one admin org, and then you have the consortium members, org1 and org2. And in the consenters you see that we have just one node; if you have multiple nodes in the cluster, you'll see all of them listed here, but again, just one node in this case. One last thing before I stop for questions: you can go into the channel and see all the channel capabilities, orderer capabilities, and application capabilities, plus the block height, and you can view transactions. I won't go through the whole flow, but you can easily follow the install, approve, commit process that the Fabric 2.0 lifecycle gives you.
So you can do all of that directly from the console, without going through a command line and knowing what the payload is for the commit process and all that. It makes it super easy to manage your Fabric without the command-line business; that's the key thing. We can go into details if people want specific topics, but this gives you a high-level overview of the console, what it does, and where it connects. So let's open up for questions.

Let's see: how can I connect nodes from different clouds and on-premise to the same network? So the console does allow you to connect to nodes from anywhere, really. As long as you have gRPC-web in front of it, you can import any node. In this console, even though I'm running locally, I can have a node running somewhere else, like on IBM Cloud or Azure or wherever, and simply import it; it's a JSON-driven thing. The term "network" is, in my view, somewhat overloaded: a network is just a logical definition, there's no real network object. You can have one peer talking to two different ordering services, and you can have channels on the same peer from two different ordering services. So the network concept is loose, but the point is: if you have peers, orderers, CAs, whatever Fabric components, they can be running anywhere. As long as you put gRPC-web in front and expose them, you should be able to import them into the console and manage them without too much trouble. Lina, did that explain it, or do you have a follow-up? You can unmute.

Yes, you did. My follow-up question is: do all the nodes need to be on the same version of Fabric?

Not necessarily. It depends; you need the lowest common denominator.
So on the same console you can have 1.4 and 2.x peers and orderers. It's all channel-driven at the end of the day: what do the channel capabilities say? You can have two channels, one with 1.4 capability and one with 2.0 capability. So if the question is, can I manage different nodes at different levels from the same console: yes, you can. We support both the 1.4 lifecycle and the 2.0 lifecycle. Right now I have a channel with 2.0 capability, so you see the things relevant to 2.0, but if you have a channel with 1.4 capability you'll see a slightly different console that shows the 1.4 view of it.

Okay, next: if a participant is not savvy enough to deploy their own node, is there an option for them? Yes, you can manage it for them. If the client is not savvy enough, they can just use the UI; you can host the UI anywhere, the components can be wherever, and they don't have to be on the same network.

Next: what parts of the console is IBM making open source? What IBM is open-sourcing is everything you just saw. What we did not open-source is the operator component that allows you to provision. The IBM supported offering is what allows you to create peers, orderers, and CAs, and manage their resources: you can change the peer resources, the orderer resources, and all that. That's the Kubernetes aspect, and that part is not open source; the console is. That's a good question.

Okay, next question: how does the console wallet work? Is the wallet in CouchDB? So, one of the requirements we had way back when we started console development...
...was to not manage the keys and certificates for a customer. That put us at a trade-off: some customers did not like it, some were okay with it. But what we said was: we won't store or manage the certificates in any database. So when you import or create the wallet, it lives only in the browser's local storage; it's not in CouchDB. One thing we've been considering is integrating with some of the cloud-based solutions, for example IBM Certificate Manager or Amazon's equivalent, so the certificates are maintained elsewhere and we pull them from the provider when you log in, but that's not something you can do today. The answer is: we currently only store it in the local-storage wallet.

Next: when interacting with the network with a newly created identity, how can we instantiate chaincode? Do you need a certificate that already exists on the channel? So, if you have NodeOUs enabled, things are simplified a lot; that's one of the critical things, and it applies to 1.4 also. As long as you enroll a certificate, like we just did, you're set. I did not import a specific admin certificate; I created one using the admin type, and I was able to interact with the peers and orderers. So you don't need the specific admin cert if you have NodeOUs enabled. If not, then you need to make sure the wallet (it's a JSON format) contains the private/public key pairs needed to work with the peer. Nazari, did I answer your question? Okay, I'll assume yes.

Next: is it possible to do basic maintenance with the console, like certificate revocations and renewals? No; that's one of the things that is not in the open source.
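For reference, a wallet identity in that JSON format is a small document along these lines. The field names here are my recollection of the console's export format, so treat them as illustrative rather than a verified schema:

```json
{
  "name": "org1admin",
  "type": "identity",
  "cert": "<base64-encoded PEM certificate>",
  "private_key": "<base64-encoded PEM private key>"
}
```

Because this lives only in the browser's local storage, clearing browser data removes the wallet, which is why exporting identities to a file is worth doing.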
It's part of the operator. Anything that needs Kubernetes or some kind of infrastructure to manage the certificates falls there. There's no CLI, no command, no approach that Fabric gives you for certificate renewal; you need to actually renew the certificate and have it on disk. Since there's no Fabric API for certificate renewal, the console can't do it, and that's where you need the operator. So no: we purely abstract the maintenance of Fabric itself, channels, chaincode, and so on. If you're talking about the CA, you can of course create certificates, identities and such, as we just saw. But anything that touches Kubernetes or Docker directly, where Fabric doesn't have an API, is done by our operator, and that's not part of the open source.

Does the console support HSM? Great question. This goes back to the IBM support offering: there we build certified images that support HSM. The console itself doesn't much care about HSM, even though it does allow you to work with HSM-enabled nodes. But obviously with HSM you're not going to have the private keys locally, so for the console to talk to HSM-enabled peers, you still need a certificate locally. The console doesn't do direct HSM; it doesn't have clients to talk to an HSM, since it's all happening in the browser.

How can you manage external or intermediate CAs? You could have two different CAs, external or intermediate, in the console.
And again, the support offering we have allows you to provision and manage the Fabric CA, the server YAML and all that. Within the console, this happens in the MSP definition when you create one. Let me actually do that: a test MSP. You pick a CA, and we populate it with the root cert for that CA, but you have the flexibility to add your own additional root certs and TLS certs. So if you have intermediate CAs, you can include their certificates too. And if you export it, you get even more flexibility: you can free-form edit it, include multiple certificates, and make up your own MSP definition. So yes, the console does give you the flexibility to use intermediate CAs if that's what you have.

Okay, next question, regarding NodeOUs: does it only work with some operations like install, or does it work on channels? The channels also have NodeOUs. If you go into the console, let me show you. If you go into a channel specifically and look at the channel members, you can see that in this case I have org1 and org2 and both have NodeOUs. So NodeOUs apply on the install side, where the peer's MSP has a config.yaml that dictates your NodeOU configuration, and the MSP definitions on the channel also carry NodeOUs. Let me quickly show you a cool capability you have in the console. Once you drill down into a channel, you can add "debug" in front of the URL, and a new option shows up called "open channel config". This is incredibly useful, because you don't have to go to the CLI to download and decode the config and all that business.
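An exported MSP definition with an intermediate CA might look roughly like this. The field names are assumptions based on what the export screen shows, not a verified schema; the certificate values are base64-encoded PEMs:

```json
{
  "display_name": "Test MSP",
  "msp_id": "TestMSP",
  "type": "msp",
  "root_certs": ["<root CA cert, base64 PEM>"],
  "intermediate_certs": ["<intermediate CA cert, base64 PEM>"],
  "tls_root_certs": ["<TLS root cert, base64 PEM>"],
  "fabric_node_ous": { "enable": true }
}
```

The free-form editing he mentions amounts to adding entries to these cert arrays before re-importing the definition.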
So what I was going to show you: Org1MSP. If you look at the Org1MSP definition and keep drilling down, you see this section within the MSP: the organizational units right there. So part of the channel config is also NodeOUs, and it's configured with enable: true in this case, which is why the console shows that information. And while we're on this page, you can inspect the policies; it's basically the entire channel config dump. You can see, for example, the anchor peer for that organization. You see all of this information, and it's super helpful for debugging purposes. So yes, NodeOUs apply both to the admin of the peer and on the channel.

Decentralized orderers, great question. You can have distributed orderers and peers. Within the console, let me drill down into the orderer. You can see the ordering nodes within the ordering service, and these nodes can be from different clouds and different MSPs. Say, by default, we have the orderer MSP, and you want a second orderer MSP: some other person or organization is managing those orderers. What you can do is add that MSP as an ordering service administrator. They send you their MSP definition, you import it in the organizations tab, and then you add that organization, say Org1MSP in this example (even though it's not really an orderer org here), as a newly added organization that wants to contribute some of the orderers. You simply add them to the orderer admins, and the moment you add them to the list of ordering service administrators...
...they can add their own orderers to the cluster, and you can have orderers from different organizations, and different cloud providers if you want, and manage them directly from the console. So decentralized orderers: yes, that's possible.

Okay, another question: so you're distributing orderers across channels, not particular peers? When you go into a channel (great questions, by the way, love it) you can edit the channel. It's not going to show it in this case, since I don't have multiple orderers, but for the ordering service we do allow you to update the consenter set on the channel; that's the important bit. Since I have only one node in the whole setup, I don't have the ability to do it here, but if you have multiple consenters, you can pick and choose which ordering nodes you want as consenters on the channel. And you can join the channel on just the peers you want, so only certain peers are on certain channels. The point is, you can associate different consenters with different channels, and that's something the console allows you to do. Again, with a single ordering node you can't see that here, but it's possible.

Any other questions? Really good questions, I love this session. Okay, so David, one of the things I know people have asked in the past, so I'm bringing it up: how do I set up the console to do TLS? Because obviously the default is non-TLS. And one pointer I'd like to make:
if you access the console in non-TLS fashion on anything that's not localhost, some real hostname, it won't work properly, because there's a security package, which David can talk about, that will only work when you're in TLS mode, unless you're on localhost. So let me stop sharing for a bit and let David show you how to configure the console for TLS. Okay, David.

Okay, let me set up my screen. All right, so to set up TLS, you need to configure two environment variables. There's a README that talks about all these environment variables and settings, and it's at this path: you can see at the top of the left panel, packages/athena, the env README. Let me scroll down to these two. These are the ones to set for providing your own TLS files for the web server, and they have a tricky behavior: if you set the key path to a key that does not exist, it will just create a self-signed cert for you at that location. So if you already have a proper cert, put it somewhere in the file system inside the repo and point these at it: one is your private key, the other is your certificate file. And if you don't have a cert and just want a self-signed one to get going, point these at a blank or non-existent file, and when the server starts it will create a self-signed certificate. I'll show that. So on the right here, pretend these are the environment variables you would set; I have a script that loads them into my environment, but you'd set them however your OS does it. I'm pointing at this particular folder, and if we look at that folder, it does not have those files in it.
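As a sketch, setting those two variables before starting Athena would look like this. I'm recalling the variable names as KEY_FILE_PATH and PEM_FILE_PATH; verify them against the env README in packages/athena before relying on this.

```shell
# Point the web server at a TLS key/cert location.
# If the key file does not exist, Athena generates a self-signed pair there.
mkdir -p /tmp/console-tls
export KEY_FILE_PATH=/tmp/console-tls/tls.key   # private key (name assumed; check the README)
export PEM_FILE_PATH=/tmp/console-tls/tls.crt   # certificate file (name assumed; check the README)
echo "key=$KEY_FILE_PATH cert=$PEM_FILE_PATH"
```

With a CA-signed cert, you would instead point both variables at the existing files and no generation happens.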
So what we expect is that when we start the server, it creates those files. From this directory we can just do npm start; Athena is the name of the web server, so we're inside the repo at packages/athena. When it starts, it logs a bunch of stuff, and it shows us that it created some certificates; there's a little log at the "silly" level that prints out here. So it started the app on 3000, and there's an "s" at the end of the protocol, so we know it did TLS. If we look up a little, there are logs saying the TLS cert wasn't found, so it's going to generate it. And if I open that directory, we can see the new files; that's good. Over to the browser: it said it was at 3000, and I'm on localhost, so localhost:3000, and, oh man, you have to put https in front. Then of course, since it's a self-signed cert, I get the little warning saying I need to accept it. It's probably good to look at the cert real quick: you know it's one of our auto-generated certs because it has this in the organization field. That looks good, so I'll accept it, and it prompts me to enter the console. Now it's business as usual: I can log in with a user I've created, and I'm in my console. So that's TLS. Again, if you have a proper cert, one signed by a third party, you'd just point the path at that cert in the file system. All right, let me stop the server. Do we have any questions on that? Actually, where is the chat... there it is.

Yeah, I'm looking at the chat, David; no questions here.

Okay, great. TLS done; what else? Let's talk about some of the settings people might want to change. A lot of the default settings are pretty good defaults, but they might not fit every circumstance. The defaults file is here, in packages/athena/json_docs, the default settings doc.
This is where you can see all the default values, but I don't recommend changing them here: if you changed this file and then did a git pull, you might have sync issues. This file is the reference for what we're shipping. If you want to change any of these, you create a config file; the config file is your scratch pad for overriding settings. In my case, I have a config file I'm using, and these are some overrides I've already set. Let's go through some of the ones you might want.

One that gets asked about a lot is rate limiting. On the left here I can pull up more information. There are two rate limiters in this setup: one is for all the APIs the browser makes when you're logged in as a user, and the other is for API keys, so if you're scripting against the server or making lots of different calls, you can set the rate limits independently. If you don't expect any API-key calls, no scripts calling the server, you can set that one low to make it tighter. And if you have a whole bunch of components in your browser and UI, and weird things start happening, and you look into the logs and see you're getting rate-limited, then you can bump this up. Again, you wouldn't edit the default settings; you'd just put the setting in your config file, which is a YAML, like so, and set it to 102 or whatever. That's how you override any setting: take the setting name verbatim, put it in the config file, and the server will read that file on startup and override the defaults. So that's the rate limit; good to know if you're hitting odd problems. The problems you'd see if you're hitting it are most easily spotted in the server logs: you're going to see 429 codes.
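So an override file might look like the fragment below. The setting names are as I recall them from the default settings doc; confirm them there before using, since the name must match verbatim for the override to take effect.

```yaml
# config-file overrides (YAML); names must match the default settings doc verbatim
max_req_per_min: 102      # rate limit for browser/session APIs
max_req_per_min_ak: 25    # separate, independent limit for API-key (scripted) calls
```

Any other setting follows the same pattern: copy its exact key from the defaults file into this config file with your new value.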
And you'll see it being noisy, saying this client has hit too many requests in a minute. Another thing you might want to change is the app port. We default to 3000; this is the internal port the server binds to, so if you want to bind to a different port, just change that.

Then we have a bunch of timeouts for Fabric-related calls that are good to know about; let's look at them in the README. Over here on the left panel where I'm highlighting, these are various Fabric calls and their timeouts for the client side. Some of these have a similar timeout on the peer side or the orderer side, so you want to update both places if you're running into these timeouts, depending on which one you're hitting; all of these settings only change the client side. So if you're in the browser and something times out, you might want to look at one of these settings to bump up the timeout, depending on what's going on, and you might also need to bump it up on the peer if it's hitting the peer process's own timeout.

One you can control is getting a block; that's usually very quick and shouldn't take more than 10 seconds, but if it's having difficulty, bump it up. All of these are in milliseconds, so you see 10,000. If you want to instantiate chaincode, a smart contract (this is the 1.0 lifecycle stuff), that can take quite a bit of time depending on what type of smart contract you're running, so this might be one to bump up; it's definitely one you also want to bump up on the peer, because the peer has its own timeout for letting that process start. There's one for joining a channel, the peer joining the channel. Then there's one for installing chaincode, which is just loading the binary onto the peer's file system. And we have two more for the 2.0 lifecycle stuff that Fabric 2 has.
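Those client-side timeouts go in the same config file. A sketch, with setting names approximated from the README (verify each name there, and remember the peer/orderer side may need a matching bump):

```yaml
# all values in milliseconds; names approximate -- check the env/settings README
fabric_get_block_timeout_ms: 10000       # fetching a block; usually quick
fabric_instantiate_timeout_ms: 300000    # 1.0 lifecycle instantiate; bump on the peer too
fabric_join_channel_timeout_ms: 25000    # peer joining a channel
fabric_install_cc_timeout_ms: 300000     # 1.0-style chaincode install
fabric_lc_install_cc_timeout_ms: 300000  # 2.0 lifecycle install (via lifecycle chaincode)
fabric_lc_get_cc_timeout_ms: 180000      # 2.0 lifecycle get-package
fabric_general_timeout_ms: 10000         # fallback for any other Fabric call
```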
One is installing the chaincode, a different operation than the previous one because it uses the lifecycle chaincode, and the other is getting the package out. So you've got those two options. Then there's a fallback for any other timeout we use; there's a 10-second default for that one.

One more thing to look at is the lockout. The lockout limit is how many failed password logins you can have before that IP gets blocked; I think it's only for five minutes. But if you're getting 401s even though you have the right password, you can see in the server logs that you're being locked out. If you're doing dev work or just doing something funky, maybe you don't care to have a lockout limit; you could bump this up to something crazy, and maybe zero would disable it too. If you're hitting that problem, play around with this value. And I think that was all the settings I wanted to talk about; hopefully that makes sense. Yep. I'll stop sharing. Virat, do you want to talk about something else?

Yeah, maybe. Are there questions from people? We can talk about a lot of different things; I want to make sure we cover what people are interested in. If not, the next topic we can cover is local development. Some people have asked, how do I do local development? So let me cover some of that, because as you try things it may come in handy; I know a couple of people contributed some code, and it would be nice for everybody to know how that works.

So, quickly, the repo structure: within the console we use Lerna, which helps with managing multiple packages in a monorepo kind of scenario. If you look in the packages folder, that's where the main components are: Apollo, Athena, and Stitch. Apollo is the UI component;
That's where all the React stuff is. If you're trying to develop something, if you're learning React or good with React, we would love to have you contribute; it's all React-based. Athena is the quote-unquote backend, which is a Node component. And Stitch we won't cover too much right now; it's slightly advanced, I feel, but what it is, really, is protobufs converted to JavaScript. That's the process where the protocols get converted to JavaScript, and that's how we talk to the peers and orderers using gRPC. We don't use the Node SDK, we don't use the Java SDK; we have our own protobuf mechanism in JavaScript, and everything goes directly from the browser to the backend. We'll schedule a different call for that, but just be aware that there are three different components in play here, and Lerna is the one that's stitching things together. So if you're trying to do local development, if you go to the end of the readme, you'll see a couple of things. You install Lerna; that's the first step. Then you do the bootstrap, which basically does the npm install under the covers. There's the clean, which deletes it all, and then you do the bootstrap. If you're purely doing server-side development, that's Athena, of course, which is this command. And for client-side development it's the dev Apollo command; dev Apollo runs live React and also starts the server, so it has it all set up when you do that. For demonstration purposes, I started lerna run dev, and when you do that it runs on port 4002. And again, it's a clean console. What I was going to show is: how do I know the components?
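Assembled from the steps above, the local dev setup looks roughly like this. (A sketch; the exact script names, such as a dedicated Athena-only script, are assumptions — the readme has the real commands.)

```shell
# Install Lerna globally (it manages the Apollo/Athena/Stitch monorepo packages)
npm install -g lerna

# Wipe previously installed dependencies, then reinstall for every package
lerna clean
lerna bootstrap   # runs "npm install" under the covers for each package

# Full dev mode: live-reloading React UI plus the server, on port 4002
lerna run dev
```

Running the dev console on 4002 is what lets it coexist with a Docker-provisioned console on the default port.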
Where do I get those components to be imported? So once you've provisioned and started the components, if you look under the repo there's a work area, and you should see assets; the asset folder being imported is console_assets. Let's just pick a CA, the org1 CA. What you see inside is this JSON; this is the generated JSON. Of course, when you rerun the whole thing, this is going to go away and a new one will be generated. What you see inside are these endpoints. It's ca_org1 because that's how the service is set up under Docker Compose; that's how the network is named. But if you're doing local development, with the console running locally, you want to be able to reach it directly, so you would change these to localhost, for example. Then, just for demonstration purposes, I'm going to import that specific one: CA import, add a file, and within the work area assets we pick the file we modified under certificate authority, the org1 CA we changed. When importing, you can ignore the location; you can choose a different location, say OpenShift or plain Kubernetes, it's up to you. It doesn't matter; it's just for display purposes, really. And then that's it: the component got imported, and you can click on it and drill down and say, oh, I want to generate a certificate from this CA. Boom. So now you have the component imported and a development system running. Let's say I just want to change some stuff, some message or something.
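For illustration, the generated CA asset JSON might look roughly like this after swapping the Docker Compose hostname for localhost. (The field names and ports here are assumptions about typical component JSON, not the console's exact schema; use the actual generated file in the work area as your starting point.)

```json
{
  "display_name": "org1 CA",
  "type": "fabric-ca",
  "api_url": "https://localhost:7054",
  "operations_url": "https://localhost:9443",
  "location": "local"
}
```

The key change is the URLs: `ca_org1` only resolves inside the Docker Compose network, while `localhost` is reachable from a console running directly on your machine.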
So I go somewhere; it's not a very useful demo, but I just want to demonstrate that you can change it and the changes will be live. I don't know, "test". Change it. It should compile under the covers, live, and then it's reloaded; let me see. Right, there: the test message. So it's live reload; you can quickly do development and check it without going through the whole build and all that. You simply start that Lerna endpoint and you can add a component or whatnot. Source-code-wise, for the UI, like we just saw, it's under packages, Apollo, and then you have all the source code. There's a ton of components you will see; hopefully it's fairly self-describing in terms of the names and what they mean, but you can of course inspect and do the usual things you do with React development to identify the components and make your change. For Athena, the same things apply: there's a file watcher that watches for file changes, so if you go into the Athena folder, where all the server code is, and make whatever change, it should restart the server and you should be able to see it. As for the ports and everything, just to mention: in our setup, the config file David was showing, under env under Athena, there's a dev JSON that points to the config file with all the settings David was showing. This file is set up so that it works locally, so you can have both of the consoles running at the same time.
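As a sketch of the idea (the key names are assumptions, not the real dev JSON schema), a local dev config that avoids clashing with a Docker-provisioned console would point at its own port and database:

```json
{
  "app_port": 4002,
  "host_url": "http://localhost:4002",
  "db_connection_string": "http://admin:password@localhost:5984"
}
```

Because the dev console binds 4002 while the Docker-based one keeps its own port, the two can run side by side against the same test network.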
You can have the one that's provisioned from the Docker image and run the local development one, both at the same time. And the UI I was just showing was connecting to the test network, so you can quickly import the test network into a local development console, not just a Docker-based one, and then develop stuff and try things. I think that's a cool thing to do. Let's see, any other questions?

Right, so: do these messages have internationalization structures? Yes. We don't push the translated content into the public repo, but if you go into Apollo and then source assets, there's i18n, internationalization; you can add other language-based folders, push the message bundle there, and you will have your translation. So yes, that is possible; it's built in, we just haven't had the time to do that part of it.

That's cool. Maybe just a time check: we're almost at the top of the hour. So thank you so much, Virat and David Huffman, for contributing your time and answering the questions. For the audience, please go to the meetup page and leave your comments on what topics or questions you have, and Virat and David can pick them up and see if there's a future session we can schedule. Certainly, as Virat mentioned, there are so many things we can talk about. So please feel free to fire off your questions after you try out the console yourself. Yeah, thank you guys, this was a really good audience. I enjoyed the questions, and if you have more questions, of course, leave a comment in the meetup and we'll follow up with you. Thanks everybody again. Bye. Yeah, awesome. Everyone have a good day, bye. Bye.