So, as I see, we are actually live. I would like to welcome everybody. This is Hyperledger Budapest. We are a community group dealing mostly with Hyperledger technologies, blockchain technologies of course, and different consortium or enterprise applications of distributed ledger technologies. My name is Daniel Szegő; I have been organizing this Budapest meetup for a while. Today we're going to have one presentation, from me, related to Hyperledger Fabric and monitoring. I promised that we would start in about five minutes, and we are live on YouTube; it is recorded. We will see some code as well — if you mean hands-on, I have a GitHub repo too, if that's what you mean. So let me just start with the presentation. We have this technology called Hyperledger Fabric. It's one of the oldest consortium technologies in Hyperledger, and it's used pretty heavily in enterprise use cases. What we're going to see today is how you can monitor, and what you can monitor, with Hyperledger Fabric. Especially if you have something more than just a demonstration or proof-of-concept Fabric farm, then you need monitoring: you need the possibility to see and check the data, the logs, and the metrics as an operator or administrator of your Fabric farm. So it's not going to be a very theoretical presentation; there will be a lot of practice. I will cover three main topics. If we say "Fabric and monitoring", we can usually think of three different things. The first is Hyperledger Explorer. It's not so much a monitoring tool for administrators; Explorer is rather a blockchain-specific tool.
It's a blockchain explorer: it gives you the possibility to check what is actually inside your ledger, your data and so on. So it's not a monitoring tool in the operational sense, but rather a blockchain explorer tool for administrators, or even for power users. The second main point we're going to cover is Prometheus and Grafana. These are more like the monitoring tools of the cloud-native stack, so they are really for the operators running your farm. And last but not least, we will cover Fabric and logs a little bit. If you have a complete Fabric farm, it's a practical idea — a good idea — to collect your logs somewhere, to have the possibility to analyze them, show some of the results, or even set up warnings, alerts, or emails based on these logs. So these are the three topics we're going to cover today. Again, it's not so much a theoretical presentation; I will rather show some practical code. For this demonstration I set up a very simple code base in a very simple repo. It's on GitHub, so you can just take a look: it should be under my account, Daniel Szegő, and fabric-kubernetes-demo is the name of the repository. The reason is that I used the same repository for a Fabric-on-Kubernetes demo, and that's what I extended with some of the monitoring functionality. So this is the repo I'm using today, and it has two parts: one is Docker, the second one is Kubernetes. We're going to use just the Docker part. In the Docker part you find all the code and the demo that I will present today. It looks this way: this is a pretty much simplified Hyperledger Fabric farm. It has just the minimal amount of components. We have a peer, we have a CouchDB for the peer, and we have an ordering service; it's a one-node ordering service.
And we have a CLI setup as well. The CLI setup basically helps set up your whole farm. If you're familiar with Hyperledger Fabric, setting up your farm doesn't only mean starting the peers, the ordering service, CouchDB and so on. It also requires creating your channel, joining your peer to the channel, installing your chaincode on the peers, approving and committing your chaincode, and so on. These are the steps that will run from the CLI setup component. Just covering the code: as you probably know, Hyperledger Fabric is very much a container-based application. If you have seen fabric-samples, my example is actually simpler than that — the fabric-samples network is pretty complicated, it has three organizations and so on. For this demonstration, and a couple of other demonstrations, I have an even more simplified Hyperledger Fabric structure. As it is a containerized application, one of the main artifacts for this application is the Docker Compose file. If you use Kubernetes or something else instead of Docker, it's different of course, but as long as we stay with Docker, we have the Docker Compose file, and it describes these services in a pretty good fashion. So we have an ordering service — as I've shown you, this is our ordering service, you can see it here. We have one peer as well, and it belongs to one organization. If you are not so familiar with the Hyperledger Fabric infrastructure: we have a physical infrastructure, which is really the peer, the ordering service, CouchDB and so on. But we have something more generalized as well — I would say a logical architecture.
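To give a feeling for what such a Compose file looks like, here is a minimal sketch. The service names, image tags, ports and volume paths are illustrative assumptions on my part, not the exact values from the repo:

```yaml
# Minimal sketch of a one-peer, one-orderer Fabric farm in Docker Compose.
# All names, tags and paths here are illustrative, not the repo's exact values.
version: "3"
services:
  orderer.example.com:
    image: hyperledger/fabric-orderer:2.2
    environment:
      - ORDERER_GENERAL_LISTENPORT=7050
    volumes:
      - ./crypto-config/ordererOrganizations:/etc/hyperledger/crypto

  peer0.org1.example.com:
    image: hyperledger/fabric-peer:2.2
    environment:
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB   # peer uses the CouchDB below
    depends_on:
      - couchdb

  couchdb:
    image: couchdb:3.1

  cli-setup:
    image: hyperledger/fabric-tools:2.2
    # runs channel creation, chaincode install/commit etc. (hypothetical script name)
    command: ./scripts/setup-channel-and-chaincode.sh
```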
And the logical architecture looks this way: we have organizations in our Hyperledger Fabric farm. Organizations basically collect a couple of access rights and other logical things, and of course they are not just fictive organizations; they relate to real organizations as well. And we have a membership service provider. The membership service provider abstracts this access for the whole Hyperledger Fabric farm; it abstracts away from the cryptographic material to a more understandable pattern, I would say. On the logical side we have just one organization, Org1, and Org1MSP is its membership service provider. So this is our peer. We have one CouchDB as well — I think this is our CouchDB. And then we have the CLI setup container, which is here as well; again, it is used to set up our farm. And that's all: that's our Hyperledger Fabric farm, and that's what we want to monitor today. For this farm, we would like to set up the monitoring and some log collection stuff as well, okay? So, again, this repository is on GitHub, so feel free to use it. Just one comment: it's not production-ready. It was built for this demonstration, so if you just copy this code and want to go live with it, don't. It requires some extensions and modifications — things like proper certificates and so on — before you can go live with such code. Besides, it's pretty much oversimplified: oversimplified so that it can be used well in a demonstration, but in a real production environment you probably have a more complicated setup on the Hyperledger Fabric side. Okay. So let me show our first tool. Our first tool is Hyperledger Explorer. If you take a look at the slide, this is Hyperledger Explorer.
Hyperledger Explorer is a project from Hyperledger as well. It gives you a blockchain explorer. If you're not familiar with Hyperledger, you can imagine an Ethereum explorer; it's pretty much the same. The big difference is that it is used in a consortium setup. We are in a consortium setup, which is a permissioned setup, so Hyperledger Explorer must somehow reflect this permissioned setup. For this reason, a blockchain explorer in a consortium setup is a little bit more complicated than a normal explorer on a public blockchain. Despite that, it does the same thing: you have blocks, transactions and so on that you can check. So this is Hyperledger Explorer, and again, it's a separate project in Hyperledger. It was meant to provide consortium blockchain explorer functionality for many different projects. In Hyperledger there's not just Hyperledger Fabric; we have a couple of other ledgers as well, like Iroha, like Burrow and so on. So Hyperledger Explorer was meant to be a tool that provides this explorer functionality for all of the possible blockchain platforms. In fact, at the moment I'm not quite sure whether it supports all of the platforms — I'm sure it doesn't support all of them. It supports Fabric heavily, because they started with Fabric, so you can use it for Fabric very well. I think it works for Iroha as well, and perhaps some other platforms, but I'm not quite sure. So if you have a Hyperledger product or framework and you need a blockchain explorer, you can take a look at whether Hyperledger Explorer is a good choice — whether it supports your platform or not. If you have Fabric, then you can surely use Hyperledger Explorer. So, again, it's a consortium blockchain explorer.
You can take a look at blocks, transactions, your consortium network structure; you can get some network infrastructure, a list of nodes and so on, and the big difference is that it's a permissioned system. So let me show how we can configure such a thing. As we have a Dockerized setup, we need two components for Hyperledger Explorer. One is the ExplorerDB, and the second one is the Explorer itself. ExplorerDB is basically a Postgres database. Theoretically, you could use your own Postgres database directly as well, but it requires a lot of hacking, so for this reason I always use the ExplorerDB. We are with Docker, so we have an image; it's on Docker Hub, and it's the official ExplorerDB image. Again, it's basically Postgres, so theoretically, if you don't like ExplorerDB as it was Dockerized and packaged, you could set up a Postgres database on your own, but it requires a couple of hacks. So this is the Explorer database: we have an image, a container name, a hostname, and some environment variables. One of them is the database name; I'm not quite sure if you can actually change it. You can surely change the username and password — this username and password is required for the authentication between Explorer and ExplorerDB. You have a health check, and you need to have a persistent volume, because this is a Postgres database. Well, actually I wasn't entirely right: a persistent volume is not strictly necessary, it's just much, much better. As you start Hyperledger Explorer, the Explorer synchronizes your blockchain into the ExplorerDB, and if you lose your container, there's a possibility that this data has to be synchronized all over again.
The primary storage of the data is the blockchain, of course. But if you have a big blockchain, it might take hours to synchronize the whole blockchain into the Explorer database. So for this reason it is usually a good idea to have persistent storage for your ExplorerDB. As you see in the Compose file, we have the persistent storage here. Apart from that, it's just a normal container, with a network. The second container is the Explorer itself, Hyperledger Explorer. We have, again, one image from Docker Hub; it's the official image of Hyperledger Explorer, version 1.1.6. I think there are one or two newer versions as well. We have things like container name and hostname, and even more environment variables. What's important is the reference to the ExplorerDB: the Explorer needs a reference to the ExplorerDB host, to the database name, and to the username and password, and these must be the same parameters that were set here at the ExplorerDB. So that's your Explorer. It requires some parameter files and some certificates to be mapped. And it works this way: as it starts, it creates a certificate in a local wallet, so I usually map this wallet to persistent storage as well. It's not strictly required, but I usually do it that way. We have one port; this is the port where you can access your Explorer. And we have a couple of parameter files. What is important is one parameter file, and it is mapped here. We have two files here: one is config.json, and the second one is under the connection profile. The config.json doesn't say much; it mainly says where you can find the other parameter file. And this is the other parameter file, the demo.json, which is more important.
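The two containers together can be sketched like this in Compose. Again, this is a hedged sketch: the image tags, credentials, volume names and mount paths are my own placeholder assumptions, not the repo's exact values:

```yaml
# Sketch of the Explorer + ExplorerDB pair (all values illustrative):
services:
  explorerdb:
    image: hyperledger/explorer-db:latest   # a packaged Postgres
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    volumes:
      - pgdata:/var/lib/postgresql/data     # avoids a full re-sync after restart

  explorer:
    image: hyperledger/explorer:latest
    environment:
      - DATABASE_HOST=explorerdb            # must match the service above
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ./crypto-config:/tmp/crypto
      - walletstore:/opt/explorer/wallet    # local wallet, mapped out as in the talk
    ports:
      - "8080:8080"
    depends_on:
      - explorerdb

volumes:
  pgdata:
  walletstore:
```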
This demo.json is the parameter file for the Explorer. It describes how Explorer can reach your Hyperledger Fabric network, so it is absolutely important — and it's sometimes a little bit of hacking to fill out this file. It describes things like: what's your organization, what's your membership service provider, what's the name of your channel, what's the name of your peer, and some connection parameters. It describes your organization: you have the MSP ID, and a couple of certificates and keys that must be set, like the private key of your organization admin, which can be found under the crypto-config folder — I will show it in a second. You have a peer, or a list of peers. You have, again, signcerts — that's the signing certificate of your organization admin. That's it for the organizational parameters. And you have a peers section that describes your peer. We have one peer, this peer0, and we need the tlsCACerts, the TLS certificate of the certificate authority; again, it can be found under crypto-config. This is a simplified configuration; in a real-life scenario you might need more — for instance, it's not a bad idea to configure the ordering service as well. Just one more comment: if you're familiar with Fabric, I do not have a certificate authority here at the moment. The reason is that I'm going to generate my keys and certificates with a special tool — I will show it in a second — called cryptogen, which generates all of my certificates, so I do not have to worry about certificate authorities and such. Of course, it doesn't work that way in real-life production scenarios. Cryptogen is simple to use, but it should be used only for testing purposes.
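For orientation, a trimmed sketch of what such a connection profile can look like. The exact schema varies between Explorer versions, and every name and path below is illustrative (the `...` parts in particular stand for the long cryptogen-generated directory names):

```json
{
  "name": "demo-network",
  "client": { "tlsEnable": true },
  "channels": {
    "mychannel": { "peers": { "peer0.org1.example.com": {} } }
  },
  "organizations": {
    "Org1MSP": {
      "mspid": "Org1MSP",
      "adminPrivateKey": { "path": "/tmp/crypto/.../keystore/priv_sk" },
      "signedCert": { "path": "/tmp/crypto/.../signcerts/Admin@org1.example.com-cert.pem" },
      "peers": ["peer0.org1.example.com"]
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpcs://peer0.org1.example.com:7051",
      "tlsCACerts": { "path": "/tmp/crypto/.../tls/ca.crt" }
    }
  }
}
```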
So, going back: what's tricky to configure with Hyperledger Explorer is this file, the file that describes your Fabric network. This is the main connection between Hyperledger Explorer and Hyperledger Fabric. If you misspell something, or set something wrongly, then you'll have a failure and Hyperledger Explorer won't start — that's one of the classical errors. What you have to take care of are the different certificates and keys: you need to be sure that you reference the correct certificates, and that the certificates can actually be reached — that they are really there. That's the second mistake you can make when configuring Explorer with Hyperledger Fabric. If you have a wrong certificate, Explorer starts and you get an error message — a TLS error, for instance. So if you have the wrong TLS CA cert, you get an error message that the Explorer can't reach the peer, in the peer log you see some TLS errors, and Explorer won't start. Going back: in my demonstration I just attached the whole crypto-config folder at this path, and in the Explorer config files all of these paths are set based on these parameters. I'm sorry if I'm being very technical; I will show some action right away. So let me just start Hyperledger Explorer. What I have here is a two-step setup for this whole farm. If you've ever seen Hyperledger Fabric, it usually looks this way. First, let me make sure everything is empty — looks great. The first step is generate. Generate creates your certificates and keys, your X.509 certificates. Again, this is the easy part, because I use this cryptogen tool, which is adequate only for testing purposes, but it's fast and I do not have to worry about it.
And the second step I'm going to run starts my Hyperledger Fabric farm itself. It has many steps. If you've ever seen Hyperledger Fabric, it works this way: based on the Docker Compose file, first I start the containers themselves. After starting the containers, there are a couple of steps that must be carried out: creating the channel, joining the channel, installing the chaincode on your Fabric farm, committing the chaincode for the organization, and then making sure the chaincode runs. And my setup works this way: only once my channel and chaincode are set up do I start Hyperledger Explorer. The reason is that — as you may remember from the config file — Explorer tries to authenticate with your Fabric farm and with a certain channel. So if you just started everything at once and the channel was not found, you'd get an error message and the Explorer would go down. So there is timing in this whole thing: first the whole farm has to be started and the channel has to be set up, and only after that can I start — or is it practical to start — Hyperledger Explorer. As you see, this is the marbles chaincode, and it's done. And now I'm going to start Explorer and ExplorerDB. I actually start some other things as well, but if I do a docker ps, I should see an Explorer container and an ExplorerDB container somewhere. And if I take a look, the Explorer is configured under port 8080. So if I go to 8080, there's a sign-in. The username and password I use are configured in this demonstration: it's just admin, and "blockchain forever". So let me just sign in. And here we have the user interface. So, this is not a sales presentation.
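The channel and chaincode steps that the CLI setup runs look roughly like the following. This is a sketch, not a copy-paste script: the exact flags, paths, names and policies depend on your crypto material and are illustrative here:

```shell
# Rough sketch of the CLI-setup sequence (Fabric 2.x chaincode lifecycle);
# channel name, chaincode name, paths and $VARS are illustrative.
peer channel create -o orderer.example.com:7050 -c mychannel \
    -f ./channel-artifacts/channel.tx --tls --cafile "$ORDERER_CA"
peer channel join -b mychannel.block

peer lifecycle chaincode package marbles.tar.gz \
    --path ./chaincode/marbles --lang golang --label marbles_1
peer lifecycle chaincode install marbles.tar.gz
peer lifecycle chaincode approveformyorg -C mychannel -n marbles -v 1.0 \
    --package-id "$PACKAGE_ID" --sequence 1 \
    -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
peer lifecycle chaincode commit -C mychannel -n marbles -v 1.0 --sequence 1 \
    -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
```

Only after the commit succeeds does it make sense to start Explorer, for the timing reason described above.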
If I had a sales presentation, I would start with the user interface, but this was more technical, so I started with code. But after going back and forth with the code, we can finally see the UI. This is Hyperledger Explorer. We get the dashboard at the beginning. It shows how many blocks, transactions, nodes and chaincodes we have. We have some peers, and we have some statistics here as well — blocks per hour, transactions per hour, transactions per minute. We have blocks here, and some fancy charts: how many transactions came from the organization itself. And there's always a second organization, which wasn't on the slide: the ordering service has an organization and a membership service provider as well. You can see that here. You can see the network: it has two items, a peer and an ordering service; we can see both of them, and the organization behind each. We can see the blocks — these are the blocks, with some metadata for each. What's pretty useful is the transaction window, because there you can deep-dive a little into an exact transaction. If you're not familiar with it: Hyperledger Fabric has a several-stage consensus mechanism. The first stage is getting endorsements for your transaction. Endorsement means there's a simulation at the chaincode level, and you get some reads and writes on your ledger. These endorsements are then ordered and put into a block. The point is that for debugging purposes it is sometimes very useful to look at the transactions — which chaincode a transaction came from, from which organization, and so on. But what's even more useful is to look at the real read-write set. So this is your read-write set.
As the chaincode has been simulated, you can take a look at exactly this data — the JSONs that are the atomic reads and atomic writes on your ledger. It is sometimes very useful to look at this information for debugging purposes as well. So I'll just close this. You get some other metadata here, like tabs for the dev channel, the chaincode that has been installed, and some channel information as well. So let me just run one transaction. I have a test transaction: the CLI setup is configured so that I can invoke a transaction as well. I have here this marbles demo — a demonstration chaincode from fabric-samples. You can create new marbles with some metadata. I already sent the init-marbles at the beginning, but what I will do now is initialize one more marble: marble2, with the color red; the next parameter is probably the size of the marble, and then, I think, the owner — that's bob. Okay, so I just run this transaction. It looks complicated, but it's just an invoke with some chaincode parameters and a lot of the stuff that you need for Hyperledger Fabric: the peer, some TLS file, where the ordering service is, what the channel name is, where the root cert is, and so on. And it says: chaincode invoke successful, result: status 200. So we should have a new transaction somewhere here. And we have a new transaction here — you can see there was a new block added to the channel. And I think you get this message not just after each block, but after each transaction as well.
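The invoke itself can be sketched like this. The orderer address, channel name and the concrete argument values (including the size) are illustrative assumptions; the function and argument order follow the marbles chaincode from fabric-samples:

```shell
# Sketch of the marbles invoke shown in the demo (values illustrative):
peer chaincode invoke -o orderer.example.com:7050 \
    --tls --cafile "$ORDERER_CA" \
    -C mychannel -n marbles \
    -c '{"Args":["initMarble","marble2","red","35","bob"]}'
# On success the CLI reports the endorsement result, status 200.
```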
So if I refresh here, I should get this latest transaction — that should be my new one. And if I take a look at the writes: the color scheme is perhaps not perfect, but you see here that there are atomic writes. The atomic write is actually writing one complex JSON, and this JSON is the marble: marble2 is the name, the color is red, the size is the one we set, and the owner is bob. So my transaction made it into the network. Okay. So this was Hyperledger Explorer. It doesn't have much more functionality, but I would say it's pretty useful to have at hand, so you can take a deeper look into your transactions. So let me move to the next item. As you have seen, Hyperledger Explorer is not really an operational monitoring system. It's a blockchain explorer, but it doesn't say much about what's happening in your infrastructure exactly — with your CPUs, or with, say, Fabric ledger read and write times, or other deeper infrastructure-level things. So it's not really administrator monitoring; it's just a blockchain explorer, as it is. For real administrator and operational things, what we have to use is the operations service from Hyperledger Fabric. The operations service is basically an HTTP server running on the peers, on the ordering service, and on certificate authorities as well. It provides some REST endpoints: it has some log-level management, and there's a StatsD or Prometheus integration as well. Okay. So if you have a Fabric infrastructure — again, this is our Fabric infrastructure, with just one peer and one ordering service —
what you need to do is configure this operations service. There are two ways of configuring it. The more complicated way is via the node's YAML configuration: peers have core.yaml, the ordering service has the equivalent orderer.yaml, and certificate authorities have their own config file as well — the name is different, but each provides all of the basic configuration. So you can configure the operations service in this YAML file. This is the operations section: this is the listen address of your operations service, and you can have some TLS configuration as well. And this is the metrics section. The metrics section is for the Prometheus integration: it exposes all of your metrics, like how much CPU you use and so on. So that's the hard way, but there's an easier way as well, if you don't want to do anything fancy and just want to set the basic things. There are two environment variables you need to set: one is the operations listen address, and the second one is the metrics provider. So here, for the ordering service, I set the operations listen address on port 8443, and I set the metrics provider to Prometheus. It's the same for the peer: if I go to the peer, I find the same thing — the listen address is on port 9443, on localhost, and the metrics provider is Prometheus as well. And that's all you have to set in a simple setup. In production environments it's usually more complicated, because you might also want to have TLS configured and things like that.
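In Compose, the environment-variable route looks like this. The service names are illustrative; the variable names are Fabric's standard mapping of the `operations` and `metrics` config keys, and the ports are the ones used in this talk (binding to 0.0.0.0 here so the port is reachable from outside the container — the demo may bind it differently):

```yaml
# Enabling the operations service via environment variables
# (service names illustrative; ports as in the talk):
services:
  orderer.example.com:
    environment:
      - ORDERER_OPERATIONS_LISTENADDRESS=0.0.0.0:8443
      - ORDERER_METRICS_PROVIDER=prometheus
  peer0.org1.example.com:
    environment:
      - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9443
      - CORE_METRICS_PROVIDER=prometheus
```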
But for a simple scenario it's enough. If you have this setup, you can check whether it works. It looks this way: I have two ports, one for the peer, that's 9443, and one for the ordering service, that's 8443. And I have three endpoints: /healthz, /metrics, and /logspec. /healthz is the health check: if you use an external system like Kubernetes, you can check whether your pod or container is healthy — it's a very good endpoint for that. The second one is /metrics, and the third is /logspec. So if I just go to localhost:8443/healthz — let me just copy it, that's probably better. And what I'm missing is the http. Nope, something is missing — the port number is missing. Yeah, you're right. It usually goes better if I don't improvise in my browser. This should be the one. So this is the health check, and it says OK. I'm not quite sure exactly which internal checks it runs — you should refer to the documentation. We had some problems with this once: with the peer, this internal health check was checking whether the Docker host is reachable, which was not practical with external chaincode. But anyway, it checks something internal. This is the peer health check, and it returns an OK status, or a non-OK status if something is failing for some reason. The second endpoint is the metrics. I'll just copy-paste, because I can't type and speak in parallel. If you open /metrics, you can see that we get a couple of metrics.
It's a lot of things: chaincode execution duration, chaincode request duration, completion times, different processing times for your requests and your database, your ledger height, block size, and so on. So you get a lot of metrics, and that's what we use with Prometheus. That was my peer; if I go to my ordering service, it's pretty much the same — not the same, but similar. You see that with the ordering service you get a couple of metrics that are more ordering-service-specific. So this is my ordering service. And I have one more thing, and that's /logspec. Logspec gives your log level. It says that the ordering service is at info, if I'm not mistaken. And if I'm not mistaken, my peer should be at debug — it's simpler, but it says my peer is in debug mode. So the ordering service is at info. You have some environment variables setting your log levels; these are the variables. And for the peer — that's a good question: this one is at info, but there should be another one. I have an orderer log level here as well, which should not be here, but I think it's this FABRIC_LOGGING_SPEC that is shown as the log level, and that's debug. Just one practical thing: you can use this /logspec endpoint not just to read your log level, but to change it as well. Reading is a GET, but with a PUT, if I'm not mistaken, you can rewrite your log spec. So if you want to set your log level without restarting your container, this is the way to do it.
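Put together, the three endpoints can be exercised with curl like this (against a running farm — the port is the peer's operations port from this demo):

```shell
# Checking the peer's operations service (port as used in the talk):
curl http://localhost:9443/healthz      # health check: reports OK or a failing check
curl http://localhost:9443/metrics      # metrics in Prometheus text format
curl http://localhost:9443/logspec      # current logging spec, e.g. {"spec":"debug"}

# Changing the log level at runtime, without restarting the container:
curl -X PUT -H "Content-Type: application/json" \
     -d '{"spec":"info"}' http://localhost:9443/logspec
```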
You can configure the operations service and, with the logspec endpoint, reset your log level without restarting your container or redeploying your pods and so on. So that's the operations service. And once it's configured, you can do a couple of cool things with Prometheus and Grafana. Prometheus and Grafana are the monitoring tools of the cloud-native stack. Prometheus is more like the real monitoring: it collects metrics, and you can even define alerts and so on. And Grafana is rather the thing that shows dashboards, with the help of Prometheus or other integrations; although, as far as I know, with Grafana you can also do alerts, send emails and things like that. So this is Grafana, and I have Prometheus somewhere — this is Prometheus, and this is Grafana. They are pretty much the standard cloud-native-stack monitoring tools. If you want to set them up with Hyperledger Fabric: first, be sure that your operations service is up and running, and be sure that your metrics provider is set to Prometheus. If you have that, then I will have two more containers. One is for Prometheus. It's not much configuration: I have one port, on which I will reach Prometheus; I have persistent storage; and I have a parameter file, which I will show in a second. And the second container is Grafana. It's again very simple: we have a container name, a Grafana image, one persistent volume — I'm not quite sure if it is required here, but, you know, better safe than sorry — and the port. Okay, that's all. We have one parameter file for Prometheus, which is the classical way.
So I just map here basically an external folder that contains this prometheus.yaml file, and the path here is important: it should be /etc/prometheus. So if I have that, I have my prometheus.yaml, and it looks this way. I'm not so much an expert in Prometheus, but we get some general information, like how fast things are refreshed, information on where you can reach your Prometheus, and of course information on the targets, where Prometheus can actually find the containers that you are monitoring. In our case these are two entries: one is the ordering service and the second one is the peer. More specifically, the ports that I published for the ordering service and for the peer as the operations service endpoints; so those are the operations ports I published. And if we have configured this stuff, then Prometheus should be up and running. I don't restart it at the moment, because it looks that way in my script: I cheat a little bit, and I already start Prometheus somewhere. I start it at the end, I think — yeah, exactly. My script at the end starts Explorer DB and Explorer, and then basically Prometheus and Grafana as well. So I don't restart it again; it should work the same way. So if I have Prometheus — I will just skip the Fabric Explorer part — it looks that way: based on the metrics that you saw just a couple of minutes ago, you can take a look at your metrics. So if I just start typing here, like "ledger", then I can see that my operations service metrics give me further metrics; the simplest is ledger_blockchain_height.
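A minimal prometheus.yaml for a setup like this might look as follows. This is a sketch: the job names, container hostnames and ports are assumptions (9443 and 8443 are the default operations ports in the peer's core.yaml and the orderer's orderer.yaml) and need to match the ports you actually publish:

```yaml
global:
  scrape_interval: 15s        # how often Prometheus polls the targets

scrape_configs:
  - job_name: fabric-peer
    static_configs:
      - targets: ["peer0.org1.example.com:9443"]   # peer operations port (assumed)
  - job_name: fabric-orderer
    static_configs:
      - targets: ["orderer.example.com:8443"]      # orderer operations port (assumed)
```

Note that Prometheus appends /metrics to each target by default, so the targets are just host:port of the operations endpoints. As mentioned above, the file has to end up at /etc/prometheus/prometheus.yml inside the container, which is what the volume mapping does.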
But I mean, you can find a lot of information here: block processing time bucket, block processing time count, block processing time sum, commit time, and so on and so on. So if you want to really deep-dive into how your consensus algorithm works, or fine-tune it at a very, very deep level, then all of the metrics are here, which is very good. I'll just take a look at ledger_blockchain_height. If I query ledger_blockchain_height, it gives me two values: seven and two. We actually get two ledgers because this is Fabric version 2.2.1, so we've got a system channel as well as the development channel. It shows me that the ledger height is seven on the development channel and two on the system channel. But if I just start typing, looking for something with "block"... let me look at the transaction count instead; that's another one. So if we take a look at the transaction count — the zoom is not so optimal for that — we can see that again we get two channels, because one is the dev channel. So basically there should be two and three transactions; I'm not quite sure. Yeah, because this one is _lifecycle. Anyway, we usually get more than one label here. I think the three transactions should be on the real dev channel. Perhaps — okay, perhaps this metric gives the information per chaincode: we've got the _lifecycle chaincode here, the real dev-channel chaincode here, and something labeled "unknown". I'm not quite sure why we have an "unknown" here, but anyway, we should take a deeper look inside this. So let me just check if it really works: I will start one more transaction.
So we just run an initMarble: it's gonna be a marble named "Eve", and we have something as size; that's verified, that's okay. So if I submit my new transaction, first I should see in Hyperledger Explorer that I've got a new transaction. That's cool. And what should happen is that my ledger transaction count should be increased: at the peer, I think this three should be increased to four. And yeah, it has been increased to four. And basically my block number should be increased as well. I'm not quite sure where my block number is; there are a lot of parameters if you search for just "block" — deliver blocks sent, ledger blockchain height... Let's take the blockchain height: that is eight, and I think it was previously seven, if I'm not mistaken. Okay, so this is the Prometheus integration. Again, you can set up a couple of things based on this information, like alerts, and you can visualize it here in a more fancy way as well, and so on. But instead I will show Grafana. So if you have Prometheus, then you can set up Grafana as well, with the help of Prometheus. There's no direct integration between Grafana and Hyperledger Fabric: Fabric provides the information to Prometheus, and Prometheus is integrated with Grafana. So Grafana can be reached on port 3000. And that's a good question: what's my password? Yeah, it looks that way: if you start Grafana for the first time, with an empty storage, then the password is admin/admin. So I just say yes. And again, not going very deep into Grafana itself, I'll just show the Fabric-specific stuff. What you have to do is add a data source, and you can choose Prometheus as a data source. So I just choose Prometheus. It has a default parameter of localhost:9090.
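Adding the data source can also be scripted against Grafana's HTTP API instead of clicking through the UI. A sketch under assumptions: Grafana on localhost:3000 with the default first-start admin/admin credentials, and Prometheus reachable from the Grafana container as http://prometheus:9090 (both addresses are placeholders to match your own compose file). We only build the request here; sending it requires the containers to be up:

```python
import base64
import json
import urllib.request

GRAFANA = "http://localhost:3000"                    # assumed Grafana address
CREDS = base64.b64encode(b"admin:admin").decode()    # default first-start login

# Payload for Grafana's "create data source" API (POST /api/datasources).
datasource = {
    "name": "Prometheus",
    "type": "prometheus",
    "url": "http://prometheus:9090",   # Prometheus as seen from the Grafana container
    "access": "proxy",                 # Grafana proxies the queries server-side
}

req = urllib.request.Request(
    GRAFANA + "/api/datasources",
    data=json.dumps(datasource).encode("utf-8"),
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Basic " + CREDS,
    },
)
print(req.get_method(), req.full_url)
# With the stack running you would send it with:
# urllib.request.urlopen(req)
```

This is handy if you recreate the demo containers often: the data source comes back automatically instead of being re-entered by hand every time the Grafana volume is wiped.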
I just try to set it based on the browser, and then say Save & Test. So theoretically it has been saved and tested. And what I can do then: with Grafana I can set up very fancy dashboards, of course, but I will just try one query. As soon as we configure the data source, the same information should be available here in this dashboard as well. So we can have ledger_blockchain_height, and this is the ledger blockchain height. So we've got here a little bit more fancy way of visualizing stuff; I think you can even switch this to different visualizations as well. We can see the transaction count, again stuff that we saw. So this is the same metric that we saw in Prometheus; you can use the same stuff, but you can have something more serious: a much fancier, much more professional, much better-looking dashboard and operational tool. So one way of setting up such a dashboard is that you're proficient with Grafana, and then, based on the metrics provided by Prometheus, you just set up your panels and configure them. I don't know what "add annotation" does; you can set up a lot of things based on this fancy stuff. So that's one way. There's a second way as well: there are some pre-configured dashboards, actually. It looks that way; I'm not quite sure if I can find it. I think Import — yeah, exactly, that should be here. So you get this grafana.com, and on grafana.com you have something like a community: basically a library where everybody shares the dashboards they created, and everybody can use them and download them and stuff like that. I think it's under Dashboards somewhere here.
I'm not quite sure if I can find it here, but basically there's a community library of different dashboards, in different versions. And if you search for something like "Grafana Hyperledger Fabric", sometimes you already find something that's ready. I found one version that was for 1.4; that's basically a Grafana dashboard for Fabric 1.4. And I'm not quite sure if it's here, but I found an ID for it somewhere. It looks that way: on grafana.com, these shared dashboards have an ID, and if you know the ID, you can just import it and use it. So I already found this ID somewhere; it was 10716. This is a pre-configured Grafana dashboard for Hyperledger Fabric 1.4. So it's not for 2.x, but we can still use it. I'm not quite sure — I think I should say Load. Yeah, exactly. We can see this is Hyperledger Fabric monitoring for 1.4. So we say dashboard, and what we get here is a pre-configured dashboard. It was created for Fabric 1.4, so perhaps it's not absolutely up to date and you might get some errors, but it works pretty well. It shows your Fabric version, your Go version, your ledger block height, ledger transaction count, endorsement proposal requests, stuff like that — even the CouchDB processing time. I'm not quite sure why you need that on the upper side of your dashboard, but anyway, you get a lot of things here, and it's pretty much working. So you have Go memory allocation, heap objects, garbage collection; you have ledger metrics here, ledger block processing time, state database commit time; you have pre-configured ordering metrics — some of it is rather generic — and you get some chaincode metrics here as well.
So again, I'm not quite sure if it's fully okay for every audience, but basically, if you want to set up a dashboard for your Fabric environment, the easiest way is to start from something already half ready — like starting from a dashboard like this one. Okay. So just a brief wrap-up, and I have one more topic very briefly. From a little bit of an architectural point of view, what we have seen up to this point: we extended this peer, ordering service, CouchDB and CLI setup — a minimal Fabric installation — with Explorer, which is two containers, an Explorer DB and an Explorer, and then we extended our setup with the Prometheus and Grafana integration. And the last tool that you basically need is some log integration. It looks that way: it can be pretty complicated; I have here just a simple version. First of all, you can collect logs from your environment. If I say logs here, I usually mean the Docker logs — these logs, for instance. So if I just look at the logs of my peer: it's a good idea to integrate something. `docker container logs` — I think that should work as well. Why doesn't it work? But anyway, I'll just stop playing with that; it should work somehow, but I perhaps mistyped something. So basically, one way of handling logs is that you take a look at these containers individually, and you check: hey, do we have an error message or a warning on our peer, on our ordering service, and stuff like that. But in a real professional environment, it's much better if the logs are collected in one place and perhaps even analyzed automatically with some tooling.
So for that, what we usually have in cloud-native solutions are these platforms, like Logstash or Fluentd. These are basically tools for collecting the logs from your containers, from your pods. Then you have things like Elasticsearch; Elasticsearch builds on the previous tools, so based on the collected logs you can analyze or search them. And you have something like Kibana, for instance; Kibana is for visualizing these logs. And probably there are some more cloud-native solutions here, like directly generating alerts or emails and stuff like that. So I think I will show here just one very simple solution, and that's a Fluentd integration — a Fluentd integration that has some UI as well. And I think the reason my `docker logs` doesn't work is that I already set up my peer and ordering service in a way that they log to Fluentd. So let me just take a look at, I don't know, my Explorer — that's my Explorer log. Haha, yeah. So it looks this way. What you need to do to set up Fluentd: first you have a container. Again, this one is a little bit special because it has a UI as well. Basically you need two ports, one for the UI and one for a kind of listener port. And we've got some volumes as well; the volumes look that way that we have one configuration file — I just took this configuration file from the internet. And there are actually three key settings here that I already configured, so that logging happens with the help of this Fluentd component. I configured this for the peer and for the ordering service, so we should have logging for the ordering service here as well.
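The way a container is pointed at Fluentd is Docker's `fluentd` logging driver, set per service in the compose file. A sketch of what the peer service might contain; the service name, address and tag below are assumptions to be matched to your own setup (24224 is Fluentd's default forward-listener port):

```yaml
services:
  peer0.org1.example.com:          # assumed service name
    # ... image, environment, ports as before ...
    logging:
      driver: fluentd              # send stdout/stderr to Fluentd instead of the local json-file log
      options:
        fluentd-address: localhost:24224   # where the Fluentd listener is published
        tag: fabric.peer0                  # tag used to route/filter records inside Fluentd
```

A side effect, as seen in the demo, is that `docker logs` no longer shows output for these containers: the `fluentd` driver replaces the default `json-file` driver that `docker logs` reads from.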
So basically the logs from these components are collected in this Fluentd-with-UI container. Let me take a look; this one is set up as well. And I'm not quite sure if this is the port, but I think yes. So this should be Fluentd. And that's a good question — I think there's... yeah, it looks that way, there's this Fluentd config file. No, not this one. Then, good question. Yeah, okay. So basically, I think this is a special container that I use, and probably the username and password are hard-coded already. But anyway, if I just sign in, what happens is that I can set up Fluentd, and what I'm doing here is setting it to use this Fluentd parameter file. So if I manage that — of course I do, just a second — yeah, exactly, and that should be fluentd.conf. So if I create that, what I can do is start it, and basically it should start and collect the logs from all of the containers that are set up to work together with this one. So that's what's happening here. Again, it's not very fancy; I think with some plugins you can even show it in a fancier way, but the idea is something similar. You have here one tool for collecting the logs in one place, like this Fluentd; this is really just for collecting the logs. Then you have something like Elasticsearch for processing your logs, and you have other tools like Kibana for the visualization. I configured here just the first part, so we don't have fancy tools checking our logs; they're just collected in one place. And then, with the help of some further tools, we could look at them in a fancier, more structured way: filtering errors, making alerts, or even emails for errors, and stuff like that. So let me just stop it. It has been stopped. And that's the last slide of my presentation: this is what we did.
I mean, we did not do the whole thing, but basically we took the peer, ordering service and CLI setup and extended it with Explorer DB and Explorer, with Grafana and Prometheus, and with Fluentd; we did not do the Elasticsearch and Kibana part — perhaps next time. So this is more or less the end of my presentation, and I would stop here. If there are any questions — I hope you guys are still here — I'm happy to answer. So let me take a look in the chat, because there are always questions in the chat. Yeah, my voice is breaking, I'm sorry about that; sometimes the connection is not so good, so I try to slow down. I hope it was understandable despite that. So I don't know if there's any question. "Hello. Will the presentation and your demonstration be available online afterwards?" Yes, sure. So I mean, this whole presentation is live on YouTube, and you can find it afterwards as well under Hyperledger. So basically, yes, absolutely. I just dropped a link to where the recording will be. I will share, both under the recording and in the meetup, a link to the repo and to these slides, so you will find everything. Apart from my breaking voice — I'm not quite sure you can do anything about that; I guess there's no voice fine-tuning after the presentation. "So Daniel, one question. There is a difference between Hyperledger and Ethereum. I know that for the enterprise they're using Hyperledger, and Ethereum is the more open infrastructure. So is it possible that Hyperledger can also be used like Ethereum?" So that's a difficult question, and it requires some time to answer it. Basically, Ethereum was meant for public blockchains.
So the idea of a public blockchain is pretty different from consortium blockchains: you get different consensus algorithms, different attack models, different ways your users behave, and Ethereum targets that segment. Hyperledger Fabric was designed for enterprise usage, for enterprise use cases. So to answer the question: Hyperledger Fabric cannot really be used in an absolutely public environment; it cannot be used in absolutely the same way as Ethereum. You can set up something like a semi-public network, where the members are not strictly companies but the network is still somewhat public; that's something you can do with Hyperledger Fabric. But you can't do it in a fully open way like with Ethereum — you can't do it in a way where everybody can just download a node and validate your transactions. From the other perspective, there's an initiative for Ethereum to run in consortium environments as well; the Enterprise Ethereum Alliance focuses pretty strongly on that. There, the members are companies, and the network is open just for the members of the consortium, just for companies. So basically they were designed with two different ideas, in two different ways, and there are some possibilities to mix them, but I'm not quite sure how successful that is. "Daniel, I can add to that. What you're sharing is related to Hyperledger Fabric, but Fabric is just one of the different Hyperledger projects. There is an Ethereum client in the Hyperledger community called Besu, and I dropped a link to that in the chat. That's an open-source Ethereum client that's being produced as well, so you may want to take a look at that." Yeah, exactly. So if you want to do a consortium Hyperledger network with Ethereum, then you can use the Besu client, and I think it's even possible to do mixed or hybrid networks as well, combining your public Ethereum network with the consortium ones.
Yeah. "So, how does Prometheus scrape blockchain-related metrics?" I'm not quite sure what scrape means here — how does it show them, or how does it get them? So let me just try to answer. I'm not quite sure how the Prometheus integration works under the hood. On the Fabric side, what you have to set is the metrics provider, and if you set the metrics provider to prometheus, then the Prometheus-Fabric integration works; it's pretty simple in that way. Usually, in a production setup, you should configure TLS and stuff like that, which is more complicated — but again, not so complicated. I think the integration works pretty simply, because you've got this endpoint, /metrics, and I see that Prometheus asks this metrics endpoint and gets all the data that it needs. "Another question I want to ask here: are these metrics logs from Prometheus, or logs from the Hyperledger Fabric network?" Okay. So this is from Hyperledger Fabric. This metrics endpoint — let me just take a look. Yeah. What you see here, this metrics endpoint, is the endpoint of the peer and the ordering service. So what you see here is what's provided by Hyperledger Fabric, and this information is consumed by Prometheus in a little bit more fancy way. So if I just take a look here and search for ledger height: there's a ledger height here, and we should see pretty much the same information there. ledger_blockchain_height — if I take a look, ledger_blockchain_height should be somewhere here — yes, and that's the same information. So yeah, now I know how it works, more or less.
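To make "scraping" concrete: Prometheus periodically issues a GET to each target's /metrics endpoint and parses the plain-text response, one sample per line with labels such as the channel name. A minimal sketch of reading such text; the sample below is a hand-written stand-in for real peer output, and the channel names and values in it are assumptions for illustration:

```python
import re

# Hand-written stand-in for a fragment of a peer's /metrics output;
# channel names and values are assumed for illustration.
METRICS_TEXT = """\
# HELP ledger_blockchain_height Height of the chain in blocks.
# TYPE ledger_blockchain_height gauge
ledger_blockchain_height{channel="devchannel"} 7
ledger_blockchain_height{channel="syschannel"} 2
"""

def blockchain_heights(text: str) -> dict:
    """Extract per-channel ledger heights from exposition-format text."""
    heights = {}
    for line in text.splitlines():
        # Comment lines start with '#'; sample lines carry name{labels} value.
        m = re.match(r'ledger_blockchain_height\{channel="([^"]+)"\}\s+(\d+)', line)
        if m:
            heights[m.group(1)] = int(m.group(2))
    return heights

print(blockchain_heights(METRICS_TEXT))   # {'devchannel': 7, 'syschannel': 2}
# Against a live peer you would fetch the text first, e.g.:
# import urllib.request
# text = urllib.request.urlopen("http://localhost:9443/metrics").read().decode()
```

So there is no push from Fabric to Prometheus: Fabric just exposes this text, and Prometheus pulls it on its scrape interval and stores the samples as time series.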
You've got some internal magic providing your metrics on your operations service REST endpoint; Prometheus just consumes this endpoint and processes the information, and Grafana shows it in a fancier way. So that's all. As for the PPT, the PowerPoint: I will post it both in the Meetup and, as soon as the video is on YouTube under Hyperledger, under the video as well. So let me take a look at a more complicated question. Yeah, so that's a more complicated question. The question is how you can add a new organization to your consortium. It looks that way: in a developer setup you get this configtx.yaml, which is a pretty easy way to set up your organizations. So basically, even if you have more than one organization but the setup is pretty much static, you can use this file, and of course you can use it in test scenarios and even in demonstration scenarios as well. It's different if you want to set up your organizations in a dynamic way.
So if you set up your consortium with, say, two organizations, two enterprises, but later on you want to have a third one and a fourth one and a fifth one, then it's way more complicated: then you need several configuration transactions. First, you need a transaction on your ordering service adding the new company, the new organization, to your consortium. Then you need one administrator transaction on your channel, adding the new organization to the channel, perhaps even modifying different policies — endorsement policies, for instance. Then you need at least one more transaction that configures the anchor peer of your newly added organization. Then you need to set up your channel, your chaincode, and stuff like that on the newly added organization. So basically, if you want to set up your network in a dynamic way, adding enterprises, companies, one by one, it's way more complicated than using this configtx.yaml. You can find a couple of good articles and good blogs on the internet, and even in the Hyperledger documentation there are examples, but again, it's not so simple. Yeah. So, are there any more questions for today? "I want to add something. I checked your repository; there are some YAML files for Kibana and Elasticsearch. Are they working? Can we test them from our end?"
No, they're still not working. If you take a look at this repository in, like, one week, I might get them alive, but at the moment they are not working; only the Fluentd part is working. So basically, I had some trials with the full ELK stack — Elasticsearch, Logstash and Kibana — and it's not working at the moment. Perhaps, if I have a little time, I will get it alive, but for this demonstration — I mean, it's already not a Hyperledger Fabric task, because the logs are here, that's Fluentd; if you have Fluentd, you can get Kibana and Elasticsearch up and running on top of it, and that's no longer related very strongly to Hyperledger Fabric. So I didn't really take the time to configure them in much detail. "Hi, so when you set the logger to Fluentd, wouldn't the logs be saved in a log file somewhere in the Docker image? So it won't be overwhelmed — like, the log file keeps growing?" Honestly, I don't know if I configured something as a persistent volume here; no, I should configure persistent storage, because that way they would actually be saved. That's a good question; probably it's not configured in a production way. They should be saved either here — which I don't see at the moment, which doesn't look good; probably I'm missing something here. So either this one should be configured in a way that, in Docker, with this external file, they are really saved somewhere, or I should configure a persistent volume somewhere, because this way the logs are deleted if I just delete and recreate the Docker container. "Okay, thank you." Yeah. I mean, this is not exactly the official Fluentd. I configured the official Fluentd as well, but it doesn't have a UI, and I just wanted to show something a bit more fancy, so I was looking for a Fluentd that has some UI as well. So it's not necessarily
production-ready, this whole stuff, because if you want to set up a production-ready environment, you basically need an official image. So I wouldn't use an image from just anybody — only the official Fluentd image — and then, if you need the Fluentd UI, I think the only way is that you extend the official Fluentd container a little bit and have your own container. I wasn't fighting too much with that this afternoon. So if you want something really production-ready, you can use Fluentd, but you shouldn't use it exactly this way: you should use the official container, extend it if you want the UI, and practically you'd have something like Kibana or Elasticsearch and stuff like that on top. "Okay, because the problem I faced when I did some tests was: I didn't set any logger in the peers and orderer, I didn't have any loggers at all. The problem is, when I spin up my network and work on it for like half an hour, it appears that it consumes a lot of memory, and it increases over time. So I have a little bit of a concern that in production, because the log file keeps growing, it keeps consuming memory." Yeah, so my experience is that the logs are growing, that's for sure, independently of whether you aggregate them with one component or with several. So if you happen to have your Hyperledger Fabric log level on debug, then after a couple of weeks you get a lot of information, a lot of data. But this shouldn't actually be in memory; I think that should be somewhere on storage. So perhaps I configured this one not exactly in a good way, but it shouldn't be in memory. Even if I don't have persistent storage, it looks that way: the Docker component has got
some local storage. Basically, if it's not persistent, then if I just delete the Docker container, it will be removed — but it's still storage. So overall, our logs should be collected on storage, and if you have a logger, it shouldn't be in memory, I would say, independently of whether the persistent storage is configured in a good way or not. Because if all of your logs are collected in memory, then I think you will run into an out-of-memory situation one day. "Okay, yes, thank you. Thank you." Yeah. So I don't see any more questions. If there are no more questions, I would say that was the presentation for today, and that was Hyperledger Budapest for today as well. I would like to thank everybody very much for the participation, and see you next time. Thank you!