I imagine more people are going to join, but why don't we get started? Did you make me host? I'll do that in just a second. So thanks everyone for joining this virtual Hyperledger meetup today. We are going to be talking about deploying a network using SmartBFT and the upcoming Hyperledger Fabric 3.0 release. Our speaker today is David Viejo. And with that, David, take it away.

OK, so welcome everyone to this meetup. I know this is an early topic, because Hyperledger Fabric 3.0 came out only one and a half or two months ago and there is not much documentation about it yet, but I dug into the libraries and into Hyperledger Fabric and I was able to package something that lets us deploy Hyperledger Fabric 3.0 on a Kubernetes network. We will see the agenda later, but the idea is to really understand what SmartBFT is and how we can use it — not from a deep-dive perspective, but from the user perspective. Then we will do a demo where I showcase a proof of concept I built using SmartBFT alone, and at the end of the meetup we will deploy a Hyperledger Fabric 3.0 network with SmartBFT using bevel-operator-fabric. That's the whole idea.

I'm David Viejo and, as was said, I work at Kung Fu Software, where we specialize in Hyperledger Fabric. We have been working with Hyperledger Fabric for four years now, and we are the main maintainers of bevel-operator-fabric, an open source part of Bevel, whose mission is to streamline the deployment of Hyperledger products across multiple cloud providers. This one in particular is the one used to deploy Hyperledger Fabric networks. If you want to contact me after this meetup, you have my LinkedIn here, so feel free to reach out with any questions. These are the repositories, and in fact there is a document that I will paste into the chat.
In this document you have the presentation — the one we are currently seeing — the repositories, and the BFT consensus library. We will take a look at that library because it is core to Hyperledger Fabric 3.0: among the main features of the 3.0 release is BFT consensus. Before we had Raft, and now we are also going to have BFT. Everything related to smart contracts, to the peers, etc. hasn't changed, at least not yet. That is why this library is the core of what was implemented in Hyperledger Fabric.

Then we have three repositories. The Meetup HLF 3.0 one is the one we will use to deploy the network using the new version. The SmartBFT POC is a proof of concept using only SmartBFT to build a minimal network — it cannot really be called a blockchain, it's just for playing around with how SmartBFT works. And then we have bevel-operator-fabric, which, feel free to check it out.

Skills needed for this workshop — this is just a list of things you should know: cryptography; Hyperledger Fabric, of course; Kubernetes; Go, since we will be showing code from Hyperledger Fabric and from SmartBFT; Docker; shell commands; and basic networking concepts. These are the only things needed to deploy and to follow this workshop.

And the agenda: how Fabric works, what SmartBFT is, why SmartBFT, and so on — I won't read the whole list. Let's go through how Fabric works. Here we have the ordering service. This is a network as it worked in Fabric 2.5 and before: the ordering service with three orderers for high availability, and then the peer organizations, which have multiple peers and multiple chaincodes. The ordering service is the one that manages consensus. In this case it's Raft; before, the pre-packaged options were Kafka and Solo, but now the main one is Raft, and the one we will look at is SmartBFT.
Then we have the client. If it's using the Gateway, it goes directly to the peer; if it's using an SDK, it goes to the peers to collect the endorsements and then submits the transaction proposal to the ordering service. This is the same with SmartBFT: the process from the client perspective hasn't changed, so all of the SDKs and client applications that were working in 2.5 can also work with 3.0.

So what is SmartBFT? It's a consensus protocol created by IBM which implements a Byzantine fault tolerant protocol. It was built for Fabric, but it was designed so that it can be used as a standalone library, not dependent on Fabric — it's not coupled to Fabric and can be used in other projects. This is the library, the SmartBFT consensus library, and here you have all of the code. There is a very good paper on arXiv that explains it in detail, so if you want to go deep into this topic, I recommend you check that paper.

One thing there that caught my attention is the benchmarking: Raft is very performant, and BFT is not so much. We will see the features and the pros and cons of using BFT. BFT is much slower than Raft, as you can see here — I think two or three times slower — but it also has advantages, such as security, decentralization, and the ability to handle malicious nodes. Those are the main features; if you move to BFT, it will not be faster, in fact it will be much slower. And you can check this library, which is the one being maintained and the one being used in Hyperledger Fabric.

Now the pros and cons. Pros: it tolerates malicious nodes, up to one third of them — so if we have four nodes, we need at least three that are not malicious — and it's suitable for true decentralization. BFT is the family of protocols used in public networks.
So it's good that Hyperledger Fabric has it, because we can expand existing networks to have more orderer nodes. Most of the Fabric projects and implementations I'm seeing have one orderer organization and one peer organization, or one orderer organization and multiple peer organizations — the ordering service in most projects today is centralized. That is something that can change with this new consensus protocol, and it enhances security.

The cons of using BFT: it has higher resource consumption, it is more complex, and, as we said earlier, it's slower, so there are scalability challenges. But if we want to decentralize — and I think there are projects, maybe coming from the 1.4 version, where, if they upgrade to BFT, multiple organizations can run the ordering service — then the network can become more decentralized.

So let's check the code — and if I show too much code and you don't understand it, let me know, but I like to go to the code, because the truth is in the code; there we can see how things really work. These are the dependencies of the consensus library, and these interfaces are the ones that Fabric has implemented.

There is a question: is Raft going to be marked as deprecated in the future? I don't think so, because Raft is used a lot and it has better performance. I think how this will go is: if you want a centralized, quote unquote, ordering service, or if you trust the ordering service organizations, you will keep using Raft; but if you have multiple organizations and you don't trust all of them, then you may want to consider using BFT. So, these are the dependencies; this is what we need to implement.
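As a side note on the "up to one third" bound mentioned above, here is a minimal sketch — my own, not from the talk or from the SmartBFT library — of the standard BFT arithmetic: with n nodes, f = ⌊(n−1)/3⌋ faults are tolerated, and a quorum needs ⌈(n+f+1)/2⌉ nodes.

```go
package main

import "fmt"

// faultTolerance computes, for a BFT cluster of n nodes, how many
// faulty nodes f can be tolerated and how many nodes form a quorum.
// This is a generic sketch of the usual BFT formulas, not library code.
func faultTolerance(n int) (f, quorum int) {
	f = (n - 1) / 3
	quorum = (n + f + 2) / 2 // integer ceiling of (n+f+1)/2
	return f, quorum
}

func main() {
	for _, n := range []int{4, 7, 10} {
		f, q := faultTolerance(n)
		fmt.Printf("n=%d: tolerates f=%d faulty nodes, quorum=%d\n", n, f, q)
	}
}
```

So with the four orderers in the demo, one can be malicious and any three form a quorum, which matches the "at least three that are not malicious" remark above.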
So we have the Deliver dependency, which is invoked when a proposal is committed — this is the point where Fabric writes the blocks to storage. Then we have the communication module, and this is very important. In the POC that I did, the communication was performed over HTTP/3, using QUIC, but in Fabric this uses the gRPC server. Keep that in mind: if you use this library standalone, you can use whatever communication you want — HTTP, HTTP/2, WebSockets, etc.

Then you have the assembler. This is invoked on the leader to create the proposal based on the requests — you can think of requests as transactions — plus some metadata. Then we have a WAL, a write-ahead log. I haven't gone deep into this, but the library already provides an implementation of a WAL, so you can just use that. Then we have the signer, because the orderer nodes need to sign data — we have that function here — and we have the verifier. And we have to implement all of these: the signer, the verifier, the membership notifier (this one is not used in Hyperledger Fabric), the request inspector — this is to get the request ID, the transaction ID, from a request — and then the synchronizer.

So pretty much everything is left up to the user of the library to decide how it should work. The library handles the consensus and then invokes these functions when needed, and this is what we will see in the POC.

To learn more about SmartBFT, this is a picture from the arXiv paper of how it works in Fabric. We have the client on the left, and we have the filter — this filter is used for config transactions, when a channel configuration changes — and then it goes to the consensus library, and around the consensus library we have this communication.
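To make the shape of those dependencies concrete, here is a paraphrased sketch of the kind of interfaces the consensus library leaves to its user. The names are simplified from the ones in the SmartBFT library, and the toy in-memory implementation is mine, for illustration only — not the POC's actual code.

```go
package main

import "fmt"

// Assembler is called only on the leader: build a proposal (a block)
// out of pending requests (transactions) plus some metadata.
type Assembler interface {
	AssembleProposal(metadata []byte, requests [][]byte) []byte
}

// Application is called on every node when a proposal is committed:
// this is where Fabric would write the block to storage.
type Application interface {
	Deliver(proposal []byte)
}

// chain is a toy in-memory ledger implementing Application.
type chain struct{ blocks [][]byte }

func (c *chain) Deliver(proposal []byte) { c.blocks = append(c.blocks, proposal) }
func (c *chain) Height() int            { return len(c.blocks) }

func main() {
	c := &chain{}
	// In the real library, the consensus layer calls Deliver after a
	// proposal reaches agreement; here we call it directly.
	c.Deliver([]byte("block-1"))
	c.Deliver([]byte("block-2"))
	fmt.Println("height:", c.Height())
}
```

The point is the same one made above: the library decides *when* Deliver runs, but *what* delivering means (LevelDB in the POC, the block store in Fabric) is entirely up to the integrator.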
So this comm, this sync, this block storage, this crypto — the signer and verifier — none of this is implemented by the consensus library; it is implemented by Fabric. Then the assembler — the assembler builds the block from the batch, and it is called only on the leader — and then we have the validator, which is in Fabric. So the consensus library is very tiny and handles the BFT part, but the rest — how to write the blocks, how to synchronize between different nodes — is handled in Hyperledger Fabric, and if we check the Hyperledger Fabric code we will see that these are just plain gRPC invocations. There is not much magic here. What Fabric 3.0 did is refactor the ordering service and reuse the functions for storing blocks, validating blocks, filtering some blocks, et cetera.

In fact, if we go to the code in Fabric — okay, here we have apply filters, and this is verify config update. This is only invoked — and if we go to the source, there is a validate config from the block validator, and this is the place where it's invoked. If we check the chain, this is the main loop: there is a process message which processes an envelope — a generic message that travels between the different nodes. If it's not config, it just orders the message, and eventually it gets processed — it can be that a block needs to be saved, et cetera. And if it is config, then it validates it first. That is why we have these filters here, and underneath, what the validator validates is the transaction.

So this is the architecture. For the proof of concept that I developed — a standalone proof of concept in Go — I will also show the code, and you have it in the document, in the deploying-a-network part; it's this GitHub repository. Let's take a look at it. All of the Go files are here.
We have the application, which handles Deliver: what it does is handle the proposal with the signatures. This is not a final implementation, of course — we would need to verify the signatures — but you get the idea; this is just to play around with SmartBFT. Then we assemble the block based on the requests, the transactions, and then we store it. We're using LevelDB, same as Fabric. So this is a minimal implementation, and the reconfiguration is handled by the BFT library.

Then we have the assembler. This function is invoked only if the node is the leader, so it will only be invoked on one node. Then we have the communicator, which has these two functions, consensus and transaction, plus a helper function to reuse clients, and then the nodes. This is the reason why we need to specify the channel configuration, the nodes: we need to be able to get the certificates in order to verify that a signature is actually valid and comes from a specific node. Looking at this, we can understand more about why the configuration is needed in Hyperledger Fabric.

Then we have the signer — nothing is implemented here, but in Fabric, of course, this is implemented. Then we have the synchronizer, where we would need to implement the invocations to the other nodes, and then we have the verifier — there is no verification right now, but you get the idea.

There's a question: does the client code change when moving to BFT, or is it abstracted? Yes, it is abstracted; you don't need to change anything from the client perspective.

So in main we have some logging configuration, the network options, the batch size and batch timeout. This is similar to the channel configuration in Fabric. This batch size means that there will be at most 10 transactions in one block, and if no more transactions arrive within an interval of two seconds, a block will be created anyway.
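The block-cutting rule just described — cut on 10 transactions, or on a 2-second timeout with anything pending — can be sketched like this. This is my own simplification for illustration, not the POC's or Fabric's actual cutter code, and I drive the timeout by hand rather than with a real timer.

```go
package main

import "fmt"

// cutter collects transactions and cuts a block when either the size
// limit (maxMessages) is reached or the batch timeout fires.
type cutter struct {
	maxMessages int
	pending     [][]byte
}

// add enqueues a transaction; it cuts and returns a full block as soon
// as the size limit is hit.
func (c *cutter) add(tx []byte) ([][]byte, bool) {
	c.pending = append(c.pending, tx)
	if len(c.pending) >= c.maxMessages {
		block := c.pending
		c.pending = nil
		return block, true
	}
	return nil, false
}

// onTimeout cuts whatever is pending when the batch timeout (e.g. 2s)
// fires; with nothing pending, no block is produced.
func (c *cutter) onTimeout() ([][]byte, bool) {
	if len(c.pending) == 0 {
		return nil, false
	}
	block := c.pending
	c.pending = nil
	return block, true
}

func main() {
	c := &cutter{maxMessages: 10}
	for i := 0; i < 10; i++ {
		if block, cut := c.add([]byte("tx")); cut {
			fmt.Printf("size limit: cut block of %d txs\n", len(block))
		}
	}
	c.add([]byte("lonely-tx"))
	if block, cut := c.onTimeout(); cut {
		fmt.Printf("timeout: cut block of %d txs\n", len(block))
	}
}
```

This also explains the latency point made below: a single transaction sits in `pending` until the timeout fires, which is why a lone transaction takes roughly the batch timeout to appear in a block.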
This is the reason why, if we send only one transaction to Fabric, it takes two or three seconds, depending on this batch timeout. Then we create the nodes with their IDs — we will see this later; it is very important for understanding the bevel-operator-fabric implementation — and then we go through all of the nodes and spin them up. We pass the address, the ops address, the map of nodes (which contains all of the nodes), the logging, the WAL, metrics, the BFT network options, et cetera. There are many parameters here.

In this node, in this new chain, we create the node and we create the LevelDB — well, this part is a test to check that LevelDB is working — and then we create the node and the BFT configuration. This BFT configuration is the same one that lives in the channel configuration that we will see in Hyperledger Fabric, okay? Then we create the communicator with the nodes and the ID. I won't go deeper, because there is a lot going on, but I recommend that you download this project and run it. It will start four nodes on four different ports — 1001, 1002, 1003, and 1004 — and the same for the operations endpoints, and this is for the communication, okay?

And this is where the consensus for the node is created. We have the application, which is basically these functions — the node is the one that contains Deliver. We pass the node here, together with the interface for the application, of which only Deliver needs to be implemented. This means that if we go to the application and remove it, we get an error here saying the interface is not implemented — we must implement everything, okay? Then the consensus starts. Then there is the communication server, which uses QUIC, and for that we need a self-signed certificate.
The communication between the nodes is very interesting, because we have the transaction messages and the consensus messages, and these interact directly — if we check at the bottom, it is the node's consensus HandleMessage. And if we check in Fabric, HandleMessage is also there for SmartBFT: in the HandleMessage of the BFT chain this is also invoked. So it's more or less the same — the functions we are invoking are the same ones being invoked in Fabric.

Then we create the HTTP server, and then we have the operations server, which gives us a few endpoints to stop and start the node — helper functions for testing different network states. Then we have the status endpoint with the height, the leader ID, and whether this node is the leader; then a way to send a transaction; and then another endpoint to fetch the block at a given height.

So let's start the SmartBFT POC, you see? Okay, it's started; now let's go to the HTTP API. In this case we are invoking it, and this is the data that will go into the transaction — in Hyperledger Fabric there is a different flow, of course; a smart contract is invoked — and this is the client ID. The client ID in Hyperledger Fabric is the identity used to interact with the smart contract. So the transaction is ordered, and then, if we go back here, let's look at the status. In this case "is leader" is true, so the second node is the leader, and the height is three. So we can check block three with the new transaction, and block four throws an error. And if we change the data — I can invoke this transaction as many times as I want — now the height has increased to five and we can check block five. Let me put this in a JSON file to visualize it, okay?
So we have all of these transactions in this block, okay? Now let's execute a few more transactions. Now "is leader" is false. Why has the leader changed? I was also surprised by this, because in Raft the leader doesn't change unless there is faulty behavior on a node — at least, that is what I know. But in BFT — and I think this is for security — in the BFT configuration we have a property, which we can also set from bevel-operator-fabric, called leader rotation, together with a number of decisions that can be made per leader. So if a leader makes three decisions, he's out: he cannot be the leader anymore, and someone else takes over.

If we set it to false, what happens? Let's start it again. Now the first node should be the leader, okay? Let's execute some transactions — it is still the leader even though the height is now 10. So we see that the leader doesn't change. But if we go to the operations API and stop it — this is the operations API, stop — then it is down. If we check the status from the others, the leader ID is one, but node one is not up. So if we submit transactions, they are forwarded to the leader, but if we check the logs, no one can contact leader one, so the request times out and nothing is ordered. So if you are going to deploy after this meetup, don't set leader rotation to false, because if a node fails, the network will not work.

Let's set it to true again, execute some transactions, and look at the height. Right now the leader ID is one; let's execute more transactions — sorry for all the window switching. Okay, so now the second node is the leader. And now — this is something I haven't tested — if we stop node one, now the height is 15, this should still work. Yes, now it's 17. And then we start node one again; okay, now node one is available again.
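The rotation behavior we just observed — a fixed number of decisions per leader, then the next node round-robin — can be sketched like this. This is my own simplification of the visible effect: the real SmartBFT library derives the leader from the view number and view changes, so treat this as a model of the demo, not as the library's algorithm.

```go
package main

import "fmt"

// leaderForDecision models leader rotation: the leader advances
// round-robin every decisionsPerLeader committed decisions.
// Node IDs are 1-based, as in the POC and in Fabric.
func leaderForDecision(decision, decisionsPerLeader, numNodes uint64) uint64 {
	rotation := decision / decisionsPerLeader
	return (rotation % numNodes) + 1
}

func main() {
	// With 4 nodes and 3 decisions per leader, the leadership schedule
	// is: node 1 for decisions 0-2, node 2 for 3-5, node 3 for 6-8, ...
	for d := uint64(0); d < 9; d++ {
		fmt.Printf("decision %d -> leader %d\n", d, leaderForDecision(d, 3, 4))
	}
}
```

With rotation disabled, the schedule degenerates to "leader 1 forever", which is exactly why stopping node one with rotation off halted ordering in the demo.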
The leader has changed — now the leader ID is three. So you get the idea: the leader changes all the time based on the decisions per leader, and this can be customized in the BFT configuration. That is what I wanted to show. Are there any questions, anything anyone wants to see?

Yes — transactions depend on the application, not on the consensus. What this shows is that we can put whatever data we want through the consensus, because Hyperledger Fabric already has its own way of structuring the blocks.

Why did the leader change? Because you stopped the node, yes. So when I stop the node — let's say the leader ID is three, okay? — and I stop that node, then I go to Postman and try to execute some transactions. If I check the status, the leader ID is still three, but the height will have changed. Actually, something went out of synchronization here when we stopped the node — in fact, there are complaints in the logs. But you get the idea; in Hyperledger Fabric 3.0 this doesn't happen. This is a problem of the POC, something that would need to be taken care of. And as Jacob said in the chat, the nodes detect that the leader is dead because there are heartbeats — there is a heartbeat every so often. That's the idea.

In our case, if we check a block, we have the block here — this is a naive block. If we check in Hyperledger Fabric, a block contains much more information.

There is a question: how do you test malicious node behavior? This is a very good question. Maybe Jacob can contribute here, because you have to modify the orderer, to modify the Fabric code, in order to test malicious node behavior, right?
Yeah, either that, or — we had users that opened issues where they did fuzzing, fuzzing the input to the library; that's also an option. But if you really want to test a specific behavior of a Fabric BFT node, then you need a custom modified binary. I would say, if you really want to test a specific use case, it's much easier to git clone the project, look at the unit tests of the specific component, and just write your own unit test. We have something called integration tests in the library, where you have a framework in which you have some control over the nodes and you can make nodes alter their behavior, et cetera. So I would say, if you really want to test a specific behavior, it's easiest to do that with unit tests.

— I think these are the tests. — Well, you mean in Fabric, no? — No, I mean these tests exactly. — Oh, okay. Yeah, these are the tests. — So if you go to the basic tests, we have plenty of tests here, a ton of tests, and there are all kinds of complex scenarios you can exercise: test node view change, behavior under partition, test start followers — there are many tests; this file is huge.

— But shouldn't it also be tested in Fabric, maybe? — Well, you are in the library, right? So it doesn't test Fabric, but you showed yourself that the actual integration of the library into Fabric is really slim, right? You just need to implement these dependencies, and there is not much state, unlike in Raft. So, of course, theoretically there could be some bug on the Fabric side, but to test Byzantine behavior at the protocol level, I don't think there is anything you can do in Fabric itself. — Yeah, that's right, because you always need to go to the library, which implements the core of the consensus.
So that question, I think, is answered. Next question: Raft is used in Docker and many more products — where is BFT used as an enterprise product so far? BFT is a concept, and there are many implementations. In fact, there is one — I think my Chrome instance froze for a moment — a library called Mir, which has since been renamed, and it is also a library more or less similar to SmartBFT. They mention it as a scalable and efficient consensus layer. The idea of making the ordering service pluggable is to be able to implement different consensus protocols: SmartBFT is the first one, but you can go on your own and implement the ones you are interested in. There are other BFT implementations too, but this space is still in research mode, I think, rather than production, and it's constantly evolving. I don't know what your thoughts are?

— What is the question? — Whether this is in production, whether this is... — I cannot comment on Mir. I do know that there is at least one company, from what I heard, that runs SmartBFT in production, but I'm not sure I should disclose that. But you know, we haven't released the GA of Fabric 3.0 yet, so hopefully after we release the GA in Q1 this year there will be pioneers that test this. And the first people to test it and take it to production would benefit, right? Because they would be the first providers of a BFT-enabled Fabric.

— Exactly. There will be bugs, but at least deploying it locally with multiple nodes — and I have also tested with multiple organizations, where these organizations have peers and orderers — I haven't had any issues. The only issues I had were with Fabric libraries, with protobufs, with versions of libraries that were not up to date, but that was on my side.
It was not Fabric; it looks like a pretty solid implementation. So let's continue. We did the demo — feel free to check it out, test it on your own, try it with multiple scenarios. There is much work needed in this POC, but it helped me at least, and if you want to learn more, you can go ahead and build one from scratch. In fact, the SmartBFT Go repository has a folder called naive chain — if this loads, of course — the naive chain example. That is a good place to start your own implementation if you want to try it out and go deep into this.

Next part, and we're close to demo time: what about customization? One thing I noticed in Fabric: if you download Fabric 3.0, there are two consenters available, etcdraft and BFT, and only this one function needs to be implemented, HandleChain. If we look at the structure — this is Fabric 3.0 — we have the orderer folder, we have the consensus folder, and inside we have etcdraft and BFT: no Kafka and no Solo. These are the only two consensus types supported as of 3.0.

And then we have the consensus type per channel, and I think it's worth remarking that, within the same ordering service, there could be channels using etcdraft and channels using BFT. So, when Fabric 3.0 comes out, if you have a test network using etcdraft, you could create one channel using BFT and keep the others on etcdraft, and test it out.

— I doubt it will work. I don't think it will work. — Why? — Because the etcdraft communication dependency is different from the BFT communication dependency, and I think that if you start the SmartBFT consenter, only one of them binds to the gRPC service. You have the gRPC service, right? And you have the cluster service.
So I think they both use the same service, but with two different implementations — so which one should run? I think you will get a panic when the gRPC service tries to register the second one. But that's what I think. — Have I tried it? For the migration — well, I think I tried it; now I will try again. I think I tried it when I was testing this, but maybe not at the same time; I will check. This would be interesting for the migration, because there will be some companies that may not be able to... — For migration? — Yes, because imagine that you have 26 channels. — No, but you can migrate all channels to BFT, and then if you restart, you will have only BFT. I just said that, at least theoretically, I don't think it should work to run a mixed ordering service node. Because, as I said, you have two different implementations of the cluster gRPC service, so it's not clear, when the gRPC service gets a message, which implementation it dispatches to. It would need a router somehow. — In the cluster? — Yeah. — Okay. Well, so this needs to be tested. I thought it was possible to do that; let's try it later.

For the demo I have this repository, meetup HLF 3.0. This is a workshop similar to the one presented last year, and to the one that will run next week, for deploying a network locally, so feel free to download it. — Cool, so you can have some channels run on BFT and some... — Jacob, this is what you said was not possible, theoretically.

There's a question in the chat. So, as I was saying, these are the steps: first we need the Kubernetes cluster, and we need to install the operator and Istio. To understand all of this, let's go to Lucidchart and draw the architecture, the diagram of what we want to deploy right now.
We will have the Kubernetes cluster, which can be deployed locally or remotely. For the load balancer we will have Istio. Then we have the client, which will be us: we will connect to Istio directly, and Istio will route to the different nodes. So we will have orderers, we will have peers, and the CAs as well. One thing: if you are interested in deploying this with Traefik, the new version of the operator has support for Traefik — so this piece can be either Istio or Traefik; in this workshop it is Istio. So this will be the architecture.

Then, for the peer organization, we will deploy the chaincode as an external chaincode, and we will deploy a channel. And we will be able to test what Jacob mentioned and verify it — it makes sense; I was a bit optimistic about running both consensus types and thought it was possible, but it seems not. — No, I just said that theoretically it should work; there is no inherent theoretical problem. It's just that, the way I remember the implementation, there is something that would make it not work, but we can change it and then it will work. — Yeah, well, let's test it in this workshop and see.

So we will have the main channel — the main channel resource, which creates the channel in Hyperledger Fabric — and then we will have the followers; those are the bevel-operator resources that join the peers to the channel.

Something I want to mention: if you want to go deep into Fabric, I released a course called Fabric Fast, so feel free to check it out if you are interested. It contains many deep dives into the concepts — consensus, policies, channels, and the transaction flow — and it also has explanations of the CRDs and explorers, with some tools that are given in the course.
So if you are interested in learning Fabric, this course will be updated for Hyperledger Fabric 3.0 — it will keep being updated over time. And we also have a way to deploy Hyperledger Fabric networks using just Terraform, with one command: using terraform apply with the variables, you can deploy all of this — peers, orderers, Fabric CAs. I launched it one week ago; if you are interested, it's here.

Let's continue. We have the Kubernetes cluster, the peers, orderers, Fabric CA, chaincode — this is the architecture. As users, we will use the kubectl-hlf plugin, which we will install, and that will be our way to communicate with the Kubernetes cluster and also with the network, which includes all of the nodes. If there is any question, let me know; if not, I will continue with the workshop. So this is the meetup repository.

— Actually, I think it will work. After checking the code, I think it will work. — Okay. I think I tested it, but it was a long time ago, a month or something like that. — No, because I forgot that we actually have two different gRPC services that look almost the same. I always forget they are two different services, but they are not actually the same one. — Okay, perfect. But after this, we will test it, just in case.

So, let me delete my Kubernetes cluster and start from scratch. There is a difference between the workshop we're running today and the one we ran last year: we can still use K3D and kind, but we need to expose these ports. Why do we need these ports? Because we are going to set Istio to listen on a node port in Kubernetes, this port, 30949, and then 443, which is the one used for HTTPS, for the communication with the nodes. Only these two are needed, and then you can set the image you want, et cetera.
So, let's create a cluster, okay? If you have the image already on your computer, it will be fast, and we will create a cluster with two nodes plus one master, and the cluster will be called k8s-hlf. Then, you can do the same with kind, whatever you like most. Then, we need to install Istio, so let's install it. And, well, this command downloads Istio, it doesn't install it, so we set here the target version. And now it's downloading Istio — okay, perfect. And now we have Istio 1.20 here, which is the latest major version. Then we can create the namespace istio-system, and then we can initialize the Istio operator. So, what this will do — if we go to Lens, okay — so now the Istio operator is being deployed. And this is this part of the Istio picture here. So this contains the Istio operator, and now we will deploy the Istio ingress gateway, which is the component that routes to the peers, orderers, and Fabric CAs. So now we have the Istio operator up and running. And let's install — this is the CRD that is used in order to install Istio. So the kind is IstioOperator; we have the Istio gateway in istio-system, all of the addons, none of which are enabled. But the main component here is the ingress gateways, and we have only one. And this is where we specify the ports in the service ports. There are two, named http and https, with the node ports we need. And what this will do is that, since we have mapped these node ports in the Kubernetes cluster to ports 80 and 443 on our machine, we will be able to access the nodes the same way as if we were in the network. So, how does this work? The main domain name that we will use is localho.st, and this resolves to the loopback IP. In fact, if we look up peer0-org1.localho.st in DNS, it resolves to this IP, 127.0.0.1. So the address of the peers will be peer0-org1.localho.st on 443.
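The IstioOperator resource being described might be sketched like this — the node ports come from the talk; the other fields, like target ports, are assumptions:

```shell
# Sketch of the IstioOperator with the ingress gateway node ports (values assumed).
kubectl create namespace istio-system --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -f - <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-gateway
  namespace: istio-system
spec:
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          service:
            type: NodePort
            ports:
              - name: http      # host port 80 maps to this node port via k3d
                port: 80
                targetPort: 8080
                nodePort: 30949
              - name: https     # host port 443 maps to this node port via k3d
                port: 443
                targetPort: 8443
                nodePort: 30950
EOF
```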
And localho.st on 443 will go to Istio, because it goes to the service of Istio that is on node port 30950. And this is the trick to be able to test it locally. Then, within the Kubernetes cluster, we have the CoreDNS service, and then we have the CoreDNS config map — ConfigMap, coredns. And then what we will do is configure this CoreDNS to be able to resolve these names, because the peers need to communicate between themselves and also with the orderers. So, inside the cluster, localho.st will redirect to the Istio ingress gateway. And now we have the communication between the nodes inside the cluster and from the client, which is us, on the same machine. So, this is how this works. Maybe this is a bit complicated; if anything is not clear, let me know. So, let's continue. So, that is why we need these ports. And after this, well, let's install the Istio gateway. And this is the part where we need to configure the internal DNS. And what we're saying here is that we're going to rewrite all of the localho.st names and route them to the Istio ingress gateway. And doing this, we will be able to communicate between the peers using a domain, on our local computer. And then, let's apply it. And one thing that I want to mention is that this needs to be applied every time you restart the machine, okay? Because what happens is that the config map is overwritten by k3d or by kind, and then you cannot connect to the network, and most of the time the fix is that you need to configure the config map again. So, if you are using this network for test purposes on your local dev machine, make sure to update this config map every time. So, let's do it again. So, now it's unchanged. Then, we need to install the Fabric operator. This Fabric operator build is special, because this is version 1.11.0. Why do you use Istio in the demo?
Because we need to be able to have a load balancer locally. So, usually this is Istio or Traefik — in version 1.10 the bevel operator is integrated with both Istio and Traefik. Because we want to be able to access, from our local machine, the Kubernetes cluster, which is local, but we need to be able to go directly to Istio and then route to the right peer. And this routing is based on SNI — Server Name Indication. And, just for information, this is a property, an extension — the server name indication — that goes in the TLS ClientHello. So, here, if we have this ClientHello, there is the SNI, the server name it indicates. And this is not encrypted, since it's in the first packet that is sent. So, that is why we're using SNI in order to route between the different nodes. And why are we using SNI? Because the TLS handshake is done at the nodes, not at Istio. For a normal HTTP application, for example this website, the TLS can be terminated at Istio and then the rest can be HTTP or whatever protocol we want. But in this case, we cannot modify the request. So, Istio opens the packet, checks the SNI — which is public, it doesn't affect the request — and then it routes to the node that is needed. So, yeah, that is why Istio; and Traefik also has the same routing strategy, I think with HostSNI, and then we put whatever host here. So, yeah, that's the point. And this is a route that you can apply with Traefik. But this will be in version 1.10.0, which we will be releasing this week, before the next meetup. And you will be able to use Traefik and much more. And this is very important.
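To illustrate the SNI-passthrough routing just described — hostnames and the destination service name are placeholders; the operator creates equivalent objects for you:

```shell
# Sketch: TLS passthrough routing on SNI. Istio never terminates TLS here;
# it only reads the (unencrypted) SNI from the ClientHello and forwards the
# raw TCP stream to the matching peer, where the real handshake happens.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: peer0-org1-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: tls
        protocol: TLS
      tls:
        mode: PASSTHROUGH        # handshake happens at the peer, not at Istio
      hosts:
        - peer0-org1.localho.st
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: peer0-org1
  namespace: default
spec:
  hosts:
    - peer0-org1.localho.st
  gateways:
    - peer0-org1-gateway
  tls:
    - match:
        - port: 443
          sniHosts:
            - peer0-org1.localho.st
      route:
        - destination:
            host: org1-peer0     # the peer's ClusterIP service (name assumed)
            port:
              number: 7051
EOF
```

With Traefik the equivalent is an `IngressRouteTCP` whose match rule is ``HostSNI(`peer0-org1.localho.st`)`` with `tls.passthrough: true`.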
So, that is why, in the prerequisites, I said basic networking concepts, because often one of the hardest parts of this is to know which certificates are involved: for the peers, we have one signing certificate for each node and then one TLS certificate, which is used for the server. And then we need to know when each of these certificates is used. And this goes for the peers and for the orderers. So, yeah, if anyone has any question, let me know. Otherwise, let's continue. So, this is version 1.11. This is a version that I pushed with the implementation of Fabric 3.0; it will be released in the next version, so it will be released in 1.11. And this contains the modifications needed in order to be able to create the BFT channel. So, then let's add the repository. This first update is just in case there is a new version; so if we release a beta 4, then you will need to execute this command. Then upgrade it. Okay, so this will upgrade it, and if it's not installed, it will install it. So we can run this command as many times as we want. And if we check in Lens, okay, the image for the operator is being pulled. Okay, successfully pulled, but it's pulling the other image, from quay.io, okay. So, this is the operator, okay. So, now it's running. And we have this kubectl plugin, okay. So, this kubectl plugin — I have my own because I compile it directly from the source code. But otherwise there is a project called Krew, from the Kubernetes community, which is useful in order to download plugins. So, you can just install it, depending on your operating system — macOS, Linux, Windows, et cetera. Then, once you have it installed, you can just execute this command. If you want to compile it from the source: here we have the operator repository, the hlf-operator. Then what you can do is go to the kubectl-hlf folder. So, let's go into that, and then we can just build it.
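Before the source build, the package-manager path described above can be summarized as follows — the repo URL and version tag are assumptions based on the operator's docs at the time:

```shell
# Sketch: install the operator via helm and the CLI plugin via krew.
helm repo add kfs https://kfsoftware.github.io/hlf-helm-charts --force-update
# Idempotent: installs if missing, upgrades otherwise (version tag assumed).
helm upgrade --install hlf-operator kfs/hlf-operator --version=1.11.0-beta.1
# kubectl plugin, installed through the krew plugin manager.
kubectl krew install hlf
kubectl hlf --help
```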
So, we will build it locally and then we move it to the path. And this will take some time. But if you are going to test this workshop, then I recommend you to do it this way; otherwise — I haven't tested it with the public release, only with the one that is now released, especially because we will release another one with version 1.10. But yeah, this will finish, and then it will ask for the password. And then, if we do which kubectl-hlf, we can already interact with it. And this is how kubectl plugins work: so, there is kubectl and there is the plugin, which is this one. So, there is another one that I'm using, ssi, which is for an operator that I haven't released, related to self-sovereign identity, in order to deploy issuers, verifiable presentations, and it's the same: if I do which kubectl-ssi, then it's on the path. So, this is useful in order to be able to write extensions for Kubernetes, especially for developer experience. So, let's continue. You can do this, or you can also do what I just explained. And then, let's deploy the peer organization. So, I remember when I did the workshop last year, there were specific images for ARM and AMD, but now the same images can be used for AMD64 or ARM. So, I'm right now on a Mac M1 and I can use this, and this tutorial works perfectly. So, if you have a Mac, or you have an Ubuntu on ARM, you can also try this workshop. And then, let's deploy the certificate authority. Keep one thing in mind, because the storage classes right now are hardcoded to local-path. And if you go to Lens, you will need to see the storage classes that are available in your cluster, because for k3d it's local-path, but for kind it's standard, and I think I have another cluster here on Azure. So, we can see here that for Azure it's different. So, there is a default, but there are others as well.
So, you will need to check whatever storage class you want — if you are using kind, use standard — and then let's create the CA. And while this CA is creating, I will create also the orderer CA for the orderer organization. And while this CA is being created, I will explain shortly what the process is: what does it mean to have an organization in Hyperledger Fabric? So, we have the Fabric CA, which is the first thing that needs to be created. Then we need to register a user in the Fabric CA; okay, this is the next step. And then we can create the peer CRD in the Kubernetes cluster. So, this is the structure. And then what the operator does is react to the CRD. So, we have here the operator: it reacts, takes this CRD, and then deploys the deployment, and also the Istio gateway and virtual service — it deploys everything that is required for the peer to run. But these are the steps. And the steps are very similar also when we are deploying an orderer organization. So, for the orderer organization it is more or less the same. So, let's do this. Okay. So, the same way that it takes the CRD for the peer, it takes the CRD for the orderer, and that is the flow. So, same way. That is why we need to create two Fabric CAs: because the certificates are different for each of the organizations, and that's what makes an organization different. So, we have the operator, it will create the deployment for the Fabric CA and also for the peer, et cetera. So, it will also take the CRD from this — react to the FabricCA CRD. And then in Lens, if we check right now and we go to the k3d cluster, we see that both organizations — both CAs — have been deployed two minutes ago. And if we go to the custom resource definitions, FabricCA, we see that these two are running. So, we're good to go. So, the next step, as was explained here, is to register a user in the Fabric CA, and we need to do it for both.
This is for the orderer organization and this is for the peer organization. And if you have this process clear, you can do it for multiple organizations: you can have four or five peer organizations, or multiple orderer organizations. Then, when creating the channel — when creating these two CRDs — you will need to configure them accordingly. So, let's register. So, well, apart from that, we can curl the orderer CA on localho.st. And what this returns is the CA name, by default, and the CA chain — this is the certificate authority certificate, and it is base64-encoded, so if we decode it, this is the certificate authority. And there is a parameter, which is the CA name, and we have two by default: the CA, which is used for signing, and then the TLS CA, which is used to generate the certificates for the servers of the peers and the orderers. So, this is the certificate authority certificate. If we go to Certlogik — this is a webpage that I use, certlogik.com, a certificate decoder — we can see the properties. So, this is valid for 10 years. Common name tlsca; the default organizational unit, organization, street, locality. And there is a property here, whether the certificate is a certificate authority certificate: yes. So, that is what we want, and with that we have validated it. So, we can do the same for the org1 CA, because this was for the orderer organization. And we see that we are on our local machine and we are accessing via 443. This is because of the mapping we did between the 443 that is on my machine and the service — the node port — that is running on Kubernetes. If we go to Lens, then we have this mapping. Well, this 80 is not the 80 of the machine; what we're interested in is the node port. So, this is very important: if you don't have this node port, then nothing will work. And this is for local, of course. In the cloud, this works differently, and it depends on the cloud that you're using.
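The CA bootstrap and the check just performed could be sketched as follows — names, secrets, and the jq path are assumptions:

```shell
# Sketch: create the two CAs and verify them through Istio on localho.st.
kubectl hlf ca create --storage-class=local-path --capacity=1Gi --name=org1-ca \
  --enroll-id=enroll --enroll-pw=enrollpw --hosts=org1-ca.localho.st --istio-port=443
kubectl hlf ca create --storage-class=local-path --capacity=1Gi --name=ord-ca \
  --enroll-id=enroll --enroll-pw=enrollpw --hosts=ord-ca.localho.st --istio-port=443
kubectl wait --timeout=180s --for=condition=Running fabriccas.hlf.kungfusoftware.es --all

# The /cainfo endpoint returns the CA name and the base64-encoded CA chain;
# decoding it yields the PEM certificate shown in the decoder.
curl -sk https://ord-ca.localho.st/cainfo | jq -r .result.CAChain | base64 -d
```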
So, let's register the user. And also, let's register the user for the orderer. This is explained in detail in last year's meetup course — that is why I'm not going deep into it, because there we had an hour and a half to go through all of these commands. And if you're interested, you can check the meetup from the past year, which I think is one of the most viewed recordings. So, we can deploy a peer right now. We're going to deploy two peers. The state DB will be LevelDB; this is so the peer is lightweight. And the image will be the one that we have declared. So, make sure that you have declared the peer image in the current shell. If you change shells, then you will need to go to the top and declare these variables again. If not, it will not work and there will be an error. So, let's create these two peers. The storage class is the one that we have defined above. The enroll ID is peer — the user that we have registered. The MSP ID is the organization that it belongs to; in this case Org1MSP. Let's make it bigger. Then we have the enroll password, peerpw. This is the secret that was used in order to register the user: so this is the user and this is, let's say, the password. And then we have the capacity, five gigabytes, but since it's local it can be whatever you want — I mean, the limit is your disk. Then the name — this is for the FabricPeer CRD — and the CA name that will be used; this uses the form name.default, the name and the namespace. And the hosts — and this is very important: localho.st, always localho.st in this workshop. And then there is the port; this 443 is the one that will be used here. So, it's important that we keep 443, because then we will be able to access the peers through this port. And so, right now the peers will have been deployed. If we go to the pods: org1 peer, okay, so this is deployed. This is running, 54 seconds ago.
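The registration and peer deployment just walked through, as a command sketch — user names and secrets follow the talk; the exact flags are as I recall them from the operator's docs:

```shell
# Sketch: register the peer user against the org1 CA, then deploy two peers.
kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw \
  --type=peer --enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP

for n in 0 1; do
  kubectl hlf peer create --statedb=leveldb --enroll-id=peer --enroll-pw=peerpw \
    --mspid=Org1MSP --capacity=5Gi --name=org1-peer$n --ca-name=org1-ca.default \
    --hosts=peer$n-org1.localho.st --istio-port=443
done
kubectl wait --timeout=180s --for=condition=Running fabricpeers.hlf.kungfusoftware.es --all
```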
FabricPeer — and then we can see the CRD also. So, this is running, perfect. So, what we're going to do — and this is different from the Raft implementation — is deploy the orderers. Before, we were able to deploy one orderer for Raft, or three orderers, but in this case the minimum is four, okay? So, we will deploy four orderers. This is in order to reach — well, maybe Jacob can explain better what it is for, but I think that's it. It's because, if we assume that you want to support at least one failure, then, per the formula, you need the number of faulty nodes to be less than a third; and less than a third with one failure means four, right? Yeah — but if we deploy it with three, we just won't have high availability, I guess, no? Okay. So, let's deploy these four, and then they should appear here — node one, two, three, four — and then we can wait. This command is to wait until all of them are running, which will be in no time. So, yeah, with these four orderers, at least three will need to be up — three out of four — for this to work, and we can test spinning down a node afterwards. Okay, so now all of them are running and the conditions are met. So, now, if we go to this picture, we have this in green, okay? So, we have deployed the peers, the orderers, the Fabric CAs. So, what we're going to do now is create the main channel. And the difference between the main channel and the follower channel is that usually — and this is how most networks have been working as of now — there is one organization that has most of the knowledge about Fabric, and this organization is the one that creates the channel, configures the channel, and administers the channel. Because if you have 10 organizations, gathering all of the organizations in order to make a decision can be hard, and there are no automated tools.
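The fault-tolerance arithmetic Jacob mentions can be checked quickly; with n = 4 one failure is tolerated, and the quorum is 3 out of 4:

```shell
# BFT sizing: f = floor((n-1)/3) failures tolerated; quorum = ceil((n+f+1)/2).
n=4
f=$(( (n - 1) / 3 ))
quorum=$(( (n + f + 1 + 1) / 2 ))   # integer ceiling of (n+f+1)/2
echo "n=$n tolerated_failures=$f quorum=$quorum"
```

With four nodes the network keeps ordering with one node down; with three BFT nodes f = 0, so a single failure halts ordering — which is why four is the minimum.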
So, usually there is one organization that takes care of the ordering service and takes care of the configuration of the channel — and the configuration of the channel means adding an organization, removing an organization, changing the channel configuration, et cetera. So, the purpose of the main channel is to be able to push a channel with some configuration and to be able to change it later. In fact, there is a concept called the administrators of the channel, which are the ones whose signatures are needed in order to be able to modify the channel. And then the followers are usually peer organizations that join this channel that was created by another organization, and the ones that create the follower channel only need to have peers and, of course, the chaincode. So, let's check kubectl get pods: we have the orderers and the peer running, everything is running correctly under the operator. So, right now we need to create a channel, and in order to create the channel — right now we have two organizations, Org1MSP and OrdererMSP — we need to have two identities: the sign identity and the TLS identity. And the same for org1, but only the sign identity, in order to be able to join the peers to the channel. In this case, the TLS identity is used for the channel participation API, in order to be able to interact with the orderer admin server, and the sign identity is used to sign channel updates. So, this TLS identity is used when creating the channel, through the channel participation API, but if there is any change, then we need to use the sign identity. And for org1, only the sign identity. So, the steps here are used to create these identities. So, we will register an admin identity, because both identities need to be administrators. Okay, so this has been registered, and then we will create two identities with the same user.
One will be enrolled using the CA — this is for signing — and then this parameter selects the TLS CA, so this is for the TLS identity. Okay, so let's create these, and the identities have been created. So, this is in a CRD, a custom resource definition, handled by the operator. And this also supports renewal of the identity, because one of the problems that we encountered is that creating the identities is very easy — I mean, it's very fast — but the renewal part is hard. So, these identities are reconciled by the operator; they are checked every minute, and if there are only 15 days left until expiration, then it will renew the identity. And this FabricIdentity will create a secret, and we can check here, in the spec, the secret name. So, the secret name: if we check in the secrets of the same namespace, we have this secret, and then we have the cert PEM, the key PEM, the root PEM, et cetera, in the user YAML, which contains everything — and this is inside the secret. And if it is going to expire, then the operator will re-enroll using the same public and private key, and then will update the secret. So, you can be sure that this secret will always have a valid identity. And we need to also register and enroll the Org1MSP identity, so we do the same, but instead of having to create two identities, we create one, and this one here will be the one used as the sign identity.
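As a sketch, the two orderer identities (sign and TLS) plus the org1 admin identity might be created like this — the `--ca` selector values (`ca` vs `tlsca`) and the flag spellings are assumptions based on the operator's CLI at the time:

```shell
# Sketch: FabricIdentity CRDs, handled (and auto-renewed) by the operator.
# Orderer admin, enrolled against the signing CA:
kubectl hlf identity create --name ord-admin-sign --namespace default \
  --ca-name ord-ca --ca-namespace default --ca ca \
  --mspid OrdererMSP --enroll-id admin --enroll-secret adminpw
# Same user, enrolled against the TLS CA (used for the channel participation API):
kubectl hlf identity create --name ord-admin-tls --namespace default \
  --ca-name ord-ca --ca-namespace default --ca tlsca \
  --mspid OrdererMSP --enroll-id admin --enroll-secret adminpw
# Org1 admin, sign identity only:
kubectl hlf identity create --name org1-admin-sign --namespace default \
  --ca-name org1-ca --ca-namespace default --ca ca \
  --mspid Org1MSP --enroll-id admin --enroll-secret adminpw
```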
Okay, so, let's create it — it is now created — and now we can check the fabric identities in order to see if it worked: if there was any problem, the state would be failed, it would not be running. And this identity has been saved in the secret. Okay, so what we will do — because we need to pass the identity to this main channel — is that this main channel will take the identities from the FabricIdentity, so we will specify the name, which can be org1-msp, and the namespace, which can be default. And this is the structure, and we will check this in the CRD of the FabricMainChannel, which is one of the longest. So, create main channel: we have here all of these environment variables. Now, why do we need each of them? So, we have the peer certificate — this is not needed — but this part is needed: we have here the orderer TLS cert, this is the TLS CA certificate, this is for the organization, okay. Then we have all of the certificates, the sign certificate and the TLS certificate for each of the orderers — so if we had six orderers, we would need to add a few lines here. Because, if we check the usage, there is a consenter mapping, and this is the configuration added by Smart BFT: there is a consenter mapping which contains the host, the port, the ID — which we saw in the POC, that we had the ID, so this is the ID passed to the library — then the MSP ID, this is for Fabric, and then we have the client TLS cert and the server TLS cert, which are the same, and then the identity. And the identity is needed because it will be needed by the consensus protocol, in this case Smart BFT, in order to verify that a node is part of this consenter mapping. But not only that: Smart BFT, as part of the agreement on a block, also assembles the signatures by these identities. Can you repeat, Jacob?
Yeah, so as part of the Smart BFT agreement protocol, it also assembles signatures on blocks, and it verifies that the signatures on blocks match these identities. Okay, so it's part of the metadata. Yes, exactly. Yeah, in fact, I saw it in the code, and I have the metadata signature here. So in AssembleProposal it gets all of the metadata, okay? Yeah. Actually, the signatures are not in the assembler, they're in the Deliver, because in the Deliver method, if you saw, you have the proposal and the signatures, right? In the signature method. Here, yeah. Yeah, but if you go up to the signature of the method — I mean the definition of the method. Yeah, so you have here the signatures. So these signatures are actually passed from the agreement protocol. Well, this is deeper than I know, to be honest. But what do you mean by the agreement protocol? So — I actually explained this in my previous session, right? The one that I did. Yeah. So the BFT agreement protocol — no, that's not the paper — it has the pre-prepare, prepare, and commit message phases. Okay. We piggyback the Fabric signatures on blocks as part of the commit messages. So once you collect the commit messages as part of the agreement protocol, you already have the signatures on the Fabric block. And this is in the Deliver? No, this would be in the assemble — no: Assemble is where the leader gets a bunch of transactions and creates a block. Okay. So what is SignProposal for? So, yes, we have here a signed proposal. SignProposal is actually a way for the library to call back into Fabric to say: hey, here is the proposal — I don't actually know that it's a Fabric block — but please sign this proposal. And underneath the covers, Fabric actually takes this proposal, converts it to a Fabric block, and then signs it.
So the signatures that we see in the Deliver are the ones that were produced by that SignProposal. Exactly, exactly. There is some kind of conversion or formatting, but yeah, that is the signature, exactly. Ah, perfect. And these are in the metadata, which — if we check the block — it shows the... well, sorry, I need to re-open it again. But if we check the structure of the block, then there is metadata specific to etcd Raft and then for BFT. So each of the consensus protocols needs to have some metadata, I guess. Yeah, but — I mean, actually it's a kind of term overloading. So there is the metadata of the Fabric block: the signatures reside in the metadata section of the Fabric block. But there is also another thing called the metadata of the protocol. So both Raft and BFT have metadata — metadata for the protocol. For the consensus? No, it's metadata for the library. Actually, the nice thing in Smart BFT, by the way — and that's also the reason it's slow — is that in Smart BFT the entire metadata of the protocol can be encapsulated in a single block, while in Raft the metadata is also written in the WAL. That's why, if you have a Raft orderer and you delete the WAL, that's really not a good idea. But Smart BFT is much, much simpler. And also, because of this, it can only do one block at a time. And that's also the reason it's slow, because Raft has pipelining. Smart BFT does not have pipelining, but the advantage of not having pipelining is that you can really easily implement dynamic addition and removal of nodes. Yeah, because when there is a Deliver, you check if it's a configuration block or not, and then you can update the config directly, no? Or am I missing something? Or is it because of another thing? No, that's correct. I mean, this is the state of the protocol, what you're showing now. Yeah, okay.
Well, I'd need a deep-dive session on the consensus for that. I mean, I just focused on integrating it — so let's continue, and let's test etcd Raft later, because it works side by side. So, yeah, because we're over time, I will continue. So, we have the consenter mapping. The identity is there because of the verification of the signatures, which are in the metadata. And then we have the four orderer nodes, which need to have different IDs; and we have orderer node 3, which is the fourth one, and the orderer type, which is BFT. In the previous meetup, if we check the creation of the channel, the orderer type was etcdraft. So we will test it afterwards with etcd Raft too. And apart from the consenter mapping, we have the configuration for Smart BFT: the request batch max count — this is how many transactions are bundled in a block — the max bytes, the batch interval, and the incoming message buffer size; this is for the protocol. And also the other configuration that was mentioned in the POC is the leader rotation, where two means that it's activated, and there will be three decisions per leader. I don't know, Jacob — I mean, I haven't tested it properly. Actually, in Fabric right now this is disabled by default, so it doesn't matter what you put here. Wow — the leader rotation? So it's activated all the time, no? No, it's the other way around: it's disabled by default. Okay. Sorry — it is disabled in the code, it's hardcoded disabled. So even if you put something here, you will not be able to turn on leader rotation in Fabric, I think. That's it. Yeah, you see? We hardcoded it to false in the meantime. And is there a reason for that? I would say stability and, like, we want to test the system in small steps. Like, we want users to test the system in small steps.
Leader rotation naturally adds more entropy, right? So we decided, for now, to make it false. Okay, but will it be enabled later? I mean, probably in the future. Okay. So, well, this is not used, and neither is this one. And for the rest of the properties we would need to go deeper, because most of them — I mean, I haven't really gone deep into them. So these are, if we go to the bevel operator, in the main channel, the BFT configuration — the Smart BFT options. So these are the ones that are passed to Fabric. And well, this one is not used, of course, as Jacob said; and when it is used, then we will be able to configure it. But right now the configuration is set here, and it's passed to the orderer. So, next step. We have the channel configuration. This is the orderer section, where we can configure the batch size — which, I'm not sure, Jacob, but this batch size is for etcd Raft, so this is not used, and this batch timeout is also not used. So, we have the capabilities at V3_0. Then we have below — and this is a very long YAML — the peer organizations: the MSP ID, Org1MSP, and the CA name and CA namespace; this is in order to get the root CA and TLS root CA certificates. As an alternative, there is the external peer organization, where you can declare the MSP ID, which can be whatever you want. So this is for external organizations: if someone else creates an organization, then they can give you the root CA and TLS root CA certificates, and then you can add them here. Okay, so this is why we have this externalPeerOrganizations, and we also have this externalOrdererOrganizations, so you can add organizations from other clusters, where there is no CA here. And then we have the identities. So, in the identities, we have the OrdererMSP — this, as we saw, is the TLS identity for the orderer — and then we have one for signing, for the orderer.
So these are the identities that need to be set in the FabricMainChannel. And we have the orderer organizations. And in the orderer organization, we declare the CA name and CA namespace — this is the same, for the root CA and TLS root CA — then the external orderers to join, then the MSP ID, and then the orderer endpoints. I don't think they are used, but I don't have deep knowledge of Fabric, Jacob, so I don't know if these or the consenter mapping are being used, or... So, the orderer endpoints are used to replicate blocks, and the consenter mapping is used for the consensus protocol. Okay, perfect. So this is still needed. So you have only one organization here, right? Yeah — there will be multiple later, but right now, yes. Orderer organizations — but what are the organizations here? There is one orderer organization and one peer organization. Yeah, but I think you need to have an endpoint per org. An endpoint per org — do you mean only one orderer endpoint per organization? Yeah — well, you have here the MSP ID OrdererMSP; do you have, like, an Orderer2MSP or something? No, no, no, it's just... Okay, so in this file you only have one? Yeah. We can have multiple, but we're going to test it later. But this is the same as it has been in Fabric 2.5, so... Yeah, I'm just saying that each org needs its own endpoint when you want to deploy it across several organizations. But in this YAML file there is only one org, right? Yeah, it's only an example. Well, this is used by the one endpoint — I don't know, maybe we can check it later, but I don't understand why we would need one endpoint, because this is the configuration that is on the channel. So, well, let's continue and then we check it afterwards in the configuration of the channel. So, let's deploy it.
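Pulling the pieces above together, the FabricMainChannel might be sketched like this — heavily abbreviated, with field names as I recall them from the operator; treat it as illustrative, not as the exact schema:

```shell
# Sketch (illustrative field names): a BFT main channel with one orderer org,
# one peer org, and the V3_0 capability.
kubectl apply -f - <<'EOF'
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricMainChannel
metadata:
  name: demo
spec:
  name: demo
  adminOrdererOrganizations:
    - mspID: OrdererMSP
  adminPeerOrganizations:
    - mspID: Org1MSP
  channelConfig:
    capabilities: ["V3_0"]       # required for BFT; another value broke the channel
    orderer:
      ordererType: BFT
      smartBFT:
        request_batch_max_count: 100
        leader_rotation: 2       # currently hardcoded off inside Fabric
        decisions_per_leader: 3
  # One entry per orderer node; IDs must differ, certs indented to match the YAML.
  consenters:
    - host: orderer0-ord.localho.st
      port: 443
      id: 1
      mspID: OrdererMSP
      identity: "..."            # sign cert, verified by Smart BFT
      clientTLSCert: "..."
      serverTLSCert: "..."
  identities:
    OrdererMSP: {secretName: ord-admin-sign, secretKey: user.yaml}
    OrdererMSP-tls: {secretName: ord-admin-tls, secretKey: user.yaml}
    Org1MSP: {secretName: org1-admin-sign, secretKey: user.yaml}
EOF
```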
One thing that it needs to have, compared to the Fabric 2.5 channel, is the V3_0 capability, because when I was trying to integrate this, I had a different capability here, and that was the reason it wasn't working. So let's apply this, create the FabricMainChannel, and then check it. It takes around 20 seconds to do everything. What this does is, first, prepare the channel config and convert it into a genesis block, then join the orderers to the channel, and then, for subsequent requests, it will try to update the orderer configuration and the application configuration. The application configuration contains the peer organizations and the policies: who can update the channel, the chaincode commit policy, et cetera. The orderer configuration contains the orderer organizations plus the consensus settings. Okay, so these are the main blocks. And... this was not created; there is a problem, channel demo doesn't exist. There's a problem here with a certificate. I don't know if I changed anything. Yes, the identity; I think I haven't corrected this. Let's delete the channel and then create it again, and then this will work. I will also delete the operator pod. Okay, this is running again, perfect. So this will be deleted in no time. Acquiring the leader lease, this is the operator, perfect. So right now this is deleted, so let's apply it again. Invalid... leave it as it was earlier. Now it should be created, perfect. This was because we had some leftovers here; I had already corrected most of it, but this part was missing, the indentation of 12 spaces. I spotted this variable, but this was missing, and it is needed in order to indent the YAML correctly. If we check the TLS root CA, it is indented 12 spaces so that it can be placed here, in the consenters mapping.
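The create, delete, and re-apply cycle shown here is plain kubectl against the operator's CRDs. The resource and channel names below are illustrative; check `kubectl api-resources` for the exact CRD names in your installation:

```shell
kubectl apply -f demo-main-channel.yaml   # operator builds the genesis block and joins the orderers
kubectl get fabricmainchannels            # wait for the status to become RUNNING (~20s)

# if reconciliation got stuck on a bad identity or certificate:
kubectl delete fabricmainchannel demo
kubectl apply -f demo-main-channel.yaml
```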
So the FabricMainChannel right now should be running, as we can see here, perfect. Now we can continue with the follower channel, whose configuration is much simpler. We only need an orderer in order to be able to fetch the config block. The process is this: the operator connects to an orderer, fetches the block, and then, with the identity given, it tries to join the peers. Also, if we have anchor peers declared, it will try to update the anchor peers automatically, because updating the anchor peers, even though it is a channel config update, only needs the signature of the organization that the anchor peers belong to. So each organization can update its own anchor peers, not all of the anchor peers. So we have, for Org1MSP, the anchor peers; these are the hosts of the anchor peers, so you can use whatever hosts you want. We have the identity to be used, first, to update the anchor peers and, second, to join the peers to the channel. Then we have the orderers the operator needs to connect to in order to fetch the block, the name of the channel, which is demo, and the MSP ID of the organization we are joining. So let's create this and check the FabricFollowerChannel to see if it is running. It was running a while ago, when I executed everything. So right now we have the network configured using SmartBFT, changing only this configuration. The only things we needed to change are: the orderer type, BFT; the consenters mapping, which we need to add (with the stable version of the operator this is not supported); the SmartBFT properties; and the capabilities, which need to be updated to V3_0, these are the global capabilities of the channel. And that's it, those are the changes.
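The follower channel just described can be sketched as a FabricFollowerChannel resource roughly like this. Names, hosts, and secret keys are illustrative, and the fields should be verified against your operator version:

```yaml
apiVersion: hlf.kungfusoftware.es/v1alpha1
kind: FabricFollowerChannel
metadata:
  name: demo-org1msp
spec:
  name: demo                        # channel to fetch and join
  mspId: Org1MSP                    # organization joining the channel
  anchorPeers:
    - host: peer0.org1.example.com  # any host you want to advertise
      port: 7051
  hlfIdentity:                      # identity used to update anchor peers and join peers
    secretName: org1-admin
    secretNamespace: default
    secretKey: user.yaml
  peersToJoin:
    - name: org1-peer0
      namespace: default
  orderers:                         # orderer used to fetch the config block
    - url: grpcs://ord-node1.default:7050
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
```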
Right now what we can do is continue with the installation of the chaincode. We will use this admin identity, which already exists. Then we need to create the network config. The network config is a CRD of the operator that eases the creation of connection profiles for Hyperledger Fabric. So we create a network config called org1-cp. Then we can go to Lens, to the FabricNetworkConfig resources, and we see the network config created. Then we go to the secrets, look for org1-cp, and there we have our network config with the organizations, the OrdererMSP, the Org1MSP, the user that we specified, and all of the orderers with the external URLs to be able to connect to them. And that's it; this is a utility that is very useful to generate connection profiles. For two organizations it is not a big deal, but if you are handling five or six organizations, then this is really useful. So, with the network config created, we fetch the secret org1-cp, take the data key config.yaml, decode it, because all secret data in Kubernetes is base64-encoded, and redirect the output to org1.yaml. And here we have our connection profile. In order to deploy a chaincode, the steps are the following: first, package the chaincode; second, install the chaincode on all of the peers (in this case we have two peers, so we will need to install it twice, once on each peer); then approve the chaincode definition for a majority of the orgs; and finally commit the chaincode definition. When we commit the chaincode, we will be able to use it. This is the basic structure, the chaincode lifecycle. So we will start by packaging the chaincode.
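Fetching the generated connection profile is a standard secret round-trip: Kubernetes stores secret data base64-encoded, so it has to be decoded on the way out. The secret and key names are the ones from the demo and may differ in your setup:

```shell
# dump the connection profile generated through the FabricNetworkConfig CRD
kubectl get secret org1-cp -o jsonpath='{.data.config\.yaml}' | base64 -d > org1.yaml
```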
We're going to deploy the chaincode using the chaincode-as-a-service builder. This is why we have this type, ccaas, in metadata.json, and this builder is bundled into Fabric since 2.4, if I'm not mistaken. Then we need to prepare the connection file, since we're going to deploy the chaincode in the same namespace as the peer. With this connection configuration, when we install this on the peer, we're basically telling the peer: for this chaincode, you need to connect to this address, which is the chaincode service name, asset, on port 7052, which is the one the operator will expose. What this block will do is install the chaincode; it creates the chaincode package, which contains metadata.json, and then another tar with connection.json. The metadata is for the builder, the built-in chaincode-as-a-service builder, and connection.json is used once the chaincode is committed, when the peer needs to connect to the chaincode. So let's execute this; of course, I can explain it in detail in the next workshop. Then we can check that the chaincode has been installed; as we can see, it is installed, and then let's deploy the chaincode container on the cluster. We're still missing two steps, the approval and the commit, but between the install and the commit we need to deploy the external chaincode, and this is a basic deployment. If we check in Lens, we see that the asset chaincode is being deployed, and if we check the service in the namespace, we see the asset service on port 7052 with no endpoints yet; this is because the pod is still being created. This is the service that will be used by the peer to connect to the chaincode. So this is okay, perfect.
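The package built for a chaincode-as-a-service chaincode can be reproduced by hand; it is just two JSON files in a nested tarball. The label, address, and port below are the ones from the demo and are otherwise arbitrary:

```shell
# metadata.json tells the peer's builder this is a ccaas chaincode
cat > metadata.json <<'EOF'
{"type": "ccaas", "label": "asset"}
EOF

# connection.json tells the peer where to reach the running chaincode service
cat > connection.json <<'EOF'
{"address": "asset:7052", "dial_timeout": "10s", "tls_required": false}
EOF

# Fabric expects connection.json inside code.tar.gz, bundled with metadata.json
tar -czf code.tar.gz connection.json
tar -czf asset.tgz metadata.json code.tar.gz
tar -tzf asset.tgz
```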
So the external chaincode has been deployed, and now we're going to approve the chaincode. Since this is the first time we will commit it, the sequence is 1 and the version is 1.0. Let's approve it and check the logs; I was testing this an hour ago and I had some problems. So, pending verification, this is from just now. We are able to reach the nodes, so I'm not sure what is happening. Let's create another channel, called demo1, because this one is failing with "failed sending to node 3". I don't know if it is related to the problem we had earlier with the variable. So let's create it from scratch, with no issues this time. Okay, demo1 is created, so let's join the peers to the channel and see whether the same thing happens, or whether the orderers cannot communicate between themselves on this channel. Okay, let's try it now. Let's try to approve it. Okay, we need to update the network config first; let's delete it. It's done now. Okay, chaincode approved. I don't know why this happened for the channel demo, but I think it was because we created the follower channel and the main channel at the same time. Let's try it for demo1. Okay, so now it's good. We have approved it, and if we try to approve again, it throws this error about redefining the current sequence, which means that we have already approved it. And then we can commit it. Okay, chaincode committed. But what I don't know is why the first blocks take so long, because the FabricMainChannel also takes 20 seconds. Maybe there is some warm-up happening in Fabric. This is early testing, so there is something going on there that I don't really understand yet. But right now we have committed the chaincode and we can invoke it, and this works as a normal Fabric network. So now we can see here the proposals.
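The approve and commit steps use the standard chaincode lifecycle CLI. This is a sketch of the commands behind the script; the channel and chaincode names are the demo's, and orderer address and TLS flags are omitted for brevity:

```shell
# sequence 1, version 1.0: the first definition of this chaincode on the channel
peer lifecycle chaincode approveformyorg \
  --channelID demo1 --name asset --version "1.0" --sequence 1 \
  --package-id "$PACKAGE_ID"

# approving again with identical parameters fails: the sequence is already defined
peer lifecycle chaincode checkcommitreadiness \
  --channelID demo1 --name asset --version "1.0" --sequence 1

peer lifecycle chaincode commit \
  --channelID demo1 --name asset --version "1.0" --sequence 1
```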
Okay, so these are all the logs from SmartBFT. Perfect, applying the filter. Right now these errors are for the demo channel, I think, not for demo1. I don't know if you have seen this problem. You need to send, I mean, when you compute the config update, you need to grab a fresh, the latest config block. Yeah, maybe that is why, because I also created... because it's been fetched, but... So this is essentially an MVCC error in the orderer; it is most likely the fabric follower channel doing this. Okay, okay. So if we continue to make transactions, we shouldn't see this error. Asset 7, let's invoke it, and then we should see some logs here. Perfect, writing block 6 with 1 transaction. And if we execute another one, then we should see block 7 here. Perfect. Okay, so here we have the ability to query the chaincode or to invoke it. We had that problem with the demo channel, which was because these variables weren't set, and the timeout on the approve, but other than that it should be okay. If you want to test it locally, just clone this repository. I also wanted to test etcdraft; I don't think we have much time right now, so I will leave room for questions, if anyone wants to comment on this workshop or on what we can try. Do you want to try, Jacob, to see etcdraft? Let's try to create a Raft channel and see if it works. I think there is also an issue in Fabric that someone from my team opened about checking whether this works, so maybe you can later search for that issue offline and comment there. But this is in Fabric 3.0? Yes, yes. Okay. So let's take, one moment, an example for the main channel using etcdraft. I think here we have orderer 0 and orderer 1, okay, perfect. So this will be demo2.
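Invoking and querying afterwards works exactly as on a Raft network; each invoke ends up in a new block cut by the SmartBFT orderers. A sketch, with function names borrowed from the asset-transfer sample (the actual chaincode may differ):

```shell
# each successful invoke shows up in the orderer logs as a new block
peer chaincode invoke -C demo1 -n asset \
  -c '{"function":"CreateAsset","Args":["asset7","blue","5","tom","100"]}'

# queries are served by the peer and do not create blocks
peer chaincode query -C demo1 -n asset \
  -c '{"function":"ReadAsset","Args":["asset7"]}'
```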
Okay, this will be the capabilities, and then we need to update orderer node 4, and this one, orderer node 4, then the certificate, and then the orderers to join. I don't think there is anything else, so let's try it. Okay, running. So this means that the channel has been created and we have etcdraft here. You're sure it's the same cluster? Yeah, this is the same cluster. Okay, cool, then it works. Well, we need to test it, of course. No, but it works, because Raft communicates, right? Yeah, but let's try to join the peers and execute the chaincode, because even if the channel is created, maybe execution fails, no? No, but you can see in the logs that Raft is running. Yeah, but maybe right now it is misconfigured, and with etcdraft running BFT is not working, or, I don't know, I'm thinking out loud. Okay, anyway, I posted an issue in the chat, so after this call you can comment there and say that you tested it and it works. Okay, let's try to approve it. Okay, well, I don't have any. Okay, let's do this, approve for my org. Okay, chaincode approved, then commit it. And then let's invoke it. Okay, perfect. And then if we test it for demo1, which is the one with BFT... so it's working, but I don't see logs about BFT here right now. Okay, yeah, here they are. Perfect, so yeah, it's working. I remember I tested it, but it was a long time ago. So, perfect. Anyone who wants to ask a question, this is the time; if not, I think we can close. Thank you so much, thank you for everything. Sure, very nice presentation, thanks. Well, are you there? Well, I don't think he's there. So, yeah, thank you for being here at the meetup, and if you have any questions, please reach out to me on LinkedIn. And, well, see you at the next one, which will be next week.
And, well, I launched this course about Fabric, so if you want to check it out, feel free. So, thank you, and see you at the next one. Bye, have a nice day. Hey, David, are you all done? Yes, we're done. Great. This will be posted on the Hyperledger channel, I guess? Yeah, I'll follow up and send you the link on Discord. Okay, perfect, so see you next week, David. Yeah, thanks so much. Thank you, bye.