OK, you're live. Thanks, David. Welcome, everyone, to this workshop. This is a Hyperledger Foundation workshop named Operating and Extending Hyperledger Besu. A little reminder of what we're going to go over today: if you get a chance to open this link here on the wiki, you'll see a copy of this content as well. We have two sessions of two hours today, so a lot of meaty content. And this is the first time for me doing it, so please bring your feedback and be honest with me. If I bore you to death, I need to hear from you. And also bring your questions, bring your interrogations about what the project does and all those things. So for these first two hours, just to focus a little bit on the immediate: we'll first talk about what Besu is, where it's coming from, what it's trying to be, where its inspirations are. We'll spend a little time on how to configure it, how to actually run it correctly, all the gotchas, all the things that you need to know when you're about to operate Besu. And then we'll go deep into actually running Besu ourselves on our machines. We have some different exercises here, from running it in a simple dev mode, to working on our own genesis block, to then actually working on a consortium. So that's it for the operations session. Before we continue, I have a public announcement. Oh, sorry, we also have some prerequisites for the workshop that you can check out on the wiki. If you haven't done so, please do so ASAP, so you have all the resources available to you and can participate and follow along with the workshop. Now for a public announcement from Matt. Matt, take us away. Yeah, thanks for the tee-up there. So my name is Matt Nelson. I'm a product manager around Hyperledger Besu for Consensys. And in the spirit of this workshop, we're looking to set up a bunch of new education sessions to talk about some of the future of what we're seeing with Besu and public networks. So again, what do you want to learn about this topic?
What do you have as far as questions around public networks, the Merge, proof of stake, how Besu is planning to evolve for these functional pieces? And we want to get direct feedback on what you might want to see if you're already familiar with Besu and this is just a refresher. So we'd love for you to take this quick survey. There are only seven questions; it's very quick. Some are multiple choice, some are write-in. So feel free, whether you use Besu, or you don't use Besu and you're looking to get more familiar, or if you're looking to shape the future of our software. It is, of course, open source; we're always looking for contributors and feedback. Please help us out with this survey. It would be tremendously helpful, and it'll also help determine what education sessions and workshops we run in the future through the Hyperledger Foundation. So if there's anything that you'd love to see covered around Besu, or even Ethereum staking, any general questions around public networks or around the future of Besu, give us a shout in that survey. And yeah, we would love to see your responses. It's super quick, and I think David will also send out the link. It's in the slides, but David, maybe you could paste it in the chat now. Or it is pasted in the chat. Yeah, I just posted it in the chat. I'll also send an email out to everybody who registered as well, so you can look in your email for that or just click on the link in the Zoom chat. Yeah, thanks so much. We'd really appreciate it. You can also email me directly if you have candid feedback: it's matt.nelson at consensys.net. So please feel free to help us with the survey. We look forward to hearing from you all, and I hope that you enjoy the rest of the workshop learning about Besu. Thanks, Matt. I really appreciate you taking the time to tell us a little bit about what you're trying to achieve.
And I think this survey is critical to Besu and its success as an open source project. So thank you so much for taking the time to explain a little bit of what you're trying to achieve, and please, everyone, take this survey. The best outcome from this workshop is that you take a minute and fill in the survey so we can understand better how to serve you as a community, me included. David, would you like to go over the rules of engagement before I move on? Sure. So as people are probably aware, the Hyperledger Foundation runs a number of open source projects, Besu being one of them. As an open source project, we welcome everybody to get involved; as the name suggests, we're open. You can join our community events such as this one, or the regular calls that Besu runs to talk about the development work on the project, or you can join any of our channels. As I said earlier, we have a Discord chat server, we have mailing lists. Everything is open. You're welcome to get involved. No invitation needed, no requirements: anybody who is interested is welcome to do so. The only real ask that we have of people is that they treat each other well, basically. We have a code of conduct that lists what we expect in terms of how people treat each other, and it basically comes down to being civil to everybody. So the same applies here. We welcome any questions, comments, feedback, suggestions, any discussion that you would like to have. Again, we just ask that you be civil when you do so. And then, speaking of different community channels, I know I said this earlier when people were starting to join, but I'll say it again. You are able to ask questions in the Zoom chat, but we would encourage you to use the Besu Workshop channel on the Hyperledger Discord server. There are a couple of reasons for that. This Zoom chat will obviously go away when the workshop is over, but that Besu Workshop channel is permanent.
That means if you have questions about Besu after this workshop, that Discord channel is a good place to go. It will also put you in touch with other people in the community who aren't able to join this call. So I did just drop a link to the Hyperledger Discord server in the Zoom chat. Again, you are welcome to ask questions in Zoom if you want, but we encourage you to go to the Besu Workshop channel on the Discord server. And if you have questions about how to get on the Discord server, you can put that in the Zoom chat and I'm happy to help. Thank you so much, David. And we'll remind you throughout that there's such a thing as this Discord channel. If you have any questions, I personally will welcome them as we go, because I want this to be a lively exchange, not a dry set of slides that you could just read at home. I kept the content of the slides minimal so I can actually entertain you with my voice, which may at times not be intelligible because of my French accent. I recognize that, and I understand that people have different types of ears. So if you don't understand what I'm saying, make me repeat it; don't feel bad about it. And with that, let me tell you a little bit about me. I work at Splunk, which explains the really wonky background and the fact that there's a toaster next to my picture. I'm the senior engineering manager for the blockchain team at Splunk. I've been at Splunk for over two years. Previously, in a past life, I worked at Consensys, just like Matt, and I worked on Pantheon, Besu, Quorum, I worked on Orion, I worked on all sorts of products related to Enterprise Ethereum. I currently hold the title of chair of the Testing and Certification Working Group for the Enterprise Ethereum Alliance, where I build certifications and a number of requirements related to Enterprise Ethereum.
I'm also an Apache Software Foundation member, and I've been a committer on Hyperledger Besu for two or three years, where I contributed a number of enhancements to the protocol, such as eth/65 or the Stratum mining framework. Let me read the chat. No questions so far, great. So today we're going to dig into what Besu is, where it's coming from, and at first we're going to need to zoom out just a little bit. Before I explain to you what Besu is, in case you have no familiarity with Ethereum, let me just walk through why it exists and why it's important in this particular context. So Ethereum is the second largest crypto by market cap, Bitcoin being the largest. Ethereum was started in 2014 by Vitalik Buterin and a number of co-founders who went on to create even more value ever since. And it has many different clients. Some of them are well-known, such as Go Ethereum. One was Parity, renamed to OpenEthereum, now deprecated; an upcoming one is Erigon. And Besu is, of course, one of those Ethereum clients. Those clients are interoperable, meaning they all work together, and that actually brings resilience to the network. What they all have in common is that instead of a single application such as Bitcoin, which specialized in money transfers and all the history around crypto, the Ethereum blockchain is the first blockchain that adopted a general computing paradigm, allowing people to create their own smart contracts on top of the chain, so they could execute their own very custom logic without the folks building the blockchain knowing about it. And that has come to be called the Ethereum Virtual Machine, aka the EVM. Okay, so the other component of the discussion today is the enterprise, and I'm using air quotes as I say this, as in existing businesses. Back in 2008, when Rails started to get a lot of traction, there was also this notion of the enterprise. The enterprise is different. You're competing with existing tech businesses.
Think Oracle, think IBM, think all the big players out there. And those enterprises have very different requirements, because they have to have better auditing. They have to make sure that whenever they deploy, all the data is created, owned, and hosted by trusted parties. Everything has to be permissioned. Everything has to be auditable, right? So now we have a completely orthogonal set of requirements compared to what a blockchain is by default, which is: everything is public, everything is open, everything is shared, everybody gets a copy of the data, and everybody is able to participate openly in a public network. Now we have a different set of requirements, which actually run against what would be possible with a blockchain. The Enterprise Ethereum Alliance, which was formed around 2017, started to work towards that vision, and you can see a number of logos from back then. This is actually taken from the launch, where people started to come together and say, well, we want to see Ethereum in the enterprise. How do we go about doing that? Because we have a lot of work to do to bring this vision to bear. So those requirements, this activity, came with a first attempt at solving the problem. That first attempt was called Quorum, which has now been renamed GoQuorum. Quorum is an enterprise client which was actually a fork of Go Ethereum, and it was developed by JPMorgan Chase, which is an interesting place for this technology to come from: basically a fintech giant based in New York. Those folks worked in earnest to add additional consensus algorithms, breaking new ground. The big thing that they did, which really changed the way people think a little bit, is that they added a private enclave mechanism which allows people to share private transactions on the ledger.
So sometimes you would send public transactions, where the whole transaction is available to anyone participating in the network. And sometimes you would choose instead to share only with some members of the consortium, which you would encode as part of the private transaction. Those folks would get a copy of the content of the transaction directly through the private enclave, which would gossip the data point to point, right? So there would be no middleman, no actors in the middle passing around those pieces of data. All the data would be encrypted at all times and be safe and secure in the vault. There's a question here from Nidhi. Yes, of course: Geth only supports public transactions, absolutely. There is no notion of a private state with Geth. There's no notion of a private enclave running alongside Geth. This has been added by GoQuorum. The way GoQuorum does the job is by adding a new component to the Ethereum client, which runs separately in its own process and interacts with GoQuorum, this modified Geth, over sockets, right? And so they would be securely sending data between those two processes. Initially this private enclave was built in Haskell and named Constellation. It was eventually retired in favor of Tessera, which is a private enclave built in Java with a much more versatile set of tooling, in terms of support for relational databases and all the places where you could encrypt and store this data in a safe manner. This was eventually borrowed a little bit by Pantheon slash Besu, to make it possible to also interact with Tessera and remove the need for GoQuorum. The problem with GoQuorum is kind of twofold. First, it's a fork of Geth. And there are a lot of forks of Geth out there, lots of projects which just take Geth for a spin, change it, and then go their own way. No problem with that, but then you have to pick up all the fixes, all the upstream fixes from Geth. And I've seen it up close.
It takes quite a bit of work to reconcile your own changes with what Geth is doing, right? If you're not too careful, and folks were definitely being careful at JPMC, you end up spending quite a bit of time applying patches back. The other issue was that Geth is licensed under the LGPL, which is kind of a no-no for enterprises, because you may want to have a private version, for example a patched version, a well-supported version, or a commercial version of your software that you would eventually be able to support. This is just what the enterprise requires to reduce risk. There's another question here: any idea whether Besu may be able to run on top of notarals slash Radix for massive scalability to come? I have no idea about this. You'd need to tell me a lot more. Sorry, I'm just being honest here; I don't know what Radix is. So now we're going to talk about Hyperledger Besu. Hyperledger Besu is a contribution by Consensys, and you just heard from Matt, and I used to work there. So, thanks Consensys for this. This was a massive contribution to the Hyperledger ecosystem back in 2019. It was named Pantheon if you were around at the time; I feel ancient mentioning this now. And it's a Java-based mainnet client for Ethereum. In a nutshell, that's how it was sold, right? We're doing this in Java, which is well-known to enterprise developers; it removes a lot of the weird uncertainty around how we actually work; and it's going to support all the enterprise requirements that I alluded to earlier. Cool. So it's part of Hyperledger, and Hyperledger is not just Hyperledger Besu. Just for your curiosity, if you've never really taken a tour of what Hyperledger is, it's got a number of very interesting distributed ledger projects that I recommend you check out. Some of them have been around for a while, such as Fabric, which offers permissioned, enterprise-grade ledgers.
It was kind of the grandfather in the space and one of the first Hyperledger projects, if I remember correctly. And then you can see all of them have a different type of use case, or a different type of thinking around how they contribute to the ecosystem. The Hyperledger Foundation itself is, I believe, part of the Linux Foundation, so it's corporate-backed there; some member companies are helping, and you can also participate individually as a contributor. If you need to know more, if you want to know more, hyperledger.org is a good place to start. Okay. So any Ethereum client, at a high level, is actually not a client, right? Or you could think of it as a client, like a mainframe client, right? But really it's going to work like a server, a peer-to-peer agent. It runs as a single process. It's supposed to be completely independent, meaning that if you have a good Ethereum client and you've taken the time to sync it and it's got all the data from the network, it can perform all sorts of exchanges: sending transactions, submitting transactions, gossiping, getting all the data, sharing it with others, and you can in turn serve the chain. So you can actually get all the data you want and really pick up with the latest, right? And this is an important point: it's not just Besu, all the Ethereum clients kind of behave the same here, with the idea, way back when, from Geth and other clients, that you would just need to run this on your laptop and it would give you the ability to interface and interact with the blockchain at large. And well, because of that, it's a complex software stack. Just looking at the open ports, without diving too deep inside the Besu codebase, you can see that we have different types of connections that come out. One is the devp2p port, which is going to be over TCP and UDP, 30303 by default.
You have the ability to expose RPC over HTTP on 8545, or WebSocket on 8546 by default. Besu also natively supports Stratum over port 8008, and it can connect to an ethstats server to report stats over time directly. So we're dealing with complexity off the bat, right? This is no longer the age of the simple PHP-plus-MySQL application. We're talking about something that is heavily threaded, with components all over the place. Well, to add to our misery, this is also a database, because now we're storing all those blocks. And when I say we store blocks, we store them in RocksDB file storage, on the file system next to the Besu client, so we can rehydrate and reread that data quickly when we start again, basically. And as you can see when you look at the structure of a block, this illustration is actually from Lucas Saldanha. I tip my hat to him for his hard work back in 2018, when he was explaining the yellow paper. I find these illustrations very useful, and this isn't the first time I use them. We see that when we store a particular block, we're going to store the block itself, especially its header. We're going to store the transactions that were made part of this block and are organized in a trie as part of the block itself. So the transactions are here. And we're going to store also the receipts of the execution of all those transactions as we go. Those transactions will impact the world state. The world state itself is made of accounts, which contain your nonce, your balance, your code if you happen to be a contract, and maybe your storage. Your storage itself needs to be actually stored. So all those things are stored in separate files, are indexed as much as possible, and are pruned over time if possible, to avoid blowing up your disk by storing all the intermediate states of all the blocks since genesis. Any questions about that? Okay, let's keep moving. There's also a transaction pool.
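Before we get to the transaction pool, the block layout just described can be sketched as nested data structures. This is a simplified, illustrative model of the yellow paper's layout, not Besu's actual Java classes; all the field values below are made up.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BlockHeader:
    parent_hash: str        # hash of the previous block's header
    state_root: str         # root of the world-state trie after this block
    transactions_root: str  # root of the trie holding this block's transactions
    receipts_root: str      # root of the trie holding the execution receipts
    number: int             # block height

@dataclass
class Account:
    nonce: int         # number of transactions sent from this account
    balance: int       # balance in wei
    code_hash: str     # hash of the contract code, if this is a contract
    storage_root: str  # root of this account's own storage trie

@dataclass
class Block:
    header: BlockHeader
    transactions: List[dict] = field(default_factory=list)
    receipts: List[dict] = field(default_factory=list)

# A toy genesis block: height 0, no transactions, no receipts.
genesis = Block(BlockHeader("0x00", "0xaa", "0xbb", "0xcc", 0))
print(genesis.header.number)  # 0
```

The point of the sketch is the nesting: transactions and receipts hang off the block, while accounts (and their own storage tries) live in the world state referenced by `state_root`.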
So when you receive transactions, they float around in a nice little pool. At first, the only thing we want to make sure is that they're valid. But very quickly, we're going to want to sort them and order them into a possible block. To do that, we apply an algorithm that sorts the transactions according to the value that they represent: how much in fees is going to be brought in by actually minting this block, right? As they're being ordered, we can also see actors, let's say malevolent ones, who might want to order them in a specific way, or insert their own transaction before yours, so they can play with what we call MEV. Miner... oh my God, I'm blanking. Please Google MEV for me. That's what happens when I do a presentation at 8 a.m. So this type of behavior is actually part of an Ethereum client. If you can't create blocks, then really, you're not participating actively in the network, right? There are two types of transactions that can happen. Some can be local, meaning they come in through an RPC call. Some of them are gossiped to you; they're coming from a remote location. So they are actually going to be participating with you in creating the consensus, but that also means that other miners out there are trying to mint a block that may contain those same transactions. So if they do, then your block is no longer valid, meaning that every three or four seconds, you have to recreate and swap out the block that you were trying to mine for a new one, to try to keep up with the tide. And MEV stands for miner extractable value, right? Sorry about that. Okay. So a few more things that it does. Besu is part of a network for Ethereum, meaning that it has to know its own truth before it starts to connect to others. And that means it requires configuration. So it's going to have its own genesis block, which is going to be stored in a JSON file. We're going to take a look at that.
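As a preview of that genesis file, here is a minimal sketch of what one can contain, built as a Python dict and serialized to JSON. The values are illustrative (a made-up chain ID, an example pre-funded account), not a production configuration; check the Besu documentation for the full schema.

```python
import json

# Illustrative genesis file for a small private network using Clique.
# Every value here is an example, not a recommended setting.
genesis = {
    "config": {
        "chainId": 2022,            # private network identifier (made up)
        "berlinBlock": 0,           # enable the Berlin fork from block 0
        "clique": {                 # consensus engine section
            "blockperiodseconds": 5,
            "epochlength": 30000,
        },
    },
    "difficulty": "0x1",
    "gasLimit": "0x1fffffffffffff",
    "alloc": {                      # accounts pre-funded at genesis
        "fe3b557e8fb62b89f4916b721be55ceb828dbd73": {
            "balance": "0xad78ebc5ac6200000"
        },
    },
}

# Write it out the way you would feed it to a node at startup.
text = json.dumps(genesis, indent=2)
print(text.splitlines()[0])
```

Note that a real Clique genesis also encodes the initial signers (in `extraData`), which is exactly the point made later about signer identities living in the genesis block; that part is omitted here for brevity.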
It's going to have its own consensus engine, which is usually defined in the genesis block itself. And it's going to also hold on to bootnodes. Bootnodes are specific nodes that tell you: if you connect to me, I'll give you peers that talk the same language, or are part of the same network as you. Without a bootnode, you don't know who to connect to; therefore, you cannot find other peers and cannot participate in a peer-to-peer network. And that gets amplified through the discovery system that is built into Besu. Besu has the ability to find other nodes using UDP-based messages by default. So first you connect to bootnodes, and you ask for what nodes they may know based on your own identity. They will give you a selection of nodes that you will then be able to connect to as well, and so on and so forth until you've reached a certain limit. Those peers get stored in specific buckets. We use a Kademlia hash table that allows us to sort the peers that we connect to into separate structures, so we don't only get peers that look the same. This is important because if you don't pay attention to that fact, you can actually suffer an eclipse attack: you only connect to a few peers, which may feed you bad data, bad blocks, right? There's also a new discovery mechanism now using DNS. What it does is scrape an existing bootnode, get all the peers it knows about, and store them in TXT records in a cryptographically verified structure on the internet. The root record is going to tell you, hey, go talk to those other TXT records, and you keep recursively going down the tree until you've discovered and read the information about all the peers which are part of the tree itself. It's really easy to download, because it works with any DNS server out there, and it's really easy to check its integrity, knowing the root hash of that particular tree.
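Stepping back to those Kademlia buckets for a moment: the core idea is that peers are sorted by the XOR distance between node IDs, so your peer table spans many "neighborhoods" instead of one. Here is a toy sketch of the bucket computation; the node IDs are made-up hashes of labels, not real public keys, and real implementations add bucket size limits and eviction rules.

```python
import hashlib

def bucket_index(local_id: bytes, peer_id: bytes) -> int:
    """Kademlia-style bucket for a peer: the position of the highest set bit
    in the XOR distance between the two node IDs (-1 means same ID)."""
    distance = int.from_bytes(local_id, "big") ^ int.from_bytes(peer_id, "big")
    return distance.bit_length() - 1

# Toy 32-byte IDs derived from labels; real IDs come from node public keys.
me = hashlib.sha256(b"local-node").digest()
peers = [hashlib.sha256(b"peer-%d" % i).digest() for i in range(4)]

for p in peers:
    print(bucket_index(me, p))
```

Because each bucket holds only a few peers at a given distance range, an attacker cannot easily fill your whole table with lookalike nodes, which is the eclipse-attack mitigation mentioned above.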
If you don't like DNS discovery, and there are good reasons not to like it, especially in some enterprise settings, you may use static peering. In that case, you set which peers you want to connect to in your configuration, and Besu will dutifully try them until the end of time. So here, what's interesting is to look at exactly what that looks like. Here is an enode at the bottom, and you can see it's a URI with the scheme enode. Then it's got this identity string here, which is a long hexadecimal string. What this actually is, is a SECP256K1 public key represented in hexadecimal format, followed by the host, then the port to connect to, the TCP port, and then the discovery port, which on purpose here has been made different, to kind of give you an idea of what that looks like. The public key is supremely important, and let me explain why by going to the next slide. When you connect to another peer, you're going to use devp2p, which is going to allow you to connect to that other peer by encrypting the messages using the public key. The first thing you do when you connect to another peer is send a hello message which says, hey, I speak the following sub-protocols, and here is my identity as well. Then the other peer will be able to tell you whether it supports the languages you speak, and it will allow you to negotiate sub-protocols such as eth. In the past, the not-so-distant past, there were other sub-protocols such as Whisper. If you use IBFT consensus, it will also use specific messages dedicated to that type of consensus. So you will have different sub-protocols for all of those. I didn't mention it, but there's also the Light Ethereum Subprotocol, LES, which is something that is supported out there but is not supported by Besu, for example.
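The enode URI format described above is regular enough that standard URL parsing pulls it apart. A sketch, with an entirely made-up enode (the node ID is just "ab" repeated, not a real public key):

```python
from urllib.parse import urlparse, parse_qs

# Made-up enode: 128 hex chars of public key, host, TCP port, and a
# separate discovery (UDP) port passed as a query parameter.
enode = "enode://" + "ab" * 64 + "@192.0.2.10:30303?discport=30301"

parsed = urlparse(enode)
node_id = parsed.username              # hex-encoded SECP256K1 public key
host = parsed.hostname                 # where to reach the peer
tcp_port = parsed.port                 # the devp2p TCP port
discport = int(parse_qs(parsed.query)["discport"][0])  # UDP discovery port

print(host, tcp_port, discport)  # 192.0.2.10 30303 30301
```

When `discport` is absent, the discovery port is the same as the TCP port, which is why the slide deliberately shows one where they differ.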
So when that sub-protocol negotiation happens, the eth sub-protocol is picked, if it's picked and we have compatible versions, because there are different versions here and different offsets in terms of messages. Then the status message is exchanged, and we check that we're actually on the same chain. If we find out that we are on the same chain, and that we have all the information in place and we're good to go, then we can actually start working together. Okay, a question here: what are the benefits of Besu when compared to Quorum? These are interoperable, so they work together. The main reasons you might want to move to Besu are, to name a couple, better deployment options, or maybe the fact that its Apache license gives you a bit of peace of mind. And yes, there is a large community around Besu itself. The Hyperledger Besu lifecycle. So you've started your client, now what? Given everything I've kind of teased to you so far, I think you'd already be able to create a diagram like this. First, you start with your initial state. You come up on the network, you connect to others, you want to get new blocks: what did I miss? You reach the head of the canonical chain. By knowing this, you've pretty much been told by everyone else that they are at the same block as you are, as far as you can tell. If you've not been eclipsed, if you have not been attacked by bad actors feeding you bad data, well, you can optimistically trust the network and say, okay, I've reached the latest block of the chain that is currently known. At that time, you may start minting new blocks, because you feel confident enough that your block might be accepted as the next block of the blockchain. And either you mint, or you participate in gossiping new blocks, because new blocks will come through your network by connecting to other peers. And as you do that, you start having the need to sync again, connecting to other peers and getting new blocks.
So it's never like you start and then you can just serve requests. It's more of an adaptive cycle, where you have to work with the complex reality of other peers that may themselves be out of sync, while minting your own blocks in the middle of a very chaotic environment. That's the default. That's what happens, especially on proof-of-work networks. Okay, that's also part of the consensus. So this is an actual dashboard that we use, one that we built as a proof of concept, where we wanted to show a consortium and its health across different actors, all working together, to try to understand whether they're all okay or not: reporting the health at a larger level. When you work on consensus, by default, if you use proof of work, which is the default right now for mainnet, any one of the members of this organization may provide new blocks by minting them, by actually spending the time to solve the cryptographic puzzle of Ethash to build a new block. But most of the time, when you're in a permissioned consortium like this one, you would want to use either Clique or IBFT. Clique is used by some Ethereum testnets. And what it says is that a number of those nodes have been designated as signers and can create blocks, and you're going to have to believe them when you see their blocks. As part of the genesis block, we actually encode the identities of the nodes that are able to create blocks, and we go with that particular set of nodes. That can be changed later, but it requires a consensus change using a Clique-specific method. IBFT is also possible. In that case, you select a number of proposers around the network. These nodes are special, too: they can propose blocks, and then they vote on each other, which reduces the possibility that one of the proposers has gone rogue, and allows for better security. The downside, of course, is that now you need to keep more nodes around to propose blocks.
And finally, in proof of stake, which is going to come with the Merge, it's no longer up to the execution layer to propose blocks; on the consensus layer, we'll see that block construction is driven from outside the realm of the nodes executing those blocks themselves. So this is all part of the consensus. You can't just have Besu on its own, it gets very boring, and as part of that, you need to pick the right consensus algorithm that works for you. Oh, finally, there's an RPC server. There are four different ways to interact with that RPC server. Here is a very simple method, where you always post a JSON blob to the server. You say it's JSON-RPC 2.0, you give it an ID, so the server knows what to reply with and you can correlate requests and responses, and then you pass in a method and as many parameters as you want. If you do this over HTTP, it's kind of interesting: you can actually batch requests by having multiple of those objects together in one request, and that's used by wallets such as MetaMask. If you want, you can also use WebSocket, which adopts the same type of payload, but it's a live connection, so you can actually get data back using subscriptions, staying up to date on, let's say, a new event, a new block that was pushed to the client, so now you get that in real time. It's more expensive, but people really like that responsiveness. The inter-process communication, or file socket, approach is kind of the same as HTTP, but now, instead of serving it over HTTP on an open port on your machine, you're just pointing to a place in your file system where you're going to create a socket where folks can interact with you. It's really useful when you, for example, would like to have admin capabilities against your client and do things like removing blocks, reordering things, changing your consensus algorithm, and that is usually done using geth attach, which is a kind of client that connects to your client.
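Circling back to the JSON-RPC envelope described above, here is a sketch of building a single request and an HTTP batch. The payloads follow the JSON-RPC 2.0 shape; nothing is actually sent here, and a batch is expressed as a JSON array of request objects. The method names are standard Ethereum RPC methods.

```python
import json
from itertools import count

_ids = count(1)  # monotonically increasing ids to correlate replies

def rpc_request(method, *params):
    """Build one JSON-RPC 2.0 request object."""
    return {"jsonrpc": "2.0", "id": next(_ids),
            "method": method, "params": list(params)}

# A single call: ask the node for its latest block number.
single = rpc_request("eth_blockNumber")

# A batch over HTTP: several request objects in one JSON array.
batch = [
    rpc_request("eth_getBlockByNumber", "latest", False),
    rpc_request("eth_gasPrice"),
]

print(json.dumps(single))
print(json.dumps(batch))
# To actually send these, you would POST the JSON to a node's HTTP RPC
# endpoint, e.g. http://localhost:8545 for a locally running node.
```

The `id` field is what lets a wallet like MetaMask fire off a batch and match each response back to its request, since responses in a batch may come back in any order.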
That IPC option is the most secure of the bunch, and it was added to Besu back in April, thanks to Diego. Finally, you have the ability to use GraphQL. GraphQL allows you to make deep queries into the Besu state, so you can say things like, hey, I'd like to see what's stored in that particular contract at slot number three, so you could dump, for example, the content of all the tokens, their balances, who they're assigned to. This is useful when you're trying to really understand what's on chain. It's more specialized, more of a custom use case for what you're trying to do. Okay, so in recap, we now have an Ethereum client, and it's doing four things: it's a database, it's a peer-to-peer agent, it's a queue system, and it's an API. But this thing is massive. It's got a bunch of kinks all over the place, and we're going to dive into that right away. Any questions before I continue? I'm going to respond to the question that's on Discord by saying that, hey, we forgot one more thing: the EVM. Ethereum is special because of the EVM, but where does that play out? It actually plays out everywhere, which explains why you're seeing all those four things coming together and playing with each other. It's used to validate blocks. It's used to update the world state, because when you execute a transaction, that's when you know which accounts should be changed, which balances should be changed, which storage should be changed. All those things come out of the execution in the EVM. And of course it's used when you want to create your own blocks, by executing transactions so that you can actually understand whether each transaction is valid, whether it's failing or not, et cetera, et cetera. So the question on Discord is: does Besu support other Ethereum-compatible EVMs? Besu has its own EVM built in Java. We're about to dive into it as part of this workshop. You're in for a ride.
And you'll see that it's not possible to easily swap in another EVM, because all the EVMs are very specific constructs. If you're interested in the EVM as a library, there is the concept of EVMC, which is a binding based in C built by the Ethereum Foundation folks, especially Paweł and Alex, who are working on trying to build a more modular EVM layer that could be dropped into any Ethereum client. You have a question here. Hi, what is the use of the API in blockchain Besu? And does the cloud get involved before sending data to the DLT? Well, yes, you actually call this API quite often: whenever you ask for the balance of an account, the latest block number, the miner of a particular block. I want to see the receipt of that transaction. I want to deploy a contract. All those things come through talking to an API, the JSON-RPC API, right? And for example, at Splunk, the way we've been using these APIs is we actually pull every block. We pull every transaction in that block. We pull all the receipts of all those transactions in all those blocks. And we capture all this information into analytical tools, which allows us to create dashboards and meaningful visualizations of what's happening on chain as we go. So does the cloud involve itself? That's a loaded question, because for example, third-party providers such as Infura, Alchemy, Ankr, I forget a lot of them, are cloud-based and will have a free tier which you can use to connect to Ethereum nodes which are going to be running in the back, right? And they work on providing all this information to you, and they cache it heavily, because it's pretty expensive: you end up calling this endpoint, for example to see the latest block number, millions of times. If you want to know more about that, there are a number of Infura presentations on the web where they talk about how they have to scale and cache to make it possible for people to connect effortlessly to Infura. Yes, thanks for the love for the logo. The logo is not from me.
It's actually from Boris Mann, who built a conference called Run EVM back in 2018 that took place in Berlin. And I think it was a very interesting crystallization of the moment, where people were trying to understand: are we going to standardize on the EVM, or are we going to go and build alone? And the answer has been crystal clear so far: the EVM is still being used as a standard layer for execution. Okay. Now, if you want to ask me more questions, it's a good time. I'm just going to catch my breath here. No questions? Great. Let's keep going. So, so far, I've lulled you into a nice daze listening to my dreamy voice about how Besu comes together. Now, let's get real. We need to configure this thing. Oh, wait, wait, I have a question. Is there an upgrade path from the current EVM to the latest EVM? Oh boy. This is not just the EVM. So, the way the EVM is going to change over time is through what are called hard forks, where the whole set of not just the EVM but how the blockchain works is going to change, right? So, we're going to say, okay, we have a new opcode, or we have an existing opcode that's going to change its gas price to reflect the reality of how much it costs to execute, for example, and those changes are really widespread. In the next session, I'm showing how these are put together inside Besu, and you can see that nothing is safe, right? From transaction signing to how we reward blocks, everything there is changeable. So, the upgrade path is in your genesis file. You tell it, hey, on that block number, change completely the way Besu works to adopt this new behavior, from that EVM to that EVM. Can you explain about static peering? It would be my pleasure to explain about static peering. How about I go back to that slide? So, I explained a little bit here that there are different ways by which you find all the peers on the network.
And some of the time, you're working on a public network and you cannot by yourself find all the peers. You have a first step of discovery where you connect to what is called a boot node server, which is going to hold as much node information as possible and get you hooked up to, say, the phone system, right? And then you can call everybody else and start creating your own set of peers. But sometimes that's not what you want to be doing, because you know very well that you're in a permissioned environment and you don't want to connect to some anonymous peer on the internet and start sending them blocks, especially if you work with enterprises. So instead, what you would do is configure Besu to connect to a select few peers by giving it their complete information instead of having to go through discovery. So that would be static peering. Can I get a quick explanation of Besu and Rinkeby? Rinkeby is a testnet, and it's been running for years. Besu can run on Rinkeby. Rinkeby has a genesis file that tells you exactly what type of consensus is being used. How about I go and do this for you? Let me stop sharing for a sec, and then I'm gonna share my whole desktop for a minute. And then we can go here, command shift R, rinkeby you said, huh? rinkeby.json, here is your Rinkeby file. Here is everything you need to know about it. The chain ID is four. It started with the Homestead block at one, which is interesting. It's currently up to the London hard fork, at this and that particular block. It's currently using Clique with a block every 15 seconds and an epoch of 30,000. It's using those boot nodes. And here's information about the first block: nonce, timestamp, extra data, gas limit, and so on.
The extra data itself here is meaningful, because it contains Clique-based consensus information, which is going to be useful to know which nodes are allowed to sign blocks. And then at the bottom, you have the allocation of who got money, and how much, right? And this is built into the genesis block, so you can have people who have money initially when the chain starts. Otherwise nobody has any money, and the only way to make money might be to mine to get rewards. So that's how Rinkeby works. And to answer the question in the chat as well, about how you would go about changing the EVM version: you go in here and you say, hey, we're going to have to update our genesis block. We're going to add one more hard fork, and you add the hard fork and the block number at which it needs to take place. And that's when you upgrade your EVM. All right. So I'm really glad I got those great questions. I'm really thankful for you taking the time. Thank you for asking me those questions. Keep them coming. Oh, got one more. Oh, thank you. All right. So let's get real. I mean, this question is actually great because it's teeing me up a little bit for the configuration discussion. So, the Hyperledger Besu configuration: frankly, there's nothing I could do that would not look like a screenshot of our excellent documentation. Hyperledger Besu has one of the best documentation teams on the internet, I guess it is. And if you go and look at our Besu website, it's extremely versatile. It's really going to give you everything you need to know; all the options are documented there. The only thing I can do for you today, which is going to help you a little bit in bootstrapping yourself as you try to run this, is go over which of those options are important.
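As a sketch of what that genesis edit looks like (the chain ID, block numbers, and fork choice below are made up for illustration, not Rinkeby's real values), adding one more hard fork means adding one more fork-block field to the `config` section:

```json
{
  "config": {
    "chainId": 2022,
    "homesteadBlock": 1,
    "berlinBlock": 100,
    "clique": { "period": 15, "epoch": 30000 }
  }
}
```

Here the new line is `"berlinBlock": 100`: from block 100 onward, every node reading this genesis file switches to the Berlin rules, including any EVM changes that fork carries.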
The thing that you need to pay attention to as well is that when you look at this configuration, there are three different ways for you to specify options. Let me see if I can go back here. So there are three ways: command line arguments, environment variables, or a config file. And that's the order of priority. So if you're asking yourself, the config file that I inherited from this particular setup says this, but it's being overridden, it might be because you have an environment variable that takes over, or you may want to use a command line argument to override it as well. So it's in that order. There's a little bit of a rule here: you'll see me use command line arguments throughout the discussion, but they can be very easily translated to environment variables. What you do is remove the leading dashes, any dash in there is replaced by an underscore, and there's a BESU underscore prefix on every one of those options. So when I say rpc-http-enabled, the environment variable would be BESU in caps, underscore, RPC in caps, underscore, HTTP in caps, underscore, ENABLED, right? Same thing. It's just that for environment variables, dashes won't work, so we're using underscores instead. And then there's the configuration file, which is written in TOML, which is a well-known format by now. Okay, so let's go over what's important. What should we care about? First options. The network option is the most important one. We just saw Rinkeby, and in this example, I'm using dev and ropsten. So you can set a network, and right away, if it's a well-known network, Besu is going to be able to take that configuration: the genesis file, the boot nodes, the consensus engine. It will know exactly what to do. You don't need to do more than that, right? The next option, which is really important, is where the data is going to live.
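The renaming rule is mechanical, so here is a tiny shell illustration of it; the flag name `rpc-http-enabled` is real, but the script itself is just a demonstration of the rule, not something Besu ships:

```shell
# Translate a Besu CLI flag into its environment-variable form:
# drop the leading dashes, uppercase everything, turn dashes into
# underscores, and add the BESU_ prefix.
flag="rpc-http-enabled"                       # as in --rpc-http-enabled
env_name="BESU_$(echo "$flag" | tr 'a-z-' 'A-Z_')"
echo "$env_name"                              # BESU_RPC_HTTP_ENABLED
```

So `export BESU_RPC_HTTP_ENABLED=true` is equivalent to passing `--rpc-http-enabled` on the command line, just at a lower priority.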
You don't get to choose where each of those databases is going to live; you just give it a folder. In that folder, it's going to be self-sufficient. It will migrate the data if need be. So you can use the --data-path command line argument to set where it should live. By default, if you don't say anything, it tends to not live in the working directory, but in the parent of the bin directory from which Besu runs. So usually when you unzip Besu, it's bin/besu, so the data is going to live next to the bin folder, which might not be what you want to be doing. It's good enough for development and testing though, if you're just playing around. The peer-to-peer options are important because you want to expose peer-to-peer to the rest of the internet. By default, it's going to listen on localhost, which does not enable you to get connections from the internet, which might not be what you want, right? And by default, the port is 30303. That's been the port for Ethereum for a long time. And discovery, which I just talked about quite a bit at length, right? If you want it enabled, you can say it's true. And then you can say --bootnodes and use a comma-separated set of boot nodes here to connect to them. So these are the first options. Without those, it becomes really difficult for you to run Besu; usually you can grab onto these. Here's one that trips everyone, me included. When you start Besu and you forget to put this in: if you want the JSON-RPC, it's not enabled by default. Guess what? That happens to me every single time I run Besu, I always forget this thing. So open it up with --rpc-http-enabled. If you want WebSockets, then it's WS instead of HTTP, --rpc-ws-enabled, right? And then there's another gotcha.
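Putting those first options together, a minimal config file might look like this; it's TOML, as noted above, and the values, paths, and the enode placeholder are examples I made up, not Besu defaults:

```toml
# config.toml, passed to Besu with: besu --config-file=config.toml
network = "dev"
data-path = "/var/lib/besu"

# peer-to-peer
p2p-host = "0.0.0.0"
p2p-port = 30303
discovery-enabled = true
bootnodes = ["enode://<node-public-key>@10.0.0.5:30303"]

# the gotcha: JSON-RPC is off unless you ask for it
rpc-http-enabled = true
```

The key names mirror the command line flags with the leading dashes removed, so anything you see as `--foo-bar` on the CLI should translate to `foo-bar = ...` here.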
By default, the RPC HTTP is going to only support and offer a subset of what is possible, because some of the methods can actually change and alter the chain itself, right? So if you were to expose the ADMIN set of JSON-RPC methods, you may have bad surprises down the road, where an attacker taking over your JSON-RPC port might be sending messages saying, hey, forget every block ever mined or ever collected by this client. So the default is ETH, NET, WEB3. WEB3 is just a little bit of metadata about your client, like the client version. NET gives people the ability to ask for how many peers you have, for example. And ETH is the things you would want to be able to query, such as account balances and transactions, calling a smart contract, things like that. If you want the spec for the JSON-RPC, there's a set of methods here. Let's take a look at that real quick. And you'll see all this information here; this is an Ethereum Foundation effort on execution APIs. I believe this is still in progress. Right now you can do things like eth_getBlockByHash, by number, you can get the chain ID; for example, Rinkeby's is four. The coinbase is the miner who gets the money. And all those things that you would want to be able to get to. So there's just a ton of documentation on that. No questions? Okay. Hidden flags. You will not know this if you don't work on Besu, but hey, we've hidden a bunch of things in there, and it allows us to do all sorts of dirty tricks to you. Ha ha. The hidden flags are for when we don't know whether we want to really expose something as an actual functionality, or when the developer working on the feature was kind of unsure about the best default. So we're going to guess at what the best defaults are, but you know, we may want to change that during testing, or maybe in some production settings it may not make sense to use the default.
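For illustration, widening the exposed API set looks like this; treat it as a sketch, and remember that exposing ADMIN over an open port is exactly the risk just described:

```shell
# Only do this on a trusted, firewalled host: ADMIN methods can
# alter the node itself, not just read from it.
besu --network=dev \
     --rpc-http-enabled \
     --rpc-http-api=ETH,NET,WEB3,ADMIN
```

Leaving `--rpc-http-api` unset keeps you on the safe ETH, NET, WEB3 default.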
A good example here: these are options for the Ethereum wire protocol. By default, when we ask for bodies, we ask for 128 bodies at a time. That might be too much in some settings because you're going over the limit of what this type of query should be. So maybe you want more, maybe you want a lot less, right? So by default, we don't expose a number of options. And you'll see in our developer workshop that we actually build one of those unstable options. The only rule is they start with --X. So now, to the good stuff. Ways to run Besu. Go ahead and go to GitHub; you can download Besu right from there, and you can check the SHA-256 of the download so you're not being abused. You can also install it with Homebrew if you happen to be running a Mac. I believe there might be a Windows version of this, with Chocolatey; I don't know if it's still maintained, and that's why I didn't put it here. And finally, a very popular option has been to use it with Docker. So just docker pull hyperledger/besu. From source, once you download Besu, it's its own source code; you can do a ./gradlew assemble. In terms of OS support, Besu has a number of native components, especially around crypto, to make it faster. Because otherwise, if you just use Java for everything related to adding big numbers together, signing, and stuff like that, you may actually not get adequate performance. So by default, there's a besu-native GitHub repository which hosts a number of Rust-based or C-based libraries that are compiled natively and embedded into Besu itself. They support x86; I believe we're close to ARM support, especially M1, which I would welcome, I got a new M1-based machine. So that is going to eventually be in the realm of possibilities soon enough, but x86 has been supported forever. And that means it works, I believe, on Windows, and I believe it also works on Linux.
There's a fallback if the native dependencies don't work, which will use pure Java instead of native crypto, but it's not as fast. Okay, the advanced options. So far, I've given you the easy ones. Let's rip the band-aid off right away. Sometimes you just want to go and really own your destiny. Say you want to set your own genesis file because you just created your own consortium: --genesis-file allows you to point to a JSON file on your file system and say, this is the truth, this is how I'm going to run my network. When we talk about RPC and security and enterprise requirements, some of this actually comes from the EEA standard. So you want to say which host you're going to bind to; by default we bind to localhost, which may not be where you want it to be bound. Maybe you want to expose it to the internet, which comes with risks. The CORS origins: really important. They can trip you up if you're trying to connect through a web browser, for example, or if you're trying to get MetaMask to play with you. And then there are more interesting, hardened configuration possibilities, where you use a JWT token to authenticate yourself against Besu, where you use TLS client certificates for the connection, or where you use a password with basic authentication to connect to your RPC. These are specific to Besu, actually; I don't believe that other clients have this type of functionality built in as much. Another one that's disregarded but comes in really helpful when you're in prod is enabling your metrics. You can enable metrics using Prometheus or OpenTelemetry; that's the protocol bit here, so you can say equals prometheus or equals opentelemetry. And you want to also set the port and the host to say where you're exposing those metrics so they can be scraped; I don't remember the default port. You can also decide to push those metrics using Prometheus.
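A sketch of enabling metrics, then; the flag names follow the Besu docs, while the host and port values here are illustrative choices rather than defaults:

```shell
# Expose a scrape endpoint for a Prometheus server; swap the protocol
# value to opentelemetry if that's your collector.
besu --metrics-enabled \
     --metrics-protocol=prometheus \
     --metrics-host=0.0.0.0 \
     --metrics-port=9545
```

With that running, you point your Prometheus scrape config at that host and port and the node's internals show up as regular time series.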
If you are attempting to do proof-of-work mining with Besu, as a treat, you can enable mining. That would enable CPU-based mining, but you can also enable stratum-based mining, which then opens a port and allows miners to connect to you over the stratum protocol. And I believe Besu supports ethproxy and stratum1. So those two different types of protocols are supported; you can connect to them using your miner just pointing at the 8008 port by default, and then you will be able to mine blocks using Besu. So let's go into some exercises, right on top of the hour, which is great. We're going to start running Besu on our own network. If you haven't done so, I invite you to go and download Besu, or install it, or build it from source, and run it with those two options: --network=dev and --rpc-http-enabled. So what does it do when you say dev? Let's actually take a look. What that does is point us to a dev.json genesis file which is stored alongside Besu as part of its classpath. So when Besu runs, it finds the dev genesis file and is able to configure itself. And there are a number of things to really pay attention to as we go here. First is the chain ID, 1337. It's the default chain ID for that particular network, and it's useful because it happens to also be the Localhost 8545 chain ID in MetaMask. So we'll be able to very easily add ourselves to MetaMask here. Got a question. Oh: I've been trying to run gradlew build inside WSL2, the build has been going for a while and states only 1% done, is it expected behavior? It is not, but I have no idea what WSL2 is. Windows Subsystem for Linux? Oh, if you're in Linux, just build it in Linux. It's not expected. Okay, let's go back. So, what if I want to run my private network, what should the value of the network flag be? Then you can't use the network flag. The network flag only works for well-known networks.
So the well-known networks: we've got astor, calaveras, classic, dev, ecip1049_dev, goerli, kiln, kotti, mainnet, mordor, rinkeby, ropsten, sepolia. Awesome names. Probably meaningless to most of you, but those are the networks that you can use, right? And you definitely don't want to use those if you're going to build your own, because these are all public and shared. So yeah, Miles, please let us know if you're able to build it outside of the Windows subsystem. I'm interested to see what's possible here. So here, in the particular dev network that we're going to run, we are going to say that our first hard fork is the petersburgBlock hard fork at block zero. We're also setting some configuration: the contract size limit is quite big actually, I think it looks like it's two megabytes. And then we're going to run with a proof-of-work algorithm named Ethash, but we're doing a little bit of a twist on it: we're saying it's got a fixed difficulty of 100. Our first block has a nonce of 0x42, which is in hexadecimal, so it's not actually 42. A timestamp of zero, so the first of Jan 1970 at midnight, sure. The extradata is just 32 bytes, up to us. And it's got some gas limits and difficulty. All those things are kind of arbitrary because the genesis block is kind of made up as it is. And then it's allocating a number of addresses and accounts. And you can see that they have additional information that you may not see in other genesis files. The first thing is, this contains the private key, whoops, the private key of the account. And there's a comment saying the private key should be ignored; in a real chain, the private key should not be stored. You definitely do not want to be passing around the identity of actors of the chain alongside your genesis block configuration. But that's helpful for us, because we're going to be able to import these identities inside MetaMask. Another question. So I'll say one thing.
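From memory, and heavily simplified, the shape of that dev.json is roughly this; treat the exact field names and values as approximate, and check the file shipped with your own Besu version:

```json
{
  "config": {
    "chainId": 1337,
    "petersburgBlock": 0,
    "ethash": { "fixeddifficulty": 100 }
  },
  "nonce": "0x42",
  "timestamp": "0x0",
  "alloc": {
    "<account-address>": {
      "balance": "<hex wei amount>",
      "comment": "private key is listed here for dev use only"
    }
  }
}
```

The `fixeddifficulty` entry is the twist mentioned above: instead of Ethash difficulty adjusting over time, every block targets difficulty 100, so a laptop CPU can mine blocks quickly.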
If you use the build task, you're actually going to build and test everything: all the things, from unit tests, integration tests, acceptance tests. You might be better served by just doing the assemble task, which bypasses testing. Otherwise you're going to spend quite a bit of time waiting for the tests to run. Which is not a bad thing, you don't want to ship that code blindly for sure, but it also takes a while. Okay, so let's go back to our exercise. Let's see. I have right here a little bit of a terminal. I'm pretty sure it's unreadable. So I'm going to change my preferences, and I'm going to change my background. I'm going to make it as clear as possible. Maybe it's too much. Here we go, a little gray. How about that? Can you repeat about including the private key or not? Do not put your private keys anywhere. Keep them safe, you know? Do not share them. Do not share them ever, right? The only reason they're in the dev.json file is because the dev network is used for testing. Do not put real funds into those accounts. Do not transact on those accounts, even on mainnet, because this private key being known, people will sweep the money stored on those very accounts, right? So this is great for testing. It's great for development. If it leaves your computer, you lost the war. So we're going to try and run Besu now. So I happen to have, well, that's not good, okay. Okay, so I have Besu on my laptop. I have it built. I have it unzipped here. For the sake of building from scratch, we're going to unzip it again. Okay, so now I can go into it, and let's follow the exercise. So I do --network=dev and --rpc-http-enabled. Oh, no. Okay, that's what happens when you do demos. We're going to build again. So as we build, we are compiling all the code that needs to be compiled. Takes a little while. It's a classic issue where some of my code is for the next workshop.
And because of that, we're having a little bit of an issue, and I don't want to spoil the surprise by passing an argument to the session. So we're at 9%, Miles, I'm beating you already. That's great. Really close now. The percentages themselves are not actually worth paying attention to too much, because they are not based on the actual wait time. They're based on the amount of work, the number of tasks that Gradle has to perform. gradlew itself is a little wrapper around Gradle, which does the building, using Groovy and picking which options to pick from. As you can see, we're already at 57. Now we're building the tar.gz distribution. Now we're building the zip distribution with the same contents, and we're compiling the acceptance tests. And we're building an additional tool called the EVM tool as well. Okay, so, build distributions. You can see I still have this folder; remove that. We can unzip again. And I'm going to go into the folder, and then we'll run the command correctly. We had bin/besu --network=dev --rpc-http-enabled. All right, no existing database, we're just starting. We have a discovery agent on here. And you can see that our system is now up. It generated its keys; that's part of setting itself up. So it's got a public key here that it is advertising to the network, and it's saying, hey, I'm proudly exposing myself on 127.0.0.1:30303, which means that this node is pretty much unreachable at this time. That's great for what we're trying to achieve here. We're not trying to connect to others, but either way, it's still going to wait forever to see if other people are going to join. So the first thing I'm going to do is a curl over localhost 8545, with the application/json content type, and we're going to try to get the balance of this account. What is this account? This account is listed here, the second account here. So, what is the purpose of gradle build? Is the idea just to test that the build is ready to build?
You run gradle build when you want to not only just build; it's the default Gradle command, and it's used to build and test and run everything that could possibly be under the hood with Gradle, right? So it's not so much a Besu construct. It's actually something inherited from Gradle, which took the idea from Maven, where, as a Java tool, a build is a complete build. But if you're just trying to package, to get the zip file that contains the Besu stuff, then you would want to just run assemble in Gradle, or package in Maven. And it's been like that for a while. This is just a philosophy thing, part of the lifecycle of this software. Oops. So let's see if we can copy that. Okay, I'm exiting full screen so I can copy that text for you. And you can see we're using backslashes here, because we are using Linux-based systems with bash. And then we don't need to escape anything inside, because there are single quotes around the JSON content. So let's try. Okay, it looks good. All right, so we've got a result here, and that result is this huge hex number. How do I make sense of this? Let's see. I've got a little website here, it's at sales.dimachine.io, which I own. And if I put this hex into decimal, we see, whoa, we're filthy rich on this particular network. Good work everybody, we can just retire now. We have 90,000 ETH at our disposal. So, those slides are not quite ready yet, I'm sorry, I've been working on them till today. However, we have a Besu workshop page with all the instructions, and I believe I did a good job of pasting them here. Let me put them in the chat. Okay, we got two minutes. We're gonna try something more advanced. Let's go and actually add it in MetaMask, right? So for MetaMask, we now have to play with the security settings, because MetaMask is going to connect to us through a Chrome extension. If you look at this Chrome extension origin, it's extremely specific, which I think is a great idea.
And so we're going to say that we're going to allow MetaMask to connect to us via a CORS origin right here. Okay, so let's go. I can kill Besu just by pressing Control-C, right? You can see Besu stopping: it's stopping its network, it's stopping the synchronizer, it's being very nice. And now I can paste this additional set of instructions, which are a little nightmarish to look at, I'll admit, and Besu is back up. Okay, so I did a good job. I'm not locked out of my MetaMask, and I can share it with you and show you. Ooh, look at that. So I've done the work already, which is not a good way to do it, but I already have a Localhost 8545 network. By default, if you go to mainnet, you'll see that this Besu dev number two account is completely out of money, which is really, really unfortunate, but hey, it's okay. Let's go back to Localhost 8545. So if you scroll down, you can see all the well-known networks here, and at the bottom, you can see Localhost 8545. Click on it. You can see that your account may not have any money yet. So how do I go and add this account? Let's import a new account by entering its private key. Private key, you said? Yes: in the dev.json file that I shared earlier, sorry, Zoom is getting in my way, there was this notion of a private key, and I have not imported this account yet, so I can copy and paste this private key, right? Any questions in the chat? Okay, no questions in the chat. So now we're gonna go in here again. I'm going to click on this. I'm going to click on import account, private key, paste, import. Account number seven, I have this many accounts, has 90,000 ETH as well. And how did you do that? You did that by connecting to your Besu locally, getting all the information, and I can do all sorts of interesting things, such as sending money between the accounts, playing with the data itself. You've got nine accounts? That's just showing off. So of course you can have more than nine. So thanks for showing me.
And Tristan, I'm not sure you should disclose that you have that many accounts. I think you just placed a target on your back just now. So this is how you can play a little bit locally with Besu. Already now you can do development. You can deploy your own contracts, play with Besu locally, and do all sorts of things. Is that real, the ETH amount? The amount is real on this dev network. It's not good at all on anything else, right? Nidhi, we're gonna get right away to this network idea with Docker containers. We're gonna talk about that. So that's our next exercise. First we're going to agree on what we're going to do as a consortium to work together. So far, timing is great, we're right on time. So if you have questions and you want to drill into this thing a little bit, let me know. For our genesis file specification, we're going to go into a tutorial, which is not mine. I just wanna spend a few minutes for you to understand how we go about creating our own IBFT network. And the great thing here is that you have the ability to then read the docs as well and see exactly what's what. So the first thing that needs to be done here is to build a genesis file that's going to be specific to you and your enterprise, with your four nodes, for example, like we're trying to achieve here, right? So when you create the configuration file, you're going to give it information such as its chain ID, its block parameters, IBFT information, allocation of funds. And here, in this particular setting, it's also adding some custom things, such as the number of nodes and accounts to generate, right? You want the tutorial link here? Okay, no worries. You can follow from here. So all those actors are being placed in here in this particular configuration. Now, what I want you to pay attention to is that we're doing something here a little unnatural, which is adding extra information.
This information is not going to end up in the final genesis configuration file. It's only used because we're post-processing this JSON file into a genesis file. Wait, you're using Ubuntu? Interesting. Okay, Miles, for now, you can try to work with a download of Besu if you're trying to follow along here, or a Docker pull. I'm not sure how else to help you. Okay. I read that QBFT should be used over other consensus algorithms in enterprise environments; what is the advantage? Well, so IBFT has been around longer. It's been tried and true. QBFT has the immense advantage of being used by both GoQuorum and Besu, so you can actually swap clients and be able to continue making it work. QBFT does not, by default, give you private state in a number of cases. These are unrelated, yeah; well, actually, they are related: for the purpose of this, they look quite similar in terms of configuration, because in both cases you have to set up the signers of your node system. So the thing that's being used at this point is that Besu has a subcommand called generate-blockchain-config, taking a config file and producing network files. It's going to generate the genesis file, but it's also going to generate the accounts. And it's going to allow you to keep going like this. There are two more questions. Okay. So this then creates all those things. And this is important, I really need you to understand this. The genesis.json already exists here, right? All the information is already loaded. The only difference is that it does not have an extra data string. This extra data string is generated using this approach. And it contains the keys and information about all the four signers that we're going to allow going forward as part of the consensus algorithm of this particular network, right?
So the genesis.json now has an extraData field, which is the encoding of those four node identities using RLP, an encoding algorithm used by Ethereum. And the output directory has the key and the public key of each of the four accounts which will be allowed to sign blocks for each of the operators. So each of those nodes will want to have those identities so that they can operate as part of the network and sign things, sign blocks. Now, that's exactly what we're doing: we're copying the data into each. Sorry, go ahead. Okay. The data for each of those nodes is going to be copied into a different node directory, under a data folder where the key and key.pub files are loaded. And as you can see, you then run them with the genesis.json file, you point at the data folder next to it and you say, hey, make sure I pick up that key file, which lives at a default path inside that data directory. This is important to get. It's a little bit of Lego assembly, and it's a little painful because sometimes, you know, that's when we mess up. Speaking from experience, frankly. But if you get this way of building the genesis right, then everything else goes smoothly. And it's the same thing for QBFT. So I bear good news. I don't want you to do this right now, because I think it's quite involved and it's not a lot of fun, but I have a shortcut for you to get to the finished result. And if you're game, we can go to that right away. Unless you have questions. Okay, this tutorial is right here for you if you need it. Again, I'm coasting on our excellent docs, there's no question about that. Our docs team has been excellent at delivering great docs and tutorials for Besu. So instead of trying to recreate those tutorials, please follow them, they're really good. And if you have any questions, those docs are actually also open source.
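To demystify that extraData string a bit: RLP (Recursive Length Prefix) is Ethereum's serialization format, and for IBFT the validator identities are RLP-encoded into the genesis extraData. Here is a deliberately minimal sketch of the encoding rules for short items; the real extraData also carries vanity bytes, votes, and a round number, so this is for intuition only, not a replacement for the generated string:

```python
# Minimal RLP encoder sketch (short byte strings and lists only).
# Illustrates the kind of encoding behind Besu's genesis extraData;
# long-payload cases are deliberately elided.

def rlp_encode(item):
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item                      # a single small byte is its own encoding
        if len(item) < 56:
            return bytes([0x80 + len(item)]) + item
        raise NotImplementedError("long strings elided for brevity")
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        if len(payload) < 56:
            return bytes([0xC0 + len(payload)]) + payload
        raise NotImplementedError("long lists elided for brevity")
    raise TypeError("expected bytes or list")

# Classic RLP test vector: "dog" encodes as 0x83 'd' 'o' 'g'
assert rlp_encode(b"dog") == b"\x83dog"

# A list of 20-byte validator addresses gets a 0xC0-range list prefix:
validators = [bytes(20), bytes(20)]
encoded = rlp_encode(validators)
assert encoded[0] == 0xC0 + 42  # two 21-byte items (0x94 prefix + 20 bytes each)
```

The point is simply that extraData is deterministic, derived data: given the validator identities, anyone can reproduce the same string.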
You can open issues against the Besu docs repository, and you can also contribute to that repository and help it become more comprehensive over time. So, you know, ConsenSys has been a great actor in this ecosystem, and there's a way for you, instead of doing the grueling work of building your own network, to get a kind of quickstart approach to it. You can start with something sane and then customize it to your needs. It's got a number of defaults that it's going to try to pick for you. I'm going to paste this in the chat. If you're familiar with this command, you should pick up right away on npx. npx allows you to run the quorum-dev-quickstart node module directly, without having to install it or do anything with it. And the quorum-dev-quickstart npm module itself, let's take a look at it on npmjs.com, so you're not spooked about running some code on your laptop: it's right here. It's well maintained; it was published four days ago now. It's got some interesting things here. You can go to the GitHub repo and see that you're not about to run something from a bad actor. It's only going to generate files for you; it does not actually run anything after that. So it's also very lightweight, in the sense that it's not going to linger around on your computer. So let's kill Besu, this thing runs forever. Now we're going to go back to my little Besu workshop folder. In the Besu workshop, I have some instructions on how you run npx, but frankly, let's just do it. So I type npx quorum-dev-quickstart, and you can see it's downloading quorum-dev-quickstart from npmjs. And whoa, we got a prompt. All right. So we can create our own network. It's great. It's going to use both Docker and Docker Compose. And we get to pick which client we want to run: it can be Hyperledger Besu, it can be GoQuorum. Great, I want Besu. Would we like to try out some of the extra options? I'm going to skip those for now.
Would I love to have support for private transactions? Yes. Is there any logging that you want? We can support Splunk or Elasticsearch. I happen to work for Splunk, but today I want you to really spend the time on Besu itself, so: none. Do you want to enable support for monitoring your network with BlockScout? No. Where should we create this? By default, it's going to create it under the quorum-test-network folder. Sounds good to me. Generation complete. Now you can run it with run.sh. Wow, what kind of black magic is that, right? So that's what we did, and it shows right here. And it's got a number of things in here. Let's actually open it with Visual Studio Code. Okay, so we're in Visual Studio Code and we're looking at this brand-new folder that was just created. The first thing to look at is the Docker Compose file. The Docker Compose itself is using YAML anchors to avoid recreating the same content several times over and over, right? So it's going to define a Besu service, it's going to define one Tessera, and then it's going to apply that all over the place for all the Besus, for all the Tesseras, and so on and so forth. So it's got all this configuration in those anchors. So when you start Besu, if you're familiar at all, it says #!/bin/bash, copies the genesis config, and so on. So we would want to take a look at that, and then it runs Besu with the config file, and you can see our familiar RPC HTTP API here, and the RPC WS API here as well. So we're really opening up Besu to be used. The config file itself is under config. So let's go to config/besu/config.toml. And you can see all the configuration that has been created: we get all the RPC setups. We allow requests from any CORS origin, which is interesting. We have GraphQL enabled as well, we have metrics enabled, we have some permissions, and we have a boot node. The boot node has been generated for us over that IP.
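For reference, the generated config.toml is just Besu's normal TOML configuration. A stripped-down illustration of the kind of file you'll find there; the option names are real Besu options, but the values shown are examples, not necessarily the exact ones the quickstart writes:

```toml
genesis-file="/config/genesis.json"

# JSON-RPC over HTTP, open to any CORS origin (fine for a lab, risky elsewhere)
rpc-http-enabled=true
rpc-http-api=["ETH","NET","WEB3","ADMIN"]
rpc-http-cors-origins=["all"]

# WebSocket RPC, GraphQL, and Prometheus-style metrics
rpc-ws-enabled=true
graphql-http-enabled=true
metrics-enabled=true

host-allowlist=["*"]
```

Every key here maps one-to-one to a command-line flag such as `--rpc-http-enabled`, which makes the file a convenient single place to audit what a node exposes.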
So one of the nodes is going to be special, in the sense that it's used as a boot node, right? And then finally we can use static nodes as well. So let's take a look at those static nodes. The static nodes were generated as part of the run, right? And we can see all those nodes here, each with its own public key set up, and they all have different IPs. Those IPs are actually fixed when we set up the Docker Compose. So let's go down a little bit. Let's say, okay, we have a Besu node here, and you can see we hard-coded the IP as part of the Docker Compose. That allows us to make sure we're talking to the right node when we say so. It's also got the right key: the config node validator1 folder has the right key for /opt/besu/keys. Well, I only trust my eyes so far. So, config, node validator1, node key. There we go. This node key here is actually the private key that identifies our node on the network, and that's the pub key for it. So the quickstart has done the hard work of wiring all those crypto elements together, so you don't have to, right? And it helps you go much faster at understanding how all those elements fit together. So all the details of each node are now set up, right? They're exposing ports; it looks like these RPC ports, for example, are going to be exposed over 21004, et cetera, et cetera. And then we have additional tooling, such as an EthSigner proxy, which will allow us to sign data as we send it to the network. Next, we have the private enclaves. So we have Tessera, which is going to be at a specific private port as well, and is going to connect to one of the nodes. Any questions about any of that? I'm looking at Discord, and I'm seeing some folks are amazed today. That's great to hear. Okay. So this is where I'm going to leave you in terms of exercises. I'm happy to answer any more questions at this time, anything at all you may want to ask.
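The static-nodes.json file being discussed is just a JSON array of enode URLs, one per peer: the node's public key, then the IP and P2P port that the Docker Compose file pinned for it. Schematically, with the keys as placeholders and the addresses purely illustrative:

```json
[
  "enode://<node-1-public-key>@172.16.239.11:30303",
  "enode://<node-2-public-key>@172.16.239.12:30303",
  "enode://<node-3-public-key>@172.16.239.13:30303",
  "enode://<node-4-public-key>@172.16.239.14:30303"
]
```

Because the compose file assigns each container a fixed IP, these URLs stay valid across restarts, which is what makes static peering workable without relying on discovery.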
But this is a quick tour of how to run your own private network by having it generated, so you don't have to do the hard work of wiring all those things together yourself. I would say it's a quick start; it's not the end. For example, if you look under the Enterprise Ethereum Alliance GitHub, I used this as a start and then modified it: I actually mashed together Besu and GoQuorum, working together over the QBFT algorithm. Miles, I'm really sorry that you're having issues with building on this particular setup. Yes, so let me drop in Discord as well all the Besu workshop reference material. This is going to be here. Of course, it will all be coming together on the slide deck as well, so we'll post that too, and it will all be linked together from our wiki on hyperledger.org. So we'll have the ability to really help you with that. Now is a good time to ask me any questions you may have around running Besu, any particular behavior you'd like to see, any enhancements, anything I left out. I left a ton out. There is some additional workshop material that I know some people better versed in permissioned networks want to contribute. In particular, I did not talk about smart-contract-based permissioning, which allows only some members of a consortium to do certain actions. I believe this should come in a separate workshop, because it's a topic of its own and quite interesting. If you have any questions at all, feel free to ask me now. I'd like to take a minute to find my voice again, and then at 10 a.m. my time we'll continue on the journey of actually building and extending Besu, and really create a new opcode and deploy that, to make sure you can play with it. Just checking the messages here, some questions trickling through. All right, one question I'm getting is: which RPC URL did you use to add a new network to MetaMask for the local devnet? So, let's take a look. I'm still sharing my whole screen, hopefully. Looks like I am.
If I go to MetaMask, at any point in time I can pick a different network. By default, the network it's going to pick is always going to be mainnet, but you can also choose to go to another network. Let's see, Ropsten, right? No, Ropsten doesn't ring a bell here. Okay, but if we go to the bottom of this, we can add localhost 8545. Oh, you know what happened? I killed Besu, so it's not available to connect to; it's not able to tell me what the balance of my account is. So it's just going to keep fading here. How about we do this? The ledger was right here, I'm just saying. And then if I just go up, I find my Besu network there; RPC should be enabled. I have the HTTP CORS origins set to localhost and this Chrome extension. Have you seen this in the config.toml file? You can also just put a star in there if you want anyone to be able to connect to you, with the risk that someone might be able to hijack your RPC port. So let's run Besu again. And now Besu is up and running. Let's go back here. Something went wrong? Oh, really? Let's see. Connect again. It's just not going to let me. Here we go. So we're back on and we have this. If we go to, I believe, we can expand this; this is a lot of MetaMask trickery, you can have it as its own window. Then you can do things like looking at the settings of MetaMask, where you can add networks, for example. If you actually go to the networks, you can see localhost, and you can see it's by default set up with HTTP localhost 8545, the chain ID, which is 1337, and the currency symbol; you can also set a block explorer URL if you want to. So if you run your own consortium-based enterprise deployment, you can also allow for that to happen and for you to connect to an RPC node, as for the Enterprise Ethereum Alliance testnet. That's what we do.
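For anyone following along, the MetaMask Add Network form for a local Besu dev node boils down to a handful of fields. These values match the dev-mode defaults discussed here (chain ID 1337, RPC on port 8545); adjust them if your genesis differs:

```text
Network Name:        Besu local dev
New RPC URL:         http://localhost:8545
Chain ID:            1337
Currency Symbol:     ETH
Block Explorer URL:  (optional, leave empty)
```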
We actually have an RPC node, and we can connect to it using MetaMask, and all those instructions are located in the EEA GitHub, which is members only, for questions. Thanks. It seems that it doesn't accept the chain ID; I guess I'm lacking some configuration. We'll try in a bit. Yes. The chain ID is a thing that MetaMask will really not like if you don't provide it exactly the same. So here it happens to coincide: the chain ID is being set to 1337, and it must match the chain ID returned by the network. This is because of the ETH versus ETC clash. ETC was using the same chain ID as ETH, so the two networks had to separate from each other and include the chain ID as part of signing transactions, so that someone could not take a transaction that was signed on ETC and replay it on Ethereum itself. If you look inside the dev genesis file, this is at the very top: the chain ID is set to 1337 there. If it does not match, you'll get an error that the chain ID is not configured right. Or maybe you're not using this dev environment and you're using your own genesis file instead; that's perfectly fine, but just make sure they coincide, and then you'll be good to go. No worries. Any other questions? Here is the plan: I'm going to take questions for another five minutes or so, and if you would allow me, I'd love to take a break for about ten minutes. And at five to ten, we'll start again and we'll talk about development: how to build on top of Besu. If you have the Besu workshop materials, you can already see a little bit of what we're about to do. We're about to get really deep into building our own EVM opcode and then deploy a contract that uses it and play with it a little bit. You can see that in the extension. I'm only telling you this so, you know, you come back in ten minutes, right?
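The replay-protection scheme being described here is EIP-155: the chain ID is folded into the transaction signature's `v` value, so a signature made for one chain is invalid on another. A small sketch of the arithmetic; the formula is the one from EIP-155, everything else is illustrative:

```python
# EIP-155 replay protection: v = recovery_id + 35 + 2 * chain_id,
# so the chain ID can be recovered from any signed transaction and
# checked by the receiving chain.

def eip155_v(recovery_id: int, chain_id: int) -> int:
    return recovery_id + 35 + 2 * chain_id

def chain_id_from_v(v: int) -> int:
    return (v - 35) // 2

# For the dev network's chain ID 1337:
assert eip155_v(0, 1337) == 2709
assert eip155_v(1, 1337) == 2710
assert chain_id_from_v(2709) == 1337

# A transaction signed for Ethereum mainnet (chain ID 1) yields v in {37, 38},
# so a node on chain 1337 can tell it was not meant for this network.
assert chain_id_from_v(37) == 1
```

This is why MetaMask insists the chain ID you enter matches what the node reports: transactions it signs with the wrong chain ID would simply be rejected.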
You can use that as part of a complete deployment that allows you to play with this new opcode, call the contract, and deploy contracts. That actually makes sense. Going back to my questions slide: if you have questions, just raise your hand or get off mute and ask me anything, or go to Discord; feedback is important to me as well. Did I leave anything out? Is there anything you'd like me to go over again? Don't be afraid to ask. Hi, Antoine. I actually have a bit of an unrelated question, if I may. Sure. So when building a Docker image for Besu and, for example, deploying it to AWS, are there any metrics or measurements you can tell me about for GraalVM versus a normal JDK? Is there a significant speedup using GraalVM? Oh, that's a very good question, thanks for asking. I believe there is a speedup, and it's currently done, I believe, in a Docker build. I don't know how significant it is. I know that the build is much more grueling, in a way. I haven't seen any benchmarks on that particular difference. I've used GraalVM in a different setup where I saw 20x-type improvements, at least in terms of startup times and things like that. So let me answer as best I can: GraalVM is currently supported by Besu. It has not been put in the foreground as the premium or best way to run Besu yet, and as far as I can tell, it's mostly used as part of the Docker build. So it allows you to play with it right now. But yeah, I haven't seen a benchmark that says authoritatively, hey, you should use that, it's 20x faster. But I believe it is much faster. If you're interested, let's see if I still have that open: our docker folder in here, with GraalVM. And the Dockerfile itself is a little involved, but not that much, and you can see that it's just taking Besu and running it on top. Yeah, thanks for that answer. That's actually how I discovered this, so that's why I'm asking. But yeah, if you don't have any benchmarks, that's okay.
Thank you for the answer. And actually, I'm going to eat my words a little bit, because what I'm seeing in this particular Dockerfile is that it's running on top of GraalVM as the JVM, but not actually as a native artifact. What you would eventually want is to build Besu natively down to a binary that would just execute on its own. I don't think we're there yet. I think there are a couple of issues around native bindings and things like that that might get in the way. But the GraalVM approach makes sense to me. So yeah, if you want to help with that, it sounds like a great enhancement for Besu. The only caveat I'd raise is that it's a long build process, because you're compiling and assembling every bit of the binary into one huge binary that can be 100 megabytes big. Yeah. Vijay is messaging me as well, saying, hey, I'd love for you to share the links that you mentioned. So, the Enterprise Ethereum Alliance is a members-only organization; sharing content that is supposed to be for members is not great, but some of it is public, and I'm not above making some publicity for my hard work. So we're going to go over the QBFT testnet. The QBFT testnet is this quickstart testnet that I customized to mesh together GoQuorum and Besu. And in that testnet, I removed a number of the complexities I didn't need for the particular use case we had at hand. In the Docker Compose, you can still see that we're using the same YAML-anchor-based approach with the node definitions, and we're also using that for GoQuorum. And then we actually put them together for all of them to work as one, right? So: validator one, two, three, and four. Two of them are Besu-based, two of them are GoQuorum-based. They work together, using the right keys, to come to agreement over the QBFT consensus algorithm.
On top of that, we have an RPC node, which is a separate deployment that allows us to connect to it and send transactions. By default, you don't want to expose your consensus nodes to RPC traffic, because it might get really noisy. And we have an EthSigner proxy and other utilities, such as a block explorer, Remix, Prometheus, Splunk, as well as cAdvisor and ethlogger, which is a Splunk-based utility that allows us to get all the data and take it out, and an OpenTelemetry collector to collect all the metrics information from Besu and the others. So all those utilities surround the environment of a very simple setup: four nodes for consensus and one RPC node, all hosted as part of this Docker Compose. And this is what it looks like. Thank you. No worries. I believe the sources are under a permissive license, so you can get heavily inspired, and you would be more than welcome to steal from it. If you're interested in the Enterprise Ethereum Alliance activities, there's a testnet working group, which has a separate repo where we host the actual configuration and some of the permissioning, such as the private keys and all the initial allocations, all those things. So members of the Enterprise Ethereum Alliance can come in and say, hey, I'd love to be able to transact on that testnet. And we tell them, of course, here is an account that you can use for all sorts of things, allowing you to play with the testnet itself. All right, I'm going to take five minutes. No, sorry, there was one more question: is there a way to prevent the Besu nodes from deploying a contract in a private network? Is there a specific setup for that, or maybe we're going to cover that later on? We will not be covering that later on; it's way above my pay grade. I am not good enough at this to tell you exactly how it goes.
However, there is a set of functionalities in Besu right now that allows you to curtail who can deploy and who can transact on the network, either according to a smart contract specification or according to a permissioned node setup. The permissioned node setup only allows some of the nodes to participate, and the smart contract specification allows you to set up who can actually do deployments or transact on the network itself. And that's a good reminder, for the people who may have missed it earlier: Matt Nelson from ConsenSys was on earlier, and he said that they're putting plans together for future workshops. If there's content like that that's not covered here but that you'd like covered in the future, you can just fill out this short survey; I've dropped the link both on Discord and in the Zoom chat. All right, this is my chance to grab five minutes of rest, and then we'll get back in the saddle and do two more hours where we go a little deeper into development. I'm going to queue up our next presentation so I can find it as I'm going. And Antoine, while you're taking a break, I'm happy to answer any questions people have about Hyperledger in general, the community or anything. People are welcome to take their own break, or if you have questions, I can help answer. So I think as Antoine shared earlier, Hyperledger Besu is just one of the projects of Hyperledger. So if people have questions about Hyperledger in general, feel free to ask and I'm happy to answer. If not, we can just hold on, and Antoine will be back in a couple of minutes. So, just a little bit of background about myself, to give context about the questions I can answer: I've been at Hyperledger for almost five years now, Senior Director of Community Architecture, with a long history in open source. My focus is on the Hyperledger community: making sure that the community itself is healthy and that people who want to get involved can get involved.
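As a pointer for the permissioning question above: Besu's local, file-based permissioning is driven by a permissions config file plus command-line switches such as `--permissions-nodes-config-file-enabled` and `--permissions-accounts-config-file-enabled`. A sketch of the file, with placeholder values (and note that older Besu releases spelled these keys `nodes-whitelist`/`accounts-whitelist`):

```toml
# permissions_config.toml (illustrative; the enode key and account are placeholders)
nodes-allowlist=["enode://<node-public-key>@127.0.0.1:30303"]
accounts-allowlist=["0xfe3b557e8fb62b89f4916b721be55ceb828dbd73"]
```

The smart-contract-based flavor mentioned in the session moves these lists on-chain, so the consortium can govern them by transaction instead of by editing files on each node.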
So any questions you may have about getting involved with Besu or other parts of Hyperledger, I'm happy to give you some suggestions and pointers and share some links. Okay, I see there's a question about FinTech. That's great, let me share some links. So, just to give a little bit of context about the Hyperledger community, there are really two main parts to it. We have the projects, such as Hyperledger Besu and Hyperledger Fabric, for example; those are developers around the world coming together, creating these different tools. But if it stopped there, that wouldn't be enough. We also want to help people use these tools. So I just dropped a link to the special interest groups. We have a number of special interest groups that are helping build open communities around taking these tools, like Besu, as they're getting created, and applying them in different places. So for example, we have a financial markets special interest group. We have a healthcare special interest group, telecom, supply chain. So it really runs the gamut across a number of different things. So if you're interested in applying them, and I'm just dropping a link, for example, to the financial markets special interest group because there was a question in chat about FinTech, those are good places to go to get involved. Again, you're welcome to get involved in the development of these projects themselves, but if you're just interested in how these tools are getting applied, I really think the special interest groups are something to take a look at. Those are open as well. All the special interest groups have regular calls that are open; the dial-in information is on the wiki. They have chat channels, they have mailing lists. So if you have questions, or you want to share what you're doing, or just get involved and connect with other people that are looking at using blockchain or distributed ledgers in a given industry, the special interest groups are a good place to go.
And they also have presentations like this one, too. It is really a great place. If you want to see use cases in a given industry, a lot of the SIGs do have regular presentations where people share what they're doing and talk about what they learned, and you can ask questions. I don't know if that answers the question posted in chat about FinTech and blockchain; if not, I'm happy to talk more about that. Here, I'll share another link. Another good place to go if you're interested in a specific area like FinTech, or pick a different industry: telecom, healthcare, et cetera. There are a number of case studies that have been published on the Hyperledger website; I just dropped the link to that in the Zoom chat. You can go there and learn a lot more about what people have done. These are in-depth looks at a business case that has been addressed or solved using distributed ledger technology. So for example, here is a perfect one: a case study about what an organization called Public Mint developed with Hyperledger Besu. You can see, I don't have the ability to share my screen right now because Antoine is currently sharing his, but if you click on that link... oh wait, okay, let me share my screen. Maybe I can give you access, yeah. Okay, great. Is my screen showing up? Just give it a sec. Yeah, I have actually had a problem with Zoom recently not sharing my screen. So let me stop; maybe, Antoine, if you don't mind just clicking on that link. Yeah, no worries. Sorry, I know you were taking a break, so sorry about that. No, it's no problem. But I think this is relevant, because it is an example of Besu being used in production. So again, I dropped the link to the case study area in the Zoom chat. You are welcome to take a look. This, again, is an example of the different case studies that are there.
We worked with a technical writer to really do a deep dive on a specific production use of a Hyperledger project, this one being done with Besu. So if you're really interested in how the technology that Antoine is describing is being used in production, check out these case studies. They have a lot of information in them. And again, if you want to get involved with people using Besu and other projects in a given industry, the special interest groups are a great place to go to get connected to people. So if Antoine's back, I'll stop. But if people have any additional questions about the community, I'm happy to answer them in chat. Thank you, David, so much for helping and providing context and the view into the business side of how to use Besu. I think this is instrumental. We're not just building bits; we're also printing real money. Yeah, exactly. If people were developing things and it just sat in a repo somewhere and nobody ever touched it, what's the point, right? We want to build these tools and have people use them too, right? So, exactly. And so we're a few minutes away from starting our next workshop. I'm going to queue up a number of things here. Let's see. We're going to talk about extending Besu. Before we get started, I'm going to implore you to follow the prerequisite steps, if at all possible: install Java, Git, Docker, and Docker Compose. We will actually not use Docker or Docker Compose for this section; they're only for the operations part and running Besu, especially if you want to run the quickstart. So you might be able to skip Docker and Docker Compose. Let me paste the prerequisites in Discord, and let me paste them in the chat as well. For this session, I will be spending quite a bit of time in an IDE, typing away code. It may not be to your liking. If you don't like it, say so; let us know how you feel, because otherwise I can really geek out for hours into why I pick this particular Java class or what's going on here, right?
So if it gets too technical and it's not what you wanted to hear, I'm happy to take it a notch faster or different, or give it a different pace so you can have a better experience. If you want more, we can give you more. Someone is typing away in Discord. I was, actually, but it's going to be easier if I ask it verbally. So the link that David shared speaks to how the tech is being used now, right? And then, is there a long-term vision for where Besu is going technologically? Because I know that there's been a lot of work in terms of, for example, the privacy groups and the flexible version of those, and things like that. So is there a long-term vision for the technological stack? Yeah, so there are two different visions, which complement each other and are a bit at odds in ways. One is the mainnet pull. There's huge pressure on the development team of Besu to keep up with the mainnet changes, specifically the merge, specifically all the changes coming through, which have been really interesting but are quite taxing in terms of time from the team, from what I can see. Besu is also not just supporting the merge but also supporting Ethereum Classic, through a set of different committers who have been supporting all the changes there and making sure it stays compatible with Classic. So this is actually taking quite a toll in terms of effort on the team. The biggest push Besu has had over the last two, three years has been to get itself to compatibility with GoQuorum, with the explicit goal of eventually being a better contender for privacy and enterprise use cases than GoQuorum has been. But GoQuorum has quite a bit of an install base, and the idea would be to eventually replace GoQuorum with Besu in a safe manner. And for that, there's a separate team that's been working on harmonizing the different clients and making sure that they can get to parity. Now, parity was reached, I think, a year ago.
Now we're entering the phase where GoQuorum is being offered by ConsenSys itself as a service as well. So we're starting to see a little bit of stability happening; we're starting to see a bit more of a market around that playing out. The thing that needs to happen to keep propelling this offering forward, and to see in terms of privacy what they want to do, is to have more of the sort of use cases that David was advertising just now. So I believe it's now in the maturity stage, where we don't want to bring a lot of changes to the code base on a whim, and there have been some interesting feature improvements on the radar, such as adding support for new encryption algorithms which are FIPS 140-2 compliant, allowing folks to actually engage with the existing FinTech industry; it's being used by Mastercard currently, for example. From what I'm seeing, folks from Mastercard have made some very significant contributions to make Besu much more modular and available for FinTech. So that's where I would go: it's going to be driven by these types of use cases, more community-based on the enterprise side, and it's going to be quite opinionated and very extensive on the mainnet side, because that's where some of the innovation is coming from. Awesome, thank you for that. And then, you mentioned the team. So out of curiosity, how large, if you can share that, is the team actively contributing to Besu? Yeah, it's not a secret. Let me see. Yeah, we have a metrics dashboard that has a lot of that. I don't know if you're pulling that up, Antoine, or if you want me to grab it. Yeah, there are dashboards, right? But I mean, you can take a look right this second and see the maintainers. So this is all maintained, and it's all part of the project: there are actually votes that happen as part of the project's life, where people get voted in and out. That's part of the life cycle of their contributions.
If you go, I don't know if it's under maintainers, there's a set of requirements around what it takes to become a maintainer of Besu and what it takes to remain one. If you stop maintaining, pushing code, helping, or voting on things for six months straight, then I believe we move you to emeritus status, and there's a vote that happens for that as well. And Antoine, I'm going to drop the link to the public Besu dashboard if people want to take a look at that, or if you want to show it briefly. The Linux Foundation has what they call an Insights tool that looks at not just development activity, but community activity such as chat, whatever the tools are in the community, and provides analytics for them. So if you're curious. Yes, it's really good. Thanks for sharing, David. I had no idea that this was a possibility. I would say there's been a little bit of creative tension on the chat side. The Hyperledger Foundation used Rocket Chat for a long time; most recently it moved to Discord, and it's really helped in terms of maturity. So if you want to chat with developers on Besu, there are a number of channels on Discord you can join: besu, besu-contributors, besu-reviews, where you can see the activity. It's a worldwide team; there are people in pretty much every geography, and so there's always someone around. Okay, that's great. Especially the chats, that's probably going to be useful in the future. Yep. Yeah, and all that's open, no restrictions. It's all public and available for anybody who is interested. So let's go back. We are a bit over time, and I believe we have a little bit of time for folks to join us. So we're going to go to the next part of our workshop. I'm going to ask David as well to just drop again the rules of engagement and a link to Discord, if that's possible, David, in the chat, for the folks that have questions. Yeah, I can just say a quick word or two.
And if you've heard this from me earlier, apologies for repeating, but for people who are joining the second half, welcome. As Antoine said, there are some rules of engagement, basically. We just ask that people are civil with each other, but it is an open community, which means anybody anywhere is welcome to get involved. That could be joining this workshop. It could be joining the Discord channel; I just dropped a link to that in the Zoom chat. It could be going to the regular Besu calls. It could be submitting a patch to Besu. There are all sorts of ways you can get involved, from joining discussions to helping with development to using it. We talked earlier about how people take the code and use it in different ways, and then there are the special interest groups around different interests. So there are all sorts of ways to get involved in the community. It's open, there are no restrictions. The only thing we ask is that people be civil. And if you have questions about the community, feel free to drop them in the chat and I can answer those while Antoine goes into the technical details. Thank you so much, David. This is very helpful to frame the engagement here. So I'll be on the lookout for any questions or anything coming up as I go. I prefer to be interrupted. This is meant to be as interactive as possible. So just chime in if you have any comments, questions, or anything that comes up about the work today. So we're in the second part of our workshop. In the first half we talked about Besu: how to run Besu, how to configure Besu. In this session, from 10 a.m. to noon Pacific, we'll go deeper inside Besu itself. We're going to take a look at the code. We're going to take a look at how it's organized on GitHub. I think the question we just had around how we maintain the community is a great tee-up for us to also show the life of the project.
And then we'll go into actually working with the IDE. We'll share and type code away, which, as you can imagine as a developer, is always a risky operation. So I'll try to impress you with my typing, and we'll work on actually understanding how to extend Besu: we'll extend Besu together with a new opcode, and we'll use that opcode to actually generate a new Besu distribution that includes it. We'll deploy a contract to it, and we'll be calling that opcode as part of this workshop. Again, bring your questions, interruptions, things you want to know; it's a good time to ask. I'd rather have a lively debate and interaction with you than just type away or blabber at slides for the rest of the duration. So the slides here are more of an indication of what we're going to talk about, content that you just need to read to understand. So follow my voice. Again, the French accent may get to you. If it does, feel free to ask me to repeat or clarify what I say. I'd rather you ask me than I blabber away and you're left behind. Again, some prerequisites. This is on the Wiki; I dropped this link in Discord as well. Now, let's talk a little bit about how to build Besu and about development tasks. As part of the prerequisites, I've asked you to please download the source using a git clone so you can actually play with it, right? And there are a number of things which are really specific to how Besu is built. Besu works with Gradle. If you're not familiar with Gradle, I recommend you take a trip to gradle.org. If you've ever built anything in Java, especially with Maven, Gradle is kind of an evolution of that. It's a bit more scripting oriented. It's got some roots in Apache Ant, but it eventually moved on to becoming its own DSL. It has the ability to build Java, make sense of it, and create a lifecycle around it, which is convenient for the developer.
One thing that Gradle does, which is really welcome in all this, is that it has the ability to add a wrapper that is local to your disk, to your development environment. You can check it into Git, and it takes care of managing Gradle for you. So you don't need to download Gradle separately as a separate tool, et cetera. Take a look: let's go and open the gradlew wrapper. It's a bash script that will check whether Gradle is installed or not. If it's not installed, it will download the whole distribution for you. So in Git right now inside Besu, there's a gradle folder with the wrapper that points to a gradle-wrapper.jar, and a gradle-wrapper.properties file which points to which distribution of Gradle to use; it's going to use that going forward, 7.3.3 in this case. You can use Linux, you can use Mac, you can use Windows; as far as I can tell, they should all work fine for building and development. I'm on a Mac right now, right? So I'm very biased toward Mac for all the bad reasons, involving laziness and being a manager in my day job. So most of the examples and most of the work that we're going to be doing today are on a case-insensitive operating system, but with Linux-like properties. So Gradle is managing itself through the wrapper: when you run ./gradlew, you're actually calling this little bash script that then delegates to that distribution of Gradle. If you just run ./gradlew with no arguments, it will run the default task. So to avoid that, let's pick a task that makes sense. With modern Java projects, we have found ways to remove friction in development. This is new; this did not exist five or six years ago that easily. Now we use things like Spotless, which reformats the code so that formatting does not invite scrutiny anymore.
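The wrapper configuration being described lives in a small properties file. The following is a sketch of what a typical gradle/wrapper/gradle-wrapper.properties looks like; the version number matches the 7.3.3 mentioned above, but check the file in your own checkout for the authoritative values.

```properties
# Typical shape of gradle/wrapper/gradle-wrapper.properties.
# distributionUrl is what pins the exact Gradle version the wrapper downloads.
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.3.3-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```

Because this file is checked into Git, everyone who runs ./gradlew gets the same Gradle version without installing anything themselves.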
It means people don't have to care about the format of our code, right? It's picking tabs versus spaces, picking the indentation rules for us. And best of all, it's doing it for us, so we don't have to. We don't have to argue over indentation. It's also checking things like making sure we have a license header in every file for the copyrights, so we don't mess up and have files which are not up to date with our copyright headers. Here is an example, and it's going to really format everything for us. So a good example: run ./gradlew spotlessApply. I'm running a terminal inside my IDE, right, just to make it simple for myself. And then you can see it's actually running. The first thing it's doing is configuring itself: it's discovering the modules that make up Besu. Now it's going over each of them and running the spotlessJava tasks, right? So this is your first run of Gradle. And it should come back with BUILD SUCCESSFUL, because you haven't done anything and nothing has been changed, right? And if you run git status, there are going to be no changes whatsoever. You can do something a bit more extensive with ./gradlew check. In the same way that we use Spotless to make sure that you don't mess up and don't have inconsistent indentation rules, we also want to make sure you're safe in the way you write and compile the code. Most of the time, you know, there are some gotchas in Java that have been there forever. We don't want to spend time in code reviews telling you about how to close your file or how to better split a string. So for that, there's the ./gradlew check command. This is run by CI every time you make a pull request against Besu. So what does that look like?
Well, in our build.gradle file, we use something called Error Prone, which we configure and extend to make sure that we're in line with best practices for Java development. It's very useful because we just have to upgrade Error Prone to the latest version every once in a while to pick up new rules and better ways to develop our product. So if you don't know that and you're trying to contribute to the development of Besu, you may not have a great time, because you may find yourself in a situation where those checks fail. So before you commit, it's a good idea to run Spotless. It's also a good idea to run check to make sure that you're in line with what is expected. Any questions about any of that? Really, I'm just using those two tasks; they're very convenient for showing you what Gradle is and how to run it from the command line. You also have the ability to run Gradle from the IDE itself, which I don't do, out of habit of relying on the command line for everything, which is just showing my age. All right, let's go back to the deck. So Hyperledger Besu is laid out as a multi-module Gradle project. Most of the time when you build software, you don't have multiple folders, libraries, components; you usually just build everything in one place. Besu is more complex than that. It's got many moving pieces. So it's built as one project made of many modules. We'll take a look at what that means in just a sec. The same project contains the sources but also the distribution logic. We're going to take a look at that: how it's built, and how the zip files and the tar.gz files are generated. And finally, because it's one big layout, it allows us to sanitize the versions of all dependencies in one place, what we call a bill of materials (BOM).
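The Spotless and Error Prone wiring just described might look roughly like the following in a root build.gradle. This is a hedged sketch, not Besu's actual configuration: the plugin versions, formatter choice, and disabled rule are illustrative placeholders, though the plugin ids (com.diffplug.spotless, net.ltgt.errorprone) are the common ones for these tools.

```groovy
// Illustrative sketch of wiring Spotless and Error Prone into a Gradle build.
plugins {
  id 'java'
  id 'com.diffplug.spotless' version '6.0.0'   // version is a placeholder
  id 'net.ltgt.errorprone' version '2.0.2'     // version is a placeholder
}

spotless {
  java {
    googleJavaFormat()                          // one formatter for everyone
    licenseHeaderFile 'gradle/license-header.txt' // keeps copyright headers consistent
  }
}

dependencies {
  errorprone 'com.google.errorprone:error_prone_core:2.10.0' // placeholder version
}
```

With this in place, `./gradlew spotlessCheck` and `./gradlew spotlessApply` handle formatting, and Error Prone runs as part of compilation, so `./gradlew check` surfaces both formatting and bug-pattern violations before CI does.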
This is great because, among other reasons, we can then tell you authoritatively: hey, we're using this version of Spring, this version of Vert.x, and so on, right? So let's take a look at what that looks like in reality. When you set up a multi-module Gradle project, you set it up in settings.gradle. You include all those additional modules right there as part of your Gradle project. That's how Gradle knows to pick up those subfolders and all this information. To pick one that is easily recognizable, we could go to one of the module folders. Let's take a look. You can see that it's got its own build.gradle, because it's its own module. So it's telling you: hey, I'm a Java library, please apply this plugin as well; here is the name you should use when you package me; here are some attributes you can add to the manifest of the jar file that help identify me when it's downloaded or used; and here are the dependencies that I have. Now, notice none of those dependencies have actual versions, right? None of them have versions by default. And finally there's publication: we can set some rules about how we want to publish it to Maven. I mentioned as well that we have the distribution logic as part of this. So in the besu project, which is a module of Besu, we have a build.gradle that goes a little deeper. Oh, no, no, that's not quite it; that's the application itself. So in build.gradle, toward the bottom, we should have distributions here for the tar and the zip. The publishing of this zip is going to take all the sub-projects and push them into an application. And that's part of the actual distribution. The distribution ends up under the build folder, in the distributions folder, and that's where you would see the tar.gz and the zip file.
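The settings.gradle mechanism just described is how a multi-module build declares its pieces. Here is a hedged sketch of its shape; the module names below are illustrative (a few of them do exist in Besu, such as config and the ethereum submodules, but see the real settings.gradle at the root of the checkout for the full list).

```groovy
// Shape of a multi-module settings.gradle: every included path becomes a
// subproject with its own build.gradle that Gradle discovers automatically.
rootProject.name = 'besu'

include 'config'
include 'crypto'
include 'ethereum:api'    // colon maps to the ethereum/api subfolder
include 'ethereum:core'
include 'ethereum:eth'
```

Each included module then carries its own build.gradle declaring its plugins and dependencies, which is exactly the per-module file being walked through above.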
This is where we were earlier in our previous session, where we were running Besu. This is also where we create our Docker distribution, right? We take the distribution that we just created, untar it, copy it inside a build folder, and then run our docker build commands, so we can generate our Docker image as part of that. So all of this is run completely through Gradle: creating the distribution, creating the Dockerfile, creating the Docker image, all those things happen there. Any questions around that? Okay, let's move on to the next item: versions, all in one file. For that, we go to the root of the checkout, and there's a gradle folder here which contains all the information we want. It's got a versions.gradle file. That file is imported by the root build.gradle, but it lives in a different place so that we can easily separate edits to that file from the build script. It's dependency management, a bill of materials, where you can discover all the dependencies of all the Besu components in one script. So if any dependency needs updating, or if you want to see the versions used by Besu, you can easily go here and find everything you need to know about the versions currently in use. It's extremely useful when you're trying to understand whether the version of Besu you're using has a particular vulnerability. That's one way to do it. Another way is to go to mvnrepository.com and see whether those dependencies are listed as vulnerable. The third way is to go to the distributions folder, unzip Besu, look inside the lib folder, and see what is actually packaged in there.
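A versions file like the one being described typically centralizes every dependency version so that module build files can omit them. The sketch below assumes a dependency-management plugin that provides a `dependencyManagement` block (the Spring dependency-management plugin is one common choice); the coordinates and versions shown are placeholders, not Besu's actual pins.

```groovy
// Illustrative gradle/versions.gradle applied from the root build.gradle.
// Modules then declare e.g. `implementation 'io.vertx:vertx-core'` with no
// version, and the version is resolved from this single bill of materials.
subprojects {
  dependencyManagement {
    dependencies {
      dependency 'com.google.guava:guava:31.0.1-jre'   // placeholder version
      dependency 'io.vertx:vertx-core:4.2.1'           // placeholder version
      dependency 'org.rocksdb:rocksdbjni:6.27.3'       // placeholder version
    }
  }
}
```

This is why, in the module build.gradle we looked at earlier, none of the dependencies carried versions: they all resolve through this one file.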
So taking a look, you can see that what is declared in the Gradle configuration is eventually what shows up as dependencies here. Okay, so if you're familiar with Java development, this should not be too far from home. I'm seeing some messages here; it seems we're still having a little bit of an issue. Let's expand this. Yeah, so it seems like you have an issue with repo.maven.org not being available to you for some reason: could not get json-smart 2.4.7. Interesting. It's a very big project; I believe this is a flaky repo.maven.org issue. If you try again and run the exact same thing, it's likely to download and get you where you need to be. You're still assembling after 49 minutes? Is your internet flaky? Okay, Miles, I hope you get there; please try again if you get a chance, it might get you where you need to be. So we covered the basics. If you know all those basics, you're now able to run around the repository and make sense of what's going on. You've been given a broad assessment: okay, there's a notion of modules, and we're running with Gradle. So we can run assemble now. Assemble is not our task; it's not something special to Besu. It's actually a lifecycle task of Gradle, and we can use it to assemble our very own Besu. I believe I performed that task earlier in the last session. As you can see, it goes pretty quickly because it's already compiled all this code; it's already got all the dependencies it needs. So it's just checking that the sources are up to date. And very quickly it says: well, you know what, the zip file doesn't even need to be regenerated, because everything checks out. So it's able to be smart about this, and it just says the build has succeeded.
And yeah, as you can see, assemble now finishes in 14 seconds. Hopefully I'm not making you too uncomfortable with that. It's generating the zip file and the tar.gz file as well. So please, at home, run this as well; we'll be using it as a way to generate our very own Besu in a moment, so I need to make sure it's working for you. If you have any questions or run into anything, please let me know. The assemble phase is important because it happens before tests are run, so it will be a lot smoother and nicer to you than if you try to run something more elaborate such as a full Gradle build. If you run assemble, you don't run tests, which is what you want at this time, because it would otherwise take too long for you to try it out. The test phase, and we're going to explain why, is pretty intensive with Besu. And that's the tie-in to my next few slides. So the repository is made of modules. Each module typically contains source code under src/main/java; that's basically the Maven default repository layout, right? Additionally, unit tests are present under src/test/java. In some cases we also have integration tests; they're rarer, but they do exist, under src/integration-test/java. These are different source trees because they respond to different things: integration tests run outside of the internals of the code and are therefore more involved and more expensive to run. And we're not done: Besu has two more types of tests that run every time you do a full build. The first is a set of acceptance tests, which are extremely taxing for your system because they actually run Besu. They may run multiple instances of Besu to form a consensus algorithm between them, actually performing tasks such as producing blocks and making sure this works as it should.
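The extra integration-test source tree mentioned above is not something Gradle provides out of the box; a module has to declare it. Here is a hedged sketch of one common way to do that in a module's build.gradle; Besu wires this up with its own conventions, so treat the names below as illustrative.

```groovy
// Declare a separate src/integration-test/java source tree and a task to run it,
// keeping slow integration tests out of the ordinary `test` task.
sourceSets {
  integrationTest {
    java.srcDir file('src/integration-test/java')
    compileClasspath += sourceSets.main.output + sourceSets.test.output
    runtimeClasspath += sourceSets.main.output + sourceSets.test.output
  }
}

task integrationTest(type: Test) {
  testClassesDirs = sourceSets.integrationTest.output.classesDirs
  classpath = sourceSets.integrationTest.runtimeClasspath
  shouldRunAfter test   // run cheap unit tests first
}
```

This split is what lets `./gradlew assemble` stay fast while the full `./gradlew build` pays the cost of every test tier.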
And finally, we have reference tests, which are borrowed from the Ethereum Foundation reference test frameworks: a set of JSON files that create scenarios to test the EVM, or transactions, or block processing. So let's take a look at a simple one, such as the config folder. If you open the config folder, it's got a source folder, right? src/main/java, and then you can see it's got its own package, org.hyperledger.besu.config, where everything is neatly lined up and ready for you to play with. It's got some resources here, which hold all the genesis configurations that we talked about, and that's useful on its own. And on top of that, it's got src/test/java, which holds all the tests you want to run. We use JUnit 4 in Besu; there's been some movement towards JUnit 5, but it's not there yet. And those tests look like this, right? You can run them in your IDE by pressing this button. For example, just run all the tests in this particular class with this button in IntelliJ; that's no different from any other Java project. Oh, I need to zoom in. Okay. I think there's a presentation mode on this, and it's trippy. I think this is it. That was trippy. So that might be a bit too much zooming in, but this is a particular test class here where we're working on a specific test in Besu, and if you're using IntelliJ IDEA you can click here, or if you're using Eclipse it will also show that, and you can run tests right there inside your IDE. Actually, that's not really what I want to show you. I want to show you this particular project as much as possible. So let me try zoom. Nope, nope. Well, I'll try to be as explicit as possible about the format of those folders. In the consensus folder, for example, we have a few more.
And if I remember correctly, we have an integration test folder, test support, integration tests for IBFT. So the IBFT folder was deemed complex enough that we wanted to have everything, right? We want to test at the unit test level, so we have all the unit tests here: tests that the gossip is working, that we're able to create all the messages that we want. And at the same time, we want integration tests that perform a much deeper kind of testing, where we're going to take the whole consensus, run it, and really get to the bottom of what the gossip does: verifying between peers that they were able to see the messages they wanted to see, et cetera, et cetera. The acceptance tests take this even further. The acceptance tests have a whole DSL that allows you to run Besu. And I'm going to pick one that I happened to have worked on recently, the OpenTelemetry acceptance test, where we're trying to get a Besu node to run. And here, I'll enter presentation mode so you can see it better. So this acceptance test is setting up, on the side, a collector that is going to collect metrics from Besu, right? That's simple, right? When we set up the test, we first set up a server that is going to be the destination of those metrics. Then we create a new node with JSON-RPC enabled and some metrics configuration that we set here, right? And we build it, and we start it as part of the cluster. What that means is that behind the scenes we are actually going to start a complete Besu node with those configuration items set up. And we will then be able to interact with that node over JSON-RPC, for example, or check that some configuration has been propagated properly from the metrics configuration.
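The interact-then-wait pattern these acceptance tests rely on can be sketched independently of the acceptance-test DSL. The helper below is invented for illustration (the real tests use a library such as Awaitility for this): it repeatedly evaluates a condition until it holds or a timeout expires, which is exactly the "wait up to N seconds for the node to emit a metric" behavior described next.

```java
import java.util.function.BooleanSupplier;

// Minimal sketch of a poll-until-true-or-timeout helper, the shape of
// assertion used by acceptance tests against a live node.
public final class WaitFor {
  public static boolean condition(BooleanSupplier cond, long timeoutMillis, long pollMillis)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      if (cond.getAsBoolean()) return true; // condition met: stop polling
      Thread.sleep(pollMillis);             // otherwise pause and retry
    }
    return cond.getAsBoolean();             // one final check at the deadline
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // A condition that becomes true after ~200ms, well within the 2s budget.
    boolean ok = condition(() -> System.currentTimeMillis() - start > 200, 2000, 50);
    System.out.println(ok);
  }
}
```

Polling like this, rather than asserting once, is what makes the tests robust against the node taking a variable amount of time to start up and emit its first metric or span.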
So in that particular case, for example, we're going to wait up to 30 seconds for a particular condition to become true, right? We're going to say: hey, give me the metrics, and we're going to assert that we have received at least one metric from Besu. And it's the same thing with a trace, where we interact with the node by sending a JSON-RPC call, and then we expect that the node will pick up from that interaction that it needs to emit a trace of the JSON-RPC interaction. And then, over a 30-second timeout, we're able to repeatedly execute the logic here until we see that we got a span, it looks good, it looks like what we wanted to see, et cetera, et cetera. When we make changes to Besu, we usually cover all these types of testing, because it's important for us to understand whether we're really getting the behavior we wanted or not. Unit tests might give you some safety to some extent, but the acceptance tests really prove that the feature is working as expected. Another interesting one is the reference tests. I need to learn the keyboard shortcuts to move between test modes. The reference tests from the Ethereum Foundation are the same for all the clients. And for that reason, they're actually a little annoying, because they're not so simple for Java developers to pick up without any work. The reason is they are stored in JSON. Oh, okay, I'm going to find it. So: reference test, source, reference test. Oops, it appears that I don't have them checked out. So what I'll do is show you what they look like directly on GitHub. By default we do not check them out as part of the checkout; they're a submodule. So if we go to, let's say, the VM tests, if I remember correctly... it looks like they've been removed. We can go to the general state tests, and a good one would be, for example, stChainId. We can see that a chainId.json here is what we're going with.
It's a JSON file that sets up the environment in which we execute. Hopefully this GitHub page helps you see it better. In the environment here, we're saying that we're operating with a current difficulty of this much, the coinbase for the client is this address, and we have this previous hash. Then we have a pre section in which we say we start with accounts: two that have this much balance, a nonce, and even some code deployed, and one that does not have any code but has a brand-new nonce and some money. And the transaction is going to impact our state by sending no data whatsoever, but sending some money to this new actor. So the transaction here is going to change the state, change the behavior of the client, change the world state. Then it's tested for all sorts of different hard forks. For the Berlin hard fork, for example, we'd expect the hash of the resulting state to look like this. We expect the indexes of zero for data, gas, and value; data, gas, and value are the three dimensions that can vary, and here we only have one option for each. Sometimes there are multiple options, based on the different calls being made. And the logs entry here is the hash of all the logs emitted as part of the execution. And these are the bytes of the transaction. So these tests are hard; they're really hairy. But we want to have them so that we have compatibility and interoperability between the different clients. And as you can see, they test all sorts of different hard forks: Istanbul, London, the Merge, all that. To make sure that we're in compliance with the rest of the clients, we have to run the same tests. Any questions about that? So let's dive into the repository.
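The general state test file just walked through has roughly the following shape. This is heavily trimmed and the values are placeholders; the field names follow the conventions of the Ethereum Foundation's test repository, and the full files contain many more fields and fork entries.

```json
{
  "chainId": {
    "env": {
      "currentCoinbase": "0x0000000000000000000000000000000000000000",
      "currentDifficulty": "0x20000",
      "currentGasLimit": "0x100000",
      "currentNumber": "0x01",
      "currentTimestamp": "0x03e8"
    },
    "pre": { "note": "accounts with balance, nonce, code, storage" },
    "transaction": { "note": "data/gasLimit/value arrays of options" },
    "post": {
      "Berlin": [
        {
          "indexes": { "data": 0, "gas": 0, "value": 0 },
          "hash": "0x0", "logs": "0x0"
        }
      ]
    }
  }
}
```

The client runs the transaction against the pre-state under each fork's rules and checks that the resulting state hash and logs hash match the post entry, which is what makes the same files usable by every client implementation.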
I want to show you the different components and the different ways they relate to each other. So let me try again... there we go. Let's open this again. So Besu has top-level components, and it has components which feed into the top-level components. A few components that Besu had to build are not really related to Ethereum, things such as crypto. The crypto component here defines how we work with cryptographic signatures and key pairs for SECP256K1 and, later on with those new changes to be FIPS compliant, SECP256R1, which is a different curve. So all this crypto work is almost a separate library that happens to be part of the Besu codebase and is then imported into Besu. datatypes is also meant to be an independent set of types that are used by Besu but don't have any tie-in to the internals of Besu itself, right? There are a few more. metrics provides the tie-in to metric systems such as OpenTelemetry or Prometheus, but again, it's not tied into the actual Besu code base, not tied into the internals of Besu; it's imported as a library. Where it gets interesting is when you start looking at ethereum itself. ethereum is not a module, but it contains modules. The ethereum/api module, for example, is the top module for all the interaction you may want to have with Ethereum, in terms of the world state interfaces and things like that. The ethereum/core module is where the meat, most of the stuff, is actually present; there's quite a bit of Java here, especially around storing data, all the core setup, and all that. And the ethereum/eth module is where everything related to the eth protocol itself, the merge, and protocol validation is stored.
So you can see already that there's a separation of concerns in the code between libraries and things which are more related to Ethereum itself and its processing: block processing, peer-to-peer, node discovery, elements like that, right? And what's really interesting here is to understand that this is meant to allow you to work with mainnet without, for example, looking too much at the permissioning approach or at all the consensus algorithms. Even the consensus algorithms are effectively a separate library which lives on its own here, and all these consensus algorithms are imported into Besu core as part of this. The big hitters here, the places where everything starts tying together, are the config module, where most of the configuration is treated, processed, assembled, validated, and built into domain objects that can be used by Besu itself, and the besu submodule. So Besu has its own besu submodule, and that's where all the CLI arguments are assessed. This is where the main method is; that's the tip of the iceberg. So again, everything has been done so that the Ethereum stuff is really just Ethereum, as much as possible, without too much pollution from permissioned environments or enterprise requirements. Everything that is enterprise is on the side in its own set of modules, especially things like the enclave, things like plugins which allow you to extend Besu using an API, and anything related to privacy contracts or additional services. Any questions? Hopefully it's not too unreadable. Let's get going. So there's a gripe with Besu which is always fun to watch play out. Every time I see a new committer join Besu, there's a honeymoon phase of about a month or so where people are like: wow, Besu is so great, so much complexity, so much to do.
And then we give them a job to implement a new service, or extend an existing service and tie it into a new component. And then we hear it. We hear the incredible gripe that people develop when they realize that Besu is a tough nut to crack, a bowl of spaghetti that's been left too long on the kitchen counter. It's a lot of moving pieces. It's not, frankly, anyone's fault at this point. It's just that there is a ton of configuration and options that go into building such a complex software stack; between the normal configuration options that you would want and the complexity of all the hard forks compounding on each other, we see a lot of complexity. So I'm about to show you something that may make you want to avert your eyes. I'm not proud of it. But I believe it's important to understand it intimately so you know how to contribute to Besu, because that's where most of the innovation gets stuck. Before I do that: there is a pattern that is used a lot inside Besu called the builder pattern. It's a design pattern. What it allows you to do is that instead of creating objects directly by passing elements to the constructor, you create an intermediate object that you configure to eventually build the final object. You'll see that; I'm going to show you an example of it, and I want you to keep your eyes wide open on that. It's an interesting pattern that's used throughout Besu. It creates some indirection, but it also allows you to validate, whenever you want to create the object, before the object itself is created. So the lifecycle of validating and checking that the object is built properly is left to the builder, and it's not polluting the lifecycle of the object itself. Okay, let's go take a look at the BesuControllerBuilder. And then I'm going to go into presentation mode.
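Before looking at the real thing, the builder pattern itself can be sketched in a few lines. NodeConfig and its fields below are invented for illustration; Besu's real builders, such as BesuControllerBuilder, follow the same shape with far more fields.

```java
// Hypothetical example of the builder pattern as used throughout Besu:
// fluent setters return `this`, and validation lives in build(), not in
// the immutable domain object itself.
public class NodeConfig {
  private final String dataPath;
  private final int p2pPort;

  private NodeConfig(Builder b) {
    this.dataPath = b.dataPath;
    this.p2pPort = b.p2pPort;
  }

  public String dataPath() { return dataPath; }
  public int p2pPort() { return p2pPort; }

  public static class Builder {
    private String dataPath;
    private int p2pPort = 30303; // illustrative default

    public Builder dataPath(String path) { this.dataPath = path; return this; }
    public Builder p2pPort(int port) { this.p2pPort = port; return this; }

    public NodeConfig build() {
      // All the "make sure it's never null / invalid" checks happen here.
      if (dataPath == null) throw new IllegalStateException("dataPath must be set");
      if (p2pPort <= 0 || p2pPort > 65535) throw new IllegalStateException("invalid port");
      return new NodeConfig(this);
    }
  }

  public static void main(String[] args) {
    NodeConfig cfg = new Builder().dataPath("/tmp/besu").p2pPort(30304).build();
    System.out.println(cfg.dataPath() + ":" + cfg.p2pPort()); // /tmp/besu:30304
  }
}
```

The payoff is exactly what is described above: callers chain setters in any order, and a single build() call is the one place where the object is checked for consistency before it ever exists.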
So the BesuControllerBuilder is extremely important to Besu because it manages all the components that are going to be used as part of running Besu. And it's building a BesuController, as you could tell. So it's getting all those components in the right place, and then it's going to build the controller. You can see it's got a number of protected fields which are not final, meaning they can be set later in the lifecycle of the object. And it's got all these nice little DSL methods which say: hey, a storage provider can be passed in; set it, return this, right? Return this, return this, return this. So all these configuration elements can be set and used as part of a DSL to work with the BesuControllerBuilder. At the bottom of the file, once you've passed all this, you can see the build method itself. The build method is used to check that you're correctly building the domain object that you wanted to build in the first place. So it's doing all sorts of interesting checks: if this is null, there's not going to be a good BesuController, therefore we need to make sure it's never null. The next thing it does is create, from the configuration that's been passed to it, the additional domain objects that are required inside the BesuController. So for example, you're getting a factory to create a world state, meaning you're going to wrap the RocksDB database in something that is able to read it, make sense of it, and give you an API where you can access accounts, for example. The storage provider is going to create a world state using the data storage configuration and give it to you. Same goes for the blockchain storage. Then you're able to assemble a blockchain, which is made of the world state and the blockchain storage. Then you can have the archive, et cetera, et cetera.
So all this programming has to take place as part of setting up your client, right? Once we're good to go, we can create a transaction pool, an eth protocol manager, a synchronizer (which is responsible for talking to other peers), a mining coordinator for proof of work. We're going to create this humongous object called the BesuController. It's got a protocol schedule, an eth protocol manager, a protocol context, a synchronizer, a transaction pool, a mining coordinator, privacy parameters, the mining parameters themselves, et cetera. When it's time to close Besu, this BesuController can then close all the services in the right order, to make sure we don't get ourselves in trouble and we're not abandoning, for example, a half-mined block or something like that. So there's a lot of those in Besu, and there's a lot of spaghetti code that has to deal with the complexity of all those things coming together. Why are they so intertwined? It's the nature of the work we're doing in the client itself. We could use dependency injection with Spring. We talked about that and, frankly, didn't come out the other side completely convinced by it. The reason is we would not actually abstract away that much complexity, because of the way it's all intertwined together, and you'd still have lifecycle issues if you don't pay attention to things. Instead, passing items explicitly and checking that they're valid has been helpful to keep us sane when we're working with it. Any questions about that? Okay. I'm going hard at this and it's just the beginning, so we're going to keep going. Which folder and file are we in? We're inside... sorry, I'm getting in the way. We're inside BesuControllerBuilder, which is under besu/src/main/java/org/hyperledger/besu/controller/BesuControllerBuilder.java. If you want, I can copy the path reference.
I'll do that from the repository root and share it with everyone here in the chat, and I can paste it here as well. So this class is so important, right? It's where everything kind of comes together, and you can see all those domain objects and all those different actors coming together to play. Okay. And in the same kind of place, we also have the BesuCommand, which is kind of the origination of all the configuration, because when we run, we have to get all the command line arguments, read the config file, read the environment variables; all of these come together as part of the CLI in the BesuCommand, which is where you would start and then get all the options to work your way down to build all those elements. If I open the structure outline of this class, I can see it's creating the builder in the first place, setting all the builder configuration that we just saw based on the configuration being passed to it. And then the run method is where, after binding and configuring everything, it starts the runner, which is going to be responsible for running Besu itself. So understanding this lifecycle, understanding how it comes together, is important. It's also interesting that when you build the builder and you have all those domain objects, those domain objects are part of separate libraries and may not be aware of each other; we're also very careful in terms of classpath and dependencies that we don't have domain objects depending on each other in a way that would introduce conflicts. Jack is reporting that the Discord invitation didn't work. David, would you please help him out? Okay, we'll get David to help. Okay, so maybe it's coming through.
Now it's time to show you another example of a shared concern, where we have many things coming together that seem completely unrelated, but yet have to work together to make sense. It's the fork ID manager. It's used as part of the status message. So when I talk to other clients, I'm going to tell them: hey, I'm currently executing on this genesis configuration, and I'm up to this fork ID. The other client will be able to tell: looks like we agree that we're on the same fork ID, therefore we're able to continue working together on synchronizing with each other. Or: I don't know you, I have no idea who you are, and in that case you need to disconnect from me. So where is the fork ID manager? It's a class, so I jump from class to class with the IDE's navigation shortcuts. And yes, I'm going to post the slides at the end of the workshop; we haven't shared them yet. The ForkIdManager itself, as I mentioned, is responsible for building and representing the latest and greatest in terms of which fork I'm on. In terms of forks, let's take an example of what that means. A fork list would say, for example: at block zero on Ropsten, I was on that particular fork; at block 10, I started being on that fork; at this block, I was on Byzantium, et cetera, et cetera. So if you were, say, on or after Petersburg, you'd be able to say: here's what I know, up to this particular fork, I mean. To do that, one of the things the manager has to know is our latest head. The head is the latest block we know about that is deemed canonical, meaning it's not going to change. And for that, we're going to use this function on the blockchain object. But we don't keep the whole blockchain object around; we use this lambda notation that allows us to keep a handle to the method itself without having to keep the object.
So here we pass in the blockchain object, and we keep the chain head supplier. Then we perform all sorts of interesting things around the genesis configuration. Using the genesis block, we get all the fork block numbers that I just showed you for Ropsten. Then we compute, using a checksum, a set of hashes that identify where we are in terms of those forks. And whenever we check a peer connecting to us: if there are no forks whatsoever between us, we can stop there. Let me open presentation mode again. So: are there no forks between us? We're good, there's no need to check anything. If we don't know their hash, if we have no idea where they're coming from, then it's not good. If our chain head is behind their next fork, they may simply be on a fork we haven't reached yet; and if we're aware of that fork, we're good to go. So what we end up having to do here is that even in our network communications, we constantly have to know where we are in terms of our own chain head. And this changes every dozen seconds or so, whenever a new block comes in, or as we're syncing up. So we continuously keep asking the chain head supplier for the latest block it's aware of. We'll be asked for this fork ID for the chain head every time we connect to peers, so we can send them what we know about ourselves. We'll also be getting their latest block number. So, exiting this mode for a bit: if I go to the ForkIdManager, I'm now curious, how does it actually get created? We're getting it created in tests, sure, but it's also created in main. Looks like it's created by passing in the blockchain here, with no known forks, and this one is visible for testing. Okay, how about the other one? There we can see the blockchain, the forks, and our wire protocol configuration: are we using the eth/64 fork ID or not?
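The fork-ID idea the ForkIdManager implements comes from EIP-2124: the fork hash is a CRC32 checksum of the genesis hash, folded with each fork block number already passed. Here is a hedged sketch of that idea; class and method names are mine, not Besu's, and the peer check is heavily simplified compared to the full EIP-2124 validation rules:

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Sketch of the EIP-2124 fork-ID scheme: checksum the genesis hash, then
// fold in each passed fork block number as 8 big-endian bytes.
public class ForkIdSketch {
    public static long forkHash(byte[] genesisHash, long[] passedForks) {
        CRC32 crc = new CRC32();
        crc.update(genesisHash);
        for (long fork : passedForks) {
            crc.update(ByteBuffer.allocate(8).putLong(fork).array());
        }
        return crc.getValue();
    }

    // Simplified peer check: same hash -> compatible; different hash -> only
    // acceptable if the peer's next fork is still ahead of our chain head.
    public static boolean compatible(long ourHash, long peerHash, long ourHead, long peerNext) {
        if (ourHash == peerHash) return true;
        return ourHead < peerNext; // the peer may simply know forks we haven't reached
    }
}
```

This is why the manager needs the chain head supplier: the "which forks have we passed" answer changes as new blocks arrive.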
So now we can go back to the EthProtocolManager itself and start looking at it: okay, in the BesuControllerBuilder we were looking at earlier, we create our own EthProtocolManager. And again, we're passing around the protocol context, the blockchain, the known forks, the wire protocol configuration, et cetera. So all of this is configured for us in the end. As you can see, all those objects are again intertwined, coming together and making sense of each other. Okay. If people have found their way to Discord, let's keep going. So I'm going to discourage you a little more here by showing you how complex and involved it sometimes gets when we work on consensus and have changes that impact everything. People often think of the EVM opcodes; that's the well-known part, where folks go: oh, I'd love to create and add this new opcode, that's awesome. But that's not the only thing that changes. Whenever there's a hard fork, you may see changes to how we sign transactions: that's EIP-155. We may change the way we reward miners for mined blocks. We may want to change the way we mint blocks, so maybe changing something as simple as the extra data, or we want the base fee to be involved in the block validation. We may change the way we do difficulty calculations, and way more than that. So we have the protocol schedule to help us make sense of all that. Let's take a look at it. A protocol schedule allows you to sequence protocol specs. So you may say: from block zero to 1000, I'm going to behave with that spec, and then I'm going to change to that one, et cetera, et cetera.
So the protocol spec, and I can open that in presentation mode, has the ability to completely configure every aspect of how things work inside the protocol: from the way we calculate gas, to the way we calculate gas limits, to how we validate transactions, to how we process transactions, to how we validate block headers, et cetera, et cetera. The block rewards, the difficulty calculator: everything here is up for grabs, pretty much. And as Ethereum has matured and gone through many hard forks, this complexity has been a runaway train where we keep adding even more configuration aspects to it. Okay. The good news is it doesn't have to be too overwhelming if you're working in a straight line and just adding one more thing to your protocol spec. So we can go to the mainnet protocol specs here, and I'll open the outline. The first protocol spec is the frontier spec, the first one that was ever created. And it says: protocol spec builder, we're going to use the frontier gas calculator, we're going to use all sorts of interesting things, we're going to build an EVM using frontier, we're going to use a precompiled contract registry builder here for the precompiles of the EVM. And then we see all the configuration of that. Now, the good news is that the next one, which was homestead if I remember correctly, takes the frontier definition, passes in almost the same configuration, and says: oh, by the way, change the gas calculator and use mine instead. Change the EVM, use mine instead. Change the way we do contract creation, use my approach instead. Change the way we validate transactions, use mine instead. And name it homestead, and so on and so forth.
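The stacking just described can be sketched in a few lines: each hard fork takes the previous fork's builder and overrides only what changed. Everything here (`SpecSketch`, the field names, the gas numbers) is illustrative, not Besu's real API:

```java
// Illustrative sketch of how hard-fork specs stack on each other.
public class SpecSketch {
    public record Spec(String name, int addGasCost, boolean hasBaseFee) {}

    public static class SpecBuilder {
        private String name = "frontier";
        private int addGasCost = 3;
        private boolean hasBaseFee = false;

        public SpecBuilder name(String n) { this.name = n; return this; }
        public SpecBuilder addGasCost(int g) { this.addGasCost = g; return this; }
        public SpecBuilder baseFee(boolean b) { this.hasBaseFee = b; return this; }
        public Spec build() { return new Spec(name, addGasCost, hasBaseFee); }
    }

    // The first spec defines everything from scratch.
    public static SpecBuilder frontier() { return new SpecBuilder(); }

    // "Homestead" reuses frontier's definition and overrides only the name here.
    public static SpecBuilder homestead() { return frontier().name("homestead"); }

    // "London" stacks further and switches on its fee market.
    public static SpecBuilder london() { return homestead().name("london").baseFee(true); }
}
```

The real MainnetProtocolSpecs does the same thing: each fork's definition method calls the previous fork's definition method and overrides a handful of components.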
So if you're looking to understand how Besu stacks up all those hard forks, the place to be is MainnetProtocolSpecs, under ethereum/core/src/main/java/org/hyperledger/besu/ethereum/mainnet. If you go to the end, to kind of play with the merge, we see there's a Paris hard fork, right? And the Paris hard fork is actually building on top of the Arrow Glacier definition. The Paris hard fork is adding a new EVM with new opcodes. Okay, cool. The Arrow Glacier definition, on its side, is changing the difficulty calculator to use the Arrow Glacier difficulty calculator. Okay, no problem. Then it builds on top of London, and London changed a few things: it added the new fee market to change the way we price transactions, changed the rules around contract code, et cetera, et cetera. And London itself builds on top of Berlin, and so on and so forth. Any questions about this construct and how it's built? Hopefully I didn't lose you there. We're going to change this file in a few minutes to add our own hard fork. I hope you're ready for this. Okay. There's a resounding silence. I didn't mean to do this to you, folks. I'm sorry. Hopefully I didn't put you to sleep. Just wave at your screen if you're still around. I can't see you, but I can feel your karmic energy. So, yeah, we talked about protocol schedules and specs. Let's take a look at the mainnet EVMs too; it doesn't hurt anymore at this point. The EVM itself is a different library, because it's been moved out into its own module so it could be used by other projects such as Hedera Hashgraph. If I enter presentation mode, I can see the exact same approach and construct as you would expect from what we just saw with the protocol spec. Oh, a question. Okay, Christos, I'm glad you're here.
So if you're not too familiar with the EVM: it's got a number of operations which are defined initially with Frontier. You can see the more arithmetic-minded ones: ADD, MUL, SUB, DIV, SDIV, MOD, SMOD, EXP, all those things. You can read the Yellow Paper to understand better what they do, or you can just check out the Besu source code and open one of those operations; we'll do that in a moment. All those are part of the default set of Frontier. We add push operations, dup operations, swap operations, log operations, all those things. For Homestead, we're not just changing the gas calculator: we're adding the DELEGATECALL operation, okay? For Spurious Dragon, I believe we're just calling Homestead's registration; Tangerine Whistle, no new operations either, just calling Homestead. Byzantium, we're adding something again: the Byzantium operations for RETURNDATACOPY, RETURNDATASIZE, the REVERT operation, the STATICCALL operation. Crazy that we didn't have REVERT back then. Constantinople: we have the Constantinople operations, and we register them as well. So we have CREATE2, which was very popular back then, SAR, SHL, SHR, the EXTCODEHASH operation, all those things. Getting some chats; okay, looks like people are in a good state. So now we can see how those combine. And again, it's an aggregate: we're not removing anything, because if you start removing something from a previous set, you may get yourself into a lot of trouble where you break consensus or backward compatibility. Most of the time what we see is additional elements being added to the spec itself. And so we register all those operations. We can go all the way down to Paris, and then make our way back if we want to, where we say: hey, register the Paris operations, meaning register the London operations, but then add the PREVRANDAO operation.
Okay. So that's how we compute everything that goes into the EVM. Now let me show you what the ADD operation looks like, so you can take a look for yourself. The ADD operation has the bytecode that identifies it on the chain whenever you look at code in the EVM: if you see 0x01, that might just be the ADD operation. It consumes two items from the stack, it produces one, it's got an opSize of one, meaning it consumes one byte from the code when it executes. It's got a gas calculator, and by default it's a fixed-cost operation: it uses the gas calculator's very-low-tier gas cost, which is 3 gas, depending on which hard fork you're on. Yes, that changed too. To execute this fixed-cost operation, we take two items from the stack, which we convert into big integers, because there may be up to 32 bytes of data in there, so they may be really big integers. We add them to each other. Then we have to deal with the fact that sometimes, when you add those two integers, they overflow past the word size of the EVM: if it's more than 32 bytes, we cut it so it's back to 32 bytes. Otherwise, we just wrap the result array into bytes and push it on the stack to be used by the next operation. If you have no idea what the EVM is, it's a good time to start screaming at me; otherwise I'm just going to go even harder at this. So tell me to stop if you need more clarification of what the EVM does and how it works. All right. So the EVM is a stack-based virtual machine. When you execute in the EVM, you have access to a few more constructs. You have access to a stack, which is a set of operands that can be either pushed or popped, not much more than that. I think some operations can get values without actually depleting the stack, but most of them do not.
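The arithmetic just described — pop two 256-bit words, add, wrap past the word size — boils down to modular addition. A tiny sketch with `BigInteger` (the class name `AddOpSketch` is mine; Besu's real implementation works on its own 256-bit word type):

```java
import java.math.BigInteger;

// What ADD does semantically: pop two 256-bit words, add them, and wrap
// the result modulo 2^256 so it fits back into one EVM word.
public class AddOpSketch {
    private static final BigInteger WORD_MOD = BigInteger.ONE.shiftLeft(256);

    public static BigInteger add(BigInteger a, BigInteger b) {
        // Overflow past 32 bytes is simply cut off, like in the real EVM.
        return a.add(b).mod(WORD_MOD);
    }
}
```

So adding 1 to the maximum 256-bit value wraps around to zero, exactly the "cut it back to 32 bytes" behavior described above.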
And operations work with memory as well. You can choose to offload items to memory, and some particular opcodes will only read data from memory: they'll take the data from memory and return it. Let me show you what I mean through an example. For this workshop, I built a contract that we'll talk about in a second, and if you look at it, this is what it looks like. It's using a DSL I built in a separate project; I won't bother you with that today. For example, here we're saying: push zero onto the stack, meaning go and push the value of zero onto the stack. Then call this particular operation, CALLDATALOAD, which takes, from the position given on the stack (zero), up to 32 bytes from the input being passed into this call. Then call our custom opcode, which is what we're building today, and push the jump destination of 0x11 that may be used by JUMPI to go to the JUMPDEST. So if our custom opcode returns zero, we keep going, as in: yeah, I'm not interested in this. Otherwise, we jump to the JUMPDEST at 0x11. So this is an actual, real EVM program, built using a DSL in Kotlin that lets you generate the bytecode for your particular program that's going to execute on the EVM. And this is what it looks like: 0x60 is PUSH1, 0x00 is the pushed value, 0x35 is CALLDATALOAD, 0xF6 is our custom opcode, et cetera, et cetera. So we're going to play with that; this is an exercise we'll do together. And this is what an actual EVM execution looks like if you're able to peel off all the layers. If you want to see an execution of the EVM, you can also run this in trace mode, and then all of a sudden, every time a contract executes, you'll see every change in the stack and in the execution exposed, so you can see what's going on. Okay.
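To make the byte-by-byte reading concrete, here is a tiny disassembler over just the handful of opcodes in the workshop program. The 0xF6 entry is our hypothetical custom opcode from this exercise; the rest are standard EVM opcode values:

```java
import java.util.Map;

// Decodes the little workshop program byte by byte, naming each opcode.
public class DisasmSketch {
    private static final Map<Integer, String> NAMES = Map.of(
        0x60, "PUSH1",
        0x35, "CALLDATALOAD",
        0x57, "JUMPI",
        0x5B, "JUMPDEST",
        0xF6, "CUSTOM"); // our hypothetical workshop opcode

    public static String disassemble(byte[] code) {
        StringBuilder out = new StringBuilder();
        for (int pc = 0; pc < code.length; pc++) {
            int op = code[pc] & 0xFF;
            out.append(NAMES.getOrDefault(op, String.format("0x%02X", op)));
            if (op == 0x60 && pc + 1 < code.length) { // PUSH1 carries one immediate byte
                pc++;
                out.append(String.format(" 0x%02X", code[pc] & 0xFF));
            }
            out.append('\n');
        }
        return out.toString();
    }
}
```

Feeding it the bytes `60 00 35 F6 60 11 57` recovers the program described above: push zero, load call data, run the custom opcode, push the jump target, and conditionally jump.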
So the EVM works in frames, meaning you also have the ability to have calls between different contracts, which is what people have been calling money legos in DeFi and such. So you can have APIs where you say: hey, you can call my function to get the current value of these particular assets, for example. And when you do that, you actually create a new frame where your stack is passed in. What's your question, Christos? In this workshop, this code here is a DSL that I built on the side that allows me to create custom EVM code that we'll use as part of this exercise. Oh, to see the execution: there are two ways to do this. You can use the Besu EVM tool. You can also use the Go Ethereum evm command, which I believe is installed whenever you install Go Ethereum. Let's see if it's on here. I don't have your computer, but you can get it by installing Go Ethereum. There's a way for you to run the EVM tool, and in terms of arguments, it can take some EVM code, some input, and even some state, and it lets you play with changes to that state based on how the EVM executes. If you're interested in that at all, a quick search should turn up the EVM tool reference. And you can see: okay, I'm going to run this code. I can set a gas limit for that code so you can run out of gas, for example. You can have an account, so you can check: hey, am I using all the money in the account? And you can set the sender, the receiver, the input (that's important), and the value, because transactions can actually pay money to a particular target as part of the execution of the contract. It goes deeper than I'll go now, like the JSON trace output and all that stuff. So you can really debug EVM code using this EVM tool reference. So, yeah, that's how frames work. One more thing: I'm going to show you the JSON-RPC server and how it all works together.
It's a nice little construct, because we had to assemble and extend our JSON-RPC over time. There's a factory for that, called the JsonRpcMethodsFactory, right here. The JsonRpcMethodsFactory has the ability to add methods using the namespaces we talked about way back, when we discussed the different APIs you can enable. Let me blow this up. Okay. Some of them will ring a bell right away; some are more arcane. But the best known is the eth namespace, which you use to interface with the chain and make sense of what's going on. The EthJsonRpcMethods class here, as you can see, is passed a number of domain objects and things you'll need to answer questions. A good question, for example, is: are you syncing right now, or are you at the top of the chain? What's the latest block? So blockchain queries is the object you'll use to answer queries against the state of the blockchain. The EthJsonRpcMethods class itself is just a builder where you ask it to create methods. It tells you: here's eth_blockNumber, see, it's passing in blockchain queries, for example. Let's see eth_getCode: well, you pass blockchain queries, and it looks like someone thought through the fact that maybe you don't want to expose all the code to everyone, so there may be some privacy parameters in the construction. If you go to eth_blockNumber, which is a benign one, not too big: you pass this in and it stores the blockchain queries here. And whenever a request comes in, the value of the current head is looked up and returned as a quantity, encoded into hexadecimal format unless it's been told to always return an integer; by default it doesn't.
So this is how we map out all those methods, and it's nicely done, because it removes all the cruft of dealing with HTTP and all that; you're just dealing with an RPC-specific construct. What I mean by that is you're not returning a 200 with some JSON; you're just returning an object which has everything figured out for you, so you don't need to think about the details. Any questions about that? Cool. Okay. So here's the exercise for today. It's a hefty one; I've got 45 minutes to do it with you. I happen to have done it last night, so if you want, you can cheat, run ahead, and find the workshop branch on my fork of Besu, if you're so inclined. Otherwise, you're going to suffer along with me, and I'll give you the reference to the code at the end. Here's the spiel, the business view: your manager comes to you one day and says, hey, I really want a new way to guarantee that no one unauthorized is using our permissioned network, and I had this great idea in the shower about using a shared secret that will be checked by all the clients. Clients that don't have the right secret will not be able to get the right answer when they execute. So for that, we're going to create a new opcode that takes a shared secret and checks whether the value on the stack is equal to the hash of that shared secret. We're also going to allow setting that shared secret over JSON-RPC. And we're going to make all of that work in Besu as part of this exercise; we're going to actually deploy a contract with it, run it against the chain, and make sure it performs as it should. So I'm going to take this as far as I can, and we'll see how far we get with a number of objects. The first thing we're going to do is create our very own opcode: an opcode operation that checks the first word on the stack. The word is salted...
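The "method object" shape described above can be sketched without any of Besu's real types: a method knows its RPC name and turns a request into a plain response object, with HTTP handled elsewhere. The interface and class names below are illustrative, not Besu's actual API:

```java
import java.util.function.LongSupplier;

// Sketch of the JSON-RPC method abstraction: no HTTP, no status codes,
// just name + response.
public class RpcSketch {
    public interface JsonRpcMethod {
        String getName();
        Object response(Object request);
    }

    // An eth_blockNumber-like method: read the chain head and return the
    // block number encoded as a hex quantity.
    public static class BlockNumberMethod implements JsonRpcMethod {
        private final LongSupplier headBlockNumber;

        public BlockNumberMethod(LongSupplier headBlockNumber) {
            this.headBlockNumber = headBlockNumber;
        }

        public String getName() { return "eth_blockNumber"; }

        public Object response(Object request) {
            return "0x" + Long.toHexString(headBlockNumber.getAsLong());
        }
    }
}
```

A factory then just maps names to such objects per namespace, which is exactly the role the JsonRpcMethodsFactory plays.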
No, actually, we're not doing that. If the word equals a parameter passed to the opcode operation, push one; else push zero. Yeah, that's better. So let's go in and do that. Our ADD operation was up there in the EVM module itself, and we're going to add our new operation here. It's going to be called the SharedSecretOperation. I'm going to extend AbstractFixedCostOperation, which then lets IntelliJ help us add the needed constructor parameters; it also asks me for a super call to the superclass. Sorry about that. So the idea here is we shamelessly steal from the ADD operation. We copy it over. We'll call it SHARED_SECRET; not that it matters, it's just a name we use for ourselves. We're going to consume exactly one stack item, we're going to produce exactly one, our opSize is one. Oh, we need a gas calculator. So we take the gas calculator, and we'll just assign ourselves the very-low-tier cost, always the same; we don't want to disturb anybody, right? Our opcode is 0x01... that's the same as ADD, that's not going to work. So let's do a quick cursory Google search for the EVM opcodes. If you look here, you'll see that a bunch of them are taken, but some are unused. 0xF6 here looks free, so we'll use 0xF6. Here we go. Now, when we talk about executing this operation, one thing we want is to be able to say that we have a shared secret object working with us, right? We're really missing something here: an object we could use for this. So what we're going to do is add a Supplier of Bytes — Bytes32, actually — which is going to be our shared secret supplier. And we're going to store this directly as our sharedSecretSupplier, a nice little field right here. We're going to import this class as well.
And we import the Supplier class as well. First thing we do: we pop a stack item. That gives us an item, whose class we import as well. And it's very simple, right? We say: if our sharedSecretSupplier.get() equals the item, then we push UInt256.ONE onto the stack; else, it didn't match, too bad, we push zero. I used UInt256 because it's known to be a 32-byte object, so it works with what we need, and zero is actually an empty Bytes32, right? Now I'm going to return an operation result. You know what, I don't want to belabor this too much, so I'm just going to push that and return the successful response. That's it. We just finished. Good job, everybody give yourselves a clap: we got our opcode. Yeah... not so fast, right? Now the hard work starts. First off, we notice this class is in gray: it's not used anywhere, never used. So now we have to do a little bit of work to make sure it's actually going to be used by our EVM. How do we do that? Well, I guess we're doing our own EVM definition, huh? I don't want to take Paris, because Paris is after the merge, which hasn't taken place yet. How about we do our own workshop operations? So I'm going to write: public static EVM workshop. We're working with our workshop here, right? I'm just copying this. Boom. But we're not doing Paris anymore; we're going to call an operation registry. Oh yes, so we copy this and change it here, and that's what we call. We're choosing the London gas calculator from there, and then we change this to workshop operations. The workshop operations are very simple: we create the registry and register the workshop operations. And when we register them, we copy that registration method; and instead of registering the PrevRanDao operation, which we don't know about and don't care about, it's fine...
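Stripped of Besu's Operation and MessageFrame types, the logic of the opcode we just wrote is only a few lines. This is a distilled, hypothetical sketch: pop one 32-byte word, compare it to the supplied secret hash, push a one-word result of 1 on a match and 0 otherwise:

```java
import java.util.Arrays;
import java.util.function.Supplier;

// Distilled logic of the workshop's shared-secret opcode (names are mine).
public class SharedSecretOpSketch {
    private final Supplier<byte[]> sharedSecretSupplier;

    public SharedSecretOpSketch(Supplier<byte[]> sharedSecretSupplier) {
        this.sharedSecretSupplier = sharedSecretSupplier;
    }

    // stackItem stands in for the popped 32-byte word; the returned array
    // stands in for the word pushed back onto the stack.
    public byte[] execute(byte[] stackItem) {
        byte[] result = new byte[32];
        if (Arrays.equals(sharedSecretSupplier.get(), stackItem)) {
            result[31] = 1; // like UInt256.ONE: a 32-byte word with value 1
        }
        return result; // all-zero word means "no match", like an empty Bytes32
    }
}
```

In Besu the same comparison happens inside the operation's execute method, with UInt256/Bytes32 playing the role of these raw byte arrays.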
...we're going to register our SharedSecretOperation. And notice it goes red, because now we also need a supplier. So the supplier itself is going to be a final Supplier<Bytes32> sharedSecretSupplier, and we import that. And then it trickles up to this, which trickles up to... okay. Oh, let's make this a public constructor. And then everybody should be happy now. The workshop operations method still doesn't like the supplier, so we pass that around too. Now, if you're getting really tired of this, here's a thing: the EVM configuration is going to be extremely helpful for us. The EVM configuration holds a number of domain things, so maybe it could also carry our supplier. Okay. Now we've changed the configuration so we can pass an additional field, which will flow through the whole structure of this configuration. And it's being made final as part of that, so we can't have issues later: final in Java means nobody can reassign this particular field. So in our mainnet EVMs class, we can stop the bleeding in a way, by saying: hey, hang on, we don't need this; we're going to call the EVM configuration's shared secret supplier getter. Cool. All right, done. Right, so now we have this. Oh, looks like we need a default one. So here we create a short lambda version of what the supplier would look like if we don't provide a shared secret. Okay. So now we've pushed the problem one layer up, right? Now we have the EVM configuration, and somewhere up the hill from there, we know how to pass in the shared secret. Now let's do a turnaround: I want to do the same on the JsonRpcMethodsFactory side. I really like this idea of adding to the eth namespace, which is probably sacrilege to most of the people who work on it, but it's one operation. Why not?
So one thing we're going to do is add a new eth_setSharedSecret to this. I don't need the blockchain queries; I do need a shared secret holder for this now, because I'm about to set it. So I'm going to create the SharedSecretHolder, and I'm going to imagine it's being passed in. To make it passed in, it needs to be here. And you're starting to see a problem, right? Now we're playing plumber. I want this SharedSecretHolder to be an actual object, so you can set it and also get the value from it. We're going to put that in as a private final SharedSecretHolder. Now I need to create this object, in a very particular, strategic place where it's easy for people to find. Where could I do that? Let's see, maybe in the API project. So for that, I'm going to go outside of the EVM module; here's the API project. I can probably build this in there; let's see if it resolves. Yeah, so we're going to create our very own new package, called sharedsecret. We have no shame in taking some namespace for ourselves here. And then we create a Java class called SharedSecretHolder. The SharedSecretHolder is going to be a public class, absolutely. It gets a public no-arg constructor, but we don't do anything with it, right? And it's going to have an atomic reference to a Bytes32. So here we need to create a private final AtomicReference of Bytes32, equal to a new AtomicReference. All right, we don't need that anymore, we can simplify this. So now we have just one field in this class. When someone asks for the shared secret, I can just return whatever is stored in the ref. When it's set, though, I don't know what I'm getting, so let's take a String value, for example. And we're going to do this.ref.set(...), but we want a Bytes32, so we have to hash this value.
So it turns out that Besu has a really convenient static hash method for this. We can hash the input, but we need to wrap it into a Bytes object first. So we wrap it, taking the bytes of the string using UTF-8 from StandardCharsets. Now we have a way to set a shared secret, a way to get our shared secret, and a reference holding it. Now, let's go and set it. We import that class — it wasn't imported. And, hey, I think I forgot something: we don't have an eth_setSharedSecret method yet. So let's go and create that among our methods: EthSetSharedSecret. Earlier we were looking at EthBlockNumber, and it gave me everything I ever wanted, so let's copy everything. Now we rearrange a few things and remove this nested class — we don't need the blockchain, we don't need that result. We need our SharedSecretHolder; that goes right here. We don't need two constructors, so we can trim that. And we save the holder in a field for use inside the method. Our method is definitely going to read the request context and figure out what's what. To read a parameter from a request — let's see, EthSubmitHashrate shows a way to read a required String parameter. Okay, that sounds great. So we use the same approach and say: the first parameter of this request is not a hashrate but the new shared secret. And now I set that on my SharedSecretHolder, and just return that we got it. So if you were to call this method, and it's bound properly and everything works, then all of a sudden you're able to set the shared secret. It's still giving us trouble because it needs to be imported — and now that it is, it all makes sense.
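Here's a sketch of the holder class being built above. Besu's real version stores an org.apache.tuweni Bytes32 and hashes with Keccak-256 via its Hash utility; here SHA-256 from the JDK stands in for the hash so the sketch is runnable on its own, and a plain byte[] stands in for Bytes32:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the SharedSecretHolder from the walkthrough: one atomic field,
// a getter, and a setter that hashes a String down to 32 bytes.
final class SharedSecretHolder {
    private final AtomicReference<byte[]> ref =
        new AtomicReference<>(new byte[32]);

    byte[] getSharedSecret() {
        return ref.get();
    }

    // Accept a String and store its 32-byte hash (SHA-256 here; Besu
    // would use Keccak-256 on the UTF-8 bytes).
    void setSharedSecret(final String value) {
        try {
            final MessageDigest digest = MessageDigest.getInstance("SHA-256");
            ref.set(digest.digest(value.getBytes(StandardCharsets.UTF_8)));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The AtomicReference matters because the RPC thread sets the secret while EVM execution threads read it; a plain field would need explicit synchronization.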
Okay, now we have this EthSetSharedSecret, taking this magical SharedSecretHolder. Guess what? Everything else is on fire, right? We forgot something — we didn't do any plumbing here. Oh, no. Okay, so the SharedSecretHolder gets passed in here; how can I pass it in? Oh my, let's start playing the game of passing the hot potato. These methods here are being called — you get the gist of it — up into the RunnerBuilder itself, which is also going to want a SharedSecretHolder. And the SharedSecretHolder here becomes part of our little method. See, I think it's finally appeasing itself, right? But then we opened up four different places where it's not happy. And it's not happy because the builder is being built in different ways, in different places. There's nothing we can do about that but bite the bullet, to make sure we get to where we need to be. Thankfully, there's existing stuff in there, like the metrics system, which is an actual field of this. So what we do is say: hey, the SharedSecretHolder can be set on the Runner's builder, right? And then we just add it every time, and keep going. Notice I don't care about indentation, I don't care about formatting, because I'm going to run Spotless before I commit — hopefully, if I don't forget — and it will reformat the code into something readable for people after I'm done. Okay, so my SharedSecretHolder is great, but it's not actually wired to anything at all. So now it's time to roll up our sleeves and add it to the RunnerBuilder here. So — uh-huh, uh-huh — we add the sharedSecretHolder setter that returns the builder. Now we have this done, but we don't know where this is called. It's really awkward. Let's find a usage. So it looks like it's run as part of the thread.
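The "set it on the builder" step above is the usual fluent-builder pattern. A toy sketch (WorkshopRunnerBuilder is invented for illustration; a String stands in for the holder object):

```java
// Toy builder illustrating the setter added in the walkthrough: store the
// value in a field and return this, so calls can be chained with the
// builder's other setters.
final class WorkshopRunnerBuilder {
    private String sharedSecretTag;

    WorkshopRunnerBuilder sharedSecretHolder(final String tag) {
        this.sharedSecretTag = tag;
        return this; // fluent style, like the rest of the builder
    }

    String build() {
        return sharedSecretTag;
    }
}
```

Every call site that constructs the builder now has to supply the new value, which is exactly why "four different places" lit up at once.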
This is the node runner for acceptance tests. Oh, we don't want to touch that, right? Not now — we're playing with the real code here. But maybe we can do the same thing in the other spot, for example. In that case we pass in a SharedSecretHolder that somehow we should have by now. And for that we can look at, for example, a private final RpcEndpointService, which is looking good — so maybe we can do the same thing. And it's final, which is a saving grace in a sense: we can actually see whether it's going to work or not, right? It's not going to work here as-is, because this isn't the spot where it gets constructed. So we look for where that other component is created — and it's created in the constructor of BesuCommand, which is great. So now we can do the same. Ta-da! All done, right? Right? Nope, not in the slightest. Now we have to wire our EVM into a protocol spec, to make sure it's actually being used for the right fork. Who wants me to skip ahead to the finished state? Or do you enjoy watching me struggle? All right, I'll keep going. So remember, in our MainnetEVMs we defined our workshop EVM — our shiny new EVM with its own opcodes, all ours. That's awesome, right? Finally this EVM is ours; we can just print money all day, right? Okay. How do we wire it in? There must be some safety rails, right? Gosh, yes. Up front we talked about protocol specs, right? The mainnet protocol specs tell us there's such a thing as a definition you must abide by. You don't just create a bunch of opcodes on your own — you also have to tie them to a hard fork. So for example, if you look at the Paris hard fork, it should be at the very bottom. I'm making my way there.
The Paris definition here actually says: hey, I'm going to use the Paris EVM. So, gosh, I guess we're up — we have to do the exact same thing for our own workshop definition. We're going to call it the workshop fork. I mean, I don't know — sounds awesome to me. And we just replace the Paris EVM with our own awesome version. It's got its own definition here, and we can see nothing's calling it, so it's good for nothing. Almost. Let's see where the Paris definition is being called: it's called in the MainnetProtocolSpecFactory. Awesome. Let's do this: we just rename a bunch of things, so it's called the workshop definition instead now. And it's still not being called. Oh no. So we just keep walking our way up: there needs to be such a thing as creating a workshop definition that gets assigned a workshop block number. Okay. But this isn't going to just appear on its own — it needs to be implemented. And to implement it, we implement it as part of our GenesisConfigOptions, which I've been bashing you over the head about as the way to configure Ethereum clients. So now all of a sudden we have to do this, but it's not implemented in the concrete classes, so those start to fail. Let's take a look at what that looks like. This is the Paris block; this is my workshop block number. How is that workshop block number going to be fetched? It's fetched from the genesis JSON file by looking for the Paris block — that's not what I want. Okay, so: workshop block. Notice it's case-insensitive. Also, looks like they went overboard and have several different ways of looking up the Paris block — maybe there's a pre-merge fork block or something. No, we're not going to have both here. And maybe we just return that optional long. Okay. And now it's all good, right?
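The lookup just described — reading a fork block number from the genesis config by a case-insensitive key — can be sketched like this. The workshopBlock key is the workshop's own invention, and WorkshopGenesisOptions is an illustrative stand-in for the real GenesisConfigOptions, which exposes fork blocks as optional values:

```java
import java.util.Map;
import java.util.OptionalLong;

// Sketch: resolve the workshop fork's activation block from the parsed
// genesis "config" section, matching the key case-insensitively as the
// walkthrough points out Besu does for fork-block keys.
final class WorkshopGenesisOptions {
    private final Map<String, Object> config;

    WorkshopGenesisOptions(final Map<String, Object> config) {
        this.config = config;
    }

    OptionalLong getWorkshopBlockNumber() {
        for (final Map.Entry<String, Object> e : config.entrySet()) {
            if (e.getKey().equalsIgnoreCase("workshopBlock")
                && e.getValue() instanceof Number) {
                return OptionalLong.of(((Number) e.getValue()).longValue());
            }
        }
        return OptionalLong.empty(); // fork not configured: never activates
    }
}
```

Returning an empty OptionalLong when the key is absent is what lets a genesis file simply omit the fork and get the old behavior.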
We should be able to run this, and it should give us a chain from a genesis file where we can play with the actual workshop fork, and everything is ready. Let's do one more thing in BesuCommand: I'd like for us to have a way to set an initial shared secret. For that, I'm going to go to the EVM options. And if you notice, something is not quite right here — it's not actually compiling. Guess what? We forgot the supplier. Oh, silly me. Okay. So what does that mean? It means this EvmConfiguration can no longer be built here, because to build it here we'd have to involve the SharedSecretHolder, and this is not a good place for that decision to be made, right? So what we're going to do is create an EvmPartialConfiguration class — one we didn't have before. It copies EvmConfiguration, which, importantly, is not part of the CLI; it's part of the EVM, so it should not know anything about the SharedSecretHolder or anything like that. We're going to make it super simple here: there's no default for that, there's no shared secret, and there's no supplier. And this is not the right name, so we rename it. We remove this, and before you know it, we have this partial configuration, which is now what the EvmOptions class deals with, and it's stored right here. Okay. Oh, so that's something — no, we're good to go again. Oh, it's because this is a generic class; it expects the EvmPartialConfiguration to be returned. So we just kick the can a little further down the road, right? Okay. So now I'm going to set up a command-line option that's going to be a secret just for us, and I'm going to name it the shared-secret command-line option. So — taste that — I'm going to copy this and paste it here. Okay. So we've got a shared secret, set as a String. Notice it's never assigned directly — it's assigned through picocli, which is the library Besu uses to set up its arguments.
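In Besu the option would be a picocli @Option-annotated field; as a self-contained stand-in, here is roughly what that parsing amounts to by hand. Both the flag name --Xshared-secret and the class name are assumptions for illustration, not the real CLI surface:

```java
// Hand-rolled stand-in for what picocli does with an @Option-annotated
// String field: scan the argument list for our hypothetical hidden flag
// and capture the value that follows it.
final class SharedSecretOption {
    static String parseInitialSharedSecret(final String[] args) {
        for (int i = 0; i < args.length - 1; i++) {
            if (args[i].equals("--Xshared-secret")) {
                return args[i + 1];
            }
        }
        return null; // option not given; the command would fall back to a default
    }
}
```

With picocli, all of this loop disappears behind an annotation; the field simply "appears assigned" by the time the command runs, which is the behavior noted above.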
And the shared secret itself is stored here. What we're going to do is keep it as a plain String in here — so we don't defile this object — and call it the initial shared secret. Then we create a field for that, and then we've got it. Okay. Now in BesuCommand, everything should be on fire, because we're now playing with the EVM options along the way, and I'm starting to see a little bit of an issue here. So now we can do this: new EvmConfiguration, where I take the EVM partial options first — I'm creating this object from them. And it also wants us to pass the shared-secret holder — actually it doesn't want the whole thing, it just wants a supplier. There's one more thing we need to do: when we create the SharedSecretHolder, we also want to set its initial value. So look inside BesuCommand right here, and say: sharedSecretHolder.set with the EVM options' domain object's initial shared secret — beautiful. All right, so that was a lot of plumbing, right? In less than an hour, I made it happen. So let's cheat a little bit. We're going to use the next ten minutes to actually test this new opcode, and we're going to run it. With the right command, we'll be able to interact with the chain and make sure it's working as it should. So in the terminal, I'm going to change my current branch. I do a git status, and I see that I built a bunch of code over the last half hour, interestingly. So what we're going to do is git stash as much of it as possible, and I'm probably going to have to remove the rest — remove this, remove this — and stash the remainder.
All right, now I'm going to work with the workshop branch, which I have in a folder over this way as well. And we're going to do a ./gradlew assemble. In this new version, the EvmPartialConfiguration is still there, we're still setting the initial shared secret, and you can still see the shared-secret operation we just built — it's very much there. The only thing I changed is adding a bit more logging about what's going on, which is useful for our testing, so we can see that this thing is actually used. Everything else is the same. We even have the new set-shared-secret operation, which allows us to set the shared secret, and — as I'm about to show you — it's exposed under eth_setSharedSecret. Now it's compiling, merging, regenerating the zip file for all the stuff we've done. And as you can see, it's compiling all the modules that have been changed — not all of them; most of them have been impacted in some way. Some, such as the consensus modules, we didn't touch, so it flies through them very quickly without recompiling. And now we're building our distribution as a zip. Here we go — it builds the EVM tool we mentioned earlier as well as the zip file, which is a separate distribution. And now, maybe to make it a bit easier for everybody to read and see what's going on, I'm just going to reset and use this particular terminal. So first, let's remove the old Besu. I can see the target zip file, unzip Besu here, and move inside the Besu folder. Then, in my instructions for the workshop, I can see that we're creating a contract that uses this new 0xF6 shared-secret operation, and that produces this bytecode. And now we want to wrap that into a contract so it can be deployed as actual code that runs.
So to do that, we're going to deploy it as the result of executing the transaction that creates the contract. We're going to start Besu — and we run it with a very specific genesis file. That's really key and worth attention here. In the workshop folder, if I do cat genesis.json, I can see that we have a workshopBlock of zero. That's our own hard fork, at block zero, changing the behavior of the chain to use our workshop fork. We're also using some known accounts — if you notice, they are a copy of what's in dev.json — and we're using a fixed difficulty, so we can actually build and mine our own blocks with a very low difficulty here. So now we're running Besu: with that genesis.json, with HTTP RPC enabled, allowing localhost as a host so it works with localhost and MetaMask, and using our hidden CLI argument so the shared secret — "the quick brown fox" — is set by default. We're also enabling mining with the CPU miner, with a miner coinbase set to a known account. Okay: no existing database, so we're starting up fresh, brand new. And you can see Besu is already starting to produce its own logs. Okay, now going back to our instructions: we're going to deploy a contract. Our contract is right here. And we first want to sign the transaction that deploys it. So in my Besu workshop folder, I have a little bit of pre-written code that takes our contract, uses the private key of a particular account — the same one that's in genesis.json — generates an EIP-1559-compatible transaction, signs it, and outputs its transaction hash and the signed transaction. So if I do node deploy.js, you'll see that I have a signed transaction here — this big chunk of hex — and then here's the hash I should expect out of it.
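A genesis config along the lines described above would look something like this. The workshopBlock key is the fork switch added in the walkthrough; the chain id, balance, and account address here are illustrative placeholders, not the workshop's actual values:

```json
{
  "config": {
    "chainId": 1337,
    "workshopBlock": 0
  },
  "difficulty": "0x1",
  "gasLimit": "0x1fffffffffffff",
  "alloc": {
    "<dev-account-address>": { "balance": "0xad78ebc5ac6200000" }
  }
}
```

With workshopBlock at 0, the workshop fork — and therefore the 0xF6 opcode — is active from the very first block, and the low fixed difficulty lets the CPU miner produce blocks immediately.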
Now I'm going to take this signed transaction. Let's take a look at it here. Then I go to my README, for example. So we're going to deploy the contract: I copy this content and curl it to our local Besu using eth_sendRawTransaction. Good result: the result is the hash, and the hash matches what we expected, which is great. We can get the receipt matching the transaction, so we can see whether it went through. Oh, sorry — wrong transaction hash; let me fix that. And that gets us the contract address we can use to work with this particular contract. Let's make sure it's been deployed properly. Okay, we can see some code here and it does match what I'd expect to see — there's our 0xF6 bytecode in there. And now we can call the contract. So we call it without any arguments, no data whatsoever, and we get a result of 0x00. Interesting. Now let's do it again, but this time we pass the hash of "the quick brown fox" — trust me on that one, that's the correct hash; I can show you how to compute it yourself. I get a result of 0x01. Excellent. Now, I'm not sure this is all working correctly, so let's change the shared secret. I pass in params with the hash of "Besu is awesome". Success, great. Let's try it again: the hash of "Besu is awesome" is 0x2544, blah blah blah. Let's try with the old hash first. I get zero now. Oh no — oh, yeah, I know what's happening: I need to change the contract address each time. Now I call it with the new hash, and it returns one. So here we go: we were able to extend Besu and create a new opcode. All the code for this is on the workshop branch under my fork, and I'll make that available to you so you can play.
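The curl call above just POSTs a standard JSON-RPC 2.0 envelope to Besu's HTTP endpoint. Assembling that envelope can be sketched in Java (the hex payload below is a placeholder for the signed transaction produced by deploy.js, not real transaction bytes):

```java
// Build the JSON-RPC request body that the walkthrough's curl command
// sends to Besu's eth_sendRawTransaction endpoint.
final class JsonRpc {
    static String sendRawTransactionBody(final String signedTxHex) {
        return "{\"jsonrpc\":\"2.0\",\"method\":\"eth_sendRawTransaction\","
            + "\"params\":[\"" + signedTxHex + "\"],\"id\":1}";
    }
}
```

On success the node echoes back the id and returns the transaction hash as the result, which is why matching the hash against the one deploy.js printed confirms the transaction was accepted.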
Any questions, any thoughts or interpretations, anything you'd like to know from what we've been learning? See, it isn't that hard, right? You can extend Besu anytime you like. You can create your own opcodes, make them part of it, and even map them to a hard fork, so you can extend it very meaningfully. Anything I missed, anything I glossed over, in the last two minutes we have at our disposal? A question on Discord: "This was great, thank you." I really appreciate that more than you know. Yeah, that's great to hear — I'm glad this was really helpful for people. And thank you, Antoine. This is really nice, and it's going to live on: as soon as this is over, it will show up on the YouTube channel, so it will be a resource people can check out going forward. So thank you for doing this. You're very welcome. I do wonder: was it useful to see code? Was it useful to see me type away? Okay, you got lost during the plumbing, yeah. There's definitely going to be a replay available as soon as it's done here. Yeah, let me grab the link. It will be on YouTube, and I'm available: if you have questions, I'm on Discord. You can go to the Besu workshop channel and keep asking questions there. "How can I get help to deploy on-premises?" Well, there are multiple companies out there that provide professional support and help when it comes to Besu. I believe they're listed among the contributors, or on the Besu website. You can also ask questions on the Besu channel on Discord; we'd be happy to help you on a best-effort basis. So it all depends on what type of help you're looking for. Exactly, David, thank you. Yeah, the vendor directory — you can go look, and many partners have been working with Besu as a thing to deploy. I would not get myself involved too much in supporting people commercially; I'm just not that person. And Antoine, I need to drop here in a minute, but thank you again.
And yeah, for everybody who joined, feel free to watch the replay. Hopefully that's a useful resource, and as we've been saying, the Zoom channel will go away as soon as this is over, but the Discord channel remains — so join us there and let's keep the conversation going. Thanks a lot. It was my first time doing four hours; hopefully you took something away from this, and I look forward to meeting all of you one day. Great, thanks everyone. Thank you. Bye.