So we have three hours together, which is a long time, so if you need to step out, whatever you need, the restrooms are here, coffee over there. We're going to talk about Besu, and I'm going to try to pace it and make it come to life for you as much as possible. We have different audiences today: we have people who are dapp developers, people who are more product-centric, people who want to operate the software; there are very different concerns. So with that out of the way, we're going to introduce what Besu is, at a high level, so that people who have had no interaction with a blockchain client before can understand what this whole thing is doing. We'll talk about how to operate that software, and we'll go over ways we could do meaningful things with it. Hopefully, out of this whole session, we get a bit of a jumpstart on the session this afternoon, where we talk about what we want to do next and about the items we can take on. All the materials I'm going to talk about today are available on GitHub; I'll give you the handle later. I gave the first version of this workshop back in July over four hours, so we're going to trim the agenda a little bit, and in some places we'll skip the hands-on exercises, because they would take too much time today. But if you were to do this at home, sitting in front of your machine right now, you would want to get all those things going: Java, Git, Docker, an IDE of your choice, and the ConsenSys folks have done a great job with the Quorum Dev Quickstart if you're trying to create a simple development environment at home, or you want to stand up a whole network on your laptop.
A bit about me: I work at Splunk, where I'm an engineering manager. Before Splunk I worked at ConsenSys in a past life, where I helped with Besu, but also with Orion, Quorum, and a few other things. In open source I've been around for a long time, starting at Eclipse, where I'm a foundation member. I'm now involved in the OpenTelemetry project, where I'm one of the approvers for the Collector and Collector contrib. As mentioned, Splunk is my employer; thanks to Splunk I'm here today, and thanks also to the Hyperledger Foundation for sponsoring this trip, so I was able to come here in person. Splunk is a monitoring solution aimed at the enterprise: if you have data coming from your machines and you'd like to make sense of it, that's where you end up using Splunk. If you have any questions about Splunk, talk to me in the breaks. Okay, Ethereum, which is why you're really here. Ethereum is one of the largest blockchains by market cap, started around 2014, and it has multiple different clients. That's very different from Bitcoin: Bitcoin centers on one client, and instead of a single application, Ethereum is a programmable layer where you can execute smart contracts with the EVM. That was a major innovation back then. Then there is enterprise Ethereum, which is what people here in this room, in existing businesses, care about: taking the client to permissioned deployments they can maintain themselves. Everything is permissioned by default, with security coming from audits, plus data management, data sharing, privacy, all the things an enterprise needs. The Enterprise Ethereum Alliance, for example, was one of those efforts, and you can see some of the actors there, some of them in this room, who participated. So there was an influx of folks interested in doing work on top of Ethereum, but with a slightly different set of goals.
So, first there was Quorum. If you haven't heard of it, it's a fork of Geth, which is a very popular Go client for Ethereum. It was built by J.P. Morgan, and eventually ended up owned by ConsenSys through various business deals and decisions. We talked about its private enclave: it can host the data for private transactions, which allows a shared-nothing architecture, so parties don't have to share data with everybody else. It was cool as well that it had new consensus algorithms, things like Raft, and IBFT, which gives you different flavors of consensus that allow a set of validators to validate what's going on. And then there was Pantheon, which became Hyperledger Besu in 2019 as a contribution from ConsenSys. If you're happy with that, we'll just keep going. Okay, so we're still in the introduction at this stage. Now, to Hyperledger itself. Hyperledger, I think it was mentioned yesterday, right? This is a view of what Hyperledger is trying to be: it's not just four projects, not one particular stack, but fostering an ecosystem of projects around blockchain in general. So you may find your favorite project in this picture, and if not, it might be because it only started a few months ago, but there's quite a lot of project activity in Hyperledger itself. Besu basically takes its place as part of the Hyperledger ecosystem, which is why it makes sense to have all this together. At a high level, we're now going to talk about what an Ethereum client is, and I think this is going to be a useful set of terms and definitions whether you think about this at a product level or you operate it. So first off, when we talk about Ethereum clients, the word "client" is a little misleading, because a client is actually both an agent and a server: it's serving data as much as it consumes it.
One thing that is true of every Ethereum client so far, though that may be changing, is that it runs as a single process on a machine. The idea from the very beginning, when they started building these, was that it should be able to run on a laptop. A client is independent, meaning it keeps its own copy of the blockchain, it doesn't trust anyone, and it can validate blocks, submit transactions, and query the chain, all from one piece of software. These were the original requirements for Ethereum clients, and they're part of the foundation of why the client is built the way it is; we'll dig into that in a second. So Hyperledger Besu is a complex software stack, and this is a diagram of all the ports it opens. Some of those you should never have to use, thankfully, but these are the interfaces it exposes. The most famous one, of course, is the P2P port, 30303, where it talks to other agents. If you want to connect to this node for RPC in and out, you can connect over HTTP on 8545, or WebSocket on 8546. You can also use it to mine by connecting over a persistent TCP connection, using the Stratum protocol, on port 8008 by default. And in addition, it can connect out to an ethstats server over WebSocket, so it can send data to it; the ethstats server itself is just a web front-end where you can see what's happening with your nodes. Another thing to note is that Besu is a database. Going down the stack a little, we mentioned yesterday that it uses RocksDB for storage, and it has multiple stores. We have all the blocks that have to be stored on disk, and all the transactions have to be stored as well; these are the actual transactions. On top there's a trie structure, so that everything is cryptographically verifiable, making sure we have a scheme that can attest to the blocks.
And then we also have all the account state, which has to be stored on disk too. So all this information has to be stored somewhere, across different files and different stores, and this, by itself, is complex: there can be multiple databases, multiple tables. Besu is also a transaction pool: it sorts pending transactions according to its own rules, gas price among them, everything the software needs to think about to build a sequence out of incoming requests from different parties. And Besu is also a network agent, as I mentioned, a fiercely independent one, and it requires some configuration. When it first connects, it has no idea who it's going to talk to, so its initial configuration has to be exact, so it can judge any connection coming in. The first thing it asks you for is a genesis block and a set of consensus parameters, so that when it connects to others it's able to judge the blocks being shared. And the last thing it needs is bootnodes, right? When you connect over the network, you need to be able to find peers, and there are a couple of ways to do that. If you're an enterprise, maybe you just want static peering, where you know exactly who to connect to and you have a list of them. But if you're going to go on mainnet, then you go to a bootnode, ask it for peers you can connect to, and keep asking until you have enough of them. This discovery runs over UDP: first you connect to the bootnodes, then you connect to the peers they pass on, so you start getting a bit of a network effect. You use a Kademlia hash table to keep track of all of those. Now, I assume that, rather than discovery, because you're enterprises, you all do static peering, no? No? Okay, you do use discovery. Interesting.
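Back to that initial configuration: as a rough illustration, here is a minimal Clique-style genesis file of the general shape Besu accepts. This is a sketch, not a production setup; the values are placeholders, and the signer address is the well-known Besu dev-network test account.

```shell
# Illustrative only: a minimal Clique-style genesis of the shape Besu accepts.
signer=fe3b557e8fb62b89f4916b721be55ceb828dbd73
# Clique extraData packs 32 vanity bytes, the initial signer addresses,
# and 65 bytes of proposer-seal space, all hex-encoded.
extra="0x$(printf '0%.0s' $(seq 64))${signer}$(printf '0%.0s' $(seq 130))"

cat > /tmp/genesis.json <<EOF
{
  "config": { "chainId": 1337,
              "clique": { "blockperiodseconds": 5, "epochlength": 30000 } },
  "difficulty": "0x1",
  "gasLimit": "0x1fffffffffffff",
  "extraData": "${extra}",
  "alloc": {}
}
EOF
```

Every node in the network must start from the exact same genesis file, which is what lets it judge the blocks its peers share.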
Oh, sorry, I heard you and understood. So, just for your enjoyment, suppose you are using mainnet. This is the way it used to be, right? You would connect blindly and just ping nodes back and forth until finally someone answered, and eventually you'd build up a collection of nodes known to you. You use a hash table to keep different buckets of nodes based on the keys of those nodes, so that you can share some of them with your peers, but not all of them at once. You get something like the notion of islands, where different parts of the network mostly talk among themselves. But that's now kind of old. There's a newer discovery mechanism using DNS: a tree of node records kept in DNS TXT records, updated from the Foundation's servers every couple of hours. You can actually see it on the internet if you look it up: there's a tree, rooted in a public key, of all the nodes you can get to, and using this TXT approach you can walk it, securely get all the node information, make sure it's valid, and then connect to them. On the other hand, most of the time the very enterprise-looking thing to do is to go with static peering. Static peering is the safest possible approach, because you define exactly who you talk to, with the URL and public key of each of them. Here, this is what we call the enode URI, where the first element is the hex representation of your node's public key; that's your identity as a node. On first boot, Besu creates an identity for itself, and over time that identity accrues a reputation as the node expresses itself on the network, right? You can also generate those keys before you start the network, which is kind of important in consortium settings, because you want to have them defined ahead of time.
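To make the shape of that identity concrete, here's a quick sketch that builds and picks apart an enode URI. The 128-hex-character public key here is a made-up placeholder, not a real node:

```shell
# Hypothetical node key: 64 bytes => 128 hex characters (placeholder value).
pubkey=$(printf 'a1%.0s' $(seq 64))
enode="enode://${pubkey}@10.0.0.5:30303"

# Pull the URI back apart with plain parameter expansion.
id="${enode#enode://}"; id="${id%%@*}"    # node public key (the identity)
hostport="${enode##*@}"
host="${hostport%%:*}"                    # IP or hostname of the peer
port="${hostport##*:}"                    # listening port, 30303 by default
```

A static-nodes list is just a collection of these URIs, which is why generating the keys ahead of time lets a consortium write its peer lists before anything boots.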
The key is followed by an @ sign, as you can see, and then you have the host and the port, and you can define a discovery port and a listening port separately. It's 30303 by default, but you can change it. Where do you get your own enode from? Originally you could only fish it out of the discovery messages, and eventually, because people running tests kept needing it, it was exposed over the APIs as well, so you can just ask the node for it; the ops folks said, actually, this is what we want, so it's right there. But this is a bit of a departure from the original intent, right? Originally it was: you're a network client, welcome, just join the network, you can connect to anything you want. You find a peer, and you say hello. When you say hello to another node, you say: I happen to be on chain ID 1337, and I can talk the following subprotocols; I can talk eth, I can talk whisper, all sorts of messages, and you may also create your own subprotocols, which is a lot of fun, right? And the other node replies: great, I'll peer up with you. Or it discovers you're not on the same chain ID, and says: we're not on the same chain, so I'm dropping this connection right here. So that's the standard behavior for the agent: get peers, find them, connect to them, and maybe be told, actually, we're not on the same chain. Now, looking at this software from the point of view of its life cycle: when you first start your Besu client, the first thing it does is look at the chain, see if it has any data stored locally, then get new blocks from peers, using the connections it has, until it eventually reaches what it thinks is the head of the chain.
You never know for sure that you have the head, but if, out of everybody you talk to, nobody has a better block than you do, then you must be at the top, right? That's how you reach the head of the chain. At that point, it's safe to start trying to mint new blocks, which is what Besu will do: it keeps an idea of whether it's at the head or not, and when it is, it allows itself to mint new blocks. It will also participate in propagating those blocks, because that is necessary to make sure the network keeps flowing, right? It's really about the survival of the network as much as anything else. And then, if it finds out that it's behind, it needs to get synced again, fetch new blocks, and around it goes. At any point in time it may step back and say: oh, I'm behind. Stop everything, don't mint new blocks, put incoming transactions in the pool, wait a little bit, and start getting blocks again. Usually, while this happens, it continues accepting transactions, and the transaction pool continues to fill. It also has a mechanism on the side, mentioned in this diagram, where it gossips all transactions and sends them on to other nodes, to maximize the odds that the mining nodes receive them. So you might be behind, but you have a local client telling you there are transactions coming in; in that case you gossip them onward, and maybe one of the nodes that is able to mint will include them. For example, in an enterprise network you might have five nodes that can actually create blocks, right? But you may have 200 nodes on the network in total. Most of them are never going to mint, but through gossip, the nodes that can mint will eventually receive the transactions and put them into blocks from their own transaction pools.
You also do the bookkeeping where, every time you see a new block, you check whether any of the transactions you had in your pool are inside that block, because then you can drop them from your pool. Okay, no surprise: Besu is software running 24/7 at all times, trying to catch up, and when it's caught up, trying to send data; that's all it does, right? So then, consensus. Here we have not one mechanism but several, and you have choices. You can pick all sorts of different approaches to deciding how blocks get created, based on what actually suits your team. For example, by default, if you're on mainnet, you'll be using proof of work, where everybody races everybody else: each node has a miner on the side trying to solve the puzzle that lets you create a block whose hash is below a certain threshold, a threshold derived from the difficulty the network wants to see. That's the default mechanism for public mainnet. A question on that? Ah, proof of stake; I'm not on proof of stake yet, I'll talk about that. That's another consensus mechanism that could, in principle, be available for consortiums; I don't think we've seen that yet. If anyone here runs proof of stake in an enterprise network, tell me about it, but I don't think it's been used in the enterprise just yet. That's a good question. So there are others, right? The Geth developers, when they started doing development on Geth, added a little toy consensus mechanism that would allow them to run tests. Because proof of work is expensive: you're spending CPU time trying to solve math all the time to get the next block, which is not ideal if you're just trying to run a testnet where you don't have that much money to burn.
So the alternative is something called Clique, which is proof of authority. What you do is you declare ahead of time who is going to be allowed to mint blocks, and then you allow transitions later during the life cycle: the existing signers can vote, saying, here is a new authority that will be allowed to mint blocks going forward. So you can pass on the baton, if you like. Now, remember that the node identity we've shown is key-derived; that identity is what defines who is able to mint. IBFT is a different mechanism, where you have a number of validator nodes that can update the rules, and some of them propose blocks, a subset of the total, round by round. And then proof of stake is when we delegate the actual consensus layer, the decisions of which blocks to include, to a different set of machines, which operate based on stake, on how much money is at risk, to keep everyone honest. One catch is that if you do staking in an enterprise setting, the stake needs to actually mean something, which probably requires real lawyers to make sure there's a genuine, built-in incentive behind it. Okay, there's one more thing that Besu does: it's an API server. This is our JSON-RPC server; we use it with Truffle and any of the tools that let you interface with clients. Most of the communication is done over JSON-RPC, and most of the interaction is around calling or sending transactions: calling to get data out of the chain, or sending transactions to change state. The thing is, there are multiple ways to do that. You can do it over HTTP, and when you do, you can batch requests, which is very useful in terms of performance; you can use it with MetaMask, and so on. You can also use WebSocket, which is great for a different set of use cases around subscriptions: you can ask to get notified in real time whenever the client sees a particular set of events.
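As a sketch of that HTTP batching: a batch is just a JSON array of standard JSON-RPC calls sent in one round-trip. The curl line assumes a local node with RPC enabled on the default port, so it's left commented out here:

```shell
# Two JSON-RPC calls sent as one HTTP round-trip (a batch is a JSON array).
batch='[
  {"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]},
  {"jsonrpc":"2.0","id":2,"method":"eth_chainId","params":[]}
]'

# Against a running node with --rpc-http-enabled, you would send:
# curl -s -X POST -H 'Content-Type: application/json' \
#      --data "$batch" http://localhost:8545
```

The response comes back as an array too, with each result matched to your `id`, which is what makes batching cheap compared to many single requests.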
So you can react to events as they happen, instead of polling over HTTP in a loop, which is expensive on CPU; if you're doing that, switch to subscriptions. There's also IPC, which is just a file socket on your server. That's nice because it's secure by default: it doesn't go over the network at all, and it still lets you connect to your node and do all sorts of JSON-RPC, but you're no longer exposing HTTP, because you have to be local on the machine, which is probably ideal from a security standpoint. That was pending to be merged at the time; it was a community contribution. And one thing that's not well known is that Besu has a GraphQL interface, which is also supported. The team did a lot of work to get that GraphQL API into good shape, and it's quite stable now. What it allows you to do is go deeper in what you ask for. What I've seen GraphQL used for is people who want to skip the round-trip overhead, or dig deeper: read storage slots, for example, or understand exactly what is being stored on disk, because you can make very precise queries. It's useful for automation, for traders, for people who have about a second to react to things. So instead of asking over JSON-RPC, in several calls, where things are at and what the value of a storage slot is, you go to GraphQL, ask what's in storage slot 0x00 of this contract, and get the value out in one precise query; it's actually faster. Question: does GraphQL do streaming as well? No, no streaming there; for streaming you use the WebSocket subscriptions. So, to recap, we have the client being a database, an agent, an API server, and a queue. That's way too much.
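As a sketch of those precise queries: Besu serves GraphQL when started with --graphql-http-enabled, on port 8547 by default. The contract address below is a made-up placeholder, and the curl is commented out since it assumes a running node:

```shell
# One precise query: latest block number and hash, plus one storage slot
# of a (placeholder) contract, in a single request.
query='{"query":"{ block { number hash account(address: \"0x0000000000000000000000000000000000000042\") { storage(slot: \"0x0000000000000000000000000000000000000000000000000000000000000000\") } } }"}'

# Against a node started with --graphql-http-enabled (default port 8547):
# curl -s -X POST -H 'Content-Type: application/json' \
#      --data "$query" http://localhost:8547/graphql
```

Compare that to JSON-RPC, where the same answer would take an eth_blockNumber, an eth_getBlockByNumber, and an eth_getStorageAt, each its own round-trip.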
If you're anyone who does IT in any company, you're screaming at me right now, because you have four different servers in one, with four different service levels; if you've done any ops work, you know your queue will have a very different set of resilience requirements and monitors than your database. So it's good to know that when you run Besu, you're running all of this together, and when you operate and upgrade it in prod, you have to take the parameters of these different systems and think about them as one. So you have a lot more constraints. A question I get is: you mentioned the EVM, is that not a big part of this? And the thing is, the EVM is everywhere. It's involved at every step of the way: validating blocks, updating the world state when you process a transaction, but also when you just make a call and want to get some information that isn't stored directly, it will execute the EVM. So the EVM sits inside every one of those components we've discussed. And EVM here stands for Ethereum Virtual Machine. Any questions so far? Audience: So this is kind of a naive question, but even if you're just talking about sending data, do you need the EVM or not? In other words, what do you mean, it's in every one of them? Okay, so I'll show you. Let's say you've got a block coming in, right? You want to make sure the block is valid. So the first thing you do is execute the block and see if the resulting state matches what is in the block header itself. If it does not match, that means that all the information you have in your database so far, plus the changes that block X+1 says it's going to make, produce a final set of changes that doesn't match what the header claims. Then you have a mismatch: the block isn't valid, and it has to be rejected.
And when you update your world state, because the block is valid and you want to bring your information on disk up to date, you have to run all those transactions and then store the result of their execution in the database. This is a diagram by Lucas Saldanha, who is a committer on this team as well and a ConsenSys employee. So, for example, when you get a block, the block itself has a header. The header carries a number of fields which are cryptographically important, because they commit to the state of the blockchain at the time the block was generated. For example, there's the state root, which represents the world state: all accounts and all their storage entries. So, take my own personal account: I have my balance in there, and my current nonce, which is the number of transactions I have sent from my account, and a number of storage entries, which are cryptographically rolled up into a storage root hash on my account. That is then rolled into the world state trie, which is itself also rolled up into the state root hash. It's a Merkle Patricia trie, so I can actually verify that everything matches at the end of execution. So if there is any difference, I can take that block and all its transactions, which are there in the body, and say for sure that the state root does not match the result of execution; the block is bad. That's how we cryptographically verify that everything makes sense. Right? Does that make sense? Yeah. Okay. And it goes further: when you execute all those transactions, you have the receipts of the execution of those transactions, and you also roll all of those up, into the receipts root hash.
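Here's a toy sketch of that rollup idea. Real Ethereum uses Keccak-256 over a Merkle Patricia trie; this sketch just uses sha256sum over a flat pair of made-up "accounts" to show the principle that any change to any account changes the root:

```shell
# Toy rollup: not Ethereum's trie, just the hash-of-hashes principle,
# with sha256 standing in for Keccak-256.
h() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }

leaf_a=$(h "account A: balance=10 nonce=3")
leaf_b=$(h "account B: balance=20 nonce=0")
root=$(h "${leaf_a}${leaf_b}")          # the "state root"

# Change one account by the smallest amount: the root changes completely.
leaf_a2=$(h "account A: balance=11 nonce=3")
root2=$(h "${leaf_a2}${leaf_b}")
```

If a peer's block header claims a state root that differs from the one you compute after executing the block, you reject the block; that's the whole verification story in miniature.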
That way you can make sure that, for that block, even though you got the right state at the end, the receipts also match, so a different execution path can't somehow have produced the same state; that's belt and braces. Audience: Yes, I guess as a follow-up, to make sure I understand: if in the block one of the transactions is a token transfer, without all this contract execution, you don't want to do it all again, because you're just doing a balance change? Well, you are changing the balance of the sender and the receiver, and there's also a gas cost, 21,000 gas by default on every transaction, so you end up touching accounts anyway. That's a change in world state, so you still end up having to cryptographically verify everything; it's not really cheaper. The difference is that a contract call can make any number of storage changes across all the contracts you call. Audience: More a comment than a question: another useful thing, at least on the enterprise side, is that you can ban nodes, and you have an allow list that you can manage. I find it useful because sometimes you can get invalid blocks, and if you repeatedly identify them as coming from a specific node, you can ban that node from the network. So, to repeat what you said for people out there: in an enterprise setting, you can allow a specific set of nodes onto the network and allow them to send transactions, and that limits the ability for anyone to come in and do something foolish. And you can go further: you can do permissioning at the smart contract level, which Besu supports. It's quite advanced; it makes it possible to limit what certain accounts can do, such as creating additional contracts. That's probably a talk of its own. Questions on that part? Okay, on to configuring Besu.
Which is, you know, really where the operational pain lives. I don't claim to know every corner of the code, but I do understand this part; running Besu the best possible way takes some care. There was actually, for the longest time, someone on the team whose job was largely to make sure the options stayed consistent and that users could actually figure things out against the API. So, the way it works is that Besu takes command-line arguments. Those same command-line arguments can also be set as environment variables, and you can use configuration files. And precedence works in that order, meaning you can have a config file, then override a value in it with an environment variable, then override that again on the command line, with the CLI. To make it easier, they try, as much as possible, to make sure that the keys in config files match the command-line arguments and match the environment variables, with a mechanical transformation between them. An environment variable, by default, is uppercase, with dashes replaced by underscores, and prefixed. So, for example, --miner-coinbase becomes BESU_MINER_COINBASE. That's important because, once you know that, you're not figuring out some arcane one-off mapping: you can take a config file and know how to express it as CLI flags or environment variables, whichever you want to play with. And we have great docs. The docs themselves are open source; if you don't like how something is explained, you can file an issue or a patch, because it's all published in Markdown format on besu.hyperledger.org. I recommend you take a look if you haven't. So, the first few options. If you're running Besu, you can choose from a number of preset networks that let you bypass all the setup we talked about in the previous segments, right?
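That naming rule is mechanical enough to write down. Here's a little helper sketching the transformation; the rule is Besu's, but the function name is mine:

```shell
# Convert a Besu CLI flag into its environment-variable form:
# strip the leading "--", uppercase, dashes -> underscores, prefix BESU_.
flag_to_env() {
  printf 'BESU_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --miner-coinbase   # BESU_MINER_COINBASE
flag_to_env --data-path        # BESU_DATA_PATH
```

Handy when you're moving a config file into, say, a Kubernetes manifest where environment variables are the natural shape.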
So save yourself some hassle. If you're just running Besu for the first time ever, the thing to do is --network=dev, and I'll explain exactly what that does. And if you want to participate in something that behaves like mainnet, but you don't want to be on the real network, you can point --network at one of the test networks. Another very important element is where you want to store your data: that's --data-path, and you give it a folder. A folder, to contain multiple files; why? Because you have multiple databases. Then, to expose your Besu client over the network, you need to tell it which host to bind to. The default is localhost; if you're on Docker, you want 0.0.0.0. That's one of the most common issues: with the default binding, your Docker container can't be reached from outside, and you fix that by binding to 0.0.0.0. And your p2p port, which by default is 30303. Then you can decide how you want to do discovery: you can say it's enabled, or turn it off, as we discussed. So, the next thing people have trouble with: you run this thing, you try to connect to it on port 8545, and it's not there. It's not enabled by default. We don't trust you; that's pretty much what's going on, right? Important to know: RPC is not enabled by default. Not only that: if you enable it, it will still only listen on localhost by default. Yes? A question from the chat: we need to give it a genesis when using the dev network, right? You don't even need that. With --network=dev, Besu will generate everything for you on first boot, and the dev network, I'll show you, is actually cheating by running proof of work at a fixed, very low difficulty. You don't need to configure anything; it's built so it works the first time. That's the whole point: it has to just work. So, about enabling RPC: if you don't do anything, it won't be enabled.
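Putting those flags together, a first invocation might look like the sketch below. The data directory is an arbitrary example path, and the command is only echoed here, not executed, so you can inspect it before starting a node:

```shell
# Sketch of a first Besu launch, using the flags discussed above.
# Note: binding RPC to 0.0.0.0 only makes sense inside a container or on a
# trusted network; otherwise keep the default localhost binding.
cmd='besu --network=dev
          --data-path=/var/lib/besu
          --p2p-host=0.0.0.0
          --p2p-port=30303
          --rpc-http-enabled
          --rpc-http-host=0.0.0.0
          --rpc-http-port=8545'
echo "$cmd"
```

Every one of these flags could equally live in a config file or, per the naming rule above, as BESU_-prefixed environment variables.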
If you enable it, you still need to tell it exactly which APIs to enable, and if you don't, you get a default set of APIs to allow access to. Because some of those are actually pretty dangerous, okay? So, if you're an operations person, at this moment, please take action: write this down. Do not enable ADMIN by default. Someone who gets hold of your node and is able to send messages over the admin API can do things like drop your peers out from under you, and then where are you? Okay? So ADMIN is our root user, our superuser setup. Then we have things which are more related to particular consensus mechanisms or particular types of deployments. So, for example, remember Clique, where the signers vote on each other: there's a CLIQUE API, and you can call methods there that say, now change who signs for my network. DEBUG: super useful when you're deep inside debugging smart contracts, because it allows you to do tracing of the execution, but don't leave it on in production, it's expensive. PERM is for the very advanced smart contract permissioning abilities we mentioned, which is a topic of its own; we'll talk about that in a minute. ETH is all the things you want by default: get block by number, get the chain head, give me a range of blocks, give me information about them. And IBFT is for anything related to running that consensus: the validators, the proposers, who you trust. Why would you want it? If you run software long enough, one day a validator just falls over. It stops responding, nothing but errors; maybe the hardware died. Okay, you need a new validator. How are you going to do that? Your network is stuck, right? So you connect to every one of your nodes and you tell them: here is a new IBFT validator, we replace the former one with the new one, here is the identity of this validator, now you trust it. So you need that API for your own operations. Exposed on the internet, however, it would have disastrous consequences, of course, so don't enable it publicly.
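To make that risk concrete, here's the shape of a call an attacker could make if ADMIN were reachable; admin_removePeer takes the enode of the peer to drop. The enode below is a placeholder, and the curl is commented out since it assumes an exposed node:

```shell
# Anyone who can reach the HTTP port can send this if ADMIN is enabled.
peer="enode://$(printf 'a1%.0s' $(seq 64))@10.0.0.5:30303"   # placeholder peer
req='{"jsonrpc":"2.0","id":1,"method":"admin_removePeer","params":["'"$peer"'"]}'

# curl -s -X POST -H 'Content-Type: application/json' \
#      --data "$req" http://target-node:8545
```

One request per peer and your node is isolated from the network, which is exactly why ADMIN belongs behind localhost or an authenticated proxy.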
MINER: anything related to mining, such as setting the coinbase, all sorts of proof-of-work settings, which may be interesting for you. And then we have more APIs around plugins and things that are specific to Besu. That should give you the full picture of what you can do in terms of setup. And by default, three APIs are available: ETH, NET and WEB3. They allow you to send transactions or calls to the blockchain, get information, and get stats about who you're connected to, and I'll try to keep an eye on the questions. Any questions about those? I'm not taking the time to go through these one by one; these are not specific to Besu at all, they're just the Ethereum JSON-RPC at large, and you can see the spec at the link at the bottom there. [Audience:] As an enterprise user, you sometimes need maintenance-related APIs to be enabled by default, or at least available, which isn't the case out of the box. Part of the problem for us is when a transaction is stuck: are any of these services useful to detect that situation? [Speaker:] Can you repeat the question? [Audience:] One of the hardest issues we've had is a transaction stuck in the pool. We talked about that yesterday: you have a bad nonce, right? So the transaction got stuck, and now transactions with a higher nonce pile up behind it in the pool, since the pool is processed in sequence. [Speaker:] Right, and that's why monitoring the pool matters: you watch it and you know what's going on in there at any time. The TXPOOL API allows you to see what's in your transaction pool. But don't hammer it on a busy node: polling it too aggressively can take down your node, which is not what you want. Don't do that.
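As an example of poking at a possibly-stuck pool, the Besu-specific TXPOOL method can be queried over JSON-RPC like this. A sketch; it assumes a node with the TXPOOL API enabled is already listening on localhost:8545:

```shell
# List what is currently sitting in the node's transaction pool.
# txpool_besuTransactions is a Besu-specific (non-standard) RPC method.
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"txpool_besuTransactions","params":[],"id":1}'
```

A transaction that sits in this list while newer nonces pile up behind it is the stuck-nonce situation described above.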
As an enterprise, you should actually be collecting meaningful stats; that's much more important, and you can get a lot from TXPOOL. But if you want to actually send transactions at a high rate, there's a product out there that will do the transaction management for you, and you can adopt that, which is very useful. It also speaks to the middleware you need if you're operating at scale: you can't just rely on the node to do superhuman stuff. You need something to queue transactions, persist them, and reorder them in a sensible way; some of those transactions will not influence each other, so having a good scheduling algorithm that can exploit that is very useful. That's what that product does, so talk to the folks over there. Okay. Next, something you won't normally see, because it's hidden. We don't advertise this, but Besu has hidden options. That's the dirty stuff. Why? Because once we mark a CLI argument as stable, we can't take it away anymore; this matters when you make something publicly available. So by default, when we introduce something new, the first thing we do is, as much as possible, keep it hidden. If you want to dig deep, to see them you can run `--Xhelp` instead of `--help`, and it shows you everything that's available to you, including, as you can see, the unstable options. So for example, you can configure how many headers you'd like to get from a peer when you ask for headers. By default, we ask for 192. Meaning: I come to you as a peer and say, I don't know where I am on the chain, I think I'm at block 312, give me the next 192 headers. But maybe here in the enterprise your network is faster; maybe, you know, you have fiber optics between the machines, right?
Maybe you can ask for 10,000 of them. That might help, or it might just saturate your network, which is why we don't make it the default: we don't know your environment, but we'd love people to try these options and tell us what they see. Hopefully you find that exciting. Okay, so how do you actually run Besu? Sounds obvious, but let's go through it. You can download a release and unpack it on your machine, right? You can also install it with Homebrew on the Mac, which will do the job; over conference wifi I don't think that's going to work for anybody here right now, but maybe Besu's Homebrew tap will get a workout today. And finally, of course, we push everything to Docker Hub, so you can just pull the image. If you're an enterprise of some size, you might be tempted to have your own distribution. Absolutely, of course you should, right? This way you can verify the binaries against your own security policy. But if you're just starting out today, you can use these. The next option: you build from source. You can just run `./gradlew assemble`, which is the default task in the Gradle build used by Besu. It will compile all the Java code and create the distribution archive with everything that goes into Besu, without running the tests. It will just work. By default, you know, we build with native libraries; those live in a separate repository that we maintain as well. If you want to dig in, we'll talk about that in a bit. It's a lot of fun. Okay, enough about installation. Let's go to the advanced options. Anyone here use those already? Okay. So, remember `--network=dev`. You can also take off all the training wheels, drop that flag, and instead use `--genesis-file` to point at a genesis file. The genesis file is the definition of your network.
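The installation routes above can be sketched as follows; the tap, image and repository names are the standard Hyperledger ones, so treat this as illustrative:

```shell
# Option 1: Homebrew on macOS
brew tap hyperledger/besu
brew install besu

# Option 2: the Docker image
docker pull hyperledger/besu

# Option 3: build from source.
# `assemble` is the default task: it compiles everything without running the tests.
git clone https://github.com/hyperledger/besu.git
cd besu
./gradlew assemble
```

Building from source is also where the native-libraries repository mentioned above comes into play.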
It allows you to customize how you define the chain: which consensus mechanism to use, how you set the initial funds, all that information. For RPC, there are additional considerations around security, such as CORS, for example; if you've done any front-end development, you've heard of CORS. Okay. By default, you know, Besu will only accept requests coming for its own host, so if a dapp served from another origin needs to connect to it, you have to open that origin up explicitly. And then, specific to HTTP RPC, we can set additional configuration items for authentication: there is JWT, which is a web token standard, where you point at a public key file, and you can also have, I think, a credentials file. One of my favorites: metrics. You can enable metrics, which by default opens a Prometheus endpoint on the node, served on /metrics. You can set its port and host, so you may choose to expose that only internally, or you can decide that all these interfaces need to live on a separate, secured network interface, for example. And recently we added the ability to choose between Prometheus and OpenTelemetry; if you go with OpenTelemetry, you can push all the data out to an OpenTelemetry collector or an OpenTelemetry backend. If you're a miner, you can decide to enable mining, proof of work, on the relevant networks, and you can also let external mining clients connect to you to fetch work and submit solutions. [Audience:] The JWT public key file option: is there a way to configure it such that the JWT token validation comes from a centralized service, say one run by another organization? Let me repeat that for everybody.
So, the JWT support right now takes a file; is there a way to make that more, I'm guessing, dynamic, coming from a service? I don't think we've done that. I think that would be a very welcome enhancement, because, yeah, files don't really scale, and this would be a natural integration point. So no, I don't think it's supported at this time. Yes, good question on that. [Another question:] What metrics are set up by default? Oh, by default, none of the metrics are enabled. Nothing is collected or pushed anywhere until you turn it on; if you don't configure anything, you get no metrics at all, right? It's opt-in, but not much more work than that. Go ahead, yes, one more question. [Audience:] For the authentication credentials file: could there be a similar option that, instead of getting the credentials from the file, gets them from a vault somewhere over a TLS connection? [Speaker:] So the question here is about, I think, the credentials file: again it goes to a file, and it should be possible to use Vault or a similar service for that. Yes, I agree. In the meantime, people have had to do that with Kubernetes, for example, with secrets mounted into the pod, but I agree with you, that's the next step; you want a proper integration there. We don't have that, so I'll take that one as a feature request. Thank you. So, if you want an exercise: make sure you understand what happens when you run with `--network=dev`.
So you can run this: run it with `--network=dev`, then enable RPC and try to play with it. And the first thing we're going to do is look at what dev mode actually is, right? When you say `--network=dev`, what's really happening is that Besu loads a genesis file that ships inside the distribution; you can find it in the source code. We'll take a look at that together. And while we're doing that, there's a question from the chat: is there any option to get metrics like network latency or transaction latency? Yes and no. We have a ton of metrics you can collect. Network latency as such, no, because you'd need to define what "the network" is: a blockchain is a network of peers, so what's the latency, and to which peer? What is useful, and something I think we do capture with a specific metric, is this: from the moment a block head was produced to the moment you saw that head, how long did it take? Which is extremely important. Say you're on a proof-of-work network with ten-second block times. If it takes you five seconds to receive a block after it's published, you have that much less time to mine on top of it before the next block arrives, and even less time to publish yours. So this is a problem you want to watch. If you're talking about network latency itself, the time it just takes to ping a peer, I think we have that by default. So, here is the genesis file; let me zoom in. This is not specific to Besu. This is the standard for the whole Ethereum family, right? It was not invented by the Besu developers; this came from Gavin, this came from Vitalik, everybody who worked on Ethereum early on. Mainnet's genesis looks a lot like this too.
So let's read this file. The top defines the chain configuration, which is the most important tidbit. Here it says the chain ID is 1337, right? This chain ID really matters, because when we connect to peers, we check that we share the same chain ID, or we disconnect; if we don't share a chain ID, we cannot talk the same language. Every transaction also bakes the chain ID into its signature, to ensure we don't have replay between chains as well. Okay. Next, we decide where we are in terms of EVM behavior. What we're saying here is that we go with the hard fork called London starting at block zero. That means something; compare with mainnet. Mainnet starts at block zero, then some things change at a later block, then again and again and again, fork after fork. Because you're starting fresh with a dev network, you can go with the latest hard fork and say it starts at block zero. If you create your own network in a consortium, you will have to make that choice too. You decide: okay, I'm running with this version of the EVM, with its capabilities and functionality, probably starting at block zero. If you later want to change, the consortium has to agree that you're changing hard fork, moving to a new version of the EVM, a new network specification. And you would change your genesis configuration to say: hey, this new hard fork activates at block five million. Everybody then switches over to this new definition, which changes the rules by which we validate blocks. So if you have one node that does not have the right configuration value, it cannot talk to the others anymore. It looks at their blocks and says: I don't understand what you're saying; this does not match what I have; therefore I will stop accepting blocks. And we've seen those issues over and over, every time.
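The replay-protection point can be checked with a little arithmetic: since EIP-155, the `v` value of a transaction signature encodes the chain ID as `v = chainId * 2 + 35` (or 36, depending on the recovery bit), so a transaction signed for chain 1337 fails signature validation on any other chain. A quick sketch:

```shell
# EIP-155: v = chain_id * 2 + 35 + recovery_bit (where recovery_bit is 0 or 1).
chain_id=1337
v0=$((chain_id * 2 + 35))
v1=$((chain_id * 2 + 36))
echo "valid v values for chain $chain_id: $v0 or $v1"
# Recovering the chain id from a v value goes the other way:
echo "chain id recovered from v=$v0: $(( (v0 - 35) / 2 ))"
```

For chain 1337 this yields v values 2709 and 2710, which is how a verifier can tell which chain a signature was meant for.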
Every time there's a hard fork, some fraction of the nodes just drop off, right? They're no longer able to follow the chain head for a while. They eventually come back because people realize: oh, my client is behind, what's going on? It's not getting new blocks. Well, you either need to upgrade to a new version of the client, which ships with the new fork definition, or you need to change your configuration value by hand to make it match. So if you're operating a network, this value is very important to you. Let's continue. In your enterprise, you like code, right? You may want a contract size limit that's custom, so contracts can be bigger. The default on mainnet deliberately keeps contracts small, because they don't want the state to bloat, right? In the enterprise, you're never going to reach that level of scale, so you might as well allow yourself larger contracts on this network. This is a feature of running your own network. Next, we're saying how this network does consensus, and here we're picking ethash, proof of work, with a fixed difficulty of 100. Frankly, at that difficulty it's trivially easy to produce a block, which is the point for a dev network. So now we go on. Yes? Two questions from the chat. First: what do you think about using web3j to connect to Besu? Second: what is the highest transaction throughput registered with Besu, and which consensus algorithm was used? Let me repeat the questions. First question: can you use web3j with Besu? It's a great tool; it gets the job done; Besu itself uses it. There's a great setup someone showed us for how to use web3j for unit testing, which is really cool. You should use that; all tests are better than no tests. So yes, use web3j.
Second question: what is the highest transaction throughput registered with Besu, and which consensus algorithm was used? In practical terms: inside the ecosystem, I've heard of people who were able to get throughput on the order of 1,200 transactions per second, but they were just sending money to each other, which is the trivial case. Realistically, it seems to be around 100 TPS from what I've heard. All right. One more thing about these numbers: useful throughput is actually a bit different from raw transaction throughput. We can find ways to batch transactions, or have one transaction trigger several operations, which lets us do a number of very interesting things. And the thing I see in many enterprise use cases is that the bigger problem is not so much sending data: it's how do I get data back out of the blockchain? How do I batch my calls to read more data, faster? Then again, many deployments use the blockchain mostly to store hashes and do very little logic on-chain, keeping the logic off-chain and checking hashes against what you have. It's up to you to decide how much of the power of the blockchain you want to use on-chain, and that changes the throughput equation quite a bit. [Audience:] Some of the important EIPs are not listed in this genesis, for example EIP-1559. Yeah, EIP-1559 is turned on by default as part of the London hard fork: because in this case we're using London, and, as I'll show you in the code, each hard fork inherits every hard fork before it. So EIP-1559 and all the features that came before it are there by default once you name the fork. [Audience:] So naming the hard fork is enough? That's right, yes. The way this is set up is cumulative: you inherit all the changes that were made before. It makes sense, right?
It would be really difficult otherwise: some later changes build directly on changes that came before, EIP-1559 for example; they're really built on top of each other. [Audience:] And they're all activated at the same block? Yes: the fork block tells the node to change its whole validation engine. It moves to this new way of validating blocks and processing transactions; even the gas cost calculations change based on which fork the block falls in. [Audience:] Can you talk about batching transactions? You're not referring to a layer two, right? No, no, something much simpler. There's a neat trick you can use: you can create a contract that does several things for you, right? So one thing you can do is have a contract call that performs, say, five operations. In that case it's still one transaction, but the transaction itself does multiple things, right? And so if you're looking at transaction throughput at a raw level, and each transaction is just, let's say, setting one stored value, sure, you'll measure a raw transactions-per-second number, and the limiting factor is the whole stack from signature verification down to storage. But another place you can play is to write contracts that call each other, and perform a number of operations per call, so that you don't need as many transactions: a single transaction can change twelve things. It's up to you to decide that. But your raw transaction throughput becomes meaningless with this approach, because the number of transactions doesn't matter as much as the number of state changes you actually perform, like I mentioned earlier, right?
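The arithmetic behind that point is simple: if your raw ceiling is around 100 transactions per second, but each transaction drives a contract call that performs a dozen state changes, your effective operation throughput is an order of magnitude higher. The 100 and 12 here are just the illustrative figures from the discussion:

```shell
# Raw TPS versus effective operations/second when batching inside a contract.
raw_tps=100      # illustrative raw transaction throughput
ops_per_tx=12    # state changes performed by one batched contract call
echo "effective ops/s: $((raw_tps * ops_per_tx))"
```

Which is why measuring state changes per second tells you more about a batched workload than TPS does.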
The number of meaningful state changes you get through is really what your idea of performance should be, if possible. [Audience:] So I guess it comes down to the unit of computation, how you define performance. [Speaker:] Exactly: the unit of computation, at the level of the EVM and its opcodes, is what really defines performance. I find TPS numbers meaningful, but not as useful as understanding the actual impact of what your participants are doing. Yes. From the chat: could you explain the best way to manage a private Besu network where the set of known participants changes constantly? Okay, so you have a private consortium and you have members coming and going. It depends a little bit on how you set things up. On the consensus side, if you find yourself in a situation where you don't have enough validators online, you cannot reach consensus. That's a big problem; the network stalls; we cannot have that. On the peering side, nodes need a way to find each other inside your network: you need a source of truth that lets every node know a new member exists and is allowed to connect as a peer, and that's not a solved problem either, right? For example, I believe one consortium used a Whisper-based protocol to send out-of-band messages to all the peers in the network. They had on the order of 300 nodes, all banking institutions, and when a new member joined the network they'd broadcast: here is their identity; let them in; if they try to connect to you, please let them connect; they're good. I haven't seen any standard solution that does that very well. So yeah, membership management is not solved. There are solutions out there: providers like Kaleido have built middleware to let you do that in a somewhat automated fashion.
If you were to build this yourself, with your own database of membership events, you'd have to think all of that through, and I don't think it's solved; it's custom work. [Kaleido engineer in the room:] I'll just add to that: we rely on the static nodes file to manage that, because we have a way to manage and distribute it, and it gets picked up by the nodes right away. Of course, that's just the permissioned P2P layer; you still have to manage the validators and make sure they're properly voted in. So yeah, it gets pretty complex. [Speaker:] So for everybody: Kaleido uses the static nodes file to represent the state of the membership. You distribute that file to all the nodes in the consortium, you update it, and the changes get flushed out and picked up. That handles the P2P layer: here's my membership list, here's the change. Good. So, the rest of this genesis file. These are the fields of the genesis block itself; it's all information that's typical of any block, right? Gas limit: the gas limit of the block. Extra data: it's free text; enter whatever you want. Miners, for example, whenever they get a chance to mine a block, put some vanity string in the extra data saying "mined by this pool", right? On mainnet you'll find people who put messages and nice things in there. If you're a bank, maybe it's useful to put a statement about when and why this chain was created, something you may refer to later. Timestamp: the timestamp of your block, in seconds. For the genesis block it doesn't matter much, but in general it can be used as a heuristic to decide whether you're near the chain head or far behind, right? Difficulty: the difficulty of the block. It's meaningful for proof of work.
It's not used in most proof-of-authority setups, but you have the field anyway, because it's part of the header that gets hashed. MixHash: mixHash is about proof of work. It's part of the seal of a block, and together with the nonce it allows you to verify the proof-of-work of a block very quickly. But as you can see here, mixHash is all zeros. It's the genesis block; you don't care; people have to trust it anyway. And if you're doing anything proof-of-authority in the enterprise, mixHash is not used at all. Coinbase: who is getting the money from this block? It's the zero address, right? Nobody holds a key for the zero address; it should not be practically possible to find a private key for it on the elliptic curve, though something interesting would happen if you did. So for now, the reward for these blocks goes nowhere. The interesting part is alloc. That's the initial allocation of funds. Think about it: if you were to miss that, what do you think would happen to your network when you're running it? Nobody could do anything. You can't pay for gas, so you can't do anything on this network, and you can't create ether out of thin air. So this is the initial allocation. Each entry is keyed in the JSON by what looks like a public key but is actually the address: the last 20 bytes of the hash of the public key, in hexadecimal. The private key you see alongside it would never appear in an actual real genesis file; you would never put a private key anywhere like this, but we're playing, so it's included for convenience, and a comment right there says don't use a private key in a real genesis file. And then the balance. The balance here is in decimal format, in wei, so it's some enormous decimal number that just goes on. So we have a few accounts with money in them.
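A minimal `alloc` entry looks roughly like this. The address and key below are the widely published Besu dev-network test account, so they hold no real value; the balance is an assumed example, and as the comment says, a real genesis must never carry a private key:

```json
{
  "alloc": {
    "fe3b557e8fb62b89f4916b721be55ceb828dbd73": {
      "comment": "dev-network only: never put a private key in a real genesis file",
      "privateKey": "8f2a55949038a9610f50fb23b5883af3b4ecb3c3bb792cbcefbd1542c692be63",
      "balance": "90000000000000000000000"
    }
  }
}
```

The key of the entry is the account's address (last 20 bytes of the hash of the public key, in hex), and the balance is decimal wei, exactly as described above.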
If you run Besu with this genesis and RPC enabled, you'd be able to take any one of those accounts, import it into MetaMask, and start playing: send money around, deploy contracts, and so on. Makes sense? Which is what we do here. So if you run this with RPC enabled, just the default set of APIs, then you can do a curl request. The curl request goes to localhost on port 8545, POSTing JSON. Okay. The method is `eth_getBalance`, which is one of the ETH APIs. The first parameter is the address, and the second is which block to query at, here the latest block; you can ask at any block, that's the point. And you give an id to your JSON-RPC request; let's use 1, and the response will carry id 1 as well. You also get the version of JSON-RPC, which is 2.0, which is nice. The result tells you how much money is in the account. I've done this before. Okay, great. Is this confusing for anyone? Does this make sense? Okay. So, next step, and doing this live is always a challenge: we're going to create an IBFT2 network. We need to do some one-time setup; once it's done, you don't have to do it again. Supposedly. So, we have a tutorial here in the docs, which, as I mentioned, are awesome. Rather than run it all live, let me walk you through what goes into an IBFT2 network definition, following the docs themselves. Look at what they make you do. We're going to have four nodes on one machine, so they ask you to create folders first: Node-1/data, Node-2/data, Node-3/data, Node-4/data. All those folders start out empty, because we're going to generate keys for each of those nodes, right?
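The request just described looks like this in full. The curl line needs a running node, so it's commented out here; the hex-to-decimal conversion at the end works anywhere. The address is the dev-network test account, and the example balance value is an assumption:

```shell
# eth_getBalance over JSON-RPC (requires a node listening on localhost:8545):
# curl -s -X POST http://localhost:8545 \
#   -H 'Content-Type: application/json' \
#   -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBalance",
#        "params":["0xfe3b557e8fb62b89f4916b721be55ceb828dbd73","latest"]}'
#
# The "result" field comes back as hex-encoded wei, e.g. "0xde0b6b3a7640000".
# Converting that to decimal wei (this particular value is exactly 1 ether):
printf '%d\n' 0xde0b6b3a7640000
```

So a response of `0xde0b6b3a7640000` means the account holds 10^18 wei, i.e. one ether.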
And we want to have them laid out before we start the network, because afterwards it's too late to secure those keys properly. But that means that when you set things up this way, at one point in time, all this information, all the private keys, lives on one machine. If you do this for real, you'd generate each key in a different place, with proper permission settings, so that the only thing anyone else gets to see is the public key of each node. Those keys are what sign your blocks. The next step is to generate the genesis; it looks a lot like what we just read, but instead of ethash, we're using ibft2. Note the change: the settings here are specific to the consensus. With ethash we needed a fixed difficulty; here, we're able to control how often we want to get a block out, because we have a clock-driven consensus based on a number of seconds. So every two seconds, we get a block. Epoch length is after how many blocks we reset the validator voting state, so every so many thousand blocks the vote tallies start fresh. And then we have a request timeout of four seconds: if something goes wrong in a round, that's when we give up and move on to the next proposer. The alloc is the same, right; we still pre-fund accounts. Notice that we don't have extra data anymore; that's deliberate. And at the bottom we have a section that tells Besu how many node keys to generate. Now, the file as written is incomplete, because it's missing the extra data, and we're about to put it through the setup command: it takes this JSON as input and generates all the things we want. It generates the final genesis file, and also the private key and public key of each of the participants, and it puts the keys in folders, where the name of each folder is the address corresponding to its private key.
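The consensus section of such a config, plus the part telling Besu how many node keys to generate, might look roughly like this; the field names follow Besu's IBFT2 genesis format and the values are the ones from the discussion:

```json
{
  "genesis": {
    "config": {
      "chainId": 1337,
      "ibft2": {
        "blockperiodseconds": 2,
        "epochlength": 30000,
        "requesttimeoutseconds": 4
      }
    }
  },
  "blockchain": {
    "nodes": {
      "generate": true,
      "count": 4
    }
  }
}
```

The `blockchain.nodes` section is what the setup command reads to know it should mint four key pairs along with the genesis.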
Then we copy the key files into the right places, and then we can start distributing this information to the different members of the consortium. Okay. To run a node, for example, we pass the data path where it will pick up its key file. Now, what's not spelled out in the tutorial is what happens with the extra data: when the setup command generates the genesis, it fills in the extraData property from the validator keys. Sorry, let me restate that: it generates four keys, takes the address of each of those, puts them together in a list, and RLP-encodes that list into the extraData of the genesis block. Anyone using this genesis configuration will be able to read that and tell: okay, if the next block comes from any one of those four validators, I know it's good; I know I can trust it. And that's just the initial validator set; later on, you can vote members in and out of those four. Cool. Yeah. A question from the chat: what is the roadmap for Besu, and when could we expect to see changes? Okay, so the roadmap is this afternoon's topic; I'm going to have a long session on that. What we've covered so far is the stable, current behavior, what's in production today. In this session I'm trying to stay at the high level of how things actually work, and this afternoon we'll go deeper into where it's headed. Yes? [Audience:] You could also have the validators generate their keys separately, and then assemble the extraData yourself, so you'd be doing the right thing. Yeah: if you were doing a real trusted setup ceremony, you would never have all the keys on one machine. We're only doing it this way because this is a development scenario where we create everything on one machine.
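The setup command referred to here is Besu's operator subcommand. A sketch of the generate step and of launching one node with its generated key; the folder layout follows the tutorial, and the `0x...` key folder name is a placeholder for the generated address:

```shell
# Turn the incomplete config into a full genesis (extraData filled in)
# plus one key pair per validator:
besu operator generate-blockchain-config \
  --config-file=ibftConfigFile.json \
  --to=networkFiles \
  --private-key-file-name=key

# networkFiles/keys/ now holds one folder per validator, named by its address.
# Copy a key into a node's data folder, then start that node:
cp networkFiles/keys/0x.../key Node-1/data/
besu --data-path=Node-1/data \
  --genesis-file=genesis.json \
  --rpc-http-enabled
```

In a real consortium you'd never run this on one machine; each member would generate its own key and only the addresses would be assembled into extraData.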
This tutorial ends with: hey, just run Besu four times on your machine; the network starts working, and eventually you shut it all down. That's just for development. In a real production environment, what happens is that each consortium member generates their own private key on their own machine and only shares the public key. Same principle as anywhere else on the internet: you never share a private key. Two more questions from the chat. Could you explain the differences between IBFT and QBFT? Which one is faster, which one is more secure, and are both ready for production? And then there's another question after that. Thank you. So, IBFT versus QBFT, an interesting question. There are actually different flavors of IBFT: there's IBFT and there's IBFT2. IBFT2 is the closest to QBFT, but QBFT still differs in some details of the protocol. The difference looks minimal, but it creates all sorts of subtle interoperability differences. One key difference is how blocks get committed across round changes: in QBFT, for every round change that happens, there is a justification, a certificate that gets passed along proving the round change is valid. Thanks. The other question was: what is the best practice for generating these node keys as part of the consortium and sharing the private keys? Well, you know the saying: not your keys, not your money. You don't share private keys, ever; you only share your public keys. If you want to do this properly, you have a trusted setup where everyone generates their own keys and stores them in something like a vault or an HSM. The way you store them and the way you make them available to a node has to be through, you know, trusted ceremonies, right? You don't want key files flying around on thumb drives. You never share your private keys.
So to that question: yes, you want a security expert in the room, and the best way to solve that problem is to follow established security principles. I've shown you all the steps, shown you how to create your own network by hand. All of this is tedious, and maybe you'd like a shortcut. You can do that today. You don't need to do much: just type npx quorum-dev-quickstart on your laptop. If you have Node.js, it will bring up an executable that starts asking you the right questions. It will ask what kind of network you want to create, the type of nodes, the extra stacks you'd like to bring up on top, and it will generate everything right away: all the artifacts for all the nodes. It will generate the keys for you, it will generate the genesis file for you, everything comes out ready to run. So if you're a developer, instead of going through two hours of training, this is a one-liner that you can just use today. This is a Consensys product. It works, and it comes with a few extras; it has a Splunk integration, for example. So that's it. Do you have any questions so far? I do have a question that's maybe more relevant for Matt's session later, about Besu documentation: is the documentation staying up to date? Yeah. So the way things stay up to date is that whenever a pull request comes in that has an impact on docs, it gets tagged as doc-impacting, and then the docs team knows to pick that up and start working on it. I'm not sure that covers everything, and Matt will be able to speak to that. Sure. Does it all make sense so far? Do you feel like you understand Besu now? That would be great. Looking at the schedule, we have about an hour and twenty minutes before the break, so let me give you some of the tools you need.
With these resources you can find your way around; use Besu for the remaining parts, tests and things like that. That covers getting it running. Now let's try things. So, Besu is a bit opinionated; it has, you know, some opinions about how you should format your code and what should be done about it, right? Okay, so hopefully people can see my screen. Can't see it? You're good to go now. Okay, so there are two tools that I use. These are not special to Besu; they're just plugins wired into our Gradle build. One is called Spotless and the other one is called ErrorProne. Spotless formats the code for you, so you don't have to. ErrorProne applies a number of heuristics and checks so that you can find the obvious errors, such as not closing resources, or doing things certain ways with streams, for example. These are tools that save us a lot of back and forth during reviews. So for any change you make, you should run ./gradlew spotlessApply; it will save you a little bit of frustration. If you want to go deeper, run ./gradlew check, which will run ErrorProne and also checks the license headers. Can you hear me? Like this. Besu. Got the terminal down there. So, you can also do ./gradlew spotlessCheck: it will just flag the files with formatting problems, without changing them. You can see it's running right now; if the right Gradle isn't installed, the wrapper will actually install it for you. That warning there is just about an old version of the Eclipse formatter; not a real problem. As you can see in the Spotless configuration, we also run checks on every file. We're done; that took about six seconds. Let me make the terminal a little bigger. Okay, perfect. So this is the Gradle output, and this is Spotless. Very simple.
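An example of the class of bug a checker like ErrorProne flags, together with the idiomatic fix. The scenario and class name here are made up for illustration; the point is the pattern, not a specific Besu file.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// The kind of defect static analysis catches: a Reader opened and never
// closed. The fix is try-with-resources, which closes the resource on
// every exit path, including exceptions.
public class FirstLine {

    public static String firstLine(Reader source) throws IOException {
        // Before the fix this was: return new BufferedReader(source).readLine();
        // which leaks the BufferedReader.
        try (BufferedReader reader = new BufferedReader(source)) {
            return reader.readLine();
        }
    }
}
```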
In the Spotless configuration we use Google Java Format, pinned to the version given here. We sort imports, with rules about import order; we remove any trailing whitespace and make sure files end with a newline. This is going to save you a bunch of time. If your own project doesn't have that, it's worth adopting; you just need to add it to your checks. With that out of the way, let's talk about Besu itself. So, Besu is a Gradle project. It's a multi-module project, which is a type of Gradle build that has multiple source directories in one repository. All the versions of all the dependencies are centralized in one file, which makes it much easier for us to manage all the dependencies, so that they are always the same across the whole source tree: one place where all the dependencies are declared, and where dependencies get changed. But it's also there to make sure the licenses are acceptable: we don't want to have a GPL dependency creeping in. So all of these dependency versions are stored in one place. So, if you were to assemble Besu, let's just go and assemble it. Just like we did before, you go in there and run ./gradlew assemble. Since it's a Gradle project, the first run is going to start by resolving the declared dependencies and downloading them, bringing them together, and then it's going to create the distribution archive. Any questions about that? Sure. I've got two questions from the chat. Which is the recommended Java version to build the project? I historically used Java 17 and 19 on the back end, but had many tests fail when running the Gradle build. You should be building with Java 17; that's our target. Newer JDKs mostly work, but 17 is the version we test against. OpenJDK 17 is what we use.
OpenJDK 17. The other question is: when a new node joins the consortium, how do the other nodes identify that it's authorized as part of that consortium? How is each node's view updated? So, when you have a new node joining the consortium, how does every other node know that this node is allowed to join? By default, they don't: by default they don't say no to anyone. There is a configuration that allows you to permission which nodes are allowed to connect, which is a good security setting. That's a little overhead, but it's necessary. And as we mentioned earlier, in that setting, what you will have to do is update the permissioning configuration file on each node to include the new node. Great. Okay, so... interesting. This failure happens because I have a local cache. So I'm going to do a clean assemble, which removes any cached output. The issue I'm hitting here is around some dependencies that I'm pinning in a branch, so doing a clean assemble will do the job. If anything weird happens to you, if the build is stuck, or you have a strange issue, you know, always try ./gradlew clean. It removes all the compiled code and all the build artifacts, and then you can assemble again with good caches. The repository is made of modules. Typically, the source code of a module sits under src/main/java, the tests under src/test/java, and, similarly, integration tests under src/integration-test/java. On top of that there are two more interesting kinds of tests. Acceptance tests actually bring a number of nodes up together to create real situations where nodes interact, so you can test the permissioning, all the settings there; and reference tests take the Ethereum test vectors and run them, so every opcode behaves correctly with each version of the hard forks of the EVM.
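The permissioning idea from the question above, reduced to a toy sketch. The names here are illustrative only; in real Besu, node permissioning is driven by its permissioning configuration file, not by a class like this.

```java
import java.util.Set;

// Toy version of node permissioning: a node keeps an allowlist of enode
// URLs and refuses connections from anyone else. Adding a member to the
// consortium means updating this list on every node.
public class NodeAllowlist {

    private final Set<String> allowedEnodes;

    public NodeAllowlist(Set<String> allowedEnodes) {
        this.allowedEnodes = allowedEnodes;
    }

    /** Incoming connections from unknown enodes are simply refused. */
    public boolean mayConnect(String enodeUrl) {
        return allowedEnodes.contains(enodeUrl);
    }
}
```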
So, in here there's a project, this crypto module, with its src/main/java, and its tests right next to it, which is a great thing. You might be expecting one big package, but that's not the case. We're going to see some of the stuff related to the EVM, but there is a lot of middleware and plumbing as well. So if I go up to the top level, you can see that you have these different folders, each of which is itself a project in the build. For example, we have a whole module just for configuration: how we create configuration, mixing together config files, environment variables, and command-line arguments. We need that to be almost a library of its own so the node is able to read all those options. We have the besu top-level project, which ties together all those modules and puts the executable together. For example, there is a class with the Besu name on it, Besu.java; that's the class you execute from. And Runner: this is the runner that will start the execution of the whole node. Then there are the datatypes, and crypto itself: some of these modules exist because they're just primitives, signatures and things like that, that get used all over the place. Then the metrics: how we produce metrics and expose all that information is stored in its own project as well. The EVM is now a top-level library, because it has become available as a module that is published independently. As was mentioned yesterday, web3j is using this module to embed an EVM in tests, to run the tests against it. So you can also do that on your own; you could use this library for all of these cases.
You'll just need to configure the EVM for your case. That means, too, that if you do have a Java program out there and you just need an EVM for whatever reason, say you have some legacy, you have some smart contracts that still need to run and you just want to execute something, you can use the right configuration parameters to start it, and then you could run the EVM pointing at your own storage. So, for example, if you were running a Fabric network and you were to create a chaincode that for some reason has to tap into EVM bytecode, you would be able, from Java, in chaincode, to point to that EVM bytecode and execute a smart contract. As long as you're able to pass in all the information about what state is available, what the balances are, anything stored in there, you would be able to create a kind of embedded environment, a Java-to-blockchain bridge, to do so. Okay, so an example of an actual use of that: the person who made the contribution to move the EVM out into its own library is another committer on this project, who did that for Hedera. Hedera had a need for an EVM that runs in Java, used this particular approach, and now it's published as part of that work.
So, a very common gripe is this one: if you're a developer, you know most development out there these days is going to use a framework where components connect to each other over interfaces, with dependency injection wiring them together. The common gripe about Besu is that we didn't do that: we started it, we didn't know better, and we created a bunch of things that have no dependency injection. So I'm going to show you what happens when you don't do that in advance: you have to pass a bunch of objects around to different constructors. It's good and bad; the good part is that your IDE really follows the path of all the objects, so navigation is easy. Maybe something like Dagger would be a good way to revisit that. If you have questions about one module or another, come find us afterwards and we'll be able to talk about that. What is also used here is the builder pattern. So rather than creating an object with a traditional constructor, there's a companion class called Builder that will take a number of arguments that are explicitly named and well typed; it's actually pretty easy to read the API. The place where this builder pattern shows up the most is the BesuController and its BesuControllerBuilder. The BesuController is the thing that controls the whole execution of the client, so it's the top-level object of everything that happens. It has the protocol schedule, everything with which it executes the things related to the EVM itself; your network; your genesis config options, so you can import the genesis file, or it can be given as a command-line argument; your node key; the synchronizer, which is responsible for keeping you updated, doing a whole batch of back-and-forth work to keep you on top of the chain; and so on, everything is in there. So how do you instantiate that? Simple: you go through the builder, set the pieces you need, and then you can get all of those. The builder itself, maybe it's better to look at it from this perspective: the builder means the consumer of the BesuController should never really have to deal with its constructor. It allows you to create the controller from, you know, a simple configuration, a config file, something like that. The BesuController's actual constructor is not public; you can't build it yourself directly, so we use the builder. You get these objects wired up when the node starts, so we don't have to pass all the domain objects around by hand; instead we have helpful methods, static builder factories, to build things. Any questions about that? One more thing worth mentioning: you'll see these objects around; there's a domain object called the ForkIdManager. The ForkIdManager is initiated from your genesis config file; it reads from it at which block numbers you need to change between the different hard forks. It's very important, and it's needed by multiple components throughout; this type of concern applies across everything. So the ForkIdManager keeps a hash, which is the fork ID, for block numbers, and it's able to do things like: get the fork ID for the chain head, create a fork ID, get all the forks, check a peer to see if it has that same set of fork IDs, and tell you whether you should be able to talk to it. So where is it used? It's used both in networking and around the EVM itself: it's going to be used in your Eth protocol manager and in your discovery layer, both of them. The Eth protocol manager is going to rely on it: when a peer connection comes in, it checks compatibility, for instance that the protocol version is 64 or over and that the fork ID is the same; then we have a right to talk to each other, and both nodes do this check, right? So this ForkIdManager, where is it built? If I navigate from here and look for usages, we land in a test. So this is a test, and it's used here, instantiating the Eth protocol manager. It's generated along the way as the node is wired up: the blockchain, no forks; this is the default behavior. The point is that this object gets passed around different components of your stack, right? It's shared between discovery and your peer-to-peer layer; that's just one example. It's a shared concept, and we build it once. Okay, so, consensus. When you have these hard forks we keep talking about with the forking managers and schedules, you'll see that hard forks change the EVM, they change how to sign transactions, they change how you reward people for mining a block. It used to be that we would issue a little money to miners and they would compete for the luck of finding a block; eventually they're not making that much money from issuance, mostly making money from fees, keeping the change. So Ethereum changed those rules, the fee regulations and more. So I'm going to use the protocol schedule to show you a little bit of that. The protocol schedule is going to apply a protocol spec according to a block number, right? The protocol spec itself is going to have all sorts of interesting methods around what gas calculator to use, what kind of validation functions to apply, what kind of data we're going to need; and those, by the way, change over time. So we have a way to build those, and to see what that gives us we need to
see how they build on top of each other. So we have a protocol spec builder for all those specs, and it takes all those different pieces. For example, let's find usages of this. We can see that, for example, for a Clique protocol schedule at block zero, we have all of this set up in one place; it's going to be pretty much a bunch of mappings. Your spec is going to be based on a given EVM version, it's going to validate data according to what mainnet says it should, it's going to be more or less advanced according to what mainnet says it should use; it's going to take the difficulty calculator, which for Clique just sets difficulty based on whether you are the in-turn validator; and then it's going to use the Clique block header validation functions to run through the header and check it. Every protocol schedule has that as well. So if you were to look at what IBFT is doing, there are some changes here to whichever methods are used, or changes to how they validate the header based on what is being used, because some header fields carry a different meaning. So we take the block header validation from whatever protocol spec we're using and we customize it; that changes a little bit how we do block validation as well. And so again: builder method, builder pattern, right? We are creating, using the builder, all those elements together. It's a bit painful for me to show this in presentation mode, but I'll take that. So here we're creating the way we're going to run with IBFT, meaning we specialize how everything is going to behave according to the IBFT protocol, right? So we're changing things: first off, the header validator is going to use the IBFT block header validator, with rules that check the block carries all the information required, and we're going to use this validation rule: you should check that the proposer who produced the block is allowed to propose according to the current round. We're going to plug in the ommers validation too; the ommers are the uncle blocks, the valid blocks that are not selected to be in the chain. We're going to use IBFT logic for that as well, which obviously doesn't have the same rules as proof-of-work in the first place. And we'll keep using mainnet body validation, so we keep the mainnet block body validation by default, right? So the point I'm making is: these things can be modified for your use case. In the enterprise, you could change some of those things according to some specific use case you may have. So for example, if you wanted a different gas pricing, you would change that. You would also be able to do block rewards if you wanted to, right? By default we set them to zero, because in IBFT there's no reason to pay people for any of that, but you could create some type of inflation in your network, for example, rewarding validators. And all those toggles are available to you when you create your own protocol; that's everything you need to do your own consensus algorithm. And it's interesting that everything you do, all across transaction signing, validation, building blocks, all those things come together, because those are objects that sit not really inside the EVM, but around the EVM. So now we're going to move a little bit to the EVM. I haven't talked about the EVM yet. Who knows about the EVM a little bit? Who knows it well? Okay, I'm going to cover the EVM in about 15 minutes, give it a little bit of time; if you have questions, I'm happy to answer. So: a smart contract in the EVM is a series of bytes; that's how the EVM instructions come to you. Each byte is a code, an opcode, and the whole program is a sequence of bytes that are either opcodes or inputs to them. So let's go into the EVM. Here we have the first version, which is the frontier EVM. An EVM has a gas
calculator. The gas calculator defines how we need to compute, how we need to price operations in the EVM. A gas calculator could also set everything to zero, so there's no gas at all. It's used to define the operations and how we need to pay for them, per version of the EVM. The frontier operations themselves are here: we're registering them, and here they are. If you were going to define a language from nothing, I mean, if you were to create a blockchain from first principles, these are the things you'd start with, because what you need first is all the math operations you can think of, right? Addition, subtraction, multiplication, exponentiation, yeah, modular operations, greater than, less than: all those operations are available to you. They become the really small computation units of your smart contracts; each one is defined as its own operation. Does that make sense? And you can see it keeps going, right? And those operations become more and more interesting, and some of them are actually able to interface with the outside of the EVM, reading information from, say, your storage or from your environment. So in the end, building on that groundwork, you can say: if the current executor of this smart contract has this information, a particular account storage slot that has some value, then do this. All that information is available to you, but it's available at the level of the smart contract. So, for example, for "do you have this token": we're going to say, can you load the storage of this account, for this particular user, at this particular slot, and look at the balance, that particular value, right? And the EVM itself has a number of other things, because you can't do everything on the stack alone; when you multiply things, you need memory you can get back to. So it has internal memory, where it can store intermediate results, so you can go back, compute more things, then pull things out of memory again to finish your computation, right? And you pay for all of that: every time you do anything, there is a gas cost associated with it. And of course there have been various attacks and things like that; what happens is that someone may try to abuse the EVM so things take longer than they should, right, for example by trying to allocate enormous amounts of memory. So what happens is we do a bit of estimation, we look at what an operation would cost, and when we see the whole computation is out of bounds, we don't even try to continue; we reject it. So here's an example: would you like to know how we add numbers in the EVM? Sure you do. So: ADD. The opcode for ADD is 0x01, right? The operation's name is ADD, and it consumes two items from the stack. You have a stack, just like a stack of books; the EVM is a stack machine, and it has a stack of 256-bit units. So from the stack we take two integers, read as positive numbers; we're going to add them together, turn the result into a byte array, and push it back onto the stack. If the length of the result of the computation is over 32 bytes, you just overflowed, right? So in that case we drop the overflow, we remove the bytes over 32; otherwise we just wrap the result from those bytes into an item that you push on the stack. Okay. Now, how do you push things onto the stack in the first place?
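The ADD walkthrough above can be sketched in plain Java. This is a toy, not Besu's implementation (Besu's operation works on fixed 32-byte words and charges gas as well); BigInteger keeps the 256-bit overflow wrap short.

```java
import java.math.BigInteger;
import java.util.Deque;

// Sketch of the EVM ADD operation (opcode 0x01): pop two stack items,
// add them, and wrap the result back into 256 bits before pushing it.
public class AddOperation {

    private static final BigInteger MOD = BigInteger.TWO.pow(256);

    public static void add(Deque<BigInteger> stack) {
        BigInteger a = stack.pop();
        BigInteger b = stack.pop();
        // A result longer than 32 bytes has overflowed: keep the low 256 bits.
        stack.push(a.add(b).mod(MOD));
    }
}
```

The `mod(MOD)` is the "drop the bytes over 32" step from the walkthrough, expressed arithmetically.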
How PUSH works depends on what is being sent to you. Think of the code as a tape: you have a series of bytes going by, so you read a byte, then you read what's after that byte, and so on. The PUSH operation says: don't consume anything from the stack; depending on the length encoded in the push opcode, read that number of bytes, the ones I'm telling you, from the rest of the EVM code coming in, and push them on the stack. So often a program in the EVM starts with: push the next three bytes, push the next four bytes, push the next five bytes; okay, now we've got enough on the stack, run this operation against it. Does that make sense? So one thing you can push, for example, is the address of a contract; that may be how you would pass a parameter into the computation. So if you're a protocol developer, you would be able to create new opcodes. If you're in the enterprise and for any reason you decide that you need some meaningful change to the EVM, you can go do that; however, that means that you're off-spec from what mainnet Ethereum does, and you have this additional opcode that standard EVM code knows nothing about, so you have to keep it contained to make it safe, right? We've seen people, like in 2018, make changes to Ethereum clients where they would add an opcode, for example. And if you ask the developers who do this professionally, they can read EVM bytecode; it's painful to watch, but they say it's fun. So in the end, knowing which byte is what and how to read it will be useful for you; you need to have a little bit of knowledge about that. There are some utilities out there, like the evm executable from geth, that you can run, and it will trace the whole execution of your code, and it will tell you exactly what operations are being run, one by one, and what the stack and memory are at each of those steps. It's very useful when you're debugging something that makes no sense. And since Besu is Java, you don't even need to go there: you can just set a breakpoint, execute a unit test that drives the EVM, and along the way you can see what the state is; and if something is missing on the stack that you were expecting, you'll be able to actually inspect that situation with the debugger. So what's kind of cool about these EVM versions is that Besu has all the flavors of all the forks; they build on each other, and you can see what changed at each point, what had to change. The first one is frontier; there's a registry which will show you all the operations in frontier, and there's quite a few. And then Homestead: same setup, these are the operations, and it just adds one more, DELEGATECALL. It's kind of cool because from within Java you can actually see what's going on at the level of the EVM: this is how DELEGATECALL changed the flow at the time, and we keep going. Tangerine Whistle is interesting: it's using the exact same operation definitions as Homestead, because in the end it's the same operations, but it's using a different gas calculator, because what they discovered is that some of the operations were badly underpriced, and attackers were abusing those prices to slow everyone down. So they had to change the gas calculator for that. And Spurious Dragon does the exact same thing: changing the gas calculator makes things more expensive in specific places, to encourage different behaviors in the protocol. And then Byzantium here has its own set of operations: it's registering its own additional operations, registering return data, return data size, reverts and static calls; again, just the additional things. And so, to go back to the example from earlier: when you start at block zero and go straight to the London fork, you need to see how they build up on each other, and how all the operations which were added throughout are now all available to you: Byzantium, Constantinople, Istanbul, Berlin, and then London. So London has its own gas calculator and it has some operations; what it did is it added the BASEFEE operation, which was new with the base fee per gas from the fee market change. And what's interesting is that, to make sure that people could build on top of the fee market, all those consensus changes were made available in the EVM, so you could read the base fee in a contract and make choices based on what the base fee is. So for example you can say: hey, if the base fee is over this much, cancel, do not run this operation; this transaction would be too expensive today. Pretty cool, right? So this very much consensus-level data becomes available to you as a smart contract developer, to make more intelligent decisions about what should be done during execution. That's the thinking there, right? We've got a couple more forks: I think we had Shanghai recently, which adds a few additional operations, and Cancun is the latest, with transient store and transient load alongside the usual store and load. So when those become available we'll be able to also do transient store and transient load, which have a different gas calculation; and again, this is all about economics. It's useful when you are trying to minimize how much you're spending executing your smart contract on someone else's computer. So the EVM keeps iterating, and I find this kind of neat: you can see how it has been evolving, and how easy it is to layer things on, fork by fork, for maintenance, for compatibility, for your use case. I don't think the EVM is done for the next five years
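The fork-on-fork layering described here can be sketched like this. It's an illustration, not Besu's registry API; only the opcode values (ADD 0x01, PUSH1 0x60, BASEFEE 0x48) are real, and the real chain of forks has many more steps than the two shown.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of how fork EVMs build on each other: each hard fork starts
// from the previous fork's operation table and registers only what it
// adds, e.g. London registering BASEFEE on top of everything before it.
public class OperationRegistry {

    private final Map<Integer, String> operations = new HashMap<>();

    public static OperationRegistry frontier() {
        OperationRegistry r = new OperationRegistry();
        r.operations.put(0x01, "ADD");
        r.operations.put(0x60, "PUSH1");
        return r;
    }

    /** London = every earlier fork's operations, plus BASEFEE. */
    public static OperationRegistry london() {
        OperationRegistry r = frontier(); // really: the full chain of forks
        r.operations.put(0x48, "BASEFEE");
        return r;
    }

    public boolean supports(int opcode) {
        return operations.containsKey(opcode);
    }
}
```

Swapping the gas calculator per fork works the same way: the operation set can stay identical while only the pricing changes, which is exactly what Tangerine Whistle did.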
you might not be sure of dates as much I also think in many ways we need to start figuring things out much more grander level down the road where you might want to get some of those possibilities so this and ATVMs for example also has a counterpart for if you're in classic if you're in classic went about things anyway right and they have their own heart for it so it would be totally doable for someone to say you know what we in our company have decided to customize this before use case and we're going to create all sorts of of good ourselves you can do a bunch of different things that we're going to have zero gas circulation for example if you will be free therefore I can let go of a bunch of those assumptions that we have to pay for the end and it will then create your own digital operations that may behave differently and in your own distribution has a different impact on you and you and you and you and you and you and you and you and you and you and you and you and you and you and you and you and you and you and I want to show you that this is one how could it this is composable and you maybe as a general developer in this situation you may be able to change things however you want the underlying assumption here though is that all participants in a given blockchain need to have the same set of Hopkins so right like you can't have different participants with different Hopkins yeah so it's really fun the underlying assumption is that all aspects have the same set up the same opcodes the same validation yes painful what what steps are taken to on the chain to ensure that that's actually true so like if I try to if I have a has a certain set of of opcodes if I try to join as an EVM that it has some other set of opcodes is there any kind of validation that well I guess and no so the way we do it right now is that we have focodes which are going to be stored in your Genesis file that activates a certain number and you could say that we can get a focodes during the 
exchange of status messages we check that you behave the same way. But we've had incidents. Take the mainnet split between Parity and Geth: in 2018 we implemented a refund operation, and the Parity folks were using Rust, which is pretty strict about signed versus unsigned behavior; they were using unsigned for the refund. Some subtle change to refunds was needed, and suddenly all the Parity nodes started going their own way, only accepting blocks that were being mined by Parity nodes, and all the Geth nodes went their own way, only accepting blocks mined by Geth. So it's really difficult, when you have multiple clients, to agree; even with the same client, if you have different versions you may diverge. You have to maintain this property, make sure everyone runs compatible versions of things, otherwise there may be some breakage. Sometimes it's bigger, and there's precedent where a hard fork was needed to complete the change.

Q: Wouldn't you also have to update the smart contract language, Solidity or something, to take advantage of new opcodes or behavior?
A: Yes. The version of Solidity defines what it can emit. Solidity itself is like a pseudo-JavaScript; if you look at it, you'll see you can do assembly-level stuff sometimes, and when you do assembly it's kind of cool because you go down to the opcodes and write some sequence of them, like PUSH, PUSH, ADD. It's useful when you're trying to reduce the amount of code you generate by being hands-on with it, telling the compiler exactly what opcodes you want to see. But it's also dangerous, because you may now be on your own where otherwise Solidity would generate correct code for you. So what Solidity does, it's a compiler: it's going to take your JavaScript-like
code and output the bytes that represent it. With new opcodes, a newer version of Solidity will be able to generate better, more performant bytecode using the new opcodes it's given. So the version of Solidity must be in tune with the hard fork you're using, so that you don't compile down to an opcode that does not exist on your chain. That works hand in hand with the EVM, which is your base machine: a stack machine with memory and call frames. What's kind of cool about the EVM is that contracts can make calls to each other. If you've written Solidity at some point, you're like, oh, it's just one object calling another, but what's really happening is that your call is creating another call frame inside your call. You can have some level of recursion, some level of contracts calling each other, and that's useful for a variety of things. In Besu this manifests in the form of message calls: contracts call each other, so you can compose a lot of code. You can also talk about precompiled contracts, which are extremely useful. Say you have an HSM running in your data center, and you have to call the HSM every time you sign: you pass in some number of bytes and get a signature back, and the security department has told you that no signing happens unless it goes through the HSM. Therefore, why don't you build a precompiled contract, stored at a specific address, which will call some Java code that connects to your infrastructure and performs signing in a way that is compliant with your security policy? That's one way to do it; creating a new opcode is another, and I'll show you that. So, I've talked about JSON-RPC and all the families of methods that Besu exposes. How about you create your
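The precompile idea sketched in code: code living at a fixed address that the node executes natively (here, plain Java) instead of as EVM bytecode. The registry shape, the address, and the "HSM" stand-in are all made up for illustration; a real deployment would call out to actual signing infrastructure, and Besu's real precompile machinery looks different.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Sketch of a precompiled-contract registry (illustrative, not Besu's API).
public class PrecompileDemo {
    interface Precompile {
        byte[] execute(byte[] input) throws Exception;
    }

    static final Map<String, Precompile> REGISTRY = new HashMap<>();

    static {
        // Hypothetical address for our "signing" precompile.
        REGISTRY.put("0x0000000000000000000000000000000000000100", input -> {
            // Stand-in for the HSM round-trip: just hash the payload.
            return MessageDigest.getInstance("SHA-256").digest(input);
        });
    }

    /** Contract calls to this address are routed to Java, not to EVM bytecode. */
    static byte[] call(String address, byte[] input) throws Exception {
        Precompile p = REGISTRY.get(address);
        if (p == null) throw new IllegalArgumentException("no precompile at " + address);
        return p.execute(input);
    }

    public static void main(String[] args) throws Exception {
        byte[] sig = call("0x0000000000000000000000000000000000000100",
                          "payload".getBytes(StandardCharsets.UTF_8));
        System.out.println(sig.length); // 32 bytes from SHA-256
    }
}
```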
own? And you can do that. Here's the factory class: this is the code used to generate all the JSON-RPC methods. What's interesting is that it's handed all the domain objects you'd like to have, the ones that can answer questions, and that's given to the JSON-RPC server; here we are creating all the families of JSON-RPC methods I mentioned earlier, all the methods for admin, eth, debug and so on. eth is kind of the oldest family of all. Speaking of which, all the eth methods you call, eth_blockNumber say, need to be given the domain object that lets them query the blockchain, and it's just a fairly simple object. So if we wanted to, we could create our own: eth_blockNumber itself is going to be a JSON-RPC method, and to respond it implements an interface. It's got a name, it has a response method, and what it does is call out to the domain object: hey, what's my chain head, what number, give me the value as a long, and then return that in a success response. Creating your own RPC method is not that crazy, right? You go in there and create your own. The question is then how you inject that into the RPC server in a clean way, but we can definitely do something like this. Okay, now here's an exercise I've used, which is to create a new opcode. The idea is that our new opcode is a shared secret between parties who talk to each other; it will allow us to validate that only clients with the shared secret can validate the blocks we care about. For configuration we're going to have an environment variable, and we'll also allow revising it at runtime. So I'm going to keep it simple here and say we have a SharedSecretHolder, a plain object that you can then pass into your
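The JSON-RPC method pattern described above can be sketched roughly like this. The names approximate Besu's conventions (`JsonRpcMethod`, `getName`, a response that queries a domain object), but these interfaces are simplified stand-ins written for this sketch, not the real plugin API.

```java
// Sketch of the "named method + domain object" JSON-RPC pattern.
public class RpcMethodDemo {
    /** Stand-in for the blockchain domain object that answers questions. */
    static class Blockchain {
        private final long headBlockNumber;
        Blockchain(long headBlockNumber) { this.headBlockNumber = headBlockNumber; }
        long headBlockNumber() { return headBlockNumber; }
    }

    /** Minimal shape of a JSON-RPC method: a name plus a response. */
    interface JsonRpcMethod {
        String getName();
        Object response();
    }

    /** eth_blockNumber: return the chain head's number as a hex quantity. */
    static class EthBlockNumber implements JsonRpcMethod {
        private final Blockchain blockchain;
        EthBlockNumber(Blockchain blockchain) { this.blockchain = blockchain; }
        public String getName() { return "eth_blockNumber"; }
        public Object response() {
            // Ask the domain object for the head number, format as 0x-hex.
            return "0x" + Long.toHexString(blockchain.headBlockNumber());
        }
    }

    public static void main(String[] args) {
        JsonRpcMethod method = new EthBlockNumber(new Blockchain(255));
        System.out.println(method.getName());  // eth_blockNumber
        System.out.println(method.response()); // 0xff
    }
}
```

The server side then only needs a registry from method name to `JsonRpcMethod`, which is exactly why injecting a custom method is mostly a wiring problem.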
runner builder, and you need to go and propagate it into the RPC factory too. This new SharedSecretHolder is just passed along: when you construct the configuration, it gets passed down as well, and we can set its initial value based on what has been passed through the CLI; imagine a flag that sets the initial value at startup. So: domain object, configuration, our initial shared secret, pass them in, and it gets passed into our operation registry, and it also gets passed to the JSON-RPC side. There we create our own JSON-RPC method, just like the one I showed you, that allows us to set a new shared secret: it has its method name, it gets the new secret from the parameters of the request, and it sets it. Very simple, right? So now I have this holder object that I inject into everything: it can be set through a command line argument, or through a JSON-RPC method, and I can change it later while the node runs; it's the same SharedSecretHolder throughout, and you can see here we're adding this additional JSON-RPC method to the set. So now we define our own fork: we're going to create one version of the EVM, our own mainnet fork for the workshop, with our additional operation registered. Our additional operation is our shared secret operation, right here, and in the end it's going to be a fixed cost operation that takes a value, checks that it matches our shared secret, and either succeeds or errors. Allowing this to be changed means you can pass in an initial shared secret, which I can set through a command line argument, and the node will behave
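Here is the workshop exercise condensed into one self-contained sketch. The class names, the fixed cost, and the stack representation are all illustrative, not Besu's actual operation API: a mutable holder threaded through configuration, and a fixed-cost operation that checks a popped value against it, so rotating the secret changes the operation's result.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the shared-secret opcode exercise (names and API are illustrative).
public class SharedSecretDemo {
    /** Holds the secret; mutable so an RPC method or CLI flag can change it. */
    static class SharedSecretHolder {
        private volatile String secret;
        SharedSecretHolder(String initial) { this.secret = initial; }
        void set(String newSecret) { this.secret = newSecret; }
        String get() { return secret; }
    }

    /** The custom operation: fixed gas cost, succeeds only on a match. */
    static class SharedSecretOperation {
        static final long FIXED_COST = 3; // made-up fixed cost
        private final SharedSecretHolder holder;
        SharedSecretOperation(SharedSecretHolder holder) { this.holder = holder; }

        /** Pops a candidate value; pushes "1" on a match, "0" otherwise. */
        long execute(Deque<String> stack) {
            String candidate = stack.pop();
            stack.push(candidate.equals(holder.get()) ? "1" : "0");
            return FIXED_COST;
        }
    }

    public static void main(String[] args) {
        SharedSecretHolder holder = new SharedSecretHolder("hunter2"); // e.g. from a CLI flag
        SharedSecretOperation op = new SharedSecretOperation(holder);

        Deque<String> stack = new ArrayDeque<>();
        stack.push("hunter2");
        op.execute(stack);
        System.out.println(stack.peek()); // 1: secret matches

        holder.set("rotated");            // e.g. via our custom RPC method
        stack.push("hunter2");
        op.execute(stack);
        System.out.println(stack.peek()); // 0: old secret no longer valid
    }
}
```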
differently based on the configuration I give it. So if you were to have multiple parties, for example, and you want a rotation of secrets, you can do that: set it, then replace it later; once the shared secret has changed you get a different result, which will invalidate blocks, and you'd have to coordinate around that, but it shows you the extensibility of the approach. You can configure and set things over RPC or command line arguments, and change the EVM's behavior going forward. One more step, and then a contract that interfaces with this. This is Kotlin code; I have a build step that generates the wrapper code for us. You see our custom shared secret opcode, 0xf6, emitted as raw bytecode, and the tooling generates the wrapper that you can use to deploy the contract, pass the argument to it, sign the transaction and send it to the node. Now, a caveat: this is one way to do it, but you're taking yourself to a different place where you're creating your own client, your own opcodes, your own custom things, and mainline Besu keeps moving; the next time there's a release, you're going to have a hard time keeping up. So Besu actually offers a number of extension points that allow you to do this more cleanly, and I'll get a chance to show you that as well; let's keep questions for the end. So there are better ways to do this: there's a plugin API that lets you do all sorts of things, create your own behaviors. What it really allows you to do is plug your own additional elements into the domain model. All those classes are what's exposed to plugins, and most of them have a fixed API that cannot be changed: if you change any of the classes in here, if you look at the top of the plugin API, we
have a hash that's checked, and if that hash were to change, it's a breaking change of the API, and that breaks all the current plugins. That is actually on purpose: as much as possible, we want to be backwards compatible, that's the principle, so I know the committers pay a lot of attention to this hash check; at this point the plugin API is stable. So that allows you to plug into these elements inside Besu: for example, block hashes, logs, transactions, transaction receipts, things like that, and it lets you perform additional things, like running your own service that answers requests. You can create additional elements and extend Besu at the edges, and all you need to do is write a plugin. We have a plugin for RocksDB: RocksDB is the storage of choice for Besu, it's stored on disk, it's visible in the project; we interface with RocksDB at a fairly low level, and there's a lot of activity there, some good discussions on the merits of different settings and how you can configure it.

Q: So for example, could you swap RocksDB for something else?
A: Yes, and it's good news: you can see RocksDB itself is a plugin, and it takes some options, which is kind of cool because the RocksDB plugin gets its own configuration, so you can make it do things for you, and it registers itself as a provider of key-value storage, because that's what it is at the end of the day: it persists keys and values. So I wanted to try using a different project for storing all this, just for this presentation, to show you that it's possible to extend Besu with a new spin on the storage layer, which is good news, right? If you were to have an Oracle server, for example, and your data has to live in Oracle because that's the database your company runs with total confidence, that's where you'd get a real boost.
So this defines another plugin. This is the plugin descriptor; it plugs into the plugin registration structure. Here we register it at the top; I can run it standalone for testing, or we can run it as part of the Besu command, where we just register it as well. Here we have the Gradle build to build it; here is the actual key-value storage, which, as you'd expect, deals with key-value pairs; and then you have the factory, which allows you to build a key-value storage, just like Besu boots with RocksDB. This lets you provide the key-value storage at boot time. So here there's a cache manager that you define; it gives you caches, you set some properties on each cache, and it configures the cache itself. In this case, for example, I was saying: store this segment inside the data folder, store that other segment next to it, put those two caches together inside the same folder, with some compression, some cache size, some on-disk size; all those things are exposed as configuration. What's kind of cool is that the cache is essentially a layer around the underlying storage, so it lets you apply a level of caching and in-memory optimization: instead of going to disk every time we ask for values, it can answer from memory and buffer a little bit. The other thing it does is write-behind: it has a thread that batches the writes, so you're not constantly blocking on them. All those things just have to comply with the key-value storage factory interface, and the runner itself has a registry available for plugins to register their factories. So again: the factory is registered for key-value storage, for privacy storage as well, and it takes some options, so you can
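The storage-plugin shape can be sketched like this. The interfaces are simplified stand-ins for the idea, not Besu's real plugin API: the node only needs something that stores key-value pairs per segment, plus a factory that builds one at boot time, which is exactly the seam that lets you swap RocksDB for a cache layer, Oracle, or anything else.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a key-value storage plugin (interfaces simplified for illustration).
public class StoragePluginDemo {
    interface KeyValueStorage {
        void put(byte[] key, byte[] value);
        Optional<byte[]> get(byte[] key);
    }

    /** What a node asks a storage plugin for: a named factory, one store per segment. */
    interface KeyValueStorageFactory {
        String getName();
        KeyValueStorage create(String segmentName);
    }

    /** Toy backend; a real plugin would wrap RocksDB, a caching layer, etc. */
    static class InMemoryStorage implements KeyValueStorage {
        private final Map<String, byte[]> data = new ConcurrentHashMap<>();
        public void put(byte[] key, byte[] value) { data.put(hex(key), value); }
        public Optional<byte[]> get(byte[] key) { return Optional.ofNullable(data.get(hex(key))); }
        private static String hex(byte[] b) {
            StringBuilder sb = new StringBuilder();
            for (byte x : b) sb.append(String.format("%02x", x));
            return sb.toString();
        }
    }

    static class InMemoryFactory implements KeyValueStorageFactory {
        public String getName() { return "in-memory-demo"; }
        public KeyValueStorage create(String segmentName) { return new InMemoryStorage(); }
    }

    public static void main(String[] args) {
        KeyValueStorageFactory factory = new InMemoryFactory();
        KeyValueStorage blocks = factory.create("blocks"); // one store per segment
        blocks.put(new byte[]{1}, new byte[]{42});
        System.out.println(blocks.get(new byte[]{1}).get()[0]); // 42
    }
}
```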
tune things. The options for RocksDB let you tweak the internals, things like background compaction, all the things you like to tune when you're an operations person trying to squeeze 5% more out of it; you can get at all of that. And the storage plugin itself is just a plain domain object, so you could create plugins that behave completely differently: you could be using something purely in memory, you could be using a different database entirely. There's a wealth of options here, and that goes with this afternoon's discussion about how Besu is composable and extensible, with the ability to change its storage strategy, for high TPS or just to play around; it gives you options you may not have otherwise.

Q: Could I, for example, change to LevelDB?
A: You could change to LevelDB, of course, absolutely.
Q: Does that make a performance difference?
A: Not really; I think RocksDB has surpassed LevelDB. You can try it; it's used in Teku for the consensus layer. The problem is that LevelDB is not really maintained as a database anymore, so that's why we stuck with RocksDB, and the recent RocksDB performance improvements are making a noticeable difference. To recap for people listening remotely: the question was whether LevelDB would really give a boost, and the answer is that the latest performance improvements have made RocksDB much better for us.
Q: One more question, also as feedback, since you went through everything: what's the scope of the plugins we can write?
A: The scope of the plugins is actually pretty restrictive; it has to be exposed as part of the API in general, as you can see, so you would need to
work within that. There's a way to load dynamic plugins from a folder, so you keep some control over what you're letting in; it goes through jar loading, but I'm not too familiar with that part. I think you can do storage, you can do metrics, and you can contribute your own JSON-RPC methods using plugins, for example. So there are some specific extension points, some options.

Q: Has work been done toward standardizing groups of operations?
A: So your question is: is there work toward standardizing multiple operations into a group of operations, to kind of cap the cost? Nothing's been done at that level that I know of; maybe in some of the defaults, and people would be happy to have the discussion, because they are constantly tweaking how gas is calculated.
Q: I'm thinking in particular about data loading and data storage: those repeated loads, where the first load costs more and every subsequent load is cheaper.
A: Right, there are warm and cold costs in the EVM, and, to Antoine's point, they're constantly tweaking what is considered warm and what is considered cold. I guess you're kind of bound by the protocol, so you can't have a different cost; well, actually you could, it's just: why bother breaking compatibility? If you're running a private network, you can delineate the gas and the costs that you want anyway; you can kind of charge what you like.
Q: I imagine there are ways you could write a smart contract to batch a workflow into a single transaction?
A: Yes, that's allowed, but the problem, as she's saying, is the data loads: there are ways to batch, or to upgrade a smart contract, so that you optimize your gas usage. There are some great threads out there where people work through the best way to load things, which also goes back to my discussion earlier: when
you do loads inside a transaction, the gas cost changes depending on what is warm and what is cold. If you have ten transactions and each ends up loading the same storage slot over and over again, you're paying a bunch of gas for nothing; you'd rather have one transaction that does a number of things together, as much as possible, so you reduce the amount of work. And that might change again in the next fork, with transient data storage between calls; stay tuned. It's been a journey: the whole discussion around gas costs is extremely intricate, and it ends up going down to economics, looking at what people are actually doing on mainnet. I know there were discussions at some point where folks realized that destroying a contract and rebuilding it repeatedly inside one transaction gave them some gains; there are all sorts of very unhealthy behaviors that come from this type of optimization. We're not too concerned by that, thankfully, on private networks, where it doesn't matter. But to some extent, some of the difficulty is that those decisions will keep being made upstream, with continued fine-tuning of those parameters; you should be aware that there will be continued movement around those particular elements. It's just a fact of life.

Q: One annoying thing: doesn't the node have the same problem where the state itself, stored on disk, keeps growing and may get corrupted?
A: So you're saying that one of the biggest annoyances is that the state is stored on disk and may be corrupted, and then you have to resync: the node has to rebuild all that state. And you might think, I'll just rewind the chain a hundred
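The warm/cold point above is just arithmetic, so here it is worked through with the EIP-2929 numbers I'm confident of (2100 gas for a cold SLOAD, 100 once the slot is warm); the access set resets between transactions, which is exactly why batching reads into one transaction pays off.

```java
import java.util.HashSet;
import java.util.Set;

// Warm vs. cold storage reads, using EIP-2929 gas numbers.
public class WarmColdDemo {
    static final long COLD_SLOAD = 2100; // first access to a slot in a transaction
    static final long WARM_SLOAD = 100;  // every subsequent access in the same transaction

    /** Cost of reading the same slot n times within ONE transaction. */
    static long batched(int n) {
        Set<String> warmSlots = new HashSet<>();
        long total = 0;
        for (int i = 0; i < n; i++) {
            // add() returns true only the first time: that read is the cold one.
            total += warmSlots.add("slot0") ? COLD_SLOAD : WARM_SLOAD;
        }
        return total;
    }

    /** Cost when each read happens in its own transaction: always cold. */
    static long separate(int n) {
        return n * COLD_SLOAD;
    }

    public static void main(String[] args) {
        System.out.println(separate(10)); // 21000
        System.out.println(batched(10));  // 3000  (2100 + 9 * 100)
    }
}
```

Note this only counts the SLOADs themselves, not the per-transaction base cost, which makes the gap in favor of batching even larger in practice.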
thousand blocks, yes, but the state is the thing that's huge; you see the same thing when you run Geth, the state build-up is enormous.

Q: Is rebuilding that state lazy, or is the node constantly writing to RocksDB?
A: It's constantly writing: as each block executes, all the changes go to RocksDB. What differs is what the node keeps around, and that's different between forest and Bonsai. With forest of tries, your storage just has to keep growing: you keep every trie node in storage, because that trie keeps accumulating, and some of that data is current while some of it is stale. Bonsai is able to track only the current state, so storage stays proportional to what's actually live, growing roughly linearly instead of exponentially.

Q: Another question, since you mentioned Bonsai: is it possible to run an archive node on Bonsai?
A: Well, not really, because an archive node means keeping every version of every value, the state at every point in time. That's actually a very good question I've had to deal with: someone comes and says, I'd like to know how much money I had two years ago, so we need to answer an RPC call, eth_getBalance at an old block. If you have an archive node, you keep a copy of all the values across time and you're able to answer that, but most clients only let you go back a limited window by default, and that's just
a tradeoff. I mean, you can do it; it's just that you're taking a CPU hit versus a storage hit. When you have an archive node that's using Bonsai, you'd have to recreate the state all the way back, and there's a maximum number of layers that we load, which you can parameterize yourself. I don't know what would happen if you made it 16 million blocks to go back to genesis, but it would have to use the trielogs and create the differential between the current world state and the previous world state, which at that point is just not useful. The trielogs themselves are really small; the numbers are good, but they're only good on paper, because recreating old state would be so costly that you'd rather just run a forest archive node. The Bonsai optimization is pretty neat, but, to be clear, the confusion is real: how Bonsai works is confusing, and I would even add that the distinction between an archive node and a full node is confusing, still to this day; if you ask ten different developers what an archive node is, you know you have half an hour ahead of you.
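The Bonsai trade-off described above can be sketched as a toy, heavily simplified and not Besu's actual data structures: keep only the current flat state plus one small diff ("trielog") per block, and recreate historical state by rolling the diffs backwards, which is cheap on disk but costs CPU per layer you rewind.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the Bonsai idea: flat current state + per-block diffs.
public class TrieLogDemo {
    /** One block's diff: key -> value it held BEFORE this block changed it. */
    static class TrieLog {
        final Map<String, Long> priorValues;
        TrieLog(Map<String, Long> priorValues) { this.priorValues = priorValues; }
    }

    static final Map<String, Long> currentState = new HashMap<>();
    static final List<TrieLog> logs = new ArrayList<>();

    /** Apply a block's writes, remembering the prior values as a trielog. */
    static void applyBlock(Map<String, Long> writes) {
        Map<String, Long> prior = new HashMap<>();
        for (Map.Entry<String, Long> e : writes.entrySet()) {
            prior.put(e.getKey(), currentState.getOrDefault(e.getKey(), 0L));
            currentState.put(e.getKey(), e.getValue());
        }
        logs.add(new TrieLog(prior));
    }

    /** Roll diffs back to see the state as it was at the START of a block. */
    static Map<String, Long> stateAtBlock(int blockNumber) {
        Map<String, Long> state = new HashMap<>(currentState);
        for (int i = logs.size() - 1; i >= blockNumber; i--) {
            state.putAll(logs.get(i).priorValues); // undo one block's changes
        }
        return state;
    }

    public static void main(String[] args) {
        applyBlock(Map.of("alice", 10L)); // block 0
        applyBlock(Map.of("alice", 7L));  // block 1
        System.out.println(currentState.get("alice"));    // 7: current flat state
        System.out.println(stateAtBlock(1).get("alice")); // 10: one layer rewound
    }
}
```

Each trielog is tiny (only the touched keys), which is why the storage numbers look so good on paper; but answering a query far in the past means replaying every layer in between, which is the CPU hit the speaker describes.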