Okay, you're live. Awesome, let me pull up my screen share quickly. There's gonna be some back and forth on some screens, so just bear with me for a moment. I'm gonna move this stuff out of here. Right, so let me know. I'm gonna go into presenter mode. No issues here? Oh, it seems like I can see it fine. Perfect, awesome. All right, we're ready to go. Two minutes after the hour, so thanks everyone for coming. My name is Matt Nelson. I work at ConsenSys, and I'm a product manager for the ConsenSys teams that help develop Hyperledger Besu. I know you're probably all familiar with Besu if you're here for the topic of today's workshop. We're gonna go through, basically: where is Besu after the merge? How did the merge impact client development? And how does that allow you to get involved with public networks? We think that now that the merge has completed and we've switched to proof of stake, there'll hopefully be a lot more interest in developing on public networks. So I'm gonna talk about what that means from the perspective of running and operating a node on public networks using Besu, and connecting to your Besu RPC to do different tests on things like smart contract development. I'll just skip to my agenda slide, which actually goes into this in detail. So again, we're gonna be talking about the merge. I'll do a quick overview of the narrative. I won't get too deep into this unless there are specific questions, because I'm sure a lot of folks are familiar or have heard some of the other talks that I or others have given around this, but specifically how it's impacting Besu. I'm gonna talk about Besu in the context of staking and validating on the Ethereum public network. And then I'm gonna talk through the more technical part of the workshop. My intention is not necessarily to have you follow along right this moment, just because syncing nodes frankly takes a few hours to do.
So there's no point for us to sit on the call while that happens. I have nodes in sync, and I have nodes that I'm going to spin up live to show you both sides of what that looks like, how to connect to the networks, and how easy it really is to connect to and work with public Ethereum networks. I'm also gonna talk about how to connect wallets, test apps, test your own applications that you might be developing, and how you can even potentially create public-ish testnets among your own nodes, depending on what computing resources you have available. Then I'm gonna touch on development environments really quickly, but mostly that's a next-steps thing; I'm gonna gloss through what it looks like to continue smart contract development if that's what you're interested in, but mostly I'm gonna focus on items two through four today. Whoops, I'm going the wrong direction, actually. Before we get started, I also wanted to talk about Besu as an open-source client and how we are actively looking for contributors. I know that folks are active in our Discord, learning and testing things out, going through node infrastructure, learning the dos and don'ts here, but frankly, we are actively looking for contributors. We want you. Why would you want to get involved with the development of Hyperledger Besu? One, you can shape the direction of the project. We want to make sure that what we're doing is valuable. We are focusing in this instance on public networks, but we want Java developers of all skill levels and all types, looking at all use cases, to really make Besu the best client we can have. Related to that, Besu now makes up roughly 10-plus percent of the Ethereum network on public mainnet, so we have a duty to the network to do the best work that we can.
So we're always looking for new engineers to help smooth out some of the edges on Besu. Of course, you'd help us decrease time to market and accelerate the work of the project. You'll be able to be a part of the community and leverage tons of people to get logs, to get information, to help develop the client in the way that you see fit. So we're really looking to engage more community users. Frankly, we've had a huge influx of users in relation to the merge, and it's becoming very hard for organizations like ConsenSys to work with all these different folks. So we're always looking for new people to get involved, whether that's just from a community engagement perspective or literally developing against the client. We'd love to see more people get involved here to help fix bugs, and you'll learn a lot, not only from running nodes and working with community members, but because you get to work with developers who've been building Besu from the very beginning, and learn a lot about software engineering from the perspective of blockchain protocol engineering, which is very, very unique and very, very challenging in some areas. And there are a lot of good first issues to get you there. Of course, you'll gain experience that'll be great on a resume for web3 jobs, blockchain jobs, software engineering, really getting your hands dirty on open-source development. And of course, we have a worldwide group of people who are interested and want to help, so participating in the open-source community is very valuable. Another plug: if you work on mainnet-focused things for Hyperledger Besu, you can literally be paid for that work through the Protocol Guild. There are a number of grants and incentives being run by organizations like the Ethereum Foundation. So if you work on Hyperledger Besu and you improve it from the perspective of mainnet development, you can be paid as a part of those programs.
So I encourage you, and I'm not gonna do too much of a plug here because we are independent of that program, it's not necessarily related, but there are incentive programs being run that will potentially pay out contributors. So it's definitely worthwhile looking at those if you already have the skills and the time. I would encourage you to look some of those up, and you can email me (my email is on the slides) or chat on Discord, whatever; we can talk more about what that specifically looks like if there's interest. Last thing: don't wait for an invitation, open a PR. You can fork Besu and open some PRs; people will review them, and there's nothing too small. I make PRs for logging changes all the time, like one-liners. Feel free to lurk, check out our channels on Discord, check out the wiki pages (there's a lot of good information there), and however you think you wanna get involved, we welcome all of it. So feel free to message me or email me, or David as well. David created these slides and has a lot of good background on the Hyperledger Foundation, so he can help you get plugged in if you'd like. I'm gonna gloss over this, but basically, we use GitHub for managing the Besu repository, we have a wiki for proposals and other information that we share, and we use Discord a lot for chat. So use these tools, check them out. Here are some links; I'm gonna share the slides after, so don't worry about collecting all these links, because there's a ton of links throughout the whole thing, but there's a bunch of stuff here that's really good. And there's the good-first-issues tag on GitHub, too. Those are some small changes that, like I said, can get you rewarded and get your hands dirty on Besu, which will be really useful for everyone. All right, I'm gonna dive right into it now: the merge and Besu. First of all, what is the merge, and what is it trying to accomplish?
So I wanna give some background before I dive into the technical specifics. The merge, which happened about two and a half weeks ago, was intended to build a more sustainable ecosystem around Ethereum public networks, one that is more secure and far more carbon-efficient compared to the previous iteration under proof of work. The goal of the merge was also to create a bigger diversity of clients and a stronger decentralization vector by removing the hardware requirements to run validating nodes. Previously you would need very intensive mining hardware; now you just need the ability to run moderately powered virtual machines, or Besu can run on machines with reasonable amounts of RAM and storage. So the requirements have gone down a lot. Yes, you still need to stake ETH, but there are many technologies that allow that magic number of 32 to be brought down a lot lower. So there's real openness of participation in proof of stake, which we're really looking forward to; it's helped Besu grow dramatically as a client, and it's really helping the decentralization of the network and its security. It's much more expensive to attack Ethereum now. Energy efficiency is the second key tenet. This is really what it's all about, the raison d'être of the merge: removing the proof of work mechanism for consensus, bringing in the proof of stake mechanism, and eliminating 99.9% of Ethereum's carbon usage overnight, which many believe is the largest reduction of carbon by any technology in human history, which is quite exciting. Although when you contrast that against the fact that proof of work was horribly wasteful, it's not quite so impressive, but hey, we gotta make incremental progress here. The third is about transitioning: basically, the developer experience is exactly the same before and after the merge. We worked hard to make sure that the technology worked in the same way.
I'll explain more about what this means when I talk about Besu as a part of this tech stack, but in reality, a lot of hard work went into ensuring that the technology would remain the same before and after, that a lot of the APIs, the connection points, and the smart contracts would not have to change, and there's very, very little that actually changed. So this is really a resounding victory for developers and for people in the ecosystem, who don't have to rebuild, and it signals more about what proof of stake has to offer going forward. Last is the economic model update. I won't dive too deeply into this, but basically the economic model of Ethereum has changed. This relates to the issuance of Ether, changes to the fluctuation of supply, and, like I mentioned, a security model change, where the 51% needed to control the network is no longer hash power; it becomes a monetary incentive, which is in the billions and billions of dollars at this point. If there are questions at the end, we can go deep into that, but I'll pause on that one for now. What does this mean as far as technology is concerned? I know you've probably all heard, but the reason it's called the merge is because we initially had the proof of work chain in Ethereum, and we had the Beacon Chain, which was running separately but not actually performing consensus for Ethereum mainnet. We merged those two chains together by signaling a transition for these two pieces of software that were previously independent: Ethereum 1 software like Besu, and consensus layer software. In this example, in today's workshop, I'm gonna use Teku as our example, but the merge basically signaled those two pieces of software to work in tandem to run consensus for the network.
This is an oversimplification, but essentially, new sets of APIs were created to allow the execution clients, like Besu, like Geth, like Nethermind, to speak with the consensus clients and to allow them to steer the ship and secure the network, while the execution of smart contracts and transactions still happens in the Eth1 layer, or, as it's now called, the execution layer. The reason for this is that these Eth1 clients, the execution clients, were good at what they did. We didn't need to throw them away. We didn't need to change what they did or create some new Frankenstein's monster. We simply removed, or shut off rather, the portions of the code where the execution clients handle consensus, and we've shifted that over to a new client. So the new stack in proof of stake Ethereum is two clients, and there are many combinations, and they all interoperate based on something called the Engine API. I'm gonna get into this a little more in some later slides, but basically you need two things; previously you needed one. And the gains are all on the carbon efficiency side; scale was not the focus of the merge. Scaling will come later on, and indeed is arriving now, from things like rollups and sharding. I encourage you to do the research on those, but essentially we have other technologies to address the scale and data availability problems that are coming down the pike now. Layer twos are already gobbling up a lot of the transaction volume on Ethereum mainnet, which is only a good thing, because it's driving the price of gas way, way down, and it's really increasing participation in the network. But all the good security guarantees come from these two things, the consensus and execution clients. So proof of stake merged.
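To make the Engine API idea a bit more concrete, here's a rough Python sketch of what one of those calls looks like on the wire. The consensus client authenticates to the execution client's Engine port using an HS256 JSON Web Token derived from a shared hex secret. The secret value and the method shown are purely illustrative, and nothing is actually sent to a node here:

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def engine_jwt(secret_hex: str) -> str:
    # The Engine API authenticates with an HS256 JWT built from a shared
    # 32-byte hex secret (the file both clients are pointed at).
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"iat": int(time.time())}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = hmac.new(bytes.fromhex(secret_hex), signing_input, hashlib.sha256).digest()
    return f"{header}.{claims}.{b64url(sig)}"


# A JSON-RPC request body shaped the way a consensus client would POST it
# to the execution client's authenticated Engine API port:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "engine_exchangeCapabilities",
    "params": [[]],
}
token = engine_jwt("aa" * 32)  # placeholder secret, for illustration only
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
```

The important point is simply that this is ordinary JSON-RPC over HTTP plus a lightweight auth token, which is why so many independently developed client pairs can interoperate.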
Now we have this one thing, and we're adding on new technologies, now that the merge is complete, to work on scaling and data availability and other problems that we have in the protocol right now. Whoops. Quick merge timeline. I'm really not gonna get too deep into this, but the Beacon Chain was launched quite a while ago, with the staking deposit contract, around 2020. So we've had this launched on mainnet for about two years. There were a lot of hard forks in there, a lot of interesting development, but this is how protracted the timeline got at the end, where we had one, two, three test merges, the Ropsten, Sepolia, and Goerli testnet merges, and then we had the big event in September of 2022. There's more information at this link here if you're interested in diving into the timeline, but it's been a long, long journey to figure this out. Now, how does this impact Besu? It really comes down to the fact that Besu is no longer a do-it-all kind of client in terms of Ethereum proof of stake. When it comes to other consensus mechanisms, like proof of authority or private network consensus mechanisms, Besu remains largely unchanged. Proof of work is also still available within Besu to support networks like Ethereum Classic. But in terms of Ethereum proof of stake, we've changed the way the code operates to, like I said, use this Engine API to communicate between any number of consensus clients and the execution client, in this case Besu. So Teku, whose little logo I've included here, is one of the consensus clients, but there are multiple others: Nimbus, Lighthouse, Lodestar, Prysm. There's a diversity of clients, and they're all specced against the same Engine API standard to ensure interoperability, and that's the same on the execution client side. The networking stack remains largely the same. The JSON-RPC API remains largely the same.
So developers who've built tooling around Besu, around RPC, and around these APIs don't need to change their approach. Instead, they get more APIs via the consensus layer REST API, and there are some new endpoints on the execution side which give us more data, but it's largely unchanged. And again, the reason for that is that we wanted the developer experience to remain the same. We didn't wanna have to throw away infrastructure, and we wanted clients that already exist in private networks, consortium networks, sidechains, what have you, to continue to be compatible with Ethereum mainnet, due to the fact that the execution client remains basically the same in both of those capacities. So this was done intentionally to keep compatibility, at least on the Besu side, between private networks and mainnet, and to have those standards allow things to cross those boundaries, which is pretty awesome in my opinion. It all comes down to: what does this mean for enterprise? What does this mean for users? Besu is an execution client now, but public network participation is rewarding. You can literally be paid to secure the network. You can learn to operate node infrastructure. You can secure the network by staking, and engage new communities, developers, and users. A pretty big lure here is that the staking rewards are roughly around 12% now. That number is gonna go down; these are annual returns, rather, and these numbers will go down as more validators enter the network, but with things like MEV and some of the other innovations, the staking rewards are pretty high right now. And I think the time to learn and understand what this means for different enterprises who are interested in Ethereum is now, because the network is just getting used to proof of stake.
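Since the JSON-RPC surface is unchanged, hitting a Besu node looks exactly as it did pre-merge. Here's a small Python illustration of the request and response shape; note the response body here is canned rather than fetched from a live node, and the block number is just an example value:

```python
import json

# A standard Eth JSON-RPC request, identical before and after the merge.
# A real client would POST this to the node's HTTP RPC endpoint
# (localhost:8545 by default for Besu).
payload = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_blockNumber",
    "params": [],
})

# A typical response body; quantities come back as 0x-prefixed hex
# strings, so decode them with int(..., 16).
sample_response = '{"jsonrpc":"2.0","id":1,"result":"0xe9a3b5"}'
block_number = int(json.loads(sample_response)["result"], 16)
```

Any tooling you already had pointed at an Eth1 node keeps working against the post-merge execution layer without modification.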
It's running successfully and things are working, but the infrastructure is still a little bit fresh, and there are a lot of things to understand before staking that I think would be good for public network participants to know. We're gonna dive into a lot of that today. And it's a new opportunity, now that Ethereum is sustainable, it's scaling, and it's open for business. You don't have to worry about deploying new applications on public infrastructure, because there's no more carbon boogeyman over the shoulder of any business that tries to deploy, for example, NFTs onto Ethereum mainnet; that has been totally removed. So I think the barriers to entry are much, much lower for building on public infrastructure. And I think this is only a good thing, encouraging more people to not set up costly infrastructure and to think through: how do I engage public network users? How do I engage the liquidity that already exists on these networks? New use cases, new paradigms, without having to, again, throw things away. I already have knowledge of Besu; I don't need to throw that away. I already potentially have infrastructure in place, and all I need to do is learn how to operate this new stack in a way that makes sense, and learn some of the opportunities that are in the space. I believe, now that the merge has been completed, that enterprises can feel secure in participating in public networks. Again, there's a lot of nuance to what the actual smart contract development looks like, but we won't touch too much on that; from an infrastructure perspective, though, I wholly feel this is the case. And I think that now is the time to start looking at: what data can we pull from the network? What apps can we deploy? What new business models can we try out? Things like that. So, that long-winded intro. I'll pause for a hot second.
If there are any questions in the chat... I see we have some questions around the EVM and zero-knowledge EVM. So quickly, this is a little bit off topic, but I appreciate the question. Zero-knowledge proofs and zero-knowledge EVMs: we are actively looking at what that looks like for Besu. Like I said, and I'll get into this when we reach the slide after this one, we are actively looking at expanding the picture around Besu execution to include more than just Ethereum mainnet, covering EVM-based public networks broadly. That means a lot of things, but in reality it means we're going to continue to build off these pieces; I'll get there when we get to the slide. Anyway, to reiterate, why is participation rewarding from a staking perspective? Currently 5.2% APR, as shown here, which is a pretty big number post-merge, and again, we're seeing a little bit more now that block rewards are involved, up to around 12% with MEV, which I won't touch on; there's essentially additional software you can add on to get even more value, so it could be more than 12%. These things are valuable, right?
Depending on the size of your stake, there's money to be made, and there's a lot to be learned here. Soon there will be the ability to withdraw your stake. A lot of people think that might actually remove the incentive to stake on the Beacon Chain, but I think, on the contrary, it's gonna bring more validators online, because they'll have the ability to enter and exit at will, and I think we'll see more participation as people become less burdened by the process. But these rewards are large, and there's real potential: the first proof of stake block carried a very, very high priority tip of 43 ETH, so somebody made probably around $50,000 proposing that first proof of stake block, because of those tips and fees. I won't talk too much about how that works, but depending on the block that you propose, these things can be very lucrative. These are the average, aggregated rewards; I won't dive too much into that, but there's a lot more information out there. I'm actually gonna come back to this slide after I go here. So, Besu's architecture as it is today. We have a whole mess of components, a whole mess of things going on. We have an interface layer here at the top, which interacts with your dapp or your wallet, so this is MetaMask, a specific application, or any other wallet that you might use, and then, again, we have the Engine API built on top of that RPC layer, used to connect to that proof of stake consensus client. At the bottom we have our traditional Besu fundamentals. We have our world state storage; this is really key, because in the new proof of stake stack, the execution client still handles a lot of the state, especially historical state. It keeps track of what is up with the accounts, the code, what's going on in the trie, so basically the actual blockchain itself and the trie nodes that connect everything together, and then the overall picture of the blockchain
itself: those finalized blocks, headers, and other information. Then of course we have our networking layer, and we have the execution core, which is the really key piece here, with the EVM, the Ethereum Virtual Machine, at the center. And then of course there are things like the transaction pool, the synchronizers for the different ways that data comes in, and the block validators, which make sure that transactions are valid before they're put onto the chain. On the previous question about zero knowledge and other EVM-based chains: the EVM is currently decoupled in Besu, and what that means is you can extract the EVM and not use all of these other components. We wanna push that a little further, so that the EVM can use these execution APIs, these Engine APIs, to do a whole bunch of cool stuff. We haven't fully baked this out yet (we're focusing on a lot of post-merge bugs), but it essentially means exposing execution of the EVM to other network types and other prover types, so zero-knowledge EVMs, pluggable EVMs, different kinds of approaches that will allow us to support other chains like Gnosis Chain, or different services like oracle services like Chainlink, or even layer twos like Arbitrum and Optimism, because they all use slightly tweaked versions of this core code. We're gonna lean into that as a modular Besu architecture that we're looking to continue to build out. Of course, this is a point-in-time snapshot view, and I'll have more later. If you're interested in the topic I'm discussing right now, check out the talk that I did at Hyperledger Global Forum. It explains in detail exactly what we're discussing here, which is basically: what is the future of Besu as an execution engine? So I would definitely encourage you to go check out that talk; it covers the decoupled EVM and the execution engine APIs and all this good stuff. So, I got a question about explaining staking rewards in a little
more detail, and I'd absolutely love to do that. Basically, when you stake ETH, there are two things that can give you rewards, and two things that can take those rewards away. I'll start with the things that give you rewards. In the proof of stake world, your node has responsibilities to witness and attest to the transactions that are put into the blockchain by other users, and your validating node is also responsible for building blocks when it's your turn. When it's your turn is basically random chance: they do rounds of voting and they move things around in a round-robin, random style, and when it's my turn to propose a block, I put that block up, the network votes on it, and if it's correct and has all the right things, I get all of the individual transaction tips that go into that block. The economics of that are determined by something called EIP-1559, but the basic idea is that people can pay more money to give their transactions larger priority, and those tips go directly to the validators. So if I build a block that's filled with big tips, I get a lot of money for that block, and since block production is relatively infrequent for any one validator, and the chance that I produce a given block is somewhat small, those rewards are a lot higher. The attestations are a much more constant and much smaller stream of rewards, because my node is just witnessing and voting on what's happening in proof of stake to ensure the chain can progress correctly. But when it's my turn to propose a block, that's when you get the big bucks, and if you amortize and average those rewards, this is where we get these numbers. But it really depends, right?
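The tip mechanics just described can be sketched in a few lines. This is a simplified illustration of the EIP-1559 rule, not Besu's actual implementation, and the fee numbers are made up:

```python
def effective_tip_per_gas(base_fee: int, max_fee: int, max_priority_fee: int) -> int:
    """Priority fee the block proposer actually receives per unit of gas.

    Under EIP-1559 the base fee is burned; the proposer keeps
    min(max_priority_fee, max_fee - base_fee) per gas.
    """
    if max_fee < base_fee:
        raise ValueError("transaction is not includable: max fee below base fee")
    return min(max_priority_fee, max_fee - base_fee)


GWEI = 10**9

# Illustrative numbers: base fee 20 gwei, a user bids a max fee of
# 50 gwei with a 2 gwei priority fee, and the transfer uses 21,000 gas.
tip = effective_tip_per_gas(20 * GWEI, 50 * GWEI, 2 * GWEI)
proposer_reward_wei = tip * 21_000
```

Fill a block with transactions carrying big priority fees and the per-gas tips add up quickly, which is why the proposer's slot is where the large, lumpy rewards come from.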
If someone, like I said, pays a huge tip in one specific block and you get really lucky, maybe they really want to mint an NFT and be the first person to mint it, they can pay a huge tip to get their transaction included in the next block, and if you just happen to be the person that proposes that block, you get to take the entirety of that tip. That's how, like I mentioned, the 43 ETH block reward was earned on that first proof of stake block: people wanted the notoriety of having their transactions in the first proof of stake block, so a lot of different users put up big tips to get into it. That's obviously an edge case, an extenuating circumstance, but it really depends; these rewards can be quite high, and there's the steady stream of rewards, and then there are the big blips of rewards that bring you to this number. On the other side, the two instances where you can lose stake are penalties and slashing. Penalties are very much just penalties; they're not big, and you're not really screwing up. Going offline is a good example of a penalty; it's called an inactivity leak. If I go offline, I don't lose all my ETH, but I basically miss out on those attestations, so I'm hit with a penalty that's roughly the same size as the missed rewards. It's very small. The way that people like to view it is essentially: for the amount of time you spend offline, you can re-earn those rewards back just by being online for that long again. So if my node goes offline for a day, and I'm back online tomorrow and run for 24 hours, I will have made back roughly the amount that I lost. The inactivity penalty is really just made to keep people online and keep people validating, keeping the network decentralized and running smoothly, because we don't want the number of nodes to drop too low
but it's not meant to be: oh, your power's out for a day or your internet goes down, you're totally screwed. That's not the case; these penalties are roughly correlated to the amount of time it would take to re-earn them back online. Slashing, however, is much more dramatic. There are a few reasons to be slashed, but mostly it's intentionally nefarious behavior. If you're being an intentionally nefarious user, other users who witness that behavior can put up a proposal to slash your stake. For example, if I'm running my validator keys in two locations, basically trying to double up on my rewards with two validators using the same stake, I can be found out and slashed for that behavior, and I will lose a lot of my stake, half, potentially more depending on how long that process takes, and I'm also forcibly ejected from the network, meaning I cannot regain the rewards that I would have earned running a validator going forward. So slashing is intended to prevent the economic manipulation of the network; it's intended to take the stake of those who are trying to do that, and it's not intended to harm average users. Most people will not encounter this; there's slashing protection built into a lot of these consensus clients by default, because they know you don't wanna run your keys in two locations on purpose. So if you accidentally deploy a node with the same set of validator keys attached in two places, there are protections that say: don't do this. As long as you're following the basic rules, for the most part you are not going to be slashed. Okay, I got a question about how the decoupling of the execution layer and the consensus layer impacts Besu's use for private and PoA consensus networks. The answer is, it really won't. Let me go back to the slide, let me try to find that, yeah: these privacy features and these pluggable consensus mechanisms are basically considered part of this consensus framework. Any attempts
that we will make to modularize the execution component will keep the ability to plug in these consensus mechanisms; we're not gonna throw them away or discard them. Our goal is to make Besu more lean and modular for execution, which will frankly remove cruft and unnecessary or vestigial code from private networks, because as you pick the Lego blocks you want to build a consortium network, you won't have to include all of the proof of work or public network stuff. We're hoping that will streamline the release process for both and make it a lot easier to deploy exactly what you need, without the overhead of, say, my node trying to run public network stuff it doesn't need. But anyway, I'll have more to share on that later. The Besu teams haven't necessarily decided on an exact way forward on this quite yet; our immediate goal is to clean up all of the merge code and make everything a lot more robust. But again, we're going to continue to support these private network use cases, and the consensus layer should have no impact, because if you're running a private network, you don't need to talk to the consensus layer, because you don't care about proof of stake. So that's the answer to that question. Can I talk a little bit about soulbound tokens? That's another question from YouTube. I can do that super quickly; it's very off topic, but in reality, soulbound tokens are NFTs which cannot be moved. For example, if I have a specific wallet address that's associated with me, and maybe I go to university and complete my coursework, maybe my university can issue me a soulbound token that says I completed four years of university with this degree and these data points, and that can never be transferred. So there's no risk of scams where maybe someone scams me and I click a link and it just automatically
transfers the token; there's none of that, because these NFTs cannot be transferred. They're basically bound to the address to which they are issued. There are some nuances to that, but that's why they're called soulbound: they're bound to the address that they end up on. And sure, there are arguments about verifiable credentials, DIDs, and soulbound tokens, but again, that was an answer to a question from YouTube; if we really wanna dive into it, we can do more at the end. Okay, let's switch over to the tech side of what I'm gonna show today. I'm gonna quickly go through this slide on the things you need to do to follow along and get started, and then I will switch over directly to a tech demo. As to low-code/no-code platforms, I'm not sure; I presume it depends on what you're trying to do. Besu is essentially infrastructure software, so I'm presuming that if you're trying to build applications on top of it, you would need low-code/no-code tooling that targets Solidity and all that other stuff, but I would need to do more research there. Anyway, to get started, there are a bunch of things to keep in mind. These two on the right-hand side are the clients. I've chosen Teku in this example; I work at ConsenSys, and ConsenSys develops Teku, but there are multiple other choices for the consensus layer client, and they are all compatible with Besu, so there's no extra work that needs to be done to connect those things together. I've provided links to the docs, and there are docs specifically for connecting to a testnet, which is what I'm gonna walk through today. So if you wanna follow along directly with the tutorial, I encourage you to go to this docs page; it has breakdowns that will link you out to other areas of the docs. If there are any areas where you have more questions, definitely stop me during the demo, but at the same time, this tutorial is in
There are also links to the system requirements on both the Teku and the Basu side. Let me throw the link to the docs page in the chat; that's a great idea, actually. Just to confirm, everyone can still see my screen, correct? Yes? Perfect. I just dropped the link to that specific tutorial in the chat, along with the system requirements docs for Teku.

For other tools that will help, these are not required, but the Postman collection for Basu is super helpful for hitting the RPC endpoints to answer questions like: how many peers do I have connected? What's my log level, and can I change it to get debug logs? There's a ton of stuff in this Postman collection that I'll walk through, but it's really easy to download and set up, and if you're a developer or software engineer I'm sure you're familiar with Postman, or at least with the RPC API.

Then the Goerli launchpad. If you want to test staking before you actually stake real ETH, which is 1000% recommended (do not stake ETH on mainnet without testing it first on a testnet), you can go to the Goerli launchpad. It's basically run by the Ethereum Foundation, and it connects you directly to the Goerli staking contract, where you can stake 32 ETH, in this case testnet ETH. Also, ping me if you really want some Goerli ETH; I am a Goerli whale and I will send you some to test this out. There are of course things to understand about staking, and the launchpad walks you through the exact process, breaking down in large part what I've talked about, and it lets you deposit directly to that testnet contract. I am not a reliable Goerli faucet, let's just say that, but when I share the slides I'll add in some Goerli faucet links. That said, I do have a lot of Goerli ETH, so if you're looking to stake, I'll send you 32 Goerli ETH. Anyway, I would 100% recommend reading through the launchpad; you actually have to go through it if you want to stake, and it asks you a lot of questions about what you're looking to accomplish and what your staking setup is: do you have enough disk space, do you have enough CPU. The Basu docs cover a lot of this as well, but the launchpad tells you what you really need to be concerned about from the perspective of actually staking. So, a good place to go.

The Postman collection I mentioned. Then the MetaMask download: any wallet is fine, I'm just going to use MetaMask in this demo, so no worries if you want to use something different. For example, we have our MetaMask connected to localhost:8545, which is my running Basu node in the background. It's really useful to be able to, one, connect to it to see that your Basu node is serving correct RPC, and two, make sure you understand what's going on: you can see your ETH balance, check assets, and do whatever you need to do. If you're not familiar with MetaMask, a wallet of some kind is more or less required when it comes to testing.

Then there's Grafana, for setting up metrics for your node. I'll show specifically what I mean; there are a ton of metrics available, and if you're a Basu infrastructure person you've probably already seen this. I'll go through everything, and if we need more detail on certain parts, we can dive in.

There's also a Docker approach for the same thing. I'm going to do everything in this demo bare metal on macOS, because it's just easier to demonstrate what's happening, but there are plenty of tutorials around Docker and around other environments as well.
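For instance, a couple of the Postman-style checks can be done with plain curl too. This is a sketch assuming a local node with JSON-RPC on 127.0.0.1:8545; the response value shown is an example, not live output.

```shell
# Ask a local Basu node how many peers it has (assumes JSON-RPC on 127.0.0.1:8545):
#   curl -s -X POST -H 'Content-Type: application/json' \
#        --data '{"jsonrpc":"2.0","method":"net_peerCount","params":[],"id":1}' \
#        http://127.0.0.1:8545
# A typical response body: {"jsonrpc":"2.0","id":1,"result":"0x19"}
# JSON-RPC quantities come back hex-encoded; convert the result to decimal:
peer_hex="0x19"            # example value taken from the sample response above
printf '%d\n' "$peer_hex"  # prints 25
```

The same request pattern works for the other calls in the collection, such as the admin method for switching the log level, as long as the corresponding API is enabled on your node.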
Everything should work relatively the same between Mac and Linux. As for Windows, Basu does support it. There are some constraints around the cryptography libraries, and by constraints I don't mean it won't work, I mean you might get less optimal performance; other than that it's pretty much the same. There's tons of information in the docs on that.

Before we get into the demo, one more off-topic question: what does it mean that Hedera claims to use the Basu EVM? Does that mean we could host a node on Hedera using Basu? Hedera uses essentially a custom integration: like I said, they extract the Basu EVM into another environment, where they hit the EVM's endpoints with essentially the same calls that would be needed for smart contracts, but the inputs come from the Hedera ledger. Different inputs, same execution environment as Ethereum, and the outputs are translated whichever way they need. What that really means is it brings EVM compatibility to Hedera: you take the chain inputs from the hashgraph, parse them in the way that makes sense to the EVM, run the smart contracts in the Basu EVM environment, and Basu spits back out information that Hedera can understand and use to change state on Hedera. So it's really using the EVM as a state machine for Hedera to run Solidity-based smart contracts. And the reason that works (this is actually a great segue back to my diagram) is that there's been a ton of good work done in the client to make the EVM extractable, meaning I don't need all of the other stuff to make it work, I just need this piece: what inputs do I need, I have smart contract code, I get outputs. If I'm running on Ethereum, I get all the other machinery too, because that's just how the network works. We're looking to expand a lot on this topic, so if you have more questions on it, I really do encourage you to check out the YouTube recording of the Hyperledger Global Forum talk; David, if you could share that in the chat on YouTube and on Zoom, that would be great, just so people have it.

Okay, before I continue, let me pause for any burning questions on what I just discussed; we're going to spend the rest of the time going through my demo stuff. I also want to take a drink of water here. I'll continue answering questions as they come up, but at this point I'm going to switch over to my other screens. I know this looks like a lot of gobbledygook; where I'm actually going to start is the configuration files. Let me make this a little bigger for everyone to see. I have two files open right now, for two environments. Again, for proof of stake you're going to need two clients; in this case I'm using Teku and Basu. You can learn where to download them and how to install them from the links in the previous slides, but it's pretty straightforward to get this test setup going. These clients will use a fair amount of memory, so make sure you have enough; I actually ran out of disk space last night by accident while syncing for this webinar, but it's all ready to go now. I have these two nodes running in the background, and they run on bare metal, they run in Docker, however you want to do it. All you need is one command and a config file. To start Basu, connect it to Goerli, and connect it to my consensus client, all I did was run this one command pointed at my config file, this goerli-config.toml that I'm going to walk through in just a second. As long as you have your secrets and your ports set up, it's dead simple, and it's all based around these configuration files.
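To give a feel for the shape of that file, here is a minimal sketch of what a goerli-config.toml along these lines might look like. The paths and values are placeholders for illustration; check the option names against the Basu docs for your release.

```toml
# Minimal example config for a Goerli execution node (placeholder values)
logging="INFO"
network="goerli"                      # named network: genesis, bootnodes, etc. come preconfigured
data-path="/data/basu-goerli"         # where the database will live

# Engine API: must match the consensus client's settings exactly
engine-jwt-secret="/secrets/jwtsecret.hex"
engine-rpc-port=8551

# JSON-RPC for wallets and tools like MetaMask
rpc-http-enabled=true
rpc-http-port=8545
```

With a file like that in place, startup really is the one command: something like `basu --config-file=goerli-config.toml`, adjusted to wherever your executable and config actually live.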
Much of this will be familiar if you're already running Basu. What's my log level? Fine, I'm at INFO; I don't want to spam you with logs while we sit here, because it would be impossible to read. The data path is the location of your data directory, easy and straightforward: this is where it's going to store your database, so again, make sure the drive it's attached to has plenty of storage. Goerli is around 150 gigs or so, but mainnet is a lot more, so prepare to have about a terabyte free; Basu on mainnet uses maybe 600-something gigs, and state will grow over time. Just make sure you have plenty of memory available, both in RAM and in storage. I would recommend at least 16 gigabytes of RAM, because you're going to run both Basu and Teku. My machine has 32, which is great; I'm able to run the Zoom webinar and two nodes in the background pretty much without breaking a sweat. RAM and CPU will likely be your constraints here, so if you have the ability to spec a machine: now that mining is gone, we don't really care about GPU power. Focus on the disk space, on RAM (it can work on eight gigs, but I'd recommend 16), and on a decently powerful CPU. The system requirements page in the Basu docs has a lot of in-depth information on what this means, including specifics for cloud environments if you're going to use AWS or the like: what machine types to pick, and more.

Back to the file: the two big new items here are these. For the engine API to work, you're going to need something called a JWT secret. This is a JSON web token secret; it allows the engine API, these two Teku and Basu instances, to talk to each other. It doesn't need to be hosted anywhere special; I just put it in a local folder. These can be generated from the command line, and the Basu docs contain the exact commands you need. It's dead simple, no fiddling required; just make sure you've generated a token and that both the Basu and the Teku environments can access it, and it must be the same JWT secret on both sides. I can't comment too much on the security implications of using JWT tokens beyond: don't expose them. There are plenty of guides, particularly from EthStaker, which is a Discord community, on securing JWT tokens. I won't go too deep on that, but there are real implications, and I'd encourage you to read up on them.

Then the engine API port. This is the option in Basu; it's named slightly differently depending on which client you're using, but you need to make sure it points to the same port as in your Teku config, and I'll go over there in a moment. 8551 is the default, and if you're testing locally I'd recommend sticking to the default. These obviously need to change depending on your environment. Port forwarding will probably be the biggest headache if you're using things like Docker and Kubernetes, so make sure you have the right ports mapped and enabled; that's likely to be the biggest pain point when connecting two machines if you're using containerization, just a word of warning. There's information in a lot of the online guides on how to make this work simply. But this, right here, really is the biggest change between a Basu config pre- and post-merge.

We also have named networks in Basu, so you can simply type in the network name and it'll configure pretty much everything else for you. This can point to mainnet or to other testnets; we provide the specifics and help you peer based on it, so you don't have to do anything with DNS. As long as your machine can communicate outward, the named network does what you want it to do.
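For reference, a JWT secret of the kind described here is just 32 random bytes, hex encoded. The Basu docs show an OpenSSL one-liner to this effect; the file name and location below are examples.

```shell
# Generate a 32-byte, hex-encoded shared secret for the engine API.
# Point BOTH clients at this same file (Basu: engine-jwt-secret, Teku: ee-jwt-secret-file).
openssl rand -hex 32 | tr -d '\n' > jwtsecret.hex

# Sanity check: 32 bytes hex-encode to exactly 64 characters.
wc -c < jwtsecret.hex   # 64
```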
To answer another question we have here: if you want to run a non-validating node, like an archive node for data analysis, do you need a consensus client? The answer is yes. If you want to follow the mainnet chain, you will need a consensus client; on its own, your execution client can really only sync up to the merge boundary. So in order to fully sync an archive node for mainnet, you will indeed need a consensus client. One nuance to note, though: there are consensus-side archive clients and execution-side archive clients, and they mean slightly different things. If you're interested in data analysis around smart contracts and accounts, the execution client should be the archive node: Basu would be the archive node if you want to pull that kind of data, historical state, all that good stuff. An archive client on the consensus side is geared more towards beacon chain data, the history of proof of stake, and less towards smart contract data consumption.

In this case, my sync mode is snap sync. This carries an experimental flag, but the only reason it's still marked experimental is process: we're dropping that designation in 22.10, which will be released quite soon, so don't worry about it. This is the most tried and true method. We also have a checkpoint sync, but I'd recommend sticking to snap at this time; it's fast, it works, it's tried and true, and we recommend it.

Then data storage format, and Bonsai. Bonsai, as you might have heard, is a big new approach to state storage within Basu, where we store essentially backward-looking differentials of state. What that boils down to is that you don't have to prune the node, which is quite nice if you've run Geth or other clients before; Bonsai provides implicit pruning and keeps your state storage from growing out of control. So if you're trying to keep your state storage from bloating, use Bonsai. We also have forest mode as the other option. If you're trying to run an archive node, my suggestion is to use forest. The reason is that Bonsai doesn't fully store all of the chain data; it stores differentials that allow you to recreate historical state. If you're querying historical state a lot of the time, just use forest: it uses more disk, but it gets you where you need to go without burning a ton of CPU. It's a trade-off, and there's more in the docs that explains it. You're also going to need full sync mode if you want to run an archive node, so full plus forest is what you're looking for. It will take a long time to sync, just a heads up; we're looking to improve this, but it could potentially take up to months depending on your latency and the speed of your machine. To clarify what it does: it syncs and executes every single transaction from genesis to the present, and on Ethereum mainnet that's millions and millions of transactions to fully execute, so it just takes a long time. If you're doing that for data purposes, that's totally fine. But if you're just trying to play around, you can still get near-head data this way, by which I mean data close to the front of the blockchain, using snap and Bonsai; and snap sync doesn't entirely prevent you from getting historical state either. So play around with these and see what happens; there's more information in the docs on exactly what you can and cannot get when using each of these modes.
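In config-file terms, the two profiles just described boil down to a pair of options. A sketch, using the option names from the Basu docs around the time of this talk (snap sync still carried its experimental prefix before 22.10; check your release):

```toml
# Everyday node: fast to sync, self-pruning state storage
sync-mode="X_SNAP"            # plain "SNAP" once the experimental designation is dropped
data-storage-format="BONSAI"

# Archive node: every transaction re-executed from genesis, full historical state
# sync-mode="FULL"
# data-storage-format="FOREST"
```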
Identity, you don't really need. Then the RPC section: these are the options around your RPC. My HTTP port is 8545, like I mentioned when I was talking about MetaMask, and this will need to be exposed in a way that makes sense for you; watch out for port conflicts. We have a WebSocket port exposed on 8546 and a GraphQL port exposed on 8547, so depending on what you're most familiar with, we offer multiple ways to interact with the node, JSON-RPC or GraphQL. My recommendation: if you are running on a main network, or running in production, do not expose all of these APIs. I would remove ADMIN and DEBUG; you'll want most of the rest. Again, there's tons of information in the docs, but the short version is: don't expose the admin APIs to the broader internet.

Metrics I'll dive into shortly; Grafana is great stuff, it shows in detail how your node is running, node health, all that.

Then the peer-to-peer host settings. I'm not going to comment too much on these because they really depend on your environment. Exposing bare metal on my machine, I have pretty much zero issues peering. That is not always the case: if you're not using UPnP, if you have specific NAT environments, or differences with Docker especially, check these settings out if you're having issues with peering. You might not be able to peer outwards and can only accept incoming peer connections, which is often the case, and which will lead to slow performance. For reference, I have 25 peers right now because that's my max, the default max-peer config, and once you're in sync you should fill that up relatively fast. So if your peer count is constantly stuck at zero or one, check this out and look at those settings.
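Put together, the RPC, metrics, and peer-to-peer options covered above look roughly like this in the config file. This is a production-leaning sketch with placeholder values; note that ADMIN and DEBUG are deliberately left off the API list.

```toml
# JSON-RPC / WebSocket / GraphQL endpoints
rpc-http-enabled=true
rpc-http-port=8545
rpc-http-api=["ETH","NET","WEB3"]   # no ADMIN or DEBUG on anything public-facing
rpc-ws-enabled=true
rpc-ws-port=8546
graphql-http-enabled=true
graphql-http-port=8547

# Metrics for Prometheus/Grafana
metrics-enabled=true
metrics-port=9545

# Peer-to-peer
p2p-port=30303
max-peers=25
```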
One other thing I want to dive into quickly, which I forgot to actually put into this config, I apologize: enodes. You can set up local networks where you specify Ethereum node addresses and connect to those nodes directly, so you can stand up local proof of stake testnets and have your nodes communicate with each other. You obviously need to supply them with data and use some specific settings, but for testing it's really kind of cool. Here we go: these are the bootnodes. This is also useful for folks running private networks, of course, but basically you can set your own list of enode URLs. Let me go back to my logs; my node's enode is, sorry, I'm trying to find it, here we go: this is my node's enode. If you wanted to hit this node from the internet, you absolutely can; I don't really care if you do, this is Goerli, you're not going to hurt my computer or anything like that, and this enode will change if I spin the node down and back up. These node addresses can be passed in through your command line or through your config file: to test local networks; to peer selectively, for example if you have one node that's synced up with mainnet and you only want to peer with that node; or in an infrastructure environment where maybe you only want to expose one node, or a handful of nodes, to the broader internet and then propagate the data from those nodes to your other nodes. These bootnode settings give you really fine-grained control over how your peering works. It's not for everybody: if you're just running a solo rig or a single node, use the default bootnodes, because you won't be able to sync otherwise. But there's really cool stuff you can do with those bootnodes.
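As a sketch of that idea, a config file can pin peering to an explicit list of enode URLs. The enode below is a made-up placeholder; substitute the real public key, host, and port of the node you want to peer with, and check the option names against your Basu version's docs.

```toml
# Restrict discovery to your own bootnode(s) instead of the network defaults.
bootnodes=[
  "enode://<128-hex-char-node-public-key>@203.0.113.10:30303"
]

# Or pin permanent connections with a static nodes file (same enode URL format inside).
# static-nodes-file="/data/static-nodes.json"
```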
You can get creative, especially if you're having those peering issues with Docker: you can point to fully synced nodes, or have a snapshot node that serves your other nodes.

Okay, that was all for the Basu side. Again, I'm going to share these config files; I've shared the links already on the sign-up page for this webinar, but I'll share them again. They're two gist links, really just a copy of this with my secrets and such changed. One thing to note, too: on a post-merge network, if you're having trouble peering, you can drop the minimum peers to two or even one. In a post-merge network you don't need a minimum number of peers to validate a head block to sync to, because that's handled by the consensus layer. The default number is five, because in a proof of work, pre-merge network you need a good sampling of nodes to understand what the source of truth is; but since we're now relying on the consensus layer to determine the head of the blockchain, it's safe to drop this number down. I use two, just because it allows a snap sync to start faster. You can keep it at the default, but if you're having trouble with peering, which really depends on your environment, check out this flag.

Okay, any immediate questions in the chat? I'll wait two seconds. Awesome. Okay, now the Teku side. This one's even easier, which is pretty great. Log destination: console, same idea. Data path: where are we looking here. The great thing about the consensus layer is that, due to a property called weak subjectivity, which I will absolutely not get into, you can basically sync the full network in about two minutes. I will run a full sync of the consensus layer while we're doing the webinar.
Essentially, you pull an initial state from another node in the network, or another endpoint, that has a recent finalized state, and due to the properties of weak subjectivity we can assume that to be the correct head, go straight there, and ignore the rest. So I 1000% recommend using checkpoint sync unless you're running an archive node; whether you need that is for you to decide. This is a really cool feature of the consensus layer that we're looking to bring to the execution layer as well. We're still deciding how that works: can we do a checkpoint-style sync, or do we just pull from the consensus layer and let you sync a node in very little time? Basu does already have something called checkpoint sync, but it doesn't work quite the same way; it's not a misnomer, it's just the same name for a different mechanism. Anyway, splitting hairs. On the Teku side: use checkpoint sync, it's really cool, you sync really fast. You can copy this direct endpoint; the Ethereum Foundation provides these endpoints free and open to use, so if you go to the Ethereum Foundation website you can find them. You do need to make sure the endpoint matches your network: I'm on Goerli in this case, so if I were on Sepolia or a different testnet, I would obviously need to swap these links out.

We got a question: is there any difference in running Basu bare metal versus Docker versus WSL2? That is a great question, and I'm not so sure; I'll follow up and get that back to you, and I'd encourage others to chime in in the chat if you have experienced any differences here. My guess is that there's always nuance around ports (exposing ports is always weird and can be a headache) and around some of the memory consumption behavior: Java does interesting things with memory and garbage collection, so keeping your node health in proper shape changes a bit depending on your environment. There are also a lot of Java options that can change things like heap size and memory constraints; if you're not familiar with those, we have information in the docs on some of the most useful Java options to set. As for WSL2 specifically, I'm just not familiar enough with that environment to comment, but I encourage folks to chime in in the chat.

Okay, so this is Teku, running on Goerli as well. Teku provides the specifics with its releases, and you need to match these configurations one to one. The JWT secret points to the exact same file path as on the Basu side. The execution endpoint (ee stands for execution engine) is the exact same endpoint; again, these need to match. The semantics are a little different: Teku is a YAML file expecting a full URL, whereas Basu just takes the port and handles the rest itself. Like I said, I'll share these files so you don't have to worry too much, and the documentation provides very clear instructions, same as with the JWT secret. If you're actually running a validator, you've got to go here and read the validator docs. There are a ton of configuration options, and you have to set a fee recipient, or you're going to lose rewards: if you don't configure your validator properly while staking, your rewards go to a default address that is effectively not a thing, and you're just losing your ETH. So at minimum, set your validators-proposer-default-fee-recipient. The docs show a random address here, but you want to change it to an address that you control, so that you actually earn your rewards. Teku will yell at you if you don't; here we go, it warns that the API is enabled but no fee recipient has been configured, and strongly recommends configuring one to avoid possible block production failures when a node has not been prepared to receive proposals by the validator client. That's a lot of jargon; what it means is that if you don't set this up and read that documentation, you are going to lose out.
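On the Teku side, the handful of options just discussed map to a YAML file shaped roughly like this. This is a sketch with placeholder values: the checkpoint endpoint URL and the fee recipient address must be replaced with your own, the ee-endpoint is a full URL matching Basu's default engine port of 8551, and the JWT path must be the same file both clients read.

```yaml
# Minimal Teku beacon-node sketch for Goerli (placeholder values)
network: "goerli"
data-path: "/data/teku-goerli"

# Engine API: must mirror the Basu side exactly
ee-endpoint: "http://localhost:8551"
ee-jwt-secret-file: "/secrets/jwtsecret.hex"

# Checkpoint sync from a trusted finalized state (swap the host per network)
initial-state: "https://<checkpoint-provider>/eth/v2/debug/beacon/states/finalized"

# Only if validating: where block rewards go; use an address YOU control
validators-proposer-default-fee-recipient: "0x<your-address>"
```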
One piece of nuance to note: a consensus layer client like Teku is actually split into two pieces of software, the validator client and the beacon client. The beacon client is the one that lets you run consensus, follow the chain, and do what you need to do; you do not need a validator running to connect to the network and interact. The validator client does need to be running if you're staking, in order to earn the rewards and perform the additional duties required of Teku. There's a lot of information on that in the Teku docs, and it's pretty straightforward: it's basically one piece of software that acts as two, and if you're not validating, you don't need the validator component at all. The nuance really isn't that crazy; if you configure things correctly in the config file, the validator client just runs in the background, does what it needs to do, and complains if you're doing something wrong.

One other thing I want to note is the Prometheus file. If you're familiar with Prometheus, it's a metrics collection engine, and Basu is configured to work with Prometheus and Grafana. I see that Martin shared a Grafana dashboard link in the chat, which is great: all you need to do is copy that dashboard ID, import it into Grafana, add Prometheus as a data source, and you're up and running. This is literally a copy-paste from the docs. The things to keep an eye on are the scrape intervals, your ports obviously, and if you want to change any of the naming schema, you're more than welcome to.

Okay, I'm going to pause again before I switch over to the actual nodes and talk a little bit about what they're doing.
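The Prometheus side of that wiring is a small scrape config. A sketch pointing at the Basu metrics port used in this demo; the job name and interval are arbitrary choices:

```yaml
# prometheus.yml fragment: scrape Basu's metrics endpoint
scrape_configs:
  - job_name: "basu"
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9545"]   # Basu's metrics port (9545 in this demo)
```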
Then I'm going to start spinning up two new nodes. We'll talk about the running nodes, we'll talk about metrics, I'll spin up the new nodes, and then we'll talk about connecting to your nodes via MetaMask and using your own Basu node for RPC, which is really cool: if you're worried about censorship or RPC failures, run your own node and connect to the network yourself, that's my best advice, and use any wallet against your own RPC. I did say I was going to pause and then didn't, so I'm going to pause now and take a drink, and if there are no questions we'll get started. Cool. Keep them coming in the chat; I'm picking those up as they come in. I cannot see the YouTube chat, though, so if there's anything burning there, it would be great if someone could somehow relay it to me.

All right, let's switch over to my running nodes. I'm going to make this a little bigger; try to bear with the text, it looks crazy, but I'll walk through it quickly. Like I mentioned, once you have Basu installed and built, you really only need to point it at the configuration file you set up; this is the path to my config file. By the way, this is not a real command: "go basu" is an alias that I wrote, so don't try to run it. Basically, just get into your Basu directory. The directory layout is frankly a little confusing when built from source, so you're just going to have to memorize it: it's basu/build/install/basu. Again, this is if you're building from source. I build from source because I like to test things from main; I don't recommend you do this, we provide official releases and snapshot releases, use those. But if you build from source, you go here, and you want the bin and lib files: this executable, this basu binary right here. If I run basu --version, I should get a version kicked back; I'm just going to do that, actually, for fun. I'm running a weird dev build, so it reports an odd version string, but normally you'd see a regular one, and if this works correctly you're off to the races. I'm going to close this out. Then all you need to do is point Basu to your config file. If you want to do everything I just described from the command line, that's also possible; these can all be written as command-line arguments. I don't recommend it: it makes debugging harder for us and harder for you, because maybe you didn't escape a character correctly, or there's just some nonsense with your shell parser. Just use a config file; it's cool to use config files, you don't have to be too cool for them.

So anyway, I'm starting up with logging level INFO. There's a lot of boilerplate, and a lot of good information you can glean from these initial logs. Of course, it starts with my Basu version. I don't have a static nodes file, so it just uses the default, which is basically a list of Goerli bootnodes; you can set a static nodes file yourself, similar to the enodes I mentioned before, if you want to pick specific nodes to peer with. And that static node list is exactly that, static: you just pick the set of nodes you want to peer with. Here: "the file doesn't exist, no static connections will be created." That's fine; I want regular peering, no static nodes, correct. It connected, looked up a database, and found one, because I synced yesterday. The database version will be two if you're running Bonsai and one if you're running forest; that's relatively unimportant, but it's a quick check: if you think you're running Bonsai for some reason and this reads version one, then you are not, in fact, running Bonsai.
if you think that you're running bonsai for some reason and this runs reads version one then you are not indeed running bonsai this is my nodes public key also doesn't really matter trying to think what else is interesting to discuss yeah starting external services so again my metric service is localhost 9 5 4 5 my json rpc is localhost 8 5 4 5 and that's just more of the same I guess that's from the vertex amount loop yeah json engine rpc 8 5 5 1 graph ql all this good stuff so just you know double check that this is in the correct order I did not use a NAT environment there are options around using NAT if that's your thing check the docs in the options folder and then this is where things start to get interesting when you start the network what this means is that you're going to be starting your peering your peer to peer agent will start listening on the default port which is 30303 that is specified here don't really change this unless you're using private networks the ethereum spec will provide a port and it's 30303 so these these these are kind of just decided by the network you don't really need to change it okay cool since I synced my node last night I kind of went straight into real stuff so my node is operating in proof of stake mode which is great I'm receiving payloads or I'm rather I'm generating new payloads four blocks so this block is number seven seven one six nine five three I'm receiving these payloads from my peers since my node is not validating it's not really creating new payloads right now but I am you know this block is the most recent block on girly it contained 83 transactions it had a pretty typical base BF30Y on girly and it used 79% of the gas limit for that block if I were to mine this block on mainnet that would be pretty great you want to use a bunch of gas within the block because that means as a proposer you get more rewards I don't suggest that you nitpick these logs and be like how much gas that I get you know how much it did not 
get into your block today. But if you do propose a block, it might be cool to go back, check it out, and look at the info so you can see what's going on; totally up to you. These are pretty much the only two log lines you're going to see unless you're doing attestations and validations. The 'fork choice updated' line is your consensus layer telling you that the chain has switched to a heavier fork: proof of stake has chosen the canonical chain and we're migrating in that direction. So my execution layer client, which is Besu, has moved from whichever the previous head was, 0x5ea..., to 0x889099..., and this can be due to multiple things: it can simply be that more blocks came in, or there was potentially a reorganization of blocks. Regardless, the logs you really want to look at for that are on the Teku side. But switching over here again: I have 24 peers and I'm importing blocks; not much craziness is happening. You will see pretty much the same logs in normal operation of Besu. When you're validating and proposing blocks you will see new log lines, but those proposal logs are not that wild, and they happen so infrequently that you don't really need to watch for them. I know this is kind of an information overload, but it's unfortunately not very exciting when the node is running smoothly. I'll answer questions here; if not, I'm going to go over the Teku side and then talk about metrics. All right, seems there are none; that's good. Okay, so like I mentioned, Teku is doing its thing: logging to the console, printing some warnings, and it was able to load the JWT secret. You will actually see 'execution client is online'; normally this is bright green, but since my terminal text is already green you can kind of see, I don't know if you can on the Zoom, that it's slightly different,
but Teku normally uses white text, and Besu similarly, so don't use a crazy green background like mine; I mean, you can if you want, but this is what you need to see: execution client is online. I'll show you what it looks like when I spin up my new nodes, when Besu is syncing and Teku cannot really talk to it; it can connect, but it can't do much, because it's waiting for Besu to get in sync. Like I said, the consensus layer gets into sync way, way faster, so it's normal for Teku to wait quite a while for Besu to catch up. But anyway, this is what we want to see: it can connect to the execution client. These are more warnings from when it was syncing, because, like I said, I fell out of sync last night when my computer ran out of space, but I was able to get back in sync pretty fast; I was only nine slots behind. For those that don't know, a slot is essentially a block; they're called slots now because there's the potential for empty slots, but essentially a slot is somewhere a block could go. Slots increment by one; it's very straightforward. Every 32 slots there's an epoch event. These are more proof-of-stake nuances, and there are tons of explainers on what this actually means, but in reality it's just a way for the network to achieve additional finality across the boundaries of these epochs. What it means is: we run through those 32 slots; when we have enough votes in, we can justify the epoch; when epochs are justified, we can then finalize the previous epochs. Everyone votes, everyone chats, and we agree on the chain. Once we finalize blocks, they're considered beyond the reorganization depth, so we can't reorg those blocks; they're settled in the chain. And that's one of the other key things about proof of stake: we can finalize blocks a lot faster than in proof of work, because the chance of a reorg is different, and lower in certain
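As an aside, the slot and epoch arithmetic just described is simple enough to write down. A sketch using the constants mentioned here (12-second slots, 32 slots per epoch); the helper names are mine, not from any client API:

```python
SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

def epoch_of(slot: int) -> int:
    """Epoch boundaries fall every 32 slots."""
    return slot // SLOTS_PER_EPOCH

def seconds_since_genesis(slot: int) -> int:
    """When a slot opens, measured from the chain's genesis time."""
    return slot * SECONDS_PER_SLOT

# Falling nine slots behind, as my node did overnight, is less than
# two minutes of chain time:
print(seconds_since_genesis(9), "seconds of catch-up")  # 108
print("slot 4988 sits in epoch", epoch_of(4988))        # 155
```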
instances. So we get justified epochs and what they call economic finality. Okay, let's see here, what else is interesting... Yep, epochs again, lots of slots, epochs again; pretty boring Teku logs today, actually. This one is interesting, though: there was a regeneration event. I don't want to comment too much on this because I only have a cursory understanding, but I believe this signals some kind of reorg around slot 4988. However, this was the first block that I re-synced when my node started back up, so it may simply be that the execution layer was not online and Teku had to wait to replay that block. I'm pretty sure that's what this is, but you can get logs like this around reorgs and similar events. Okay: Besu running, all good; Teku running, all good. Postman: I had to set these up beforehand, and like I mentioned, these are just APIs; I can hit my Besu node and get some data. There's my enode; I realize this is really small on screen. This is the primary way that I interact with the RPC; you can use whatever you like, but there's tons of good information in here, and it's not exclusive to proof of stake: these are for Clique consensus-algorithm stuff, these are specifically for debug. Just remember that these APIs map to your config file: if I have these namespaces enabled, then I can hit any of those endpoints, and I can't necessarily hit the ones I haven't enabled. Yeah, 'method not enabled', there we go. Really good stuff here: admin is useful for debugging issues, and I can check what my peers look like. I have a Geth peer here, another Geth peer, just another peer; all good stuff, check that out, and so on. This collection is available from the wiki: you just download a JSON file and import it, dead simple, it's all in the docs, so I'm not going to spend too much time on that. The next thing I want to discuss is metrics.
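Before moving on to metrics: what that Postman collection does under the hood is plain JSON-RPC over HTTP, which you can reproduce in a few lines of standard-library Python. The endpoint matches the 8545 port from my config, and which methods actually answer depends on the API namespaces you enabled; everything else here is a sketch:

```python
import json
import urllib.request

RPC_URL = "http://localhost:8545"  # the JSON-RPC port from the config shown earlier

def build_payload(method, params=None, req_id=1):
    """Assemble a single JSON-RPC 2.0 request body."""
    return {"jsonrpc": "2.0", "method": method, "params": params or [], "id": req_id}

def rpc(method, params=None):
    """POST one call to the local Besu node and decode the response."""
    data = json.dumps(build_payload(method, params)).encode()
    req = urllib.request.Request(
        RPC_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# These only work against a running node, and only for enabled namespaces;
# a disabled one comes back as "method not enabled", as in the demo.
# print(rpc("eth_blockNumber"))
# print(rpc("admin_peers"))
```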
Prometheus is the primary data source. I know this is my old terminal, but basically all you need to run, if you're using Homebrew (there are other ways to install Prometheus; I used Homebrew because it's easy for testing), is brew install prometheus, then start Prometheus in the background. Same thing with Grafana: brew services start grafana, and so on. This is Prometheus; it's not the most interesting graph, but you can add a whole mess of things, as there are a ton of metrics. For example, the block gas limit of the current chain head; it gives you little descriptions of what the metrics actually mean. So if you're looking to take a more custom approach to metrics (again, I'm not going to spend too much time on this), there's so much good stuff in here. Once you've added them into Prometheus, you can check over in Grafana and add them as new panels to match. This is part of the default dashboard that we provide, the basic overview dashboard: it shows my endpoints, the current chain height, and the target chain height, and I'm zero blocks behind, which is great. You don't want to be blocks behind; if you are, it means your node is stalling for some reason. It might be starved for resources, or it got stuck somewhere, or there's another issue, so you want to be as close to head as you can. Time since last block: 23 seconds, which actually doesn't make sense; I'm thinking that's an artifact of the panel not refreshing, because the average block time is 12.39 seconds. So we must be on the boundary of a slot and I'm just being silly. Anyway, 100 percent of my peer limit is used, which is a good, healthy number of peers; 96 percent, what have you. This block-time panel is kind of a misnomer: it's a five-minute average, and the block time in proof of stake is set at exactly 12 seconds, so it really shouldn't fluctuate. I'm not quite sure why it is fluctuating; it might be Goerli weirdness, since Goerli does some weird things. As I mentioned, to import this
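One more aside: the 'blocks behind' number on that dashboard is just the difference of two gauges scraped from the metrics port. A sketch that parses Prometheus exposition text; the metric names below are purely illustrative, so check your own localhost:9545/metrics output for the names Besu actually exposes:

```python
def parse_metrics(text: str) -> dict:
    """Read simple 'name value' samples from Prometheus exposition format."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

# Illustrative scrape; real metric names may differ, inspect /metrics yourself:
scrape = """\
# HELP chain_head_height current chain height
chain_head_height 7716953
best_known_block_number 7716953
"""
m = parse_metrics(scrape)
blocks_behind = int(m["best_known_block_number"] - m["chain_head_height"])
print("blocks behind:", blocks_behind)  # 0 when fully in sync
```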
dashboard, you just go to Import and type in the URL from the docs; that's the Grafana link that was shared in the chat. It has really good panels: if your node is still syncing, the panel around sync progress is useful, and when I'm syncing the other node I'll come back and show you what that looks like. This we love to see: zero blocks behind. Block time is super weird; yeah, this must be a Goerli thing, and unfortunately I don't know why it's not a stable 12 seconds. CPU utilization will have peaks and valleys depending on what you're doing; this is actually quite low, which I'm very excited about, and the memory is also super low. Do not expect this on mainnet; mainnet is much more resource-intensive. Like I said, you can expect that RAM number to creep up much, much higher, so don't treat these as a representative sample. I've also done no garbage-collection tuning, which is likewise not representative of mainnet. Definitely make sure to test, mostly your configuration, on Goerli, but you've also got to make sure that your hardware is ready for mainnet; I would recommend running a non-validating node on mainnet for a while and seeing how it behaves before you actually put up a stake. Okay, so for me that's Grafana. This is a gross oversimplification of what these tools actually do; they're super powerful, and all of this information is in the docs. There's tons of stuff, including Java-specific metrics, which are really good if you're trying to debug Java issues; most of those are there so that we can properly debug Besu. Trying to decide where I want to go next... okay, switching back over to here. So again, I'm going to pause and see if there are any questions; I want to show again that we have these great nodes running in the background. Cool. So in my browser, I'm over in my MetaMask here. This is an account that, well, at ConsenSys I have a lot of Goerli ETH. But this
is the interesting part, right? I'm currently looking at localhost:8545. If you remember what I put in my configuration, my RPC API is at 8545, so I'm connected directly to my Besu node via the RPC in MetaMask. You can run your wallet like this all the time: MetaMask holds the keys and exchanges everything over RPC on its own, so I can have my ETH and eat it too. I don't need to worry about the wallet here, but I can run my RPC commands through my own Besu node, which is pretty great. If I go to my settings, one thing to note (whoops, I clicked the wrong one): this is my localhost again, this is the localhost port. Make sure the chain ID matches your network. Goerli is chain ID number 5, and this might not default to 5, because MetaMask doesn't know what network you want to start with or what network your local node is connected to. So just make sure your chain ID matches the network. You can go to Chainlist; Chainlist will tell you, for example, that Arbitrum One's chain ID is 42161. These are essentially arbitrary numbers, so you've got to look up whatever you're trying to test. If my node were synced to mainnet, I could put my chain ID as 1 and be off to the races, using my own node as the RPC. But anyway, make sure the chain ID matches the network your Besu node is synced to. I'm going to get out of this screen now. But yeah, like I said, I can see this balance and I can send ETH to whomever; I guess I'll transfer one ETH to myself and run it through the network, and my Besu node will put it out there. Cool, it worked. So I sent myself an ETH, but I did it using my own node. That might not seem that cool to everybody, but if you know how wallet software works, that's not really the case in most circumstances: normally the wallet is plugged into a public RPC endpoint, but in this case I'm running it myself. I'm putting my own transaction
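Stepping back to the chain-ID point for a second: you can also ask the node directly which chain it is on with eth_chainId, which returns a hex quantity, so Goerli's 5 comes back as "0x5" and mainnet's 1 as "0x1". A small decode sketch; the registry dict is a hand-picked subset, and chainlist.org has the full list:

```python
# A few chain IDs mentioned in the talk; see chainlist.org for the registry.
KNOWN_CHAINS = {1: "Ethereum mainnet", 5: "Goerli", 42161: "Arbitrum One"}

def decode_chain_id(rpc_result: str) -> int:
    """eth_chainId returns a hex quantity string such as '0x5'."""
    return int(rpc_result, 16)

for hex_id in ("0x1", "0x5", "0xa4b1"):
    cid = decode_chain_id(hex_id)
    print(cid, "->", KNOWN_CHAINS.get(cid, "unknown"))
```

If the chain ID MetaMask insists on doesn't match what the node reports, the wallet is pointed at the wrong network.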
into the mempool of Goerli, and then other nodes execute those transactions via proof of stake, but my node propagates that transaction. So that's really exciting stuff, right? Like I said, it doesn't sound exciting on the face of it, but it's pretty cool that you get to run your own node and your own RPC if you want to be even more decentralized than just Besu plus MetaMask. So I sent that ETH over and it's in my account; I'm not going to pull up my other account, because that's my other account. But let me go to a dapp: we're going to go to SushiSwap here. It says 'unsupported network'; in theory it is supported. You don't really need to be Sushi-swapping on Goerli, but you can; let's see if they'll give me anything back. I think the answer is no. So again, I'm interacting with the dapp here on localhost:8545. It's asking for access to BentoBox to move the funds, and I'm going to sign this transaction; I don't really think it's going to do anything, because I don't know how Sushi works on Goerli. Okay, well, anyway, you can connect. If you have a dapp that is connected to whatever network your Besu node is on, it works: if you're running a consortium network and you want to do RPC there, sure; if you're developing a dapp and you deployed that contract to Goerli, you can also run it this way. So that's an easy way to avoid having to use a public RPC to test your stuff. There are a number of things that make this useful, mostly from a smart-contract development perspective, but regardless, it works: I can connect to a dapp via my local RPC node and do whatever I want. Whether it actually works depends on whether the dapp supports Goerli, which in this case Sushi apparently does not, but that's cool; a lot of things do work on Goerli, so whatever, it doesn't matter. That was pretty much it on the wallet front; like I said, it mostly applies to developing things with either private smart contracts or public smart contracts. So I'm going
to quickly switch. I'm actually going to go here first and talk about Truffle for smart contract development. This is a smart contract development framework; there are others, like Hardhat and a variety of things. Truffle works with VS Code and all that good stuff. I'll just drop the link; I won't plug it too much. The Besu GitHub repo: cool, check it out, open issues if you find bugs, or contribute code if you want to get paid, do cool stuff, and work with our teams. There's also the Hyperledger Discord and the ConsenSys Discord, so if you have questions about things like MetaMask or other stuff, the ConsenSys Discord has you covered; we have Besu channels there, but we also have Besu channels in the Hyperledger Discord, so there are options. Really quick, I want to step back and talk about what it looks like once your node is already staking. beaconcha.in will keep track of what you're doing. Oh, sorry, was that a question, or just someone unmuting? All right, I think it was the latter, no worries. So, beaconcha.in is a great resource once you have your stake up. This is a random validator that I picked from Rocket Pool running Besu, and it's a very effective validator: 100% of its block proposals were included in the network, love to see it; 99% of attestations included, also love to see that, that's really good; no slashing, and so on. This is a really great resource, because you can check the status of your stake at any given time, and you can also set up alerts, so if your validator goes down for some reason, beaconcha.in will send you an email to go check it out. And these are the states, right: you deposit your ETH into the smart contract, the deposit contract; you wait for it to get activated; your validator becomes active; and then you get a whole bunch of stats. In this case I'm actually going to go directly to the validator, and we can see that the validator proposed a block on... yeah, that was yesterday. So if you look back and ask why the income is so much higher on
this one day, the reason is that they proposed a block on that day, and you get a lot more rewards for proposing a block. They got 0.03 ETH just for that block proposal, and I don't want to do math live, but that's about 3% of 1,300 or 1,400 bucks, so that's a good bit for one block proposal. This number can go much higher depending on how much space in that block is used, how much tip is included, et cetera. Not to bully this validator, but we can check: they've earned 0.1 ETH in about a month, so that's on the order of 100 bucks, and this is one validator. That's actually kind of low; as you can see, this validator is in forty-seven-thousandth place, which is maybe not so great. This other validator, though, has absolutely crushed it: 3.5 ETH, which is pretty cool. Active since... okay, over two years, so that makes sense, but they've made three and a half ETH on just one validator. As I said, the numbers depend on the amount of staked ETH you have and the number of blocks you propose, but beaconcha.in is a good resource to check in on once you have your validators up and running. Does it support Goerli? I don't know; that's a good question I'm thinking about right now. I'd be curious to see if you can check on your Goerli validators this way. I don't think so, but I don't actually know; I've only looked at my Goerli validators on my local machine. Anyway, I'll switch back here. Yes, to answer the question 'are the webinar docs public, can we access them': yes, absolutely, I will share the slides immediately following this webinar. I've pretty much wrapped up the content, so we have about 20 more minutes and can do a totally random question hour. We can talk about the content, which is what I would love to do, but I am more than happy to answer general Besu questions, general Ethereum public-network questions, or even potentially other random questions. So whatever you have: I'll give a couple
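For what it's worth, the back-of-the-envelope reward numbers above work out as follows; the ETH price is the rough 1,300-dollar figure used in the talk, purely illustrative and not a live quote:

```python
ETH_PRICE_USD = 1300  # rough figure from the talk, not a live quote

def usd(eth_amount: float) -> float:
    """Dollar value of an ETH amount at the assumed price."""
    return eth_amount * ETH_PRICE_USD

print(f"one block proposal: 0.03 ETH ~ ${usd(0.03):,.0f}")
print(f"a month's rewards:  0.10 ETH ~ ${usd(0.10):,.0f}")
print(f"two years staking:  3.50 ETH ~ ${usd(3.50):,.0f}")
```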
minutes for folks to fire away; if not, we'll end a little bit early and I can give you your time back. And as David mentioned, he'll be sharing the slides and the recording, so the recording will be made available, and when the YouTube links for Global Forum are ready, I'm presuming they will also be shared with the same group. Like I said, if you have questions about the future of Besu as an execution client, go check that out; I answer a lot of those in the talk we gave at Global Forum. And if there are no questions, that's totally cool. I'm going to bring it up to here so that folks can see my email one last time. 'Do we plan to do a standalone session on how to start contributing to the project?' David, I think we absolutely should do that; great idea. Yeah, we absolutely can. I have been meaning to do that for quite a while; I want to be able to pull together information on Protocol Guilds to give people an understanding of what they actually get if they contribute to mainnet stuff. But truly, any and all Java developers can just go into the codebase; we've tagged things with the 'good first issue' label, and we have a pretty robust maintainer community to work with you. So if you open PRs, we can review them, and you can get rewarded if they're directly related to mainnet work. 'Didn't you need to configure Teku to connect to the Eth2 network?' Yes, we did need to configure Teku; we talked about it in the middle of the session. I'll share my configuration files; they're linked from the webinar's opening page, or whatever that's called. There is also going to be a recording, and there's tons of information in that part about how to configure Teku, but it's very
straightforward. 'Contributions from non-Java developers?' That is a good question. Our codebase is something like 98 percent Java. I'm trying to think how we could work on that; that's something I'm going to take note of, actually, for that session, David: how can we bring in non-Java developers? It's a good question for sure, but yes, any developers, we'd love to get you involved, non-Java devs included. I will think of something on that one. 'Does Hyperledger have any tools that help with KYC if U.S. lawmakers crack down?' That is an excellent question. The answer, I would say from the Besu side, is no. We're exploring how to best support these regulatory frameworks at the protocol level. What that means is that if regulators do decide to move more and more towards whitelisting certain things, then Besu might, and I'm not committing to anything at this point in time, we might have to support those kinds of allow lists, depending on where you're living or where you're running. But we would support multiple flavors of Besu; since it's a decentralized network, user choice is paramount. So if you want to use a kind of U.S.-
friendly one, maybe we provide that, and if you want to use something else, that too. Again, there's a lot of discussion around this for Ethereum as a whole, so there will be more to share in the future. Right now we don't have any specific tools for this; however, there are already tools out there that can connect with your Besu instance and allow you to relay your transactions through what are being called regulatory-friendly block relayers. So there are ways around this for now, but nothing concrete to share about Besu protocol-level support for censoring or KYC. 'Is Rocket Pool a good way to stake one's ETH and earn rewards?' Yes; we are very much big proponents of Rocket Pool. The Besu team works directly with the Rocket Pool team; we make sure that our client works well with Rocket Pool's Smartnode software and with the hardware they recommend. So yes, I definitely recommend Rocket Pool. I love the liquid-staking approach, where you can provide some ETH and still get involved. If you're not familiar with Rocket Pool, it essentially allows you to stake with less than the 32 ETH amount, and if you want to host your own node, it's typically around 16 ETH, though I think there are changes coming to the way that works. rETH is what Rocket Pool's liquid token is called. But yes, Rocket Pool is a good way to stake one's ETH. There are plenty of other ways to get involved, but the Besu team does work with Rocket Pool to keep everything, hopefully, in good order. 'Basic requirements to run on Windows 10?' I would suggest Java 17 and appropriate resources: like I said, around a terabyte on the disk side, and at least 16 gigs on the memory, or RAM, side. With Windows, you don't necessarily have all of the native crypto libraries that we use, so the cryptographic operations might be a smidge less optimal than the native
versions. We are working on this, but we don't have any timeline yet. You will possibly need a beefier CPU, but anything with around two to four cores and decent power from the last however-many years is typically fine. The bottlenecks really depend, so once you've run Besu on your machine, the best way to see what's going on and how effective it is, is to check out these dashboards. If your CPU time is maxed out the entire time, or if your I/O is maxed out (and these are not the only metrics, so add metrics around memory, and around I/O and disk), or if your I/O is otherwise constrained, you might need a faster drive. We don't recommend slow storage; we recommend NVMe. That's pretty much the same for all of these blockchain clients: you're going to need relatively fast SSDs, so network-attached storage is iffy, and if you're using cloud services like AWS, you'll want NVMe SSDs or better, because I/O contention, especially during a sync, matters a lot, as does bandwidth. So just make sure that when you test things out on Goerli, you're looking through what your Grafana is saying and checking to see where the hot spots are. To the question 'are there any public or private teams working on KYC or regulatory frameworks for the Ethereum network': I would say yes. The Enterprise Ethereum Alliance might be a good one to look into, to see if they have anything. I think more and more teams will provide guidance as we go forward and we understand what the regulators are actually looking to do. Right now there's no official stance from Besu or Hyperledger or anything like that on what this will be, but I'm sure there will be more to come. Frankly, though, I think a lot of the KYC will come from the client side, or front-end applications, where you access the blockchain through a kind of federated
thing. That's my own opinion; I don't really know that that's necessarily the best or most robust answer, but there is more coming on that. So, a question around what is the suggested operating system and processor. I suggest Linux, or anything Unix-based, just off the cuff, because, like I said, we have more native libraries for Linux and Unix-based systems, and what those native libraries really boil down to is faster performance around cryptography. The problem area is Windows at this point in time, though again we are working on (not fixing, since it's not really a problem) improved support for the cryptographic libraries on Windows. But I would suggest Linux or a Unix-based system; or, you know, I'm running on macOS right now, and macOS works fine if that's your preference. Like I said, I did everything bare metal, so if you have an old Mac lying around: again, memory and CPU are going to be the constraints. As for specific CPUs, I'm trying to think how best to phrase this: single-threaded performance is useful, and having two to four cores is definitely valuable; Java will make use of a lot of the headroom that you give it. Let's see what my CPU is doing right now. Yeah, Java is actually pretty happy at the moment, because I'm not running a sync, so it's not using very much. One of these is the Teku process and one is the Besu process; I'm not quite sure which is which, but regardless, it fluctuates, and during a sync this can go as high as 300 percent, which means it's using about three full hardware threads to run one instance of Java. So it's mostly during sync that you'll see heavy usage. There's a place in the docs that talks about CPU profiling; yes, this is the page, and I'm going to share it. This will give you a ton of
information on that exact question: what's the best way to look at CPU, to use Grafana to understand CPU usage within Besu. Because again, you start off really high, and as you sync and import blocks, your CPU usage drops. So if you have an older CPU your sync might take longer, but you might be totally fine in normal operations. At the end of the day it's not really a big deal, right, because you only have to sync once: you sync once and then your CPU settles, so it's kind of, okay, it's a shame that it takes longer, but once I'm caught up to the head of the chain it doesn't matter as much. So I would read that page I just shared; I'll put it somewhere in the slides too. Anyway, let's see: 'any specific distros on Linux?' I don't really know; whatever you're comfortable with. I mean, they all pretty much run systemd, so it doesn't really matter; use whatever you're familiar with. 'Can regulators make use of soulbound tokens, or will it be the federated, Hyperledger-style way to KYC?' I think that's a better question for the DID groups at Hyperledger; I'll let David plug those if he wants. There are a bunch of decentralized-identity working groups at Hyperledger. As to whether soulbound tokens fit into that DID framework and make KYC all good and dandy, I don't know; I think it's a little early to tell, but it could be a good use case for that. We'll have to see. It also depends on what people are trying to do within the network, because not everything needs to be KYC'd, only certain things: read-only access versus read-write, say. If I'm putting data on the chain, do I need to be KYC'd if I'm just reading information? Yeah, David shared a link about the Identity Working Group; I would definitely look there for details. I don't know if they touch much on soulbound tokens, but that's probably a good place to start, and you can probably apply soulbound
tokens to a lot of the concepts relating to DIDs and credentials in that other working group. I'm going to stop my screen share here. Yeah, it's not so great that we have to figure this all out, but hopefully we come to a good agreement at some point. Any other questions, folks? All right, I'm going to share my screen again, because the Goerli transaction did go through on Etherscan, so I did swap some tokens on SushiSwap on Goerli; all well and good, very exciting. Not sharing again. Cool. I think that this is a good point to wrap up. To reiterate, we will be sending the slides, and I will include a lot of the links that were mentioned. Feel free to shoot me an email or message me on Discord if you have any additional questions. Actually, that's a good idea: let me go through the Discord right now to show where some of the things we discussed today are mentioned. So let me go back to the Zoom really quickly; just one last moment, bear with us, and then I'll let you all go. We have multiple Besu channels. This is the Hyperledger Foundation Discord; the link is in my slides. The ones you're interested in are obviously the Besu ones, among some of these other great projects, of course, and the identity stuff is in here as well. Anyway, the besu channel is just our typical channel for whatever; the besu-nodes channel is specifically about running, say, staking nodes, so if you have questions about staking, check there. The contributors channel is mostly for contributors; if you have questions about getting started contributing, you can check there, but otherwise please don't spam too much in that channel. And the announcements channel is where we'll post major announcements. On the ConsenSys side it's basically the same: we have a Besu channel where you can ask questions, and if you're having issues with MetaMask, or if you decide to use Truffle like I mentioned, there are Discord channels for those too. Cool. All right, I think,
David, we're good to wrap up here, with the recording and the YouTube links to follow. So thank you very much, everyone. Yeah, thank you, Matt, and thank you, everyone; as Matt said, I'll be following up with some of those links and resources, so thanks for your time, everyone.