So thank you to everybody for joining this virtual Hyperledger Meetup today. We have Matthew Whitehead, principal engineer from Kaleido and an active Hyperledger Besu community member. He is going to share today about running Hyperledger Besu in the enterprise: what's new and what's next. And again, as we were saying, we want your input, thoughts, and questions, so feel free to share those with us on the Zoom chat. We're looking forward to the discussion. With that, why don't you take it away, Matt?

Great, thank you, David. So welcome to this webinar on Hyperledger Besu. I'll skip straight to the "about me" page. As David says, I'm a principal engineer at Kaleido, in Kaleido's UK office in the south of the UK, for anyone who's been down towards Southampton. I'm also a maintainer on the Hyperledger Besu project, and as part of that I lead the enterprise roadmap that we're going to see in a couple of slides' time, and I work very closely with the rest of the community on how enterprise-oriented features sit alongside public-chain-oriented features. I've been in the IT industry for over 20 years, almost all of that working in the space of enterprise software, transactional software, so I've spent quite a lot of time working on things which manage transactions reliably, put it that way.

Just a moment to talk about the relationship between Hyperledger Besu and Kaleido. Kaleido has a long-standing history with Hyperledger, with Consensys who lead the Besu project, and with Besu itself. For at least the last five or six years, Kaleido has been offering hosted Besu solutions for enterprises who want to run Besu chains as a managed experience: highly available, highly performant, typically zero gas fees for most use cases, with all of the monitoring and management that enterprise users have come to expect from managed services. So Kaleido has been running Hyperledger Besu for many years and continues to be actively involved both in running nodes and chains for enterprises, and in helping to shape the way that Besu can best meet the requirements and demands of enterprise users. Kaleido has a wealth of experience not just with Besu but with other Ethereum clients, and particularly with the requirements that enterprises have, as opposed to the user who might be standing up nodes to run as part of the public Ethereum chain. Kaleido brings a whole amount of knowledge and experience of real-world enterprise use cases and feeds that back into the Hyperledger Besu community and product, so you get this nice cycle of Kaleido and Besu working together to make Besu the best possible option for enterprise blockchains.

What I'm going to try and do over the course of this webinar is talk through why Hyperledger Besu specifically is a really good choice, probably the best choice, for running permissioned EVM-compatible blockchains. I'm going to talk a bit about the public roadmap, just to familiarize the audience with the way that Besu manages its roadmap for what's now, what's next, and what's future, and then focus specifically on what we've been doing in the enterprise space around the Besu roadmap, with some demos of some bits and pieces to break up the session.
But I wanted to talk initially about why Hyperledger Besu really is such a good EVM client for the enterprise. You might think it's slightly odd that what I've shown on the right is the breakdown of the variety of EVM nodes running the public Ethereum chain, but the reason I've put that on screen is to show that, while the majority of nodes currently are typically Geth or Nethermind, Besu's proportion of the public space is now around the 10% mark, up from very low single-digit percentages only a few years ago. That growth in use of Besu on the public chain is driving a lot of development and hardening of all of the core parts of Besu that are applicable to both public and permissioned chains. So as we see growth in the use of Besu in the public space, we're seeing performance improvements, stability improvements, a growing number of contributors to the project itself, and so on. That's making Besu a really good base technology to underpin those more specific use cases around permissioned chains: QBFT chains, chains that are running inside the enterprise with trusted validators, and so on.

It's also particularly beneficial that Besu is an Apache 2 licensed project. For enterprises — even if you're not in the business of building and packaging software products to release, where things like LGPL licenses would be particularly problematic — Besu's Apache 2 license is what you'd consider a very enterprise-friendly license. It allows enterprises to onboard software like Besu and not have to worry about how they're going to use it in the future, or how their use of it will evolve over time; they can use it in any way they choose, and that license is a really good starting point when a business is looking at what kind of EVM software they should be buying into.

Besu is also evolving in terms of how it fits into more complicated topologies like the layer 2 space. We've seen the Linea testnet that's been built out by the Consensys team: a zero-knowledge-rollup-based layer 2 that's based again on that core Besu technology. So Besu's evolution is not just into the hardened public space or the enterprise permissioned space, but also into more complex, more involved topologies that involve things like rollups and bridges and all of the technologies those require.

Besu also has a really long history of permissioned consensus algorithms. The majority of clients that run public nodes, Geth particularly, are all focused on working with the public chain: proof of work previously, and now proof of stake. Besu, by contrast, has quite a long history of integrating those enterprise-originated consensus algorithms like IBFT, Clique, and now QBFT, which is the one we really recommend for enterprise use cases. Because of that long history — starting from Clique as a kind of development permissioned consensus algorithm, through IBFT, and hardening into QBFT — Besu now has a lot of experience of running chains with permissioned validators of varying numbers across various styles of deployment.
And as part of that, if you look into this space, Besu is really the only client that's got a lot of active development in the enterprise space — again, from people like myself and organizations like Kaleido, with companies like Kaleido providing that feedback and contributing time to improve it through maintainers like me. Besu really is the only EVM client that's being actively developed in that direction as well as in the public space. I should say, we talked early on about questions: I'll try and pause occasionally and look in the chat, and I think other people will call out if there are a few questions to look at. So feel free to put questions in the chat as we go, or come off mute if you want to ask.

So let's talk a bit about the Besu roadmap. This is what might be particularly interesting for an enterprise looking at what's been happening with Besu recently: which directions is Besu looking to go in, what features is it looking at, what discussions is it having? This is what I'm going to spend most of the time on, particularly because there are some interesting new features just recently released, and some interesting directions Besu might go in over the coming 6, 12, 18 months. The Besu roadmap is totally public, totally open, under the Hyperledger Foundation. This screenshot is taken from a public web page anyone can go and see, and, as I say, working for Kaleido I lead the right-hand side of the roadmap, which is the enterprise roadmap, and you'll see a lot of the other maintainers and project leads from Consensys and elsewhere leading the public roadmap. If you go to the website, you can see how some of those items are complementary and some are very specifically in one space or the other. Obviously there are lots of features which are purely about keeping pace with the public Ethereum chain, which just have to be done, otherwise Besu stops being able to operate as a node in that environment. This roadmap is open for anyone to go and look at, and there are links at the end of the slide deck to things like the public Discord channel where all of these topics are discussed, and where you'll see links to the monthly contributor calls where they're discussed in more detail. As I'll keep reiterating through the slides, Besu is a totally open source project, run by a whole community of maintainers, and totally open to other members coming to join and participate.

So this is a picture of the roadmap, particularly around the public space. I took this from a slide deck from last year — I think Matt Nelson presented it at one of the Besu meetups — just to show you the way the roadmap pans out over time. You can see in this public-oriented roadmap there are lots of features around staking for the proof-of-stake chain, around the Cancun development to reduce data storage requirements and improve scalability of the public chain, and around things like the Shanghai fork to allow proof-of-stake withdrawals. You can see lots of stuff on here that is largely focused on how Besu was keeping pace with developments in the public chain space. What we now talk about in parallel to that is the Besu enterprise roadmap.
So this is the roadmap that I lead, and bring to contributor calls to talk through the directions we think Besu could go in. This has features that are going to look — maybe not familiar, but like the kinds of things you might see in other traditional enterprise software: features that allow you to protect yourself from accidentally installing older versions, features that allow you to pre-configure Besu in a way that's much more suitable for an enterprise deployment. Most of the slides here are going to be us talking through some of these recent features, particularly the top box labelled Q1 — this quarter of 2024, the latest release, which is just coming out, almost hot off the press, just waiting for the next release candidate to finish testing. I'm going to talk mainly about some of those recent features, because some of them are features people have been asking about quite a lot ("when are these coming?"), and then towards the end I'll talk about the bottom box and the top-right box — what's coming and some of the future things we're looking at — to give some of the directions Besu might go in the enterprise space.

I'll just briefly mention that bottom-left box. Some of the work done in 2023 moved along the enterprise features, the enterprise shape of Besu, by introducing a few fundamental things which almost put Besu into an enterprise mode from the get-go, particularly a new transaction pool. This time last year, Besu had one transaction pool; then a new one called the layered pool was introduced, which gave you much more control over how many transactions you put into the pool, and how different types of transactions went into different layers of the pool — hence the "layered" description. So what we now have is two transaction pools, and the previous one has been re-engineered, refactored, into what we're calling the sequenced pool, which you can choose with the command line option I've put on the slide. The sequenced pool is the one we'd generally recommend in the enterprise space, and the reason it's called a sequenced pool is because we found, from a lot of the feedback from enterprise users about transaction processing in an EVM chain, that people were typically looking for more FIFO-like, first-in-first-out processing of transactions. That's somewhat different to what you see in the public chain space, where there's a whole crypto-economy around "my transaction will go ahead of yours even if I submit it later, because I'm willing to pay more". In the enterprise space, those models don't really port across so well, so Besu now has a sequenced pool intended to give a much more first-in-first-out treatment of transactions: as transactions come in, they're mined into blocks, and generally speaking they follow the order they were submitted to nodes. So that was some of the groundwork towards the back end of last year that's now being built on at the beginning of this year.
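To make that concrete, a minimal sketch of starting a node with the sequenced pool. The --tx-pool option is the real Besu flag; the genesis path and the pool-size value here are illustrative, not recommendations from the talk:

```bash
# Start Besu with the sequenced (FIFO-style) transaction pool instead of the
# default layered pool. LAYERED and SEQUENCED are the two supported values.
besu \
  --genesis-file=/config/genesis.json \
  --tx-pool=SEQUENCED \
  --tx-pool-max-size=4096   # cap on pending transactions; value shown is illustrative
```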
So I'm going to focus on this top box for the next couple of slides, and during these bits I'll switch out to a couple of demos, again to break up the slides a little. I've got a slide on each of those features, to talk through some of the latest things coming in Besu.

The first one relates to QBFT, which, as I said earlier, is the consensus algorithm we tend to recommend for permissioned chains. Until the release that's just coming out, QBFT didn't work with all of the fork milestones the Ethereum community had agreed upon. Generally speaking — I mentioned Shanghai on the public roadmap slide — most of the work in Shanghai was around the public chain. It was part of the Shapella network upgrade that, for the first time, allowed anyone who had staked currency to run validators as part of the move to the proof-of-stake world to withdraw those stakes. So that was part of the evolution of the public chain, and not something an enterprise environment would typically worry too much about. And in a lot of cases you might think, as an enterprise Besu chain user: why do I care about most of these forks? As long as all the nodes running on this blockchain are at the same fork — which could be as far back as Istanbul or Berlin — they're all talking the same language, so why would I be worried about moving up to newer forks, particularly since I'm not involved in running public nodes? The reason is that forks don't only bring features for the public chain; as we see in the Shanghai fork, they also bring improvements to the EVM, the virtual machine that runs smart contract code itself, and the Shanghai fork did exactly that. It introduced a new opcode. For those of you who aren't familiar with CPUs: a CPU runs the instructions you give it, and it has a finite set of instructions; the EVM evolves over time as a kind of virtual machine in the same way, and in the Shanghai fork a new machine instruction, PUSH0, was added. It won't mean a lot to most people, and indeed you don't have to worry about it at all, but it improves the performance of smart contract transactions, and that's why it was introduced.
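For reference, a hedged sketch of how a fork like Shanghai is switched on for a permissioned chain — from Shanghai onwards, activation is keyed by timestamp rather than block number, as the demo below confirms. The chainId and QBFT parameters here are illustrative placeholders, not values from the talk:

```bash
# Fragment of a QBFT genesis file with the Shanghai fork enabled from genesis.
# Pre-Shanghai forks use block numbers ("londonBlock"); Shanghai onwards uses
# timestamps ("shanghaiTime").
cat > genesis-fragment.json <<'EOF'
{
  "config": {
    "chainId": 1337,
    "berlinBlock": 0,
    "londonBlock": 0,
    "shanghaiTime": 0,
    "qbft": {
      "blockperiodseconds": 2,
      "epochlength": 30000,
      "requesttimeoutseconds": 4
    }
  }
}
EOF
```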
Now again, you might think: okay, there's a slight improvement in transaction performance for certain types of transactions, but I'm not too worried about that — as an enterprise user, why should I care? The reason is really on the right-hand side of this slide: as an enterprise user you're not working in a bubble, you're working with lots of tooling that other communities are building, developing, and moving forward. And we have seen exactly this in discussions on Discord: the Solidity compiler, once you get to a particular version of it, starts using that new opcode, and that means if you use that compiler version against a chain not running that fork, the contract is not going to be accepted by the chain. And think of tools like OpenZeppelin: if you've built on sample contracts that use the version 5 OpenZeppelin samples, those use a version of Solidity by default that requires this new opcode. So you get this chain of dependencies that's kind of unavoidable — largely driven by your developers and the tooling they're using — and the requirement that then puts on your chain.

So I thought I'd skip out here for a second and show a little bit of that in action. I have some nodes — this is the Kaleido platform running some Besu nodes; I'll zoom in a little bit. I have a London node, configured at the London fork with Shanghai not turned on, and I've got a Shanghai node, configured for the Shanghai fork. If I just switch back a second, to put a bit of context into this and talk about the tooling a little: if we go to Remix, a popular web-based IDE for smart contracts, you can ignore most of what's going on here, but on the right-hand side we have a load of project templates driven by the OpenZeppelin tooling. So if I decide as a developer that I want to pick an ERC contract from the OpenZeppelin library, I've got a nice example ERC-20 token to deploy. I've just come straight to the default options in Remix, and all of a sudden — without choosing anything in particular, without saying I want the latest and greatest version of OpenZeppelin — I've been given a contract which has a prerequisite of a version of Solidity that requires this Shanghai fork. And a lot of these imported dependencies will be exactly the same, so I can't just change this one line here and compile with a different version; I'd have to go and refactor a bunch of the prerequisite contracts to get this to run on a chain that doesn't have Shanghai. So that's just to give you an example of why, in the real world, I might care about this.

So I've got these nodes here running different fork configurations, and I'll just show you, if I switch to a Hardhat development environment: I have a smart contract here, again an ERC-20, and because this one is mine I can decide which Solidity version I'm going to compile against. I have some configuration here which points at the two nodes I just showed you: a network configured in Hardhat that points at my London chain, which doesn't have Shanghai support, and another network defined which points at my Shanghai chain. And what I'm going to do at the top of this is initially use an earlier compiler version, one that doesn't require Shanghai.
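You can reproduce the compiler side of this dependency without any chain at all. A minimal sketch using the solc CLI — Token.sol is a hypothetical contract file standing in for the OpenZeppelin sample; the flags and the 0.8.20 default come from the Solidity toolchain, not from the talk:

```bash
# From solc 0.8.20 the default EVM target is "shanghai", so the emitted bytecode
# may contain the PUSH0 opcode (0x5f) and will be rejected by pre-Shanghai chains.
solc --evm-version shanghai --bin Token.sol

# Targeting the previous fork avoids PUSH0, at the cost of the small optimisation,
# making the same source deployable to a London-level chain.
solc --evm-version paris --bin Token.sol
```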
And I will do a deploy to the London chain first. So I'm going to deploy a smart contract to the London chain, and all being well it's going to recompile some stuff; give it a few seconds... okay. So we can see that my application has got the nonce for my signing address, it's deployed the contract, and then it's printed out the nonce it's going to use next time. So this is just deploying smart contracts to a regular chain. Now, if I decide I want to move up to a newer version of the compiler — so now I'm going to run with 0.8.20, which is what I'd get if I was using those samples I showed you earlier — and I run that again (you can probably just see here that I'm pointing this at my London chain) and try and deploy that contract, it'll do a bit of recompiling quickly... okay, so now we get a horrible error. I've tried to make this big so people can see, but I'm getting a bit of word wrap on my terminal. What you can see here is that there's an error deploying that contract: I've tried to deploy a contract whose compiled code just doesn't work on the chain I've created. I should stress, the two nodes here are running exactly the same Besu version — this has nothing to do with Besu versions, just the forks I've configured on those nodes; exactly the same Besu version under the covers. So what I'll do now is clear that out, and deploy to Shanghai, that one there. So now we're deploying to the Shanghai demo chain — I've done this a few times, so my nonce has gone up from five to six — and you can see that worked absolutely fine.

And if I just show — I had some curl requests that query those chains a little; I can get rid of this code a bit here — this is a curl request to the JSON-RPC endpoint to ask about the London chain. We can see we have the Berlin block on by default, we have the London block, but no Shanghai block here. And if I do the same against my Shanghai node, this time I've got the Shanghai block — it's actually set as a timestamp now, for forks from Shanghai onwards — so we can see I've got Shanghai enabled for my other node. And to really show exactly what I was saying, these are exactly the same versions of Besu: this is 24.2.0, and if I go back up, the London one was running exactly the same version. So no difference in versions, just the way I've configured those chains differently. I hope that put a bit of context into some of what I've been talking about in those slides; it's kind of hard to make some of these demos super exciting, so I hope they're at least adding a bit of context.

So let's go back. That's the feature in the latest version of Besu, just coming hot off the press, that has been much asked for, and Besu now supports it. Just before we go on — thanks, I can see some people are answering questions in the chat, that's much appreciated, I'll come back and look in a moment.

So then another feature that Besu now supports comes from discussions around the fact that Besu has lots of configuration options, and until recently it didn't really prescribe which options you might want to use in certain environments. In particular, it had the concept of naming a network — if you configured the Goerli testnet, it had some pre-configured options for that — but that's really around the genesis configuration.
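Circling back to those fork checks for a moment: a hedged sketch of the kind of curl query the demo used. The exact RPC method called isn't shown on screen; admin_nodeInfo is one method known to return the fork configuration, and it requires the ADMIN RPC API to be enabled on the node:

```bash
# Ask a node for its fork configuration over JSON-RPC (ADMIN API must be enabled,
# e.g. --rpc-http-api=ETH,NET,ADMIN). The jq path follows the nodeInfo layout.
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"admin_nodeInfo","params":[],"id":1}' \
  http://localhost:8545 | jq '.result.protocols.eth.config'

# A London-level node lists "berlinBlock" and "londonBlock" but no "shanghaiTime";
# the Shanghai node additionally lists "shanghaiTime" (a timestamp, not a block number).
```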
As a public user or a permissioned enterprise user, your experience of setting up Besu was otherwise really similar: you had to pick and choose the options that best suited you. What we found, talking to enterprise users, was that getting Besu to perform really well, be stable, and achieve the kinds of SLA requirements they have could involve quite a lot of manual trial and error over how to configure it. And indeed, to get a permissioned chain to work effectively and be performant, you do need to make careful choices about some of the intricacies of the way Besu works and the way Besu nodes talk to each other. So Besu took that on board, and what we have now, in the latest version, is a profile option. I've put it down there; it's just a command line option, or you can put it in your config file. What it does, when you choose your profile, is effectively apply a config file — a TOML file, which is the way Besu formats configuration in files. Besu pre-ships a TOML file for each different type of node you might want to run, and now offers three or four different profiles.

The two main ones I thought I'd call out: if you are running a public node against the public chain, you can choose the staker profile, and that pre-configures a whole bunch of options much more suitable for a public node. I've talked already about the transaction pools; in a public node you want the layered transaction pool, which is better suited to handling transactions from a wide variety of different applications. The transaction pool itself typically needs to be resilient to the fact that large numbers of applications might submit transactions to it. You might have a large number of sender accounts, and you're probably conscious of any one account monopolizing your transaction pool, because once the pool is full it can't accept transactions from other senders — so typically you would tune the pool for that kind of behavior. Also, if you're running your own node on the public network, what you tend to see is a lot of people giving their own transactions priority when the transaction pool is selecting the next ones to use. That's fairly common; it doesn't skip past any of the crypto-economics of which transactions get mined into blocks, but it lets you prioritize transactions you yourself submitted to this node over transactions that came in from another node through the peer-to-peer communication of transactions. Again, if you're running a public node, you want a balance between how many P2P connections your node initiates to other nodes and how many other nodes can initiate to yours. You typically want this to give you a sort of denial-of-service protection against your node being swamped with inbound connections up to the limit of connections it wants to maintain, while the connections it makes out to other peers are somewhat random, depending on the race to connect to your node when it joined the network. So typically on a public node you allow some of that, but you reserve a proportion of the connections to be ones you configure, ones you control: you might want to connect to particular boot nodes, or to particular other nodes you're aware of.
In public nodes you would typically see that balance of inbound and outbound connections, and then of course the gas price is set by the way the chain works.

On the permissioned side, we have this profile called enterprise, and its intention is to give you a pretty performant, very stable Besu node for the kinds of scenarios we have seen in enterprise use cases. So we pick the sequenced transaction pool by default — I talked about that being the one we recommend — so that one's selected if you choose this profile. The transaction pool is configured by default to allow many more transactions from one sender. Typically, in very high performance environments, we'd see people using the same sending account to drive a lot of transaction traffic into the pool, and the only way you can do that is if you've allowed an individual sender to use a lot of the pool; if you haven't, that sender gets stuck after, say, five transactions, and it can't get the sixth one into the pool until the others have gone into a block. So what we configure by default is that any one sender can have much more of the pool — of the order of hundreds by default, out of the few thousand it's typically configured with. Again, if you're running permissioned nodes, where the nodes act as infrastructure alongside the rest of the infrastructure in your IT network, what we've tended to find is that enterprises don't want the order of transaction selection to be based on where an application happened to connect. If the application connects into this node or this node or this node, you just want the transactions to be prioritized roughly evenly; you might be load balancing across them, you might have some other components in the way, and there's no real need in an enterprise for any one node to say "the transactions I receive are much more important" — you can manage that in other ways. So typically that's the behavior we see people using, and that's the default for the enterprise profile. And again: because all of the peers are typically trusted, because the number of them is typically not of the same order of magnitude as a public chain, and because you know that if you're creating a kind of mesh network — which a lot of users are — you want all your nodes connected to everyone else, the order those connections happen in is not really important. What we saw, if you didn't set that, is that you could get nodes with only half the number of peers they should have, just by total coincidence of the timing of when connections were initiated. So the default for the enterprise profile is that it doesn't matter in which order nodes try to connect to each other; they all end up as peers. And then finally, for the obvious enterprise use cases, gas is free, so gas is set to a minimum price of zero.

It's worth saying that all of this is just about setting up initial defaults — good settings for a lot of typical use cases. It doesn't prevent you from setting any number of other configuration options, and your options take precedence. So if you want the enterprise profile but a gas price of 100, you can set that; or if you want the enterprise profile but want to tune the transaction pool in a slightly different way, that's totally fine.
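As a quick sketch of how that layering works in practice — the profile names and the override behavior are as described above, while the specific flags and values shown are illustrative:

```bash
# A public node: the STAKER profile pre-configures layered-pool and peering defaults.
besu --network=mainnet --profile=STAKER

# A permissioned node: the ENTERPRISE profile selects the sequenced pool, relaxed
# per-sender pool limits, order-independent peering, and a zero minimum gas price.
# Explicit options still win over the profile, e.g. re-introducing a gas price:
besu --genesis-file=/config/genesis.json --profile=ENTERPRISE --min-gas-price=100
```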
It doesn't mean you have to switch the profile off entirely and set everything manually; you set the profile to enterprise and then add and tweak the particular settings relevant to your environment.

Then another feature that's just coming out in the latest version — again, a lot of these are driven by real-world use cases we've seen. When it comes to actually consuming software, we've moved out of the world where you have an operating system installer, maybe a person manually running an installer, and the operating system maintaining information about the versions of particular software you have. We've moved away from that; we tend to see DevOps and GitOps being the way people manage their infrastructure software. And what you particularly don't want, in the case of blockchains — like anything such as a messaging system or a database system, where the integrity of the data is really crucial to your business — is to downgrade the version of some software and run the risk that this isn't supported between particular versions. It might not be the case that any given version of Besu is guaranteed to run correctly against a previous version's data. And because the cases where you might want to downgrade are far fewer than the protection you desire against accidentally corrupting your own data, what Besu has added in the latest release is some protection against doing that. This protection sits a level up from the storage component in Besu. Besu has its own database component, and there is additional protection in there, but crucially the database component in Besu is pluggable, so if someone plugs their own database solution in, they would have to remember to manage data integrity, backwards compatibility, preventing downgrades that would break data, and so on. So Besu has looked at that and said: we're going to have a layer above that, a fundamental version compatibility check, regardless of whether a given downgrade would actually work. By default, in the case of non-named networks — basically, permissioned networks — if Besu starts and detects that the version it's starting at is earlier than the last version to use that data directory, it will fail to start, which is much preferable to starting and potentially corrupting some data. So this option, as I say, is on by default if you're using a non-named network, which most enterprise users will be; it basically means that if you're specifying your own genesis file, you get this on by default. If you want to make absolutely sure it's on, you can set this version compatibility flag to true, and if for any reason you do want to do a downgrade, you can set it to false and it won't apply these checks. But as you can see here, what it allows is that slow and steady upgrading of Besu: if it hits a point where it detects a downgrade, it fails to start very early on — obviously this has to happen very early in Besu's startup process, well before it starts to interact with data. And again, I'm just going to jump out for a second to show a little bit of this working in action. I know some of this can be a little dry, but I think it's really useful for people to see how it's going to protect their enterprise networks from this kind of issue.
So I've got a node here — I created it here, and I just called it "no downgrade node". All of these are no-downgrade nodes really, because they're using the new version of Besu with this protection, but I thought I'd create one that I can hack around with for the demo. This is the data directory for the Besu node I showed you; it happens to be running in a pod, but that's not particularly relevant, although it does speak to the fact that that movement to GitOps and DevOps is real. What you see in here now is this file, the version metadata JSON file. If we look in there, we can see that when Besu started up, it recorded its own version in here, as the means to apply that downgrade protection. Now, to demonstrate this — Besu has only just added this feature, so I can't show you two different versions easily without playing about under the covers a little, because there isn't a second version yet to show the downgrade protection working — what I'm going to do is edit this to show you how it works in action. I'm going to pretend that Besu was previously running at a later version, 24.3 — at some point Besu will release that — and remember, I'm running 24.2. So I've edited the file to say Besu started at 24.3, and then I'm going to delete that pod, so Besu, which is running at 24.2, is going to restart; just watch that happen in a second. So Besu is now restarting — I've made this too big, I've got some horrible word wrap — it's restarting at 24.2, but I've edited the protection file to pretend it previously started at 24.3. We're now going to look at the logs for that node and watch Besu start up. And it's basically just quit. In this particular case it's going to go around in circles, because this particular pod is designed to restart it, but I'm going to kill that now and go back up to where Besu started. If you're familiar with Besu starting, this is the header that gets printed out as it starts; it shows you the version it's running, and you can see I've got some of those enterprise features we talked about — the sequenced transaction pool and so on. As we scroll down, we can see Besu performing some of these version checks on startup, in some of the first lines of the log. We can see it's looking up the version and checking that an existing version file was found — again, I'm trying to make this big enough for people to see without too much horrible line wrap; just go down a little more, okay, that's probably good enough. So we can see that Besu detected a version had been started before, and it knew that was 24.3; it knows that it's starting at 24.2, and it knows that's lower than 24.3. And what it's going to do now is basically crash out. If I was running this outside of a container, it would just quit back to the command line; in this particular case, the pod will occasionally restart it and you'd see this cycling. But what won't happen is Besu starting up and using that data directory; it's just going to keep looping around, rejecting the fact that it's now trying to start against the wrong version.
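A hedged sketch of the moving parts in that demo — the flag is the real version compatibility option described above, while the data directory path, the file's exact name and casing, and the JSON content shown are reconstructions for illustration:

```bash
# Inspect the marker Besu records in its data directory on startup
# (shown in the demo as the version metadata JSON file).
cat /data/besu/VERSION_METADATA.json
# => {"besuVersion":"24.3.0"}   (illustrative content)

# With protection on (the default for non-named networks), a 24.2.x binary
# refuses to start against a data directory last used by 24.3.x:
besu --genesis-file=/config/genesis.json --version-compatibility-protection=true

# For a deliberate, tested downgrade, disable the check; Besu then rewrites
# the metadata file to the lower version for future checks.
besu --genesis-file=/config/genesis.json --version-compatibility-protection=false
```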
That loop will just continue, and it gives you protection against some issue in your pipeline, some error in a command you ran, some error in tagging an image incorrectly that meant you pulled a version of Besu you thought was one version but was actually another — anything like that, Besu will now self-protect against. And it'll self-protect even if you're not running in things like containers; it applies this protection regardless. And if you really do want to downgrade to a given version — if the data isn't production data, or if you've tested it and you're happy the downgrade is going to work — you can remove this option and say, no, it's fine, do the downgrade. Besu will then downgrade anyway, and it will re-mark the file as being at that lower version, which means from then on it treats that as its starting point for future checks. So again, I know lots of staring at a terminal window during a demo is not the most exciting, but I really wanted to get across the context of these features and the extra protection they add in real deployment situations.

Okay, I'm going to pause there; that feels like a good point to see if there are any questions. Please feel free to come off mute if you want to ask any. I can see Jim has very nicely been answering some of those. Any other questions before I move on to the last 15 or 20 minutes? Okay, so I'm going to carry on. I think we're probably going to run to about an hour, maybe five minutes over, and I'll be here for a little while afterwards if there are questions after we finish the slides.

This last part of the presentation is thinking a bit about what's coming next for Besu. We've seen some really good development of Besu in terms of how good a fit it is for the enterprise, and what we're now moving on to is: okay, so what's next? What are the things that continue to evolve Besu and make it the best solution for enterprise blockchains? I've got slides on two of these specifically — one maybe in the nearer future, and one that's more some slightly longer-term thoughts and ideas — but I'm going to whizz down some of the items on this chart, because I haven't put slides in for all of them, just to give you some idea of what's being looked at.

Currently being worked on is support for empty block periods. If you have a block period of two seconds, but there's a period of the day when no transactions are submitted, then empty block periods give you the ability to decide how often you want a new block when there are no transactions. If I can reduce that from two seconds to 20 seconds or a minute, I can reduce the amount of wasted storage that's unnecessary when no transaction traffic is going through the chain — because remember, even if you don't have transactions, if you're mining blocks, those blocks still need to be chained in, and you're slowly increasing your storage footprint. So something being worked on is support for configuring a block period that applies when no transactions are present at that point in time.
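To show the shape such a setting could take — and this is explicitly speculative, since the talk describes the feature as still in development: the emptyblockperiodseconds key below is a hypothetical stand-in, not a released option name, while the other QBFT keys are real genesis options:

```bash
# Hypothetical QBFT genesis fragment: two-second blocks while traffic is flowing,
# but only one (empty) block per minute when no transactions are pending.
cat > qbft-fragment.json <<'EOF'
{
  "qbft": {
    "blockperiodseconds": 2,
    "emptyblockperiodseconds": 60,
    "requesttimeoutseconds": 4
  }
}
EOF
```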
Another area being discussed and worked on is QBFT's recovery behavior, and this is something I do have some slides on. This is around the fact that in an enterprise world, where an outage needs recovering from as quickly as possible, the nuances of the way these consensus algorithms work mean the default behavior you get is not what you would typically be looking for as an enterprise user — I'll talk that through in a couple of slides.

Support for QBFT with snap sync is also being worked on. Snap sync is the preferred way for a public node to get the world state — the picture of the current state — from its peers in as efficient a way as possible when a new node is being created. A new node requires all the historic blocks, which it asks peers for, but it also needs a picture of the current state, and what it doesn't necessarily want to do is start from block zero and rebuild state a block at a time, which is what you'd get if you used full sync today. So snap sync is the direction Besu is moving in as the preferred syncing mechanism. The problem is that in a permissioned chain, for snap sync to work, you have to have a node that knows how to serve up snap sync data, and currently, in the public chain space, Besu can consume snap sync data but can't provide it. So this is development that spans both the public and permissioned sides: the public side of Besu needs to add support for serving snap sync data anyway, because it needs to be a good citizen on the Ethereum network, but in a permissioned chain, any node that wants to consume snap sync data needs someone to ask for it, and currently, because Besu doesn't support serving snap sync data, it has no one to ask if it's a Besu-only chain. So snap sync support will be coming, and once it does, support for running a QBFT chain with snap sync is going to make it much more efficient to add a node when you're 10 million blocks into your blockchain — much easier to onboard new nodes as your network grows, and so on.

And then slightly further out, the Bonsai database. Again, this is a direction Besu is moving in for the way data is stored; the preferred option for storing data in Besu has just moved to Bonsai by default in the latest release. The way Bonsai works is really good for public chains, but it's not a good fit for an environment where you want to query an arbitrary snapshot of the world state at an arbitrary point in time. Typically on public nodes you don't need to do that very often, but in an enterprise setting you might have archive-type requirements to access historic data, or reasons to query what the state was at a given point in time — 10,000 blocks ago, or a million blocks ago. With the Forest database and full sync mode in Besu, you can do that with a QBFT chain, but it means you're storing an awful lot of data. Bonsai will improve that, and Bonsai will become, I think, the preferred option for QBFT chains at some point in the future — remember it's now the default for Besu generally, but for QBFT chains it probably needs to remain Forest for now. However, the ability to move to Bonsai for a permissioned chain really relies on having some more archive features that give you the equivalent of querying world state at a given point in history.
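For orientation, the sync and storage options that discussion maps onto; these flags and values exist in Besu today, while the QBFT-serving-snap work described above was still in progress at the time, so the first line reflects the public-chain pattern only:

```bash
# Public-chain style: snap sync plus the Bonsai storage format (the new default).
besu --network=mainnet --sync-mode=SNAP --data-storage-format=BONSAI

# Current QBFT recommendation per the talk: full sync plus Forest, which keeps
# historic world state queryable at the cost of much more storage.
besu --genesis-file=/config/genesis.json --sync-mode=FULL --data-storage-format=FOREST
```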
So that ability to do archive-like queries on a Bonsai database is being worked on, it's coming, and I'm hoping we'll see over the coming 12 months that it arrives. And then there are more general thoughts around things we might look at: can we make nodes more highly available (I've got some slides on that); maybe block exports for backup could be automated, so I don't have to do that manually and stop the node to do it; maybe some performance tuning recommendations for really pushing performance — not just to good, but to really driving high loads of traffic through. We'll see.

So the last couple of slides talk about two of those in a smidge more detail. First, QBFT recovery behavior, and the reason this is relevant is because the recovery time after an outage of validators can be surprising. I've got a couple of slides which explain that and why it's something Besu needs to look at. Let's say I've got a healthy chain on the left, and all the validator nodes are at this block — I've tried to use a sensible number so it's obvious that this block number is not going to change over the next slide or two — so we've got this block, 122323, and they're all at it. Then two of the validators go offline, and if you remember back to QBFT consensus algorithms, you need a supermajority of validators in order to mine blocks, so with two validators gone, the remaining two can't mine blocks. This chain is stalled; no new blocks are going to be created. However, the nature of QBFT is that the validators will, over time, try to move the QBFT round along. After a given interval, moving the round along progresses which validator is allowed to propose the next block. QBFT is designed to slowly bump the round up one at a time and effectively move through the validators, in case it finds a round where enough active validators exist that, firstly, one can propose a block, and secondly, there are enough others to mine it. So what happens is you get this toing and froing where the remaining validators propose a new round — it goes round zero, round one, then a bit of a delay, then round two, then a longer delay — and the delays get bigger: proposing round three might take twice as long as proposing round two, round four twice as long again, and so on. The two remaining validators slow down because they don't want to increase the round number in a linear fashion. And that's kind of fine, until you look at what happens when we bring those two validators back online: is the chain going to start mining blocks straight away? The answer is absolutely not. Let's say we've hit the point where we're about to restore these validators, and the two that stayed online are now up to rounds six and seven, with minutes now between rounds being proposed. Why is that relevant? Well, look at the right-hand picture: we've brought the two validators back online, and they're going to start — technically they start at round zero, but I've drawn this starting from one — and they're going to propose that they think we should be on round one. And the agreement in the QBFT algorithm is that the validators have to have a quorum of agreement on what the next round is.
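To pin down the arithmetic behind those pictures — using the standard BFT bounds that QBFT follows, and the doubling round timer described above (the base interval here is taken to be the request timeout; exact scheduling details are implementation-specific):

```latex
% Fault tolerance and quorum for n validators:
\[
  f = \left\lfloor \tfrac{n-1}{3} \right\rfloor, \qquad \text{quorum} = 2f + 1 .
\]
% For n = 4: f = 1 and quorum = 3, so with two validators offline only two remain,
% which is below quorum -- no block (and no round change) can be agreed.
%
% A doubling round-change timer with base t_0 means reaching round R takes
\[
  t_r = t_0 \cdot 2^{\,r}, \qquad \sum_{r=0}^{R} t_r = t_0\,\bigl(2^{R+1} - 1\bigr),
\]
% which grows with the length of the outage -- hence recovery time tracks outage time.
```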
So the two validators proposing round one can't reach agreement, because there aren't enough other validators proposing it, and they can't accept the other round nominations either. What I'm trying to say is that it takes a number of minutes, proportional to the outage, for the validators to come to an agreement on the next round and mine a new block — so finally we have a new block, 122324. If those two validators are down for x minutes, whatever the value of x, then the time for the validators, once restored, to mine a block is proportional to x minutes. If I have an outage for five minutes and bring the validators back online, I'm not getting a new block for five minutes; if I have an outage for 10 minutes, I'm not getting a new block for 10 minutes. This is a really important aspect of QBFT that's not totally obvious. If you have an outage for 10 seconds, you're probably not going to notice this has happened, but if you have an outage of many minutes, and you restore your validators and think everything's fine and everything is going to crack on and start mining blocks again — the time to mine a new block is proportional to the time the outage lasted.

In an enterprise setting — so, skipping to "okay, what's the fix for this?" — the fix today is that you restart your validators: you go around, stop them all, they start at round zero again, and then they quickly recover. That's today's solution to this problem. And I think where Besu needs to go is to say that's not really something an enterprise environment can live with — always restarting everything when there's some downtime. I think there are some interesting things the Besu community could look at to make that a much more acceptable recovery time — an RTO, a recovery time objective — than the current behavior, which has that link between the outage time and the recovery time. So that's one area I think Besu probably needs to spend some time on. Other clients do have slightly improved behavior here — a slightly more lenient progression through the QBFT rounds — so one step will probably be Besu looking at whether it can just adopt that, but I think there are other options around whether it can be better than that: other cases where it can be more pragmatic. Enterprise blockchains are often about "I want a blockchain, but maybe there are a few pragmatic decisions I can make which don't compromise the security, don't compromise the Byzantine fault tolerance, but are pragmatic tweaks that give me a much healthier bit of infrastructure for my applications."

Okay, then lastly — and this is maybe much further down the line, more looking at things, thinking about things, working out what options Besu might have — let's go back to my favorite picture. You see this picture in lots of the slides I draw about blockchains: just blue blobs being validators. In this case we've got four validators, and we saw a picture just a minute ago where two go offline. If those two go offline, as we know, we haven't got enough validators to mine blocks. So what could Besu be doing to improve this?
Just generally, what fundamental architectural changes or options could we be looking at? One thought process, one discussion, is to look at options where maybe there are ways of running more than one instance of a given validator — maybe I can have distributed validators, where I can tolerate an outage much better, where my SLA, my ability to keep mining transactions into blocks, holds up even in quite dramatic outage scenarios. These are some of the thoughts where, if you're looking at the requirements enterprises have for their SLAs and their recovery time objectives, whether blockchains based on things like QBFT can meet those is still an open area. We know we can in certain topologies; can we do that, and be more resilient, in other topologies? Can we do some really quite interesting things that make a blockchain and its validators much more clever and able to withstand particular types of outage? So again, this is more in the space of looking at options and having a think — it's a little way down the road — but I think it's really useful to talk about it, particularly to these kinds of audiences: we're spending a lot of time thinking about really difficult enterprise scenarios, and the intent is to feed all of that back and grow Besu's ability to be the right fit for the enterprise.

Okay, so that is the end of most of the content. I want to just recap: currently Kaleido is driving this enterprise roadmap, and I, as a maintainer in the Besu community, am very hands-on-keyboard, put it that way — exploring some of these things, fixing things, working on features. You'll see me on Discord, and you're very welcome to reach out to me directly by email or through Discord; I'd welcome anyone who wants to come and discuss the things we've talked about here. And again, as you've seen, I did all my demos running Besu on the Kaleido platform, running nodes with various configurations, all highly available and highly performant — that's the service Kaleido brings to this space. Please come and find Besu on Discord, GitHub, and so on, and then there are some Kaleido links if you're interested in trying some of the platform I showed as part of the demos.

Okay, David, and/or Kevin, I guess — I think that's pretty much me done. I'm just looking in the chat for any other questions, but I'd also like to reiterate: if anyone wants to come off mute and ask about any of those things, please do feel free. David, I just want to check I can actually hear if anyone says anything, because my volume's up — are you about now? — Yeah, sorry, I had a conflict with another call. — I just wanted to check I wasn't missing people coming off mute because my volume was down. — Oh, got it. I don't think so, and I'm checking on YouTube; I see people watching, but I don't see anybody asking questions over there. I'm very happy to stay online on the live stream. — Well, there is a raised hand. — Yeah, I see it, somebody just raised a hand. Do you want to come off mute, Jim? — Hi. Yeah, thanks for the session.
It really was a great session for moving me forward rapidly into where the current world of Besu is and where it's headed, which is a really wonderful thing. And I hate to say it, but I could care less about the public network, honestly — only because the enterprise can support both public and private, and somehow that message completely gets lost on the entire DeFi community; they have no understanding of this. But I'll introduce an interesting concept. You're familiar with Zero Trust, right? — Mm-hmm, yes. — Okay, so there's a new concept. You're familiar with the Facebook outage, right? — A little bit; refresh my memory. — Well, the recent one this week: they had a serious outage on Facebook, and what happened — some other stuff that didn't affect the entire network, but it was significant, like 40 or 50 percent of users were unable to access the network. But I came up with a sort of enhanced concept, I'll call it BZFT, which is "below Zero Trust" — where you don't trust the Zero Trust network itself. Which is an interesting thing; in a sense I'll flip it out there and say that's a pattern you'll be thinking about going forward, I'm sure.

That said, when I look at the enterprise side, the one thing on QBFT — and you've already hit this in your planning roadmap for Besu — is that there's a wide spread of different application use cases for enterprise blockchain, very wide, and the more you touch it, the more you find it opens up, and what you really find is that one model or one pattern for any of this never works. And I will say, on the public side of Ethereum — to Vitalik's credit, he seems to be the one driving the enhancements there with the EIPs, which I very much appreciate — in reality the public side will always be incredibly limited. What's interesting is that on your end, as the lead — and I'll say Kaleido collectively, the lead on the architecture for Besu enterprise networks based on EVM, which is potentially a very good thing — the challenge is going to be that, to support those very wide use cases, you're going to wind up having to do what you've already started to do, which is create what I call multiple architectural options and patterns within frameworks. And I suspect — picking on Kaleido — you guys are unfortunately going to have to be what I call the architectural lead for the community, if you will, moving in that direction, big time. And I don't expect that what I call the old-style DeFi crowd, sitting in the old crypto world, will even have an understanding of what these use cases are. So what that means is, when you look at something like a consensus algorithm like QBFT, what you're really saying is: no, no, we have a different concept of what a transaction is, and something like QBFT is just one consensus model. What you really need is a consensus interface, much like you have in FireFly, and to say: there's an interface here, and you're actually free to swap in different consensus algorithms — back to your point, which you've already done very well, like other advanced software, with a profile concept. So you can say, okay, for this use case, Jim, if you're going to do very high transaction volumes, then QBFT is not an appropriate consensus method, and we have a different architecture for that.
So I would argue — because I've looked at this working off of Fabric, trying to drive it to a different place, but I think you're in the same boat with Besu and EVM — you'd say, okay, it's a private EVM network, and in this private EVM network, for this particular use case, I'll need to get to the capability of, say, half a million transactions a second. What that really means is that it's not just the consensus method that's different; it's even what we call the lifecycle, and the assets, that are all going to change as well. So there's going to be a continued evolution of what I call the rethinking of what a transaction lifecycle really means. One of the biggest areas I came up with was a distinction that advanced databases — like telco databases based on MySQL — already have; I'm stealing, I invent nothing, I just steal from other people. MySQL already has this, and it's pretty elegant: a separate concept, for a transaction, between commit and finalization. And I think in your world those kinds of concepts are going to have to come into play, for sure, to get the kind of scale for certain use cases that your model and your architecture are going to need to support.

Yeah, they're all very good points, Jim, really interesting points. I think you're right — your general point that it's an environment in flux, and that the space is going to evolve even more than it has been, is certainly true. I think the most interesting things around what you talked about are: what is a transaction like when you have multiple chains, where you have discrete chains for certain things but with bridging or rolling up between them — what's your measure of what a transaction is in that world? And just to touch on the pluggability of consensus you mentioned: on another slide deck I actually talk a little bit about just how pluggable Besu is, and pluggability of the consensus algorithm has already sort of happened on the proof-of-stake side, where the engine API means Besu doesn't do the consensus part itself but hands it off to another process, with a kind of IPC between the two. There has been talk around the permissioned side as well — the permissioned consensus algorithms don't use that model; they're not pluggable components, they're baked into the Besu codebase — but making that an additional point of pluggability has certainly been discussed. And I think some of the things you mentioned are good arguments for where more of Besu could be pluggable. Quite a lot of it already is: the database I mentioned is already implemented as a plugin using Besu's own plugin architecture. I think Besu would like to become more that way — the discussion topic tends to be modularity; it'd like to be more modular, allow third parties to build on their own, at their own cadence, and allow you to plug in many more of the core capabilities of Besu, of which consensus is obviously a really big one. Really good points, thank you, Jim. Any other questions people want to come off mute with, or comments or feedback or thoughts?
Yeah, I'm not seeing any questions coming in on YouTube, and I don't see any other questions in the Zoom chat. — Yeah, agreed. I think that's everyone; I can't see any others in the chat. And thank you to anyone saying thank you in the chat — really appreciate it. — Yeah, thanks for sure, Matt, and thanks to everybody who dialed in. Hopefully this is the start of a conversation: if you're interested in Besu, definitely join us in the different channels, and we'd love to keep having the discussion with you. — Great. Well, if we're all set, thanks, everyone, for dialing in. — Thank you, David. Thanks, everyone. Bye, everyone.