Well, thank you, G. I want to welcome everybody to DEF CON and the Blockchain Village. My name is Ron Stoner, and I'm going to talk about securing the Cosmos, and I appreciate that wonderful introduction. I'm a repeat speaker here; I spoke last year with Michael Perklin about the CryptoCurrency Security Standard, and it was an amazing experience, so I'm happy to be back. I will preface this talk by saying I'm only on a few hours of sleep, in traditional DEF CON fashion, a little bit hung over, and I was working on my presentation up until a few minutes ago. So if there are any inconsistencies in here, or you guys see formatting issues, I'm more than happy to clean that up if you want to copy the presentation, or to discuss further after the stream to clear up any ambiguity. So who am I? I'm a senior security engineer for ShapeShift.com, one of the best digital asset exchanges out there, and as a result I get exposed to a lot of different technologies and I get to play with a lot of cool things in the crypto space, including Cosmos, the ATOM token, and some cool new validation software that people are releasing and trying out. I'm also the curator of the CryptoCurrency Security Standard Auditor exam, which is the exam for the standard I referenced earlier. I have my own consulting firm where I consult on security and blockchain technology, and if you guys want to reach me on Twitter, you can hit me up at @forwardsecrecy. The reason I'm giving this talk today is that I do have a lot of experience on testnet and mainnet deploying, configuring, administering, and securing validators and validator infrastructure. I did set up a goal slide for this talk; you guys can read it there, so I'm not going to go through it, but I do want to highlight my redundant joke here. Hopefully you guys like that one; if not, oh well.
But my goal for this talk is for you guys to walk away having learned something. There are a lot of smart people here at DEF CON and in the Blockchain Village, but while we may be good at infrastructure, or good at security, or really good at the blockchain aspects of it, nobody's perfect, right? Nobody's an expert in all areas. So hopefully you walk away from this talk with a tip or some piece of knowledge that you can run with yourself or implement in your own system. So what are validators? Everybody's kind of familiar with the concept of Bitcoin and SHA-256 and mining, right? When you're mining, you're throwing a ton of resources at cryptographic problems to solve them and write data to a blockchain. Mining was awesome. It was very profitable for people for a long time, and it helped secure the network. But there's a lot of downside to mining that people have brought up over time, including energy consumption and administration, and things like: I can't get the newest graphics card to play PUBG or Fortnite because there's been a run on them, right? They're $500, $600 now because everyone's buying them up to mine. So some new coins and chains came out that implemented what's called proof of stake. And unfortunately, it's not the steak that I enjoy at night; I would love a proof of actual A1 type of steak. But this proof of stake is where we're actually staking and bonding coins and securing our network through validators. Those validators provide a whole bunch of energy savings, and they can achieve roughly the same end result: they're able to validate transactions, validate blocks, provide signatures, and create chain data, in a way that's more beneficial to the environment and lower in cost and overhead. The way it works, as I said, is that people bond, or lock, or stake their coins with a validator. For this talk, we're going to be focusing a little bit more on Cosmos.
With things like ETH2, they have some different systems where they kind of bet on blocks and transactions for the validators to process. But on the back end, what it's really doing is taking cryptographic signatures and signing and broadcasting votes out to the chain. The incentive for the validator, as you can see right here, is money, basically: digital value. As a validator, you're validating all these transactions, you're running your infrastructure, and you're putting in all this time and effort. So what's the incentive to go and do this? There are a few different pieces. The first is the security aspect: you're contributing back to the security of the chain. There's the decentralization aspect: you're participating in a decentralized system. There's a governance aspect, where as a validator, if you meet certain conditions based on the chain and the governance that's been set, you can actually participate in governance votes and help modify and make changes to that chain and its different variables moving forward. But there are also block rewards and transaction fees, and that's where the money comes in. This is just a very small example; I think this is one day of rewards for a validator that I was running on a new chain that had just been released. You can see here I'm getting some DAI and I'm getting some TICK. While this validator is using TICK as its denomination and also paying some commissions in DAI, this could be whatever coin or whatever chain you're running your validator for. In the instance of Cosmos, that would be ATOM, right? Today it's kind of set to a single chain, but the real value for validators is that you can run validators on multiple different chains and tokens, provide the security and the governance, and get the kickbacks from that.
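To make the commission piece concrete, here's a quick sketch of how a single reward might get split between a validator's commission and its delegators. The function name and the 10% rate are illustrative assumptions, not actual Cosmos SDK distribution code.

```python
# Hypothetical reward split -- illustrative numbers only, not the real
# Cosmos SDK distribution logic.

def split_reward(block_reward, commission_rate):
    """Return (validator_commission, delegator_share) for one reward."""
    commission = block_reward * commission_rate
    return commission, block_reward - commission

# A validator charging 10% commission on a 50-token reward:
commission, to_delegators = split_reward(50.0, 0.10)
print(commission, to_delegators)  # 5.0 45.0
```

Delegators get their share proportionally to their stake; the commission is the validator's cut for running the infrastructure.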
But in the future, with some of the scaling efforts and new technology that's coming out, those validators are going to be able to validate multiple types of coins. So if I'm a validator running a whole bunch of different chains, my rewards for the day could be Bitcoin, Dogecoin, Litecoin, DAI, TICK, whatever I choose to validate on or stake my tokens with. And that's something important that we're going to talk about a little later. From the validator aspect, like I said, there are the rewards, the transactions, the governance, being able to participate in the system. The other thing that's important is that the more stake, the more coins or tokens, that's been bonded with a validator, the more power and say that validator has within the system. So it's very important to keep up on that, bond those tokens, and contribute back to the ecosystem in order to be a good neighbor. The other reason validators run this: while you're getting block rewards and contributing, it's pretty cheap to do. This is an unnamed cloud provider and some simulated costs, and as you can see, for the month it's extremely cheap to run these instances. They can be virtualized instances or they could be bare-metal instances, and there are a lot of benefits to running in a cloud provider, which we're going to jump into. The overall point is that it's extremely cheap to do this on the infrastructure side. It's not necessarily as cheap on the management side, the personnel, the human cost. But at least getting things spun up and running a validator is something you can get done in an afternoon, if you're the type of person who has some familiarity with cryptocurrency, with the Linux command line, and with security best practices, because you are dealing with things that have a ton of value associated with them.
There are a lot of different networks that currently run validators and these proof-of-stake type systems. Cosmos, here in the middle, is the big one we're focusing on for this talk, but a lot of the information presented here can be applied to many of these different networks that run validation systems. Microtick, down here on the right, is a brand new one that just came out within the past couple of weeks. They launched their mainnet, and it's one that I practiced some validation on. They're doing some interesting things with price discovery of assets, and it's really interesting because you're seeing projects use the Cosmos SDK, where a lot of the things we're talking about are going to be applicable. Other chains are forking it off and doing their own things with it, creating great new products and services for end users that we can all jump in and play with. So if anybody after this talk does have an interest in validating, I would definitely go out to some of these networks, look at their documentation, spin up some virtual instances, or if you have some spare bare-metal hardware, try jumping on their testnet and seeing how this all works. With Cosmos specifically, we were talking about value, and we can see here some stats from one of the block explorers for the Cosmos Hub. Right now the ATOM token is, I think, at $4.15 at the time of this presentation; it might be $4.17 now, it might be $4.13 a second from now. They've got about three million blocks on their current chain, and this is what's really important: this is showing how many of the atoms are bonded, or staked, back into the system. Right now it's at 71%. So 71% of all the atoms out there are being staked and held securely for these validation services. Now, when we take the current price and work that out, that's about $760 million currently being bonded behind this network to protect it and prop it up.
Another important number here is inflation, which you'll see at 7%; remember that one, because we're going to talk about it in a couple of slides. So we now have validators set up in this system that are validating transactions. These validators could be run by you, or they could be run by an entity or some other third party or trust, right? Anybody can run a validator. But the other thing you can do as a user is stake your own atoms or tokens with a validator, and the reason you would want to do that is to additionally profit. This is kind of similar to the staking-pool concept in mining, where I don't want to run all the hardware and I don't want to take on the risk, but I do want to contribute and benefit in some way. It's a great way that anybody can participate in these networks, contribute to these validators, and also potentially profit and be rewarded for contributing to that network. Validators are going to get block rewards, and the reason you would want to risk your money with a validator is that block reward. There's an inflation component here, and that was the 7% we were talking about, so there's an incentive for validators and holders to stake their tokens over time. You can see here the average yield in this example is 9.46%. If you compare that to a 7% inflation rate, you're up about 2.46% net at that point, so it still makes sense to bond tokens into this. As long as that average stays above the inflation rate on the network, it's profitable for you to be bonding your tokens over time and getting those interest tokens back, essentially. You can think of this kind of like a savings account or a high-yield account, where you have to manually go back in, pull out your rewards, and redelegate them to a validator, and over time your reward amount is going to grow. And just some statistics here.
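If you want to see that inflation-versus-yield arithmetic written out, here's a quick sketch using the numbers from the slide. The figures are illustrative; real returns also depend on commission and how often you compound.

```python
# Back-of-the-envelope staking math from the slide -- illustrative only.

def net_yield(staking_yield_pct, inflation_pct):
    """Approximate real annual return, in percent, for a bonded holder."""
    return staking_yield_pct - inflation_pct

print(round(net_yield(9.46, 7.0), 2))   # 2.46

# Monthly reward on 686 atoms at a 9.46% annual yield:
monthly_atoms = 686 * 0.0946 / 12
print(round(monthly_atoms, 2))          # 5.41
```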
So for 686 atoms, you would be getting a monthly return of 5.41 atoms. At $4 an atom, you're looking at $20 to $25 of basically passive income at that point. Now, when it comes to validators, another thing that incentivizes them, outside of the inflation component, is slashing. And when we're talking about this, we're not talking about the guy from Guns N' Roses; we're actually talking about penalizing participants in the network for violating certain conditions. This can be in the network, this can be in the consensus layer, but it's a huge financial incentive for validators to play nice, because any time you introduce these types of systems, you're going to have hackers, you're going to have bad actors, you're going to have people trying to gamify it and say: how can I increase my rewards? How can I knock you offline? How can I trick the system, or the state of the system, to profit for me? What's the incentive not to do that? Slashing is a great component for that. There are two ways you can really get slashed when you're talking about Cosmos and proof-of-stake coins currently: downtime and double sign. Both start with D, so they're pretty easy to remember. Downtime has some conditions where, out of the last 10,000 blocks, if you're down for more than 95% of that, you're risking a downtime slashing event. You can see here in this block explorer, these blocks were being signed and everything was great, and then at some point something went offline and now we're missing them. If that continued for the next 9,500 blocks or whatever, and we were not able to troubleshoot our validator, figure out what's going on, or move to another system, we'd risk incurring a downtime event and having a slashing event occur. The other big one is a double sign.
And this is an example of a double sign that happened on the Cosmos mainnet. What ended up happening was that the private key for the validator was used to sign two votes for the same block, that was pushed out, and it was detected. As a result, that validator was jailed, because you can't double-sign your vote. Once you're jailed for a double sign, unfortunately, you are in jail and there is no getting out; your validator will not be able to rejoin the system. You've screwed up, and you've screwed up in a major way. When it comes to downtime, the amount of stake that can be slashed is very minimal. It's not a huge amount, but it depends on how many atoms are staked in your validator, by yourself, which is called self-staking, and by others. A validator may have a million atoms and stake their million atoms to say: I believe in my system, I believe in myself, I'm going to stake all my atoms with it. But as I said, users can also stake. So when these slashing events happen, not only is the self-stake affected, but the user stake can be affected too. As a validator, if you do have a slashing event and you lose stake, you're losing money, tokens and digital value, for your end users, and they're not going to be very happy with you or want to contribute to you in the future. The penalty for downtime is that you miss your rewards while you're jailed, and that goes on for a 10-minute period. With these types of events, usually you only get hit for the first one; it's not recurring. They put, I think it's called a tombstone, I know there's a tombstone for double signing, but I forget what it's called for downtime, I apologize, but it's basically a kind of buffer to say: we hit you for downtime, and we're not going to hit you again within a certain period.
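The downtime condition from a couple of slides back can be sketched like this. The window and threshold mirror the 10,000-block / 95% numbers from the talk; the parameter names are my own, and the real values come from on-chain governance parameters.

```python
# Sketch of the liveness rule: sign at least 5% of the last 10,000
# blocks or risk a downtime jailing. Names and values are illustrative.

SIGNED_BLOCKS_WINDOW = 10_000
MIN_SIGNED_FRACTION = 0.05   # i.e. down for more than 95% of the window

def downtime_jailable(blocks_signed_in_window):
    """True if the validator missed too many blocks in the window."""
    return blocks_signed_in_window < SIGNED_BLOCKS_WINDOW * MIN_SIGNED_FRACTION

print(downtime_jailable(9_900))  # healthy validator: False
print(downtime_jailable(400))    # signed only 4% of the window: True
```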
So if you are being slashed for downtime because your server's down or you're having some sort of redundancy issue, it's still in your best interest to get it back up as soon as possible, because these things are going to start to get costly over time. But there is some protection there for end users, so it's not just going to constantly stack over time unless certain conditions are met. With double signing, it's huge: it's 5%. So if you do have a million dollars of atoms on your validator and you take a 5% stake slash, you guys can figure out how much money that is, right? If you double sign, you also hit an automatic unbonding penalty, and it incurs a three-week unbonding period where those atoms cannot be touched while they're being unbonded from the validator. I put the little tombstone here for the RIP, because if you double sign, you're done. You're not going to be able to move forward with that validator. You're either going to need to redelegate all of your stake to another validator, or allow your users to unbond all their stake and move forward with somebody else. You're out. Because of all this, you want to make sure you're doing your due diligence when you're either staking with a validator or thinking about running one yourself. These are all points you should be thinking about. From my experience, it's recommended that you diversify; at minimum, three is probably a good bet. I don't want to say best practice, but I usually like to stick to three to five different validators for different tokens. The reasoning is that we're in a decentralized system and we should be decentralizing as much as possible, and I'm also mitigating my risk: if one of those three or five validators does incur a slashing event or a downtime event, for whatever reason, I've mitigated my risk so that I'm not losing all of my funds at the same time.
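Here's that 5% double-sign math written out, plus why splitting your delegation across validators caps the damage. The numbers are illustrative.

```python
# Illustrative slash math: 5% is the double-sign penalty discussed above.

def slash_loss(stake, slash_fraction):
    """Atoms lost when a validator holding `stake` gets slashed."""
    return stake * slash_fraction

total_stake = 1_000_000  # atoms delegated in total

# All of it on one validator that double-signs:
print(round(slash_loss(total_stake, 0.05)))      # 50000

# Split evenly across 5 validators, only one of which double-signs:
print(round(slash_loss(total_stake / 5, 0.05)))  # 10000
```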
You want to do some open-source research, some open-source intelligence, and look up the teams behind them. Look up the history of the validator. What does their history look like? Is it clean? Do they have a lot of downtime, or maybe a misconfiguration where they're not signing one out of every 100 blocks for some reason? Why is nobody noticing or fixing that? Are they really paying attention to their system? Have they been jailed before? It is possible to go to jail and get out of jail; as we said, if you double sign you're done, but for minor events you can get out of jail if you pay the penalty and wait out your jail time. So if they have a history of jailing occurrences that you're not comfortable with, you shouldn't be giving them your money, because what you're doing is essentially letting them hold digital value on your behalf. The other thing is that a lot of teams have gotten really good in the past year at putting a presence out there. There are people and corporations doing this as a business, right, where they validate on all the chains I listed earlier and more, because there are a ton of them out there, each doing their own little niche with proof of stake, their chain, and their use case. Some of those teams are getting really good at publishing a social presence: listing their teams on web pages, showing you what chains they run, or actually showing you pictures of, here's what our validator setup looks like. Be careful when you look at that, because it's very similar to the ICO craze of 2017, where if you go and reverse-Google-search or TinEye-reverse-search some of the team pictures, you're going to find it's the LinkedIn photo of somebody who works in aviation or something like that, with nothing to do with the project, and they're just stealing that person's information to try to fleece you.
So do your due diligence, don't get scammed. The reason you want to do that: this is a personal example. I was validating on a testnet for a coin that had not been released yet. And because it was a testnet, and because I wasn't taking it as seriously as I should have, I used some spare hardware and ended up experiencing a 36-hour power outage. This spare hardware was hooked up to a Comcast cable modem, no redundancy across the board. During the power outage, I was focused on saving my food, and a toddler, and what are we going to do, it's supposed to be 95 degrees tomorrow, all of those things you have to deal with when life hits you and shit kind of hits the fan. Those are the times you're not going to be thinking about your validator, right? You're not going to be thinking about the thing you set up in the dressing room that just runs off the modem, that your wife doesn't like and the kid resets all the time. Unfortunately, I'm a victim of this, and thankfully it was only on testnet, but I ended up losing 10,000 coins because I did not have redundant infrastructure set up. Now, if that had occurred on mainnet and I was a validator, I would have taken a big PR hit and lost my users' funds. In my instance, thankfully, it was only testnet and it was a great lesson learned. And this is actually a picture of my validator. This was an Intel NUC, a spare machine that I set up and, as I said, hooked to a cable modem with a single Ethernet line, a single power line, a single SSD in here somewhere. No redundancy, no infrastructure, not even really any status lights to tell me what's going on. So if you're thinking about running a validator, and you find that the people you're thinking about bonding your tokens with are doing this, you should run away as fast as possible. It's great for development and great for testing, but it should never be done in production. Here's what you want to do in production.
So we want to see the data center, we want to see the cool blinky lights. The reason you want to do this is that you're getting cooling, you're getting monitoring and power and network, and all the good warm fuzzy feelings that go into your hosting bill when you have to pay it at the end of the month, to justify all the servers and things filling these racks. So stand on the shoulders of giants: use the experts, use the redundant infrastructures that have already been built and, hopefully, been tested correctly to host these types of things and do validation. We want to implement all the standard best practices. When it comes to redundancy, you want to hit all the major points, and for monitoring we're going to jump into a couple of different solutions that can actually alert you when you do have that 36-hour power outage. There are a lot of administration concerns, too, with different configurations. Say Cosmos has published a security update for its SDK and it's an emergency upgrade: how are we going to do that? Do you task the data center tech with going in and upgrading that binary, or do you have a subject-matter expert on staff who can handle it? Do they know the difference between GOPATH and your PATH environment variable, and all of those types of things? Those are all considerations and speed bumps that are going to occur while you're working on these types of systems. You need something to do some sort of repo monitoring, to be able to check when upgrades and new checkpoints and releases come out. And then deployment is another big consideration for administration. There are a lot of different schools of thought on deployment, and we'll jump into some of the use cases as to why some are better than others, but it's definitely a concern when you're architecting and administering these types of systems.
And then we want to jump into a lot of the security components: the controls and the different aspects of how you're mitigating your risk, how you're delegating access, who knows what, and why do they need to know it. When it comes to security, there are two keys specifically for Cosmos that need to be protected and thought about with all those prior best practices for redundancy and backups and things of that nature. That's your Tendermint key, which actually does your consensus voting, and then the application key, which derives your validator addresses and signs your actual transactions. Both of those need to be considered when you're building these systems. One, where does the key live? Two, how did we generate it; did we do it in a secure way? And I'll show you some ways you can do that. Three, where are we backing it up? Does it just exist on some EC2 instance somewhere, or do I have key material written down in a waterproof, fireproof safe that's access-controlled? A lot of that needs to go into the thinking, mindset, and paradigm of how you're architecting this system. And you really want to do it right from the beginning, because if you deploy a validator and then determine that you didn't generate your key material securely, you don't want to have to go back and redo all that stuff, redelegate, and try to make sure people are aware that we're validator two instead of validator one now, those types of things. So when you're building this stuff, encompass all of this in your thinking and do it right from the beginning, because of what these keys are really protecting: I checked CoinCap.io right before this presentation, and it's a $1.3 billion market cap for Cosmos atoms right now. For a coin that's still relatively brand new in the scheme of things, we're already at a billion-dollar market cap at current prices.
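On the "did we generate it securely" point: whatever tooling you use, the entropy should come from a proper CSPRNG, and you should keep an offline fingerprint of your backup. This is just a sketch of that idea, not how you'd actually create a Tendermint or application key; use gaiad or a hardware device for real key material.

```python
# Sketch: source key entropy from the OS CSPRNG and keep a short
# fingerprint you can write down in the fireproof safe. NOT a real
# validator key -- use proper tooling or a hardware device for that.

import hashlib
import secrets

entropy = secrets.token_bytes(32)                       # 256 bits of entropy
fingerprint = hashlib.sha256(entropy).hexdigest()[:16]  # checkable label

print(len(entropy), len(fingerprint))  # 32 16
```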
So if the bull market really, really starts taking off at that point, these keys are protecting crazy amounts of value. Are you comfortable leaving that type of key material sitting on a server where the data center tech down the street, who happened to be working the day you called in and said, my server's down, can you check the lights on my Dell system, is looking at the console and single-usering in? While your key material may be protected by local key stores and things like that, what stops him from sniffing memory? What stops him from imaging the drive, or breaking RAID, or things like that, and then clearing logs? Those are considerations you want to think about when you're building those systems. For redundancy, it's all the great stuff with power supplies and phasing and networking. I've seen some validators that run on a single network card. What happens if that card fails or the cable goes bad? Or a bonded pair, and one cable goes bad? You're going to be at risk of a potential downtime event. When it comes to storage, there is a lot of data that goes into running a validator. A validator essentially needs a copy of the full-node chain data on itself, or it needs to link to another server, a sentry, that has a copy of that full-node chain data. So build in redundancy, because if an instance goes down, once you rebuild your new instance with your new validator binaries, you need to sync all the chain data again. If you're in that downtime window, the 95% of 10,000 blocks, which I think is about half a day, and it's starting to get down to the wire, I've seen it where we've started the binaries and my validators still needed two hours to catch up and sync chain data because I didn't have a copy of it anywhere. So I would definitely recommend looking at things like RAID, and if you're using SANs, doing snapshots and things of that nature.
The Cosmos team has published snapshots of chain data over time, where you can go download those to sync, but I'd be careful about which sources you're pulling from, because if a bad actor wants to seed bad data to you, you could potentially be risking another issue with your validator at that point. Then there are some other gotchas that people don't necessarily think about. There are some funny stories, instances where you're standing in front of the server in the data center going: I'm here, and I have no way to log in; there's no keyboard, right? There's no crash cart available in the data center. So run through some of your BCDR-type policies: what do we need in the cage? Do we need spare hard drives? Do we need installation media for OSes if we have to spin one up? Do we need config variables to point to a PXE server, or to a Kubernetes cluster, or however you decide to architect the solution that best works for your information system? And I'll note down here: any time you're running through this type of stuff, architecting it and running through these exercises, you always want to think about the worst-case scenario, which is double signing. For a double sign to happen, those Cosmos keys need to be either duplicated somewhere, or pulled into a configuration, or there are two instances running with that same key material. So always think about that. That's the grenade in the mix that always needs to move forward and can never be copied, because if it is, and it goes live, you're risking that 5%. I put this slide in here just because I hate DNS. DNS is a big one, and you want to be careful with it, because when you start to build validator infrastructure and you start to add additional security controls and different servers with copies of chain data, DNS is one that may get you without you even realizing, because nobody ever thinks it's DNS, but it always is.
So make sure your hostnames are correct, and make sure you have redundancy upstream in your network stack, not just the cables but even the routers in the rack. If you're using Netgear or Cisco equipment, do you have it in a stack configuration? Do you have an A/B failover? If your switch or your Cisco ASA license expires and you start getting throttled, or a feature pops off, what happens to your system? These are all things to simulate and think about when you're building this, and DNS is one of the ones I don't want people to forget about. So now we need to monitor all this stuff on our validators. We've figured out how it works, we've got some coins, we've got a system set up. Now, how do we avoid that 36-hour outage, right? Simply VC is a company out there that does a ton of validation work on these chains, and they released this open-source software called PANIC, and I'm a big, big fan of open-source software. What this will do is monitor things at the Tendermint layer and at the local server layer, and it'll do email, Telegram, or call you through Twilio integrations if something's not responding. It'll also do the repo monitoring we talked about earlier: a new version of the Cosmos SDK, or gaiad, or whatever binary has been released. You need to be aware of that so you can go check the release notes and see, is this something I need to apply now, or can I defer it until later? I love it, and I use it for all my validation stuff. This is my Telegram integration, and you can see it doesn't just tell me things like the services are down. It actually tells me things like my peers have increased, or the node was inaccessible. If people delegate to my validators, I get updates on my voting power, so I know that I've got more power in the system for consensus votes, or that something is happening and I need to be looking at my validator, because if people are delegating to me, I want to know why.
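Under the hood, the core of this kind of monitoring is simple: poll your node's RPC, and alert if it's unreachable or the block height stops moving. Here's a bare-bones sketch; the /status endpoint is standard Tendermint RPC, but the alerting logic here is my own assumption and nowhere near everything PANIC actually checks.

```python
# Minimal PANIC-style liveness check -- a sketch, not the real tool.

import json
import urllib.request

def should_alert(prev_height, curr_height):
    """Alert if the node is unreachable or height stopped advancing."""
    if curr_height is None:
        return True          # RPC down -> page someone
    return curr_height <= prev_height

def fetch_height(rpc_url="http://localhost:26657/status"):
    """Read the latest block height from a Tendermint RPC /status call."""
    try:
        with urllib.request.urlopen(rpc_url, timeout=5) as resp:
            status = json.load(resp)
        return int(status["result"]["sync_info"]["latest_block_height"])
    except OSError:
        return None          # unreachable

if __name__ == "__main__":
    print(should_alert(100, 105))   # chain advancing: False
    print(should_alert(100, 100))   # stalled: True
    print(should_alert(100, None))  # node unreachable: True
```

In practice you'd run something like this on a schedule and wire the alert out to email, Telegram, or Twilio, which is exactly the glue a tool like PANIC gives you.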
And then, as I said, it gives you peering changes. So if peers are dropping off the network, that may be indicative of a network event, or a binary upgrade, or something else going on with those other servers, people doing some work on that stuff. It's good to be aware, because we may need to take mitigation efforts. Prometheus is another great one. This is all packaged with the current Gaia installation for Cosmos, and it's a very simple config change. It's basically a metrics port that exposes a whole bunch of data about what's going on in your validator, which you can feed into an ELK stack, or different dashboards, or whatever you choose. It has really good support for Grafana, for example. This brings it into the DevOps infrastructure world of: I want to build dashboards, and I want cool graphs and stats and stuff like that. So if you don't want to do that through a block explorer, if you want to run something locally, it's great to turn Prometheus on and throw Grafana or some sort of ELK-stack monitoring on top. So we've got our monitoring taken care of, and now, remember that data center tech? Let's use AWS, for example. You're running a validator in AWS, and you've got this private key material on there that's validating transactions and participating. What stops the data center tech from logging into your system and grabbing your key material? What stops a hacker from getting on your server and grabbing that stuff remotely if you haven't configured your security groups and your firewalls correctly? That's the problem, and the solution is HSMs. HSMs are great, if you guys have never used them. They're a little bit expensive, but it's very similar to the way YubiKeys and things like that work, where the private key material is stored on a hardware device, and it signs messages without ever having to expose or reveal that material.
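The property that matters here is sign-without-export: callers can request signatures, but the key never crosses the device boundary. This toy sketch shows only the interface idea, using stdlib HMAC in place of the Ed25519 signing a real Tendermint validator key does; it is in no way a real HSM.

```python
# Toy signer: the key stays private to the object; callers only ever
# get signatures back. A real HSM enforces this boundary in hardware.

import hashlib
import hmac
import secrets

class ToySigner:
    def __init__(self):
        self.__key = secrets.token_bytes(32)  # never leaves the object

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__key, message, hashlib.sha256).digest()

signer = ToySigner()
sig = signer.sign(b"precommit for block 3000123")
print(len(sig))  # 32 -- callers see signatures, never the key
```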
In theory, it's not supposed to allow any private key extraction. Now, there have been attacks, and that's not to say things can't fail in the future, but currently it's not supposed to allow any private key extraction. So what you would do, when you're generating key material to set up your validator or your wallet, is generate those keys in a secure way and then store that private key material on a hardware device. The two I've used are the YubiHSM 2 and the Ledger Nano S, which both work very well with Cosmos, and I know they work with a few other chains that are out there. This would actually physically sit in your bare metal server in the data center, in my case in my cage, and provide the private key material and signing functions without revealing them. So if somebody was to get into your server remotely, or if the data center techs sign themselves in, they're not going to be able to extract the key without a whole bunch of authentication information and secrets that they don't have, plus the physical hardware security capabilities of the device itself. One thing I will note, and be very careful with this stuff, is that the YubiHSM in particular is a very small device, and in order to firmware-reset it, I think it's something like holding the touch contact for 10 seconds. So if you're in a data center failing over from server A to server B because you're doing maintenance and you pop your HSM out, be careful how long you're holding it and where you're touching it, because if you wipe that private key material, it's not going to be recoverable, and you'd better hope to hell that you have a backup.
The Ledger works really great with it too, and it's got the little graphical display. Same thing: there are always security vulnerabilities out for hardware wallets, but people are keeping up on those and auditing them, and it's still a better layer of defense than just leaving the key material on the server itself. Another thing you want to do from a security aspect is set up a sentry architecture. A sentry is basically a full node of the chain that you're validating on. You could run your validator straight out to the internet and start broadcasting, but it's better to put it behind another layer of obfuscation and network connectivity. What we're doing is taking the validator, which was connected to the public internet and gossiping a lot of information about our transactions and the validator itself, and moving it up a layer behind a sentry layer. When we do that, it enables us to load balance and have some redundancy at the sentry layer, where we can deploy multiple sentries. I've used two in this example, but it can really scale. I don't know if there's a technical limit on that, and that would be a great follow-up, but as far as I'm aware I haven't seen a technical limit on how many sentries you can run. The whole point of this is that nobody is going to be able to get to your validator directly on any public IPs or anything like that. You're going to have private communication between your validator and your sentries, and I've seen this done many different ways: the validator could be inside its own VPC with IPsec tunnels talking out to your sentries, which sit in their own VPC of two or three instances, or it could just be two or three EC2 instances talking to each other through private IP space.
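A minimal sketch of the peering side of that setup, using Tendermint's config.toml settings (the node IDs and private IPs here are made up for illustration):

```toml
# Validator node's config.toml (sketch; IDs and addresses are hypothetical).
# Turn off peer exchange and pin the validator to its sentries only, so it
# never gossips with, or reveals itself to, the public network:
[p2p]
pex = false
persistent_peers = "sentry1nodeid@10.0.1.10:26656,sentry2nodeid@10.0.1.11:26656"

# Sentry nodes' config.toml:
# [p2p]
# pex = true                            # sentries gossip publicly as normal
# private_peer_ids = "validatornodeid"  # but never gossip the validator's address
# persistent_peers = "validatornodeid@10.0.1.5:26656"
```

The effect is that the validator only ever talks to its sentries over private address space, while the sentries face the public internet on its behalf.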
The validator may not even be in the same location or site; it could be bare metal talking up to cloud sentries somewhere, or bare metal sentries in another data center across the country. I wouldn't recommend that for latency and packet-delivery reasons, but it is technically possible. This stops people from denial-of-servicing your validator. It also gives you some redundancy for failures with your different sentries, and, as I said, it protects the communications between your validator and your sentries. The validator gets information from the sentry, signs transactions, and broadcasts them back out through the sentry to the public internet. So from an attacker's standpoint, they would only get as far as the sentry layer; they wouldn't get to the stuff that actually matters. Other considerations with your security architecture: we never want to double sign. A lot of cloud providers offer auto scaling, and I think that's great for sentries, but I would avoid it for validator infrastructure, because you start to get into questions of where your key material lives and how you migrate it. If server A still has it active, server B spins up with copied key material, and that service starts, I've now double signed, right? And I'm in jail and it's game over. For any automation and orchestration pieces with the sentries, you want to think about the blockchain data and the state data. How are you moving that? How are you verifying it? How are you loading it into new instances if you're doing auto scaling, or do your new sentries have to come up and sync the entire chain before they can be active? You also need to worry about doing your config updates. If I have a validator infrastructure with two sentries and I add a third, I now need to go back to all of my existing systems and do config updates so that my configs say talk to sentry three, not just sentries one and two.
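To make the double-signing risk concrete, here's a hypothetical start-up guard in shell. This is purely illustrative: the lock path, the 60-second freshness window, and the guard idea itself are my own assumptions, and a real deployment should prevent two live signers structurally (one HSM, one signing process) rather than with a script:

```shell
#!/bin/sh
# Hypothetical start-up guard against accidental double-signing.
# Before starting the validator, check a shared lock file: if another
# instance refreshed it recently, refuse to start rather than risk
# producing two signatures at the same height.
LOCK=${LOCK:-/var/run/validator.lock}

acquire_signing_lock() {
  if [ -f "$LOCK" ] && \
     [ $(( $(date +%s) - $(stat -c %Y "$LOCK") )) -lt 60 ]; then
    # Lock touched within the last 60s: assume another signer is live.
    # The caller should abort start-up when it sees "refuse".
    echo "refuse"
  else
    touch "$LOCK"   # claim (or reclaim) the signing role
    echo "acquired"
  fi
}
```

Real setups would refresh the lock on a timer and put it on storage both hosts can see; the point is simply that a failover script needs an explicit "is the other signer really dead?" check before bringing key material live.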
So when you're building the automation for sentries and that infrastructure, you need to backfill all of those changes to the stuff you've already deployed. And sentries are just like validators in the sense that each one is a full node: if there are security updates for the binary, or issues where it goes down, you need to be doing all the same monitoring, the same repo tracking, and the same binary upgrades for those sentries. So if you're scaling your sentry architecture out horizontally to 40 sentries, just realize you need to do updates for all 40 when things happen. Then there are some final other considerations. As I said earlier, there are a lot of different mindsets on whether you should virtualize validators, and I've gone back and forth on this, but I feel like validators should be bare metal. While it is technically possible to deploy an EC2 or GCP instance and have it validate with key material on that instance, that's not going to be the most secure or well-thought-out solution out there. If I were comparing two validators and deciding where to delegate my funds, I would not pick that one, right? I would go with the person who considered some of this stuff, even if they were only using one or two bare metal servers with a hot/cold failover. To me, that's a little more comfortable than somebody running pods of validators and moving key material back and forth, because all it takes is one weird edge case or anomaly to hit some sort of double-signing scenario, unless your deploy scripts are tweaked extremely well with a lot of tests and error catching. And yeah, it's scary when you look at the amount of money at stake with that 5% double-sign slash. There are other things you want to consider, like minimum config values.
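The backfill step can be sketched as a tiny shell helper that appends a new sentry to an existing node's persistent_peers line. The file path, peer strings, and the sed approach are illustrative only; a real rollout would push this through your config-management tooling and restart nodes one at a time:

```shell
#!/bin/sh
# Sketch: when a new sentry is added, backfill its address into the
# persistent_peers line of every existing node's config.toml.
# Assumes the existing peer list is non-empty and on a single line.

add_peer() {
  config="$1"   # path to a node's config.toml
  peer="$2"     # new sentry, as nodeid@host:port
  # Append the new peer inside the existing quoted peer list.
  sed -i "s|^persistent_peers = \"\(.*\)\"|persistent_peers = \"\1,$peer\"|" "$config"
}
```

You would loop this over every existing validator and sentry config, then restart each node on a rolling schedule so the fleet never loses quorum of peers.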
So I've done some attacks and seen some defense on sentries involving spam and dust transactions, where people try to tax your validator and sentry resources. The reason they would do that is to try to gain an edge: maybe they run their own validator, or there's some sort of value in it for them if you end up jailed. There are some minimum config values that validators and sentries can tweak to say, ignore more of the spam and only give me the good stuff. There's an art to it, and you've got to be careful because you want to be a good network citizen, but just know that you can get pretty granular with your validator config and the different values you participate with at the Tendermint layer and the consensus layer. That goes for denial of service too. And then there's this last one here that I don't think a lot of people consider, although I've actually seen a lot of wallet and validator software getting better with it. People get in the habit of delegating and earning rewards, and then they'll pull out all of their rewards and redelegate every single token they have: all my rewards in my wallet, send it back to the validator, because I want that compounding interest. And what ends up happening is it works: they sign the transaction, send it off to the validator, and then they literally have zero in their wallet. You're familiar with ETH and other systems: if you send all your ETH out and then try to do an ERC-20 transfer, what's it going to do? It's going to complain and say you don't have enough ETH for the gas for the transaction. So be careful with that, because if you redelegate everything in that wallet, you're not going to have enough gas to do any more transactions in the future. You're going to have to go procure atoms from somebody else, or wait until you've accrued enough rewards to be able to continue doing transactions.
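For the dust/spam filtering piece, the Cosmos SDK exposes a minimum-gas-prices setting in the application's app.toml. The value below is just an example policy, not a recommendation; each validator picks its own threshold:

```toml
# app.toml (Cosmos SDK application config) -- sketch.
# Transactions offering less fee-per-gas than this are rejected from the
# node's mempool, which filters out dust/spam before it consumes resources.
# The exact value is a per-validator policy choice.
minimum-gas-prices = "0.025uatom"
```

Set it too high and you stop being a good network citizen by refusing legitimate low-fee transactions; set it at zero and anyone can fill your mempool for free.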
So I've gotten in the habit of leaving half an atom, or a whole atom, or one or two of whatever token I'm using, in the wallet, just so I know I'm always going to be covered for transactions in the future. And that stuff is getting way better, with wallets doing fee calculations and even notifying users. Cosmostation is a mobile wallet that I love, and it will actually come up and warn you and say, hey, you shouldn't be delegating the max, you should be delegating 90% of the max, or whatever you want to do. So with all the things we talked about with redundancy, I wanted to ask the question: should you be running a validator virtualized? In my opinion, no, not for the validator. For sentries and things like that, where nothing is actually signing transactions or dealing with value, virtualization is great for scaling up and auto scaling, and as long as you take some of the data considerations I was talking about earlier into that system, it's amazing for sentries. I fully recommend GCP, Amazon, DigitalOcean; it takes the onus off you and gives you all of that redundancy and good warm fuzzies. But the validator, I feel, is something you should be controlling a little more manually, doing your own internal orchestration and things like that. You want to be careful about where you're hosting it. Who has access to it? What is their security posture? What is their uptime and their SLA? If their SLA is more than a day, that's not good for a validator, because if you have a downtime event, you could be slashed. All things to think about when you're negotiating with your service providers. The last thing for best practices I wanted to talk about is the Cryptocurrency Security Standard, and this is more in line with what I talked about last year here at the Blockchain Village.
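The leave-some-behind habit can be sketched as a trivial calculation. The reserve size here is my own placeholder, expressed in uatom (1 ATOM = 1,000,000 uatom):

```shell
#!/bin/sh
# Sketch: how much of a wallet balance is safe to (re)delegate while
# keeping a reserve behind for future transaction fees. All amounts
# are integer uatom; the reserve size is a judgment call, not a rule.

max_delegatable() {
  balance="$1"   # current spendable balance, in uatom
  reserve="$2"   # fee reserve to keep, e.g. 500000 (= 0.5 ATOM)
  if [ "$balance" -le "$reserve" ]; then
    echo 0       # not enough to delegate anything safely
  else
    echo $(( balance - reserve ))
  fi
}
```

So with 1 ATOM in the wallet and a 0.5 ATOM reserve, you'd delegate at most 0.5 ATOM, which is exactly the "delegate 90% of max"-style warning Cosmostation surfaces, just with a fixed floor instead of a percentage.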
It's basically an amazing standard, put out by C4, that dives into a lot of different security controls and aspects one can implement in a cryptocurrency information system. It gives you guidance, and it gives you each control at different levels, from level one to level three: here's the very basic control I can put into place, versus level three, the ultimate paranoid control. To give you an example, one would be key generation. We were talking about keys earlier, which ones are for consensus and which ones do transactions, and one of the things the Cryptocurrency Security Standard asks is: how were those keys generated? What device did you generate them on? What was the entropy source? Did you do it on the network-connected computer that you use for work and Netflix and torrenting and all that stuff, or did you do it on an air-gapped machine with a secure, clean-room type of setup? Did you back it up correctly? Level one may say: I made a backup of the key, it's on paper somewhere, I'm good. But level three would say that backup needs to be sealed, fireproof, waterproof, environment-proof, and access-controlled. So as you go up the stack, there are different requirements and different levels of security and paranoia. And while you may not be able to use the Cryptocurrency Security Standard to certify your information system, depending on what you're doing, it's a great framework to check your validator against for different levels of compliance. It also gives you some really good ideas for things like data sanitization policies: if you are spinning instances up and down, where are your logs going? Do you have a process for clearing all of that correctly and securely, so that nobody can run forensics on it if they wanted to profile you or gather more information?
So there are a lot of great things in there that I think definitely apply to this kind of information system, even if not every single one of them does. So what's next for the future? This is really exciting for me, for validators, and for Cosmos: IBC is right around the corner, and I just took part in Game of Zones, which was an attack/defend scaling competition that the Interchain team, Zaki Manian, and Jack Zampolin put on. We had a whole bunch of different validators and people taking part in it, and as you can see in this graph, all the participants were connected to a main Cosmos hub. That's very much indicative of the hub-and-spoke infrastructure I think everybody is used to. But this was to prove that all of these people could connect into the main hub, and to get them prepared for the next thing, which is the IBC protocol. When we're talking about the IBC protocol, we're talking about validating on chains that we're not necessarily running ourselves. All we're doing is syncing state with them through light clients, so that if Bob's chain, for example, wanted to hold Alice's coin, Bob's and Alice's chains can now talk together through a light client via IBC. So now we're breaking away from that hub-and-spoke model, and our validators can choose who they want to talk to and where they want to validate. You're seeing here the very beginning, where different zones are breaking off into their own sovereign zones and saying: I want to validate on this chain, or I want to participate on this chain in addition to my own. And this is the magic I was talking about earlier for validators: when they go to earn their rewards, they could be validating Bitcoin, Litecoin, Ethereum, and Atoms, as well as Ron, Bob, Alice, and Jim coin, because they see some value there.
And when you start to open up that ecosystem and that level of connectivity, there are some really, really cool applications and use cases that people are talking about and playing with, and it gets me really excited. So that's what I have for my talk. If you do have any questions, feel free to reach out to me at my consulting company, Stoner Consulting. I love talking about this stuff, and I'd love to go into detail if anybody wants to dig in more. I also wanted to throw a special thanks out to Sean Martin, one of the people who's worked with me on validators and infrastructure; he's really an infrastructure magician. So if you ever have infrastructure questions or scaling issues or anything like that, please let me know and I will get you in contact with Sean. I'm Ron Stoner, and thank you for attending my talk. Sorry, I was looking to see if there were any questions. I did see one correction in there: yes, it is called a liveness check, that is correct. That was in regards to the violations, the consensus violation and the liveness violation, so downtime. That is the correct terminology for it. Thank you. Cool. And I don't think I see any other questions, unless anyone has any. Thank you, Jit. Thank you, Nathan. Thank you to everybody who put on this village and is promoting blockchain. I love it.