I'm Marius. I've been with the Geth team for two years now. What was I doing before I was on the Geth team? Oh, I was working on state channels and, actually, mining software.

My name is Guillaume. I've been working on the Geth team since 2017, initially on a project called Whisper. It didn't work out, so I moved on to more core development, and I'm currently working on the topic of stateless Ethereum and Verkle trees. Before that I was an engineer at various non-crypto-related companies, and I'm very glad not to work there anymore.

I'm Peter. I've been on the Geth team for about eight years now. I started working on Whisper and figured out within about two months that it wouldn't work out. Essentially, Ethereum is my first real job.

Hello, I am Matt. I've been on the Geth team for four months, and I've been working in the Ethereum space for about four years. It's also my first real job. I previously worked on the Quilt team, doing various R&D efforts on future ideas for Ethereum, and I'm now working on the JSON-RPC for the Geth team.

This panel was supposed to be an ask-me-anything, so it would be super nice if you have any questions. Any starters?

We woke up to the news of Arbitrum making an acquisition of Prysm. How does your team look at this acquisition, and do you think Prysm will remain a leading consensus client for Ethereum?

I actually just heard about it two minutes before we went on stage, so I haven't really read anything about it or made up my mind. I think consolidation is always two sides of the same coin, right? We want to be as decentralized and as distributed as possible, and that also means that the teams should be independent. On the other hand, if you have a bigger company working on different things, they can achieve stuff faster. So let's see what the future holds for Arbitrum and Prysm.

Let me just add to that.
I think in the Ethereum vision long term, the idea is that the clients, or at least the most important clients, will be maintained by fairly important companies. Whether those companies grow out of the ecosystem or are external companies, that's up for debate, but I think in general it's good for Ethereum if you have solid, well-funded teams behind the clients. So whether this particular case is good or bad, I mean, that's up for debate, but the idea is okay.

Alright, I have a question for you guys right here. Thank you for being here, you guys are rock stars. This is a question for the ones that have been here many years: what is your most memorable experience working on Geth? I remember Devcon 2 and the Shanghai attacks, so I'd like to know what your experience with that was. What is the fondest experience working with the Geth team? What about the merge, how did you celebrate it? Was that a happy celebration, or was it full of anxiety?

The merge celebration was basically: okay, it's done now, let's move on.

For me, I think the most memorable thing, and I haven't been on this team for that long, was the interop in Greece, where we worked on the merge together with all of the different client teams, consensus layer and execution layer, in a really nice hotel in Greece. We spent the whole day sitting in the basement, hunched over our computers, working, while it was 30 degrees outside, and after the third day I made the decision that after lunch I was going to take an hour or two to go to the beach and enjoy life a bit, between working in the basement.

I've been in the space around five years, and as a Go developer I had the opportunity to read your code and participate a bit in your Discord.
But as the years passed I felt, maybe because you're so occupied, or have less support, that I saw you less and less in Discord, and the documentation became more and more outdated. Then I saw one of Peter's tweets, I think last year, saying: we need more support, and so on. Did you get some support from the Ethereum Foundation for your team, maybe to hire new people to help with the documentation and such? Because you did amazing work; Go is still a very good platform for building stuff, and while I see a lot of people saying, okay, I'm going to use Rust and other things, we have a very good knowledge base in Go that you guys built. So how does the future look for Geth? That's basically my question.

Yeah, so that's actually a very good question, and I'm kind of happy that I have a positive answer for it. In the past, most of the documentation we wrote, or stuff we published, was all written by us, and that's kind of nice because we have to be the origin for those things, but we didn't really have the capacity to do it. For the past couple of months we've actually had somebody on the team who helps write documentation; I kind of feel that we still don't help him enough, but we're trying our best. Plus, there has been an initiative to actually revamp our entire website, which will come in a few weeks or months, I'm not entirely sure. There again, a huge shout-out to the EF, essentially the ethereum.org website team, because they are the ones doing our new website. So we know that there are a lot of things to improve, but we have received enormous help from the EF towards improving them. I'm sure there will still be a lot of things that depend on us, but things are getting better.
Also, regarding support on our Discord and everywhere: I think with the increasing complexity of Ethereum and the increasing demand to ship stuff, the time we can spend supporting individuals in the community is very limited, and we need more people from the community to take over, to help educate others about how to use Geth and that kind of thing.

Yeah, I just wanted to answer the question about the future of the Geth team. So far people were quite positive and upbeat, so I wanted to be the downer here. I think it makes sense that so much of Ethereum has been relying on the Geth team, but there's been an effort, and help also comes from other teams; for example, the consensus layer clients are taking part of the burden away from us. There will probably be other efforts of this kind in the future, and hopefully we can just become a bit more redundant, in a way, and that would be true decentralization.

Also, talking about support: if you guys have experience with Geth and want to help us on the Discord, yeah, we're really looking for people who can do it. Although I know that if you can work with Geth, if you have that experience, then you can probably get better jobs elsewhere, but you know, just for technical support.

This is Sina, by the way, he's also part of the Geth team.

Yeah, so Sina didn't want to be here, but thank you for being here.

Sina gave a good lead-in to my question, which is to ask everyone who's on the Geth team, either on stage or off stage: where do you most need help, individually, in parts of the projects you're working on? Whether it's strategic stuff for the future or something right here, right now, is there anything you can share with us as a community to help us better support what you guys are doing? Can that work?
The first thing that comes to mind is: please be polite on PRs.

Yeah, so I guess with the Geth team, our bottleneck, actually our single bottleneck, is pull request reviews. We have a lot of very nice members on the team who can really turn out code super fast, maybe a few people way too fast, like Gary, and we can't keep up with him, and we also have a lot of external contributions. The problem is that the code is super sensitive, so it kind of always boils down to a couple of people having to review everything, and having to do a lot of context jumps to review it. Unfortunately, this is our Achilles heel, because we have absolutely no idea how to solve it. Definitely, more people on the team and more reviews help, but it's not an easy situation to solve, because we can't just hire somebody to do code reviews, since, I mean, the entire network kind of depends on it. So we're really open to suggestions on how to solve that.

I think two things also come to mind for me. The first is helping create better onboarding mechanisms for people to contribute to Geth. Right now it's kind of: you know, look at the code base, find something to fix; look at the issues, find something to fix.

I take it you don't like your onboarding experience?

No, I think my onboarding experience has been excellent, it has been such a fun time working on Geth. But seriously, a lot of people have talked to me saying: I have no idea how to start working in core development. Some people work through that filter themselves, they find issues and start contributing and end up in that position, but I think we could build some mechanisms for more people to contribute. I had this idea that I've started working on a little bit: having some sort of capture-the-flag with basic flags for people to capture, where you do a core development task, like add a new opcode, or fix some sort of bug, or go through an invalid trace of the EVM, and you're
able to resolve it and run a program and verify that you did it correctly. That's a very, you know, binary system: I did this right, so now I can move on to the next thing. Once you go through that, it's a little bit easier to come into the whole Geth process and figure out, okay, why did they not like my PR, how do I get this PR accepted, and so on. I think that would be a cool thing for people to contribute to; if that's something you want to work on, I'm happy to talk offline about it.

The other thing that I think we could use help with, and I think all of the execution layer teams could use help with, is working on the JSON-RPC, so that there's an interface that's a little bit more standard across all clients. I think in the ideal world a client is simply a black box, and no matter what client you're running, you can always interact with it almost exactly the same way through the JSON-RPC. We're trying to get to that point, but we're still not quite there, and it's just generally one of those things that's not as high-interest for a lot of people, so it gets put on the back burner for the most part.

I guess one more thing I would add, and no shade at the other execution clients, is that very often I do feel that the Geth team is driving a lot of the changes, a lot of the protocol changes and networking changes. It's a bit asymmetric, in that we are the ones bringing, for example, the networking EIPs to the table, and everybody else just says: well, okay, if it's been worked out and Geth implemented it, then we'll just roll with it. And that's perfectly fine; it just means that our task is not only to maintain Geth but also somehow to try to advance the execution layer features, whereas the other clients, I feel, are sometimes just playing catch-up, and this is fine for them, which is less fine for us. I think that not only goes for features
but also for testing. A lot of the testing efforts right now are driven by Geth, and we're currently trying to build out a new team within the Ethereum Foundation specifically for testing, for cross-client testing. In the past it has been: Geth is the majority client, and we have to take care of the network, so we have to make sure that everything is really well tested, and we create tests that can also be used by the other clients. In the future we would like to get to a state where researchers and the testing team create tests for everyone, so they don't have to rely on us doing the testing work.

Something kind of in this area as well: Geth is its own monolithic piece of software, but I feel there's still a huge gap between the Solidity development world and Geth, and I would really love to see a lot more tooling developed in between, to bridge this gap. I don't really have any suggestions for such tooling, but I feel there's a gap here that would be great to fill.

I have to think about a question, because what I really want to do is just use this as an opportunity, which I may never have again, to say how sincerely appreciative I am of everything you guys have been doing over the last couple of years, from one developer to another. I know that sometimes that doesn't come through as much as it needs to, and what you guys have done has really helped me personally and gotten me involved in the ecosystem. So, if I had to come up with a question right now: does Clef have any future?
So, whether it has a future or not: Clef was one of those projects which we really thought would be very, very useful. However, I kind of feel that we as developers took it to a point where it's secure and highly unusable, because it's very, very console-based and everything. The only way to make Clef even the least bit more usable or friendly is to actually turn it into a product, and unfortunately, I will admit, that is way outside our capabilities, because you would probably need a UI team, a completely different team, to do that work. Whilst I do think it would be super awesome, again the question is: we could hire somebody, or multiple people, to work on it, maybe get the EF to work on it, but there is a lot of other wallet software out there, and the question is whether it's worthwhile to try to compete with them. I'm not sure. So I would say Clef isn't going anywhere, but I don't really see it reaching product quality either, so it will probably be in this limbo space for now.

But something that we are currently thinking about retiring is the personal namespace and the wallet within Geth. So if you are depending on that, we are sorry; no, actually, there is a discussion to be had, but we would really like to get rid of this.

Yes, that's definitely the direction. So, Geth is kind of like this huge monolithic monster, and that was kind of born out of the necessity of the Ethereum launch: there was no other software, so the clients had to do everything. But obviously, having your accounts managed by a node is a bit wonky; that was partially the reason why we built Clef. We will definitely try to take Clef up to the point where we can remove account management from Geth, and hopefully it should be as easy to manage your accounts via Clef as it is via Geth. But I think that's the threshold, and if somebody picks it up, awesome; if not, then it will be used as kind of a developer tool in the future.

So, a long time
ago, there used to be a very nice proof of concept running Geth on Android, and I feel that now, after the merge, it may be more relevant, and it could actually be useful for people to run their own clients on Android. So is there work being done on that?

So, originally, when we shipped Geth for Android, we shipped it as a full node, which obviously doesn't work anymore. Later, Zsolt shipped the light client; I wouldn't call it really production-ready, but it kind of worked. However, it turned out that the pre-merge light client is still too heavy for a phone. Now that we are post-merge, we can somehow pick this thread up again, and I think it definitely would be interesting. However, with mobile phones, on iOS you have very strict limits on how much your background process can run; Android is a bit more relaxed, but you're still eating battery very, very fast. So I could imagine some LES client, some on-demand light client, where you spin it up, very temporarily pull some data from the network, and that's it. I think the post-merge world is compatible with that, so that would be interesting. However, at least using the Geth code base, I think it will always be a bit heavier than ideal. The Geth light client, even if we ship it as production-ready, will probably cater more towards running on a laptop, and if you want to run it on a mobile phone in a production environment, my guess would be that it would take a different team, and maybe a slightly different approach, to get there.

No, it's an additional thing that we have to maintain, and, whatever, for Android users cross-compilation is always broken, there's always something, so it never really works, and the code that gets produced is sometimes a bit clunky. So I think another team just building a light client from scratch for mobile phones, especially after
Verkle, we will definitely see some stuff there, and I hope that at that point we can get rid of this.

There's also a bit of a roadmap issue here. It seems that the direction for Ethereum now is to have all of the day-to-day operation happening in rollups, and to have the base layer be something more, I'm not going to say inaccessible, but maybe not the thing you touch on a day-to-day basis. If that trend continues, I don't think it makes sense to worry about mobile, because no one is going to try to access the layer one from a mobile.

One more thing I would add: it's an interesting twist of the post-merge world that before the merge, even a light client had to have a lot of connections. You either connected to a trusted server, which kind of defeats the purpose, or you had to connect to multiple light servers, and you got the ground truth, kind of followed the correct chain, based on proof of work. Post-merge you don't really need that, because you have the signatures, so in theory you could connect to an untrusted Web2 provider, download the necessary data, and still verify it. This opens up a bit of a different design direction for making Ethereum work trustlessly on mobile phones.

Can you repeat the question?
Yes, this level is good. So, what's your take on the OpenEthereum / Parity saga, and what can the community, what can we do, to make sure that alternative good clients don't die again?

I think that's inevitable. In my opinion, good clients will die. I mean, look at the Geth team: it currently has about 10 people, and I would say that out of these 10 people you have probably 4 to 5 who are more familiar with the very internal details. Should these 4 or 5 people leave, it is very, very hard to find replacements for them, and this is essentially what happened with Parity; okay, the leaving was a bit different, but the idea is that if you actually manage to simultaneously lose enough of your main contributors, then it's very hard to onboard enough people fast enough for the project to survive. As bad as it is to lose a client, I don't really see what you can do. The same happens in open source software generally: eventually you have a couple of maintainers, and, I mean, life happens, they go to the other side of the world, or they get bored, or whatever reason. So I think it is a very real chance, and it will definitely happen, that good clients will die. One way to protect against it is if there's very, very good funding behind a client, but even then they can always decide that it's not worth it to keep going. So I guess it boils down to the fact that this is why we need client diversity, because that kind of affords us good clients occasionally disappearing.

I think within the Geth team we're trying to prevent this by onboarding new people and making sure that the new people who come in get familiar with a lot of the different parts of the code. But it's very hard for us, being relatively new, to meaningfully contribute, because there are so many invariants within the code that are not explicitly stated, and so
sometimes it happens that we break an invariant, and it's usually Peter, who has all of the invariants in his head, who says: okay, we're breaking an invariant here, don't do this, this is going to fail at some point. It just takes a long time to actually learn all of these invariants that are implicit in the code.

Plugging back into one idea I was talking about before: the fact that, for example, we kind of spun off consensus. If another one of those steps happens in the future, maintaining a client becomes easier and easier, so that would be a good way, I forgot the word, to make clients not be at risk of dying. But until we simplify the protocol, it's going to happen.

One thing that I also really like on the roadmap is the Purge, where we try to eliminate some of the old, outdated stuff. For example, history expiry would allow us to delete all of the rules that we have for executing old transactions, assuming there's a way to execute them in a different client, with all of these caveats, but it would make maintaining a client a lot easier. Another issue that we always run into is that someone wants a feature, and we don't really think about it too much and implement the feature and merge it in, and two years later, or some time later, we realize that this feature is not really used by anyone, yet we cannot change something because we would break this one feature. At that point we have a decision to make: either we just don't do anything, or we delete the feature and someone is going to be upset. We don't want to make people upset, but it's inevitable with the way Geth is right now.

I guess just one final thought there. Marius said that if we implement the Purge and get rid of a lot of features, that would really help other clients get up to speed. Currently it's very hard to write a client, and starting a new client is essentially impossible, because Ethereum is moving so fast that you never catch up. But I guess once we
reach the point where Ethereum starts to ossify, that would be a nice place for new client developers to join in, because then you can actually say: I'm going to build a client specifically for iOS that has these properties, and work on it for three years without the protocol constantly changing the invariants. My expectation would be that when we reach that point of stability, the original clients will be quite marginalized by new clients that are very focused on some specific subtask, some specific use case.

Thank you for carrying so much of the Ethereum ecosystem on your shoulders. What is the current sustainable business model for Geth, and probably for other clients that want to copy it? And the follow-up question: what is the plan to do something similar to what's being done for the core protocol, where we want a certain percentage of the total value to be staked as security? Is there some research going on into how we get a certain percentage of the development effort in the Ethereum ecosystem to go towards client security?

To answer your first question: it absolutely makes no sense to write a client. I think there's no business model behind it. Currently most of the clients are funded, though not fully funded, by the EF; we have absolutely no income. With other clients, usually how they try to fund themselves is that they get grants from other projects, so they support different blockchains, different layer 2s, different scaling solutions. That's one way to try to build a business model around it, but at the end of the day, since Ethereum is kind of like a public platform, as a client you cannot really make money out of it. So there's no business model behind creating the client itself. The only thing I would mention is the protocol, what's it called, the Protocol Guild, yeah, exactly, which allows projects, for example DeFi projects, to allocate a percentage of their token distribution
to a pool, and out of this pool client teams right now get bonuses. The general idea behind it was not to fully fund client teams, but to provide an upside that they currently don't have, and that they would have if they were to switch to DeFi, because we've seen a lot of good client developers just say: okay, I can make ten times the money if I create the next token. So the Protocol Guild thing is a way to give client developers some of that upside from the DeFi projects.

I appreciate it. I was wondering if you could start experimenting with things like, maybe, quadratic funding towards PRs. You said there's a big backlog of PRs and only a couple of people capable of reviewing; you could imagine that with some sort of quadratic funding you could hire, or at least put up bounties for, other layers of developers to review and write tests, new integration tests, before it gets to a core developer.

So I guess this depends on the granularity. Generally, people do do that at the EIP level; famously, Uniswap is pushing certain EIPs very, very hard, and I'm assuming they are actually funding the people who are doing the necessary research, doing the presentations, et cetera, to get an EIP through. But I don't think that really works at the client level, because we've tried: for example, at some point we gave out a couple of bounties for some work, and the place where it kind of backfires is twofold. One is that you usually get contributions of not the greatest quality, because a lot of people see that, oh, there's, I don't know, a one-ether bounty on this, so let's just jump on it. So you will have 10 different implementations, all just kind of trying to hack it together as fast as they can to get the bounty, and the code is not the best; it requires a lot of effort from our side to somehow try to fix it up, or try to guide that person. The other side is that after the bounty has been paid out, since this was bounty work, they just disappear, and
maintenance is our problem. So we kind of have more problems than gains with funding PRs at that level. What we have done previously is fund research teams; for example, on the discovery protocol, Discovery v5, Felix was, I think, managing a small research team at, I'm not entirely sure which university, and the EF was funding them for a year or so to just investigate, find possible solutions to different challenges, and write some papers. That one worked, so that definitely works, but at the PR level I feel it's the wrong granularity.

I just wanted to add, to the original question of what the business model for clients is: there's no reason to build a client, there's not a lot of money in doing it, and it's always going to be considered a public good. But I think one way of making it sustainable is, like Marius said, this Protocol Guild project. A way that people can help make this sustainable for the long term is, whenever you're developing new projects, to consider adding a small allocation to the Protocol Guild in your initial token launch. If this had been something that was around in 2018 or 2019, when a lot of these blue-chip DeFi protocols were starting, we would have over a hundred million dollars dedicated just to core development. So I think, going forward, it's a good thing to consider, because you want Ethereum to be around for a long period of time, and the best way to do that is to make sure that the people who are making it happen continue having the funding they need to do it.

And adding to this: not only is there no business model for a client, but I don't think there should be, because otherwise you would adapt your strategy to increasing your revenue stream, and you'd basically lose your independence.

My question is related to what Marius just said. Well, I'm still a student, and I'm still trying to figure out what I want to work on after I graduate.
I see the stuff that you guys do and I think it's amazing, but I still see myself more on the application side of things. So my question is really: why did you decide to go into core development, and stay in core development, instead of going into something higher up the stack?

Because it's way more fun and way more interesting. For me, the big thing is also that I want to work on something that makes sense to me, where I have the feeling that I'm doing something for the greater good of humanity, and I don't think a lot of the DeFi stuff is that. That's why I was so interested in working on the merge, helping push the merge, because I felt like this was the biggest thing that I'll ever be part of, regarding the CO2 consumption and everything. Just having the feeling that my small piece of work can have a really big impact, that was something pretty magical from the beginning, and that's why I am in core development. I don't know about you guys; it's not the money.

My answer is kind of boring, because there was nothing else when I started working on Ethereum. So I think Matt's answer would be much more interesting, since he did have the choice of picking one or the other.

Picking one or the other being working on the protocol versus working on dapp-layer stuff? I originally started working on the dapp layer when I came out of university. I was very interested in the types of applications that you could build on Ethereum, and I was very interested in dispute resolution, because I had done a lot of e-commerce, buying and selling things on the internet, growing up, and I was frustrated with the way those systems were built. So I was very excited about this as an application on the protocol, and I joined ConsenSys and was working on something pretty similar to this, and just immediately realized that, A, the user experience for Ethereum at that time, and still today, was too bad to really onboard hundreds of thousands of users to have
this dispute resolution system, and that even if 100,000 users decided to show up tomorrow, we didn't have the scalability to support that many users on the protocol. So that was where I started getting really interested in protocol development. I was like: this is broken, we need to fix it so that we can build an application on it. I've just slowly become more and more indoctrinated in the idea, and now I'm at the point where I feel I would rather be a small piece of a very large puzzle, one that I feel is going to become extremely important and impactful for humanity, rather than trying to build an application that may or may not have any kind of impact for anyone.

So I guess your reasoning for joining platform development was: you guys suck, I'm going to fix it myself.

Naively, maybe so.

Yeah, I don't do it for humanity, sorry: you guys suck. No, yes, of course. I think it's also where the biggest problems are; the impact is indeed going to have reverberating consequences in the future, so that's where it's most interesting. But it's also simply because you want to have good tools, to build better societies, to build better companies, to build better software, and if you want to work on your tools, you always end up working on the core. I've worked on the Linux kernel before, and it was the same thing: you get dragged into the protocol as long as you want to improve your tools.

Hi there. I have a question about databases, the underlying database, and Geth being viewed as a database. You guys have talked about Geth being monolithic and wanting to remove or modularize components, and I was wondering, since database throughput has traditionally been kind of a constraining aspect, whether there are any changes to that coming up. This question comes from seeing a remote DB option in the command line options and not being able to find any documentation with respect to it.

So, that's a very nice technical question.
So first, let me answer a question we get a lot. A lot of people ask us on social media why we picked LevelDB, and that was also my question when I joined the team. Jeff's answer was: because Bitcoin was using it. So that's essentially how Geth started out using LevelDB. Actually, we've tried switching databases many times. What's generally not so visible from the outside is that databases are built up from two components. They have a storage layer, which is this very dumb layer that just has some very primitive ways to store and retrieve data, and then on top of that they build various transaction mechanisms, journaling, all kinds of stuff. Most people don't realize that LevelDB is essentially just a storage engine; it's not a full-fledged database, though there have been databases built on top of LevelDB that have all the bells and whistles. Now the issue is that if we were to use something higher level, then we're not only paying the costs of the storage layer, we also have to pay the costs associated with running the transactions, indexing the tables, et cetera. Since Geth was architected from day one to use just a storage layer as its database, if we were to plug in any full-fledged database instead, everything would become insanely slow, because Geth was not architected to use those high-level primitives. Geth always assumes that it has more or less direct access to the data, and because of that, we have a lot of PRs trying different databases, and they always crashed and burned. One thing that we are currently working on, actually Jared is working on it, is to switch out LevelDB for Pebble. Pebble is kind of like a next-generation version of LevelDB, but it's still just a storage engine.
As for remote databases, the problem is that one of the bottlenecks of running an Ethereum node is disk access, IO operations per second. The moment you move the data away from the node, that bottleneck gets way, way worse, so instead of making things better you are making them a lot worse. A modern SSD can do maybe half a million IOPS, and that's not a high-end SSD, that's an affordable one. You cannot do anywhere near that many round trips on the network per second, so it gets slower if we go down that path. Of course, you could always say that we could create an Ethereum node architecture which has a remote database, and then you could have multiple clients using that same database. That's a very interesting architectural decision, but it essentially means writing a completely new client from scratch; you cannot retroactively retrofit Geth to use it, it would be its own new client.

Hey guys, seconding all the previous comments: thank you for the incredible work you're doing in the ecosystem. You guys are the unsung heroes of Ethereum. Quick question: if you were to hypothetically re-architect Geth from scratch today, knowing everything we know, factoring in the merge, what would you do differently?

I think we would, at least I would, be more careful about the features that we accept into the code. There's some stuff that barely anyone uses, but it's still used. Something that kind of bugs me is that we write a lot of small tools, and instead of keeping them separate from the client, they somehow end up in Geth. I would have a stronger policy of not including them; for example, the abigen stuff should, in my opinion, be a separate thing. But I already know what Peter is going to say: then you need someone to actually maintain it, and if it's not in our code base, it's not going to be maintained. No, I actually agree with you.
No, so I just wanted to add to what Marius said. It was brought up previously that when Geth was started, it had to be this monolithic thing that does everything, and that leaves its mark on the code base. It's very, very hard to get rid of stuff; I mean, we could get rid of stuff, but some people somewhere always depend on it. So I think if we were to start Geth over from way back, eight years ago, it would be this exact same situation, because we would need to make the same tools. So the fact that we have a lot of legacy junk in the code base is not a bug, it's just the way the ecosystem evolved, and I don't think it would have been preventable. If we were to start over now, then all of a sudden you can rely on all the awesome tools that people have created, and that allows the client to get a lot slimmer.

I think a lot of stuff has changed over the last year and a half, but what changed especially significantly is that a lot of other networks appeared that have locked billions of dollars and that are essentially Geth forks with some changes. Has it been useful for you to take a look at them? Are you monitoring what is happening over there? Are there cases where you were inspired by changes in these clients and wanted to bring them into the original Geth, and cases where bugs were opened in Geth forks that are pretty close to the original Geth as well?

One thing I wanted to say is about bugs. Whenever we find a bug in Geth, there are usually a lot of different clients that are also vulnerable to it, so we look into them and make sure that they are not, before we actually fix the bug and publish it. The problem there is: which layer ones are we going to look at, and are we only going to look at Ethereum-aligned layer ones or not? I think the way we've handled it up until now is that we kind of look at the
biggest ones and just send them an email that, hey, you should expect an announcement at a certain point, and then we make a public announcement, because we don't want to be kingmakers; we don't want to say which fork gets the bugs first over another fork. There have also been bugs that were exploited on other networks before they were exploited on Ethereum. We are not really monitoring the other networks, but we try to be on good terms with the other client devs.

I guess one kind of satellite question here would be that way back there was a lot of drama when it turned out that we had fixed a bug and hadn't necessarily announced it, and other networks had a problem with that. People very often ask us why we don't have a standardized way of reporting bugs, of reporting vulnerabilities. What big web 2 companies usually do is that they have their own little private consortia where they share bugs, and then when they publish a bug, everybody in the consortium has already updated. We were always very vocal that we feel that's a bit problematic, because in the blockchain world, if we were to create one of these consortia, it's not really clear whether everybody in that group would be friendly, or how friendly they would be. So usually what we try to do, every time we find a hairy vulnerability or bug, is that, depending on its nature, we try to notify external people, external teams, or give them enough details to minimize any potential damage. For example, in the case of Ethereum, if we knew that we were going to fix something that only affects miners, or where, if the miners are good, the worst that happens to the average user is that their node crashes, then usually what we did was just gently ping a couple of the bigger mining pools: hey, we will publish a release, you really want to be on this release. Nothing more, just a gentle reminder that it contains
something that you want to run. By having the majority of the hash power updated, the network is kind of safe and there's not much damage that can be done. But this is a completely arbitrary, bug-by-bug decision on how best to proceed and how best to minimize any damage to anybody, either on the Ethereum network or on the external blockchains that use Ethereum. So we really try to be as friendly as possible within the limits of keeping the Ethereum network live.

Hey, it's me again. I just want to add that apart from the people sitting here and some of us off stage, we also have people from the team watching the stream, who are far away and unfortunately couldn't make it, and who are trying to contribute to the conversation. Somebody asked about the remote DB flag, and I have an answer to that. It basically exposes Geth's low-level database over RPC, so you can have a node on a server and have your local node connect to the remote one for read-only operations. For example, if you run the db metadata command locally, it would give you the metadata of the remote node. That's it.

Do you see snapshots as the Geth solution for the state database layout for the foreseeable future? Yes.

One more question: we are in this beautiful Colombia, South America, Bogotá. I just want to know what each of you has most loved about Bogotá so far. Hiking in the rain at the waterfalls. I actually really like working from here; it was really fun, we had some workshops with all of the clients, and today we had a really nice working session in the morning. Yes, if you haven't noticed, Marius is the new workaholic on the team. It was really nice just getting the people in the room that need to be in the room and working on some of the interesting things that are coming up, and I'm really, really, really excited about the new stuff that is coming for Ethereum. I also enjoyed the hike quite a bit; that's the only time I was at the
hotel besides coming here. So I heard differently, Matt, I heard differently. I also really like the energy at Devcon. It's been a while since we've all been together, and I think it's easy to forget that the community is so large now and that there are so many people excited about this protocol. It's very energizing for me. Jared tells me: the food. No, yeah, the food has been nice, and I like the hotel as well. I don't know, I saw the center the other day when we went out; it looks quite interesting. I guess we will explore more this weekend after the work is done.

One last question: who from the team is not on the stage today? Besides Sina, who from the team is not on the stage? We have Jared beside you, so shout out. A huge shout out to Geord in the back, to Felix, who's in Berlin, to Martin in Stockholm, and a giant, actually a giant shout out to Gary, who hasn't been able to make it to any of the Geth meetings for the past three-plus years due to being stuck in China, since the Chinese rules on leaving the country and coming back during COVID are super strict. An insane shout out to him for still tolerating us and working with us without all the upsides of having the fun we are having now. A big round of applause. Guys, thank you so much.