 What's that? Okay, sorry. We are recording locally. Sorry to the internet, you will see this later, if anybody watches these things. Cool. Welcome. I am pulling up the agenda; the agenda is in the chat. So this is the last call before the phase 0 spec freeze. We are working hard on the last few PRs, on some known stuff that we want to get in. In general, the intention here is not to just keep meddling with things to make them cleaner. We are trying to clean everything up beforehand and really get it in a stable place for implementers, auditors, fuzzers, et cetera, to dig in. Obviously, for three out of the four of those groups, the intention is to find issues. So if there are issues, and they are relatively minor bugs, we are going to be releasing minor releases. We are also going to continue our testing efforts, so we might release something like a 0.8.0 t1 or t2 if we are incrementing just on testing and not on anything substantive; that would just be additional test vectors for you all. If the new test vectors find some minor bug, we will fix the minor bug and release it as a minor version. On the audits and formal verification side, there might be some structural things that come up. There might be some deeper change where maybe some sort of additional abstraction is warranted for X, Y, or Z. We will deal with those on a case-by-case basis as they come up, potentially releasing as a major version if it's small, isolated, and worthwhile getting out, or potentially piling up a few of these and, after a long run of feedback, maybe on more of a three-to-four-month time horizon, doing a semi-major version bump. But again, I don't know what those are because we haven't found them yet, so we will address them as they come. Cool, spec freeze, it's happening. Thank you, everyone. I think the number of people who contributed to this is pretty awesome and unbelievable. Okay, on to the first item of the agenda: testing. 
You want to give us an update? Just a short update. So there's this PR open that basically aims to complete the spec test coverage. There are these two open issues remaining, one for how we formalize the finalization and one for how we deal with the bitfield. These are really just details of representing data, and I hope to complete the tests for these two edge cases rather soon. All the other tests are complete, so we get much higher coverage of the spec. Great, thanks. So there'll be much more test vector coverage with the coming release. I believe that the fuzzing of the PySpec and the Go spec is still ongoing? It's still ongoing. We've been trying to improve how we move on from our initial set of states to a more diverse set of states. The difficult thing here is it's not like a virtual machine where there are many, many different input states. The input states are relatively sparse because there are all these invariants that have to be met by the state. So what we do is we have a small set of initial input states, but then we fuzz block changes, and when there's a valid post state, we continue from there. So we expand and expand the set of output states. The difficult thing here is to limit it in an intelligent way, to not overflow this pre-state collection. We want meaningful states that differ from the first set of states, not just small changes. If you're interested in fuzzing, please do join the Telegram chat or the Gitter channel and we can talk more about the efforts. Great, thanks. Any other updates on testing? Can you give a link to the Telegram chat? Maybe in the agenda? Yeah, we'll pop it in this chat. Okay, any other testing updates? No, that's it. Let's move on to client updates. We will start with Trinity today. Hello. So, yes, we just joined the Python team's Snake Charmers retreat and had a great week getting to discuss and play around with the new Python libraries. We plan to migrate from asyncio to Trio, which is another Python async library. 
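The corpus-expansion strategy described in the fuzzing update (grow a sparse set of valid pre-states by fuzzing blocks and keeping only valid post-states) could be sketched like this. This is a hypothetical illustration, not the actual fuzzer: `apply_block` and `make_block` stand in for the real spec state transition and block generator.

```python
import random

def expand_corpus(seed_states, apply_block, make_block,
                  max_size=100, rounds=50, seed=0):
    """Grow a fuzzing corpus from a sparse set of valid pre-states.

    apply_block(state, block) returns a post-state, or None if the block
    is invalid; make_block(rng, state) generates a candidate block.
    Only valid post-states are added, and the corpus is capped so the
    pre-state collection doesn't overflow.
    """
    rng = random.Random(seed)
    corpus = list(seed_states)
    for _ in range(rounds):
        state = rng.choice(corpus)      # pick an existing valid pre-state
        block = make_block(rng, state)  # fuzz a candidate block for it
        post = apply_block(state, block)
        if post is not None and post not in corpus:
            corpus.append(post)         # valid post-states become new pre-states
        while len(corpus) > max_size:
            corpus.pop(rng.randrange(len(corpus)))  # crude diversity cap
    return corpus

# Toy "state transition": states are ints, a block adds a positive delta;
# non-positive deltas are treated as invalid blocks.
grown = expand_corpus([0],
                      apply_block=lambda s, b: s + b if b > 0 else None,
                      make_block=lambda rng, s: rng.randint(-2, 5))
```

The real difficulty, as noted on the call, is the cap: deciding which states are "meaningfully different" enough to keep, rather than popping at random as this sketch does.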
And the most important progress is that Alex has a huge PR for the version 0.7.1 upgrade. And also, since the spec freeze is coming, we plan to bump to version 0.8 altogether. Yep, I think that's all. Alex, do you have anything to comment? That's it. Thank you. Cool, thank you. How about Harmony? Hi. We have updated our client to the latest spec, 0.7.1, including the SSZ union. We are passing all the tests from GitHub, and our benchmarks show no significant performance changes since the 0.6 version of the spec. Also, we have added the server part of the validator RPC; we will add the client part sometime in the future. And we are working on a minimal libp2p implementation; we are porting it to Java, and we have finished the secure parts and are moving forward. Next, we are going to add persistence to our client. And that's all, I think. Great, thanks. How about Lodestar? Yeah. We've been building out a few last kind of stubbed pieces of the client. Things like getting valid eth1 data for creating a new block, getting our deposit processing actually working with a real Merkle tree, and syncing, getting a real sync between a network and a chain going. And we're still in the process of moving to 0.7.1 of the spec. We're also working on getting a benchmarking chassis set up. Yeah. So I think last time, y'all were discussing experimenting with some TypeScript conversions. Is that still on the horizon? Yeah, AssemblyScript. So we have kind of a rough implementation of LMD GHOST. We haven't integrated it into the code yet; it's still in a separate PR. And we are also thinking about rewriting SSZ with AssemblyScript. I think the blocker there is a SHA-256 implementation. So those are still works in progress. I think they're going to take a little bit of time, but we're still working on them; they're still, I guess, a priority. Thank you. How about Prysmatic? Hey guys. 
Yeah, we're passing all the 0.7.1 spec tests except for one final one that we're working on today. We finished our Go SSZ; it's passing all the spec tests as well. And the next thing up is we're going to be fixing up every part of our runtime so that it matches all the core changes, and ensure that we can do benchmarking to improve the client itself. We, unfortunately, spent a bunch of time working on transforming the YAMLs, because they use hex strings to represent binary data instead of base64. So just a lot of hiccups basically based on that, but things are good now. Aside from that, we put together a central repo for Ethereum 2.0 API schemas that we'll be sharing. I'm not going to chat about this on this call, since we don't want to take time away. Oh, Terence already sent it over the chat. Yeah, maybe Preston can give a really brief explanation of this repo for everyone. Yeah, so the goal here is we just want to get feedback and sort of collect together these API schemas so that people wanting to build on Ethereum 2.0 have one place to go to. This could be going upstream into the spec repo or live here; we don't really have a preference now, but we wanted to start getting feedback on this idea. So, like Raul said, don't take too much time on this call because it's not on the agenda, but let us know afterwards. Thanks. That's it. Cool, thank you. How about Artemis? Pass. I'm just kidding. Okay, so I'm five beers in the hole now due to my teammates beating me in the bet and upgrading from v0.5.1 to v0.7.1 of the spec. I think that's the first time we've actually been up to date with the spec. So, you know, interpret that how you will. But they really did a good job, so that was awesome. We are also working on some stuff with deposits, tweaking that whole process, incorporating some feedback received on the Hobbits spec. 
There were some good comments, you know, like in some cases it needed to be a little bit simpler for its purpose, and some modifications to match other implementations so that it's less work to use it, and then also matching, you know, the actual real wire protocol. And really a lot of credit goes to Dean and Renee for volunteering their time to rewrite this spec; it was a little bit spread out over some documents, and they both kind of took all of the information, incorporated it, and made a new, much better version of it. So that was super cool. And I think that's pretty much it. Cool, congrats on catching up. How about Parity? So last week we also updated to the 0.7.1 tests, and we are really happy to see all the bugs are fixed and we were able to remove all our workarounds. So that's really great. And for this week we did some fixes for our RocksDB integration, so it's more stable now, and we did a major overhaul of our binary Merkle tree library, which still hasn't been integrated, but it's there as a library. And the networking stack is still in the works. So that's it for us. Cool. Yeah, thanks for finding some of those bugs. Lighthouse? Hi, everyone. We're at the moment passing all of the 0.6.3 tests, and we've decided not to keep up to date with 0.7.1; we're going to wait for the spec freeze and then jump straight to that one. And since the spec freeze is happening, we've also decided to start doing releases, so we're targeting a version 0.0.1 release of Lighthouse next month, which will of course still be just for developers and researchers. So instead of doing all of the 0.7 spec updates, we've been working on things like the reduced tree fork choice that was discussed at IC3, and we're already seeing some good speed improvements with that, around, you know, sort of five times faster than our previous implementation without any significant overheads, which is great. 
But we haven't got any direct benchmarking to sort of show that yet; expect that soon. On the networking front, we've been making some great progress with the libp2p implementation and especially Discovery version 5. We're proud to say we have Discovery v5 running in Lighthouse at the moment doing discovery, although it's still just an initial implementation for our purposes and it's not the full spec yet. And also we've been having a chat with the Apache Milagro maintainers, and we're going to start pushing some fixes and some stuff up to them as well, because that's our core BLS library. Yeah, and that's Lighthouse. Great. On the Discovery v5 front, are you doing any of the advertising and kind of topic discovery yet, or just the base underneath that? I'll let Adrian answer that one. Oh yeah, just base discovery at the moment. I haven't done the advertising stuff yet. Cool, cool, thanks. How about Nimbus? Hi. So, regarding specs, we have most of 0.7.1 implemented, and we also updated our test suite with the official test vectors for BLS, shuffling, and the integer part of SSZ. Still regarding specs, the focus for the upcoming weeks will be on performance, implementing SOS-style SSZ to enable the rest of the official tests, and we will start refactoring the state transition, because with 0.7 we now have names for all the state transition functions, like process_slot, process_block and things like this, and with that we will also refactor the mocking part of the test suite, like mocking blocks and state. Now, beyond the core specs, we continue working on our async library, because we forked the Nim async library and we are adding more and more functions to support peer-to-peer networking. We also launched our libp2p-daemon-based testnet last week, so now we have testnet 0 based on RLPx and testnet 1 based on libp2p. We will do a blog post, probably not this week but maybe two weeks from now, to explain how to install everything. 
We are still ironing out some details. We have the Ethereum 1 deposit contract watcher ready; we did encounter some issues with log filtering and some RPC methods that are not intuitive on Ethereum 1. Also, our team at Status now has a lot of interest in Ethereum 2 now that it's been stabilized, and Jacques Wagener, one of the main devs of Vyper, started to use Nim as an ewasm contract generator, so I'm posting the thread in the chat; Nim might become, like, an official ewasm language with official facilities. And also we are starting to talk with the Embark team at Status, so that each team knows what the challenges are of developing dApps, and of Ethereum 2 for the Embark team. And that's it for us. And last but not least, Yeeth. So no real updates from us. We kind of stopped working on it while the spec was still rolling. We did some refactoring stuff since the last time I was here. And then once the spec is frozen, I'm going to try to catch up to 0.7-point-whatever faster than Artemis did, just to get some clout there. But yeah, that's pretty much it. Challenge accepted. You want to back that up a little bit, Dean? I'm a one-man show and I'm going to beat you. Dude, it's not hard to beat our team, man. Okay, okay. Thank you. Welcome back, Dean. Cool. Did I miss anybody? Perfect. Well, actually, Daejun Park is here. He is not working on a client, but he has begun a formal verification effort, formalizing the beacon chain in K. Daejun, do you want to give us just a little update on that? Hi, everyone. I'm Daejun. Yeah, so we started this formal modeling of the beacon chain state transition function a month ago. So far we are trying to understand the details and rationale under the hood, and now we are starting to actually model it in the K framework. And yeah, that's where we are right now. Cool. Thanks. Glad to have you. 
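For reference, the state transition function being modeled in K, and named explicitly in 0.7 as process_slot, process_block, and so on (as the Nimbus update mentioned), has roughly this shape. This is a toy sketch with dict states, not the real spec types:

```python
# Toy sketch of the 0.7-style named state transition functions.
# The real spec operates on a full BeaconState; here a state is just
# a dict with a slot counter and a list of processed roots.

def process_slot(state):
    """Per-slot processing: in the real spec this caches state/block roots."""
    state["latest_roots"].append(("slot", state["slot"]))

def process_block(state, block):
    """Per-block processing: header, randao, operations, etc. in the spec."""
    state["latest_roots"].append(("block", block["root"]))

def state_transition(state, block):
    """Advance through empty slots, then apply the block at its slot."""
    while state["slot"] < block["slot"]:
        process_slot(state)
        state["slot"] += 1
    process_block(state, block)
    return state

genesis = {"slot": 0, "latest_roots": []}
post = state_transition(genesis, {"slot": 3, "root": "0xabc"})
```

The naming matters for exactly the reasons raised on the call: testing and formal modeling can now target each named function independently, and mocking a block or a slot advance becomes a well-defined operation.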
I would love to have, maybe in a month or two, a 10 or 20 minute session about how easy you found the Ethereum 2 spec compared to maybe Ethereum 1. Yeah, are you talking to me? Yeah, yeah. Oh, yeah, sure. Because most of us have been working on that for a year now, and it was hard at first. I guess now we are kind of desensitized to how it is, but when we have new people... Daejun, did you all end up using that accompanying document I sent you? Was that helpful? Yeah. Cool. Yeah, so I made an accompanying document explaining a lot of things, and I want to refine it and figure out a good place for it to live, because without it, it's definitely confusing. Okay, let's move on. Yeah, sure. Yeah, you will do that once you're at that point. Cool, cool. Great. So let's move on. We have, I think, a number of research updates. Vitalik, do you want to start? So, let's see. On the phase one side, I wrote up a small, definitely incomplete, checklist of things that we might want to consider changing, or at least that we'll have to decide on for phase one. I think it's the most recent issue in the issues list at the moment. So the big ones there that I can think of: one of them is just the shard block time, whether it's going to be the same as the beacon block time, half a beacon block, a quarter of a beacon block, something else. The second is just the size of a shard block. Another one is exactly how the crosslink data works. I also want to consider removing the attestation list and basically only having one attestation object, or at least pushing the data from the one attestation object up into the header. 
And the reasoning basically being that I'm not really convinced that there is a particular need to have space for more than one attestation; the set of things that we're using these shard attestations for is much less than the equivalent set for beacon attestations. There are a couple of other smaller ones. So I guess if anyone wants to take a look at that list, none of it's urgent, but once phase zero is frozen, we expect that we would want to move full steam ahead on getting the phase one spec finalized. So it's definitely good to start looking at. So that's phase one. On the phase two side, the main thing is, I mean, I've been talking to the phase two research people on and off and trying to figure out basically how fee markets would work, and some of the issues around batching transactions. And this is more on the research side: batching transactions, how to make sure the scheme is censorship resistant, how to make sure we actually get the efficiency gains from batching, and so forth. So I think the kind of concrete possible changes to the basic execution environments, or sorry, the basic phase two spec that seem likely: one of them is changing it so that you can have multiple top-level transactions in one shard block. Another is allowing larger execution environment states. So instead of 32 bytes, you could still have 32 bytes, but you could pay more and potentially go all the way up to something like 32 kilobytes. So basically, the upper limit being: it's something that still needs to be small enough to fit into a beacon block for a fraud proof, but otherwise it can be larger, and it being larger has a lot of really nice benefits. Like you can have some level of proof batching happen between blocks. 
You can have multiple transactions with their Merkle proofs created independently and both get included without either of them breaking, as well as some other things. So that's nice. By the way, Karl, are you on the call? Well, if not, then Karl's been doing some wonderful thinking around taking plasma-like ideas and applying them in the Eth2 context, where data just gets published on chain. And it turns out that you can use that to do some really nice things, like potentially do cross-shard transactions much more easily and improve efficiency a lot. Theoretically, in the normal case, you might not even need to publish Merkle proofs onto the chain. One of the really nice ones was that if you stagger shard block times and add a protocol where validators predeclare when they're going to make a shard block very soon, then you can achieve extremely fast de facto confirmation times for any application, even if the individual shard block times are still longer, like four seconds or even eight or whatever. So that's early research, but it's something I'm also potentially really excited about, because it lets us create basically user experience equivalent to all these more centralized platforms without us actually being more centralized. So, yeah. Cool. Thank you, Vitalik. Justin? Yeah, so I only have just one update. Basically, I was at Zcon and Vitalik was at Zcon, and there was this excitement for a new curve, which was introduced with Zexe, called BLS12-377. So it's kind of similar to BLS12-381; it has the same embedding degree of 12, but a slightly different bit size of 377. And the reason why there's excitement is because you can do efficient snark proofs about snarks. So you have this one level of recursion; it's not like infinite recursion, just at least one level. And you can also do efficient snark proofs about signatures. 
So one of the things we were considering is whether or not we should move to this new curve, which has this interesting property. I guess the bad news is that BLS12-377 is a bit more than just changing parameters and constants, so there is a little bit of work to take the existing implementations and port them over. The other downside is that it has a cost in terms of hash-to-G2; that becomes a bit more expensive. So I think at this point in time, pragmatically speaking, we're looking to stick with BLS12-381, which has more maturity, more infrastructure, more testing. And by sticking with BLS12-381, we also can meet the Devcon suggestion of launching the deposit contract during a public ceremony. So I guess it will be interesting to see how this space evolves in the future. I mean, it's mind-boggling how much improvement we're seeing over time, and I wouldn't be surprised if there are new suggestions that come up this year or next year. So maybe during phase one or phase two, we could potentially evaluate a change to a new curve, but I'd say in the short term, stick with BLS12-381. So does that mean that, because there is currently a BLS standardization effort, the community might be fragmented between standardizing on 381 and 377? Yeah. So one of the things that needs to be done is kind of make sure that with the standardization effort, everyone is on the same page. It seems that, of the other blockchain projects that are looking to launch with such a curve, we're the first one that wants to deploy. So there are maybe 10 different blockchain projects, and we're the first one who would deploy the deposit contract. You know, Ethereum does have a little bit of weight in the space and momentum, so it's possible that the mere fact that we do go ahead with BLS12-381 might be enough of an incentive for others to come in. 
One thing that was, you know, voiced during the standardization meetings is that other people also want, you know, no fragmentation and cohesiveness. So we'll see what the next meeting comes up with, which is a bit less than two weeks away. But yeah, for sure, we don't want fragmentation. So what that does sound like is that there will be some level of fragmentation, especially going further into the future, whether anyone wants it or not. Basically, because we should expect to keep on finding better and better curves that have more and more capabilities. So you start with one that has one level of full recursion with pairings, and you might find one that has two levels of recursion. Eventually, someone might find a cycle, then someone might find a more efficient curve with a cycle. So I guess there's definitely a high probability that we should be preparing for an elliptic curve world that continues being a messy one for the next decade or more. So one of the things we're trying to do with the standardization effort is to have the notion of a cipher suite: a little bit of metadata which specifies, you know, which curve you're using, which hash function, et cetera. And so I guess this is maybe a good test of the robustness of the cipher suite: you know, how well does it work with the existing curves that we know of. The IETF standardization effort is not just for blockchain projects, so they will be interested in standardizing all the various meaningful options. So I guess that's good news for us, because it means that we have some level of preparedness in this possibly messy world of lots of different curves. Cool. Thank you, Justin. Leo raised his hand. What's up, Leo? Yes, just a very quick note. So from Barcelona: I have been contacted by the StarkWare team, and they showed some interest in working with the simulator. 
And the idea would be to study how various network parameters are affected by block size. Yeah, this is in the context of the Ethereum improvement proposal 2028. So I just wanted to share that. Thanks. Cool, thank you. Let's do the Ewasm update and then get an update from Quilt. Hey, the last month has been really busy. Probably many of you have seen that we released a tool called Scout, which is a black-box prototyping environment for phase two execution. It uses wasm internally, and it was based on Vitalik's phase two proposal 2. There is a research post introducing Scout and giving some background; just look for Scout on ethresear.ch. And the code itself can be found at ewasm/scout. Now, this black-boxes most of the phase zero and phase one stuff except what is required. Basically it is a tool which operates on a YAML test file, and it can execute execution environments using that YAML test file. The shard blocks can be defined, as well as the wasm code for the execution environment. And the main goal with this design is to be able to quickly prototype different features in execution environments and be able to benchmark those features. Now, initially, we have implemented a couple of different execution environments with different basic functionality. So we do have a SNARK verification example, which is integrated with ZoKrates; we do have BLS signature verification; and some code for a token contract, some examples. And all of these are really nice to prove that all these features can be implemented and compiled to wasm. But actually right now we are focusing on the more important questions, and there are basically two important questions: the speed of all this wasm code, and the throughput of what the execution environments have to do. 
And basically the key part execution environments have to work on is: they have to get a witness for a state, they have to verify that witness, and then they need to apply the changes on it. So our first goal right now is to prototype this witness verification. One way to do that is using SSZ partials. We don't have that implemented yet, but that is one of the next steps. And the main outcome we hope to get out of this witness benchmarking is to prove that a stateless model is the right direction; that is the first thing we have to prove. Now, as I mentioned, this black-boxes pretty much everything from phase 0 and phase 1, because we don't need it for benchmarking. But we do need to have that supported in some way, to have a proper infrastructure to test execution environments. So the other goal, outside of the benchmarking we're doing, is that it would be nice to get this functionality implemented in a prototype client. And such a client would also need to implement a lot of the phase 1 spec, as well as whatever is needed based on this phase 2 proposal. The Ewasm team is working together with the Quilt team, so I think we're going to talk a bit more about this part. But it would be nice to have an execution-only testnet at some point, to be able to have proper hands-on experience with execution environments. That is my update. Someone from the Quilt team? Yeah, I'll go ahead. Cool, cool. So I guess, number one, I worked on a wiki this past week, and I'm actually posting it here in the chat. So this covers a lot of the glossary terms, a lot of the material, a lot of the current conversations, and basically consolidates all the info so far on phase 2 in one spot. And I'd like to get this on the Ethereum GitHub wiki, but I'm not sure the best place to put it up. So maybe, Danny, if you have a suggestion there. I'll take a look at it. We can throw it into either that new ETH wiki or onto... 
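The witness flow Alex described (get a witness for the state, verify it against the state root, then apply changes) boils down to Merkle branch verification. A minimal toy sketch, assuming SHA-256 and a simple binary tree rather than real SSZ partials:

```python
import hashlib

def hash_pair(left, right):
    return hashlib.sha256(left + right).digest()

def verify_witness(leaf, branch, index, root):
    """Check a Merkle branch (witness) for `leaf` at leaf position `index`.

    This is the core of what a stateless execution environment does:
    verify the witness against the known state root before applying
    any state changes.
    """
    node = leaf
    for sibling in branch:
        if index & 1:
            node = hash_pair(sibling, node)  # we are the right child
        else:
            node = hash_pair(node, sibling)  # we are the left child
        index >>= 1
    return node == root

# Build a tiny 4-leaf tree to exercise it.
leaves = [bytes([i]) * 32 for i in range(4)]
n01 = hash_pair(leaves[0], leaves[1])
n23 = hash_pair(leaves[2], leaves[3])
root = hash_pair(n01, n23)
```

The benchmarking question on the call is essentially how fast many of these hashes run inside wasm, since witness verification is hashing-dominated.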
It might get more eyes on it if we put it in the spec repo, but with a big asterisk that it's just for research and implementation. But I'll take a look at it and we can make a call after. Okay, awesome. Yeah, that works. Other things on our front: we've been collaborating with the Ewasm team on various things. So one thing that we've been doing is trying to support Scout, and so we've been working on implementing SSZ partials in Rust and helping with that effort. Also, we've continued to dive into kind of the theory and some of the ideas behind the relay market. I think Vitalik talked about that; there's a discussion on ethresear.ch, so we're thinking about that a little bit deeper, and there's been good conversation there. Another thing is what Alex just mentioned: we are looking to help basically get a phase one testnet up that can support a certain number of shards, that we can integrate Scout into, and a basic execution engine from that. So we can start having playgrounds with execution environments where a number of assumptions can be tested and benchmarked and explored. Also on our front, we're in kind of a transitionary phase, so we will have more of an official roadmap here soon. So I think on the next call, there will be some other things that we're looking at expanding on and diving into and contributing on. So that's everything from Quilt so far. Great, thank you. Super excited to see pretty much all three phases moving in parallel. Cool, cool. Let's see, maybe the PegaSys research team. Is there any update from y'all? Yes. So we had submitted the Handel paper to USENIX, and we got accepted to phase two with some comments to take into account. So there will be a new version of the paper in August, without any change to the algorithm, just something easier to read. So that's the paperwork side, and here's something else that we're working on right now. 
We're looking at how to use rollups for Ethereum 2 as a way to execute transactions on any shard from any rollup. So it's the beginning for us; we're going to look at the simple case first, which is transfers between rollups. We have one proposal ready, and we're going to discuss that with Barry Whitehat next week to see how we can merge the efforts. And that's it. Great, thank you. Any other research updates before we move on? Networking. We have a couple of folks from Protocol Labs here today. Any updates from your end? Yeah, hi. Pardon me if there's noise in the background; I'm at a conference. But yeah, Raul and I are here. So here's our update. Basically we don't have any update on grants yet. We're very close to making a couple of grants, some in conjunction with other funding sources, some on our own. So we'll probably have an announcement about that next week, but all of them are aimed at building libp2p implementations in the languages that all of the client folks on this call need. So they should be encouraging announcements, and I think they'll bear fruit by September when we need them. The second thing, and then people can ask questions or go wherever they want: last time there was a question about TLS, and I think I didn't answer the question well because I didn't fully understand it, but I believe the question was sort of along the lines of what security is being provided by TLS versus what the application layer needs to provide. And so, assuming that's a correct understanding of the question, we talked about it a little and I'm going to let Raul answer it, but we have an answer for that. And if that wasn't the question you were trying to ask, then after this just fire away with whatever the real question was. Raul? Yeah, I wanted to confirm: was that a question to begin with? 
Like, we want to differentiate why transport-level encryption and authentication and security in general are necessary versus application-level crypto? I think there may have been some misunderstanding on the call, but I think everyone would love just a quick word on this. Yeah, of course. So transport-level security is necessary, first of all, to not be subject to man-in-the-middle attacks that could potentially alter the payload that's being transmitted, and to be able to authenticate the peer that you are interfacing with. If you, for example, have a public key for that peer, then by authenticating them when initiating the connection and handshaking, you're able to certify that you're really speaking to them. And of course, for all kinds of reasons, to avoid observability and censorship and so on, encryption is necessary as well. And then, if the application itself needs to use crypto primitives to, for example, sign specific pieces of data as specific roles or nodes in the network, like validators and so on, then you could very easily imagine a message, and I think this is the case here, where you have a piece of data that's actually inside of a block or a collation or whatever, that is signed by a particular validator, and that message is gossiped through the network. So as it is being gossiped through the network, the transport security would be making the actual transmission of that message secure between peers, but those peers would need to verify that the origin of that initial message is actually the validator that they expect it to be. So that would be an application of a signature, for example. So I mean, they are both needed; one doesn't exclude the necessity of the other, essentially. And there are two reasons why we want to adopt TLS 1.3, and one of them is that it's a prerequisite for QUIC, which is super important. 
On the other hand, adopting TLS 1.3 would help a lot with censorship resistance. Once HTTP/3 is deployed in real life, you could easily imagine a transport that mimics HTTP/3 by using QUIC with TLS 1.3 over port 443, for example. It would then be very difficult for censors to block the traffic without conducting some kind of deep packet inspection; of course, they can always block IP addresses. So that's another reason why we want to adopt TLS 1.3. The libp2p stack is designed for pluggability at the secure channel level, which means another secure channel we're looking into very seriously is Noise. We have some experiments in this department, and we're probably going to fund a team to implement some primitives that are lacking in the JavaScript environment, to be able to run Noise in userland, essentially. And if you want, I can go into why we're interested in Noise and in which handshakes in particular, but basically we're looking at a system where we can conduct the IX or the IK handshake based on what data we have available about the peer.

Does Noise provide some clear benefits over SecIO?

Yeah, it does. One of the clear benefits that I personally am very excited about is that it allows us to send push data on the first message. As the handshake goes through its different steps, any push data or accessory data conveyed in those handshake messages acquires different levels of security based on the state of the handshake. So you can imagine that on an IX handshake, for example, on the first message the initiator sends to the peer, that push data would be plaintext.
But then if the responder wants to push back any data on the second message, the response, then because there's already enough cryptographic material to secure that push data, it would be encrypted. So it makes for a very elegant design. TLS 1.3 also provides this ability, but, for example, I don't think Go's TLS library is capable of sending zero-round-trip data yet. It should be on the roadmap, but Noise does it already. And there is a variant of QUIC that uses Noise for its handshakes, called nQUIC. So I do see some very interesting developments there, and adoption by projects with important reputations, so I'm pretty confident in it.

Okay. Is it relatively mature? Would you say it's more mature than SecIO and more widely adopted?

Yeah, I would say SecIO was necessary in the early days of libp2p, but we definitely want to move away from it. That said, SecIO is pretty trivial to implement, and for baseline interoperability across libp2p implementations you want to implement SecIO, because it's what all libp2p implementations support; for example, that's exactly what we're doing for the JVM implementation. Not all programming languages support TLS 1.3 yet, so that counts against the state of TLS 1.3 at this point, but there's practically a Noise library for every language out there. So it would make for a very good second baseline encryption mechanism.

Cool.

I would just add one more thing. SecIO as of right now has not been security audited (that will probably change by the end of the year), whereas Noise, I believe, has undergone formal verification, and TLS 1.3 is obviously an IETF standard. So there's that to consider as well.

Got it. Thanks.
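The property described here, payload attached to handshake messages gaining security as the handshake progresses, can be shown with a toy model. This is purely illustrative (the class and its rule are invented for this sketch, not real Noise or libp2p code): in an IX-style handshake, payload riding on the initiator's first message is plaintext, while payload on the responder's reply is already encrypted because key material now exists.

```python
# Toy model of the 0-RTT / progressive-security property discussed above.
# Not a real Noise implementation: it only tracks how many handshake
# messages have gone out and labels each payload's security accordingly.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ToyNoiseHandshake:
    messages_sent: int = 0
    transcript: List[Tuple[str, str]] = field(default_factory=list)

    def send(self, payload: str) -> str:
        # Before any message has been exchanged there is no shared key
        # material, so early payload travels in plaintext; from the
        # response onward, payload can be encrypted.
        security = "plaintext" if self.messages_sent == 0 else "encrypted"
        self.messages_sent += 1
        self.transcript.append((payload, security))
        return security


ix = ToyNoiseHandshake()
print(ix.send("initiator early data"))   # plaintext: no shared secret yet
print(ix.send("responder early data"))   # encrypted: key material available
```

The point of the sketch is only the asymmetry between the first message and the rest; a real Noise IX/IK handshake has more states and stronger guarantees per message than this two-level label suggests.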
People, users, will be able to choose which implementation they'd like, right?

Yeah, correct. So, in parallel with all of this, there is an ongoing re-architecture of multistream-select. Right now the selection of the encryption channel is conducted in plaintext, which is not great, but it does allow for that pluggability: peers negotiate which secure channel they want to adopt for the connection. This will probably move into the multiaddr, as a component of the multiaddr. So you can imagine a multiaddr like /ip4/, then the IP address, /tcp/, the port number, and then the secure channel: SecIO, or Noise IK, or TLS 1.3. That would allow peers to directly initiate a secure channel without having to conduct any plaintext negotiation in the open, which makes the system prone to deep packet inspection and censorship by way of that.

Is TLS 1.3 itself vulnerable to anything, like side-channel leaks or breaking the key exchange?

From the point of view of libp2p, we would basically be adopting a TLS library in each language; you want to adopt an existing library and make sure the language has support for 1.3. So libp2p, as a user of TLS 1.3, is vulnerable to anything that TLS 1.3 itself is vulnerable to.

Okay, let's keep moving. Any other questions for Raul or Mike?

Oh yeah, I have a quick question. I think Mike mentioned in the beginning that y'all are providing a lot of funding and support for new implementations, which is badass, because that obviously helps several of the teams. I was curious about testing. What's the status on that, and what do we need to do to ensure that libp2p and the gossip protocol are production-ready? I know our timeline for eth2 might be slightly different than libp2p's.
And so I'm curious what y'all's thoughts are on that, and whether there's going to be a grant or whatnot.

Yeah, so we think of testing in two different aspects. One is interoperability testing between the different languages, and that's an area where we are very interested in making a grant. We have a rudimentary system called IPTB which we think could orchestrate interoperability tests, but we would need some help, somebody with time to turn it into a proper interop test. That could also be used to validate that a particular libp2p implementation meets the minimum requirements necessary to be called libp2p, whatever that means exactly. The other side of testing is what I think you're getting at, Johnny: production-readiness testing, basically integration tests of the whole system to get data on performance, and longevity tests, leave it running and see if it falls over or not. For that we've built a system we call Test Lab; you can look at it at github.com/libp2p/testlab, L-A-B. Basically it's an orchestrator built on top of Nomad, for those of you who are interested in container orchestrators, and what it does is spin up large numbers of libp2p nodes, like 1,000 nodes; given enough hardware it can probably go beyond that. That's our plan for testing real-world, production-like scenarios. Yeah, we would be open to a grant if someone wants to build out that test suite, and we do have an engineer at Protocol Labs who could work on it. So there are a couple of options there, but we haven't made any decisions yet; we're open to proposals.

Okay, so it sounds like you have a test suite to verify interoperability between implementations. Maybe it needs some work, but that's badass, that'll be super helpful.
As far as the performance testing, you'll build something on top of Nomad so that you can spin up a bunch of nodes in containers and do some performance testing. What performance means for us may or may not be what it means for y'all; maybe y'all have higher requirements than we do. But it would be nice to be able to do sweeps on things like message rates, packet size, and bandwidth limitations, like how fast we need gossip messages to actually propagate through the network, just so that we're aware of where things break down. Because there are always options; if something needs to be tweaked, you can fix it. So I feel strongly that, for eth2, we really, really should focus on that. Is that something y'all would have funds to work on as well?

Yeah, I think we're open to funding something in that area. I guess we started out with the idea that we need to support language implementers first, and so we're funding people to fix deficiencies in the existing languages. That's kind of why; I don't want to put testing on the back burner, it's just not the first priority.

The EF is also interested in funding such work; there are a couple of proposals under evaluation, and we're trying to figure out the best way to move forward. But let's maybe take this offline. I very much agree, Johnny, that we should be doing this as well.

Okay, just one last thing real quick. It makes sense that they want to do the interoperability kind of stuff first. But we have this target of January 3rd, and maybe we could talk offline about how everyone feels, realistically: play out some different scenarios and see, with these tests and a long-running testnet, how realistic January 3rd really is.

January 3rd was a suggestion. Obviously it's a nice target, but it's not a deadline.
And I want to tell the reporters listening to this: that is not a deadline. It's something that's more in the hands of the implementers than it is in the hands of the researchers, so it was purely a suggestion, something that's maybe feasible. I expected the last mile to be long; there are a lot of little things. So I don't want to harp on that January 3rd date as a deadline that exists, because there's a lot we're juggling right now and a lot of unknowns.

I'm not one of those groups, and I'm mindful of the sensitivity here, but I would imagine that groups that get their funding from the EF hear January 3rd and think: okay, shit, we have to do everything possible to hit that. Obviously it's good to have a push date, but everyone should think logically about what's realistic, so that we can test everything and do it methodically. Maybe you're already thinking that; I'm just saying it out loud. So just FYI.

Yeah, we'd like to do it quickly, but more importantly, we'd like to do it right. I mean, the January 3rd suggestion was very tentative, and I think it was mostly to try to avoid the December holidays; basically, we wouldn't launch before January 3rd, but January 3rd onwards could make sense. What I have done is survey some of the implementers to ask if they think they will be production-ready in 2019 for a launch on January 3rd, and two of the teams have responded positively, with optimism that it is possible. So at the end of the day, we need a minimum of two clients to be ready, and we'll see how the landscape evolves organically.
But for sure, I'm not expecting the majority of the clients to necessarily be ready in 2019.

Okay, I'm curious. I don't believe you reached out to us, and I'm curious exactly what we're defining "ready" as. Are we saying that there's going to be a three-month-long multi-client testnet starting in September, so that we can sort out any bugs that are found? If so, that means that multi-client testnet has to run flawlessly for three months and then we immediately go live, and that seems improbable. I think all of us could push really hard and make January 3rd, but it's dangerous, in my opinion.

Can I say something?

Yep.

Okay, cool. So we've been working with Prysmatic and a few other teams, and Renee's been working on implementing Hobbits. We're planning an impromptu meeting in Toronto next week for anybody who's around; I think it's going to be Preston and Renee, Dean, Greg and the ChainSafe guys, and I think Anton might be joining us as well. So if anybody else is interested, we're going to start ironing out some of the networking stuff and trying to come up with a loose specification for what that stack is going to look like, and next we'll try to move on to some research on things like data sync and peer discovery. So if anyone wants to join us, please do; we'll be in Toronto next week. Also, as Johnny mentioned, we have a few updates to the Hobbits spec that largely came out of the conversation with Prysmatic. I'm going to post a link right now, and any feedback is welcome.

There are a couple of things on the agenda I want to get to before we hit the hour-and-a-half mark, and then we can come back to open discussion. Greg from ChainSafe suggested that we move communications to Discord.
Primarily we communicate in one Gitter, or maybe two or three Gitters, and then there are tons of fragmented Telegram chats and emails and things like that. So the first proposal is to have a more unified place to talk. The main downside I've seen is that Discord is a little more overhead to come in and participate, because you do have to create a username and log in. So I think the minimum for me is to bridge the current sharding Gitter to the general room on this Discord. It seems like people are generally very positive about this; if anyone objects, please speak up.

Yeah, I'd like a bridge to Telegram as well.

Yeah, if we could bridge it to Telegram, that would be perfect. Does bridging like that exist? That would mean a lot of us wouldn't have to download yet another messaging app.

There's a messaging app that bridges them all by default. It's called Matrix. I'm joking.

Right, I don't intend to download it; I'm at my max anyway. Let's do it; I think this is good, it makes sense. I know Proto said he would help us set that up, so, Proto, let's talk. And then Proto has a proposal for standardizing a graffiti use case for interop testing. Proto, do you want to give a quick rundown on that?

Sure. The idea is that the block body has this one field, graffiti, that can contain any data, and we can use it for debugging during testing: put a little metadata in it about what kind of client is producing the block, where the client is located, how long it has been running, all these kinds of metadata, and then be able to easily debug large numbers of blocks. What we essentially need from the others is to all agree on the same format.
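As a concrete sketch of what such a shared format could look like: the layout below (a 4-byte vendor tag, a version number, a Unix timestamp, and an IPv4 address) is hypothetical, just one way to spend the 32 bytes; the real format would be agreed in the proposed spec-repo issue.

```python
# Hypothetical packing of interop-testing metadata into the 32-byte
# graffiti field. The field layout (vendor, version, timestamp, IPv4)
# is illustrative only, not an agreed standard.
import ipaddress
import struct


def pack_graffiti(vendor: bytes, version: int, timestamp: int, ip: str) -> bytes:
    """Pack metadata into exactly 32 bytes, zero-padding the remainder."""
    assert len(vendor) == 4, "vendor tag is fixed at 4 bytes in this sketch"
    packed = struct.pack(
        ">4sIQ4s",                        # 4 + 4 + 8 + 4 = 20 bytes used
        vendor,
        version,
        timestamp,
        ipaddress.IPv4Address(ip).packed,
    )
    return packed.ljust(32, b"\x00")      # pad to the full 32-byte field


g = pack_graffiti(b"lh00", 1, 1_567_000_000, "10.0.0.42")
print(len(g))   # 32
```

This illustrates the "large and small at the same time" point from the discussion: even with 12 spare bytes left over here, adding a git-hash prefix and a second timestamp would use the field up quickly.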
So what we're trying to achieve is to collect suggestions from all the other clients on what data they can provide and what would be useful for interop testing, and then standardize it, or at least loosely standardize it.

Does it persist after the testnets, or is it just for testing?

Yeah, maybe we should open an issue on the spec repo where everyone can add ideas, because 32 bytes is large and small at the same time. For example, for IP addresses you need four bytes at minimum, so we could run out easily.

Yes, say a client vendor, a timestamp, maybe two kinds of timestamps, some statistics, and an IP address: each of those could be around four bytes, and it would all fit in 32 bytes.

Generally, I'd say the client version is more important, so maybe the first four or eight bytes of the git hash.

Okay, let's take this to an issue.

You cut out a little bit there; my internet is unstable. Can y'all hear me?

Yes.

Okay, cool, let's just take this to an issue. Thank you, Proto.

Okay, before we go to open discussion, are there any pressing questions about the specification, things that have come up, any issues you've run into? Okay, great. Now, open discussion and closing remarks. Clearly there is a desire to figure out the minimum requirements to be production-ready. Some of this is a little fuzzy because there are a lot of unknown unknowns that we're going to run into in the next four or five months, but it's probably worth at least beginning to enumerate the knowns. So why don't we take that to an issue in the eth2 PM repo; we'll start a list and engage in the conversation from there. Does that seem reasonable?

A list of unknown unknowns?

No, things like: how long do we need a testnet running before we feel comfortable? Do we need incentivized testnets before we're ready?
What sort of performance metrics and stress testing do we need to do on the network layer? Things like that. There are a bunch of things we maybe can't quite enumerate today because we haven't hit them yet, but there are some things we should get out and start on, so that as we move into this interop networking phase, we're not blind. And thank you, Johnny, for beginning that conversation.

Yeah, I'm not as mean as I sound, I promise.

No, it's okay.

Is there any update on the thing that's supposed to be happening in the second week of September, the interop event?

I see him typing on Telegram. Well, it's still the 6th through the 13th.

Okay. I'm going to message everyone on the 18 different channels. I know there's a cap at some point; I don't know if we're going to hit it, but for planning purposes, starting to figure out RSVPs sometime in the next couple of weeks would be useful, just so people can get it firmly on their calendars.

I think he's prepping a presentation; I'll have him reply. He tells me he'll give updates on the interop event on the next call, so he'll have all the details by then.

Sorry, that's not right: the presentation is what he's prepping for. Invites are going out today, three to four per team, for September 6th through 13th.

Great, thank you, Johnny. Yeah, thank you, Joseph. Is there anything else? We have a few more minutes before we close.

Okay, I've got some PRs to work on in this last little sprint before Sunday.

Great. Thank you, everyone. I will be sending that release out on Sunday. I'll talk to you all soon and I'll get the recording up; my internet is pretty terrible right now, but I've recorded it locally and will get it up as soon as possible. Thanks, guys. Bye, everyone. Thanks. Bye.
Cheers.