Nice to be here. My name is Janik, I'm from BrainBot, and I'm here with Kevin from the Ethereum Foundation. We're going to talk a little bit about peer-to-peer networking in Ethereum 2.0, or Serenity, as we call it now. I will start by giving a little bit of background, introduce the problem we're trying to solve, and show some simulations to see whether our approach makes sense. Then I will hand over to Kevin, who will talk about the implementation of all that.

But let's start with the very basic things: our networking goals, what we actually want to do. We have some nodes that produce data and others that want to obtain this data, so we need a way to distribute it, and the way we do this should be efficient, fast, and safe. What kind of data? In the original Ethereum it's mostly transactions and blocks, and the important thing here is that all nodes basically download the same data, so there's no distinction at all: basically all nodes do the same thing.

In Serenity, things are different, mostly because of sharding. Sharding means we split the data up into different shards, and one particular node only downloads the data for its particular shard, so we can save a lot of bandwidth. In this picture I drew eight shards; in reality it will be about a thousand, but a thousand are too many to draw. In addition to the shard data, we also still need some global data that all nodes download: the beacon chain data.

From that we can infer how nodes should connect to other nodes. In the original Ethereum all nodes are basically the same, so a node simply picks randomly from the set of all nodes and connects to them. In Serenity, things will be different. When you pick peers, you first look at which shard you're assigned to, or which shard you want to download; in this example, shard three. Then you pick peers from that shard so that they can give you useful information; here, three additional nodes from shard three. And in addition to that, you pick nodes from the global set that can provide you with beacon data; in this example, nodes one and seven.

We can abstract from that. Essentially we now have two kinds of networks: one is the shard network, which consists of all the nodes that are subscribed to a certain shard and are connected to each other, and then there is the beacon network, which consists of all the nodes, and in this network only the beacon data is transmitted.

Another thing that's different from the existing Ethereum network is that we have a new class of nodes called validators. They are quite similar to normal nodes in the sense that they are interested in the data of a certain shard. But what's different is, first, they only care about the recent history, so they don't download the whole chain back to genesis, only the recent history. Second, they switch regularly between shards, because they get randomly assigned to a new shard which they're supposed to validate. And third, they don't want the rest of the network to know that they are a validator, or which validator they are, because they stake some ether and so want extra protection; if people knew they were a validator, they might get attacked.
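A minimal sketch of this peer-selection idea, assuming we already know which nodes belong to which shard (in practice that is exactly what the discovery protocol discussed later has to provide); the function and variable names are made up for illustration:

```python
import random

def select_peers(all_nodes, shard_members, my_shard,
                 n_shard_peers=3, n_global_peers=2):
    """Pick peers for a Serenity-style node: a few peers on our own shard
    (for shard data) plus a few peers from the global set (for beacon data).
    Toy logic only; not taken from any actual client."""
    shard_peers = random.sample(sorted(shard_members[my_shard]), n_shard_peers)
    global_peers = random.sample(sorted(all_nodes), n_global_peers)
    return shard_peers, global_peers

# Example mirroring the talk: a node on shard 3 connects to three shard-3
# peers and to two peers from the global set that serve beacon data.
all_nodes = set(range(8))
shard_members = {3: {2, 4, 5, 6}}
print(select_peers(all_nodes, shard_members, my_shard=3))
```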
Now, that's basically all the changes we have, so we need to think about what protocols we need. There are basically three functionalities. The first one is that we need a way to find suitable peers, and that's done by a discovery protocol. The second thing we need to do is distribute data, and we do this using a gossip protocol. And finally, new nodes that join the network, or validators that are assigned to a new shard, need a way to synchronize the chain, or the history of the chain, using RPC calls. I will not talk about the last one because it's basically no different from how Ethereum 1.0 does it; I will focus on gossiping and the discovery protocol.

So what's gossiping? Gossiping is a very simple peer-to-peer protocol that's used in a lot of projects, including Bitcoin and Ethereum 1.0. The idea is that one node in the network has produced some data and wants to inform the rest of the network about it. So what it does is pick a random peer and send the data to it; now we have two peers that know the data, and in the second round they basically do the same thing: again they pick new peers at random and send the data to them. They do this a couple of times, and after a few rounds the whole network knows the data. It's a very simple protocol, it's efficient, the process is fast, and it's robust in case of network failures.

We want to apply gossiping in two different settings: one is the shard networks and one is the beacon network. The difference between these two settings is that in a shard network we have only few nodes, because the nodes distribute themselves over the different shards, but we have a comparatively high throughput, so a lot of data is transmitted there. In the beacon network we have a lot of nodes, because all of the nodes in the network participate there, but much less data.

So we did some simulations to answer two questions. The first one is: can the network handle the numbers, the settings we have? And the second: how fast is the propagation, how much time does it take to propagate data? To do this we wrote an implementation; we used gossipsub, which has been designed by libp2p and which we will most likely use in the end. Another thing we need when we do simulations is to make sure that they represent reality in some sense; in particular, we need to know what bandwidth the connections between the nodes have. Fortunately there's a scientific paper that has measured this for the current Ethereum network, and we just picked those numbers.
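As a rough illustration of the round-based spreading described above, and of what the simulation measures, here is a toy push-gossip model in Python; the function name and parameters are made up, and real gossipsub maintains a per-topic mesh of peers rather than sampling the whole network each round:

```python
import random

def gossip_rounds(num_nodes=1000, fanout=1, seed=0):
    """Toy push gossip: each round, every node that already has the message
    forwards it to `fanout` peers chosen uniformly at random. Returns the
    fraction of informed nodes after each round."""
    rng = random.Random(seed)
    informed = {0}                  # node 0 creates the message at time zero
    history = [len(informed) / num_nodes]
    while len(informed) < num_nodes:
        targets = set()
        for _ in informed:
            for _ in range(fanout):
                targets.add(rng.randrange(num_nodes))
        informed |= targets
        history.append(len(informed) / num_nodes)
    return history

# With 1000 nodes the fraction typically reaches 1.0 after roughly 15-20
# rounds; multiply by a per-hop latency to get a feel for propagation time.
print(gossip_rounds())
```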
And here's what we got. We looked at the propagation of a message through the network over time. At the beginning, time zero, no one knows the message; the message is created at time zero, then it starts to propagate through the network, and at some point the curve reaches one, meaning that the whole network has seen the message. We did this for a couple of block sizes. One thing I forgot: this is for a shard, so for a single shard network where we have maybe a thousand nodes. The first thing we see is that it works; it's not like it takes forever or never reaches one, messages are always transmitted. The larger the block size, the longer it takes, of course, but for a small block size of 100 kilobytes it takes maybe two seconds, and for larger ones it gets longer and longer, but even one megabyte is still only about eight seconds.

Another thing we should look at is how much bandwidth is actually used. This is a histogram over all the nodes of what fraction of their bandwidth they use. We see that most nodes use about 20% of their bandwidth, some a little bit more, but no node uses more than 60% of its bandwidth. This is important because it tells us two things. The first one is that we're not operating at capacity, even at large block sizes; if there were nodes using 100% of their bandwidth we could be pretty sure they would not keep up in the long run, and this is not what's happening. The second thing is that there's a lot of bandwidth left over for new nodes that join the network and download data, or shards that sync their history, because this only simulates the propagation of newly created data and not the synchronization of chains and so on. So that's good.

Then we looked at the beacon network, where we have a lot more nodes; I simulated 100,000 nodes here. We see the same thing: at time zero a block is created, it propagates through the network, and at some point it reaches all the nodes. The block sizes are smaller here, 64 kilobytes to 512 kilobytes; we will see how large they will be in the end. And even though the network is much larger, we still get propagation times of a couple of seconds. So this seems to work very well and we are happy with that.

Peer discovery, that's more complicated. The job of peer discovery is to find peers in the beacon network and also to find peers in a specific shard, and this is one of the challenging points, because there are a lot of shards, and if you just pick random nodes in the network you need to pick a lot of them to find one that's suitable for you. So we would like to have a way that's more efficient than that. It also needs to be very fast, so that validators who are assigned to a new shard can start operating as quickly as possible and the dead time is as short as possible. And a fourth requirement is that validators, as I said earlier, would like to be private, and if it's easy to discover them using the discovery protocol, then that's not good.

We are considering a bunch of options here; three seem to be viable. The first one is Discovery version 5. It has been designed by Felix from the Ethereum Foundation to be used in the existing Ethereum network, but it has some nice properties, meaning we could maybe use it for us as well, and it looks very promising, I think. The second one is a simple variation of Kademlia. And the third one is to simply use a global gossiping channel to propagate the shard preferences, which seems kind of brute force, but it might actually be viable.
But we are not finished with that yet; we're still evaluating. Yeah, that's it from my side, and now Kevin will talk about the implementation.

Hello everyone, I'm Kevin, and I'm going to introduce the P2P implementation status on our side. This project is named sharding-p2p-poc, and it is a proof of concept of the current design for the Ethereum 2.0 P2P layer. We implement it using libp2p. So what is libp2p? It is a library that has many useful peer-to-peer networking components, so you can choose the components you want or need to build your own peer-to-peer application. Currently we're using the TCP transport, the Kademlia DHT, and PubSub. The goal of our project is to see if our current design meets the needs of Ethereum 2.0, and also to see if libp2p fits our needs. It can also serve as a temporary layer for the Ethereum 2.0 networking Python implementation until the Python libp2p implementation is ready.

So, the requirements for our networking layer for Ethereum 2.0: clients should be able to subscribe to one or more shards. In this graph the client subscribes to two shards, one black and one red, and the client should only receive the data from the shards it has subscribed to, so in this case the client should only receive the blocks in black and red and not the blue one. Also, the time to subscribe to a shard should be short, which means a node should be able to find shard peers in a short time.

So, the design. Currently we map the shards to topics in PubSub. The concept of PubSub is that subscribers subscribe to some topics and should only receive the data published to those topics. In this way each topic forms a separate channel, so we can segregate the shards with the topics. About discovery: we use bootnodes for new nodes to join the network and find the initial peers, and we use the Kademlia DHT. To discover shard peers, our current approach is to have a global topic where nodes broadcast the shards they're currently subscribing to, so if a node wants to find a peer in a specific shard, it can do so by subscribing to that topic; we're still exploring other options. And each node provides RPC for other nodes to request data; currently we support requesting collations, where a collation is the block in a shard.

And we're going to change to use the libp2p daemon. So what is the libp2p daemon? It makes it possible to use libp2p across languages; if you want to use it from a language, you need to implement the bindings. As you can see in this graph, on the left-hand side is the libp2p daemon; it is a standalone process that handles the libp2p components, and you can control the daemon through a Unix domain socket. Currently it supports multiple methods: identity, so you can get a peer ID from the daemon; connecting to other peers; opening streams to other peers; registering a function to handle incoming streams; and the DHT operations. PubSub support is still on the way.

This graph shows how we will change our structure. The blue parts are the parts we need to implement; the left-hand side is still Go and the right-hand side is Python. Currently we implement our logic both in Go and in Python, and we handle the communication through gRPC. We're changing to use the libp2p daemon, and in this way we can move all of our logic to the Python side; we only need to implement the Python bindings. And after the Python libp2p is ready, we can use it directly, so all of the logic will be on the Python side.
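As a toy sketch of the topic-per-shard idea and of the global discovery topic described above: the class, topic names, and message format below are invented for illustration, and the actual PoC uses libp2p's gossipsub rather than this in-process stand-in.

```python
from collections import defaultdict

class ToyPubSub:
    """Minimal in-process publish/subscribe: topic -> list of subscriber callbacks."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)


def shard_topic(shard_id):
    # One topic per shard, so shard traffic stays segregated.
    return f"shard_{shard_id}_collations"

DISCOVERY_TOPIC = "shard_preferences"   # hypothetical global topic name

pubsub = ToyPubSub()

# Another node listens on the global topic to learn who serves which shards.
pubsub.subscribe(DISCOVERY_TOPIC, lambda ann: print("announcement:", ann))

# This node subscribes to shards 1 and 3 and only hears collations for them.
for shard_id in (1, 3):
    pubsub.subscribe(shard_topic(shard_id),
                     lambda msg, s=shard_id: print(f"shard {s}:", msg))

# It announces its shard subscriptions on the global discovery topic ...
pubsub.publish(DISCOVERY_TOPIC, {"peer": "QmExamplePeerId", "shards": [1, 3]})

# ... and receives only the collations published to its shards.
pubsub.publish(shard_topic(3), "collation for shard 3")   # delivered
pubsub.publish(shard_topic(2), "collation for shard 2")   # not delivered here
```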
About the implementation status: we have finished the essential functionalities, including joining the network, subscribing to shards, broadcasting data to the shards, and requesting messages. We have a global topic for discovery and the common functions, we have tracing for testing, and we have Python bindings for our code. What's in progress: the Whiteblock team is supporting us with testing under network emulation, and we're also doing our own deployment and testing on a testnet. We're implementing the Python bindings for the libp2p daemon. We still have to finish peer management and a reputation mechanism, and do further optimization of the overlay. We're cooperating with the original Ethereum devp2p designers and have gotten a lot of advice from them; Protocol Labs supports us on libp2p and PubSub, and Whiteblock helps us with the testing. And that's it. I want to give credit to Felix and Anton from the EF, who gave us a lot of guidance, and to the great work of my colleagues. Thank you.

Yep, if anybody in the audience has any questions... well, there's one right now.

Q: Was the propagation data you showed, and the bandwidth utilization data, based on testing on a local area network or a global network?

A: Neither, it was a simulation. It was running all on one machine, but we simulated the latency and the bandwidth between the nodes.

Q: Based on a global network?

A: Yes.

Q: My question is, I know there's the devp2p implementation and now there's the Eth 2.0 one. Is the previous implementation kind of going to... is everything going to be switched to libp2p, or will there be different areas? In devp2p right now there's no libp2p; there's Kademlia for discovery, et cetera, but in the version you're showing, libp2p is kind of everywhere.

A: Yes, our current plan is to just use libp2p and not use devp2p for the Serenity stuff.

Q: And you're going for the Go implementation of libp2p?

A: Currently, yeah, but we have a grant for the Python libp2p implementation, so after it is ready, because we implement in Python, we can change to that.

Q: And this is why you also have the Unix socket, the daemon, right?

A: Yeah, the daemon is there to solve this problem, so that different languages can communicate: they can use the daemon before the actual libp2p is implemented in their language.

Thank you. Are there any more questions from the audience? Any comments from the audience? They like it, you like it, guys. Alright, great, that's it. A round of applause. Thank you.