So, hello again, everyone. I write Rust by day, and at night I think about how to bring P2P networks and the web closer together. Today I want to talk about WebRTC. WebRTC is an HTML5 technology that has been available since at least 2012. It's not yet finalized as a standard, but it's already available as a draft, and it's usable today. This won't be a talk about the intricate technical details of WebRTC, and it's not only about WebRTC per se; it's about what's possible and what changes WebRTC might bring. At first I thought about giving you more practical examples and more detail about the structure of decentralized networks. But then I decided that technology in isolation is not that important, because every technology serves a purpose; we don't usually make things just to make things, right? So I want to give you a broader perspective on this technology, because I strongly believe that WebRTC has great potential to change the world and spark a new decentralization revolution. I want to inspire you to take action, participate in that revolution, and help build the new internet. But you might ask: do we actually need that revolution? Why build a new internet when the current one works quite well? To answer this question, we need to go back in time a little. In the past, we had a dream of a completely decentralized network, not controlled by any authority and not prone to censorship; a network where every individual has the freedom to say what they think and what they want. That network already exists: today we know it as the modern internet and the web. However, this is not the internet as it originally was. In the past, we didn't depend on large corporations running giant data centers to provide, say, email services. The internet was small and simple, and at first people ran their own email servers.
Then, in the late 90s, there were multiple email service providers, because it's just not that easy to run your own mail server, right? It's easier to register on someone else's computer and make it their headache to deal with the administration tasks. And nowadays our email usage has consolidated around just a few big providers. How many people here today run their own email servers? Raise your hands. Well, I see just a couple of hands, and that's quite expected. Notice that this is a technical audience, so I think it's safe to extrapolate and say that almost no one from the general public runs their own email service now. But the email protocol itself is still inherently federated, decentralized, and open, and many of the original internet protocols were built that way: IRC, FTP, HTTP. All of these protocols were created more than 20 years ago with the vision of an open internet of equal peers with equal power. We had the concept of homepages, which belonged to users; now we have Facebook profiles, which belong to Facebook. In the name of convenience, we have sacrificed our privacy and control and handed them over to third parties, and the internet landscape has changed drastically. Now we depend on centralized services to do everyday things: we upload our files to Dropbox to share them, we chat on Facebook and WhatsApp, and we pay with PayPal. These services are great, and they're easy to use. This approach to technology helped bring a billion new users to the internet and will help bring many more. But at the same time, it has a downside: it makes it even easier for governments to track all your communications and snoop without many hurdles. And they do that with good intentions, actually; they do it to protect you and the children.
But I think it's the wrong direction, because almost all of our data and almost all of our messages can be read by government agencies, and we have allowed them to do that. Many of us know where this can lead: governments and corporations in control, an all-seeing, "wise" Big Brother who knows better than us what we can read and what we can write. It leads to censorship. And this is not some theoretical problem; it's our reality already in countries like Turkey, where they block Wikipedia, or China, where more than 3,000 websites are blocked. But the problem is actually bigger than that. Our news feeds are controlled by Facebook and Google, and the feed-ranking algorithm is an opaque black box. For all we know, it just shows news similar to what you liked before and hides the news you supposedly won't like. That algorithm alone has significant power to skew election results, for instance. Should it always be like that, though? I think not. We can fight back by creating decentralized apps that are easy to use. I personally believe this is crucial, because user experience is just as important as privacy: no one wants to use bad apps, and no one wants to use apps that are hard to understand. We have seen that with PGP. Regardless of all its technical merits, PGP is just too inconvenient for everyday use, right? So most of us don't even use encryption for our email now. We need to incentivize users to defy the status quo and go fully decentralized. But how can we do this? I believe WebRTC can be the answer. It's already available in Chrome and Firefox on both desktop and mobile platforms. It doesn't need any plugins or installed software to work. It just works.
You open a web page in your browser, and you can connect directly to other browsers without passing your data through intermediary servers. WebRTC has been getting quite popular lately, but there are still many misconceptions about it. First, there is a common misconception that WebRTC can be used only for media: voice, video, and so on. Actually, it can do much more than that. Media is certainly one of the most important applications of WebRTC on today's web, but it can also transfer any text or binary data between connected peers. Another misconception is that WebRTC is suitable only for the web and can be used only from web browsers. In fact, you can use the protocol from anywhere, and that's a very powerful idea: it means we can create WebRTC applications for the desktop or for the Internet of Things, and they will be compatible with web browsers, communicating in both directions. WebRTC can play a significant role in the recent trend of re-decentralization, of bringing the internet back to its origins. Today we are witnessing a kind of renaissance of decentralized technologies, started by Bitcoin, and millions of people use these technologies every day. I believe that's very important, because it shows that decentralized networks are possible at scale, and that they are valuable. So WebRTC can serve as a kind of bridge to existing peer-to-peer networks and can help in creating new ones. And the most important thing to know about WebRTC is that it's simple to use. There is a concept called data channels, and data channels provide essentially the same API as WebSocket. First, you initialize a data channel, which is a kind of data stream between two peers. Then you set a hook to handle incoming messages, and then you can send a message yourself. And that's basically it.
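The data channel flow just described can be sketched in a few lines. This is a minimal illustration, assuming a browser (or other WebRTC-capable runtime) where `RTCPeerConnection` is defined; the channel label `'chat'` and the messages are arbitrary examples.

```javascript
// Minimal sketch of the data channel API: create a channel, hook
// incoming messages, send a message. Mirrors the WebSocket API shape.
function setUpDataChannel(peerConnection) {
  // A data channel is a message stream between two connected peers.
  const channel = peerConnection.createDataChannel('chat');

  // Hook for incoming messages, just like WebSocket's onmessage.
  channel.onmessage = (event) => {
    console.log('received:', event.data);
  };

  // Once the channel opens, we can send text or binary data.
  channel.onopen = () => {
    channel.send('hello from the other browser');
  };

  return channel;
}
```

The function only wires up callbacks; the actual traffic starts flowing once the underlying peer connection is established, which the next sections cover.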
You don't need to worry about securing the data transfer, because when you establish a connection between peers, it automatically negotiates session encryption keys. Data encryption is mandatory in WebRTC, so nobody can tap into your communications. You just need to make sure you transfer the session information securely, and you do that with signaling. With signaling, you exchange connection information between peers. This is a complex topic, worthy of another 30-minute talk, but basically it's just a setup stage where peers choose some point of exchange to trade connection information, which includes IP addresses and encryption keys. There is a caveat, though: that point of exchange can be considered a single point of failure, a centralization point that defeats the purpose of WebRTC, right? It certainly can fail, but it's no more centralized than, say, DNS, because DNS can fail too. If you know the IP address of the server you want to connect to, you can just use it directly, bypassing centralized name servers. It's basically the same with WebRTC: you can exchange this information out of band. That is, you can email it, share it in some secure chat, or even send it by DHL or snail mail; it doesn't matter. Once your counterpart gets it, you don't need any servers anymore, and you can talk directly. This part, signaling, is not defined by the WebRTC standard, so you have to invent your own way to discover peers. That gives you greater freedom in choosing the most efficient approach. It's usually done over WebSocket connections, but you can use anything you want. Now that we know the basics, we can try to establish a new WebRTC connection. This is a bit more complicated, because WebRTC can build different network topologies: you can connect just two peers to each other, or you can build complex peer-to-peer networks.
On top of that, the connection process is a bit different for each party. There are two sides in the handshake process: the connection initiator and the recipient. On this slide, you see the basic steps required of an initiator. You create an offer, which includes a session encryption key, and set it as the local description. On the other end, when you receive a request through the signaling channel to initiate a new connection, you follow almost the same steps. This time, you set the remote description, telling the peer object that someone wants to initiate a new connection. Then you create an answer to the offer and send it through the same signaling channel so the initiator can get it. Finally, when the initiator receives the answer, it just sets it as its remote description. And that's it: we have established a peer-to-peer connection between two browsers, and now we can use that deliciously simple data channel API. Well, never mind, because that's how it would work in some wonderful imaginary world in which, unfortunately, we don't live. In the real world, we have to deal with another problem: NATs, the cornerstone of peer-to-peer communications. Because the IPv4 address space is limited, and because people want to feel more secure, network address translators were invented. As you know, NATs translate IP addresses from your local network into global IP addresses. And there is a wide variety of NAT types: symmetric NATs, port-restricted NATs, full-cone NATs, and so on. The bad news is that you have to deal with the various types of NATs using different techniques. The good news is that these techniques are pretty well known, and WebRTC takes care of them for us. So all we need to do is provide known STUN servers to gather our public IP address, and then send it along with the rest of the connection information through the signaling channel.
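The offer/answer handshake above can be sketched with the standard `RTCPeerConnection` calls. The `signaling` object here is an assumption, a stand-in for whatever transport the application picks (WebSocket, email, snail mail), since signaling itself is not part of WebRTC.

```javascript
// Initiator side: create an offer and set it as the local description,
// then push it through the application's signaling channel.
async function startAsInitiator(pc, signaling) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ type: 'offer', sdp: offer.sdp });
}

// Recipient side: apply the offer as the remote description, then
// create and send back an answer over the same signaling channel.
async function handleOffer(pc, offer, signaling) {
  await pc.setRemoteDescription(offer);
  const answer = await pc.createAnswer();
  await pc.setLocalDescription(answer);
  signaling.send({ type: 'answer', sdp: answer.sdp });
}

// Back on the initiator: apply the answer, and the handshake is done.
async function handleAnswer(pc, answer) {
  await pc.setRemoteDescription(answer);
}
```

Once both sides have applied local and remote descriptions, the peers can talk directly; in practice, the ICE candidate exchange described later also has to happen before media or data flows.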
Basically, STUN works just like a mirror: you send a request from your local network, the STUN server sees your translated address, and it returns that translated address so you can pass it to your peers through the signaling channel. There are some cases, though, when it's not possible to connect directly, such as when both peers are behind a symmetric NAT. The solution here is TURN. TURN is a kind of last resort, because it basically works as a proxy server: both peers connect to the TURN server, and it relays traffic from one peer to another. Obviously, that's a costly solution, and it's not readily available, as opposed to STUN, for which we have multiple public services. Running a public TURN server would be very expensive, though theoretically we could reuse existing peer-to-peer networks, such as Tor, to relay traffic through existing nodes. For now that's not quite possible, so we may have to set up our own TURN server. Fortunately, TURN is needed only in a minority of cases, so usually we don't need to deal with it. And to make our life simpler, WebRTC uses the ICE protocol, which employs both STUN and TURN to establish a connection. ICE helps gather candidates, which can basically be thought of as pairs of IP addresses and ports. We pass these addresses and ports to our peers so that they can connect to us. There is a great feature in ICE: it also collects your local IP address. So if we are in a local network and want to connect to peers in our LAN, we don't need to go out through the internet to do that. And the WebRTC implementation of the ICE protocol lets us concentrate on the application side, because it does everything for us; we don't need to do much to set up the connection.
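Wiring STUN and TURN into a connection amounts to one configuration object. A sketch, with the caveat that the TURN entry below is a hypothetical placeholder (server name and credentials are invented); the STUN URL is one of Google's well-known public servers.

```javascript
// ICE configuration combining STUN (address discovery) and TURN
// (relay of last resort). You would pass this to the peer connection.
const iceConfig = {
  iceServers: [
    // Public STUN server: used only to learn our translated address.
    { urls: 'stun:stun.l.google.com:19302' },
    {
      // Hypothetical TURN server with placeholder credentials -- in a
      // real deployment you would run your own and substitute values.
      urls: 'turn:turn.example.com:3478',
      username: 'demo-user',
      credential: 'demo-password',
    },
  ],
};

// In a browser:
//   const pc = new RTCPeerConnection(iceConfig);
```

ICE will try direct candidates first and fall back to the TURN relay only when nothing else works, which is why the costly relay is needed in a minority of cases.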
Basically, what we need to do is provide the STUN and TURN server addresses, then set an event hook to gather our candidates and send that information through the signaling channel. On the other end, our peer gets that information and calls the addIceCandidate function. And essentially, that's almost all we need to know about setting up a basic WebRTC connection. However, what we have seen so far is the most basic topology, because it connects just two peers. It can be seen as a kind of peer-to-peer network as well, but it consists of only two actors. That's fine for cases where you just want to connect two peers for a chat or something like that, but very often we need more than that. For multiple peers, there are many ways to structure your P2P network. You can create connections between each and every peer, but that won't be the most efficient way, because with significant traffic, each peer would have to duplicate it for each connection. So if you are aiming for even, say, 100 peers, a full mesh topology would require too much CPU power and network traffic. To alleviate this, there is a structuring pattern called the star topology. In this case, we choose one or more peers with the most processing power, and these peers serve as a kind of multiplexer, effectively relaying traffic. So it acts as a proxy again. It works, it's efficient, and Google used it for their Hangouts implementation, but it has an obvious disadvantage: it's centralized, and now everyone relies on a single point of failure. Fortunately, there are many alternative ways to structure P2P networks. One example is onion routing in Tor, which builds chains from randomly selected peers and then relays traffic through the built chain, encrypting the data at each step.
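The full-mesh objection above is simple arithmetic: a mesh of n peers needs n·(n−1)/2 links and every peer must push its traffic n−1 times, while a star needs only n−1 links in total. A quick check:

```javascript
// Link counts behind the topology trade-off: full mesh vs. star.
function fullMeshConnections(n) {
  // Every peer connects to every other peer once.
  return (n * (n - 1)) / 2;
}

function starConnections(n) {
  // Every peer connects only to the central hub.
  return n - 1;
}

console.log(fullMeshConnections(100)); // 4950 links for 100 peers
console.log(starConnections(100));     // 99 links for the same peers
```

Fifty times fewer links is exactly why the star is efficient, and also why its hub becomes the single point of failure the talk warns about.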
So it becomes practically impossible to tell whether a node relayed the traffic from some other node or is itself the origin of the traffic; we can't know who the sender is. Alternatively, using distributed hash tables, we can structure networks as binary trees or rings. This is what the Kademlia algorithm does. Kademlia is the basis for the BitTorrent DHT implementation. It gives each peer a unique, randomly selected node ID, and each node has its own routing table, so it knows about other nodes, which in turn know about other nodes themselves; you can imagine it as a kind of tree. When you want to store something on the network, you hash your content, and that hash becomes your key. The DHT maps those keys, which are hashes, onto node IDs, which can be thought of as hashes as well. So given some content, you already know which node should store it. You reach that node by sending a request to the node in your routing table that is closest to the node you're looking for. That node then checks its own routing table and repeats the step. That way, in multiple hops, your request eventually reaches the destination, and your content is stored on the network. The details of these algorithms are complex, so we'll skip them for now, and for the moment let's concentrate on BitTorrent. Many of us have used BitTorrent at least once, to download Linux ISOs. At first, the BitTorrent protocol depended on centralized trackers to find the connection info and IP addresses of peers that have certain files. Now, with the introduction of the DHT, it just needs a single peer to bootstrap from, usually found by contacting a well-known node, and then it can continue to work in a decentralized fashion. But again, as with signaling, this is not a point of failure or centralization, because if you know the IP address of at least a single peer, you can join the network.
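The key idea in Kademlia is that node IDs and content keys live in the same space, and "closest" means smallest XOR distance. A toy illustration with 8-bit IDs (real Kademlia uses 160-bit IDs, and routing tables are bucketed, which is skipped here):

```javascript
// Kademlia-style closeness: distance between two IDs is their XOR,
// interpreted as an integer. Smaller XOR means "closer".
function xorDistance(a, b) {
  return a ^ b;
}

// From a routing table of known node IDs, pick the node closest to a key.
// A real node would forward the request there and the process repeats,
// hop by hop, until the responsible node is reached.
function closestNode(routingTable, key) {
  return routingTable.reduce((best, id) =>
    xorDistance(id, key) < xorDistance(best, key) ? id : best);
}

const knownNodes = [0b00010110, 0b10100001, 0b01110010];
const contentKey = 0b01110000; // hash of the content, truncated to 8 bits

console.log(closestNode(knownNodes, contentKey)); // 114 (0b01110010)
```

Because XOR distance is consistent across all nodes, every hop strictly reduces the distance to the target, which is what makes lookups finish in a logarithmic number of hops.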
Now, what happens if we combine BitTorrent with WebRTC? We get WebTorrent. It's basically a BitTorrent implementation in WebRTC terms that can work in browsers. It can't communicate with regular BitTorrent peers directly, because they use different network protocols: you can't use UDP in WebRTC connections yet, so data channels are implemented on top of SCTP. If you want to connect to BitTorrent peers, you have to go through a proxy node, but we don't actually need BitTorrent peers anyway; we can do a lot of interesting things without them. WebTorrent simplifies interaction with WebRTC even further, with a pretty straightforward API. You can pass it a file, a JavaScript string, or a binary buffer, and it will automatically be converted into a torrent, which is then published and announced on the network behind the scenes. Other peers, knowing the torrent hash, can download the file and share it further. The WebTorrent API is very simple; we have seen almost all of it on these slides. It's also available for Node.js, so you can use the very same API to build server-side apps that talk to browsers over the WebRTC protocol. And files are not limited to Linux ISOs: basically everything can be represented as a file, and there are many practical applications for WebTorrent today. For instance, we could build a distributed GitHub. There is a proof-of-concept project called GitTorrent that is already working; with that approach, you won't even need to depend on GitHub's availability to get your sources, and Git will be truly decentralized. Another application that is practical today is content delivery. You can make your website its own CDN: visitors help distribute the content of your website to other visitors. You can even represent your website itself as a bunch of files in a torrent, served by WebTorrent. There are many possibilities and ideas.
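A sketch of the seed-and-download flow described above, assuming the `webtorrent` package (browser bundle or npm module) has been loaded and is passed in; the second client stands in for a remote peer.

```javascript
// Seed a buffer with WebTorrent and fetch it back by magnet URI.
// `WebTorrent` is the constructor exported by the webtorrent package.
function shareAndFetch(WebTorrent) {
  const client = new WebTorrent();

  // Seeding a plain buffer: WebTorrent builds the torrent and
  // announces it on the network behind the scenes.
  const data = Buffer.from('hello, decentralized web');
  client.seed(data, (torrent) => {
    console.log('seeding as', torrent.magnetURI);

    // A second client (normally a different peer entirely) downloads
    // the content knowing only the magnet URI / torrent hash.
    const peer = new WebTorrent();
    peer.add(torrent.magnetURI, (t) => {
      t.files[0].getBuffer((err, buf) => {
        if (!err) console.log('downloaded:', buf.toString());
      });
    });
  });
}
```

The same code runs in Node.js and in browsers, which is what makes the hybrid browser-plus-server networks mentioned later possible.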
However, WebRTC is an imperfect technology. It still has a long way to go to support large-scale networks that could replace or improve the current web. Browser support is not perfect: current browsers such as Chrome and Firefox limit the number of connections you can establish, and this hinders the ability to build DHTs in the browser. For now, it's quite unfeasible to implement a contemporary DHT algorithm on top of WebRTC because of these limitations and because of the handshake overhead. That means the WebTorrent library currently depends on trackers, so it's somewhat centralized for now. Another important issue is that browser sessions are short-lived. When you close your tab, you lose all connections, and the next time you have to go through the handshake process, the signaling, and so on, all over again. This problem could be fixed by allowing WebRTC sessions to run in service workers, so that even if you close your tab, you remain connected to your peers. Unfortunately, that's not available today and not part of the current WebRTC standard, but there is a tracking issue in the WebRTC repository on GitHub. Another possible solution to the problem of short-lived sessions is hybrid servers. As you remember, WebRTC is not limited to browsers, so we can integrate the WebRTC protocol into existing P2P networks, and existing peers can serve as intermediaries: they can talk to peers that connect over WebRTC, so WebRTC users can act as a kind of client while the server-side peers serve as a backbone. And that won't be a centralized network, because the server-side clients are just the same peers; they're simply not limited by the browser. Still, while there are many problems to overcome, WebRTC is very promising.
But peer-to-peer networks have a set of problems of their own. Because these networks are trustless, it's hard to solve the problem of malicious nodes or spam, and if we store content on multiple nodes, we step into the territory of distributed systems, which are very hard to grasp. Apart from spam and malicious nodes, we have to deal with consistency and churn as well, because what happens if all the nodes that store some file leave the network at once? You lose access to that file, and if that file is, say, your Bitcoin wallet, it would be very sad to lose it. So for redundancy, we need to make sure we have enough copies of the same file distributed over multiple nodes. Of course, we should not forget about encryption either, because in trustless networks basically anyone can read your files. And most importantly, we need to make sure the nodes are incentivized to stay, meaning it should be in their interest to share these files as long as possible. Speaking of incentives, I personally think the economics of P2P networks is no less important than their algorithms, because who would want to share their bandwidth and computing resources if there is nothing in it for them, right? Bitcoin kind of solved that problem by giving coins to the nodes that do mining. So to build large-scale networks, we need to find a way to reward users. This problem is tricky, because it can lead to centralization all over again. Remember that the modern web originally started as a distributed network, but economic incentives and the question of who provides the resources have led us to the modern days of giant data centers and mega-corporations. You can see it even with Bitcoin, where mining is starting to concentrate around a handful of giant pools. So we need to find a good way to overcome this, because it would be worth it.
We could bring existing P2P networks such as Tor, Bitcoin, or Ethereum to the web by porting them with WebAssembly and adding WebRTC protocol support. However, that isn't quite possible for now, because raw UDP and TCP are not directly available in WebRTC yet, though they might be in the future. In the meantime, we can build new P2P networks, and there are many, many ways to use them for the good of humanity, because such networks are more secure and resilient, as the examples of Bitcoin and BitTorrent show. That is especially important for applications like the Internet of Things, which is becoming our reality already. We see that medical devices, self-driving cars, and critical infrastructure already depend on the internet, which is by and large controlled by a few big players now. So I strongly believe we should not give these mega-corporations even more power. Thanks a lot for your attention, and I'm ready to answer your questions. We have a few questions for you. Sure. Are there any examples of great WebRTC websites which are purely decentralized apps? Well, this field is currently experimental, and the projects that use WebRTC are experimental as well, but I can name a few. There are websites that replicate the functionality of Spotify or Google Music by creating a peer-to-peer network where you can share your music with a convenient interface. From the user-experience point of view, it's no different from Spotify, but underneath it uses WebRTC to share the music among the peers. So it can't be shut down or controlled: the music is your own, and you share it with your peers. There are probably many other experimental projects I don't know about yet, but we can build them. That's the point of my talk: to inspire you to build these applications, because there are many possibilities for such apps.
We can make the internet more secure by, for instance, replicating Tor's functionality on top of WebRTC, making it uncensorable and private, so that agencies like the CIA and FBI can't snoop on our communications. Awesome. How should we approach dealing with unethical content in a decentralized network, since it's uncensorable? Well, that's a tough question. There are many possible answers, and governments would certainly want the single answer to be prohibiting decentralized networks and making everything centralized and controllable. But I personally believe we should take this question out of the technical realm and answer it in another way. We probably need to educate people, so that we can deal with questionable content without also creating the possibility of censoring content that is not questionable. In democratic countries, we all roughly know what questionable content is, but in other, less democratic countries, "questionable content" can be, for example, criticism of the government, and they can censor that with good intentions again. So it's a tough question, and we can't deal with it in a single way. Probably we have to allow all content on decentralized networks and educate people so that they're simply not interested in the questionable content. Thank you. Very well put. How would we store data that we need to restore, for example a Slack conversation, when data only flows P2P? Well, if we are speaking about WebRTC in the browser, we can probably store that data through the W3C APIs. If I remember correctly, there are new APIs available in HTML5 that allow storing files on your local file system.
But apart from that, there are localStorage and sessionStorage, which we can use to store intermediate information, information that doesn't require going to the file system. So messages or emails, for instance, can be stored in local storage without touching the local file system. Nice. The other possible answer is that hybrid scheme: we can create server-side applications that use the WebRTC protocol, and these server-side applications would talk to the client peers running in web browsers. The browsers would then use the infrastructure provided by the server-side peers. But again, there is the question of incentives: what's in it for the server-side apps to store your content? It's an open question. Nice. How might you scale this for tons of users all communicating on the network at once? Well, peer-to-peer networks are inherently scalable, because the more peers you have in your network, the more efficient it becomes. If you have, say, a million peers, your content will be replicated all over the globe, so it will be even faster to access: for popular content, you will likely be reaching local nodes in your own network or your own country. As peers replicate content across the globe, it can be even more efficient than the current infrastructure. But with WebRTC, the problem is that web browsers currently limit the number of connections to 150 or so, so we can't build decentralized networks at massive scale for now. I hope this problem will be solved and browsers will be allowed to connect to more peers. Awesome, thank you so much, Nikita. Thank you all.
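The localStorage idea from the answer above can be sketched as a tiny message store. The storage backend is injectable (anything with `getItem`/`setItem`, the shape localStorage has), so the same code works outside a browser with a Map-based stand-in; the key name `'chat-history'` is an arbitrary example.

```javascript
// Persist chat messages as a JSON array under one storage key, so a
// P2P conversation survives page reloads without any server.
function createMessageStore(backend, key = 'chat-history') {
  return {
    append(message) {
      const history = JSON.parse(backend.getItem(key) || '[]');
      history.push(message);
      backend.setItem(key, JSON.stringify(history));
    },
    all() {
      return JSON.parse(backend.getItem(key) || '[]');
    },
  };
}

// Map-based stand-in with the same getItem/setItem shape as localStorage.
const memoryBackend = {
  data: new Map(),
  getItem(k) { return this.data.has(k) ? this.data.get(k) : null; },
  setItem(k, v) { this.data.set(k, v); },
};

const store = createMessageStore(memoryBackend);
store.append({ from: 'alice', text: 'hi' });
console.log(store.all().length); // 1
```

In a browser, you would pass `localStorage` (or `sessionStorage`) as the backend instead of the in-memory stand-in.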