Hello again. So this is going to be a longer talk. The intended audience is developers who want to build scalable and responsive distributed applications on top of Swarm using ENS. First, I would like to talk a little bit about the distributed application paradigm itself, because in many ways it is a radical departure from the client-server model that is prevalent on the Internet today and with which most web developers are intimately familiar. Then I'm going to talk about the various challenges of scaling and what the bottlenecks are. Then we're going to discuss how reliability issues are addressed in the dynamic, decentralized, and essentially low-trust environment of a distributed application. We're going to discuss availability of content, which is a very important issue. And finally, I'm going to talk about how a distributed application can be operated, developed, and improved over its lifecycle. So first, the distributed application paradigm. Distributed applications put much more weight on the leaf nodes: endpoint nodes perform a lot more work than clients do in client-server applications, and very large parts of the business logic actually run on the client side. On the slide, you see several possible kinds of nodes, each with its own requirements, that can participate in a distributed application. So it's not necessarily web-based; there can be native mobile applications and Internet of Things devices. We also need to keep in mind that key and identity management is sometimes outsourced to hardware keys, and we need to take that into account. The backend is typically generic, and I will talk a little later about the deeper reasons behind that. The kinds of backends on which we rely are the eth and les protocols for consensus: each node needs to be an Ethereum node, either a full node or a light node on the Ethereum network.
We use Swarm for permanently storing and accessing large amounts of data, and we use either PSS or Whisper for node-to-node communication. All of these backends are entirely general-purpose; they are not application-specific. And even though I told you that there are no clients and servers in a distributed application, you should still keep in mind that the contribution of resources is by no means equal. Peer-to-peer applications are not necessarily balanced in terms of the use and contribution of resources; even in BitTorrent there are seeders and leechers. In the general terminology, we distinguish between consumers and suppliers of resources, and each participant can choose, and even change over time, to what extent they supply and consume resources. But typically, consumers are characterized by high churn: they leave and join the network frequently, so you cannot rely on them staying on the network for a prolonged period of time. They typically spend accounting units, whether ether or special-purpose tokens, and they have resource limits, which in some cases can be quite severe. Suppliers are typically there to earn accounting units: they provide a service and expect to be paid for it. They have low churn, so while they can still leave and join the network, they don't do so frequently. They have high availability, and they have adequate resources for providing the services the distributed application needs. So in the next part, I'm going to talk about the challenges of scaling distributed applications, identify bottlenecks, and propose ways of dealing with them. The most important bottleneck is the blockchain. Of course, we have heard wonderful talks about how scaling problems on the blockchain are going to be solved over time.
And in an ideal world, in which we had an essentially infinitely scalable blockchain, everything I'm talking about would be unnecessary, because we could host the entire distributed application on the blockchain, keep all the data on the blockchain, and submit every transaction the application ever does directly to the blockchain. But there are two reasons why I feel that the techniques I'm going to introduce are still relevant. One is that scaling the blockchain is a slow process and a much harder challenge than scaling the particular dapp bottlenecks I'm going to talk about, so people want to create scalable applications even before all the problems of blockchain scaling are solved. And secondly, the techniques I'm going to talk about are actually useful for scaling the blockchain itself. For example, the initial motivation behind Swarm was to store the historical record of the Ethereum blockchain. As far as I know, there are very few archive nodes that store the entire blockchain history, maybe none, but we still want to keep the entire history of Ethereum available, and Swarm is the perfect vehicle for that. So all the techniques I'm going to talk about can be used for scaling various aspects of the blockchain. On the slide, you see in what ways the blockchain constitutes a bottleneck. Because all information needs to be replicated on every node, storing information on the blockchain is very expensive, so you don't want to burden the blockchain with too much information. Also, the blockchain can only be updated at block-time speed, which is on average a dozen seconds or so, which is not exactly responsive. And you have to pay for submitting transactions to the blockchain, so if your updates are very frequent or very large, it again becomes overly expensive. The other bottleneck that we're going to encounter is individual nodes.
Basically, you don't want individual nodes to perform all the work of a particular area of the distributed application; you want to distribute as many tasks of the dapp as you can over a large number of nodes working in parallel. Network links can also be bottlenecks. For example, you do not necessarily want to broadcast every interaction to the entire network, because then you're going to have network congestion, and in systems like Whisper you may need to pay, in terms of proof of work, for doing so. So these are the bottlenecks that we're going to be dealing with. The storage bottleneck is best overcome by storing everything you can in Swarm and putting the root hash of your data in ENS. The root hash is just 32 bytes, and it can integrity-protect as well as identify data of arbitrary size, complexity, and structure. That means that no matter how much data you're updating, on the blockchain you only need to change 32 bytes, which is obviously a lot cheaper than changing megabytes. The transaction bottleneck can also be overcome. For those of you who don't know, ENS has resolver contracts: each time you ask ENS what address corresponds to a name, a contract answers you, and those contracts can be more complicated than simple storage of 32 bytes. In particular, you can implement Raiden-style updates on ENS, meaning that you can send updates to all the interested nodes without having to commit all of them immediately to the blockchain. And yet all the interested nodes will know that the content has changed, and any of them can commit it to the blockchain in case it becomes critical that consensus is reached.
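A minimal sketch of such Raiden-style off-chain updates might look like this. Everything here is hypothetical: the HMAC stands in for a real ECDSA signature, and `InterestedNode` stands in for a client that would, in a dispute, submit its latest signed state to the resolver contract.

```python
import hashlib
import hmac
from dataclasses import dataclass

SIGNING_KEY = b"publisher-signing-key"  # hypothetical; stands in for a real keypair


@dataclass
class Update:
    seq: int     # monotonically increasing sequence number
    root: bytes  # the new 32-byte Swarm root hash
    sig: bytes   # authenticates (seq, root); HMAC stands in for a signature


def sign_update(seq: int, root: bytes) -> Update:
    mac = hmac.new(SIGNING_KEY, seq.to_bytes(8, "big") + root, hashlib.sha256)
    return Update(seq, root, mac.digest())


def verify(u: Update) -> bool:
    mac = hmac.new(SIGNING_KEY, u.seq.to_bytes(8, "big") + u.root, hashlib.sha256)
    return hmac.compare_digest(mac.digest(), u.sig)


class InterestedNode:
    """Applies off-chain updates immediately; commits on-chain only if needed."""

    def __init__(self):
        self.latest = None

    def receive(self, u: Update):
        # Latest-wins: accept any valid update with a higher sequence number.
        if verify(u) and (self.latest is None or u.seq > self.latest.seq):
            self.latest = u

    def state_to_commit(self):
        # What this node would submit to the resolver contract in a dispute.
        return self.latest
```

The point of the sequence number is that any interested node can prove, on-chain, which signed state is the newest, so the blockchain only needs to be touched when consensus actually matters.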
So this way we take the burden of updates off the blockchain: we can commit to the blockchain infrequently and yet have immediate, very frequent updates to the root hash of the data that we're using. Broadcast bottlenecks can also be overcome, using PSS. Just as Louis explained before me, PSS can be used to broadcast messages to parts of the network instead of the entire network. So if you have this pub-sub model, then PSS will make sure that, even though there is some overhead, you won't burden the Swarm network with every message being broadcast to every node. Of course, this only becomes a problem if your application's scale is really big; for many applications, broadcasting is perfectly adequate, and if you're more interested in darkness, you can still use Whisper, but then you have to keep in mind that you have a network bottleneck. In the next part, I'm going to discuss the trade-off between responsiveness and consistency. It's a trade-off because the faster things happen, the more difficult it becomes to agree in a timely fashion. Consistency is only guaranteed on the blockchain, and it is an eventual consistency, which is rather slow. You can see a back-of-the-envelope calculation that tells you that you will reach consensus in a time that can hardly be called responsive, and you don't want to wait that long for every update to become visible to all the users of your application. So essentially, you need to broadcast the updates to all the interested parties, and you only need to commit to the blockchain once it becomes important to guarantee consistency over a longer period of time. Here I would like to give you a few examples. One is an online discussion where there is some kind of content, be it a video or a blog post or a picture, that several users are viewing and commenting on.
And the other is a massively multiplayer game in which different players are interacting with each other. Obviously, submitting everything to the blockchain is hopeless in both cases. But in both cases, you can notice that it is relatively easy to scope the circle of parties interested in a particular update. In the case of an online discussion, it is enough to make updates fast for those who are actually viewing that discussion and participating in it. It is acceptable that others, who are not browsing that particular discussion right now, will only get the updates much later; they are only interested in the final state of the discussion and don't need to receive every update at all times. Similarly, online games typically have locations: there are some players in a given location, and you only need to send quick updates to the players whose characters are in that particular location, not to all the players. The update structure is also more complicated than simply a hash. If all you broadcast is the new root hash of the Swarm content, then you run into a responsiveness problem: it takes time for Swarm content to sync, and even as we improve the performance of the syncing protocol (and we will), it will never be as fast as direct message passing, simply because a much larger amount of data needs to be transported and also stored along the way. So if you only send the root hash, the response time is going to be much longer than the message-passing time, because you also need to wait for the corresponding Swarm content to become available to the interested parties. Instead, what you can do is also include in the updates the actual information that was changed.
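The idea of scoping updates to interested parties, and of carrying the changed content alongside the new root hash, can be sketched with a toy in-process bus. This is not the PSS API; the `ScopedBus`, `Node`, and topic names are all hypothetical stand-ins for topic-addressed PSS messaging.

```python
from collections import defaultdict


class Node:
    """A participant's client; collects updates for the content it watches."""

    def __init__(self):
        self.inbox = []


class ScopedBus:
    """Toy stand-in for topic-scoped PSS: an update about a discussion or a
    game location is delivered only to the nodes subscribed to that topic,
    not broadcast to the whole network."""

    def __init__(self):
        self.subs = defaultdict(set)

    def subscribe(self, topic: str, node: Node):
        self.subs[topic].add(node)

    def publish(self, topic: str, root_hash: bytes, payload: str):
        # Carry both the new root hash and the changed content itself, so
        # subscribers can update their local model immediately and verify
        # against Swarm (and eventually the blockchain) later.
        for node in self.subs[topic]:
            node.inbox.append((root_hash, payload))
```

Nodes not subscribed to the topic never see the message, which is exactly the congestion-avoidance property being described.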
For example, if a comment was added, then in addition to the new root hash, you also send around in PSS messages the actual comment that was added, so that the nodes monitoring the discussion can immediately update their local model. And yet, whenever the Swarm update happens, and especially when the blockchain update happens, they can verify that they haven't been lied to. So this does not present a security problem; it just speeds things up. I would also encourage you, instead of creating very complicated update rules, which then require various ways to verify that the updates were legal, that they were of the kind that is allowed, to do aggregation on the front-end side. For example, in an online discussion, every participant would update the root hash of all the comments that they have sent to different parts of the service. Whenever a participant looks at a particular discussion, they would gather the addresses of the participants of that discussion from ENS, monitor the updates of those participants independently of each other, and do the aggregation into a discussion themselves, on the client side. That's a much simpler and much more reliable way of dealing with concurrent access. In the next part, I'm going to briefly address issues of reliability and availability. Here I would like to go back to the things that I said about distributed applications: whenever possible, you should not try to build your own infrastructure. Try to rely on generic infrastructure, because that way it becomes somebody else's problem. For a distributed system to work well, it needs to have a sufficient number of nodes, those nodes need to have sufficient incentives, and they each need adequate resources. And all these things are difficult.
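Returning for a moment to the front-end aggregation just described: the client-side merge is simple enough to sketch directly. The feed shape and the ENS names here are hypothetical; the assumption is only that each author publishes a feed of their own timestamped comments and the viewer merges them locally.

```python
import heapq


def aggregate(feeds):
    """Merge per-participant comment feeds into one discussion, ordered by
    timestamp; ties are broken by author name so every client that sees the
    same feeds computes the same discussion."""
    streams = (
        sorted((ts, author, text) for ts, text in comments)
        for author, comments in sorted(feeds.items())
    )
    return list(heapq.merge(*streams))


# Hypothetical per-author feeds, as a viewer would assemble them after
# looking up each participant's address in ENS and following their updates.
feeds = {
    "alice.eth": [(1, "First!"), (4, "Replying to Bob")],
    "bob.eth":   [(2, "Hi Alice"), (3, "More thoughts")],
}
```

Because each participant only ever writes to their own feed, there is no concurrent write access to arbitrate, which is what makes this simpler and more reliable than shared update rules.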
For example, if you only use Swarm as a generic information-storing framework, then making sure that Swarm content is available is our problem as infrastructure developers; it's not your problem as an application developer. And in general, there is a reliability ranking among these services: eth is the most reliable, the light client protocol is less reliable, bzz is even less reliable, and PSS is the least reliable of these generic services, simply because of the relative popularity of each service on the Ethereum network. And if you do something yourself, it will be way, way behind all of these on the reliability scale. You should also keep in mind that the only thing that is certain is the blockchain; everything else is ephemeral. So if there is some important state update, you eventually have to make sure that it reaches the blockchain. And finally, you should be paying attention to incentives. If you want to enforce compliant behavior, the trick that can make things more scalable is to use reactive rather than pre-emptive measures. Instead of writing complicated contract code that can only drive the application mechanics as the smart contract executes, at block-time speed, everybody updates locally as best they can; if there is a dispute, you turn to the blockchain and figure out who's right and who's wrong. This way, if everything happens correctly, you rarely touch the blockchain, and that helps you scale. So instead of proactive security measures, you use reactive security, and you only bother the blockchain in case of disputes. In the final part of my talk, I would like to introduce some ways of maintaining, improving, and developing a distributed application and adding big features to it.
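The reactive-security pattern above can be reduced to a small sketch. The rule (only a comment's owner may edit it) and the `Edit` structure are hypothetical; the point is only the split between the cheap optimistic path everyone runs locally and the expensive check that, in a real system, would run inside a contract only when someone raises a dispute.

```python
from dataclasses import dataclass


@dataclass
class Edit:
    author: str      # who submitted this edit
    comment_id: int
    owner: str       # who originally posted the comment


def legal(e: Edit) -> bool:
    # Hypothetical application rule: only the owner may edit their comment.
    return e.author == e.owner


def apply_optimistically(log, e: Edit):
    """Happy path: every client appends the edit locally; no chain access."""
    log.append(e)


def adjudicate(log):
    """Dispute path: only when someone objects is the rule actually checked
    (on a real system, inside a contract), identifying the illegal edits."""
    return [e for e in log if not legal(e)]
```

As long as everyone behaves, `adjudicate` never runs and the blockchain is never touched, which is exactly where the scaling win comes from.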
This is very different from the client-server model, where you can do everything on the server and you don't have to consider the fact that your application is not entirely under your control. When you're rolling out a new version, you obviously need to update ENS to point to the root hash of the new version. But you also need to somehow notify all the active users of your application that things have changed, because otherwise they will just keep using the old version, which might even interfere with the workings of the new version. In this way, it is very different from a client-server application. Also, with dapps we have the same thing we have with blockchains: we can have forks. It is entirely feasible that somebody likes a particular version of our dapp, registers it under their own ENS domain, and people keep using that. So whenever you introduce a change that the community, or at least a large part of the community, disagrees with, you will have forks. I think I'm running out of time. The last thing I wanted to mention is analytics, which is much more difficult than in a centralized environment and is a dapp in itself that will eventually be developed. But unfortunately, I have run out of time. So if you want to talk about analytics, please see me on Saturday at the breakout session. Thank you.