Yeah, hello, everyone. I'm very happy to start us off with the update from the Swarm team. So today we're going to talk a little bit about what Swarm is, just to make sure we're all on the same page, what the status of the Swarm project is, and also where it's going. We've got some new sub-projects, it's going in different directions, and the general scope of Swarm has grown, so we're going to talk about where we're going. But I'm just going to start you off with what Swarm is, to make sure we all understand each other. You probably know that Swarm is a network of computers that share data in a decentralized way, to give us peer-to-peer decentralized storage with all the benefits that come from being a peer-to-peer network, such as resilience against network outages, DDoS attacks, and the rest. But what you may not know is that it really runs over the same connections as Ethereum. That's not quite true as it stands, because right now the Swarm client is still separate, but the vision is that it's part of the same package of protocols that run over devp2p. So your Ethereum node could talk the eth protocol or the Swarm protocol over the same connection. It's really tightly integrated with Ethereum. So let me give you the basic first use case, the sort of baby example of what Swarm is for: the decentralized storage of dapp data. The Ethereum blockchain is great for storing critical information about your dapp, maybe username registrations or something like that. But your dapp still needs images, HTML, JavaScript, and all the rest, and right now you'd probably need to host them on a server. Swarm is there to let us have all of this in a completely decentralized way. So here's how that would work. You'd go to your Swarm-enabled browser and type bzz, which is the Swarm protocol scheme, then the name of your dapp, something.eth. The Ethereum Name Service would then turn that name into a hash. The ENS record has an address field where you can store your wallet address.
And that's the address that would get paid if somebody sends a payment to that name. But the standard ENS resolver also has a field called content, and in that you can store a hash referencing content on Swarm, for example. So you use this content hash to query your Swarm peers for the data, and in return they supply the data to you. Anyone who supplies you with data that you want, you compensate using Ethereum-based micropayments. And the last step is to just use that data to render your dapp. The hash gives you integrity protection: you don't care where the data comes from, you know it's the right data. And it gives us a serverless, completely decentralized way to host our dapps. So that's great. Let me also tell you what's happening under the hood in Swarm when you have a file or any content. Swarm is agnostic to what the content is. It takes anything you throw at it and breaks it into little chunks, about four kilobytes each. The hashes of those chunks are collected into a Merkle tree; everything is Merkleized. The root hash of that Merkle tree is the ID of this file or this content in Swarm. And the chunks are all sent to different Swarm nodes for storage. What's really happening is that the hash is the ID of the chunk, and every chunk gets stored with whichever node has an address closest to that chunk's hash. So the data is really spread out, and your dapp data is uniformly stored throughout the Swarm network. The nodes share services with each other: it could be data, chunk retrieval and serving, or other services. And the accounting, how much you have to pay, how much you get paid, is done by this so-called swap protocol. So here's how that might work. Suppose node two is asking for a dapp and node one happens to have the chunks in question and gets to serve them. Like that. It's my own animation, I'm proud of that. And if there's more than one chunk, both nodes keep count.
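To make the chunking concrete, here is a simplified Python sketch of the scheme just described: content split into roughly 4 KiB chunks, chunk hashes packed into parent chunks, and the root hash serving as the content's ID. This is only a sketch; Swarm's real chunker uses Keccak-256 and prefixes each chunk with its span, so sha3_256 and the bare concatenation here are stand-ins.

```python
import hashlib

CHUNK_SIZE = 4096  # Swarm chunks are about 4 KiB
BRANCHES = 128     # an intermediate chunk holds up to 128 references

def h(data: bytes) -> bytes:
    # Stand-in for Swarm's Keccak-256
    return hashlib.sha3_256(data).digest()

def swarm_root(content: bytes) -> bytes:
    """Split content into 4 KiB chunks, then repeatedly pack the chunk
    hashes into parent chunks until a single root hash remains."""
    # Leaf level: hash every 4 KiB data chunk
    level = [h(content[i:i + CHUNK_SIZE])
             for i in range(0, max(len(content), 1), CHUNK_SIZE)]
    # Intermediate levels: a parent chunk is the concatenation of up to
    # 128 child hashes; its hash is the reference to that whole subtree
    while len(level) > 1:
        level = [h(b"".join(level[i:i + BRANCHES]))
                 for i in range(0, len(level), BRANCHES)]
    return level[0]

ref = swarm_root(b"x" * 10_000)  # three leaf chunks under one root chunk
print(ref.hex())
```

Whoever holds `ref` can fetch the root chunk, verify it against the hash, and recursively fetch and verify the children, which is where the integrity protection comes from.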
So that might be two, four, six. After a while, if node one is not consuming any data and node two is doing all the consumption, node two will have to pay node one. So it would use our checkbook payment contract to send a check to node one. At that point we call the channel balanced. Whether it's chunk for chunk or check for chunk doesn't matter; when it's balanced, we can go back and continue consuming services. So what's happening is, within the swap channel, in the middle there's zero balance: neither node owes the other anything. As soon as services get consumed, the balance moves off towards one side. At some point you're going to hit the payment threshold, and at that point the node that's in debt has to issue a check to its creditor to bring the balance back to zero. So the nodes are always working to keep the swap balance within that middle range. And if you slide too far off to the side, well, then you get disconnected. That's the ultimate punishment in a peer-to-peer network: nobody wants to talk to you anymore. This payment infrastructure that swap is a part of is very modular; it can go in many different directions. It was originally designed for data delivery, content delivery, and long-term storage, as described in our orange paper. But it's become much more modular and we're going to extend it. We're talking about this on Saturday in a talk called Swap, Swear and Swindle Games, where we describe how this can accommodate all kinds of services beyond just storage. Now, the status. Some of you might have tried Swarm already. If you have, you've been playing with the POC 0.2 release, and that really is a proof-of-concept release, not a final release. The proof of concept showed us that the basic idea works, and we've had some fun hosting content, trying it out, and playing with it. But the performance has been really slow, and we've had data availability problems throughout the network.
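The swap accounting just described can be sketched as a toy balance tracker per peer channel. The threshold values and method names here are made up for illustration; they are not taken from the actual implementation.

```python
# Toy swap-style accounting for a single peer channel.
PAYMENT_THRESHOLD = 100     # units of service owed before a check is due
DISCONNECT_THRESHOLD = 150  # slide past this and the peer drops you

class SwapChannel:
    """Tracks the running balance between us and one peer.
    Positive balance: the peer owes us; negative: we owe the peer."""
    def __init__(self):
        self.balance = 0
        self.connected = True

    def serve(self, units: int):
        """We served the peer some chunks: balance tilts in our favor."""
        self.balance += units
        self._settle_or_drop()

    def consume(self, units: int):
        """The peer served us: balance tilts toward them."""
        self.balance -= units
        self._settle_or_drop()

    def _settle_or_drop(self):
        if abs(self.balance) > DISCONNECT_THRESHOLD:
            self.connected = False          # the ultimate p2p punishment
        elif self.balance <= -PAYMENT_THRESHOLD:
            self.send_check(-self.balance)  # we pay; balance returns to zero
            self.balance = 0
        elif self.balance >= PAYMENT_THRESHOLD:
            self.balance = 0                # peer's check received, settled

    def send_check(self, amount: int):
        print(f"issuing check for {amount} to peer")

ch = SwapChannel()
ch.consume(60)   # we downloaded chunks
ch.serve(20)     # we served some back; balance is now -40
ch.consume(70)   # balance hits -110, past the threshold: a check goes out
```

Chunk-for-chunk traffic keeps the balance hovering around zero, so checks only need to be issued when consumption is lopsided.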
Data wasn't reaching where it was supposed to go. But as a proof of concept it was successful, and the testnet has taught us a lot. As a result, several of the core components are being rewritten or have been rewritten completely. Additionally, the scope of the project has grown. As I mentioned before, people are using Swarm not just to host content; for example, the Livepeer group are using it for broadcasts, streaming video and audio. Really cool. They're also talking on Saturday. Another area that's developed is communication, direct node-to-node communication. And to tell us more about that, I'd like to introduce Louis to talk about PSS. Hello, everyone. I'm Louis. This is my first DevCon. Actually, I'm a classical pianist, turned cultural magazine editor, turned funding bureaucrat, turned band manager, turned taxi driver. And I was privileged enough to give a ride to some blockchain people in Berlin, where I worked in my taxi, and they told me all about decentralized consensus, the war on privacy, and the looming dangers of opaque, algorithmically driven societal conformity. And this brought me back to the world of computers, which I've been in all my life to varying degrees. Now I work for JAAK, as part of the JAAK team. First of all, I very clumsily forgot to include the logo of JAAK here, but that's the logo. We are a sponsor. JAAK builds on Swarm, and we want to contribute to Swarm as much as possible, of course. And therefore I was very excited when Viktor Trón came to me and challenged me to do the PSS work. That's what I've been working on for the last seven months. Now, notice that among the things I said I was, I didn't mention public speaker. So that's why I have this, not to panic. I will be looking more at this than at you; don't take it personally, please. Now, Swarm is a network of nodes. And not only that, Swarm is a network of nodes that speak the same language.
This language is spoken on top of internet protocols and is regulated by a protocol framework; in Go Ethereum this is called devp2p. Swarm nodes use this language to find their most advantageous place in the network and to figure out who to share content with. But content doesn't necessarily just mean files. A message, for example, is just as worthy of the term. Thus comes the idea: if we can use Swarm to send bits and pieces of files around, why can't we use it to send messages as well? Now, wait a minute, you might say, there is already a messaging platform in Ethereum, and it's called Whisper. And this is true. But the mission of Whisper is somewhat different from that of PSS. Its focus is to serve the need for total anonymity and to secure freedom of expression, at the cost of performance. Since the Swarm network is routed, it knows what the shortest way to the final destination is, and therefore PSS can deliver messages fast and with a minimum of network load. Simply put, it serves a different purpose. So this is what PSS is: a postal service over Swarm. Now, this sounds easy, right? If you want to send a message to someone, you just slap an address on it and away it goes. Well, it's not quite that easy. Since messages potentially pass through unknown nodes on the way to where they're supposed to go, they should be encrypted so that people can't snoop on the contents. That means that the actual definition of a recipient in PSS is not only the address of a node, but whoever can actually decrypt the message. In fact, the node address in the end doesn't unambiguously define a recipient. So to route a message over Swarm using PSS, you pass it the message you want to send and an address to send it to. But let's say you don't want to tell everybody on the network who you're actually messaging with. This is easy: just disclose only a part of the address.
Swarm routing will still route the message as best it can from the knowledge it has, so it will land in some neighborhood of nodes closer to the recipient than you. And as before, whoever in that neighborhood can decrypt the message is the recipient. And the recipient doesn't necessarily have to be only one node. Now, what if we don't give an address at all? This is analogous to what Whisper does, and the consequence is that all messages get passed to all nodes by all nodes. This has nothing to do with routing at all, obviously, but it might be handy in some stages of the communication, like when you send a broadcast probe to enable a peer to send you its address, but encrypted, or to exchange some temporary encrypted keys that can be discarded later. Now, to make life even harder for would-be snoopers, PSS also employs redundant routing. This means that when a message enters the neighborhood of its destination, a node that receives the message for forwarding will pass it on to more than one peer. This happens even if the node can successfully decrypt the message itself. This way, anyone trying to trace a message's path through the network will have difficulty finding out where the message actually stopped. That is to say, a node that no longer passes the message on is not guaranteed to be the one the message was actually for; in fact, most likely it won't be. So, all good and well: you got the message, you can decrypt it. What do you do with it? Well, for this, PSS has the notion of topics. All messages that are sent belong to a specific topic, which can be any four-byte value, but is intended to be the first four bytes of the hash of an arbitrary value. PSS provides a registration method to attach code to these topics, so when a node gets a message with a certain topic on it, it knows exactly what to do with it. The topic structure is the same one Whisper uses for its envelopes.
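The partial addressing and topics described above can be illustrated in a few lines of Python. This is only a sketch: sha3_256 stands in for the client's actual hash function, and the helper names are hypothetical, not the PSS API.

```python
import hashlib

def pss_topic(name: bytes) -> bytes:
    # A topic is any four-byte value, intended to be the first four
    # bytes of the hash of an arbitrary value.
    return hashlib.sha3_256(name).digest()[:4]

def partial_address(addr: bytes, disclosed: int) -> bytes:
    # "Luminosity control": disclose only the first `disclosed` bytes
    # of the destination address.
    return addr[:disclosed]

def routes_toward(partial: bytes, node_addr: bytes) -> bool:
    # A node matches a partial destination if its address starts with
    # the disclosed prefix; an empty prefix matches every node, which
    # degenerates into Whisper-style flooding.
    return node_addr.startswith(partial)

topic = pss_topic(b"my-chat-app")               # 4-byte dispatch value
node = hashlib.sha3_256(b"some node").digest()  # toy 32-byte overlay address
prefix = partial_address(node, 2)               # reveal only two bytes
print(topic.hex(), routes_toward(prefix, node))
```

The fewer bytes you disclose, the larger the neighborhood the message lands in, and the less an observer learns about who the real recipient is.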
Envelopes being the entities that encapsulate content in the shh protocol. In fact, PSS cheekily steals this Whisper envelope to wrap messages in; in reality, all PSS does is slap an address on top of it, or not. The Whisper envelope also provides an expiry parameter, which PSS uses to stop messages from circulating in the network perpetually. And finally, PSS uses Whisper as a back end for encryption and the handling of keys. Like Whisper, PSS supports asymmetric, public-key cryptography as well as arbitrary symmetric encryption. Any 32-byte value is valid as a symmetric key in PSS and can be set at the developer's discretion. So to sum up, to send a message to a PSS node, all you need is a key, the topic, and an address, or not. So what exactly does PSS implement right now? The encryption, as I said; luminosity control, which is a fancy word for deciding how many address bytes to disclose; and a generic handshake module, which gives you a simple Diffie-Hellman exchange of keys in case you're too lazy to make your own. The keys generated by the handshake module are valid only for a certain number of messages, that is to say, they're ephemeral. Handshakes can also optionally be left out of the Go Ethereum build. There is an API, of course, available over IPC and WebSockets, for key handling, message passing, handshakes, and the common supporting functions that are needed. There's simple flood protection: all nodes that get a message store it on Swarm and use the hash of the message, or of the chunk, to determine whether they've seen this message before, and can optionally not forward it. And finally, you can actually use PSS with the devp2p protocol structure and logic that exists already. This means you can reuse devp2p code that's already there.
This can either be integrated into the code of the node itself, or there is also a module that enables you to do this in a separate process, using WebSockets as the transport layer. Now, I'm probably overstaying my time, I'm not sure, but anyway, there is a more thorough walkthrough of PSS on Saturday in the breakout session, at quarter past two. In it, I'll go through a bit more of the internals, what makes PSS tick, and I'll present the API. There will also be a demonstration of a chat app that's actually using PSS right now. I mean, it's not a production-ready thing, it's a demonstration prototype. And it's not only a demonstration from the screen: the audience, you guys, will actually be able to install the app and chat with us live, if all goes well, that is to say. Yes, so if you can't wait till then, here are some links. On top is the branch where the active PSS code is now. PSS will be part of POC3 for Swarm, so you won't find it in the Ethereum master repo now; it will be there later on. There is a description of the API, and there are also code examples, both for general p2p programming in Go Ethereum and a bunch of PSS examples, at the bottom link there. And I think that was it, yes. Okay, Danny with encryption. Hello, everyone. I'm going to talk about the first Swarm-based application that is aimed at a wide audience, basically the general public. It's a cloud service that allows people to share files with one another and also to sync files between different devices. This is expected to be a fairly popular use case, and what we hope to get out of it is to see how Swarm actually performs at scale. So we're really hoping to get extensive analytics and user feedback out of it.
And eventually it will allow us, the Swarm team, but also the Ethereum project in general, to use the infrastructure we have built to host our documentation, our source code, our web pages and so on in a decentralized and censorship-resistant fashion. In particular, I was always baffled by the fact that Linus Torvalds gave us a wonderful distributed source control system, Git, and then everybody started using GitHub, which is a centralized service. It's a great service, I'm very happy with it, but it's centralized. And there's already work going on for a module for Git on top of Swarm; Alex Beregszaszi is working on it. So hopefully in the not-too-distant future we're going to be able to host all the sources of Ethereum on Swarm and have the development repository stored in a distributed fashion. The first thing that we did for this application was a GUI-based file manager. It was released last September, so over a year ago. It is itself a Swarm-hosted, web-based distributed application, and it allows people to manage folders and files in a very familiar and intuitive fashion. It also has pluggable previewers for various types of content, so you can view images, movies, and later perhaps PDFs and whatever else people can think of. So that's the first thing that we built. Then Zahoor Mohamed released, this summer, a FUSE module, which means that you can actually mount a Swarm volume as a directory in your file system if you're using a POSIX-compatible operating system. So it works on Linux and Mac; no Windows support yet. It already has read and write support, and you can already mount ENS-registered volumes. There's one thing missing, which is a save-to-ENS kind of functionality, where you can update the ENS resolver with the new content hash after you have modified the contents of the Swarm volume that you have mounted. This is also available for download, and you are very welcome to play with it.
This is a big one. Currently all Swarm content is unencrypted and in the open, and this of course makes it completely unsuited for any serious use. But, and this is what I am currently working on: privacy and security, encryption. Here you need to understand that this fundamentally differs from a traditional cloud-hosted client-server application, because obviously you cannot force people not to share certain information. So access control needs to be implemented in a different way. Essentially, write control, who is allowed to modify something, is equivalent to the permission to update the root hash in ENS. So it is something that is controlled by a smart contract: a smart contract is what takes care of write permissions. Read permissions are even trickier, because where Swarm content ends up cannot be controlled; it just goes all over the network. You cannot hide Swarm content, you can only encrypt it. So what you can do is encrypt Swarm content, and read permission essentially means the ability to decrypt it. And if you want access control lists, so you want a large piece of data to be readable by some parties but not others, then what you can do is take the symmetric key with which the data is encrypted, encrypt that symmetric key with each of the public keys of the parties that have read permission, and use that as an access control list. This obviously affects how content can be referenced. In unencrypted Swarm, the standard reference to content is the Swarm hash of the manifest. In encrypted Swarm, in addition to the root hash, you also need the decryption key of the root. How does this encryption work? As Aron has already told you before me, Swarm chunks content up into small pieces and organizes them into a Merkle tree. In encrypted Swarm, every chunk is going to be encrypted with a unique key, and each edge in the encrypted Merkle tree contains not only the hash but also the decryption key for that particular chunk.
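The encrypted-reference idea, a reference being a (hash, decryption key) pair, can be sketched as follows. This is a toy: the hash-based XOR keystream stands in for the real chunk cipher, which isn't specified in this talk, and sha3_256 stands in for Keccak-256.

```python
import hashlib, os

def h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a hash-derived keystream. The same
    # call both encrypts and decrypts.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(h(key + counter.to_bytes(8, "big")))
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

def encrypt_chunk(plaintext: bytes):
    # Every chunk gets its own random key. The reference handed out is
    # the pair (hash of the stored ciphertext, decryption key): the hash
    # still gives integrity, the key gives read permission.
    key = os.urandom(32)
    ciphertext = keystream_xor(key, plaintext)
    return ciphertext, (h(ciphertext), key)

data = b"secret manifest contents"
stored, (ref_hash, ref_key) = encrypt_chunk(data)
assert h(stored) == ref_hash                    # integrity check, as before
assert keystream_xor(ref_key, stored) == data   # holding the key = can read
```

An access control list is then just the root reference's key encrypted to each reader's public key; that asymmetric step is left out here since it needs a crypto library beyond the standard library.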
This means that if a key at some point in the tree is revealed, so one of the Merkle tree nodes' keys gets compromised, only the subtree under it is compromised, which limits the damage from catastrophic failures or lost keys. It also allows different directories to be shared among different parties, as a subdirectory of different directory structures. And with that, I would like to invite Aron back to talk about the network testing framework that we have built. Yeah, hello again. I'm filling in for Viktor, our team lead, who is here but not able to speak right now. So I want to talk a little bit about network testing. Well, the full title is the network simulation and testing framework. And I have to admit that I'm not the right person to talk you through this diagram in detail, but that means I am the right person to give you an easy-to-understand, high-level overview of what's going on. I'm going to start from the right and work my way over to the left. In the top right, what you see is our network visualizer. It's a web-based, or browser-based, view of a Swarm network. All those dots in there are Swarm nodes, with the connections between them. This interface allows us to query any Swarm node: what's its current status, its stored chunks, every connection, the swap connection, what's the swap balance, what are the payments, and so on. So it gives us a great overview of a Swarm network, of a test network, to see what's going on. And it also allows us to input changes: we can click on a node and tell it to go offline, tell it to initiate a data retrieval, whatever we want. That takes us to events. These events we can feed into the network, sort of simulating activity on the network: telling nodes to drop offline, come online, do various things. And these events get packaged into whole scenarios.
So we can write a network scenario of nodes coming online, sharing content, downloading content, paying each other, refusing to pay each other, whatever we want, and we can observe how the network behaves, what the emergent behavior is. In the back end, which is further towards the left, we can either simulate the Swarm nodes in-process, very quickly, or we can launch a bunch of Docker containers, so we can host a cluster of Swarm nodes and actually run them in full. So the scenarios are portable: we separate out what we want to happen on the network from the level of simulation, whether we just simulate the nodes in-process or actually run them. We also want to have a back end to our cluster, where a whole cluster of machines runs a whole bunch of Swarm nodes through these scenarios. And the reason why we're so excited about this, why we're developing this, is to test the emergent behavior of the network. Normally when we talk about testing a client, you're running code tests to see whether a Swarm node behaves the way you expect it to. But a lot of the intricacies, the subtleties, of Swarm arise from the interaction of the different nodes in the network, the emergent behavior, like how data is distributed. One node requests data, the other ones forward the request, the data is found and handed back, and all the nodes have to do the accounting. If you want to see how this network behaves, you really have to simulate a full network and test the network itself. This testing framework allows us to do protocol testing. You write a scenario: what you want to happen, say one node uploads content, and then what you expect to happen, you expect the data to be handed from node to node, stored here, and all that kind of stuff. So it's more than just testing an individual node; it's testing that all the parts work together.
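The idea of scenarios as event sequences, checked against the emergent state of a simulated network, can be illustrated with a minimal sketch. All names here are hypothetical; this is not the actual framework's API, just the shape of the concept.

```python
# Minimal "scenario = list of events + expectations" sketch.
class SimNetwork:
    def __init__(self, n: int):
        self.online = {i: True for i in range(n)}
        self.stored = {i: set() for i in range(n)}  # chunk IDs per node

    def apply(self, event):
        kind, node, arg = event
        if kind == "down":
            self.online[node] = False
        elif kind == "up":
            self.online[node] = True
        elif kind == "upload":
            # Toy syncing: the uploaded chunk spreads to every online node
            for peer, is_up in self.online.items():
                if is_up:
                    self.stored[peer].add(arg)

def run_scenario(net, events, expect) -> bool:
    """Apply the events, then check the expected emergent state:
    a list of (node, chunk) pairs that should have been stored."""
    for ev in events:
        net.apply(ev)
    return all(chunk in net.stored[node] for node, chunk in expect)

net = SimNetwork(4)
scenario = [("down", 3, None), ("upload", 0, "chunk-A"), ("up", 3, None)]
# Node 3 was offline during the upload, so it should not hold the chunk
print(run_scenario(net, scenario, expect=[(0, "chunk-A"), (1, "chunk-A")]))
```

The point of the separation is that the same scenario could drive either this kind of in-process simulation or a cluster of real nodes.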
And this framework allows you to test PSS, storage, retrieval, the whole package. That's all I'm going to say about it today; we'll say more about all of this on Saturday. So, upcoming stuff, what's on the roadmap. One of the things that's changing is the hashing algorithm. Earlier I said that when you have a chunk of data, you hash it and that is its ID. We're changing that so that the ID of a chunk is the Merkle root of a Merkle tree that we build over the chunk's 32-byte segments. This allows far more fine-grained resolution, proofs of inclusion, and proofs of custody; that's where this comes in. Then, the syncing protocol has been rewritten. The syncing protocol is the protocol that governs how Swarm nodes share data, how the chunks make it to their destination. That's the one that was causing us the most headaches in terms of performance on the test network, and it's why, if you were playing around with it, you got so many 404 errors where files weren't found: the data wasn't being passed on the way it was supposed to. So that's been rewritten, and it will be merged soon. Indeed, the entire network layer and the connection management have also been rewritten and are ready to be merged. And these changes are breaking, so the hashes need to change and the ENS entries will need to change. In fact, we're going to have a complete reset of the Swarm testnet soon, once these things are merged, and then hopefully we'll have a far more performant test network to play with from then on. As for ongoing work: encryption, erasure coding, and plausible deniability, that's Danny's area, which he talked about and is working on. PSS is in active development. And the contracts for storage insurance, which we didn't talk much about today, were in our original orange paper, and they are being developed and tested; they haven't been deployed yet.
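The new chunk hash described above, a Merkle root over the chunk's 32-byte segments, can be sketched like this. It's a simplification: the real BMT hash pads chunks to a fixed 4 KiB base and hashes in a length prefix, both omitted here, and sha3_256 again stands in for Keccak-256.

```python
import hashlib

SEGMENT_SIZE = 32

def h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def bmt_hash(chunk: bytes) -> bytes:
    """Chunk ID as the root of a binary Merkle tree built over the
    chunk's 32-byte segments, instead of one hash over the whole chunk."""
    segments = [chunk[i:i + SEGMENT_SIZE]
                for i in range(0, max(len(chunk), 1), SEGMENT_SIZE)]
    level = [h(s) for s in segments]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

chunk = bytes(range(256)) * 16  # a full 4 KiB chunk: 128 segments
root = bmt_hash(chunk)
print(root.hex())
```

This is what enables the fine-grained proofs: showing that one 32-byte segment belongs to a chunk only takes the sibling hashes along the path to the root, seven of them for a full 4 KiB chunk.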
So far, all we have is the contracts for retrieval, the bandwidth incentives, but the storage incentive part from the orange paper is being developed too. And the testing framework is undergoing active development. Then, also on the roadmap, I want to highlight a few things. A light Swarm node: we've been contacted by people doing mobile development, saying we cannot assume that every Swarm node will want to store a bunch of data, serve it, and earn revenue. Some people just want to join the network and only consume the limited data that they want, perhaps with limited bandwidth. So we've talked about various light modes of operation, nodes without storage or nodes without a lot of bandwidth, and we've been developing the theoretical framework for how those Swarm nodes should fit into the network. We intend to develop those too, various light modes of operation, to make it easier for mobile clients and other light clients. The payment infrastructure, as I alluded to earlier, is being developed to be far more generic and to allow for services other than data storage and retrieval, one of them being communications, broadcasts, and other things, the service networks that we'll talk about on Saturday. Viktor will also be talking about distributed database services and how they fit into both the Swarm network itself and the incentive layer. So that's the update from us. Our team is Viktor, Danny, Fabio, Zahoor, and Anton from the Foundation. And from JAAK we've had not just one but two Louises giving us a lot of time and code, so big thanks to them. And the Swarm team is still growing; we're in very active development and adding people. So there's even more coming. And I want to give a special advertisement for our breakout session on Saturday, the session on p2p technologies, blockchains in data and communication. There will be updates from the Swarm team; we'll talk about Swap, Swear and Swindle; we'll talk about distributed databases.
There'll be a PSS update, there'll be an interactive chat demo, and there'll be updates from other teams: Livepeer will be talking about streaming services over Swarm. So, a lot of good stuff. We hope to see you there. And with that, I'm going to end a little bit early. Thank you for your attention.