So, it's my pleasure to start off this team update. We're going to hear from everyone in the team, or at least a big part of the team, on everything that's been happening with Swarm. So, normally, if you've seen my talks before, this is the part where I tell you what Swarm is and how it works. It's all about decentralized storage, serverless content storage and distribution, and all the great stuff that Web3 needs, peer-to-peer communication, all that great stuff. But we don't have time for that today, so come find us, talk to us if you want to learn more about Swarm. Today, it's all about the updates, what's been happening. So one of the things that's changed last year, we've updated our homepage because it needed an update, and it constantly does. But anyway, we've changed the address. So if you don't remember anything else, just swarm.ethereum.org, as it says at the bottom, that will redirect you to the Swarm homepage. It'll redirect you to one hosted on Swarm, so most of the time, it actually works, hopefully. So you see up there, I want to point out, it says installation, downloads, and documentation. Those are the most important links, because we've got updated documentation. There's downloads, prebuilt binaries, I'll get to that in a moment. Anyway, what I should have said is that the Swarm homepage is at bzz://theswarm.eth, but for that, you will need a Swarm-compatible browser, and there are not that many around. But this is advertising for the future. So we have a new release process. Swarm now lives in the same GitHub repository as Geth. We used to live in a different fork, a different repository; now it's the same place as Geth. You get Geth code, you get Swarm code. That means whenever a Geth release is triggered, a Swarm release is triggered. There's no more separate releases, they all come at the same time. That makes it nice and easy. 
Whenever there is a release of Geth, there's a release of Swarm, and simultaneously all of the nodes on our cluster, the Swarm gateways cluster, are updated as well. So everything is always running the newest release version, the newest Geth release tag. And we are under heavy development, so we can and do introduce breaking changes quite regularly. So always make sure to update to the latest release. And there is actually right now a breaking change in the master branch, so with the next Geth release there will be a next Swarm release with breaking changes, so always stay up to date. Let me say that again, because it's important: always go for the latest release before you ask us any questions. Okay, so having said that, let's talk about how installation works, because that's also gotten a lot easier in the past few months. How to install Swarm? Well, there's always the option of installing from source code. I said it's the same repository as Geth, so that's no mystery anymore. It's the Go Ethereum repository. The second one is to download binaries. If you go to our page and click on downloads, there are compiled binaries for Windows, Mac, and Linux from both the release branch and the master branch, unstable development, leading edge. If you're on Ubuntu, it's even easier, because you can install from the Ethereum PPA, and then it'll automatically handle the updates for you. So you install the ethereum-swarm package from there once, and you will always automatically be on the latest release. And finally, we have Docker images also now available, so it makes it easy to deploy in a Dockerized environment. It's the ethdevops/swarm repository, so there will always be new binaries there: the latest tag will be the release version, and the edge tag is the bleeding edge. Okay, so now we're going to move on to team updates. We're going to hear from various sub-projects. 
We're going to start off with feeds, which is a nice way of doing dynamic, constantly changing data in Swarm with only a single on-chain transaction to the ENS. Then Daniel will talk about encryption and access control, how to keep data secure and make sure that only the right people see it. Anton is going to talk about observability, which is about how we can actually observe what's happening in the swarm and in our cluster with some really nice logging and tracing. Louis will talk about PSS, which is the communications protocol that's piggybacking on top of the Swarm network. Then Viktor will talk a little bit about light nodes, how we plan to introduce mobile phones into the Swarm network, and then what's on our roadmap, what's coming up next. So yeah, let's have Javier up on stage, please. All right, thank you very much for the chance to present. My name is Javier Peletier, I'm the CTO of Epic Labs. I've been contributing to Swarm since this May. And today I'm going to introduce you to one of the features that I worked on with the rest of the team, called Swarm Feeds. So what are Swarm feeds in a nutshell? As a summary, it's a way to update content in Swarm. It's a publisher-subscriber system, where you can post updates about a topic, and then also read others' updates about that topic. And you can also retrieve the older values of those posts. And you can do all of that without a single blockchain transaction. It's not working. So you can think about feeds as a key-value store, where each user can only write to their own key space. You cannot overwrite somebody else's value. And you can read your own values and everybody else's values, provided that you know their address and the topic they are posting at. And you can also retrieve all the older versions of the values that you post. And you can do all this without an Ethereum transaction. So this opens up the door to many dApps that do not have to do any transactions at all. 
So, applications that we can do. We can alter content in Swarm without ENS. You can use ENS as well, but you don't have to. You can enable dApps to persist content easily. So from the browser, your application could save some value to Swarm and retrieve it later. You could use it as a communication protocol among your applications. And also IoT devices could push information to Swarm just by holding a private key. And that's it, not even ether on that private key. So how do Swarm feeds work? To post to a Swarm feed, you need just two things. You need a private key, and you need a topic to post that information under. I didn't say that you need ether to post that update, just a private key that you can generate. And to read a Swarm feed, the only thing that you need is the Ethereum address of the user whose feed you would be reading, and the topic under which that user is posting information. And optionally, if you want, you can provide a timestamp, which allows you to retrieve data from the past; older values for that key can be retrieved as well. So just to put up an example, let's imagine that we have this set of topics on the left. For example, a topic could be avatar. Another topic could be local weather. Another topic could be website. And then we have publishers, or users, on top: user A, user B, and user C. So in this case, for example, user A has posted an update under the avatar key. And the update is her profile information, her profile picture. So if we use the query feed primitive in Swarm feeds, we can retrieve the value that she has posted, which is her picture. If the next day, for example, she updates her profile picture, the same primitive call, query feed, would return the new picture. But if we provide a timestamp in the past, we can retrieve the older picture as well. So this enables, again, any dApp to interact with Swarm in this way to save information and to publish information. 
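To make that key-value picture concrete, here is a toy in-memory model of the lookup behaviour just described. It is purely illustrative: names like FeedStore and query are invented for this sketch, and real Swarm feeds are signed, chunked updates addressed by user address, topic, and time, not a Python dict.

```python
import time

class FeedStore:
    """Toy model of Swarm feeds: each (user, topic) pair is an append-only
    series of timestamped updates, and only the owner writes to its keys."""

    def __init__(self):
        self._updates = {}  # (user, topic) -> list of (timestamp, payload)

    def update(self, user, topic, payload, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self._updates.setdefault((user, topic), []).append((ts, payload))

    def query(self, user, topic, at=None):
        """Return the latest payload at or before `at` (default: newest)."""
        versions = self._updates.get((user, topic), [])
        eligible = [v for v in versions if at is None or v[0] <= at]
        if not eligible:
            return None
        return max(eligible, key=lambda v: v[0])[1]

feeds = FeedStore()
feeds.update("0xUserA", "avatar", "picture-v1.png", timestamp=100)
feeds.update("0xUserA", "avatar", "picture-v2.png", timestamp=200)

print(feeds.query("0xUserA", "avatar"))          # newest -> picture-v2.png
print(feeds.query("0xUserA", "avatar", at=150))  # historical -> picture-v1.png
```

The same query primitive returns either the latest value or, given a timestamp, an older one, which is the avatar example from the talk.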
So how would this work if we want to use this to publish a website? It's not the only application, but imagine that we want to use it to update a website. As of today, when we want to publish a website with ENS, what we have to do is upload the content to Swarm and get a hash out of that. And then we do an on-chain transaction to publish that hash. And that way, everybody can reach the content by name. But then the problem is that when we want to update the site, we need to publish the new hash. And that means going again to ENS and updating the content hash, which is another transaction cost. So it means that every time we update, we have to pay a transaction cost. With Swarm feeds, this is simplified in such a way that you only have to do one ENS transaction, just to have that ENS record, mysite.eth for example, point to the feed instead of to the direct content. And that way, every time you want to update the site, the only thing you need to do is update the feed instead of updating the contract. So there you go, you have a way to update a site without having to pay ether to get it done. OK, so the key takeaway here is you can update content without doing transactions. And you can start doing that now, because this is already available in the Swarm cluster. So there is the Swarm guide that explains step by step how to start messing with it. So I recommend that you guys take a look at all the examples in there to get started step by step. OK, great. Yeah, so that was about how to get dynamic, changing content in Swarm. Next up, the next two sections are by Daniel. He will talk first about encryption, which is the low-level encryption of all the content in Swarm, and then about access control, which is making sure that only the right people read the right content. Hello, everyone. So I'm going to talk about how you store confidential information in Swarm and how you limit the users which can access it. 
So confidential information in Swarm is encrypted using counter mode encryption, whose basic security properties you can find on Wikipedia. However, we use our own version of it, which is a slight modification. So instead of the block cipher, which really is just a one-way function with a reverse gear, which CTR encryption doesn't use anyway, we use SHA-3 twice, which is the same hash function that is used throughout Ethereum. And we use it twice because with access to that little piece of data that is denoted by that pink arrow, what you can do is partially reveal plaintext inside an encrypted volume or an encrypted file, in such a way that a smart contract can actually verify it. This means that you can make various commitments on the blockchain, in smart contracts, regarding the plaintext of certain encrypted content. And if there's a dispute, you can reveal a piece of data that does not reveal your access keys or anything else, just a little part of your encrypted content, whatever it is. This is vulnerable to what cryptographers call existential forgery. But if there's any integrity protection in place, then this is not actually a security problem. So this is how we encrypt. In practice, for application developers, this means the following: compared to unencrypted Swarm, the references are no longer plain root hashes, but hashes of the ciphertext plus the decryption key. So they have grown from 32 to 64 bytes. But otherwise, nothing else has changed. The API is exactly the same, and so is the data model, meaning that if you have already written distributed applications for Swarm, or looked at the example ones that we have in our public repository, they will work without modification on the encrypted Swarm. So this is a native encryption API on top of Swarm, and every Swarm node supports it. And the only cryptographic assumption that we make is the security, the collision resistance, of the SHA-3 hash function. 
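As a rough illustration of this construction, the sketch below uses Python's hashlib.sha3_256 as a keystream generator in counter mode. It is a toy under stated assumptions, not Swarm's actual code (the real scheme applies the hash twice and works on Swarm's chunk format), but it shows the two properties mentioned: decryption is the same operation as encryption, and revealing one keystream segment reveals exactly one block of plaintext without leaking the key.

```python
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    # Keystream segment derived by hashing key || counter; a simplified
    # stand-in for the hash-based CTR construction described in the talk.
    return hashlib.sha3_256(key + counter.to_bytes(8, "big")).digest()

def ctr_crypt(key: bytes, data: bytes) -> bytes:
    # XOR each 32-byte block with its keystream segment; because XOR is
    # its own inverse, the same function both encrypts and decrypts.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = data[i:i + 32]
        ks = keystream_block(key, i // 32)
        out.extend(b ^ k for b, k in zip(block, ks))
    return bytes(out)

key = hashlib.sha3_256(b"decryption key").digest()
plaintext = b"block zero is 32 bytes exactly!!" + b"block one holds the secret part."
ciphertext = ctr_crypt(key, plaintext)

# Symmetric: applying the keystream again recovers the plaintext.
assert ctr_crypt(key, ciphertext) == plaintext

# Partial reveal: disclosing only keystream block 1 reveals block 1 of the
# plaintext, nothing else, and never the key itself.
ks1 = keystream_block(key, 1)
revealed = bytes(b ^ k for b, k in zip(ciphertext[32:64], ks1))
print(revealed)  # b'block one holds the secret part.'
```

A verifier given only ks1 and the ciphertext can check one disclosed block against a commitment, which is the smart-contract-friendly property Daniel describes.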
So we have not actually increased the attack surface beyond what Ethereum already assumes. And if SHA-3 is broken, then we will have much bigger problems. And finally, we have designed our encryption so that it's as smart-contract friendly as possible. So now I'm going to move on to access control in Swarm, which basically boils down to: how do you change the set of users that have access to certain content? It's important to emphasize that in a permissionless, trustless, and distributed system, you cannot really act as a gatekeeper, and you cannot really delegate nodes to act as gatekeepers. Instead, read access corresponds to the ability to decrypt, and write access corresponds to the ability to register. Registration can happen on Swarm feeds or directly in the blockchain, for example in an ENS contract. And that's about what you need to know about it. But read access, which is the ability to decrypt, is a bit more interesting, so I'm going to talk more about that. There are three strategies by which you can identify users authorized to access content. The simplest and most intuitive, perhaps, is a passphrase. So you can make content accessible to users that know a particular passphrase. In this case, both the publisher and the consumer need to know that passphrase, which of course has the disadvantage that if the same user needs to access content by two different publishers, then those two publishers can also access each other's content. So in order to mitigate that, we have a more sophisticated access control mechanism, which is public keys, public-private key pairs. And at this point, I would like to draw your attention, as developers, to the fact that these public keys are not Ethereum addresses. These are actually points on the elliptic curve, of which Ethereum addresses are the hashes. So as a publisher, you need those in order to identify your consumers. And as a consumer, you need to have the corresponding private key. 
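A minimal sketch of the passphrase strategy, assuming a scrypt-style key derivation (the actual KDF and parameters Swarm uses may differ): publisher and consumer derive the same decryption key from the shared passphrase, which is exactly why everyone who holds the passphrase holds equal power over the content.

```python
import hashlib

# Passphrase-based access: both publisher and consumer derive the same
# decryption key from a shared passphrase plus a salt stored with the
# content. Anyone holding the passphrase holds the key.
salt = b"published-with-the-content"

def access_key(passphrase: str) -> bytes:
    # scrypt is used here as an illustrative KDF choice, an assumption
    # of this sketch rather than Swarm's documented construction.
    return hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1)

publisher_key = access_key("correct horse battery staple")
consumer_key = access_key("correct horse battery staple")

# Shared secret means shared power: any party with the passphrase derives
# the identical key, which is the cross-publisher leakage problem that
# motivates the public-key mechanism.
assert publisher_key == consumer_key
```

The public-key variant avoids this by deriving a distinct shared secret per publisher-consumer pair, so knowing one pair's secret reveals nothing about another's.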
And the most sophisticated and most finely grained access control mechanism that we have implemented in Swarm is the Access Control Trie, or ACT for short. This is basically an efficient organization of access control lists. And in these lists, you can have users of both kinds, so both passphrases and public keys, in such a way that there's no information disclosed beyond an upper bound on the size of the ACL. So anybody who can access the Access Control Trie can get an upper bound on how many items are there. But beyond that, they have no clue how many users can actually access it and who those are, even if you are one of them. You can only check whether or not you're one of them, but you cannot peek into other people's permissions. Very importantly, since access means the ability to decrypt, you cannot really withdraw access. What happens if you exclude somebody from the access control list is that they're not going to be able to read future updates. They're still going to be able to read what they have read before, especially if they have cached all the keys that they have access to. But if the resource is updated, then they won't have access to the update anymore, only to the old versions. But it's also a very neat property that if you change the ACT, you don't need to re-encrypt the content, because we're using multi-stage encryption, and you basically only need to re-encrypt the reference that I talked about earlier, the 64-byte reference. Also, granting access, extending the Access Control Trie, is a logarithmic operation. So if you want to add another party, then the number of encryptions that you need to do is logarithmic in the number of grantees. Revoking is a little bit more expensive. It's linear in the number of grantees, precisely because you want to re-encrypt a new encrypted reference so that those whom you have excluded will no longer have access to the updates. 
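The ACT idea can be sketched as a lookup table of per-grantee encrypted references. This is a toy model with invented names and hash-based stand-ins for the real key derivation and trie structure, but it shows the properties just described: a grantee can recover the reference, an outsider learns nothing, and observers see only an upper bound on the number of grantees.

```python
import hashlib
import os

def h(*parts):
    return hashlib.sha3_256(b"".join(parts)).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

reference = h(b"encrypted swarm reference")  # stand-in for the 64-byte reference

# Build a toy ACT: one row per grantee, keyed and encrypted with a secret
# shared between the publisher and that grantee (via ECDH or a passphrase).
grantee_secrets = [h(b"alice"), h(b"bob"), h(b"carol")]
act = {}
for s in grantee_secrets:
    act[h(s, b"lookup")] = xor(reference, h(s, b"enc"))
# Pad with random rows so only an upper bound on the grantee count leaks.
while len(act) < 4:
    act[os.urandom(32)] = os.urandom(32)

def try_access(act, secret):
    # A grantee derives its own lookup key; other rows are indistinguishable
    # from random, so nobody can peek into other people's permissions.
    row = act.get(h(secret, b"lookup"))
    return xor(row, h(secret, b"enc")) if row else None

assert try_access(act, h(b"bob")) == reference   # grantee recovers the reference
assert try_access(act, h(b"mallory")) is None    # outsider learns nothing
print(len(act))  # observers see only the padded size: 4
```

Revocation in this model means publishing a new table under a fresh reference for the remaining grantees, which is why it is linear in their number, while granting only touches a new row.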
And finally, I would like to draw your attention to the fact that the granularity of this is very similar to that of a UNIX file system, so you can have per-directory permissions, per-file permissions, and so on. So you don't need to have an entire Swarm volume under the same access control. Before I finish, I would like to give you one warning. And this is the last thing I'm going to say, and hopefully this last one you remember: if you're dealing with access-controlled content, please, please don't use public gateways. Run your own Swarm node, because if you're using a public gateway, that gateway will have access to the same access-controlled content, and you might not want that. So thank you so much. All right. So next up is Anton about observability. Hello, everyone. So I'm going to talk about observability. In the last few months, we've implemented a lot of observability tools in Swarm and also instrumented the code so that you can use those tools. What is observability? Observability is basically answering the question: what is my Swarm node doing at the moment? And if you are an operator of, let's say, a larger deployment like us, for the Swarm gateways: what is my full deployment doing? So basically an aggregate view of whether your network is healthy and basically how available it is. So observability in Swarm, what do we mean by that? There are three pillars to it: logging, metrics, and distributed tracing. We do aggregate logging on all our Swarm nodes so that we can basically trace requests from one to another. We do metrics, aggregation, and statistics. And we also do distributed tracing, so cross-node propagation of traces. Logging and metrics, even though they're very useful, are not very interesting. They've been in the go-ethereum code base for a while now. What we introduced is the OpenTracing framework, and we instrumented the whole Swarm code with it; it's basically a set of vendor-neutral APIs for instrumentation. 
So what do we actually measure with those tools? As you can imagine, the infrastructure metrics: CPU, memory, disk utilization. So in case we have a bug in Swarm, we can detect memory leaks, or CPU starvation if we have, let's say, goroutine leaks. And also very useful for us are the application metrics. So basically, we track how many errors happen, how many warnings, and other types of counters. When I say errors, I mean things that developers must look at. So let's say that we have an elaborate unit test and integration test suite that catches a lot of things, but errors are what might happen, let's say, in production on our public test net. And that helps us debug problems for which the test suite is not enough. Other types of application metrics: number of peers per node, so basically the different devp2p nodes that your node is connected to, different protocol messages between the peers, and pretty much anything that your node is running and that you want to gain visibility into. So I'm going to give you examples of what those tools look like. So that's one of them. We're using the open-source Jaeger. It's a tracer that hooks up with the OpenTracing framework. And here you can basically see a request that propagates from one Swarm node to another, which helps you get an understanding of the underlying protocol. I have to mention that we do this only for debug purposes. Obviously, we don't run this in production, and you shouldn't really run it in production, but it helps for development purposes. And if you're building on top of Swarm, these tools are available for you to use, to basically gain visibility. The other one is aggregate logging. So as I said, we have request identifiers, so correlation IDs, and we can trace requests from one internal API to another and also track them across different nodes. All those tools are available within the repo. There is a Swarm readme. 
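The correlation-ID idea behind that aggregate logging is simple enough to sketch: mint an ID at the entry point and attach it to every log line as the request moves between nodes, so lines from different nodes can be stitched together afterwards. This is a generic illustration with invented names, not Swarm's actual logging code.

```python
import logging
import uuid

logging.basicConfig(format="%(message)s", level=logging.INFO)
log = logging.getLogger("swarm-demo")

def handle_on_node(node_name, request_id, payload):
    # Every log line carries the same correlation ID, so an aggregator
    # can group lines from different nodes into one request trace.
    log.info("node=%s request=%s msg=%s", node_name, request_id, payload)

def retrieve(chunk):
    # The ID is minted once at the entry node and forwarded with the request.
    request_id = uuid.uuid4().hex[:8]
    handle_on_node("node-a", request_id, f"retrieve {chunk}")
    handle_on_node("node-b", request_id, f"forwarding {chunk}")
    handle_on_node("node-c", request_id, f"serving {chunk}")
    return request_id

rid = retrieve("0xchunk")
```

Filtering the aggregated logs by that one ID reconstructs the request's path, which is the cross-node tracing Anton describes; OpenTracing spans add timing and parent-child structure on top of the same propagation idea.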
So if you're building tools on top of Swarm, these will be highly useful for you to answer the question of what your Swarm is doing, and basically gain visibility over it. So next, over. Yeah, thanks. That's actually, it sounds just like looking at logs, but that's actually really cool stuff. You see an action triggered in one node, onto the next, onto the next, onto the next. You can track the action throughout the entire network. If you missed that, that's what this is doing. So it's a really cool view of what's going on. Talking of moving things from node to node, our next speaker is Louis, about PSS. Hello, everyone. How does this work? The green button to go forward and the red to go backward. This is pretty clear. Hello, I'm Louis Holbrook. I'm responsible, or maybe a responsible party, for the implementation of PSS, and PSS is, as Aaron said earlier, a messaging platform that piggybacks, does it work, oh, that green button, I see, that piggybacks on the Swarm routing mechanism to send ephemeral data across the network, inter-node messaging. And what does that mean? Well, it means that we increase the efficiency of delivering messages, and it therefore comes at the expense of secrecy. So this answers a question that a lot of people ask generally: why do we need this PSS thing when we have Whisper? Isn't Whisper the messaging platform of Ethereum? Well, that's your answer: Whisper is primarily focused on the properties of privacy and darkness, while PSS gives you the possibility to actually route efficiently over the network. And since we all know that there is no such thing as a ninja mailman, you kind of have to choose between one or the other. All nodes in Swarm take part in this routing, and it's also enabled by default. So you can't deactivate PSS in Swarm as such. So what are the general features of PSS? 
Well, first of all, this is not really a feature, but I already did a couple of talks on PSS, at DevCon last year, and that's why I won't go into technical details about how it works. But at that time, it was in a very obscure branch in development; since then it's actually been merged into the main code base. So that means that when you download a Swarm binary, it actually has PSS in there. It also means that it passed some tests, which is good news, right? So what does it provide? Custom luminosity. So I said that PSS gives you the possibility of efficient routing. What happened now? Yeah, it's like, well, anyway. So it gives you the possibility to define exactly which address in the network you want to send the message to. In this case, the message will reach it in a maximum of logarithmic hops across the address space. But it also gives you the possibility of partially specifying an address, or not specifying an address at all. Now, not specifying an address at all would give the same consequences as Whisper: it would be spread all over the network, and whoever can decrypt the message, for example the intended recipient, gets it. Still no slide. I don't know what's going on. And, oh, I see. So I have to remember what the slides are. That's pretty cool. And partial addressing could be, for example, you send a message to Prague, or you send a message to the convention center in Prague, or you send the message to Louis at the convention center in Prague. And I'll use Louis because I'm selfish, that's my name, and I know there happens to be another Louis here at the conference. Or you could say Louis with the fried-egg tie at the convention center in Prague, which would pretty much narrow it down to me, and that would be the full address. Oh, they're back. And, let's see, we can go back. Right. PSS has built-in encryption by default. So it means that it handles generation of keys, and all the encryption happens behind the API. 
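The addressing spectrum described above can be sketched with plain prefix matching over overlay addresses. This is a toy model (real Swarm addresses are 256-bit and routing is Kademlia-style forwarding, not a list scan), but it shows how a longer prefix narrows delivery from everyone down to one node.

```python
# Toy model of PSS partial addressing: a message carries only a prefix of a
# destination overlay address. A full address singles out one node; a shorter
# prefix reaches a whole neighbourhood; an empty prefix floods every node,
# which is the Whisper-like, maximally dark end of the spectrum.
nodes = ["0000", "0001", "0110", "0111", "1010", "1101"]

def recipients(address_prefix):
    # Every node whose overlay address starts with the given prefix is a
    # potential recipient of the message.
    return [n for n in nodes if n.startswith(address_prefix)]

print(recipients("0111"))  # full address: exactly one node
print(recipients("01"))    # partial address: the 01... neighbourhood
print(recipients(""))      # no address: every node in the network
```

This mirrors the Prague example: "Prague" is a short prefix, "Louis with the fried-egg tie at the convention center in Prague" is the full address.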
There are pluggable code handlers, which means that every message has a certain topic, and as a recipient, you can register code to be executed when you get a message for that topic. And it can be any number of handlers, so it's extendable and very flexible. There is a handshake module, which enables you to do handshakes and exchange session keys behind the scenes. It happens automatically. The keys are valid for a certain number of messages, and there is also a buffer of keys, so that keys are refreshed before you run out; that way, it's less likely you get stuck. Protocols, yes. So PSS has a framework for inter-node communication. It also so happens that it's designed in such a way that if you have an existing devp2p protocol that runs between two directly connected peers, directly in the sense of TCP, you can port it to PSS with minimal code. So by newest features, I mean, all of this stuff that I said now, I kind of said last year, but again, the difference is it's in the code base now, right? It's in the main binary. This is new since last time, but also part of what's merged: deduplication. When we last heard about this, you had the risk of getting the same message twice in PSS. This is now much less likely. When messages are exchanged on the PSS network, they have an expiry, and the nodes also have a certain period of time during which they don't allow an identical message to pass, or an identical message to come in. So that makes it, not a 100% guarantee, but much less likely that you have to handle the deduplication of messages yourself. Raw messages, which means you actually have the possibility of handling the encryption outside of PSS yourself, or just sending plain text messages if you will, if secrecy is not important. 
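The deduplication behaviour just described can be modelled as a digest cache with a time window: a message identical to one seen within the window is dropped, and the entry expires afterwards. Again a toy sketch with invented names, not the actual PSS implementation.

```python
import hashlib
import time

class DedupCache:
    """Toy model of PSS message deduplication: remember message digests for
    a time window and drop identical messages seen within that window."""

    def __init__(self, window=1.0):
        self.window = window
        self.seen = {}  # digest -> first-seen time

    def accept(self, message: bytes, now=None) -> bool:
        now = time.time() if now is None else now
        digest = hashlib.sha3_256(message).digest()
        # Forget entries older than the window (the message expiry).
        self.seen = {d: t for d, t in self.seen.items() if now - t < self.window}
        if digest in self.seen:
            return False  # duplicate inside the window: drop it
        self.seen[digest] = now
        return True

cache = DedupCache(window=1.0)
print(cache.accept(b"hello", now=0.0))  # True: first sighting
print(cache.accept(b"hello", now=0.5))  # False: duplicate inside the window
print(cache.accept(b"hello", now=2.0))  # True: the entry has expired
```

As in PSS, this is probabilistic relief rather than a guarantee: once the window passes, an identical message can come through again, so applications needing strict once-only delivery still deduplicate themselves.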
And last of all, a notifications package. It's basically a protocol which makes it really easy to subscribe to and publish notifications. It provides a kind of channel which your code can pump stuff into, and then a very simple protocol where nodes send a subscribe message to the node that's publishing, and then automatically get notifications via PSS. For example, in combination with Swarm feeds, as we heard from Javier, this can be a really powerful feature. Yeah, pull as well as push. Now there is an API, of course. Most applications in Swarm are available through IPC, WebSockets, HTTP, and CLI. PSS, unfortunately, is only IPC and WebSockets, because we need subscriptions to get incoming messages. There are APIs, of course, all documented. Unfortunately, though, it's kind of low level. So interacting with it looks kind of like this, at least with what we provide. Fortunately, there is a community out there also collaborating on Swarm development, among them Mainframe, who have created a JavaScript library called Erebos, which, as you can see, makes it a little more human-friendly to interact with PSS as such. So for the documentation: the Swarm guide, of course; Erebos also has its own documentation page; and at the bottom is my repository, where there are some tutorials, or code examples for tutorials, for PSS. I would also like to mention that Viktor is having a talk later today on the higher-level vision of PSS and how it's going to interact with feeds, as I mentioned, and also access control, as Daniel was talking about. I can't remember exactly where or when it is, but if you follow this link, I mean, everybody takes pictures of the screen, right? When these things come up, then you can probably find it. And I think that's it. So yeah, the talk is at the Fluence Meetup tonight at six o'clock. There's going to be four of us talking, from Livepeer, NuCypher, Fluence, and Swarm. 
And I'm going to talk about my vision of how to basically reformulate communication protocols, yes, you heard it, all of them, in terms of the tools that are available for Web3. So basically, to have communication that's decentralized, incentivized, and secured. Thanks to our friends from Web3 for the motto. Now let's talk about light nodes. So we realized early on that we need to distinguish between two types of nodes: ones that are constantly online and therefore reliable for storing chunks, and nodes that have high churn, basically, where you just open your laptop and close it, or have your mobile device. And in general, we call this distinction light node as opposed to full node, or maybe you could call it light mode of operation. It's basically about mobile support, to simplify things. So it's concerned with not only the high churn, but also the difference in the resources of the environment, the resources that are available to Swarm. So low bandwidth usually means probably low memory as well, and short on disk space. And there are various issues that we came across when trying to support mobiles. Some of them are kind of boring platform-specific issues; I'm not going to talk about the permission issues. So in general, what are the chances for a resource-restricted client to connect to Swarm? Well, there are potential gateways, gateways that might be accessible through others, like, for example, the one that the Swarm team, or the Ethereum Foundation, is running on the Swarm cluster, where we run a public gateway. This is not really intended to be long-term; it's kind of just for easy access now during development. But obviously, when incentivization comes on the main net, it's going to cost money. So we might only do that for marketing purposes, short term. But as you heard as well, remote gateways are not appropriate for hosting, or for retrieving encrypted content. 
There are also private gateway solutions, so you run your own remote node; Status, for example, is supporting that. And what I'm going to talk about here is just light mode of operation, which is how we natively support this type of operation where you have low-resource environments. So basically, we distinguish these two node types. And in the zero phase, what we've already accomplished, more or less definitely, is that light nodes are treated differently: they are not saved in the address book of your peers; they are not dialed, so not actively looked for; they are just accepted as incoming connections. Since they don't store chunks, they also don't take part in the syncing process, so they're not taking part in the transport of chunks to the local neighborhood where they belong in the distributed storage of Swarm. They're not serving retrieve requests, since they cannot store. And also, in order to respect the redundancy properties of Swarm, they are not counted in the local neighborhood. So once we have that, we're going to be a lot more tolerant towards high churn, especially if we set light node as the default mode of operation, so that people who run permanent nodes explicitly have to set them to be full nodes. This will contribute to the resilience of our network. Now, let me talk about the roadmap. So what's coming up next? We started rewriting SWAP, the Swarm accounting protocol, which is basically the peer-to-peer accounting that serves as bandwidth incentivization, and serves to regulate and basically optimize bandwidth resource allocation. We ported the old code and actually rewrote quite a bit of it and simplified it. So that's what's going to come up in a few releases' time. We are planning on introducing spam protection, so basically protection against people flooding the Swarm network with chunks, by making sure that people have to attach a proof of burn, some sort of proof of work, to the chunks. Also, the first phase of erasure coding is going to be implemented very soon. There are two layers of erasure coding: one would take care of catastrophic loss, and this first phase basically gives an upper bound on the retrieval latency in the network. So that's on the immediate roadmap. Also on the immediate roadmap, we're moving to a new cluster setting, so we're going to be able to spawn really big-scale network tests in a real setting. Anton and Raphael are working on a Kubernetes cluster that's going to be able to start spot instances, and therefore our network testing framework is going to be able to test really complicated emergent network scenarios. OK, so what else is on the roadmap? Well, as usual, we're still very heavily researching how exactly we do storage insurance. We now have a two-layered system that we're planning: the Swear and Swindle framework, which is a generic framework for basically our take on state channel networks, and also the contract support for service level agreements that are challengeable on the blockchain. So it's kind of our take on state channels combined with the blockchain-as-a-judge paradigm. The Swear and Swindle contract suite has been developed and tested by Raphael from RIAT. So it's on our roadmap to finish this part and introduce the option to have service networks like this on Swarm. There's heavy research on databases on Swarm. And we realize that there are a lot of usability issues in general with Swarm, so we're going to put a lot more effort into developing and supporting dApp tooling: basically, bindings in various languages, and most prominently, providing better JavaScript support together with some of our friends and allies in the ecosystem. So that's about it for the update. Thank you. All right. Yeah. Thank you, Viktor. 
So I said that's about it, but it would be amiss at this point if we didn't mention that there's more to Swarm than just the Swarm team. The community has grown this year. We've gotten a lot of contributions from third parties: the always awesome Mainframe; Status has been helping us with the mobile devices; we've got Wolk on databases; Datafund is awesome, they make orange sweatshirts and other orange gear for us, it's wonderful, guys. So thank you, everyone on this list, and everyone else who's joining the Swarm community. This is the team at the Orange Summit last year. We have one Swarm Orange Summit every year, and we'll hopefully have one next year, so you're welcome to attend. Keep a lookout for that. Here's our contact info. We have already mentioned swarm.ethereum.org. We've started our own Twitter and Reddit, called ethswarm. I promise there's not a lot in there; this is a low-traffic environment. But that's where you get updates: we'll make release announcements, and when we have a talk somewhere, or when our Orange Summit is coming up, that's where you look. That's our Gitter, if you want to write to us directly. Otherwise, come find someone in orange at the conference and come talk to us. Thank you very much.