My name is Eric Tang, and I'm here with the Livepeer project. First, I want to thank Victor and the organizers for inviting us here to tell you a little bit about video streaming. The past four days have been really awesome for me, being among my peers, all trying to make the decentralized web happen. It's the best thing ever. Today I'd like to share why decentralization is so important for live video streaming, and also a roadmap for how we can work together to make this whole system work, because live streaming is a combination of many different things: Livepeer is one part of it, Swarm is another, among other systems.

Before I dive in, I'd like to give a live demo. I know this is against the rules of presentations in general, but I'll do it anyway. What I'm doing here is live streaming video from my phone to the test network we set up about a month ago, and consuming that video from my gateway. It's not very clear, but you can see this is live video streaming working in a decentralized peer-to-peer network. So, thank you. I just broke my presentation, but that's okay.

With peer-to-peer decentralized video streaming, there's a bunch of use cases we can enable that weren't possible before. I think uncensorable journalism is super important, as we're seeing in the world right now: whenever there is political unrest or a war breaks out, the live video stream is the first thing that gets cut off. A decentralized video streaming solution can be very, very important in solving this problem. Another thing I'm personally very excited about is a pay-as-you-go education or expert network. This could be a tutoring service, telemedicine, or telepsychiatry, where the service providers finally have the tools they need to connect and transact directly with the people who need the service, and get off the centralized networks that today charge a 30 to 80% transaction fee. Another really interesting use case is an auto-scaling social video service. Live video streaming is notorious for the peaks and valleys in the resources needed to provide the service: a popular stream comes up and requires a lot of bandwidth and computation to serve, then it goes away and you don't need that capacity anymore. The underlying economics of the blockchain can create a network whose capacity and throughput change automatically and dynamically, solving this problem and saving a lot of cost. And finally, a decentralized live video streaming network enables developers to build completely decentralized applications with a video component, which we haven't been able to do before. I'm really looking forward to seeing all the interesting use cases people will think of on top of a decentralized solution for video streaming.

Before we get into how decentralized video streaming works, let's take a quick look at how video streaming works on the Internet today. A broadcaster, which could be my phone, a webcam, or a news reporter's high-quality camera, sends an RTMP stream into a cloud-hosted media server. This media server does a few important things. One is that it stores the video for future playback.
Another is that it can optionally interact with a DRM system to encrypt the video and preserve its privacy. And another really important thing is that the media server transcodes the video into many different bitrates and formats, so the video can be delivered through the CDN to whatever end device you're watching on: a mobile phone, a tablet, a computer, a high-definition TV, or even IoT devices, any device that can connect to the Internet.

This workflow is called adaptive bitrate streaming, and it's basically what makes the video streaming experience work on the Internet. What it really means is that the player picks the right version of the video to play based on its own network conditions (there's a small code sketch of this decision below). In this example, we start off on a cell connection at around 200 kbps, for which the best version is a 240p video; as we switch to a 4G connection, the player switches to a 360p video with no interruption in the playback experience. This is crucial for streaming on the Internet, because download speeds vary over time and unforeseen things happen.

Here are all the bitrates and formats we have to worry about when streaming video, and there are quite a few of them. Every stream that goes out onto the Internet has to be transcoded into all these formats in order to serve all the devices out there. On top of the bitrates, we also have to think about video codecs. The most popular codec today is H.264, but there's a newer one called HEVC. We won't go over the details of the difference, but the high-level difference is that HEVC packs a much crisper picture into the same amount of data: what we see here (on my left, your right) is an HEVC video served in the same number of bits as the H.264 version, and it looks much better. It just happens that H.264 and HEVC are proprietary codecs, which means that when you use them, you have to pay a licensing fee to the patent holders. As a counterpart, there is an open-source codec called VP9, whose development companies like Google are funding; the next generation of that codec is called AV1.

All of this complexity makes video transcoding and delivery very complex and very costly. To put some numbers on that: a traditional SaaS transcoding service typically costs about $3 per stream per hour, and if you want to build your own stack instead, you'll have to license expensive proprietary technology, because there is no good open-source media server out there. On the delivery side, it costs about $0.12 per gigabyte on a regular CDN. That might not seem like a lot, but an average Twitch user consumes about 6.5 gigabytes a month (the average YouTube user a little less, but not by much), which comes out to a little under $1 per user per month: 6.5 GB × $0.12/GB ≈ $0.78. If you have millions of users, you can potentially be paying millions of dollars per month just to the CDNs to relay your video. So when we think about using decentralized solutions to solve these problems, what we really want, on top of all the nice things decentralization provides like censorship resistance, is a solution that is cheaper and better than the centralized service.
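Here is the promised minimal sketch of that adaptive-bitrate decision, written in Go. The rendition ladder is a hypothetical example of mine, not an actual preset list from the talk:

```go
package main

import "fmt"

// Rendition is one transcoded version of a stream. The names and
// bitrates below are illustrative, not a real encoder preset list.
type Rendition struct {
	Name string
	Kbps int // video bitrate in kilobits per second
}

var ladder = []Rendition{
	{"240p", 250},
	{"360p", 600},
	{"480p", 1000},
	{"720p", 2500},
	{"1080p", 4500},
}

// pickRendition returns the highest-bitrate rendition that fits the
// player's measured download speed, falling back to the lowest rung.
// This mirrors the adaptive-bitrate decision described above.
func pickRendition(measuredKbps int) Rendition {
	best := ladder[0]
	for _, r := range ladder {
		if r.Kbps <= measuredKbps {
			best = r
		}
	}
	return best
}

func main() {
	fmt.Println(pickRendition(200))  // slow cell connection -> 240p
	fmt.Println(pickRendition(3000)) // 4G connection -> a higher rung
}
```

A real player re-measures its throughput continuously and switches renditions at segment boundaries, which is what makes the switch invisible to the viewer.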
This is pretty special, because in the blockchain world today, services tend to be expensive; what we're trying to do is make the service cheaper and better. Concretely, we want to turn the picture I showed earlier into this one: we want to teach Web3 how to do the video transcoding, the encryption, the storage, and the video delivery, so that when DApp developers are creating DApps, they don't have to worry about any of that. All of it is hidden away, and you only have to worry about getting the incoming video into your DApp, and that's it.

This sounds like a pretty daunting task because there are a lot of moving parts, but luckily many of these components are in use today. For the storage layer, we have projects like Swarm, IPFS, and Storj, all doing decentralized storage for slightly different use cases. For video delivery, we have projects like BlockCDN and Filecoin, and within the Swarm project we have the "watch" spec, which addresses video delivery specifically. For content protection and privacy, decentralized key management systems are coming out. And on the application layer, we're starting to see a lot of interesting app tokens and DApps that address the problem of incentivizing content creators and connecting them directly to their viewers: projects like Props from YouNow, the Stream token project, and Paratii.

This is all great, but creating protocols is hard, because you have to build your software and then design the decentralized protocol that lets it work in a decentralized way. On top of that, a protocol that scales well is even harder, and a protocol that scales well and gets cheaper as it scales is harder still. So today I want to use Livepeer and video transcoding as an example of some of the lessons we've learned over the past year and the principles behind them.

At a high level, I think the most powerful thing about the blockchain is that we can create completely new economics and realign incentives, and this is very apparent for video transcoding in the Livepeer network. In the traditional service economy, a broadcaster sends a video in and has to pay the cost of the service plus the margin the platform charges. But in the decentralized world, the protocol itself can create incentives by releasing its crypto token on a predictable schedule to the nodes providing services to the network, so when a broadcaster sends the same video into the decentralized network, it pays the cost of the service minus the incentive the blockchain is already providing to the service providers. This might be subtle, but it's a pretty important difference, because it kicks off a virtuous cycle: cheaper broadcasting drives more demand onto the network, which over time increases the token value; the increased token value, as we've already seen in the Bitcoin and Ethereum mining worlds, brings in competition among service providers, which creates better hardware, better software, and even cheaper bandwidth; and all of that increased capacity and capability of the network goes back into making broadcasting even cheaper.
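To make that pricing difference concrete, here is the identity in symbols of my own choosing (illustrative, not from the talk):

```latex
% c: underlying cost of providing the service
% m: platform margin
% s: per-unit subsidy from scheduled token inflation
p_{\text{centralized}} = c + m
\qquad
p_{\text{decentralized}} = c - s
```

The decentralized price can sit below the raw cost of service because the inflation schedule pays the service providers independently of any individual broadcaster.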
That is how you kick off the flywheel that makes the service cheaper and cheaper and the network scale more and more. Now, this all sounds good in theory, but in practice we have to make sure the protocol incentivizes reliable service and is secure, from both a cryptographic and a crypto-economic standpoint.

In Livepeer's case, what we do for reliability is a delegated proof-of-stake protocol. When transcoders become active and join the network, they advertise their rates, their stats, and their fee shares, and token holders use that information to choose which transcoder to delegate toward. In the delegation process, the token holders are also protecting themselves against the predictable inflation happening in the protocol. Every round, the top N transcoders become active, and they get work in proportion to the stake delegated to them. This is important because, number one, it creates an open market with open competition and downward price pressure on the transcoding service; and number two, it puts stake behind the transcoders, so we can create accountability and economic disincentives for transcoders who try to game the system.

Let me walk through how the Livepeer protocol does this. When a broadcaster wants to broadcast a video, it creates a job on chain with the smart contract, and the smart contract uses the job's pricing information to find a transcoder willing to do the work. When that happens, the broadcaster starts sending video to the transcoder, signing every video segment so the transcoder can verify exactly who the broadcaster is. For every video segment it processes, the transcoder creates a transcode claim from the transcoded result hash and the broadcaster's signature. It keeps these claims around, and when the job is finished it computes a Merkle root over all the claims and writes that root back on chain. At that point, the Livepeer smart contract reveals a challenge segment, for which the transcoder has to provide the Merkle proof. This is important because the transcoder does not know which segment will be challenged until the stream is over, so it is forced to do the work for every single segment and can't cheat.

But this is not enough: it only ensures the transcoder did work for every segment, not that the work was done correctly. To check that, we use an off-chain computation oracle like Truebit or Oraclize. The transcoder writes the broadcaster's data for the challenge segment onto Swarm or IPFS, the computation oracle uses that data to perform the actual transcoding, and after it does the computation, it writes the result hash back on chain. Now the protocol has both the Merkle proof from the transcoder and the result hash from the computation oracle; it compares the two and makes sure they agree. If they differ, the transcoder has done something wrong and will be slashed. This is how we're able to create a secure, decentralized transcoding market.
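As a rough illustration of the claim-and-challenge mechanics just described, here is a hedged sketch in Go. It assumes SHA-256 and a two-field claim; the real protocol uses Ethereum-style hashing, signed messages, and different field layouts:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Claim records one transcoded segment: a hash of the broadcaster's
// signed input and a hash of the transcoded output. (Simplified.)
type Claim struct {
	SegmentHash [32]byte
	ResultHash  [32]byte
}

// leaf hashes a claim into a Merkle leaf.
func (c Claim) leaf() [32]byte {
	return sha256.Sum256(append(c.SegmentHash[:], c.ResultHash[:]...))
}

// merkleRoot folds a level of hashes pairwise until one root remains;
// an odd node at the end of a level is carried up unchanged.
func merkleRoot(level [][32]byte) [32]byte {
	for len(level) > 1 {
		var next [][32]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 == len(level) {
				next = append(next, level[i])
				continue
			}
			pair := append(level[i][:], level[i+1][:]...)
			next = append(next, sha256.Sum256(pair))
		}
		level = next
	}
	return level[0]
}

func main() {
	// One claim per segment, kept off-chain while the job runs.
	var leaves [][32]byte
	for i := 0; i < 4; i++ {
		c := Claim{
			SegmentHash: sha256.Sum256([]byte(fmt.Sprintf("segment-%d", i))),
			ResultHash:  sha256.Sum256([]byte(fmt.Sprintf("result-%d", i))),
		}
		leaves = append(leaves, c.leaf())
	}
	// Only this root goes on chain before the challenge is revealed.
	fmt.Printf("on-chain Merkle root: %x\n", merkleRoot(leaves))
}
```

Because the root is committed before the challenge index is known, producing a valid Merkle proof for an arbitrary challenged segment requires having actually processed every segment.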
So now I'm going to demo how that works. What I'm doing is connecting to the testnet and starting to stream from my camera into it; the testnet goes through the transcoder election process and starts transcoding the video, and in a little bit we'll see the transcoded results. As I said, this is really important because we need different versions of the same video: the big version might be for my laptop, the really small version for my phone, and the middle version maybe for a tablet.

Now, transcoding is only one piece of the whole live streaming workflow; another really important piece is delivering the actual content. To do that, we created a prototype based on the Swarm node and showed it off at the Swarm Summit earlier this year. First, we extended the bzz protocol in Swarm so that Swarm nodes can relay video to each other. We also created a stream DB so that each stream can be searched for and found in the peer-to-peer network. And on top of that, we created a streamer interface so we could embed the Livepeer media server into the Swarm node, making each Swarm node a media server too, able to ingest incoming video and serve outgoing video. Since then, we've been working on scalability solutions, to make sure that not only can we relay video around, but that when thousands of people are watching the same video, it is delivered reliably and they all have a really good viewing experience.

So let's look at why video delivery is hard in a decentralized world. What we have here is a very naive way to relay a video: a broadcaster sends video into the network, maybe to a few nodes or just one, and that node relays the video downstream to the nodes that want to watch the stream. This is already much better than the centralized solution, because the broadcaster no longer has to provide the bandwidth for everyone who wants to watch; it only relays to a few nodes, and those nodes spread the bandwidth consumption around. But it has a weakness: slow upstream bandwidth gives the downstream viewers a really bad experience, and this is undesirable. What we really want is a highly connected graph where every node can stream little bits and pieces from every other node, so that instead of one video being relayed down a tree, the video is swarmed around this highly connected graph.

This is why we've been working on a protocol called PPSPP, which stands for Peer-to-Peer Streaming Peer Protocol. I didn't name it; it's an IETF RFC spec that has been in the works for a few years, and it has some really great properties. Number one, it creates an overlay network on top of the base peer-to-peer network for each specific video, so only the peers who care about that video join its swarm. It uses very small data chunks, with a recommended chunk size of 1,024 bytes, which means we can break video segments down into very small pieces, relay them around, and have much more flexibility. It uses a single Merkle root to represent the entire stream, so that as the video is relayed around, the data can be validated and keeps its integrity. You can pack multiple messages into one packet. And you can hold connections to many peers and download the stream from many of them at the same time.
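As a small sketch of that chunking property in Go, assuming a fixed 1,024-byte chunk size and plain sequential indices (real PPSPP uses bin numbering and checks each chunk against the stream's Merkle root before relaying it):

```go
package main

import "fmt"

const chunkSize = 1024 // PPSPP's recommended chunk size in bytes

// splitSegment breaks one encoded video segment into fixed-size chunks
// so they can be requested from, and relayed by, many peers at once.
func splitSegment(segment []byte) [][]byte {
	var chunks [][]byte
	for off := 0; off < len(segment); off += chunkSize {
		end := off + chunkSize
		if end > len(segment) {
			end = len(segment)
		}
		chunks = append(chunks, segment[off:end])
	}
	return chunks
}

func main() {
	segment := make([]byte, 5000) // stand-in for ~5 KB of encoded video
	chunks := splitSegment(segment)
	fmt.Printf("%d chunks, last one %d bytes\n",
		len(chunks), len(chunks[len(chunks)-1]))
}
```

Small chunks are what let a viewer pull different pieces of the same segment from different peers simultaneously instead of waiting on a single upstream node.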
Let me walk through a really simple version of how this works. Node A wants to join the swarm and start viewing the stream, so it asks either a centralized tracker or a DHT for the swarm's connection information. Once it has that, it sends handshakes to the nodes it wants to connect to. A node can send back a handshake plus a HAVE message, telling node A which video chunks it actually holds, or it can send back a CHOKE message, telling A "I don't have the bandwidth" or "I don't want to serve this video." Now A knows which chunks to request from whom, so it makes the requests and gets the video back with an integrity check; that integrity check is just a Merkle proof, since we already have the Merkle root that represents the entire stream. A can optionally send an ACK back, and when a new node joins the swarm and talks to A, A can start relaying the chunks it just received to those new nodes. And when a choking node wants to start relaying again, maybe because it got some more bandwidth, it simply starts sending HAVE messages again along with an UNCHOKE message, so that A can resume requesting data. (There's a small code sketch of these messages at the end of this section.)

That was the generic video streaming workflow; to make it live, we need a few modifications. One is that the broadcaster pushes HAVE packets into the swarm, so that as new chunks become available, the broadcaster tells the swarm it has them and the swarm figures out how to relay them around. Also, during the handshake, you can establish a discard window, so you don't have to keep the whole video around for the duration of the live stream. And instead of one fixed Merkle root, we have a live injector, which just means a transient Merkle root: to ensure the video we relay around still has integrity, we use the munro hash, which is a fancy word for a Merkle root over the new chunks rather than the entire video.

That's basically how the video relay protocol gets around the constraints of a tree structure. But what we really want is to incentivize the video delivery. The Swarm team published a paper called Swap, Swear and Swindle a few years ago and recently generalized it into a good framework around this. It's very much an open research area, and we're excited to continue doing work in it.
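Going back to the walkthrough above, here is the promised sketch of the message flow in Go. The message names follow the PPSPP vocabulary, but the structures and the join logic are simplified assumptions of mine:

```go
package main

import "fmt"

// MsgType enumerates the PPSPP-style messages from the walkthrough.
type MsgType int

const (
	Handshake MsgType = iota
	Have    // "I hold this chunk"
	Request // "send me this chunk"
	Data    // chunk payload plus its Merkle (or munro) proof
	Ack     // "received and verified"
	Choke   // "stop requesting from me for now"
	Unchoke // "you may request again"
)

var msgNames = [...]string{
	"HANDSHAKE", "HAVE", "REQUEST", "DATA", "ACK", "CHOKE", "UNCHOKE",
}

func (t MsgType) String() string { return msgNames[t] }

// Message pairs a type with the chunk index it refers to, if any.
type Message struct {
	Type  MsgType
	Chunk int
}

// joinSwarm sketches node A's side of the flow: handshake, request the
// chunks a peer advertised, then ACK each verified chunk and advertise
// it onward with HAVE so newly joined peers can pull it from A.
func joinSwarm(peerHas []int) []Message {
	out := []Message{{Type: Handshake}}
	for _, c := range peerHas {
		out = append(out, Message{Type: Request, Chunk: c})
	}
	for _, c := range peerHas {
		out = append(out,
			Message{Type: Ack, Chunk: c},
			Message{Type: Have, Chunk: c})
	}
	return out
}

func main() {
	for _, m := range joinSwarm([]int{0, 1, 2}) {
		fmt.Println(m)
	}
}
```

In the live case, the injector's own HAVE pushes take the place of a fixed content catalogue, and the discard window bounds how far back peers keep chunks.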
To summarize what we've talked about: the goal is to teach Web3 to do all the video streaming work the traditional Web can do. That means adding transcoding through the Livepeer network; using Swarm or IPFS for storage, both in the protocol's verification process and for storing the actual video for later playback; and the different CDN approaches we discussed for delivering the video. Since decentralized CDNs are still very much an open question and an open research area, we have a fallback mechanism to a centralized CDN, so we can offer a smooth playback experience right away.

The project has been in the works for a little over a year. We published a white paper early in the year, and since then we've been working on the testnet. The testnet went live a little less than a month ago, and the demos I just showed are running on it right now. Our next goal is to launch on mainnet, in production, either in Q4 this year or early next year.

So this is very exciting, because live streaming in a decentralized context is happening; it's imminent. And there are a couple of things everyone can do to get involved. The easiest is to run a node and join the testnet, so you can see how live streaming works in a decentralized world. Another is to build a video-based app, using some of the ideas we talked about or any idea you have that we haven't thought of. And if you're interested in P2P video delivery and want to help us make it a reality, come reach out to us. Livepeer is an open source project, and we work with people all over the globe; frankly, that's one of the best things about working in this space, you get to work with all kinds of talented people from all kinds of backgrounds. We're reachable through our Gitter channel, we're on Twitter, we're on GitHub, or you can just send me a message; I'm happy to help with anything you need. So that is it: I'm Eric, we are Livepeer, and I think we have two minutes for questions.

Q: You sign every single transcoded packet. It sounds like there would be some overhead related to having to sign every one of those. Is that an issue?

A: Sending the packets has to happen anyway in a decentralized world; in a peer-to-peer network where you don't control the peers, you're forced to do that. The CPU overhead is not that much. Of course it would be better not to sign and save the CPU cycles, but the security you get is worth it, because you can scale the network and get an overall cheaper solution.

Q: You mentioned that there are many different encoding and decoding technologies, some of which are proprietary. Do you suppose that might be an issue for distribution on Swarm or something like it?

A: That's an interesting question. It's very much on the application developer to answer it. Even the proprietary codecs ship in open-source software, and if you are a centralized company using them, it's up to you to pay the license fee, otherwise the lawyers will come after you. Or you can just use VP9, which is open source.

Q: You mentioned that the PPSPP protocol uses packets of 1,024 bytes. Have you done any benchmarking on the overhead of the whole system? Where do you foresee the main overheads coming from, and how does that compare to a centralized system?

A: That's a good question. There haven't been a lot of products built around this, and as I said, it's very much an open research area. But what I can tell you is that PPSPP overcomes the problem of slow upstream bandwidth, and that's enough of a benefit for the peer-to-peer use case.

Cool. I'm going to be outside, so come and find me. Thank you.