OK. That's legible. Thank you. Well, first of all, thank you to everyone for coming, and thank you to the organizers for persevering with what has been a challenging year for FOSSASIA, I gather. There are a lot fewer people here than were expected. And thank you to the many speakers who have managed to make it in, and to those who've managed to do their presentations remotely. We were fortunate to get into the country early, so we happened to actually be on site.

Now, I'm going to try a bit of a stunt in a moment. Given the current circumstances, for what I'm going to do I need an assistant. We're going to shake hands. Now, don't try this at home. I can't stress that enough. Not even for a joke. OK. So, if you want to come out here.

Now, just before we start there: I'm from the Librecast project. It's funded by the NLnet Foundation as part of the NGI0 program, which aims to build a better decentralized internet. But before I begin, I just want to give you a choice. We could do this talk in a unicast way, or we can do it in a multicast way. Now, just before I get you to vote, I want to show you what that might be like.

In a unicast world, we need to start, obviously, with a handshake. Again, don't try this at home. This is a TCP handshake. Now, we need, obviously, to maintain that handshake throughout the talk, and that's going to be about an hour. And if you'd all like to form a queue behind this lovely lady, I'll deliver the talk to you one at a time.

OK. Now, obviously, I'm being ridiculous. We can be a bit smarter than that in the unicast world. I've got more than one arm. I could probably do two of you at once. You can attach to my legs. And then, basically, I need to deliver a few lines, deliver those same few lines over here, here, here. And I'm doing this unicast dance, basically. Now, thank you. Obviously, in the unicast world, we have some other options. We can use caching and CDNs and so on.
That all gets around the fact that, fundamentally, unicast doesn't work at internet scale. It just doesn't. We've had to keep hacking it and hacking it to make it work, and that's where we are today. Alternatively, I could deliver this talk to you in a multicast way. Now, essentially, by coming here or by joining in online, you've performed a join on my multicast group. Thank you. So, you know, there are other things you could be doing with your time, and you've chosen to join this track of your own free will, presumably. And so, essentially, what I'm doing right now is multicast.

So, quick show of hands. Who's for unicast? Always one. And multicast? Okay, what are we going to do with the rest of our time, then? Because you all seem to be convinced that multicast is a better way of doing things. So, that didn't take me very long at all.

Back in 2001, an RFC came out, RFC 3170, and it says this: IP multicast will play a prominent role on the Internet in the coming years. It's a requirement, not an option, if the Internet is going to scale. Multicast allows application developers to add more functionality without significantly impacting the network.

So, that was 2001. What year is it? We've been waiting a while. We're not really doing a lot with multicast. There's not much happening with it on the Internet. Very few people know much about it at all, and a lot of what people do know, or think they know, is wrong. So, that's not great. I'm going to try and change that. I want to change that perception of multicast. I want to posit the idea that multicast is more efficient than unicast, that it's more scalable than unicast, that it can solve real-world problems, that it has privacy properties that might surprise you, that it is, I feel, the missing piece in the decentralization puzzle, and that it can help make this polar bear a lot happier, too.

So, first of all, what is multicast? Has anyone worked with multicast here? Okay. And, right, was that IPv6 or IPv4?
Four, unfortunately, yeah. We're not seeing a lot of people who've got recent experience with IPv6 multicast. There are quite a few people who worked with IPv4 multicast back in the M-Bone days, and so on. Perhaps they had a bad experience with it and have never come back. So, I'm here to tell you that IPv6 multicast is a lot better.

But, first of all, what's multicast in the first place? Let's start with a bit of a definition. You may have come across this sort of thing if you've casually glanced at a textbook at some point. Unicast is obviously one-to-one, yeah? We're all good with that. And broadcast is one-to-all-nodes. Not too useful on the Internet, and in the IPv6 world it's been completely replaced by multicast, which is one-to-many. We're good with that? Okay. Well, that's wrong. No form of IP multicast in use today is one-to-many. It's one-to-a-group.

In the case of unicast, the sender sets the destination address of a specific receiver. In the case of multicast, you do not. You send to a group. You have no idea who is listening to that group. In the case of this talk, which I'm doing over multicast now, there could be nobody here. There could be a million of you here, and it simply doesn't matter. The load on me as the speaker does not change. When I was doing that unicast dance before, the more connections I'm having to manage, the harder it gets, and the load on me goes up and up and I'm swapping, and, you know, that is the unicast world. In the multicast world, it simply doesn't matter. There could be a million nodes, and I don't know or care. If more people come into the room now, I don't have to put any more effort into speaking to this audience. That's how multicast works.

So there's a fundamental difference between unicast and broadcast on one hand and multicast on the other, and it's this: unicast and broadcast are push technologies, and multicast is pull. So we have a moment here to consider the philosophical question.
If a tree falls in the forest and nobody's listening, is any data sent? And in the case of multicast, the answer is no. It's all dropped by the first-hop router, or by your switch running MLD snooping, or IGMP snooping if you're stuck in an IPv4 world. That's quite an important property. You're sending on as many channels as you want, and if nobody's listening, no data is sent. If one or more nodes are listening, you're sending at most one copy of the data. No problem, just having a short break, folks. For part two, consider this tree. No, we did that bit. So yeah, important, keep that in your mind: if nobody's listening, no data is being sent. We're going to return to that later.

There are a lot of misconceptions I find around multicast when I talk to people about it. Let's have a look at some of those. The first is that it's only for streaming. I hear that a lot. Secondly, that whilst it might be good for streaming, it's of no use for video on demand; if you've got people wanting to watch the same video at different times, there's no use case there. I've heard that it's unreliable because it's based on UDP. I've heard that it's insecure, from the CTO of one open source startup in Europe. And I've heard that it can't work on the internet. Now, fortunately, those are all wrong, as we'll see in a moment.

Multicast is essentially about group communication. And the thing about that is, all communication is group communication. Even one-to-one, that's just a very small group. That's the special case, and the only case where unicast makes sense to use. For all other cases, multicast is more efficient. So you are here, well, around the back of it a bit, but basically here on this planet with, at last count, 7.7 billion people. That's a whole lot of nodes that want to communicate. Now, on top of that, you've got the Internet of Things, with every fridge and washing machine and car that all need to talk to each other.
All of these robots that I was hearing about in the last talk. That's a lot. So they want to communicate in groups. You might be communicating with your friends, your family, your colleagues, but not that guy over there. And that's fundamentally how all of our programs work. It's group communication. All of your web servers need to talk. All of your database servers need to talk. It's just how communication works. We don't build standalone systems anymore.

And so we come to the elephant in the room. The elephant in the room is that big obvious thing that we can all see but we're not talking about. And in the case of multicast, I'm standing here before you telling you that multicast is worthy of your time. And RFC 3170, which we were looking at earlier, said that it's not just useful, but necessary for the internet to scale. And so the elephant in the room is: well, if that's true, then how's everything working at the moment? Things must be fine. We've got our Facebooks and our various other applications. There are people presumably listening to this talk right now across this unicast internet. They're all being sent individual copies of it, too. But yeah, is it working?

Well, I think there's fundamentally a problem with unicast, and it's this. This is, I think, Google's Berkeley County data center. We're building an awful lot of these. And what's a data center? It's essentially a whole pile of computers in one place generating heat. Note the massive substation in the foreground there. These things don't run on AA batteries. So we're building these things at a fantastic rate to cope with the fact that the likes of Facebook and Google and Amazon and so on have all this data of yours to process. Now, if you're sharing your cat pictures on Facebook, how much of that do you think is serving that request? You've got masses of data centers. The vast majority of what's going on there is not serving you.
It's not providing the service that you are using. It's analyzing that data, figuring out who you're talking to, who you might want to talk to, profiling you, turning you into a product. Because at the end of the day, you are not paying for these services, not with money anyway. You're paying with your data.

So, yeah, this is one of the problems with unicast: we are working inefficiently. And working with unicast leads to centralized design. It's just what we need to do in order to make it work. We have caching and CDNs and so on, and massive centralized data centers, in order to make things work in a unicast way. So whilst it appears that unicast works at internet scale, it only does so because of that. Multicast leaves open the possibility of decentralized designs, and that's what I want to talk to you about.

So why does it matter? Now, I'm just going to have a drink of water before I cover this next bit. It's always good to look at why in technology. But I'm in Singapore, and I looked at the travel advice for coming here, and it pointed me at this, which was on a government website. It says the approval of the Ministry of Manpower is required if the speaker is a foreigner and is giving a talk on racial, communal, religious, cause-related or political topics. Now, normally I have some political things in here that I would talk about when I'm referencing why, but I can't do that. So I need you to use your imagination and sort of fill in the blanks, because I don't want to get myself into trouble.

So with that in mind, why does it matter? That's a picture of Eleanor Roosevelt. Now, she's holding a piece of paper there. Look at that carefully and tell me: do you think her hands are bigger than Donald Trump's? I don't know. I think they look a little bit bigger. So the internet is an amazing tool that we've built. It's a very useful tool. It's useful for all sorts of things that I'm not allowed to talk about.
And I think that internet is under threat from a number of angles. It's under threat from three main sources. Number one, from criminals. Number two, from corporations. And number three... here's a picture of downtown Singapore. You've got some really big buildings in the background there, and I really love the contrast between the red roof on the one in the foreground and the white stone. But, okay, let's go back to multicast.

Efficiency matters. Every byte you waste, every CPU cycle you waste, is killing that polar bear. We're building masses of these data centers. If we work in a more efficient way, we can build fewer of them. We're still going to need data centers, obviously. But if we build a more decentralized internet, if we build our systems in a more decentralized way with peer-to-peer and multicast technologies, we can make that polar bear and Greta Thunberg a lot happier. So there he is. He's looking at you, folks.

I think fundamentally the design goals of the internet have changed. Back in the day, we didn't really care too much about security. It certainly wasn't a major feature. You know, we have protocols like Telnet. Hands up if you've ever administered a server over Telnet. Come on. Don't be shy. Don't be ashamed. Look, we hadn't invented SSH back then. So if you're going to tell me that security was a primary design goal for the original internet protocols, then I think, yeah, we need to have a bit of a chat. Likewise, we're still using fundamental email protocols that are in clear text and so on. There's a bit of TLS shoved in there opportunistically, but it's still fundamentally an insecure protocol. We're only just now looking at moving to a secure web. So I would argue that privacy and security, and other human rights things that I can't talk about, are more important to us now. And if we were to design the internet from scratch now, and design those protocols from scratch now, we would be thinking very differently.
For a start, we know we need to support billions of nodes. That just wasn't on the cards. Privacy and security are important. Well, putting the source and destination address on every packet in our protocols is perhaps not the greatest of starting points. Everything we do after that is trying to recover from these fundamental privacy and security mistakes that we've made in the underlying protocols. You can have all the Tor and VPNs and so on you like, but that's trying to get around the fact that, at the IP level, we're giving away who's talking to who on every single packet. In the case of multicast, there is no destination IP address on each packet. There's just the group address, and you don't know who's listening any more than I do. So that's a huge privacy bonus, and we haven't done any work yet.

So let's have a look at how we got here. A brief history of IP multicast. In the beginning, the very first RFCs that talk about multicast talk a bit about it like I am now, in that they talk about many-to-many applications and so on. And then this happened. The very first multicast protocol RFC that came out defined Protocol Independent Multicast. The protocol-independent bit basically means that it doesn't matter whether you're using RIP or BGP or OSPF to manage your unicast routes; as long as you've got that working, we can do multicast on top of it. So I like to call this unicast-dependent multicast. And basically every form of IP multicast we have running today depends on unicast. I'd quite like to fix that, but one step at a time.

So, to get multicast working on a LAN, in the IPv6 world, which is all I'm interested in at the moment, you need to turn on MLD snooping. Now, for some reason, despite the fact we're trying to push more towards an IPv6 network, an IPv6 internet, pretty much every switch has MLD snooping turned off by default.
Now, what that means is that when you're doing multicast on your LAN, and if you're doing IPv6 you are doing multicast, because IPv6 does not work without multicast, the switch looks like it's running a sort of late-90s Windows network, because it's doing broadcast. The failure mode is open, so multicast works, but it's working like broadcast. Turn on one setting, MLD snooping, and now you have a proper multicast network, where the switch is keeping track of which ports are interested in which groups, and therefore it can send to exactly the ports that are interested. So that seems daft to me. I think switch manufacturers need to turn that on by default. That's the most basic form of IPv6 support. If you're running IPv6, and it's on by default with Linux, Windows and Mac, and you haven't turned that on on your switch, then just go and do it. You'll save yourself some headaches. On a large network, it's very important.

So multicast on a LAN is very simple, but to get multicast beyond that first-hop router, we need to think about multicast routing, of course, and that means we need to talk about rendezvous points. We'll go briefly into this. This is the sort of thing that really should be the topic of a workshop, but just briefly, to explain how multicast routing works, let's consider this diagram. It looks complicated, but don't worry, it's not really so bad.

So imagine we've got a node here that's interested in joining a multicast group. That's Ubuntu 2 over there. It will send a join request to its local router saying: I am interested in group G. This is a (*,G) PIM join request. The star means I don't care who's sending. This is any-source multicast, and G is that specific group. So router 5 there has got to do something with this join request. What does it do? It doesn't know anything about this group in particular. It doesn't know who's sending. So what it does is it sends the join towards a rendezvous point.
So we need to configure, somewhere on our network, generally one per autonomous system, a rendezvous point. It can be anywhere. You generally put it centrally in your network, but you can put it up there at router 4 if you like. And so router 5 knows that router 4 is the rendezvous point, and so it sends the join on towards that rendezvous point. And it keeps a note, just here on the interface, of which interface was interested in that group, and that's called the outgoing interface list, the OIL. And each router along the path does exactly the same. So router 2 here also keeps an outgoing interface list, saying that this interface here is interested in group G. That router knows that that one is, and it all goes along to the rendezvous point.

So we get to the rendezvous point, and what does it do? Well, at the moment, nothing. It just keeps a note of that, and when a source wants to send to that group, then things can actually start. So Ubuntu 1 over here wants to send to this group. Again, it doesn't know what to do with it. It sends it to its local router, and the local router passes it on towards the rendezvous point. Now the rendezvous point has a source and a destination. We can now build our source tree, and the data can flow directly.

That's a simplified view of how it all works with any-source multicast. There are multiple different modes. In IPv4, you had sparse and dense mode and this sort of thing; in IPv6 land, you only have sparse mode, but you also have multiple different types. You have bidirectional multicast, which is good for many-to-many applications, where the data continues to flow via the rendezvous point, and you have various other modes. So I'm not going to go into them all just now. I just want you to understand that that's the basis of any-source multicast: (*,G). There is another kind of multicast that's become more popular, called single-source multicast.
And so you've still got your group G, but in this case, you know the source IP address. So that's the S there. So what does that look like? You've got Ubuntu 2 here wanting to join this group G, but it knows that Ubuntu 1 is the sender. How does it know? I don't know. It might have done a DNS lookup or something. It just knows that's where the data is coming from. So, you know, that's your Netflix server over there or something. And we have no rendezvous point. We have no need for one. So we just send a join. In this case, we're saying join (Ubuntu 1, G). Send the join across. Data can flow. There's nothing else to it. That's a lot simpler to configure, and for some use cases, that's all you need.

Now, for that to work on the global internet, there's one setting that needs to be turned on on each intervening router. It's off by default, of course. But, yeah, turn on IPv6 multicast routing, and it just works. There's no need to configure rendezvous points. There's no need to do anything else. I would really love to see that turned on by default. If we have that, and MLD snooping turned on on the switches, we have a whole pile of things that we can do. We can have a multicast internet, a proper one. So wish me luck with that. In the meantime, we need to work around the fact that the internet doesn't have multicast turned on by default, which I feel is a big shortcoming that needs to be addressed.

So, IPv6 multicast has some serious advantages over IPv4 multicast. The first of which is the namespace. IPv6 addresses have 128 bits in them, and of those, 112 bits are reserved for the multicast group address. That's a whole lot of bits. That's enough for an individual multicast group for every atom on the planet. So, yeah, that's a bit to be getting on with. We can do some fun things with that.

Back in the day, in the late 90s, there was the M-Bone. Rod Stewart did a concert over it. It went out to basically every continent.
If you had multicast running on your network and you wanted to talk to another multicast network, you basically connected to this M-Bone backbone, and all those bits of multicast internet could talk to each other. I've met a lot of people who used it. It just kind of faded away. It was IPv4 only. There was another project called CastGate, by the Free University in Brussels, which I think was a browser plugin and some other bits and pieces. That's been dead for more than 10 years now as well. What I'm proposing is not new technology. We've done this before, but we've done it with IPv4 only, and we did it at a time when we didn't really need multicast in the way we do now. We need it now because of the scaling problems we have. We need it now because of the privacy issues we need to address, and so on.

So there are ways that you can communicate from one multicast network to another if you've got these disconnected parts of multicast networking. If you're an autonomous system, if you basically run your own ISP, then you can configure Automatic Multicast Tunneling, AMT, on your router, and it will talk to another router that speaks AMT, and you're good to go. You can use various BGP tricks, like anycast and so on, to discover the other endpoints, and off you go.

But, 10 minutes already? What the hell? Okay, I need to hurry up. Let's skip a bit. So, we like TCP/IP because it's very reliable, but are there other ways that we can find this TCP-like reliability? The answer is yes, we can. So, just skipping over a few bits here to get to some of the fun stuff: we have various things that we can do. We can do NACKs and replay. In the TCP world, if I send you a bit of data, you send me an ACK to show that you've got it. That's not very good if there are a lot of nodes out there. If you all send me an ACK, that's probably going to DoS me. So what we do instead is we send a NACK.
I send packets one, two, three and five, and you send a NACK back saying: I haven't got packet four. And then I just repeat that to you. We can also do loop and repeat, as I'll show you in a moment. That's Voyager 1. It uses forward error correction in the outer reaches of our solar system, in high-latency environments. Forward error correction lets you encode a little bit of extra data in each packet, so if you're missing one, you just rebuild it from the other packets.

Multicast applications. Pretty much everything is a multicast application, because everything is group communication. We're building chat applications that are fundamentally multicast, and so on. But we're building them up here at layer seven. We're doing multicast, but right up there, when we could do it down here. So let's keep that polar bear happy. We'll run through some multicast party tricks briefly. Video streaming and conferencing, they're not much fun, although video conferencing is quite useful right now. Replication and consensus, they're all very good, but let's get to the fun stuff.

Let's look at DNS. The purpose of DNS is essentially to map a human-readable address to a machine-readable address. What does that look like in the multicast world? We can do this. This is an IPv6 multicast address, so it starts with eight bits that are all ones, to say it's multicast. Then it's got four bits of flags. And then I've chucked an E in there, which means the scope is global. And then I've got 112 bits to play with for the group address. So I can do silly things like this. We all just need to agree on where example.com is. Yeah, that's the purpose of DNS. Well, we just did. We've not consulted any centralized servers at all, and we all know that's where it is. That would be a very short RFC, no? It can get a little more complicated than that, but let's look at a chat application. I built this. It's up on GitHub, just as an example.
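To make that "we all just need to agree" idea concrete, here's a toy sketch of my own (not the talk's actual scheme): hash a human-readable name into the 112-bit group field of a global-scope IPv6 multicast address, so every node that runs the same derivation lands on the same group without consulting any server.

```python
import hashlib
import ipaddress

def name_to_group(name: str) -> ipaddress.IPv6Address:
    """Map a human-readable name onto a global-scope IPv6 multicast group.

    Layout, as described in the talk: 8 one-bits (multicast prefix),
    4 flag bits, 4 scope bits (0xe = global), then a 112-bit group ID.
    Using a hash of the name as the group ID is my own toy choice.
    """
    group_id = hashlib.sha256(name.encode()).digest()[:14]  # 14 bytes = 112 bits
    return ipaddress.IPv6Address(bytes([0xff, 0x0e]) + group_id)

# Everyone who runs this derivation agrees on the same group address,
# with no central server consulted.
addr = name_to_group("example.com")
assert addr.is_multicast
```

A real scheme would need to deal with collisions and name ownership, which is where it gets "a little more complicated than that".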
But essentially we've got a forking HTTP daemon here, running the Librecast libraries, and that's plugged into a virtual bridge. So no real interfaces involved here at all. And you've got a standard HTTPS connection coming in here that gets upgraded to a WebSocket. Yeah, a bit of JavaScript, a WebSocket, good to go. All following so far? So we get a couple more connections coming into this web server, and they're all talking to this virtual bridge. This is a use case for multicast where not only is it not going across the internet, it's not leaving the data center; it's not actually leaving the Linux kernel. It's all just happening inside here, multicast to organize groups. Essentially, think IRC, but I'm generating multicast groups for each channel, and all the state is being tracked at the network level, which is fun.

But what if we plugged that into a real multicast network? Then we can do things like stick your GitHub webhook in there. So if it receives some information from GitHub that you've just pushed a commit, then that can get published to the channel. Again, there's a sample of that on GitHub, as another app. But what if we did this instead? We plug in another whole server, another chat server, and what we've actually done there is we've got federation with zero lines of code. Not, you know, hundreds of thousands of lines of code, or thousands of lines of code; it's all taken care of at the network layer.

So if you've got a server over here in Singapore, and you've got a server over in LA, and we're all on the FOSSASIA channel here in Singapore, and all these people are talking away on the FOSSASIA channel, no traffic is going across this link. It's just automatic. There's no logic in there to keep track of that state. But as soon as somebody in LA says, I want to join the FOSSASIA channel, one and only one copy of the data flows across this link, no matter how many thousands of people are joined to that channel. I think that's pretty cool.
Unicast can't do that. So this is a bit of what the API looks like. It would look very familiar if you've worked with ZeroMQ. That's deliberate.

But let's look at another example: IoT updates. We want to send a file out to a million or a billion nodes, and the only resource we have is one tiny little virtual server. So, let's skip over the code. It's up on GitHub if you want to see it. It's 90 source lines of C for the server. What we send is a UDP datagram. Multicast works over UDP. And it's got a checksum of the file, the size of the file, the size of the chunk that we're sending now, the offset from the beginning of the file, so zero means this bit of data belongs at the beginning of the file, and obviously a chunk of data. So that's what we're sending.

On the client side, we just create a memory map, a sparse file if you will, and we start receiving some data. So some data turns up, and it's got a length of four and an offset of zero, so we plonk it in there. And then we get another bit that's got an offset of four, so it plonks in there. But this is UDP. It's unreliable. These packets could arrive in any order, and we don't care. We've got our offset. We know where it goes. We keep track of how much data we've received. When we've got enough data, we run our checksum, and if it matches, we've just updated a million nodes, a billion nodes, we don't know. It doesn't matter. So if you've got a whole pile of IoT nodes to update, and you've got a multicast network, you're in luck. The load on the server does not change. You could run the entire static web using that method, and you wouldn't need any CDNs or proxies or caching or anything. It just works.

So what about reliability and flow control? Well, multicast lets us do some things that unicast just can't do as well. So we've got G1 here. This is a multicast group, and we're sending packets 1, 2, 3, 4. So what if that's too fast for you? I'm sending at one rate. You might be on a slow network.
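As an aside before we get to flow control: the client-side reassembly just described really does fit in a few lines. This is a toy Python model of the same idea (the real tool is the 90-line C program on GitHub; the field layout here is purely illustrative):

```python
import hashlib
import random

def reassemble(total_size: int, chunks) -> bytes:
    """Rebuild a file from (offset, data) chunks arriving in any order."""
    buf = bytearray(total_size)  # stands in for the mmap'd sparse file
    received = 0
    for offset, data in chunks:
        buf[offset:offset + len(data)] = data  # plonk it in at its offset
        received += len(data)
    assert received >= total_size  # naive completeness check
    return bytes(buf)

original = b"firmware-image-v2"  # hypothetical payload
checksum = hashlib.sha256(original).hexdigest()

# Split into 4-byte chunks and shuffle to mimic UDP reordering.
chunks = [(i, original[i:i + 4]) for i in range(0, len(original), 4)]
random.shuffle(chunks)

rebuilt = reassemble(len(original), chunks)
assert hashlib.sha256(rebuilt).hexdigest() == checksum  # update complete
```

Because every datagram carries its own offset, the receiver never needs to talk back to the server, which is exactly why the server load doesn't change with the number of nodes.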
If that's too fast for you, well, I could send at a slower rate, but then that's going to be very annoying for those of you on fast networks. So what do we do? In TCP, we've got all sorts of ways of negotiating, but with UDP we don't have that, and certainly with UDP multicast we don't have that, because I can't negotiate different rates with a million nodes. But multicast does have a trick or two up its sleeve. You can do this. Remember that tree earlier? If nobody's listening, no data is sent. So we can split the stream across groups. If you want the data at half the speed, just join group one. If you want the data flat out, join groups one and two. We can split that over four groups, if you like, or a hundred. It doesn't matter. If nobody's listening, no data is sent. These things are just looping through. So there's your flow control.

What about reliability? Well, I'm not saying this one's a good idea, but you can do silly tricks like this: send the data slightly delayed on another group. Again, if nobody wants it, you're not sending any data. If you missed a packet, just tune into group two and you'll get it a little bit later, just like pressing the red button on your TV for a replay.

WebRTC has some interesting properties. This I'm working with on a project called Librecast Live at the moment. Basically, we are building decentralized streaming, a Twitch-like service. It's decentralized and federated, using multicast in the data center. And basically, WebRTC lets you send different quality streams. It's called simulcasting. So those three colors represent three different quality streams. You've got a low-quality stream. If you want a medium stream, then you join the low-quality stream and the medium stream together. And if you want the high quality, then you join that as well. So in the case of audio and video, if we want lower quality, we can actually just drop packets. It's a little bit more complicated with video, but in the case of audio, that's literally it. Drop some packets.
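One way to read that layered-groups trick (an assumption on my part: the sender round-robins packets across the groups) can be simulated without any network at all:

```python
def packets_received(packets, num_groups, joined):
    """Simulate layered multicast: the sender puts packet i on group
    i % num_groups; a receiver sees only the groups it has joined."""
    return [p for i, p in enumerate(packets) if i % num_groups in joined]

stream = list(range(8))                     # packets 0..7, as sent
fast = packets_received(stream, 2, {0, 1})  # joined both groups: full rate
slow = packets_received(stream, 2, {0})     # joined group 0 only: half rate

assert fast == stream
assert len(slow) == len(stream) // 2
```

Joining the second group doubles the delivered rate, and the sender never changes what it does: the receivers' joins are the rate control, and any group nobody joins sends nothing at all.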
And that's why we do SIP over UDP, for example. A late packet is a useless packet in the case of live streaming. But in the case of a unicast setup with WebRTC, if you've got a source here that's sending a stream, and you've got a media server here to disseminate that stream to other media servers and the clients that are joined to those media servers, then each of these media servers needs to keep track of state. They need to know who is listening to what, and what quality stream to send them. So this one here is getting all three, and therefore it's got a high-quality stream. This one over here is only getting the low-quality stream. And in the case of unicast, each of these servers is keeping track of all that. This server here has got nobody joined, so it doesn't need any traffic.

But with multicast, you've probably caught up with me by now: you can send all three of these streams all of the time, and only if somebody's interested in a stream will it get passed on. So, you know, this one is getting all three. In this case, only that channel is going across here, because there's nobody else subscribed, and here, no data at all. So that's the basic back-end design for the system I'm working on at the moment.

QUIC is based on UDP, so I got very excited. UDP? Maybe we can do multicast with it. So I had a bit of a look, and I'm not the only one who thought that. The BBC research team in the UK are in the process of drafting an RFC to do exactly that, presumably for their iPlayer system, to be able to get streaming data out to the CDNs as easily as possible. But we can take it further. If we can have that, we can have a multicast web.

So that brings me to my project, Librecast. Essentially, Librecast aims to get multicast into the hands of developers. Now, I'm not going to do the dance, but I see there's a few people who've got that. What I've built so far is a very basic multicast library, librecast, that you can just #include in your C program.
Python, Ruby, and other languages to follow; feel free to contribute. The next stage is to build a reliable messaging library on top of that, with forward error correction and so on built in, along the lines of ZeroMQ. The stage after that, which I should have running by mid-year, is the transitional tunneling work. Again, think MBone if you're familiar with that. Multicast works over tunnels, which means we can do multicast over the existing internet, to try and solve the chicken-and-egg problem we have: multicast doesn't run on the internet because there's no demand, and there's no demand because it doesn't run on the internet. I've also been working on an improved routing protocol with a colleague to try and break the dependency on unicast. And I want to work with FOSS projects to enable multicast everywhere it can be, and IPv6 along with it. I think that IPv6 multicast is how we get IPv6 out into the world, because, as much as I love IPv6, and I've been an advocate for it for some time, I really have to admit that it's just a bigger namespace. It's a little better in some ways, but you don't really get anything new, so there's no commercial incentive to deploy it; you deploy it because you're forced to. But multicast lets you do things that unicast just can't do. It gives you the incentive to deploy IPv6. There are billion-dollar reasons to deploy multicast everywhere, like only needing a tenth as many data centers. And finally, I want to ensure that new standards like WebRTC support multicast, or at least don't break it. WebRTC is based on UDP, so it should support multicast, but because the encryption was baked into the standard, for very good reasons perhaps, it mandates a one-to-one Diffie-Hellman-style key exchange, which is no good for group data. We could use group keys, capability tokens, and things like that if we didn't mix our layers together. So thank you very much.
Get in touch if you're interested in helping. Yeah, so actually, have you looked at the... in IPv6, there's no ARP needed. How do you feel about that? Could that be a major improvement for mitigating layer 2 broadcasts? Well, essentially, in IPv6 we've got various multicast mechanisms that are used instead of ARP: neighbour discovery, duplicate address detection (DAD), and that sort of thing. So multicast is used extensively in IPv6, just on the local LAN. Does that answer the question? I mean, presently, in data centers, you actually get a lot of ARP broadcasts. Do you agree that when you use IPv6, you haven't got those ARP broadcasts? Correct. You don't have broadcast at all. And because you're using multicast, traffic only goes where it needs to. So instead of every single node getting those ARP broadcasts, you're getting multicast traffic that goes to the all-routers group and that sort of thing. A lot of thought went into the design of IPv6. We're not really getting the benefit of that, because either people aren't using it, or they are using it but not turning on MLD snooping and so on. So essentially they've still got broadcast-like traffic, and they wonder why IPv6 doesn't work. It does work. It's a new skill; we need to learn it. So I have a question from the service provider perspective, about dealing with multicast on the public internet, and especially across autonomous systems, between ISPs, basically, and in particular having dealt with a situation where the economics did not stack up. The problem for ISPs is not the complexity of configuration management, although that is an issue. In practice, ISPs frequently have custom firmware builds done, so it's not as easy as turning an option on: every feature that's enabled has to carry its weight in their testing and QA process. So there's active pressure to turn features off.
But worse, for multicast there's an enormous amount of state that has to be kept by the network. Technologically that's desirable, I get the point, because you then get to do all this cool stuff in a very generic way rather than doing it again and again for every application at layer seven. But the challenge was that the engineering for the routers, for their firmware and hardware configurations across tens of thousands of routers, was already pushing everything to the limit. There are already product-management discussions about which things are turned on and off. And so the way the question comes out is: okay, a customer who really wants to do multicast can just run their own rendezvous routers somewhere, plug them into our network or networks, and pay ordinary transit rates for each of the unicast streams. That works. And then of course the customers go, hey, that doesn't make sense, multicast should be one copy, not thousands. And so the pitch to the ISP becomes: oh, so you want us to do a bunch of engineering work and hardware upgrades, more RAM, more CPU, to support maintaining the state required for multicast, in order for you to give us less money. And that immediately killed it. Now, I haven't seen inside many ISPs, only one, and it was a global carrier, but I suspect it's the same elsewhere. It seems to me that, and you actually answered this in your second-last slide, the question is how we get from here to there, and the answer is you build applications that use tunnels. But it seems to me you've got to demonstrate substantial use cases before any ISP is willing to... Precisely. That's exactly what I'm trying to do: demonstrate demand that displaces all that other firmware and CPU and RAM use. So, yeah, absolutely. The question of state on routers is a good one, and it's probably the biggest impediment, really, in that, yes, it does put extra load on routers. But if nobody's using it, it's not putting much extra load on there.
As more and more people use it, then that definitely grows. But I know the firmware question you're asking: why even enable the feature in the firmware in the first place? If no one is using it, they can just remove it from the firmware and not have to QA it. So there's a very real cost in going from not having it there at all... But there's a very real cost to not doing it, too. It seems crazy, but we are literally melting the polar ice caps with all this stuff, even if we take Bitcoin out of the equation. We are building loads and loads more data centers, all this cloud stuff is going online, and it's all operating horribly inefficiently. Any moderately popular service is having to run multiple servers with caching and load balancers and that sort of thing, and CDNs, which are very expensive. And then you get situations like streaming. Netflix, I'm told, is 25% or more of global internet traffic, which is just crazy. That's all these people watching streaming services. Now, granted, not all of them are watching at the same time, but as I showed with the updater, in some cases you don't need them to be. In the case of a movie, for example: imagine popular movies, we could stream them, starting one at the beginning of each hour. Now, that's not going to satisfy anyone; I'm not waiting until the top of the hour for my stream. Okay, we'll start one every half hour. Not good enough? Every 15 minutes. How many streams do you actually need to run before everyone who wants to watch it can watch it pretty much exactly when they want to? Or hybridize it: run the first 10 minutes unicast. Yeah.
And given you're going to have a certain amount of buffering, which we're used to, and a certain amount of ads tacked on at the beginning, you don't actually need it to start the second somebody hits play, if you're Netflix and you're funding yourself with ads and things. I saw a tweet from LONAP, the London exchange point, at the end of last year. There was a big football match on, and they tweeted that they'd reached the highest peak load they'd ever had on their exchange. And that was because of everyone watching the same video stream at the same time. So, whilst not everything is video on demand, there's still a significant case, and the RFC the BBC researchers are drafting talks about exactly this: there are a lot of cases where people are watching the same thing at the same time. Take the Olympics, for example, if it's on this year, or any other big event. A lot of stuff happens at the same time. But there are use cases even where it is video on demand. So, when you've got billion-dollar corporations who have that need, perhaps we've got some way we can roll this out. There may be enough aggregated demand for ISPs to blink. All right. Thank you very much. That's it for today. Enjoy your evening, and we will see you here tomorrow morning. I'm not MCing tomorrow, but I'll be around in the exhibition area doing cool electronics, hopefully. Have a good day.