Hi everyone, I'm Vu from Cloudflare. I know you've been out here for a while now, so I'll try to keep this as short and as direct as possible — thanks for all coming out tonight. I'm going to be talking to you today about lowering network latency, and how that can result in faster loading times for web pages. So maybe not that JavaScript-y, but if you take a step back from JavaScript and just think about websites, we're talking about how to improve the overall performance of websites. All right, so I'm going to ask you something simple. Very simple. You've probably seen this before. Who can tell me what it is? Yeah, it's a fiber cable. In Singapore, Singtel, StarHub and M1 will provide this if you've got fiber at home, and that's a core part of how many folks in Singapore are connected to the internet, beyond cable internet and other mediums. Another question: who knows what the speed of light is? C. Let's see. I'll quote Wikipedia anyway: the speed of light is almost 300,000 kilometers per second. Something that fundamentally hasn't changed with the internet is that the speed of light has always been the same. It hasn't changed yet, unless someone discovers something faster; right now, it's constant. So if we look at bandwidth and latency, we see that page load times plateau as bandwidth keeps increasing, but they keep decreasing as the round trip time decreases. What does that mean? I'll give a brief definition of RTT, just in case you aren't too familiar. RTT means round trip time, and it's the time it takes for a network request to go from a starting point to a destination and then back. Essentially a round trip.
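The point about the speed of light being constant can be made concrete with a small back-of-the-envelope sketch (my own illustration, not from the talk; the fibre velocity factor and the Singapore-to-New-York distance are rough assumptions):

```python
# Physical floor on round trip time: even at the speed of light,
# distance costs time, and no amount of bandwidth removes that.
C_KM_PER_MS = 299_792.458 / 1000  # speed of light in km per millisecond

def min_rtt_ms(distance_km, velocity_factor=0.67):
    """Theoretical minimum RTT over `distance_km`.

    `velocity_factor` models light travelling slower in fibre;
    roughly 2/3 of c is a common rule of thumb.
    """
    one_way_ms = distance_km / (C_KM_PER_MS * velocity_factor)
    return 2 * one_way_ms

# Singapore to New York is very roughly 15,000 km as the crow flies:
print(round(min_rtt_ms(15_000), 1))  # roughly 150 ms, before any server work at all
```

That floor is exactly why a single origin server far away can never be as fast as a server nearby, no matter how much you optimize the software.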
So just keep that in mind, because it's important for the later slides. So if we think about 10 milliseconds, how far does that take you? You can see this chart takes you to Singapore, Malaysia, maybe a bit of Indonesia, and almost a little bit of Thailand. And 20 milliseconds gets you a little further: you cover a bit more of Indonesia this time round, you've got Vietnam now, and Thailand is actually in the picture. And if we look at a greater time, 50 milliseconds, you can see it covers a lot more: Australia, even some Pacific islands like Papua New Guinea, China, India. So you notice the round trip time is increasing simply because the distance is increasing. Even though the speed of light is constant, the round trip time has to increase with distance. That's essentially the point of these slides. So ideally, you want to be closer to your eyeballs — that is, your end users — as much as possible. I know Sebastian just talked about a few things that lead to some similarities here. Ideally, you want some sort of distributed network, some way to interact with users so you're closer to them, so the distance that has to be travelled isn't that great anymore and you can increase the speed. So ideally, you want more servers rather than just one. Picture one server trying to serve traffic all around the world: it's obviously better to have more than one, right? These slides depict that, just zoomed in a little more. You've got one server with multiple computers requesting a file from it, and then, quite similarly, multiple servers with multiple computers requesting something from them. So I did a ping test on a website just to show you how this looks on the open internet.
You can see here there are different load times for different parts of the world, right? So based on this, where do you think my origin server is located? That one server — where would it be? Somewhere in the US, yes. It's hosted on blogger.com, so it's owned by Google, and they just happened to host my blog in the US, somewhere close to New York, I think. All right, so then I did a similar test again, this time using a CDN, a content delivery network. You can see here the round trip time is much lower — five milliseconds here — whereas on the earlier slide the time could be a lot more depending on where you're pinging from. And just for fun, because you might have heard of 1.1.1.1, I did a test to 1.1.1.1 as well, to show you the same sort of experience: not just my website, but my computer in general pinging a data center that's close by. All right, so: how fast do you think you can process incoming data? You've got three bars here. Notice the one at the very top is responding the fastest, and the one at the very bottom a little slower. Notice how your brain tries to capture the data coming in. I don't know about you, but for me, the one at the very bottom lags a little. So basically there are various input delays: the first one is about 20 milliseconds, the middle one about 40 milliseconds, and the final one at the bottom about 80 milliseconds. There have been some studies around this, which I can give you links to if you're interested in understanding it further, but those are the general input delays the average human would notice.
So the studies have suggested that the fastest rate at which we humans can really process visual stimuli is about 13 milliseconds, before you really start to notice the lag. If you think about gaming — and this just has to be the extreme example in a big hall — these are the rates you'd normally see. At the very bottom, you wouldn't really notice a difference when you're gaming. 50 milliseconds is okay, of course. But when it goes beyond 100 milliseconds, it becomes detrimental to your playing experience: someone will shoot you dead in the game before you even realize, or that alien will kill you because you reacted too slowly. All right, let's get back to websites and web pages. Again, a little bit technical here, but for a browser to load a web page, it needs to do a few different things — there are a few phases associated with it. The first phase is the DNS lookup. DNS stands for Domain Name System, and the lookup turns what you type, say google.com or cloudflare.com, into an IP address. Then it has to do a TCP handshake. Then it has to do a TLS handshake. And in the final step, it actually makes the HTTP request, which sends your request for the page. Notice that each of these steps comes with a certain number of round trips — so essentially, each one introduces latency. What does that mean? You'll see here on the far left, TLS 1.2: it has seven steps there. Whereas the latest standard, TLS 1.3, has one less round trip in the handshake. That does a lot for performance: with so many packets being exchanged, one less round trip means a lot, and I'll show you in a bit exactly how much it can mean.
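The phases above can be sketched as a quick sum (my own illustrative numbers — real round-trip counts vary with caching, TCP options, and TLS configuration):

```python
# Back-of-the-envelope cost of a fresh HTTPS page load, expressed
# as round trips per connection phase (assumed counts, TLS 1.2 case).
PHASES_TLS12 = {
    "DNS lookup": 1,         # resolver query (often cached away entirely)
    "TCP handshake": 1,      # SYN / SYN-ACK / ACK
    "TLS 1.2 handshake": 2,  # two full round trips before any app data
    "HTTP request": 1,       # finally, the actual request/response
}

def load_latency_ms(phases, rtt_ms):
    """Total latency if every phase costs its round trips at `rtt_ms` each."""
    return sum(phases.values()) * rtt_ms

# At a 50 ms RTT, five round trips cost 250 ms before the first byte
# of the page even arrives:
print(load_latency_ms(PHASES_TLS12, 50))
```

The multiplication is the whole story: every round trip you remove, or every millisecond you shave off the RTT, is paid back across all the phases at once.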
So in terms of that round trip time again: for new connections, with each of these standards — from the older standard at the top to the most modern at the bottom — TLS 1.2 takes four round trips, TLS 1.3 in the middle takes three, and TLS 1.3 plus zero round trip time at the bottom is the fastest. There's actually a difference between creating a new connection and resuming a connection as well, and you'll notice that resuming a connection improves as each standard evolves. This diagram shows the same steps in a different way. Basically, the client says hello, the server responds, they do a key share for the TLS session, they finish, and then the client finally sends the request. With TLS 1.3, you have one less step in that process. And then I've got this video, which I'll show you. So what I just showed you is that with TLS 1.3 plus zero round trip time, or 0-RTT, the latency decreased by close to around 170 milliseconds, basically just by taking away that step again when resuming the connection. There are different things that have happened over the years in the evolution of TLS and SSL. We're sort of over here right now; this graph hasn't been updated with TLS 1.3 plus 0-RTT yet, but that will essentially find itself over here. OK, and now I'm going to take a step into the internet itself. The internet, as you all know already, is a series of interconnected networks — at its core, it's a bunch of networks connected to other networks. In the local sense, in Singapore, it's like the MRT, in a sense. Even more so in other cities like London, or in the US, where the lines are so densely interconnected that you wonder where the logic is.
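The round-trip counts from the slide can be turned into a quick savings calculation (a sketch with my own assumed counts; the exact figures depend on session resumption details and whether the first request rides in the 0-RTT data):

```python
# Round trips to the first HTTP response on a connection,
# as stated on the slide (TCP handshake + TLS handshake + request).
ROUND_TRIPS = {
    "TLS 1.2": 4,
    "TLS 1.3": 3,
    "TLS 1.3 + 0-RTT": 2,  # resumed: the request rides along with the hello
}

def saving_ms(old, new, rtt_ms):
    """Latency saved by moving from `old` to `new` at a given RTT."""
    return (ROUND_TRIPS[old] - ROUND_TRIPS[new]) * rtt_ms

# At an ~85 ms RTT, cutting two round trips saves about 170 ms —
# in the same ballpark as the demo video in the talk:
print(saving_ms("TLS 1.2", "TLS 1.3 + 0-RTT", 85))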
So this is quite similar to what Sebastian was saying about the internet, in terms of the submarine cables all around the world. There are actually some public websites out there where you can look at the submarine cables in a little more detail, so I'll just put the link at the bottom so you can check it out for fun and see where the various connections are. If one of those links gets disconnected, then of course all the routes have to go a different way, and that causes congestion, as you may have experienced. OK, and then there's also a pretty useful website called Data Center Map. It's really good: when you get on the website, you can search for each of these locations and look at them in a lot more detail, in terms of data centers in each country and territory. So fundamentally, the internet runs on BGP — I know Sebastian talked about BGP as well. BGP means Border Gateway Protocol, and it's the protocol that guides routing decisions across the internet. ISPs essentially follow this standard when routing all the packets you need to actually see a website or download something on the internet. It's based on a system of trust, basically. The ASes — the autonomous systems run by the ISPs — basically run their networks in their own way. So the internet is, I guess you can say, quite vulnerable to how they operate, but it also benefits from these ISPs, because someone is essentially putting this network together. These ISPs have different priorities: most of the time they look at ensuring high availability, lower latency, fewer packet drops, minimizing traffic costs and maximizing revenue. Because at the end of the day, these ISPs are businesses, right?
So it can mean a lot of different things for us as users on the internet. The routing decisions made on the internet are basically dictated by what the ISPs set in their systems. Sometimes they may take you on a detour; sometimes they'll deliberately avoid a tollway just to save on costs, because they'd have to pay another transit provider out there to access a particular route. So often what we see is decreased user performance: the paths are congested, they can be unreliable, and the ISP chooses that route just because it may be the cheapest or the most convenient for them. So is there anything you can do about it? Good question. There are some things you can do. There's a technology out there called Argo Smart Routing. It works a little bit like Waze, in that it relies on real-time data about how the network is actually performing. In our case at Cloudflare, we have our own preferred network as an example, and Argo goes for the best route, not necessarily the cheapest route. It also means connections can be reused, congestion can be avoided, and, as a general outcome, there are faster loading times for the end user. And that's the goal: as a website owner or provider, you ideally want that website to load as fast as possible. This picture just depicts that in an infographic. You can see here again, one path is more congested, one is not, and one relies more on the preferred network, whereas the other may go on a route that is simply the most convenient and cheapest, to put it quite bluntly. So yeah, we've done some studies. I'm actually on the post-sales team at Cloudflare, and we've done some studies with current customers on the improvements with and without Argo.
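The cheapest-route-versus-fastest-route idea can be illustrated with a toy shortest-path search (this is my own illustration, not Cloudflare's actual routing algorithm; the cities, costs, and latencies are made up):

```python
import heapq

# Toy network: each edge carries (transit_cost, latency_ms).
# Routing by cost and routing by latency pick different paths.
EDGES = {
    "SIN": {"HKG": (1, 40), "SYD": (5, 70)},
    "HKG": {"LAX": (1, 200)},
    "SYD": {"LAX": (5, 100)},
    "LAX": {},
}

def best_path(src, dst, metric):
    """Dijkstra over EDGES, minimising metric(cost, latency) per edge."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        total, node, path = heapq.heappop(heap)
        if node == dst:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, (cost, lat) in EDGES[node].items():
            heapq.heappush(heap, (total + metric(cost, lat), nxt, path + [nxt]))
    return None

print(best_path("SIN", "LAX", lambda cost, lat: cost))  # cheapest: via HKG
print(best_path("SIN", "LAX", lambda cost, lat: lat))   # fastest: via SYD
```

The cost-minimising search picks the HKG path (total cost 2, but 240 ms of latency), while the latency-minimising search picks the SYD path (170 ms, but five times the cost) — which is the trade-off an ISP optimising for its own bill makes against the user's experience.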
So we've seen some significant improvements for customers whose traffic has to cover a lot of hops, or a lot of distance — customers with a lot of international traffic, or traffic that needs to travel quite far — because there are more possibilities of that traffic being sent down unpreferred routes, the cheapest routes rather than the ones that would get it there faster. So I've put up some stats from some of my customers; I'll show you some public ones first. When sketches.com activated Argo, they saw a 200 millisecond improvement in performance. A company in the UK did the same and saw their page load time improve from 1.54 seconds to 600 milliseconds. And a company in Australia saw a 50% improvement just by using Argo. Additionally, they also used our DNS, which is a separate component, and that allowed the process to be faster too. If you remember those four phases from earlier: one of the phases is DNS, and the others are the steps that get accelerated with Argo. Another customer in Australia who used Argo saw a 45% improvement in performance, and in India we saw a customer get a 31% improvement. So you notice the pattern here: it varies customer to customer, and it's really based on the customer's actual traffic — so depending on your traffic, the performance gains may vary. I had another customer about 45% faster, and one in Thailand, which I found quite interesting, had a 59% improvement. Yep, sorry — yes, they're already using the CDN, that's right. They're using the CDN plus Argo as well. Yeah, that's right.
Yeah, so at the fundamental base level they're using a CDN already, and they're just using Argo on top, in terms of these statistics and how to read them. So the key takeaway, I guess, from this presentation — because I know there's a lot to digest at the end of the day — is that you can consider using a content delivery network rather than just having a lone server serve traffic for your customers. And the other takeaway is that the internet doesn't always follow the shortest routing path, and there may be a variety of factors causing that. So just something to keep in mind — you can definitely consider a content delivery network. All right, and then this is just something that I kind of love at Cloudflare, and that's the 1.1.1.1 service we offer now. You can actually change your DNS at home or on your laptop so that you use our DNS to run your queries. When you search google.com, the query goes straight to our resolver, and we give you back the IP address — that happens almost instantaneously in the backend. I find it quite fast personally, but it's up to you and your personal experience whether you want to try it and see if it actually benefits you. Google has something similar at 8.8.8.8. Yeah, and I've just put a bunch of links at the end, and that's pretty much it. I really appreciate your attention, and hopefully that was informative in terms of giving you a quick rundown of how you can improve website speed on the internet. I guess, Sebastian, over to you. Questions? Do you have any questions? Someone asked: when is Cloudflare going to support QUIC, the QUIC protocol? Sorry — good question. Any more questions? Man, we crushed our brains tonight. Okay, in that case, a round of applause please for Vu. Thank you all.