Okay. Thank you guys for staying so late. I know it's been pretty brutal, but I hope you have found it interesting and useful, as I have. We have a lot of sessions after lunch about the Google machine learning APIs and NLP. I'll be doing the presentation on the language APIs as well as the application of ML and NLP. So I want to switch gears a little bit, because all of that happens within GCP, the cloud platform, or an enterprise data center. What I want to switch gears to is what happens to your end users at the end of the day, the people who are going to consume the content or the application or the service. Just a quick advertisement on Fastly. It's a company which was born in 2011, and it was listed about six months back. Unfortunately, I joined the company after it was listed, so I did not get any stock. That's a bummer. But I still joined the company because of the technology. I'm really, really impressed. Just a quick check: does anybody know what a CDN is? Okay, good. For the rest of you, because I did a poll early in the morning and it seems that not a lot of people know what a CDN is, I'll do a quick overview of what a CDN is as well as what a modern CDN actually looks like. Fastly has many POPs around the world. POPs means points of presence. What it does is deploy very big servers around the globe so that you don't have to. So that's one. Two is that we have created new technologies to better serve and deliver content, applications and services. Think about it: in the 1990s, and a lot of you are very young, so you probably do not know what the internet looked like in the 1990s, I had dial-up at 2,400 bps on a Hayes modem, and it was terrible. Right now we have blazing fast internet speeds. But during that time, the internet was relatively unstable.
Peering relationships and transit links between carriers were problematic, so you got packet loss. Quick question: who knows TCP/IP rather well? Great, fantastic. So you guys know, right? Between two machines, or between a browser and a web server, you have the base protocol of the internet, TCP/IP. You've got SYN, SYN-ACK, and you are able to communicate in a reliable manner. Unfortunately, because it's a reliable protocol, which is good at a time when the internet is problematic, if there was packet loss, it does a retry after a timeout. The sender will wait, say, three seconds and then resend the packet, based on the congestion window size and things like that. I'm sorry, it's getting a little bit warm. Okay, so for modern servers, instead of using traditional hard disk drives, what Fastly, and modern CDNs generally, did was use SSDs, solid state drives. The advantage of SSDs is very high I/O operations per second, and there are no moving parts and so on and so forth. Now, the problem with SSDs is that they are liable to fail after some time under heavy reads and writes. So Fastly actually created a file system just to address that, and from 2011 to 2019 it's been relatively stable with that new file system we created. So that's one. The other thing about the internet: it's quite reliable until it's not. The internet itself is a network of networks, and how they route traffic is via this protocol BGP, the Border Gateway Protocol. It's a very good, very lightweight protocol by which BGP routers exchange path-vector information about which route to send a packet down. However, it does not take into account load or latency.
So what Fastly actually did, and our founder doesn't like routers very much, because they're really expensive for what they actually do, was build custom BGP routing rules into the cache machines themselves. So think about it. Take Singapore as an example: you're connected via SingTel, SingTel is connected via StarHub, it goes through Level 3 and then reaches Deutsche Telekom to get to your end user in Germany. Say this path is pretty crappy right now. What do you do? BGP doesn't allow you to change anything, because it's not based on performance or anything like that; unless the whole path fails, the route doesn't change. With this intelligent BGP routing table, if we know that this path is not working, the server itself knows how to route via a different carrier. So that's pretty cool. From a networking standpoint, that's a pretty cool thing. Wait, hang on. Sorry. So how does latency work on the internet? Here's a quick view. Singapore to London, 250 milliseconds. It's not rocket science; this is based on physical distance, physics, the speed of light, so it's about there. The other factor is the peering relationships between the telcos and carriers from Singapore to London. New York, 300 milliseconds. Tokyo, about 200 milliseconds. Again, it comes down to two big factors: one is physical distance and one is carrier relationships. In China, where the north is China Telecom and the south is probably dominated by China Unicom, the latency within the country can be as high as 250 milliseconds. It's not because of physical distance; it's because they don't work together. So that's the state of the internet. The other things are packet loss, network issues and unknown errors.
So when you think about performance from an enterprise standpoint, when you do ML or various other things, you are always thinking, or one will always think, that it's within an environment you can control. The internet, you can't. What you're thinking about there is in the realm of microseconds or nanoseconds, CPU speed, unless you're doing a bunch of data processing that takes seconds or minutes. But the internet is milliseconds at least. Those of you who are familiar with networking will know the MTU of the internet is typically 1,500 bytes, and the data payload is 1,460 bytes. Take a 100-kilobyte base HTML page, divide by 1,460 bytes, and you get the number of packets. Based on the congestion window and the TCP protocol, it can take several seconds to transfer 100 kilobytes if the latency is about 250 milliseconds. You can do the math. That's theory, and TCP has improved a lot since, but it still takes significant time, because a millisecond is orders of magnitude longer than a microsecond. So with a content delivery network, as I mentioned, you deploy POPs around the internet so that they are close to your users. Typically, if there's a POP within London, it's connected less than five milliseconds away. What you want from a CDN is to cache as much content as possible so that your end users do not need to come back to your origin server, because that just takes too long. So you get faster response times and a lot more stability. But a CDN is not all sunshine and roses. With an on-prem server, the physical kind of server, you have absolute control and visibility. If something goes wrong, you can go to the machine, delete content, purge content, change the configuration file, look at the logs; you can do all of that. With a cloud or content delivery network, you can't.
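As a rough worked example of that math, assuming classic TCP slow start, a 1,460-byte payload per packet, an initial window of one segment, and no packet loss:

```latex
\left\lceil \frac{100 \times 1024}{1460} \right\rceil = 71 \text{ segments}, \qquad
2^{n} - 1 \ge 71 \;\Rightarrow\; n = 7 \text{ round trips}, \qquad
(1 + 7) \times 250\,\mathrm{ms} = 2\,\mathrm{s}
```

Slow start roughly doubles the congestion window each round trip, so about $2^n - 1$ segments have been delivered after $n$ round trips; with one extra round trip for the TCP handshake, a 100-kilobyte page already costs around two seconds at 250 ms latency, before any packet loss or retransmission timeouts push it higher.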
And the reason is that to control that particular piece of the service, you need to manage multiple, multiple cloud servers around the world. For Fastly, it's about 66 mega POPs, but for some other services it's 300,000 or 400,000 servers. So it's incredibly difficult, and it makes sense, right? For a business, for example, I think there was a colleague from SPH here, if they were to publish content, or user-generated content, which turns out to be bad, it's false, or somebody made a mistake, and you want to remove that content, it's not simply a matter of going to the server and deleting the file. You need to be able to go and purge the content across all servers, and that takes a long time. So that's one. Second thing: little visibility. To collect and collate all your logs across the many thousands of servers takes time, sometimes to the extent of a day. What this means is that when you deploy a configuration and publish your content, you actually do not know whether it's working or not until you receive a customer call saying, hey, I can't see your content, or there's a 404. Last thing is that you have limited control. To change anything in the configuration, you probably have to call somebody, book an appointment or a meeting, or raise a service request for somebody to actually change the configuration. Now, this was fine and good in the 90s and early 2000s, because development life cycles were SDLC, waterfall models, where you fill in a feature request, a team takes it in and plans the project, and six months later they deploy and publish the content or the service. This used to be it. Today's world is all about DevOps and CI/CD. You can actually publish your microservice like 20 times a day.
How are you going to mess around with a CDN configuration that takes a long time to propagate? It doesn't make sense, right? So the new technologies today actually address a lot of this. First of all, and I think this is a game changer, you're able to purge your content sub-second. If somebody uploads a crappy diagram, a naughty picture or whatever, to your user-generated forum, then to remove the content you would typically go to your server, delete it, and then go to the configuration to purge the content, and it takes time. Now you can actually do it sub-second, across all the global servers. And the reason it can do this is a bimodal multicast algorithm, which is pretty cool. We can talk about bimodal multicast later, because I'm pretty excited about that. The other thing is that the technology today allows real-time log streaming. This has two purposes. One, you give a lot of visibility back to the customer, the enterprise, the organization. When you deploy a configuration, that is to say you publish your service, you actually know whether it's working or not almost immediately. Now, think about it in terms of many, many things. One is web. For a web application, and I'll talk about it later, you can have instant logs. The other thing is flexible and real-time config changes. You're able to deploy your configuration to all these thousands of servers almost immediately. The last time we did an experiment, it was about 13 to 15 seconds. Singapore alone is probably less than 10, but globally it's about 13 seconds. You're able to do it well under 30 seconds globally, and it's all via API. Now, quickly, edge security. You guys probably already know about DDoS. DDoS is nothing new, let's put it that way. There's a lot of traffic, junk traffic.
The aim is to bring down a service. But think about it: if you're behind a CDN, and your CDN only accepts valid HTTP and HTTPS requests on ports 80 and 443, it's a no-brainer. Anything at layer 3 and layer 4, that means UDP floods, SYN floods, ICMP floods, everything is automatically removed or dropped. It's a perfect defense, because it only listens on 80 and 443 with a valid host header. So that's cool. The other thing is WAF, a web application firewall. It applies OWASP Top 10 type rules so that you are able to protect your web service or web server. Those who don't have one should have a WAF. But the problem with a WAF is that it's a bottleneck for requests. If somebody wants to DDoS you, they DDoS your WAF and then you're dead. So then the concept is a cloud WAF: you move it out and you have a lot of WAFs to defend you. But the challenge is, again, the same challenge of any edge or cloud type defense mechanism, which is that you have limited visibility and configuration changes take a lot longer. The new technology, because, as I just mentioned, you've got instant configuration and instant logs, lets you manage your cloud WAF as if it were an on-prem WAF, with the benefits of a cloud WAF. This is really important. A couple of other edge applications which I think make a lot of sense. One is load balancing. If you want to load balance within, let's say, an enterprise data center or a cloud, between different microservices, that's cool. But take a step back to the end user's standpoint, the consumer or your customer, wherever they are: the further you are from the service when making load balancing decisions, the better. The reason is networking: from out near the user, I can see the traffic conditions on the path from here to each candidate service, whereas if you sit close to the services, you have less visibility and intelligence.
That's why, from a load balancing standpoint, it's best to be as far away from the particular service as possible. Image optimization: there's a lot of image optimization technology out there, whether in a data center or in a cloud. But think about it: if the image optimization technology, or the product of the image optimization, is nearer to the end user, a lot less time is needed. For example, sir, assume he's your image optimization engine, and you're the end user. If you retrieve content from him, and there are a thousand of him around the world, they're able to generate the perfect image for your device, in the correct resolution and correct size. Isn't that a lot faster than coming to me, the central cloud or your data center? It just makes a lot of sense, and once he delivers an image to you, he's cached that particular size and resolution, so subsequently everybody gets it from his cache, which is less than five milliseconds away. Here's a use case we have done, where customers have an on-prem type environment and want to move to GCP. I mentioned the edge load balancer concept, and it's similar: you're able to use the Fastly, or edge, network to slowly transition your traffic, without any problems at all, to a GCP type environment. Okay. I have with me Atsushi, a Systems Engineer from Japan. He'll work through some actual code, or VCL, for you guys. Thank you. Oops. Okay. So let me quickly introduce what our configuration VCL looks like and how to test and deploy the code to our platform. Before diving into the actual code, here is the Varnish request workflow.
So Varnish has a state machine with several states, starting from vcl_recv, which is when our server receives a request from an end user. Then our server checks whether the object is already in our cache or not, and depending on the result, it goes to a cache hit or a cache miss. vcl_miss means there's no cached object for the request, so we get the object from the origin in vcl_fetch; and vcl_deliver is where the actual object is delivered to the end user. After that we send the log. So this is the basic request workflow of Varnish. Okay, so let's start with some very simple code. The first part is under vcl_recv, which means it runs when we receive the request from the end user. As I commented, the first line adds a new custom header, setting X-New-Header to a value; with this simple code we add this header to the request from the end user. The second line modifies an existing header, so we can override existing headers as well; in this example we're overriding the User-Agent to "my test user agent". The third piece of code is under vcl_deliver, which means we modify the response headers from our server to end users. This one unsets the HTTP Server header, so we just remove that header from the response from our server to the end user. And let me show our test tool. Oh, how can I? Okay. It seems I need to read from here, yeah. So this is Fastly Fiddle, which is our configuration test tool. That is interesting, because this is a VCL CDN configuration, but we still have a developer test tool. You can put in your origin server, whatever it is, you can use your actual web server, and you can put VCL code here, in each subroutine. You can also set a path, change the method, and put header information here. You can even put a request body here. Then, if you click the run button, we first deploy this code onto our network and actually send the request.
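A minimal sketch of the snippets just described, in Fastly-style VCL; the custom header name and values here are placeholders, not the exact ones from the demo:

```vcl
sub vcl_recv {
  # Add a new custom header to the request forwarded to origin
  set req.http.X-New-Header = "hello";

  # Modify an existing header: override the User-Agent
  set req.http.User-Agent = "my test user agent";
}

sub vcl_deliver {
  # Remove the Server header from the response sent to the end user
  unset resp.http.Server;
}
```

Note the split: `req.http.*` changes affect what the origin sees, while `resp.http.*` changes in vcl_deliver affect what the client sees.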
And it will show the actual request headers and all the information related to the request. So let's look at each piece of code. In this code, vcl_recv sets X-New-Header. Here is the request header from the end user to Fastly: there is no X-New-Header, because that header is not included in the request from the end user. However, because of this code, we add the header to the request from our server to your origin server. So basically this simple line of code adds the header. The same happens with the User-Agent. The User-Agent from the end user to the Fastly server is Mozilla and blah, blah, blah, a long user agent. But we override it, so the User-Agent header from Fastly to the origin server is "my test user agent". And then this one deletes a header from the response. You see the server information in the response from the origin to Fastly, but for some reason you may want to hide the origin server information. If you add this one line of code, there is no Server header in the response from Fastly to the client. Correct. So this is a very simple sample. But you can also add conditions to the VCL code. In this case, when we receive the request from the end user, we check the User-Agent, and depending on the strings in the User-Agent, we set X-Is-Mobile to true or false. You can also add multiple conditions to the request. So, for example, if the request has a cookie desktop_mode equal to 1, it always goes to the else condition. For the sake of time, I'll skip the test tool for this one. You can also rewrite the URL. When we receive the request from the end user, we can check the request URL, and if it matches the conditions, we can override the request URL. So you can directly change the request to get different content, without changing the URL the end user requested.
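The conditional and URL-rewrite examples just described might look roughly like this in VCL; the exact User-Agent patterns, cookie name, and paths are assumptions for illustration:

```vcl
sub vcl_recv {
  # Classify the client by User-Agent substring
  if (req.http.User-Agent ~ "(?i)(mobile|android|iphone)") {
    set req.http.X-Is-Mobile = "true";
  } else {
    set req.http.X-Is-Mobile = "false";
  }

  # A cookie can force desktop mode regardless of device
  if (req.http.Cookie ~ "desktop_mode=1") {
    set req.http.X-Is-Mobile = "false";
  }

  # Rewrite the URL before the cache lookup; the end user
  # never sees the internal path
  if (req.url ~ "^/old/") {
    set req.url = regsub(req.url, "^/old/", "/new/");
  }
}
```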
We can also configure multiple backends, as Zeck explained. We can set up multiple backends with this simple code: it just checks the URL and then decides which backend to go to. But you can also set up failover, or percentage-based or random load balancing; we have multiple methods for choosing which backend to go to. We also have geo information, so you can get geo information from our dataset. This one is very interesting, because it just goes to vcl_error, which means that when we receive the request, we send it to the error routine, create the response at our server, and send the response to the client with the client's geo information. This is basically an API service: we receive the request and return a JSON-format response to the client. What is interesting about this sample is that there is no origin server. We just receive the request from the end user and then return a response to the client. So this is a true serverless application. Actually, we have a bunch of sample code like this on our website. Let me show you. I can't find my mouse. On our website we have a lot of sample code; I don't know how many. If you click one of them, you go to our test tool and you can actually test the behavior. You can also clone it, modify the code, test it, and create your own code for your own use. Do we still have time? Please let me quickly demo our product. This is just a very simple website. This page refreshes automatically every second. At first it's uncached; this shows the initial request going to the origin server. Now this page has refreshed, but it's delivered from our cache server. We can purge this in real time, via the UI or via an API command. So if I send this request, the date should refresh. Try it. Yes. I will try again. Yeah. So it's almost instant.
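The multi-backend routing and the serverless geo endpoint described above can be sketched as follows; the backend hostnames and the `/api/` and `/geo` paths are placeholders, and custom status 900 is just an internal signal used to jump into vcl_error:

```vcl
# Hypothetical origins; hostnames are placeholders
backend api_origin { .host = "api.example.com"; .port = "443"; }
backend web_origin { .host = "www.example.com"; .port = "443"; }

sub vcl_recv {
  # Route by URL prefix
  if (req.url ~ "^/api/") {
    set req.backend = api_origin;
  } else {
    set req.backend = web_origin;
  }

  # Serverless endpoint: jump straight to vcl_error, no origin fetch
  if (req.url == "/geo") {
    error 900;
  }
}

sub vcl_error {
  if (obj.status == 900) {
    # Synthesize a JSON response entirely at the edge
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "application/json";
    synthetic "{\"country\": \"" client.geo.country_code "\", \"city\": \"" client.geo.city "\"}";
    return(deliver);
  }
}
```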
If you're not familiar with CDNs, this may not look like much, but if you know CDNs, it's very, very exciting. It's game changing. Also, you can create a configuration very easily. This is our UI; you can create a configuration via the UI here, and it will generate VCL code like this. That covers our configuration. But if you are technical enough, you may not like the UI. I know I don't like UIs. So you can write your own code using this snippet feature. For example, you can write your own code directly and upload it into our VCL, which is easier, more powerful, and more flexible. So let me try how long it takes to deploy the code. This is just a name: I want to block access from Singapore now. You can also choose the response code. You can choose 200 without content, or 204, or 404, or 403, whatever you choose. You can also write HTML code here. This is a very simple one, but you can write the body and the information here, and just create it. But right now there is no condition, so all requests to this service would get the error. So I will attach a condition. I already prepared this condition: if the geo country code equals SG, and my IP is not on the whitelist, then send that response. Okay, let me activate this. Normally it typically takes like 10 to 15 seconds. This page refreshes automatically every second, so it should be blocked sometime soon. Yeah, and you also see the content I created. Again, if you are not familiar with CDNs, this is nothing, but if you know CDNs, that is very fast. Yeah. Okay, so let me quickly touch on the logs. This is the last one. We can create a log and send it in real time. We have a lot of predefined endpoints, and you also have syslog. My sample code has two log endpoints. You can also set a condition for sending the log.
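The Singapore block just demoed might be written like this in VCL; the allowlisted IP, the ACL name, and custom status 618 are illustrative assumptions, and the HTML body stands in for whatever you typed into the UI:

```vcl
# Hypothetical allowlist; replace with your own IP
acl office_allowlist {
  "203.0.113.10";
}

sub vcl_recv {
  # Block Singapore traffic unless the client is on the allowlist
  if (client.geo.country_code == "SG" && !(client.ip ~ office_allowlist)) {
    error 618 "blocked";
  }
}

sub vcl_error {
  if (obj.status == 618) {
    # Rewrite the internal status into a real 403 with a custom body
    set obj.status = 403;
    set obj.response = "Forbidden";
    set obj.http.Content-Type = "text/html";
    synthetic "<html><body><h1>Not available in your region</h1></body></html>";
    return(deliver);
  }
}
```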
So maybe you want to receive all logs, or you want to receive a log only on an error, like a 404 or 500. You can also customize the log format. For the log to the Sumo Logic endpoint, I made this JSON-format log, and it includes a lot of information related to the request. It's very easy to customize; you can just add variables to this log information. And then this is the log dashboard. This is a sample dashboard I created from the log information I send; this is Sumo Logic. You can create whatever dashboard you want. The good thing about this is that you can customize it, and it's real time, so you can create whatever report you need, to report to your boss, or to check your service status, or anything. Yeah. Okay, let's get back to the presentation. So we're very close to the end. We have a free developer account, so everything I demoed today can be accessed. You can access it from the Fastly site; you sign up at the top right corner, and you just need to give your name, email and phone number, and you can create a test account. So if you're interested, please try it out and contact us. Thank you.