Hi, everyone. Let's start with introductions. I'm Léon Brocard, a sales engineer at Fastly. I'm David Strauss, CTO at Pantheon. And I'm Rudy Grigar, infrastructure manager at the Drupal Association for Drupal's sites. And we're here to talk to you about the value of CDNs and caching, and all the things that go beyond the origin, which, given that you're here at DrupalCon, is probably Drupal.

A lot of you probably know that Drupal takes a long time to render pages, at least in terms of computer time. Even a great site takes a few hundred milliseconds to render a page. A lot of caching is about avoiding that as much as possible, both to reduce the amount of time your users wait on pages and to make your origin resources stretch further: if 99% of requests are handled by your edge cache, then only 1% of your overall traffic needs to be handled by your origin. Did you have some topics to start us off?

Sure. So I'll start with a website that all of you have visited before, Drupal.org. It's hosted on Fastly. Why is that? Well, a couple of years ago, Drupal.org was experiencing all kinds of latency issues and slowdowns, and I basically got hired on by the Drupal Association full time to work on making it faster. What I found was that the outbound link we have from the origin servers for Drupal.org was hitting its 30-megabit cap every five minutes or so. That correlated with updates to Drupal.org: people's crons run about every five minutes, with an even bigger spike on the hour, and we were dropping packets. So we initially rolled out a CDN and saw some reduction in the number of dropped packets, and things got better. But it was kind of a quick "things are on fire, we need a CDN" move, and we put one in place. But it was a time-based CDN.
So every 30 minutes the CDN would purge whatever it had in its cache, and we would re-cache and dig into the origin again. So it was kind of a middle ground compared to where we ended up this last year with Fastly. With Fastly, we're doing basically that, but a much better job of it. For all of the packaging that's happening on Drupal.org for new releases, for updates, security data, that sort of thing, we dynamically purge via Fastly's API when we build and package releases, so that those cache items are only purged when there's a new release available. So only new requests hit the origin, and existing requests stay cached for up to a year in Fastly's cache. And we've combined that with Fastly's Origin Shield feature, which is some fancy Varnish configuration; it's distributed Varnish, in a sense. The VCL is there to do origin shielding and keep requests fast.

This is actually updates.drupal.org in Fastly's dashboard that you're looking at right now. If you wait until a five-minute mark here, which will be any second now, you'll see a big spike come through. But most of the hits are happening with a 99-ish percent hit ratio, so there aren't a lot of requests actually hitting our origin, and our origin bandwidth is under 10 megabit or so at any given time, with all of this happening right now. So that's sort of how we went down the Fastly rabbit hole. We were already using Varnish on Drupal.org for all the sites and services we run, so the move to Fastly was pretty natural: the VCL is there, we can upload custom VCL that we already have, that sort of thing. And overall it's been a really great experience just being able to manage the service that way.

On Pantheon we've mostly worked with Fastly for very large customers and networks of sites, where they need to have points of presence and need to be able to handle huge traffic spikes.
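The dynamic purging described above can be sketched against Fastly's purge-by-surrogate-key API endpoint (`POST /service/<service_id>/purge/<key>` authenticated with a `Fastly-Key` header). This is a minimal sketch only; the service ID, key name, and token shown are invented for illustration.

```python
# Sketch of building a Fastly purge-by-surrogate-key request.
# The service ID, surrogate key, and token below are placeholders.

def build_purge_request(service_id, surrogate_key, api_token):
    """Return (method, url, headers) for a purge-by-key API call."""
    url = "https://api.fastly.com/service/%s/purge/%s" % (service_id, surrogate_key)
    headers = {"Fastly-Key": api_token}
    return ("POST", url, headers)

# A packaging script could call this after building a release and send
# the request with any HTTP client, so only that release's cached
# objects are invalidated while everything else stays cached.
```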
Some of the sites on the platform are top-100 websites, and when they get in the news or to the top of Google results, they're just going to get a giant load of traffic. It's not an attack; it's just legitimate traffic from people trying to load the site. And then it compounds with the situation of the modern web in 2016. How many of you have heard about Google's changes to search ranking with HTTPS? Okay, so a sizable amount of the crowd. But delivering something like that is not just free, where you throw it onto your website and there's no impact on performance. Every time you introduce a service like that, especially modern HTTPS and TLS negotiation, it means multiple round trips to the server. In a lot of cases, if you're just implementing HTTPS on your origin boxes, then if I pull the site up on my mobile device, it's going all the way back to the origin servers for multiple round trips before it even requests the page. And now that HTTPS is pretty necessary for sites to max out their search ranking and other analytics, and even for providing things like AMP pages to Google so that you get accelerated mobile rendering, it becomes more and more important to optimize that. We've done that for some of our customers by deploying CDNs, including Fastly. By having different points of presence around the world, instead of a visiting device doing multiple round trips all the way to an origin in, say, Chicago or Virginia, it just does them against the local point of presence, and Fastly has dozens around the world. The POP can do that negotiation, and then Fastly can maintain a much more persistent encrypted connection back to the origin, so mobile devices get a much better experience in terms of latency, and you can scale out that traffic without having the origin bear the brunt of it.
And rather topically, HillaryClinton.com, which is all HTTPS, is hosted on Fastly and was mentioned during the debate last night, and we had a huge spike on the network. It was very interesting.

So I'd like to take a little step back. Does everyone know what a CDN is? Because we've mentioned it before. It's a cache: we cache your content close to users around the world, so your requests have less far to go. If you're a user in Australia but your servers are in Frankfurt, you get the cached content from a server in Sydney, for example, rather than having to go all the way back to the origin. And we're based on open source software called Varnish. How many people have heard of Varnish before? Excellent, that's great. Varnish is an open source reverse proxy. You can basically take the Varnish configuration from your own servers and put it onto Fastly, and within about three seconds we'll push it worldwide, so you can have all sorts of very clever configuration. But most of the time you don't need to do that. The Drupal plugin for Fastly does just the right thing, especially for Drupal 8 with cache tags. Drupal 8 is much, much cleverer at caching and at deciding where the dependencies are. So if you change one object on one page, the plugin will send an invalidation to Fastly and invalidate all the pages which have that piece of content on them, within around 150 milliseconds. Newspaper customers like The Independent in the UK use this because they want to have up-to-date news.

And even if you have a highly interactive site, there are lots of assets, CSS, JavaScript, images, etc., that get loaded on every page load, even for authenticated users, and those just aren't varying that much.
And having a CDN, for say a user in Australia, is the difference between them hopping over the Pacific for every single one of those assets, versus loading 99% of the assets from the CDN and only going back to the origin, even for a signed-in user, for one request. So you're only adding the latency of those trans-Pacific or trans-Atlantic cables for that one request. And that actually provides a pretty good experience for users around the world, because most of those requests are just for assets that will be cached close to them.

And with HTTP/2 now, which is supported as an opt-in on Fastly, devices like mobile phones can roll those requests up: they use a single multiplexed HTTP connection to efficiently batch the requests to the CDN and pull down all of those dependent assets much, much faster. It takes care of, at a lower level, a lot of the concerns that Drupal has historically tried to solve through CSS and JavaScript aggregation, by packing those into single files; HTTP/2 allows those files to all be pulled in batches. Even more than that, even if you're delivering custom content to users over HTTP/2, you can advise the system that certain assets are going to be necessary for the page load, and then the CDN server can actually push those down to the client, like a mobile phone, before the client even realizes they're necessary for rendering the page. That means you're even reducing the latency that would normally be incurred on a mobile network, because normally the client would load the page, process it, figure out what assets it needs, and then make another round trip over the mobile network, even if it's going to a local point of presence. HTTP/2 allows the push to happen there.
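The server-push hinting described above is commonly expressed as a `Link: <path>; rel=preload` response header, which an HTTP/2-capable edge can translate into pushed streams. A minimal sketch of building that header; the asset paths used in the example are illustrative, not from the talk.

```python
# Sketch: build a Link header advertising assets for preload/push.
# An H2-capable edge can use this to push assets before the client
# parses the page and requests them itself.

def preload_header(assets):
    """Return a Link header value for the given asset paths."""
    as_type = {"css": "style", "js": "script"}
    parts = []
    for path in assets:
        ext = path.rsplit(".", 1)[-1]
        kind = as_type.get(ext, "image")  # default for images/fonts etc.
        parts.append("<%s>; rel=preload; as=%s" % (path, kind))
    return ", ".join(parts)
```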
So you're just axing away at latency: everything from TLS negotiation, to figuring out what assets are necessary for the page, to going over trans-Atlantic and trans-Pacific cables to pull those assets. And you can knock hundreds and hundreds of milliseconds off page load times this way.

Did we want to open the floor? Sure, I think we can take questions. Yeah, we mostly wanted to keep this as an open floor. And I know this is a Fastly session, but most of these concepts apply to most CDNs you'll be working with. So feel free to ask about stuff, and I can tell you at least how Fastly handles some of these things, but they're general conceptual issues of web architecture as far as I'm concerned. We have a microphone because this session is getting recorded, so I would appreciate it if people could use it for asking questions. It's not wireless, but I think it can get passed around.

[Inaudible question about HTTPS between the CDN and the origin.] I could take this one. So at that point there are two connection paths: from the user to the CDN, and from the CDN to the origin, your servers. If you terminate on the edge, then you can use either HTTP or HTTPS back to your origin. And there's very little overhead, because we keep the connections open for as long as possible, so it's actually not as bad as you might think. Yeah, it's not nearly as heavy as the device itself, the end user, negotiating all the way back to the origin. Because of the way a TLS connection is structured, it happens in two phases. First is the asymmetric phase, which is really expensive, for negotiating a shared secret. And then once you have a shared secret, almost any modern device, even a mobile phone, is able to process the encryption at wire speed, once that negotiation occurs.
But a CDN like Fastly is going to do that negotiation once, and it's going to either keep the connection open or cache the shared secret, so it doesn't have to undergo the full setup every time it talks to the origin. Moreover, Fastly has a neat option where you can put in your own client certificate to authenticate Fastly's connection back to your origin. So if you're really up on how things like X.509 work, you can make a very clean connection back to the origin, where you've authenticated with 100% confidence that the connection is coming from Fastly and that all the rules you've put in place at Fastly have been applied. So if you're doing things like mitigating an attack or trying to protect your origin, it provides a good facility for doing that which doesn't involve constantly having to update IP lists, the traditional way of validating that something came from a CDN.

You'd like to ask a question, but the mic is far away. We can repeat it. What about authenticated users and the individual blocks on their pages that may have individual information on them? And what about searches, for example a complex Solr search that's passed in the query string of the URL? How would you cache those?

Okay, so the question is about personalized content. Pretty much. Do you want me to take it? Sure. Okay. So there are a few different ways of handling that sort of assembly of a page. In Drupal 8, we now have something called BigPipe that allows you to pump out that content through a streamed request. Implementing BigPipe where you put ESI at the bottom of the page, to pull the BigPipe content with Fastly, would allow the initial page structure to be cached and shipped out to the client before Drupal even knows the request is coming in.
And then Drupal can handle shipping the customized content through BigPipe as an addendum to that request, so that the client still sees it as one big request, where the initial data came in very fast and the subsequent data comes from Drupal. Other ways of doing it would involve separate requests for that content, where some of the requests are cached and some go back to the origin.

I don't know if you can show the graph on there. You can actually see the hit ratio graph here, which is really showing... this is a particularly exceptional hit ratio, because this is very, very... I'm sorry, what? Yeah, switch to a more normal site. Yeah, this is a more typical site. It's still a very good hit ratio. The typical site is still going to have a pretty high one, but the reason to pay attention to it is that you want to make sure that even with customized content, you're not skipping the cache. One of the common ways to accidentally skip the cache is this: once you've set up a session, that header comes in, and you might have configured Varnish with VCL to say that any request coming in with a session ID bypasses the cache. That'll give you a poor hit rate. But you can put a rule into something like Fastly that says: for all the static assets, ignore whether they have a session or not. You could do the same for certain block requests as well, if they were dynamically added to the page. Ultimately, you just want to look at your hit ratio. Another thing you can do is configure Fastly to ship its log data out to another service for analysis. Then you can look at what's hitting and missing, and depending on how you configure your VCL, you can even have it hint at why. And that can help you optimize the experience for an authentication-heavy site.

And speaking of log shipping, we use that pretty heavily on Drupal.org, because we use those logs for download counts and project usage statistics.
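The "ignore sessions for static assets" rule mentioned above might look something like this in Fastly-style VCL. This is a sketch under assumptions: the extension list is illustrative, and real deployments would tune which paths are safe to serve without cookies.

```vcl
# Sketch: don't let a session cookie bust the cache for static assets.
# Stripping the Cookie header makes these requests anonymous and
# cacheable even for signed-in users. Extension list is illustrative.
sub vcl_recv {
  if (req.url ~ "\.(css|js|png|jpg|jpeg|gif|svg|ico|woff2?)(\?.*)?$") {
    unset req.http.Cookie;
  }
}
```

The same pattern could be extended to specific block or fragment URLs that are known to be identical for all users.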
So all the traffic that's coming into updates tracks what is being requested for updates; that's how those statistics get generated, and being able to ship those logs back to us for processing is a handy feature.

Another thing I heard you mention was ESI. Does everyone know what that is? Yeah, ESI means Edge Side Includes. It's a tag you can put into a page that says: at this point in the page, I want you to seamlessly integrate the response to this URL. So you can hit an initial page, and what Varnish does is ship out the data until it hits the ESI tag; then it notices, "oh, I need to pull this content now," but the rules for that content can be handled completely independently of the main content it's delivering. So you can ship out a framework, a skeleton page. I was actually talking to Fabian two days ago about using that to integrate with BigPipe, where even for authenticated traffic the initial part of the page comes cached out of the CDN and then Drupal dynamically handles the subsequent customization of the page. I don't know of a lot of production sites doing this yet. As people who were present for the keynote earlier today heard, BigPipe is still in beta in Drupal 8, but assuming it continues to mature, it should become stable in Drupal 8.3 or 8.4.

So we use... we're on Drupal 7. Yes. And AuthCache does ESI, so that's basically the same principle? Exactly. And you can totally use AuthCache with Fastly by using AuthCache's published VCL. That will properly handle the rule set that's necessary for AuthCache to determine which requests are keyed on a per-user or per-role basis.

Regarding the caching mechanism, how do you define which part of the page gets cached, apart from the ESI mechanism? I mean, do you read the path of the URL, or something else? You can use anything you want in Varnish and on Fastly.
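To make the ESI mechanism described above concrete: the page body contains an include tag, and the edge is told to process ESI for HTML responses. A sketch in Fastly-style VCL; the `/user/greeting` URL is a made-up example of a personalized fragment.

```vcl
# Sketch: enable ESI processing for HTML responses at the edge.
# The cached skeleton page would contain a tag like
#   <esi:include src="/user/greeting" />
# and the edge fetches that fragment (with its own caching rules)
# while streaming the rest of the cached page.
sub vcl_fetch {
  if (beresp.http.Content-Type ~ "text/html") {
    esi;
  }
}
```

The key point is that the skeleton and the fragment can have completely independent cache rules: the skeleton cached for days, the personalized fragment passed through to the origin.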
There are two systems on Fastly for it. There's a rule system with a GUI configuration, where you can put in URL patterns, header patterns, cookie patterns, things like that, and decide whether you're bypassing the cache or trying to hit it. And then, if you really need to, you can unlock full VCL, which allows you to actually write code, with literal if statements in it, to determine how to handle the request. You can use regexes if you want to. Do you want to show the VCL? Sure, I can show the VCL for this.

Is that also the point where you state the... Well, you can, but I would advise against it. I would encourage you to use standard HTTP headers where possible. Fastly will properly parse Cache-Control headers, and you also have the opportunity to use something called Surrogate-Control headers, which are specifically consumed by something like a CDN and then thrown away. So you can send one Cache-Control header that goes down to users' browsers, and Fastly won't touch it, and then set a Surrogate-Control header to tell the CDN how long to keep the page, if you want different timings. Drupal directly supports Cache-Control headers, though, and Fastly will support those out of the box.

A good example of that would be a live blog, where you want people to have live information. You make the Cache-Control header for the browser very short, say a few seconds, and then you cache on the CDN for a week, but invalidate the page on the CDN whenever there's new content. So the browsers keep coming back to the page and keep getting 304 Not Modified, but when there is new content, they get it straight away.

And when it comes to delivering things like 304 Not Modified, deploying something like Fastly is actually pretty essential to getting it right.
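The live-blog split described above, short browser lifetime and long CDN lifetime with purge-on-publish, comes down to sending two headers with different audiences. A sketch; the specific max-age values are illustrative choices, not prescribed by Fastly or Drupal.

```python
# Sketch of the live-blog header pattern: browsers revalidate every
# few seconds (and mostly get 304 Not Modified), while the CDN keeps
# the page for a week and relies on an API purge when content changes.

def live_blog_headers():
    """Return response headers for the live-blog caching pattern."""
    return {
        "Cache-Control": "public, max-age=5",    # seen by browsers
        "Surrogate-Control": "max-age=604800",   # consumed by the CDN, then stripped
    }
```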
Because if you just deploy your own fleet of Varnish boxes, they're going to independently cache the content, with different IDs for when the content was created and different ETags, and the ETag is what the browser prefers to use for validation. Fastly does request hashing at a higher level, I believe in the master VCL, where it determines which servers to send the content to, so that if you have a page cached in Fastly, requests will always hit the cache on the single system responsible for that piece of content. That ensures that if you want to deliver 304 Not Modified so browsers can revalidate their cache, they will revalidate efficiently. Whereas with a fleet of boxes, a browser could randomly hit a different one that has a different ETag, and then the browser will think "my stuff is too old" and pull the whole page down again. So a fleet of Varnish boxes is still better than nothing, but the kind of hashing that a system like Fastly does gets the hit rates a whole level above that.

I was thinking about the blog example. So we have the option to use Fastly to invalidate dynamically if someone posts something, so the page stays cached on Fastly's servers until someone writes and publishes something new. Is that something that can be worked out with Rules, or something like that?
You don't even need that. For invalidation you want to use the Fastly module for Drupal, which will talk to the Fastly API to invalidate the content. That way you can cache the content in the CDN for a long time; the module hooks into Drupal's APIs so that it knows when you create or update a node, and then it tells Fastly, "expire this." Fastly uses an asynchronous distribution model for cache invalidation that reaches all the points of presence around the globe in usually under half a second. I'd say a fifth of a second. A fifth of a second, okay.

Yes. Would you say there's no need for me to put another cache between Fastly and my Drupal site, or do I still need some other caching? Yeah, repeat the question, though. The question was: if I'm using Fastly or another CDN, and everyone accesses my site through it, do I need additional caching inside the site? Specifically for Drupal 8. Specifically for Drupal 8. I would say no. Drupal itself still needs to do all of its internal caching, but as far as adding another layer of Varnish or something like that: for Drupal.org we removed our internal Varnish completely, removed our load balancers completely, and we're sticking with Fastly's Origin Shield system, managing all of the edge from Fastly's interface and through Fastly. So all of the request caching and logic gets handled at Fastly. All of the requests hitting Drupal.org from around the world flow through the Origin Shield, which is in Seattle, and then to our data center, from the Internet exchange in Seattle down to Oregon. So all of the requests funnel through a single shield node, and that node probably already has whatever is being requested. It does request collapsing too, I believe, which is a nice feature: if there are two requests for the same thing, only one request comes back to our origin. And that has been working beautifully.
So I would say no, you can remove the Varnish you're running internally once you're using Fastly. This is not a general feature of CDNs: most CDNs have their points of presence, and when a point of presence misses, it goes all the way back to your origin. So if your traffic is pretty distributed around the world, you might still have a lot of traffic reaching your origin and might want to run a cache there. But with Fastly and the Origin Shield, you have a cache that can catch all traffic before it hits your origin, no matter where in the world it's coming from. So if you have a big event coming up and everyone requests the same page or the same object at the same time, you'll only get one request to your origin for it, even though thousands of people around the world are fetching those objects.

And Fastly can take it one step further than that, with a header that sets an option called stale-while-revalidate. Let's say you have a page that's cached for five minutes, and it's five minutes and one second now, and it's a heavy page on your website that takes a few seconds to generate. You can turn on stale-while-revalidate, and what that'll do is Fastly will still deliver the old version of the content until the origin has replaced it in the cache. At that five-minute-and-one-second mark, it makes the request to the origin to get a fresh copy of the page, but the customer's browser does not wait on that to happen. So in addition to the request collapsing, your users never have to wait on a high-traffic page to freshen. We have the VCL for that on screen right now. This is VCL, if anyone recognizes VCL, and that's the option. For updates, we have a 120-second stale-while-revalidate, and if there's an error and it's cached, we just serve whatever that cached update data is.
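The behavior just described can be expressed in a couple of lines of Fastly-style VCL. This is a sketch assuming Fastly's `beresp.stale_while_revalidate` and `beresp.stale_if_error` variables; the durations are the ones mentioned for updates, but any values would work.

```vcl
# Sketch: serve a stale copy while a single background fetch
# refreshes it, and keep serving stale content if the origin errors.
sub vcl_fetch {
  set beresp.stale_while_revalidate = 120s;   # serve stale during refresh
  set beresp.stale_if_error = 86400s;         # serve stale on origin errors
}
```

The same policy can also be carried in a response header, e.g. `Cache-Control: max-age=300, stale-while-revalidate=120, stale-if-error=86400`, if you prefer the origin to drive it.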
So if the origin goes offline, updates are still available to Drupal sites; you don't see that "unable to connect to updates" error.

Yes? If you have authenticated and non-authenticated traffic, and let's say your server has gone down or you have to update the page, could you keep the non-authenticated traffic still working, but disable a couple of regions, something like the login block? Not a dynamic page, the same page, like the Drupal front page for example, that has a login block on it. When your server goes down, maybe there's a way to disable this login block, maybe interact programmatically, something like that.

Okay. So to repeat the question: if you have authenticated traffic and the origin goes down or can't be accessed, could you fall back to serving static content? And the answer is yes. You could do that with custom VCL, by doing something where you initially detect that the user has a session and say, "I want to pass this back to the backend," and then you'd get the error or failure to connect. Then you could put request handling in there that tells Varnish to retry the request and mark it in a way that basically says, "I got an error trying to handle this dynamic page from the origin." When it goes back into the VCL to process it again, you could have a rule in there where, even if you had a session, if the original origin request failed, you treat it as a cacheable request, probably strip out the session, and then the user gets the anonymous page instead of an error.

Can you also do that with... I think there are thresholds set in the origin configurations.
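The retry-as-anonymous fallback described above might be sketched like this in Fastly-style VCL. This is a sketch only: exactly which subroutines allow `restart` varies by Varnish flavor, and the header and status checks here are illustrative.

```vcl
# Sketch: if an authenticated request fails at the origin, restart it
# as anonymous so the visitor gets the cached page instead of an error.
sub vcl_fetch {
  if (beresp.status >= 500 && req.restarts == 0) {
    restart;                      # retry the request from the top
  }
}

sub vcl_recv {
  if (req.restarts > 0) {
    unset req.http.Cookie;        # drop the session: anonymous, cacheable
  }
}
```

On the restarted pass the request no longer carries a session, so it can be satisfied from cache even though the origin is down.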
So the way to explain this: it's a very advanced feature, stale-if-error, which is a proposed HTTP standard, and the way Fastly does it is to check whether your origin is responding within a defined timeout. So I guess you could make the origin fail health checks if it's not responding within a few seconds, and then go through this error mechanism.

CDNs for global sites work perfectly, and Fastly is a fantastic product. I'm wondering how it is for business websites where the audience is geographically limited. In Europe there are whole countries with busy websites where the audience stays within the country; sometimes you get hits from all over the world, but most of the users come from that country because of language or something. Would you say it's still useful to use a CDN like Fastly?

Absolutely, for multiple reasons. One: unless you're deploying HTTP/2 to your origin servers, it provides that for users, which accelerates their page load times. Two: it provides faster TLS negotiation, because the odds are Fastly's boxes are going to negotiate it faster than your origin servers would. Three: in a lot of regions mobile phones prefer IPv6, and there's usually 20 to 30% overhead for going through carrier-grade NAT on v4 to reach the website. So if you can make your site available over IPv6 you can minimize load times, and Fastly can do that for you even if your origin servers don't support it. And also, it's just infrastructure you don't have to manage then.
The interesting thing about the Internet is that it's changed quite a lot over the last 20 years, and all our servers are hosted in or close to Internet connection points. All traffic in the Netherlands goes through AMS-IX, for example, and there's DE-CIX in Germany. So the Internet is very well connected, especially near your origins, so it's less of a problem: if you're in Germany, it's still going to be relatively fast even if you only have one POP node in Germany, and it will be infrastructure you don't have to manage.

You can also choose your configuration with Fastly to only use certain regions of points of presence, like if you just want to do Europe or just North America. However, one reason you might want to keep advertised points of presence around the globe is for mitigating attacks. With anycast routing and GeoDNS, attackers trying to attack your website are still routed to their local point of presence. So let's say there's a botnet in Russia being used to attack your website: those requests are all going to be targeted at points of presence that Fastly operates close to the attackers, not the points of presence your customers are accessing the site through. So you can shut down the attack at a point that doesn't even exist in your normal traffic flow, well before it touches your origin systems, well before it touches the actual caches your customers are using.

At Fastly we're generally in Internet exchanges, but we also peer with local providers, so we'll have fast connections to big ISPs, for example. Which is really what Akamai is probably doing; they probably have a similar peering agreement with the ISP. Yeah, most probably.

The point is to avoid traffic getting out of the local network or country and going over the backbone. So is that something Fastly might add to the service they already have? It's already there; we already peer, and we're
always working on improving connections. Sometimes there are cable cuts on the Internet, and things go a bit slower between different countries; we have to route around and calculate the best path. Exactly. But yeah, it's in Fastly's interest to peer well, because it reduces the cost of transport. Yeah, they actually peered with our ISP when they got added to the Seattle Internet exchange. We emailed Fastly support and were like, "hey, would it be possible to peer with our provider?" and they were like, "yeah, sure, get us in contact with their NOC," and a week later we were peered. There's usually a mutual incentive to peer, because then both parties don't have to pay for transport over the backbone.

Yeah, sure, I'll jump in there. So you've seen some of the VCL already. VCL is kind of like a little programming language: you can do if-then-else, lots of rules, regular expressions. You can use that either to whitelist, so this is how your URL scheme is going to work, only letting URLs and query parameters which match through to the origin, or to blacklist: if you have some traffic which looks malicious, you can match it with your code, and that way you can block the traffic at the edge and it never has to hit your origin. And we can help you with that.

And there was a second question, which I amazingly forgot. It was about how it can help with authenticated users. Authenticated users are quite tricky, because it's a bit slower if you have lots of them. But by using a CDN, rather than going over the general Internet, it will still be faster when you have to go back to the origin, because we optimize routing over our network. So your user in Sydney will contact our Sydney POP, which will go over our network close to your origin and then back over the network. So it will still be faster even if we don't have it cached.

And also, for mitigating attacks, Fastly offers a platinum level of service you
can add to your plan, where they will dive in and even help you write rules to trap attack traffic and black-hole it well before it reaches your origin. We don't like to say too much about DDoS attacks, but last year there was a state-sponsored attack against GitHub that you might want to Google for videos of. And it is just VCL, so any sort of regex works: if you can pin down some pattern you're seeing in your attack, you can throw it in and block it at the edge. Very powerful for that.

Another question regarding the VCL: I heard you speak earlier about a master VCL. Can you explain the structure a little, how you deploy it, and how each of your customers is plugged into the master? How exactly do you work with multiple VCLs? Right, so the reference to a master VCL was about how Fastly structures its own VCL: we run a little bit of VCL before the customer's VCL runs. But I guess you could talk about how you work on your VCL configs. Well, I think the question is about how Fastly picks the right VCL to run against a site; I'm just trying to make sure I'm answering the right question. The question was rather: they have multiple customers, and each customer controls their own pieces of VCL, so how is that integrated into the whole? Because I heard "master," I thought, okay, there's a master VCL, and then each customer has their own VCL, and probably another VCL for a different reason. The master VCL is very small; at least my understanding is that it's not designed to do very much. Mostly Fastly is trying to route the request to your VCL, and I'm aware of at least two ways it does that. The main way, as a general customer, is that you put in a domain name, you validate your control of that domain, and from then on Fastly routes matching traffic at its edge.
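To make the edge-blocking idea above concrete, here is a minimal sketch of what such a rule can look like in VCL. The URL pattern and User-Agent string are hypothetical examples, not rules from Drupal.org's actual configuration:

```vcl
sub vcl_recv {
  # Blacklist: reject requests matching a made-up attack signature
  # at the edge, so they never consume origin resources.
  if (req.url ~ "^/xmlrpc\.php" || req.http.User-Agent ~ "(?i)evil-scanner") {
    error 403 "Forbidden";
  }
  # A whitelist is the same check inverted: only URLs and query
  # parameters matching a known-good pattern pass through to origin.
}
```

In Varnish's `vcl_recv`, `error` short-circuits the request and returns a synthetic response directly from the cache node, which is why the origin never sees the blocked traffic.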
Any traffic that matches a domain you control is handled by one of your services, where a service on Fastly corresponds to a VCL or rules configuration. The other way, which is only used for a handful of customers, is called IP pinning: if you handle lots and lots of domains, you can get certain IP addresses that automatically go to your service, and any traffic coming in on those IPs goes to your VCL.

The question was: is the address list of Fastly's cache nodes public? Yes, the address list is public. There's an API you can download it from, and we update the IP address list well before we start using new addresses, so you can have a cron job that updates your firewall to allow access only from Fastly, for example. Alternatively, you can use TLS to origin, where you give Fastly a certificate and the origin then validates cryptographically that the connection is coming from Fastly. Yes, you can generate your own certificates between Fastly and your origin if you want. They need to be signed: you would create your own mini CA, sign one certificate for Fastly and one for your origin, so it's sort of self-signed in that way. Then you tell the origin to trust things you've signed and give Fastly the certificate you signed for it. Every time Fastly connects to your origin (Nginx or Apache, for example; almost every HTTP server supports this at this point), the origin can validate that the connection is coming from Fastly, because Fastly presents the certificate you gave it.

Can you actually go into the origin configuration? We can show that, since there have been a few questions about timeouts, monitoring, and protecting the connection to origin. So here's that domain list we were talking about; this is actual production. Most of it is handled by the *.drupal.org entry here, and as we're moving services over, we're individually
adding them, but the wildcard picks up most everything. Then over here on the origin side we have our "please be nice to these" IPs; you should still validate that your traffic is coming from Fastly, but here's that configuration. This is the new interface that I'm not as familiar with, but you can set up the CA certificate on this screen. Another thing you could implement, if you didn't care about validating that traffic comes from Fastly but just wanted the connection encrypted and trusted, is Let's Encrypt on your origin servers. It comes with a cron job that refreshes the certificate, and Fastly already trusts all of the major certificate roots for connecting to origin, so that would let you run your origin over an encrypted connection resistant to man-in-the-middle attacks. It's probably a little too in-depth here to go into certificate hierarchies, client certificates, and how the validation occurs, but it is possible to do.

Another thing that came up was logging. Here's an example: we log to S3, and we also log to our own internal rsyslog server. We can set the log format the way we want it, and Fastly sends the traffic over to our logging hosts, encrypted as well; that one uses a different certificate, which is what we verify there.

Then there's the custom VCL, which may help shed some light on the earlier question about how this works. We have a main production VCL with additional includes of other VCLs we maintain, and I believe that gets wrapped by the master VCL you were describing. Right, these get combined into one VCL. So when I view the actual VCL that gets compiled on Fastly's end, that's the VCL we were looking at before, but this is the individual piece: it includes an ACL we set up and, basically, the production logic.
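As a rough illustration of that layered structure, a main production VCL on Fastly can pull in separately uploaded files with `include`; the file names below are hypothetical, and the `#FASTLY recv` line marks where Fastly splices in its own boilerplate when everything is compiled into the single generated VCL:

```vcl
# Main custom VCL for the service.
include "production_acls";   # hypothetical uploaded VCL file with ACLs
include "redirects";         # hypothetical uploaded VCL file with redirect rules

sub vcl_recv {
#FASTLY recv
  # Site-specific production logic runs here, after Fastly's
  # boilerplate has been spliced in at the macro above.
  return(lookup);
}
```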
It also includes things like redirect logic (this is the redirect logic we have), some other blocks we have in place, and ways of handling 503s, things like that. Fastly versions the VCL, so as you go through iterations they're all kept there, and if you ever need to fall back, you can. And if you put in rules without writing custom VCL, you can still see the VCL those rules generate, so there's not a lot of vendor lock-in either: if you wanted to take your VCL and walk away, you could mostly push it into Varnish. You might have to make a few tweaks for more modern versions of Varnish, like Varnish 4, but you can mostly carry the configuration over. For example, this is the one thing we have set in the UI here that gets translated into VCL; if I look at the actual output, it's just forcing SSL for all requests, and over here in the VCL there's a force-SSL section. It's also a great way to make a site SSL-only, because the redirect is generated by Fastly instead of your origin box, so you don't add latency redirecting to the same URL on HTTPS. I think there are even prefab rules for HTTPS-only on Fastly now. Yeah, I believe that's the Force SSL setting; it sets this area down here, which moves things to HTTPS.

A question regarding redirects: how do you handle those on Fastly? Do they go back to the origin, to the Apache server, to get the return code, or can you do it directly in Fastly? You can either have your origin generate the redirects and cache them in Fastly as normal responses, or you can use Varnish itself to generate the redirects if you know certain URL patterns always redirect.
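The force-SSL rule described above is commonly written in Fastly VCL using the special 801 status, which makes the edge itself emit a 301 redirect to the HTTPS version of the URL. This is a sketch of the usual pattern, not Drupal.org's exact configuration:

```vcl
sub vcl_recv {
  # Fastly sets the Fastly-SSL header on requests that arrived over TLS.
  if (!req.http.Fastly-SSL) {
    # Status 801 triggers Fastly's built-in synthetic 301 to https://.
    error 801 "Force SSL";
  }
}
```

Because the redirect is synthesized at the POP, the plain-HTTP request never makes the round trip to the origin at all.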
In terms of redirects and traffic, one other use case I wanted to go into, which some of our customers use, is search engine optimization (SEO). A customer might run one Drupal site for their blog and a different Drupal site for their main .com website; they might even run WordPress for the blog and Drupal for the main site. For SEO you want it all on the same domain name, and one trick you can do in the CDN is have your rules detect that someone is going to /blog, choose a different origin for that, and even rewrite the Host header sent to the origin. That way you can run multiple websites on the same domain simultaneously, with all of the switching between origins happening in the CDN. I've had customers who were moving data centers or moving to the cloud migrate parts of their website one by one this way, rather than doing one big-bang switch. That would be very useful for something like a Drupal 8 migration, too: on Drupal.org we could have Fastly route parts of the website to a Drupal 8 site and the rest to the Drupal 7 site, so it could be useful for that sort of upgrade as well.

If you have more questions, come down to the floor here, say hi, and get a nice Fastly T-shirt or a nice Pantheon T-shirt; you should already have Drupal T-shirts. Thank you very much. Thanks.