All right, we'll get kicked off. My presentation today is Purge the Gap: Automating Content Delivery Network Purging for GovCMS. I'll be talking about the problem we needed to solve, the discovery process that took place to enable our customers to automate the purging of content from the GovCMS CDN, the options and the analysis process to find the best-fit solution, the solution that was implemented, the service delivery process, and finally the benefits that were realised from the whole body of work.

About me: I'm the Cloud Infrastructure Manager in the Security Operations Team within the Online Services Branch in the Department of Finance. It's a bit of a mouthful; it wouldn't fit on a business card. I look after the web application firewall rules, including the content delivery network rules. That ranges from setting up optimal cache rules for serving cached content for our sites, to ensuring the web application firewalls are fit for purpose and mitigate cybersecurity risks. I do digital certificate management, looking at encryption and making sure strong, up-to-date ciphers are used as part of that certificate management. I also investigate security incidents and help implement incident mitigation and management strategies. I look at process improvement and how service delivery can be optimised through the work that I do, and I offer advice and technical support in my role.

About GovCMS: GovCMS is a content management and web hosting solution for the Australian government, making it easier for agencies to create modern, affordable, responsive websites to better connect government and people. GovCMS works in partnership with Salsa Digital and amazee.io to provide the platform offering. We host sites for three levels of government: local, state and federal. And GovCMS is turning seven this year.
I've actually been with GovCMS since the start, and it's seen a lot of growth over those seven years. At a glance, we currently have 348 live sites, 57 in development, and 105 organisations signed up to use GovCMS.

The GovCMS content delivery network: GovCMS CDN and WAF services are provided by Akamai, one of the world's largest distributed computing platforms. Akamai operates a network of servers worldwide, optimised for performance delivery, and uses a product called SureRoute that connects each user to the node that will serve content to them at the fastest rate. They provide DDoS mitigation and other security services, bot management, client reputation, and a rich library of APIs, which I'll show you soon.

So the problem: what were we trying to do? The community was asking for a way to purge content themselves from the content delivery network. Seems like an easy problem to solve: let's give people the option of doing that. We needed to make it easy, secure and fast. We needed to remove the support overhead of people raising support requests to clear content from the CDN, and make the option available 24 hours a day, 7 days a week.

What were the options? We went through a bit of a discovery process: how could we solve this problem? It was always going to involve Akamai's API library to purge content, and Akamai uses an API purge key. So the first option was to create a master key and just give it out to everybody: problem solved, they integrate it into their site and away you go. The second option: everyone gets a key. 300 sites, no worries. You get a key, you get a key, you get a key. The third: a dedicated purge service that manages the purging of content from the CDN.

Option one, the master key option. The idea here is to generate a key that can purge all host names on the GovCMS platform. You can see the problem with that sort of approach.
There are some security issues around that. It's easy to manage: here's a key, go away and use it. It gives us clear visibility: we know there's one key out there, and we can see the host names and the content it's purging. The disadvantages? Security is the biggest issue. You're handing out a key with ultimate power to do all this, and you've got no way of controlling it, no wrapper around it; you just have to trust what's happening with it. And key expiration: you can put some controls around it and say the key expires, but then you've got to hand another one out. So we quickly said this is probably not the optimal thing to do.

Option two: everyone gets a key. Everyone. We thought this could work: just generate the keys and distribute them. There's a bit of overhead in doing that, but it's a feasible option. It limits the damage: one key goes to one site, and it can only purge content for that site. It also gives us clear reportability: we know which key is doing which bit of work, so that's good. The disadvantages? Again, security and control. We don't know what's happening with those keys or how they're being used; we're blindly trusting people to keep them in a safe place and not let them be misused or handed out to someone else. And renewal and expiration: renewing and expiring 300 keys is a big job, and then handing them all out again. You could automate it, but it's probably not the best way of doing it.

So we got to option three: a purge service. A dedicated service where we control the purge key. It purges all the sites and clears their cache, it gives us improved reporting because we know what sites are being purged, and the renewal and expiration process is dead easy: just expire the key and issue a new one to the service. The disadvantage: security and control.
By that I mean you have to get the service accredited: assessed for any security vulnerabilities or holes before you turn it on and get the interim approval to operate. Then there's managing expectations for the service: what's the SLA around it? And the initial investment cost: setting up the service, getting it up and running, and testing it through its development lifecycle.

So on to the criteria: what must the purge service do? This is what we first defined. It must support secure, automated purging for all published content updates. It must enforce rules that limit content purge requests to the single host name that is making the request. It must enforce rules that keep the number of API requests within the global API limits defined by Akamai; with an API you've got a limit on what you can throw at it, and it's a pretty high limit, but there is a limit. It must support individual on-demand purge requests for URLs, giving people the option of doing their own purges through a user-friendly interface that notifies an authenticated user of the success or failure of a purge request. It must support the secure storage of the Akamai purge key. And it must support functional capability with Drupal 8 and Drupal 9, soon to be Drupal 10.

Demonstration. You're just going to have to bear with me a bit here. I did record a demonstration, but I'll try to do it live because I figure that's the best way to show you. Firstly, this is the Akamai API library. It's pretty rich: most of their products have a complementing API, including the purge API, and there's a lot of documentation around it.
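The rules described above (purges limited to the requesting site's own host name, no whole-site purges, a cap on paths per request) can be sketched as a small validation step. This is an illustrative sketch, not the actual GovCMS implementation; the function and constant names are assumptions, and the 200-path cap matches the limit used in the demo.

```python
# Sketch of the purge-request rules: one host name per request, real file
# paths only (no bare site root), and a capped batch size.
from urllib.parse import urlparse

MAX_PATHS_PER_REQUEST = 200  # assumed cap, echoing the limit in the demo

def validate_purge_request(site_host: str, urls: list[str]) -> list[str]:
    """Return the validated URLs, or raise ValueError naming the broken rule."""
    if len(urls) > MAX_PATHS_PER_REQUEST:
        raise ValueError(f"At most {MAX_PATHS_PER_REQUEST} paths per request")
    accepted = []
    for url in urls:
        parsed = urlparse(url)
        if parsed.hostname != site_host:
            raise ValueError(f"{url}: may only purge content on {site_host}")
        if parsed.path in ("", "/"):
            raise ValueError(f"{url}: whole-site purges are not allowed")
        accepted.append(url)
    return accepted
```

Keeping the validation in one place means the same rules apply whether the request comes from the on-demand interface or from an automated content update.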
It tells you how to do a Fast Purge through the API or the CLI, and gives guidance around all of that: purging content by cache tag, by URL, or by CP code. CP codes effectively purge a whole site, which is kind of a dangerous thing if you get it wrong. If you want to go in and read it, it's really interesting stuff, and it covers the rate limits as well.

As for the demonstration, this is just a test site that goes through our CDN. Under Configuration, we've got a custom GovCMS module that allows people to provide a list of URL paths to purge content from the CDN. I should make a distinction at this point: automated purging occurs when content updates are made, so in the background, when you're updating content, a request is sent to the purge service and the relevant cache tags are purged from Akamai. For this example I'll show you the on-demand side with some dummy files. If I copy the first one and put it in here, then purge that, it goes into the queue, it's sent off, and it gets purged from the CDN.

In the background there's a bit of work going on there. We've got some rules in here as well: you've got to include your site name, and if you try to purge the whole site, it's not going to do it, because that's not a file path. There's some guidance here on what can be purged, and there are limits on the number of things you can purge at once: we've set a limit of 200. I did have a list of 200-odd files here, and if I chuck that in, it hits the error.
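The purge call the service makes behind the scenes can be sketched as follows. The `/ccu/v3/invalidate/url/{network}` endpoint and `{"objects": [...]}` body follow Akamai's published Fast Purge v3 API; real requests must also be signed with EdgeGrid authentication (for example via Akamai's edgegrid-python library), which is omitted here, and the API host name is a per-account placeholder.

```python
# Sketch of an Akamai Fast Purge v3 invalidation request. EdgeGrid request
# signing is omitted; without it (and real credentials) the send will fail.
import json
import urllib.request

def build_purge_request(api_host: str, urls: list[str],
                        network: str = "production") -> tuple[str, bytes]:
    """Build the endpoint URL and JSON body for a Fast Purge invalidation."""
    endpoint = f"https://{api_host}/ccu/v3/invalidate/url/{network}"
    body = json.dumps({"objects": urls}).encode()
    return endpoint, body

def send_purge(api_host: str, urls: list[str]) -> dict:
    """POST the purge request (EdgeGrid auth headers would be added here)."""
    endpoint, body = build_purge_request(api_host, urls)
    req = urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Separating request construction from sending keeps the part that encodes the API contract easy to test without credentials.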
With over 200 paths, you can't do that. You configure the module with permissions, so you can say who's allowed to purge: content authors, editors or administrators. I wouldn't give it to an anonymous user. So that's basically the file purging aspect of it.

We also have visibility of that purging. This is an OpenSearch dashboard of the purge service as it's working, reading the log files of the purge service. I'll see if we can find the purge I just did through a filter, in the last 15 minutes. You can see this is this project, and you can see the tags that are being queued for purging. We have a graph of how that's scaling and whether it's hitting the limits of the threshold. So that's a basic view of the service working.

We've also got the back-end view of how the purge service is performing. There's an admin interface that shows how many jobs are being purged, whether the service is active, the queue, any errors, and some metrics and graphs. Here's a list of the purge jobs in the queue; you can see they're not taking very long. And this is another snapshot of the screen I was showing you before: where purge jobs hit the limit on the number of cache tags that can go out in one purge, they just get broken up and separated into the queue.

Okay, so service delivery: how did we give this to people? A rolling start model was the way we decided to go, instead of a big bang approach.
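The splitting behaviour visible in the admin view, where large jobs are broken up to stay within per-request limits, can be sketched as simple batching. The batch size here is illustrative (Akamai's actual Fast Purge limit is on request-body size rather than a fixed object count), and the function name is an assumption, not the actual service code.

```python
# Sketch of splitting a large purge job into queue entries that each stay
# within a per-request limit. batch_size is illustrative.
def batch_purge_objects(objects: list[str], batch_size: int = 100) -> list[list[str]]:
    """Split a purge job into batches of at most batch_size objects."""
    return [objects[i:i + batch_size] for i in range(0, len(objects), batch_size)]
```

Each batch then becomes one queued request, which is why a single large content update can appear as several entries in the dashboard.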
A rolling start, much like the Bathurst Grand Prix when you start under wet weather conditions behind a safety car: you start off slow and develop momentum. We started with a small number of sites to see how things were scaling and whether we needed to calibrate the service. Then we looked at logical groupings for delivering it to the remaining sites, mainly to minimise the amount of communication we had to do with our stakeholders. This was mostly around departments: where a department had a number of sites, we'd group them up, enable the service with them, and provide them with communication on how to use the purge service, what was happening, and some of the cache rules as well.

Benefits realisation. Obviously the biggest one here is that this is something customers had been asking for for a long time, so: happy customers. That was the main theme. We have reduced the number of support requests from people manually asking to have content purged. Sometimes these things are time critical, coinciding with ministerial briefs or advertising campaigns; content needs to be purged pretty quickly and made available. The service minimises the wait time for content to be purged; it's pretty efficient.

It also opens the door to optimising cache performance. At the moment it's not just one set of rules, it's a number of rules, but basically page content is cached for 15 minutes, while static assets like images and PDF files are cached for about a day. Because cached content is now purged on demand, you can start to extend those cache lifetimes.
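The tiered cache lifetimes described above can be sketched as follows. The TTL values echo the talk (15 minutes for page content, a day for static assets, a month under the extended trial); the mapping and function names are illustrative, not actual GovCMS or Akamai configuration.

```python
# Illustrative sketch of tiered cache lifetimes, in seconds.
CACHE_TTLS = {
    "text/html": 15 * 60,            # 15 minutes for page content
    "image/png": 24 * 60 * 60,       # 1 day for static assets
    "application/pdf": 24 * 60 * 60,
}
DEFAULT_TTL = 15 * 60
EXTENDED_TTL = 30 * 24 * 60 * 60     # the one-month trial lifetime

def ttl_for(content_type: str, extended: bool = False) -> int:
    """extended=True models the trial where lifetimes were raised to a month,
    which purge-on-update makes safe."""
    if extended:
        return EXTENDED_TTL
    return CACHE_TTLS.get(content_type, DEFAULT_TTL)
```

The point of the sketch is the trade-off: short TTLs keep content fresh without purging, while on-demand purging lets lifetimes grow without serving stale content.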
We've got a bit of work going on at the moment where we've increased the lifetime to a month for a couple of sites, and we're looking to increase it even further depending on how that trial goes, and exploring making that change available to all sites. That's really where we're heading.

Moving forward: the purge service is for SaaS customers at the moment, and we're looking at making it available for PaaS customers, but there are a few obstacles to work through: the service expectations around that, and any technical complexities or conflicts with the service and how that would work. Service enhancements: what can we do better, how can we improve it? Maybe visualisation, so people can see what's happening. Can we make better use of the keys and change the caching rules? Reporting improvements: visibility of what's happening, what the administrators and the service can see, and making it meaningful. This is an area I like to talk about: reporting should be meaningful, and it should be about presenting information that helps influence decisions. Don't just report something because it looks good and displays information. That information should be meaningful; when I'm looking at it, I know what it means and I know what I need to do when I see it.

That's my presentation for today. Thank you very much.