Hello. Can you hear me? All right. Today I want to take a look at how we do auditing with Drutiny on GovCMS. I'm Carl, I'm with Finance, and I work with the GovCMS team. I'm all over the internet; I think I've heard my name more times than I was comfortable with today, but more often than not you'll find me on Twitter and GitHub. I started at Finance two months ago, and one of the first things I was told to do was look at our auditing, as it was a bit of a black box and only Toby knew anything about it at the time. So that's where I started at Finance on the new platform. I want to look at the background of how auditing came about, why it's important and what we get from it, and then go through past, present and future: how our architecture works and how it supports auditing.

So the background: sites coming onto a platform presented various problems that we had no way of addressing. We saw patterns which needed to be fixed, and there was no tool or standard process to do that. Given these situations come up frequently, Finance decided to partner with the service provider at the time and develop a tool to automate the hell out of it.

Examples of issues we had to deal with were administrative access for users who weren't supposed to have it, which is covered by the Information Security Manual; modules which cause performance drain, like database logging (there's a bunch of them); and security issues, namely the malicious modules out there, as I'm sure you're all aware. We have static file checks for housekeeping: database sizes, sensitive files, things that shouldn't appear in public file system file names. They shouldn't be there, and they cause problems for someone who has a reputation, especially in government. So the reason this tool was made was for ISM compliance and PSPF compliance, and these are fundamental to government.
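To make the static file checks concrete, here is a minimal sketch of the idea: scanning a public files directory for file names that should never be publicly served. The patterns and the function are illustrative assumptions, not GovCMS's or Drutiny's actual implementation.

```python
import re
from pathlib import Path

# File-name patterns that should never appear in a public files
# directory (illustrative only; real audit rules will differ).
SENSITIVE_PATTERNS = [
    r"\.sql$",             # database dumps
    r"\.sql\.gz$",         # compressed dumps
    r"settings.*\.php$",   # Drupal configuration files
    r"\.env$",             # environment secrets
]

def find_sensitive_files(public_dir: str) -> list[str]:
    """Return paths under public_dir whose names match a sensitive pattern."""
    hits = []
    for path in Path(public_dir).rglob("*"):
        if path.is_file() and any(re.search(p, path.name) for p in SENSITIVE_PATTERNS):
            hits.append(str(path))
    return sorted(hits)
```

A real audit would report each hit as a failure; the point is only that these checks are simple pattern matching over the file system, not anything Drupal-specific.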
We have mitigated what we've found, but we haven't broadly covered everything explicitly. As things come up, we find a process to handle them and work with that. This poses reputational and administrative risk to us because of the frequency and the scale now. As a service provider inside government we have a particular reputation behind us, and having worked elsewhere in government I can tell you this doesn't happen everywhere, but as a service provider we have more to lose by not going through this process.

So GovCMS 1.0, or 1.x, was on Acquia infrastructure. I wasn't at Finance at the time, and back then there were obviously no tools for this: Docker was way too immature, and a lot of operational work was involved to get rid of these issues. It was all based on virtual machines, and we couldn't really have the tool on the platform because we didn't want any performance degradation.

Along came Drutiny. Kudos to Sean Hamlin and Josh Waihi, both here today from Acquia, who built it. There was a version 1.x, and I don't know a lot about that one, but Drutiny 2.x is here, and it runs via Drush: it collects a bunch of Drush command outputs, processes them in a way that an application can handle, and presents a report. At the time, Finance was the only one able to run it; you couldn't really get to it after a site was forklifted. The profile was a way of indicating how suitable a site would be for the platform and what sort of issues needed to be fixed before it came on, and it was a good indicator of how the site would perform whilst on the platform.

GovCMS 2.x is where I came on board, which was 12 months ago, and we had an entirely new architecture. It's all based on OpenShift and Kubernetes, and everything is single-site, so there's no more multi-site. With this come a lot of advantages, as I'm sure you're all aware: we can run applications side by side without worrying about performance.
We can run all of our tooling inside the cluster without considering performance or security; it's all there, and we don't need outside access to the system to run these tools. To run audits, we have a SaaS scaffold which currently supports it with an ahoy command. The PaaS scaffold does not have it right now, but that's something we'll hopefully address soon; I don't have an example of that yet.

Local auditing checks that certain modules are enabled or disabled. It checks permissions, makes sure certain modules that cause performance issues are out of the way, and of course there's the housekeeping, the static checks: are there any problematic function calls inside your theme, for example?

Here's a semi-live example of what a site audit looks like. I was asked to add more animations by one of my colleagues. This is just one form of report; the application can print to the terminal, to JSON, or to an HTML file.

What normally happens is a developer commits code to their repository, and that starts a Drutiny job. We could ultimately move this to AWX, but at the moment it runs inside GitLab. Once GitLab has finished, it produces a JSON file which is sent to Elasticsearch, and we can present that to security and managers in nice reports; it does look good when it's all visualised that way.

Here we have an example of, I think, two weeks' worth of audits on the platform. This is what the managers would be seeing. On the left we have our Drupal 7 and 8 audit results together, in the middle Drupal 7, and on the right Drupal 8. It's actually a good illustration of how many Drupal 7 sites there are compared to 8 at the moment.

And now we're looking at what the future looks like. Are the audits represented in the most suitable way, or is there a different way of doing things?
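The "JSON file sent to Elasticsearch" step of the pipeline can be sketched roughly as follows. The report shape used here (site, timestamp, a list of results) is an assumption for illustration, not Drutiny's exact JSON output, and the index name is made up.

```python
import json

def report_to_bulk(report: dict, index: str = "drutiny-audits") -> str:
    """Convert one audit report into Elasticsearch bulk-API lines.

    Each result becomes a pair of lines: an index action and the
    document itself, which is the newline-delimited format the
    Elasticsearch _bulk endpoint accepts.
    """
    lines = []
    for result in report.get("results", []):
        doc = {
            "site": report.get("site"),
            "timestamp": report.get("timestamp"),
            "policy": result.get("policy"),
            "status": result.get("status"),
        }
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```

The resulting string would be POSTed to the cluster's `_bulk` endpoint, after which the documents are available for the kind of Kibana dashboards described above.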
Some audits could potentially be done outside of Drutiny, because Drutiny does a full bootstrap, so there might be optimisations to be made. An idea that keeps coming up is forking Drutiny, but I don't think we're going to do that yet. It's generally a very useful tool, and it's the only thing of its type that does what it does. We are looking at centralising it somewhere, either as a container which could be run locally, or perhaps we'll move it all to AWX. The scaffold right now uses a very specific flavour of Drutiny, and our hope is to bring it to best practice and make it more available and accessible to modify, so that PaaS customers can define their own policies for how they want to run their sites, and potentially write their own audits.

Right now, the best way to help the onboarding process is to stick to best practice where you can: don't use any modules with known security issues or performance drains. There's not a lot of documentation right now for using Drutiny with GovCMS specifically. We are aware of that, and it's something we want to address over time; if you need help, naturally you can reach out to us. Sorry, I'll power through that. Are there any questions?

Thanks for that. The output that you get into Kibana, is that something you built or is that Drutiny?

It is supported by Drutiny. It's just a JSON blob which is sent to Elasticsearch.

Got it, thank you. How hard is it to write your own audit conditions?

OK, so the three primary concepts to consider here are a profile, a policy and an audit. An audit is the guts of it; it defines the actual logic. The policy defines the parameters that go to the audit, so it's kind of like an inheritance system. A profile is a list of however many audits you want to run, stashed in one place, so that when you run the profile it runs each of those audits against the Drush target.

And that's all definable in code?

Yes.
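The profile/policy/audit split described in that answer can be sketched roughly as YAML. The keys, class name and file names here are illustrative assumptions rather than Drutiny's exact schema; check the Drutiny repository for the real format.

```yaml
# A policy binds parameters to an audit class (illustrative sketch).
# dblog_disabled.policy.yml
title: 'Database logging disabled'
name: 'GovCMS:DblogDisabled'
class: \Drutiny\Audit\Drupal\ModuleDisabled   # the audit: reusable PHP logic
parameters:
  module: dblog                               # the policy: parameters for it

# A profile bundles policies to run against one Drush target.
# govcms.profile.yml
title: 'GovCMS platform audit'
policies:
  GovCMS:DblogDisabled: {}
```

The same `ModuleDisabled` audit could back many policies, each naming a different module, which is the inheritance-like reuse mentioned above.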
Is that code open source, or available, or could it be shared with us?

Almost all of it is. I actually have an example repository I used to learn this stuff, which I haven't really advertised, but it's on my GitHub account. It packs the profiles, audits and policies in one place, and it's a good example of how you could use it. In the process I've actually learnt that if you're determined enough you could probably build a site using this tool, but you would need a lot of time and it's not really worth it. But it's an interesting approach.

I must admit, so if this is stupid, apologies. Can I run Drutiny for non-government sites? Like, can I put it in a CI pipeline and have it executed?

It is an open source project, and it is adopted by Acquia; they use it on a lot of their sites, so yes, there are other flavours of it running about. It is entirely open source, and there are instructions on the repository.

I meant more the targets, like government-specific requirements or security requirements, where it helps me.

All you need is a Drush alias and your config, and it's entirely open source. Whether agencies or companies choose not to share some of those profiles or audits is up to them, but there's a lot of stuff out there that's readily available.

So in the pipeline, Drush needs to have access to fully bootstrapped Drupal websites, right?

Correct.

I should have just bought you a beer and asked: why are you thinking about forking it? What's the limitation?

We've had a little bit of internal conflict about how our Drush install works. It's very opinionated, and the two may not have matched up, but it's not really worth it for us to fork it, though we have had the discussion.

OK, thank you very much. Thank you.
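For reference, the "Drush alias and your config" that the answer says is all you need would look something like this Drush 9-style site alias. The host, user, paths and alias name are made-up illustrative values.

```yaml
# drush/sites/example.site.yml (illustrative values only)
prod:
  host: example.gov.au
  user: deploy
  root: /var/www/html
  uri: https://www.example.gov.au
```

A Drutiny run against that alias would then take the alias as its target, along the lines of `drutiny profile:run <profile> @example.prod`; see the Drutiny repository for the exact invocation.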