I think it is 11 o'clock and time to get started. So today we're going to talk about fun with continuous compliance. The agenda for today: first, we're going to do an intro to what fun is and what fun is not. Then we'll talk about the why and what of continuous compliance. Then I'll talk a little bit about how we're dealing with this at Shopify, and then Zeal will finish up with OSCAL and a little more on how you can use it for continuous compliance. This photo was taken over two years ago; it was actually the last time Zeal and I saw each other in person. We were speaking at a conference in London, so hopefully we'll get this updated with pictures from here. I'm Anne Wallace. I am a senior manager for security engineering and compliance at Shopify.

Buenos días. My name is Zeal, and I am a security solutions manager with Google Cloud, and I specifically focus on compliance and compliance modernization.

Cool. So, what is fun and what is not fun? This is a reference to the fun scale, which is often used by outdoor enthusiasts to describe the kind of enjoyment they get from adventures, or misadventures as the case may be. But it can also be used for life or, in the case of this presentation, compliance. So let's look at each type of fun. Type one fun is simply having fun, good fun. This is like hanging out with friends, going to KubeCon, drinking sangrias. Type two fun is miserable while it's happening but fun in retrospect. This type of fun usually begins with the best intentions, but things get carried away. This might be Friday deployments or crazy ultramarathons. Type three fun is just miserable while it's happening; it's not even fun in retrospect. This is the kind of thing where you think, what the hell was I doing? If I ever come up with something so asinine again, somebody please stop me.
This might be a friend or coworker who wants to go on some epic trip and you're on call for them for six weeks, or manual evidence gathering for your yearly audit. Really, the title of this talk came from somebody telling me that compliance isn't fun. And that might be true if your idea of compliance is manually gathering evidence yearly and hoping nothing bad is found. That is definitely not fun. But if you think of compliance as a set of requirements, and it's your job as a developer or security engineer to figure out how to meet those requirements, as you would with any other software engineering problem, then compliance can be fun. In this presentation we want to introduce you to some new concepts and ways of thinking about and looking at compliance to make it less burdensome and, dare I say, fun. I do need to make a confession: I run ultramarathons, 100 kilometers or longer, for fun. So I have a somewhat warped idea of what fun is; there's a lot of type two fun in my life. Maybe this is why I enjoy compliance, I don't know. So I'm going to hand this over to Zeal, and she's going to get into the why and the what here.

Thank you, Anne. Okay, so let's look at some of the macro indicators in the industry when it comes to compliance and the future. We are headed towards an automated compliance future, and there are several indicators in the industry that point towards this. We see that security, inclusive of risk and compliance, is modernizing itself to keep up with the modernization of IT overall. So, a couple of observations. On the left-hand side is an excerpt from an article McKinsey published in the summer of last year, where CISOs from all major cloud providers, including Phil Venables from our side at Google, opined on leveraging automation and code to express security and compliance requirements earlier in your development lifecycle.
Number two, on the right-hand side: standards bodies like NIST are working to bring this to fruition. NIST, in conjunction with industry peers like us, has developed OSCAL. OSCAL stands for Open Security Controls Assessment Language. OSCAL is a set of formats, expressed in machine-readable representations like XML, JSON, and YAML, that allow for the exchange of evidence data. These formats provide machine-readable files where your controls and control requirements, your control baselines, from NIST SP 800-53 specifically, are expressed in YAML or in XML, so you can validate them throughout your CI/CD pipeline. I'll be talking later in the presentation about the potential hooks where you can insert OSCAL into your DevOps pipeline.

When it comes to security and compliance, there is a dichotomy: a vicious cycle of rework caused by misalignment among the teams involved in demonstrating compliance. This is not new to us. Developers are often left to interpret arcane compliance requirements that don't always map cleanly to cloud configurations. SecOps teams work with many tools that create a lot of noise and a lot of fatigue, making it difficult to evaluate the compliance implication of an alert. And compliance analysts tend to work in a silo while the environment gets more and more decentralized, so evidence collection becomes cumbersome. With that as background, this is why continuous compliance is relevant. Continuous compliance is about having the right balance of preventative controls, shown in green, along with remedial controls for when detections arise from your cloud environment. There are obvious benefits to continuous compliance, but what I really want to emphasize here is that a certification or an audit that you perform yearly is not a sufficient source of assurance anymore.
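To make that a little more concrete: here is a heavily trimmed, illustrative sketch of what an OSCAL catalog fragment can look like in YAML. The UUID, control IDs, and prose are placeholders, and the real model has required fields omitted here, so treat this as a shape, not a normative example; the full schemas live in NIST's OSCAL repository.

```yaml
# Illustrative OSCAL-style catalog fragment (not schema-complete).
catalog:
  uuid: 00000000-0000-0000-0000-000000000000   # placeholder
  metadata:
    title: NIST SP 800-53 (excerpt)
    oscal-version: 1.0.0
  groups:
    - id: ac
      title: Access Control
      controls:
        - id: ac-2
          title: Account Management
          parts:
            - id: ac-2_smt
              name: statement
              prose: >-
                Define and document the types of accounts allowed and
                specifically prohibited for use within the system.
```

Because the control statement is structured data rather than a Word document, a pipeline step can look controls up by ID and attach machine-generated evidence to them.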
Secondly, going through a point-in-time audit is costly and cumbersome, and it taxes your resources. So it's time to move from point-in-time evidence collection, point-in-time compliance, to continuous compliance. Obviously, this requires a lot of cultural change within an organization. Moving on: with all this as a backdrop, we launched Risk and Compliance as Code, our solution on Google Cloud. I can speak about it more if you find me in the hallway, but the key message we want to drive is to build your automation tool set in each of these categories, starting with harmonized controls. I'll speak about harmonization of controls later in the presentation. What I want to emphasize again is that this transformation from point-in-time compliance to continuous compliance is a multi-year journey as you build your maturity in each of these building blocks. Speaking of maturity levels, here's a roadmap of capabilities, team alignment, and skill sets that should be introduced over a period of time to get to a truly transformational state. And with that, I would like to hand over to Anne to speak about Shopify.

Okay, cool. Yes, Shopify and continuous compliance. If you don't know who Shopify is: it's very likely that you have bought something from one of the millions of merchants who use our platform. We're making commerce better for everyone. We offer a platform where small, medium, and large businesses can build their e-commerce presence, we do payments, and everything is done in a compliant and secure way. Those were some big numbers on the last slide, and to be able to support that many merchants, these are some of the statistics: lots of GKE clusters, thousands of Kubernetes services, and even more builds a month and GCP projects. And this data is a couple of months old, so it's probably even larger now.
Anyone who's part of a compliance or audit team might see these numbers, and that's when panic might set in. Just to level set on the compliance programs we have at Shopify: PCI DSS, Sarbanes-Oxley, SOC 2, and SWIFT. Just one more slide of numbers. Again, you can imagine that managing security for this number of containers could become an issue, and then think about compliance on top of that. This is really starting to sound like type three fun, just not fun at all.

So let's look at how we build these containers. On this particular build slide there are some Shopify-specific tools and lingo, but this type of deployment pipeline is probably familiar to a lot of folks. This is what I would call type one fun: a built-out, functioning pipeline. You push your code, all the checks happen, and boom, you're in production. That's fun. For the rest of this talk, we're going to focus on this triangle area for making compliance fun. In that triangle we have things like Binary Authorization and Voucher, which is an open source project by Shopify, and I'll get into that.

Just to walk through the pipeline: first, a developer, or anyone, pushes a change to GitHub. Second, Buildkite, our container builder, is used to create Docker containers that encapsulate the new application code. When the build completes successfully, Buildkite pushes the image to Google Container Registry, or GCR. The build metadata is stored in Container Analysis; the open source version of that is Grafeas. Then Voucher scans the images in the registry and signs those that meet the specific Voucher policies, and I'll get into that in the next couple of slides. The signature is pushed to Grafeas, the metadata storage, and signatures are based on the digest of that version of the image. Then we get into deploy time.
Kubernetes receives a change to a deployment. Then we have another Shopify-specific tool called Shabas. It's an admission controller: it receives the event from Kubernetes and rewrites the manifest it received, converting image tags into digests. As I mentioned, the signatures are based on the digest of the image, which means we need to look up the digest associated with a tag. Then we have Binary Authorization; the open source version of this is Kritis. It's also an admission controller. It receives the manifest after Shabas, checks for valid signatures in Grafeas, and blocks any invalid images. If the image is not blocked, Kubernetes pulls it from GCR.

So what is Voucher? Voucher is a tool that examines container images and runs checks against them. For every check that passes, Voucher signs the image with the check's associated key, maybe an OpenPGP key or a KMS key, and then pushes the new signatures into Grafeas, or whatever image metadata storage you use. These signatures are also known as attestations. We use Voucher's DIY check to ensure that only images built by Shopify's infrastructure, plus approved third-party images, can run in our clusters.

Now let's look at some compliance controls and see how we can leverage our current security tooling to meet compliance requirements. Maybe you deal with PCI. When talking to a PCI assessor about requirement 6.1, you might hear something like: you need to classify risk as high, medium, or low, because this allows organizations to prioritize and address the highest-risk vulnerabilities faster. The risk rating should be based on industry best practices, maybe taking into account CVSS base scores or vendor classifications. This particular thing isn't always fun either. So let's make it fun. Going back to what we're already using with Voucher, how do we meet this compliance control?
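As a trivial illustration of the kind of risk ranking requirement 6.1 asks for, here is a minimal Python sketch that maps a CVSS v3 base score onto high/medium/low buckets. PCI DSS leaves the exact methodology to the organization, so the thresholds below are just the common CVSS severity bands (with "critical" folded into "high"), not a prescribed standard; the function name is mine, not part of any tool mentioned in this talk.

```python
def risk_rank(cvss: float) -> str:
    """Map a CVSS v3 base score to a coarse risk rank.

    Illustrative thresholds only: PCI DSS requires an
    organization-defined ranking methodology, and these cut-offs
    follow the usual CVSS bands (critical is folded into high).
    """
    if not 0.0 <= cvss <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if cvss >= 7.0:        # high and critical
        return "high"
    if cvss >= 4.0:        # medium
        return "medium"
    return "low"           # low and none
```

A policy engine like the one described next can then fail any image whose worst finding ranks "high".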
Again, looking at what comes out of the box with Voucher, we have the Snakeoil check, with which Voucher can make sure images are free of known security vulnerabilities. To implement this in our current environment, what we need to do is edit the Voucher config file. First, here, we're configuring it to fail on anything that comes back with a high severity. Then we're enabling Snakeoil here. And thirdly, we're configuring the keys, in this case KMS, that will sign the attestations. Then, for Binary Authorization, or Kritis, we create a policy that requires attestors to sign before deployment. To go into this: we have a default rule with an evaluation mode of require attestation. For Binary Authorization to allow an image, it has to be signed by the attestors listed under require attestations by. If that is not true, the deployment is blocked.

So again, we have this flow here. Let's see how it works with the Snakeoil policy. First, a vulnerability scan is run when a new image is put into the registry. The results of these scans are stored as metadata. Then the Voucher policy checks to make sure there are no high-severity vulnerabilities. If there are, the image is not attested and a log is written to the audit log. Otherwise, it is attested. Then we go into deployment. Let's say someone tries to deploy this image. Kritis checks whether there is a signed attestation; if not, the deployment is stopped and an audit log is written. Again, I have a warped sense of what's fun, but we're taking something we're already doing, and with just a couple of lines of code we're starting to meet compliance requirements. And not only did we do that, we did it with little effort. The vulnerability scanning happens continually, so anything bad that's found ends up in the logs.
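For reference, a Binary Authorization policy along the lines just described might look like the following. This is a sketch of the policy YAML that `gcloud container binauthz policy import` accepts; `PROJECT_ID` and the attestor name are placeholders for your own values, and the attestor itself (backed by the KMS key Voucher signs with) must be created separately.

```yaml
# Sketch of a Binary Authorization policy: deploys are allowed
# only when the image carries a valid attestation from the
# listed attestor; everything else is blocked and audit-logged.
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/PROJECT_ID/attestors/snakeoil-attestor   # placeholder
globalPolicyEvaluationMode: ENABLE
```

The audit-log entries written on blocked deploys are exactly the kind of evidence auditors like to see.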
And auditors really love to see this, especially the audit logs of things getting blocked. All right, let's have a little more fun. Here's another PCI requirement. This control is about incorporating security throughout the software development lifecycle. It applies to software developed internally, as well as bespoke or custom software from third parties. Again, we have stuff out of the box with Voucher. One of the ways we can satisfy this particular control is by making sure things come from known sources and are built by trusted sources, your team. Luckily, Voucher has checks for this, DIY and Provenance, so this is really a two-for-one. Similar to what we did for Snakeoil, the DIY options here are in green; we need to enable them. The valid repos option in the configuration is used to limit which repositories an image can come from. The option takes a list of repos that are compared against the repo the image lives in; the image passes if its repo starts with any of these entries. The Provenance check, which is in blue, works by obtaining the build information for the image from the metadata in Grafeas, and then verifying that it comes from a trusted project and was built by a trusted builder. In this example, the image has to be built from the project compliance-images, and either built by a particular Shopify user or by this particular service account. It will not pass if it doesn't come from the compliance-images project, regardless of who built it. And again, just like Snakeoil, we just need to add the attestors to Binary Authorization so we can block or approve it at deploy time.

A couple more. I think you see the pattern we're going for here. Here's another PCI requirement about restricting access, along with making sure we're using least privilege.
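A Voucher configuration enabling those two checks might look roughly like this. The exact key names in Voucher's `config.toml` vary between versions, and the repo, project, and identity values below are placeholders, so treat this as a sketch of the shape rather than a copy-paste config.

```toml
# Illustrative Voucher config fragment (key names and values are
# assumptions; check them against the version of Voucher you run).
[checks]
diy = true
provenance = true

# DIY: an image passes if its repo starts with one of these entries.
valid_repos = [
    "gcr.io/compliance-images",
]

# Provenance: builds must come from a trusted project and be
# produced by a trusted builder identity.
trusted_projects = ["compliance-images"]
trusted_builder_identities = [
    "builder@compliance-images.iam.gserviceaccount.com",
]
```

As in the Snakeoil case, each enabled check gets its own attestor in the Binary Authorization policy, so enabling a check here is what turns it into a deploy-time gate.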
In this case, we want to make sure that containers are run by unique, limited accounts and not as root. And yep, we have a check for that: the Nobody check, which makes sure an image was built to run as a non-root user. And just for funsies, let's do one more. This is a SOC 2 requirement we have that makes sure security checks have run prior to deployment. You might have guessed it: yes, we have a check for that, our Approved check. So again, you can see that with a matter of a few lines of YAML, enabling several checks and attestations, we've satisfied quite a few different compliance requirements.

So let's have some fun together. As you've noticed, Voucher has just a few out-of-the-box checks. We would love more. So please contribute to making compliance more fun, type one or type two fun. Here is the GitHub repo for it. Now I'm going to hand it over to Zeal to talk about OSCAL.

Yes, thank you. If you go back to Anne's talk and look at her slides, most of the evidence collected by the open source tools was XML or YAML based. So the question is: can we take this, convert it into an OSCAL format, which again is XML or YAML based, and use the evidence collected in your CI/CD pipeline to populate a report? I mentioned OSCAL earlier, so now I'm going to talk about it. That's the whole basis of why OSCAL makes continuous compliance easier and drives down some of the paperwork. Here are some of the common asks from our customers around continuous compliance and compliance modernization as they transform on our platform; I'm sure everyone will recognize these. The box I really want to draw your attention to is the last box in gray: how to decrease compliance paperwork. Most importantly, how do you reduce audit fatigue? How do you generate evidence faster?
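The non-root property that the Nobody check verifies is set at image build time. A minimal Dockerfile sketch (the base image, user, and group names are illustrative):

```dockerfile
FROM alpine:3.19

# Create an unprivileged account instead of running as root.
# A check like Voucher's Nobody verifies that the image is
# configured to run as a non-root user such as this one.
RUN addgroup -S app && adduser -S -G app app
USER app

CMD ["sh", "-c", "echo running as $(id -un)"]
```

Setting `USER` in the image, rather than relying only on a Kubernetes `securityContext`, means the non-root property travels with the artifact and can be attested before deploy.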
How do you populate certain sections of an audit report automatically based on the evidence collected? Is there a way to exchange data with your GRC tools based on the evidence generated inside your CI/CD pipeline? This last block is where OSCAL, or the goal of OSCAL, is meant to help. So what is OSCAL? OSCAL is a machine-readable format, as I mentioned earlier, that allows GRC tools to exchange data in the form of the evidence and artifacts that are generated. Today, security controls and control baselines, whether NIST SP 800-53, ISO 27001, or PCI DSS, require data conversion and manual effort, lots of cycles, to interpret a specific implementation of a control. If you have a control implemented, it needs to be manually interpreted, it needs to be manually checked with the auditor, and the evidence has to be exchanged. That requires a number of Word documents or spreadsheets. This is where OSCAL can help. The goal of OSCAL is to move security controls and control baselines from a text-based, manual approach to a set of standardized, machine-readable formats. When the security requirements are standardized, the monitoring of evidence becomes easy. And there are a number of open source OSCAL efforts to improve its adoption within the cloud native and compliance community and make it more mainstream. This is really why we are doing this talk: to drive awareness of OSCAL and have people contribute to it.

So, I've spoken about OSCAL and about continuous compliance, and when I speak to customers, the natural question from them is: how do I get started? What sort of data set should be OSCALized, or converted into an OSCAL format? Here's a reference architecture that enables you to develop this capability. The starting point of this architecture is the top box, the directive controls.
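As an example of the kind of data exchange this enables, here is a heavily trimmed, illustrative sketch of an OSCAL-style assessment-results fragment that a pipeline step could emit for a GRC tool to ingest. The UUIDs, timestamps, and wording are placeholders, and the real OSCAL assessment-results model has many required fields omitted here.

```yaml
# Illustrative OSCAL-style assessment-results fragment
# (not schema-complete; all identifiers are placeholders).
assessment-results:
  uuid: 11111111-1111-1111-1111-111111111111
  metadata:
    title: Continuous vulnerability-scan results (excerpt)
    oscal-version: 1.0.0
  results:
    - uuid: 22222222-2222-2222-2222-222222222222
      title: Image scan for build 1234
      description: Automated scan results pushed from the CI/CD pipeline.
      start: "2022-05-18T11:00:00Z"
      observations:
        - uuid: 33333333-3333-3333-3333-333333333333
          description: No high-severity vulnerabilities found in the scanned image.
          methods: [TEST]
```

Because results like this are structured rather than pasted into a spreadsheet, a GRC tool can attach them to the corresponding controls automatically instead of someone re-keying evidence by hand.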
That is nothing but a harmonized controls library, and the control requirements can come from a number of sources: your threat landscape, your internal governance requirements, your internal security policies and procedures, and the compliance requirements you have to meet externally. When you harmonize controls across these different areas, you get a technical control library, which is the red box in between, and that becomes an input and a guiding principle for your preventative controls, your detections, and then certain responsive controls or remedial actions. The key point I want to make here is that building this technical control library takes time. It's an effort you'll have to invest resources in, and it would be great if you start thinking about OSCALizing that technical control library.

Now, how would this technical control library work with engineers? Let's take a look at the path. For every directive control in your control library, you would have a corresponding preventative control to stop non-compliant resources from getting into your environment. Next, you have detections that constantly scan the environment for non-compliant resources. And finally, you have certain responsive controls, where applicable, to bring drifted resources back to their original state. Every policy evaluation should have a feedback loop to the engineers, to the people who write the code and the applications. A prompt and meaningful feedback loop like this provides a better engineering experience and increases the velocity of your applications: you'll be writing and shipping code faster, and securely. Anne talked about the CI/CD pipeline and DevOps tooling at Shopify; this is something similar.
What I want to draw your attention to are these red circles: as the code progresses through the pipeline, it gets evaluated at these checkpoints, which are marked in red. Most of the results from these evaluations are XML or YAML based, and they are perfect candidates to be converted into an OSCAL format. When these results are properly integrated with your DevOps tooling, such that your Kubernetes configuration files or your policies are converted to an OSCAL format, you'd probably get to a truly continuous compliance state. And with that, I want to make this call to action. If you're interested in contributing more to OSCAL, or in building that control library, there is a Kubernetes policy working group that has started mapping out technical controls from NIST SP 800-53. That library can guide your closed loop of preventative and detective controls. So feel free to catch up with me; we have a number of efforts at Google to drive the adoption of OSCAL. And with that, I would like to hand over to Anne to close off.

All right, so there's a survey where you can tell us if you thought this was type one, two, or three fun. I would appreciate that. And I always have to put my dog in. So yeah, we have time for questions, or not questions, if you want to have fun. Yes. Ah, we have a mic in the center. Oh, thank you. Sorry, I had to push the button, that was my bad. Okay. There was a question over here? Yes, we'll start there.

Thank you. Thank you for the talk. This question is for Anne. I wanted to ask why you chose to scan for vulnerabilities when you send the images to the registry instead of doing it in the build pipeline, for example. Is there a particular reason for that?

So, part of Google Container Registry is, what is that called, Container Analysis. There's vulnerability scanning built into that, so it's just out of the box.
All the images are scanned and the results are stored in the image metadata. What was the question? Why we chose to scan the images where we do. Because it's part of GCR, yeah. I think there was a question over here. Yes, thank you. I don't see any questions online at the moment. I'm also happy to answer things on Slack later, or Twitter, whatever. Be sure to ask on Slack; happy to talk about type three fun adventures I've had. Jet lag seems like type two fun.

Thank you for the talk. I have a question. I get auditors monthly asking a lot of information about the system and the applications, but also about the documentation: whether a service account is described in the documentation. How do you look to automate that? How can I automate that? Right now I have to scan a lot of documents to find where it is described. That's, by the way, a government policy in the Netherlands: that everything is well documented. Automation through service accounts, how would you handle that?

Sorry, I was trying to follow. How do we handle automation through service accounts?

No, it's about documentation, documentation of what service accounts are doing. You use a service account, but under the government policy in the Netherlands, you must describe the service account in a document, and you have to prove that the document exists and that the account is there in the document. How do you scan that? It's all Word documents, Excel, in several languages. How can I?

How do you explain service accounts to auditors? Yeah.

So, coming from an audit background: for service accounts, or robot accounts or automated accounts, you would have to spend some time explaining them, how they get created and how you restrict them.
Once they have an idea of what restrictions you have around service accounts, you should be able to demonstrate your flow: one, that you're following least privilege, and then whatever attestations your service account is signing can be used as evidence. Does that make sense? Yeah. Yeah, we have internal tools that we walk our auditors through on how we limit the scope of service accounts and how long a service account has access to things. So we have built some tooling around what Google does. Yeah, it sounds like that got it.

There are a couple of questions online, so I'll do those real quick. Does the continuous compliance reference architecture allow for approved exceptions, or how would you recommend dealing with that? I could not hear it, sorry, the mask makes it hard I think. Does the continuous compliance reference architecture allow for approved exceptions? Approved exceptions, yes. Yeah, there are break-glass scenarios, so yes. But with compliance, all of that needs to be documented, and you need the evidence to show auditors why it happened, why the break-glass, or whatever you might call it, happened. But yes. Lovely. Any other questions in the room? Yes.

Thanks for the talk. Just a question for Anne, around the Snakeoil flow. I can see the stuff you're doing and how it successfully ensures compliance for new images going into the environment. I was wondering what your flow looks like for detecting and automatically responding if a vulnerability gets discovered the next day that affects the same image. In particular, I'd be interested to know what the automatic response looks like, because pulling the image could obviously affect your service.

I am so sorry, I'm having such a hard time hearing. Yeah, the Snakeoil one...
Yeah, I was wondering if you have an automated process for discovering vulnerabilities after the image has been deployed, if a vulnerability gets discovered in a package that's already in that image, and what your automatic response would look like in that case? I'm so sorry.

So if you have an image that's already deployed, how do you detect vulnerabilities in it, and how do you deal with those situations?

That's a good question. So, for images that are currently deployed: anything that's in Google Container Registry is continually scanned, so you can leverage that for things that are currently deployed and map one to the other. But I don't have a good answer for you; I can follow up on Slack on how we're actually doing that. Good enough? All right.

There is another question online, and I'll make my way over here. The one online is very quick. I believe it's for the Googler on the stage: why do you use pirates as the persona for devs? Which I have my own take on.

Why do you use pirates as the persona for devs? Oh, well, I can answer that too, because I worked at Google. Google has a tool that generates little images, and what happens is people create slides at Google, they have these images in there, they think they're cute, and they just get reused. So there was nothing deliberate about wanting to use a pirate, or, I don't want to speak for you. It was a generated thing, I don't know. I'm pretty sure it's a group of icons that the design team made at some point; there's a service we can use. It's for disability visibility: there's someone without an eye, someone with a hearing aid, and the one with the eye patch just gets used a lot.

There's one here as well, yeah, go ahead. Thank you very much for the talk. I'm also not used to PCI and fun being together.
So, your advice for a new company that might want to go through the process of compliance? Because I know that PCI DSS is not exactly up to date with cloud providers sometimes. What would you advise someone who maybe wants to go through the process by automating some part of it, but maybe isn't prepared for questions from auditors who might not understand the whole concept of automation in compliance?

Do you mind paraphrasing a little bit? I got some portion of it.

So, just to rephrase it: advice for new companies that need to go through compliance steps, when they have prepared the automation but their compliance officer is not ready to translate that into PCI DSS items. What were the challenges of introducing this automation into the compliance process for you, and what can you advise new companies that might want to do the same?

Awareness and education. Any sort of new change will require it. You need to drive awareness and education about how automation actually simplifies things and makes the code more secure and more compliant. Once the conversation flips in that direction, usually whoever is opposing it gets on board.

We're a couple of minutes over, but I'll ask this one last question online in case anyone can stick around for it; you're welcome to go, of course. The continuous compliance model looks really great. What happens when deployments do not meet the compliance check, but you do need to fix a production incident?

That's a good question. I'm trying to think of a scenario where it wouldn't meet a compliance requirement and we would deploy it anyway. I think there are always exceptions. As long as it's documented well enough, why it happened, and then you go back and fix it so it does meet the compliance requirement, you're probably okay. Hopefully you have a good enough relationship with your auditors that you understand what their tolerance is for that stuff.
But yeah, you just document the hell out of everything when it comes to these sorts of things. Anyone who's dealt with auditors knows that; it comes up all the time. They want it documented. We recently did a cutover from AWS to GCP, and we're taking screenshots of everything for our SOX environment. So leave a paper trail and document it. Yeah. So we're happy to answer questions; I know you had a question. We're happy to answer questions out in the hallway. Thank you all who stuck around, and yeah, have fun. Thank you.