Thanks everyone for joining. I'll start with a quick introduction of myself. My name is Dan Yonner. I'm a product manager at Pivotal. I've been working on Cloud Foundry since early 2016, during which time I've worked on two main projects, both of them in the security realm. The first one, which I had the opportunity to talk about last year at Summit, is CredHub. Most recently, I've moved on to the security triage team, which focuses on application security in Cloud Foundry and at Pivotal. One of the things top of mind for me right now is how to integrate security-related automation and tooling into large development environments, so if you have experience with that, I'd love to chat with you in the hallways. The focus of my talk today is application security of the Cloud Foundry platform. I'm going to start by talking about the importance of the community's contributions to security in the platform. After that, I'll talk about who I think should contribute, then run through a quick-start guide, a security testing 101, if you're not familiar with how to do that. And finally, I'll wrap up by talking about how to do disclosure in the event that you find a security issue in Cloud Foundry. Let me start by saying: security loves the community. The phrase we often hear is, "with enough eyes, all bugs are shallow." I think that really is one of the superpowers of having a community: the more people looking at something, the more scrutiny it gets, and the more secure it's likely to be. Cloud Foundry has a lot of enterprise members. We have more than 60 member companies and hundreds of individual contributors. The exciting part is that we all have the opportunity to contribute to the security of the platform. But there's a caveat to that, right?
The many-eyes thing only works if we're actually engaged. So the whole point of this talk is to hopefully inspire you all to get more engaged with security in the platform so that we can find all of those bugs. There are three areas in particular where I think the community can really make an impact on challenges in Cloud Foundry. The first is complexity: think of this as platform-level or platform-component-level complexity. We have a lot of highly complex components in Cloud Foundry, and it's really hard to understand all of their boundaries, features, and interactions. As that complexity grows, the likelihood of security issues in that functionality grows alongside it. That's not necessarily bad; a lot of times the complexity is necessary. But where it is necessary, I think a good counterbalance is community involvement that provides more scrutiny and more security auditing of those components. Another area is integrations and variations. If you deploy Cloud Foundry with cf-deployment, I think there are about 2,500 lines of configuration, and each of those settings is a possibility for something to go wrong. The Release Integration team does a really good job of shipping a great product with cf-deployment, but they can only test a certain number of configurations. It's not going to be the full breadth of all the configurations that are out there in the community. So I think it's really important for the community to test their specific flavors of Cloud Foundry. One reason that matters is that on the well-worn paths, we may make assumptions about different components' contracts and how they interact with each other, and custom deployments can surface that those contracts aren't as well enforced as they should be.
And then the last area is diversity of experience. The underpinning of a large number of security vulnerabilities is that you broke an assumption that someone had. If you have a user signup form and you assume that no one's going to put `DROP TABLE users` as their first name, you might have a bad time. Generally speaking, people with similar experiences are going to have similar assumptions, so having a good diversity of people in the community is a good way to counter that. Fresh eyes and new perspectives uncover a lot of things that get overlooked. One thing I'll also mention is that I think this applies to security experience itself, which is to say: a cryptographer might be the best person to attack a system by predicting a random number generator and leveraging that into an attack, but somebody without specific security experience might be the one to discover that there's a default user. And at the end of the day, if they both get to the same place, both of those are valuable things to research. All right, so who should consider contributing to security at Cloud Foundry? Hopefully you were listening to the previous point: everyone's experience is valid, and diversity of experience is important. So you all, that's the answer to that question. All right, so let's talk about how to contribute. Over the next few slides, if you don't have any experience in security and you're thinking, hey, this might be a good idea, but you don't know where to start, this is a testing 101. But of course, I'll start with a disclaimer. I'm going to talk about how you could approach an attack and how to pick your target. One of the assumptions here is that you're working in a local or isolated environment. If you don't have permission to do vulnerability assessments on a platform, it's likely illegal, so don't do that.
It's certainly unethical, if nothing else, so don't. All right, so the first step is really about mindset. I've got a picture here of a bunny. A normal bunny looks at those pegs and says, I'm going to get to the other side by jumping over them. But a hacker bunny thinks, why would I go to all that work? I'll just go right through. I don't care about your access control; I want to go from one side to the other. So the first thing is thinking like a hacker. Don't focus on positive scenarios, where you would expect things to work. Focus on negative scenarios and edge cases. A few resources that I found really interesting for making that mindset shift: the first is physical security testing. It might sound a little counterintuitive, but the reason I think these resources are interesting is that they're really engaging. I've got an elevator-hacking and a door-hacking link on here, and it's interesting to watch those and then look at things you encounter in your everyday life and think, oh hey, I could hack this elevator. That's exactly what you want when your intention is to change your mindset a bit. And I think a lot of the principles carry over between the two: thinking about the various points of entry, how they're secured and authorized, overcoming different obstacles along the way, and chaining those together to get from point A to point B all translate well. The other resource is a whole host of applications, a lot of them hosted by OWASP, that are intentionally built insecurely so that you can get hands-on experience with different exploits. The one that I've tried is OWASP Juice Shop.
And HackThisSite is also a site you can visit without deploying anything. Those are really cool because they basically make it a game, and the fact that you're going to succeed makes it a lot more fun. All right, so you're starting to think like a hacker. The next thing is to start some actual testing. The first thing you might want to consider is what attack methods you're going to use. One way to go about this is to start with the OWASP Top 10. If you're not familiar with it, it's a list of the most common vulnerability classes found in software. You could select a specific one that you're really interested in, say injection: think about the various types of injection that happen in applications and what general injection attacks look like, and then explore some of the tooling that surrounds those attacks. A nice primer on that is a presentation by PagerDuty that I've linked up there; they walk through each of these exploit classes in a lot more detail. Another option is looking at historical vulnerabilities. If you look at the vulnerabilities that have been published and see, for example, that a specific component has previously been affected by SQL injection, a pretty reasonable assumption is that the application doesn't have holistic protections against SQL injection. So if you want to narrow down your targets and increase your odds of success, you might want to look at historical vulnerabilities. Speaking of which, a quick plug: today at 5:30, Molly and Rupa are doing a talk, which I hear is not going to be recorded, about the top five security vulnerabilities we've discovered in Cloud Foundry. Definitely go to that.
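To make the earlier signup-form example concrete, here's a minimal sketch of the broken assumption behind SQL injection. Everything here is illustrative, not Cloud Foundry code: the table, the input, and the queries are made up.

```python
import sqlite3

# A toy in-memory database standing in for a hypothetical signup backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT)")

# Attacker-controlled input that breaks the "it's just a name" assumption.
first_name = "Robert'); DROP TABLE users; --"

# Unsafe pattern: string formatting lets the input escape the quoted
# literal and become part of the SQL statement itself.
unsafe_query = "INSERT INTO users (first_name) VALUES ('%s')" % first_name

# Safe pattern: a parameterized query treats the input strictly as data.
conn.execute("INSERT INTO users (first_name) VALUES (?)", (first_name,))

stored = conn.execute("SELECT first_name FROM users").fetchone()[0]
print(stored)  # the hostile string is stored as harmless text
```

The point of the sketch is the contrast: the parameterized insert stores the hostile string verbatim, while the string-formatted version would hand the attacker's `DROP TABLE` to the database as SQL.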
All right, the last method I'll mention here is plain old automated testing tools. I'll put a little caveat on this: a lot of these tools are really good at finding things, but they also flag things along the way that aren't real issues. So if you do decide to use automated tooling, make sure you look at the results and validate that they're actual security issues you care about, and not just an artifact of your deployment's configuration or something like that. All right, so you've got a method. The next thing is to pick a target for your test. This is really a matter of doing a bit of good research. You want to be somewhat familiar with the application you're testing: research its different behaviors, interactions, and configurations, and look at all of its interfaces. If it has different APIs and different ways to interact with it, each of those can be a different attack surface. Then the last thing is to perform the test. What you're looking for is any change in behavior or response that compromises confidentiality (information you shouldn't be able to see given who you are in the attack), integrity (you can modify something you shouldn't be able to modify), or availability (if you find a way to send three requests and the component falls over, that's certainly problematic). Some of these can be really subtle. If you send an attack and it sends back an error message, but the error message includes details about the environment, that could be cause for concern. So look out for subtle changes and do a little bit more research.
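As a sketch of what that last step can look like in practice, here's a minimal probe loop. Everything in it is a made-up assumption for illustration: the local URL, the payloads, and the leak markers are hypothetical, and it should only ever be pointed at an isolated environment you have permission to test.

```python
import urllib.error
import urllib.parse
import urllib.request

# Hypothetical local target; only probe environments you own.
BASE_URL = "http://localhost:8080/api/resources"

# Negative-scenario inputs rather than happy-path ones.
PAYLOADS = ["'", "../../etc/passwd", "%00", "A" * 4096]

# Strings whose presence in a response suggests internal details leaking.
LEAK_MARKERS = ["Traceback", "stack trace", "/var/vcap", "at java."]


def find_leaks(body):
    """Return any leak markers that appear in a response body."""
    return [marker for marker in LEAK_MARKERS if marker in body]


def probe(payload):
    """Send one payload and record the status plus any suspicious content."""
    url = BASE_URL + "?name=" + urllib.parse.quote(payload)
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status, body = resp.status, resp.read().decode(errors="replace")
    except urllib.error.HTTPError as err:
        status, body = err.code, err.read().decode(errors="replace")
    # Keep a record of every attempt so a hit can be reproduced later.
    return {"payload": payload, "status": status, "leaks": find_leaks(body)}
```

A subtle finding here would be an error response whose body trips one of the leak markers: the request "failed," but the failure told you something about the environment.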
The last thing I'll mention is that denial-of-service testing, or availability testing, is an outlier here: usually you don't want to do availability testing that just throws a ton of load at a component. There are a lot of tools and services that prevent that sort of attack, so throwing a bunch of load at CAPI and seeing it fall over isn't super interesting or unexpected. And then the last step is reporting. As you're doing this testing, make sure you keep good logs of the tests you run, so that if you are successful, you can actually report it, and it will be really easy for us to reproduce and fix. Testing can be really frustrating because 99% of the things you try are going to fail, which is a good thing. So if you finally do get a result that you think is a vulnerability, you want to have been recording along the way so that you can reproduce it. The other note here, which I'll reiterate in the disclosure section: if you find a vulnerability and you're super excited about it, please do not disclose it publicly. Please don't create an issue on the component's GitHub. Don't even create a PR, even if you're trying to be helpful, because we want to follow a responsible disclosure process, which I'll talk about next. All right, so the last thing I'm going to cover is responsible disclosure. What we encourage is that people report vulnerabilities to us privately. The concept of responsible disclosure is really just disclosing privately to the maintainer of the software that you've discovered something, and giving them a reasonable amount of time to fix it before you start talking about it in public.
So the idea is that when we do the disclosure, we can hopefully also provide the community with the fixed version, so that people can remediate at the same time they learn it's an issue. As part of that process, we also commit to disclosing discovered vulnerabilities within a reasonable amount of time. So once we have a fix for something, the next step is telling everyone that a vulnerability exists, so that you can make a good judgment about when to update your systems so they're not vulnerable. The disclosure process is really straightforward: it starts with reporting, goes through validation and remediation, and ends with disclosure to the public. If you take only one thing from this talk, let it be this: if you ever encounter a security vulnerability in Cloud Foundry, please send an email to security@cloudfoundry.org to let us know, and we'll research it and get back to you. As a general rule of thumb, err on the side of being more communicative rather than less. If you see something and think, I'm pretty sure, but not really sure, I don't want to bother them: change your mindset. Definitely bother us. Send it to us, and we're fine if it turns out not to be a vulnerability. A quick shout-out here to the security triage team. These are my colleagues, the folks I work with every day. They handle both open source Cloud Foundry security and security at Pivotal, so if you send a report to us, you're likely going to be talking to one of these folks. I'm super biased, but I think they do great work, and I think you'll be surprised and delighted with the interaction. All right, so validation. Once you send us a report, we're going to validate it. This is pretty straightforward: our intention is to try to verify what you've sent us.
Our hope is that we can get a good handle on the issue you report and analyze any impact it has on the platform. If it's unclear, or if we don't know something, we're going to reach out to you and ask questions, so if you report something, it'd be great if you respond back to us. And then, of course, remediation. If we're able to validate something, we work with the component teams to produce fixes that patch the issue. The timeframe for this varies, but the main motivator is the severity of the issue. If it's a critical or high-severity issue, we're going to start working on it right away; we'll have eyes on it and try to get something out immediately. But we also have to weigh the complexity of the fix, and the combination of those two determines how quickly we're able to remediate. Lastly, once we've got fixes out and we've released something, we do the disclosure. This is primarily communicated via the cloudfoundry.org website: if you go to cloudfoundry.org/security, you'll see a bunch of security notices. We publish these shortly after a fix is available, so that as members of the community, you can look at a notice and say, I see this vulnerability was disclosed, I see what severity it is, and I can make a reasoned decision about the risk of patching or not patching that issue. It's up to you to do that, but please do patch. All right, that's the end of the disclosure process. As I wrap up, I just want to say thank you to the community members who have reported security issues to us over the past year. Cloud Foundry is definitely more secure because of the involvement of the community. But hopefully I've inspired some of you to do more testing and have a little more interaction with us on the open source security side of things.
And next year I'll come back and do a presentation called Community Loves Security, where I talk about all the cool reports you all have submitted to us. Thanks for listening.