So, cyber security and open source software. Where do we begin? The question I get all the time is, what is the state of open source security? Which is such a vague question. It ranges from people who really don't know much about technology, who say, "It's open source stuff, should we just stop using it?" to folks who say, "Well, it's too late on that one, and now we have to look at how open source affects all of our collective security." I know this is a broad question, but from your perspective, what is the state of security in open source software? What do you think are some of the things that are good, and what are some of the things we can do better?

Well, once upon a time there was a lot of concern around security in open source products because, of course, the source code is available, and there was this worry that if the source code is available, then attackers will be able to find the security vulnerabilities. Which is kind of hilarious, because they can certainly find vulnerabilities in closed source software as well. And then there was this idea, as Linus's Law puts it, that many eyes make all bugs shallow, and the pendulum swung the other way. The problem is, it really matters whose eyes are looking. For a lot of open source projects, if they're really high visibility, really high impact, then there might be folks out there looking. But for smaller projects that we might depend on in ways that are not as obvious, there may or may not be folks looking. And if there are folks looking, are they the right folks? And if they're the right folks with the right skill set, are they also willing to engage with the community, share what they found, and work to make those projects more secure? They might actually be leveraging those vulnerabilities for their own nefarious purposes. So it really matters who's looking.
So we get to this point where we're very often working with very high impact projects, maybe with a small team. Maybe they don't have any security folks on board, and if they do, they might be volunteers. They're working on the things that are interesting to them and not necessarily some of the drudgery of application security, because that's not the fun part, and they're volunteers, so their time is self-directed. And we end up in a situation where we have a lot of technology that we depend on, without necessarily a structured, comprehensive security program to support it.

Yeah. I want to come back to that. But a question we always ask ourselves, both at the Linux Foundation and in a lot of communities, is: how can our projects be the best upstream community, security included, for a downstream user who's going to either use the software directly or productize it and sell some commercial service or solution? And the question I have for you, because you have lived in this world of application security, of taking open source code and building it into a product or service: can you tell some of these folks how you see that world?

I'm not sure I understand the question.

When you're working at Fastly, how do you view open source code that you're incorporating into your products and services? Do you check its security? Do you have a process for it? How does application security work in that context for a commercial company?

Absolutely. If we're incorporating open source software in any kind of environment, I always consider it the same way I do any third party code: it may or may not be up to the security requirements for this project, in our environment, for our deployment. And we have to evaluate it against our requirements.
There's plenty of commercial, closed source software that doesn't have a security program to address the kinds of issues that might be relevant for my environment. So whether it's coming from open source or a commercial third party, I have to evaluate it against the requirements for the situation I'm working on, and we may or may not have to do more work on it. The benefit, of course, is that if it's open source and there's community support, we can give that work back to them, whereas a commercial provider may or may not incorporate that work. But I definitely treat it like any third party library, any third party consideration: I don't know if this was built to our standards, so I have to mitigate risk somehow. It might be through isolation; it might be by doing our own analysis on it. I treat it the same way I would any other third party code.

Right. What advice would you give, though, to ecosystems like Ruby or Node? Does it create more work in terms of all these dependencies that come in, where you maybe don't know where the code is coming from? Is that a big problem from your perspective, or is it just all part of the same process?

It's the same problem no matter where your code comes from. It would be nice to think, oh, well, this provider, whether it's an open source team or a commercial provider, has the requirements covered; I ask a few questions about their development process and I can say, okay, it sounds like they're doing the kind of work we need in our environment, so we can just assume this code is up to our standards. But I very rarely get to the point where I feel like I can outsource that problem and say they've got it handled, to our standards. It's very rare that you can say this doesn't require investigation on our part.
What's more likely is that we find ways to isolate code. If you don't know what level of investigation has happened, even with open source code where you can say, oh, we can do the analysis ourselves, very often you're looking at thousands or millions of lines of code, depending on the project. Doing that kind of work in most development environments is really impractical, especially if your engineering team is nowhere near as large as the team it would take to develop software at that scale. So you have to deploy other strategies, like isolation and compartmentalization. Can I put it on some other server? Can I create a very small communications channel that has very little opportunity for access from vulnerable areas? Can I reduce the attack surface as much as possible?

Right. Yeah. I talk to security professionals like you all the time, and I can't imagine how difficult the job is. Linus was on stage a year ago, and we were talking about cybersecurity, and he said one of the hard things for someone who's writing code is that it's very hard to think like a hacker, to think of their attack vectors. What do you see as tactics that hackers are using now that maybe they didn't before? What are you seeing out there? Get us a little bit into the mind of hackers.

Things have really changed, and it depends on what kind of hacker we're talking about. When I was a teenager and working on this stuff, the goal was probably getting access to something you otherwise wouldn't have access to, whether that was an operating system or a site with a lot of bandwidth. Those were the objectives. Then attackers started looking for other ways to profit from a compromise: they want to host copyrighted material, or they want to put together a botnet to go spam folks. But the objectives have completely changed.
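The isolation and compartmentalization strategy described above, putting risky code behind the smallest possible communications channel, can be sketched in miniature. This is a hypothetical illustration, not anything from the conversation: the worker, its toy "parsing", and the JSON protocol are all invented for the example, and a real deployment would add OS-level sandboxing (a separate user, seccomp, containers) on top of the process boundary.

```python
import json
import subprocess
import sys

# A minimal worker that does the "risky" parsing. In a real deployment this
# process would also be sandboxed (separate user, seccomp, container, etc.);
# here the point is just the narrow channel: bytes in on stdin, JSON out on stdout.
WORKER = r"""
import json, sys
data = sys.stdin.buffer.read()
try:
    # Hypothetical risky operation standing in for a real parser.
    result = {"ok": True, "length": len(data), "lines": data.count(b"\n")}
except Exception as exc:
    result = {"ok": False, "error": str(exc)}
sys.stdout.write(json.dumps(result))
"""

def parse_isolated(untrusted: bytes, timeout: float = 5.0) -> dict:
    """Run the parser in a separate process with a tiny stdin/stdout channel."""
    proc = subprocess.run(
        [sys.executable, "-c", WORKER],
        input=untrusted,
        capture_output=True,
        timeout=timeout,  # a hung or exploited parser can't stall the caller
    )
    return json.loads(proc.stdout)

print(parse_isolated(b"hello\nworld\n"))
```

Even if the worker is fully compromised, all it can do is write bytes back over the pipe, which is exactly the reduced attack surface being described.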
Now, instead of just grabbing the credentials or personal information off a machine and trying to monetize that, they're also leveraging your CPU time for things like mining Bitcoin. The landscape is completely different. And now we're also talking about more sophisticated attackers; these are nation states. So instead of compromising a machine and taking what's present there, they're creating fake news and changing election outcomes. The stakes are so much higher.

Yeah.

And the objectives are so much more sophisticated. So we don't know what we don't know, and we're trying to build environments that mitigate risks that are incredibly diverse. It's hard for us to say what's going to be important in the future, but we're building software today that has to be resilient against tomorrow's threats.

Right. That's an incredibly difficult problem.

Yeah. You know, we were talking backstage about our Core Infrastructure Initiative, where we are all collectively dependent on this open source world out there. I think it's safe to say there's so much dependency on open source that that's not going to change anytime soon; you can't just rip all this stuff out and replace it. But we spent a lot of time at the Core Infrastructure Initiative finding the intersection between widely deployed, critical to society, and kind of screwed up. And there were some obvious ones. OpenSSL, I think, is the best example, with Heartbleed; we spent a bunch of resources helping that team, and I think things got better. NTPD is another example; we spent some resources there, and it's unclear that that's better. What are some thoughts you have on how we can improve the overall state of application security as an integrated part of the development process?
I realize this is a humongous, ambitious goal, but share some thoughts you have on how we might approach that kind of problem.

Well, there are a couple of things. I think over time developer awareness has really improved: more folks who don't necessarily consider security to be their primary job are still familiar with the kinds of constructs that could result in a vulnerability and how to avoid them, and when they do peer review on potential commits they're looking for these kinds of things. That's a good start. But I think one of the most important things for security has been languages that abstract things like memory management or crypto away from developers. We definitely want to be in an environment where fewer people are implementing those things, because it's fraught with peril. Those are operations that are difficult to do securely; there are lots of ways to mess them up. So if you're able to write your code in a higher level language where memory management is not an issue, then you're potentially reducing your risk of memory corruption issues down to just what the platform exposes, and that's a huge burden removed. Now we're not going through that code looking for potential memory corruption vulnerabilities; we're looking for logic issues, safe storage of secrets, that sort of thing. If you roll your own crypto, you are creating all kinds of opportunities for problems, whereas if you leverage the platform's cryptographic libraries and apply best practices on top of that, you've really reduced your risk pretty significantly.
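The point about leaning on the platform's cryptographic libraries rather than rolling your own can be made concrete with nothing but the Python standard library: a salted scrypt password hash using OS-level randomness and a constant-time comparison. This is a minimal sketch of the principle, not a complete credential store; the function names are invented for the example.

```python
import hashlib
import hmac
import secrets

# Leaning on the platform's primitives instead of implementing our own:
# scrypt for password hashing, the OS CSPRNG for salts, and a constant-time
# comparison so verification doesn't leak timing information.

def hash_password(password: str, salt=None):
    salt = salt or secrets.token_bytes(16)          # CSPRNG, not random.random()
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)    # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Every hard problem here, key stretching, randomness, timing safety, is delegated to the platform, which is exactly the risk reduction being described.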
So this move to higher level languages has done more to improve application security than any developer training we could do, or any work we could do as a community of security researchers going out there to review code, do penetration testing, develop fuzz testing tools, or do any of the low-level analysis that is so expensive in terms of time, requires a skill set that's not widely available, and all those folks have full-time jobs. I think moving applications to higher level languages is possibly the best thing we can do for application security.

Interesting. I never thought of that. But you still have to, whether it's the Linux kernel or other things, make sure there's good test coverage and better coding practices. We had a kernel security workshop here on Monday, and the folks who attended are just amazing. It's a job, maybe akin to yours as well, where the bathroom's never clean enough. And there's this story about how Bill Gates wrote this letter to all of Microsoft, back in I think 2002, saying: we're going to stop all development, we're no longer going to release any products, everyone's going to take secure coding classes, we're going to do better threat modeling, we're going to go through this entire effort to improve security. I think they went and read every line of code. Rumor has it the unspoken ending was "or you're fired"; that's basically what he was implying, anyway. But you can't fire anybody in an open source project. What do you think are some ways we could provide incentives for folks to learn about secure coding practices, or could we crowdsource ways to improve test coverage? What are your thoughts on how we might do better at that?
I think it's really hard when we're talking about folks who are working on projects because they want to. They've got their own idea of how they want to spend their time, and maybe they don't think the time they spend working on security is going to be as important or valuable as the time they spend implementing new features. And by the way, this compromise is happening in every development environment. Security is one of those things you don't get to see until it fails, whereas that new feature is something your users can feel today, and capitalizing on that opportunity means more people will use your code, which is very exciting. It's very hard to say the time you spend on security is going to have a measurable output, because you don't necessarily have a problem until someone points it out to you. You might actually be incredibly vulnerable; people might be taking advantage of the vulnerabilities in your code to compromise the environments it's running in, and you won't necessarily know it's happening. So creating a bar beneath which no one should fall is probably the best thing we can do as a community: to say that this is a critical skill for CS students, that it's part of every curriculum, that it's part of a professional development environment that you do this work, that we do threat modeling, that we identify security requirements for this code, and that we discuss what the code is intended to mitigate and what it's not. Then, when you're making decisions about how to use it in your environment, you can say, well, I'm going to have to mitigate this problem some other way, because they're saying they're not resilient against these kinds of problems.

Good, that's actually really helpful.
And exposing the tools you're using for analysis to find vulnerabilities, because other folks might be able to build on that. For example: this is the suite of fuzz testers we're using against this code base. Other folks can start from somewhere, they don't have to start from scratch, and they can build upon it, so we leverage the work that already exists in this environment. It might be somebody who wants to participate for a little while but doesn't have six months to invest, or they've got a full-time something else they're doing, but they're interested in this project because something caught their attention for a little while. Make it a little bit easier to contribute more lightly. But I also think it's about raising our standards for what constitutes a professional development environment, and what constitutes a professional software developer, in terms of creating those baseline security practices and skills.
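The idea of publishing a project's fuzz suite so others can build on it instead of starting from scratch can be sketched with a toy harness. Everything here is hypothetical: `parse_record` stands in for whatever code the suite targets, and a real project would share a coverage-guided fuzzer setup (AFL, libFuzzer, atheris) plus its corpus rather than blind random bytes. The seeded RNG is what makes a shared suite reproducible for the next contributor.

```python
import random

# Toy target: a hypothetical parser standing in for the real code base.
# Its documented failure mode is ValueError on an empty key.
def parse_record(data: bytes) -> dict:
    key, _, value = data.partition(b"=")
    if not key:
        raise ValueError("empty key")
    return {key.decode("utf-8", "replace"): value.decode("utf-8", "replace")}

def fuzz(target, iterations=1000, seed=0, max_len=64):
    """Throw random byte strings at `target`; collect any input that crashes
    it with something other than the documented ValueError."""
    rng = random.Random(seed)  # seeded, so a shared suite is reproducible
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass                         # documented, expected failure mode
        except Exception as exc:
            crashes.append((data, exc))  # unexpected: a bug worth reporting
    return crashes

print(len(fuzz(parse_record)))  # 0: this toy target handles everything it's given
```

Someone who only has an afternoon can extend the suite, swap in a new target, or add a mutation strategy, which is exactly the lightweight contribution path being described.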
Yeah. One of the things we spend a lot of time on is figuring out what is the most critical shared software in the world: what are the software packages, versions, and dependencies that everybody's using that pose this systemic risk. And I think you nailed it: what is that lowest bar? For lack of command and control in open source, one of the things we always talk about is how you create this culture of secure coding practice, of application security, in open source. You've spent your career, and probably your days, convincing executives and other people about the balance between the new feature and those practices. Can you give us advice on how you sell that? How do you sell the investment in that minimum bar, or even higher? I realize it's a hard question.

I think among developers we want to have pride in the work that we do, and if we consider this work to be a default requirement, then not doing it means you're kind of running a sloppy organization. I think maybe shame has a little role to play here, alongside pride; they're two sides of the same coin. We want to have pride that you're operating a development environment that's doing reasonable things. If you put in a change and it has a 20 percent performance impact, you feel like that's unacceptable, right? So if you do some code analysis and you identify all these different constructs that could result in a vulnerability, and you just leave them there, that should feel uncomfortable: oh, I'm not taking care of the basics. So I think it's about raising our standards as a community, saying this is what a professional development environment looks like, this is what a solid development team looks like, this is what a good developer looks like, and incorporating security into all those different
aspects. I think that's one of the ways we're going to raise the bar across an entire industry.

Yeah. One of the things I always struggle with is that fear also sells, right? In terms of: hey, we all have to do this together, because all of society's privacy is at risk if there is a systemic vulnerability in cryptography, for example, or other things. And it's this balance between simplifying the narrative too much, like, hey, if you don't do this the world's going to end, and the more nuanced and real argument: hey, this is a journey, there's a set of processes you need to employ, there's a minimum bar, there's sort of shame versus incentives. How do you think we can explain those to folks who may not know about application security in a way they can understand? What do you do when you talk to execs, or folks outside of Intel, about these complex issues?

I try not to operate with fear as my first step, because people get fatigued by folks coming in and saying the sky is falling, because the sky is always falling. Yes, there's always a vulnerability in everything, and in security it's always a disaster: this vulnerability can compromise everything, and so can that vulnerability, and all those ones as well, and over here's another bucket of vulnerabilities that can also compromise everything. At some point you're just kind of like, well, if everything's so insecure, then why do anything? What can we possibly do? It gives you an excuse to say it's not worth doing anything. As opposed to: we're going to set the bar here and knock this stuff out, and that's attainable. We can say, okay, there are going to be problems, but the bar is higher, and the set of folks who can take advantage of those kinds of problems have to make an investment that looks like this, and that's
tolerable for now. Now let's raise the bar again; it's an iterative process. We can say, okay, we have confidence that we've addressed most of the stuff down here, at least the stuff we're able to identify, and the bar is higher and the investment is higher. If an attacker wants to find a vulnerability here, they have to spend this kind of time to do it. But then to exploit a vulnerability up here, they also have to build an exploit that circumvents all these different mitigations we've put in place, and that's expensive. And then if they try to use it in the wild, in deployment, folks have all these different ways of identifying that there's potentially an exploit in progress. Once it's identified, that specific exploit becomes difficult to use over and over again, because once we know about it as an industry, we can mitigate it in the software and create ways to detect that it's happening, so it becomes less effective. And that bar is even higher: now, in addition to a vulnerability and an exploit, we've learned how they got around a mitigation, so we can make the mitigation stronger, make the software more resilient, make the environment better able to identify these kinds of attacks. Over time the bar gets higher and higher, and that's actually what's happened. It is actually a lot harder now to compromise an environment, or an operating system, or a device, because you need multiple vulnerabilities; you need a vulnerability in every layer. This is the core of defense in depth, which is part of the SDL, the security development lifecycle: making sure we're not relying on any single technology, so that even once something is compromised, the whole system isn't fully compromised. You don't keep all your eggs in one basket. You don't keep all of your money with you at the same time; you're not walking around literally
carrying all of your wealth at any point in time. You've got what you need with you, your other assets are isolated, and you've created an environment where it's difficult to easily get to any specific asset that's high value to you. All those mechanisms together allow us to create an environment where the attacker has to work harder and harder, and they're willing to do that for a very high value asset, but not for everything. It takes a nation state, not some random person who gets hold of a tool. And that bar getting higher certainly makes us all safer.

It's interesting, because I would love to have more conversations between folks like you, who see this broad view of the world and how technology gets implemented, and open source projects that are working, to some degree, in isolation. I think they do understand how their software gets implemented, but not day in, day out; they're working on the code. So, speaking of end users and people implementing this software, because we have a bunch of them out here: what advice would you give to folks who every day are getting sold, "if you use this software composition analysis tool," or "do a bug bounty program," or "use this particular tool, I've got this thing to sell you"? I get questions all the time from folks saying this is just confusing to me; how do I deal with all that? What advice do you have for folks who are implementing these systems and getting sold a lot of different tools and things?

It's definitely easy to get overwhelmed, and it's really hard to identify whether the thing someone's trying to sell you is going to be useful in your environment. So I turn to metrics. I try to say, okay, I evaluated this source code analysis tool, I ran it on my project, and I got this many red flags; and after I investigated the red flags, I ended up with
this many actual issues that required a fix in code, and the time I spent to get those issues identified and resolved looked like this. If a tool gives me thousands of flags for one or two real issues, that's a lot of time I'm spending trying to make that tool useful in my environment. Even if the tool is highly recommended, the salesperson was so convincing, and their pamphlets were so glossy, this might not be where I want to invest my time, which is actually much more interesting than where you invest your money, from a security perspective. So what things do have a good return? For example, this code we're implementing does a lot of parsing; that might be a lot of attack surface, and it's exposed through this interface. Is there some way we can isolate that? I look for mechanisms we can use to reduce attack surface. And one of the most inexpensive tools, broadly available on every platform, that makes your code more secure immediately, is delete. You can cut out code that doesn't serve a purpose you need to support any longer and reduce your attack surface dramatically just by not having it present. It's reasonable to say, oh, someday somebody might want to use this, and it's really hard to say, well, we don't need this anymore, so we're going to get rid of it. But that actually makes your code significantly more secure, because the pathways that are widely in use are the ones that are going to be most widely investigated and inspected. The edge cases don't get as much inspection; they're less maintained, we haven't looked at them in quite a while, and those are the places that are going to bite you. So if it's not necessary, remove it. Or if you can modularize it: hey, most folks are going to use this set of features, and for folks who need this other functionality, maybe we can implement it in a modular way and isolate it, so that most folks aren't exposed to it, and the bulk of your users, the folks who end up deploying your software, get the code that can be inspected at the closest level of detail.

Unfortunately we're running out of time, and I could talk about this all day, but I do want to ask you one thing: I would love you to come back and continue this conversation. I think it's such an important thing for folks in our communities to hear. Let's determine what that bar is. And just the way you look at the world, I think, is so helpful to folks who are writing code or implementing it. We just did a multi-million dollar security audit, and if we had just had a little bit of discipline and hit that delete key, we could have saved all that money. Getting your perspective is always good, so we would love to have you back. Thank you so much for your advice and for coming here today.

Thank you.

All right.