So someone said out there, you know, welcome to California, but don't move here. I'm a native San Franciscan and I have to say it is great living here, and I encourage all of you to consider the Bay Area as a home. All right. So thanks, Jim Zemlin. Jim and I have worked together and stayed in touch for about 20 years now. We worked together at Covalent, as Jim mentioned. Jim impressed me there in his marketing capacity, and he has gone on to do wonderful things here at the Linux Foundation. So I congratulate Jim on all his success. Oh, you're right there. Congratulations, Jim. Okay. So Patrick and I have known each other for a number of years. When Patrick was the chief security officer at Salesforce, he considered the Fortify product suite. He goes back to McKesson, as Jim mentioned. And Dropbox was your last commercial endeavor, and now you're a venture capitalist at a firm called ClearSky. I've escaped the CISO wheel of pain. Yes. We used to say on the commercial side, now you've gone to the dark side, you're an investor, but it's okay. So Patrick and I worked on some topics together that we think will be of interest to all of you in the room. I'm going to act as the facilitator here, and I'm going to ask Patrick about our first topic. We want to start at a very high level and get your views on the growing security risks as individuals, as corporations, as a nation. I've even heard you use a metaphor of a stealth bomber and all that, so I'm looking forward to hearing that. Hopefully we're not insulting anybody in the audience here, because you're living and breathing security on a daily basis and it's inescapable in the news. But fundamentally I think the game has changed over the last several years in a very visible way. Let me start with the stealth bomber side. Let's talk about nation-state threats. We see it in the news about North Korea engaging in various hacking activities. We've heard about the U.S.
potentially collaborating with Israel on the attacks against the Iranian nuclear facilities, which are cyber attacks, et cetera. But think about it: why is this so compelling? Why is cyber as an attack vehicle so compelling? I like to think of things in terms of economics. Currently, if you Google how much a B-2 stealth bomber costs, it's about $2 billion per bomber, I think it is. Think about how many really smart developers you can hire, stick in a room, and basically tell, go hack something, for $2 billion. So in the grand scheme of nation-state budgets being spent on the military, it makes perfect sense. And it also, I hate to say it, has a democratizing effect: it doesn't take many really, really smart people to develop an offensive capability. So even small countries that don't have conventional resources, that can't afford the B-2 stealth bombers, can surely mentor, grow, train, and develop cyber offensive forces. I hate to use the word cyber, old school guys don't like to use it, but it's so much in the vocabulary nowadays. So that's one dynamic, the nation-state side of the attack, and it is very real. We've seen it play out also with Russia and Ukraine, X number of months ago, attacking the power grid and taking it down. So there is a growing sense of paranoia, here in the United States especially but globally as well, about how to deal with this new reality of global cyber war. God, that sounds scary. But the challenges we face there are not just on the government side; I think it extends into people's lives as well. Having come from the healthcare world, as CISO at Kaiser Permanente for a period of time, you learn that there is an amazing amount of vulnerability and opportunity to attack and to cause damage, and not only to dollars and data.
In the past, I would say the impact of many of these attacks was that you lose money, or there's a privacy issue and there's an uproar and people get paid off, or you get signed up for three years of credit monitoring, and the world moves on. I think that's changing, because with the dependency on devices, whether it's medical devices or IoT devices, the consequences are becoming physical consequences that are not easy to compensate. If we project into the future, there's a lot written about this with autonomous vehicles: what are some of the horror scenarios you can think about if you have a compromise of autonomous vehicles, crashing, et cetera? In the healthcare space, we did some tests even 10 years ago where we were able to hack into infusion pumps via the network, alter them, and deliver potentially fatal doses of medication. These are all life-changing scenarios. And what's really interesting is that many of these devices, and the systems underneath, are actually powered by open source software as they're being pushed into the marketplace. And while I'm still monologuing, let me talk a little bit about the internet of things and why it's concerning to me. The best way to frame it is maybe to put it in the context of a network-connected fridge, which is, I don't know, I consider it to be a silly device. But let's assume you just bought a network-connected fridge. Think about it as a refrigerator and ask yourself: how long do you expect this refrigerator to be operational in your house before you replace it with something else? It's probably, you know, 10, 15 years. After that, it gets too grody, or maybe you have an efficiency problem or a refrigerant leak or something like that, and you have to replace it. So a refrigerator is a durable good that has a relatively long expected lifetime from a consumer perspective.
It probably has a little Android-based tablet sitting in the front and a few sensors inside. But if you look at the embedded compute component that's built into that durable good, ask yourself: which consumer electronics company has actually demonstrated the ability to maintain technology designed for the consumer market for more than three years? We've created a disposable culture around technology in the consumer space. And what happens when you don't maintain something anymore? I have this principle I developed, which is that if you want to maintain reasonable security for a system, you have to monitor and maintain it for the duration that it is connected to a network. So as soon as that monitoring, maintenance, and patching stops, which should be a function of the manufacturer, you have a security problem. So what's happening? I kind of like to think, and I'm being a little bit alarmist, that we're driving toward a cliff. We have a saturation of these low-security-quality IoT devices that are gonna be sticking around in people's homes and in businesses for maybe decades and that are not going to be maintained. We know that the embedded components are not gonna be maintained. So they will become exploitable eventually. The good news is there are a lot of vendors fighting to play in that space of IoT security. Going back to the nation-state, though, I think the average consumer, the average citizen, thinks of nation-state attacks as military attacks against the grid, those kinds of things. But you have nations, China to point to one, where the government and the commercial sector work together in order to advance the causes of the commercial sector. So the impact is not just, do we need to fear these nation-states in terms of military defense and protection, but it's also, do we need to fear them?
Well, it's been proven we need to fear them around the economy as well. No, that's been a huge issue. If you remember the Aurora attacks, as they were called, this is when China compromised Google. For the tech sector at least, that was a bit of a wake-up call, a big changing moment. And I remember at Salesforce, it really drove security and trust being the number one priority of the company. It drove an immense amount of effort and focus, which is: this can't happen to us, especially in the cloud business as a provider. It massively degrades trust if an event like that happens. So the reality is that the espionage element of it is probably more limited in terms of the number of threat actors that are out there, but it's still very clearly apparent in terms of economic espionage. And we're also clearly seeing the more traditional espionage against nation-states. If you read the press this last week, I believe there's a bunch of evidence of Russians having compromised a German government network. So the economics are too compelling not to do this. The systems are too vulnerable, and it's too cheap and easy, not to do it. Yeah, for sure. Okay, well, let's bring it down then to the topic that's of interest to the room specifically, which is open source. There's a long-running debate between the closed source world, where some would argue for the total control it gives you, and the open source world, where there are supposedly many eyes on the code, this many-eyes debate. What does this mean in terms of the security of open source, and who's responsible? How should the eventual consumers of open source think about security when they use open source products? I think there's been this ongoing debate: which is better, closed source, open source, whatnot. And I don't know if it's settled yet. Maybe I wanna poke you guys and make you a little bit angry at me, but I don't think there's a lot of evidence that the many eyes approach is actually working.
I read a report recently, I think published by the Department of Homeland Security, where they did an analysis. And they basically said that, yeah, the many eyes approach is not as effective as we'd like it to be, and there is some benefit to the obscurity of having a closed source product. I think one of the events that illustrates this is a few years ago, when we had the vulnerability in OpenSSL, Heartbleed as it was called. It's open source. There were hopefully many eyes looking at it. That was the theory, but they weren't. So I think there's still a challenge here, and maybe we need to move beyond the argument of which is better. We just need to figure out how we manage risk in both situations. Open source is here to stay, that's one thing. And like I say, Linux is the new data center platform. Every company I look at that's doing some interesting networking product, it's based on x86 hardware with Linux running on it. So the proprietary world is shrinking, in terms of core infrastructure especially. The question is how we live in this new world where we know there is a growing dependency on open source software, and where the theory that many eyes makes it somehow self-regulating, that smart people will look at it and find vulnerabilities, is, I think the evidence shows, not working as we may have wanted it to work. So we have to think slightly differently about how we manage risk in this type of world. What about these open source security projects that are coming out now? Facebook has some stuff, Google has some stuff. There's hardening, there's securing the code of other open source projects, and then there are open source security projects themselves. It's actually really cool to see many of the large tech companies open sourcing security tools; in Facebook's case, it's osquery.
And I know that Netflix and a number of other companies are developing tool sets internally, and they look at these and basically go: we're building security tools, but they're not core to our business, so let's share those with the broader community. And then you see uptake on those, and then you see commercialization. I've seen several companies now that have taken osquery and wrapped it with manageability components, so that other industries that aren't very heavy in terms of having their own engineering teams can take advantage of it. So I'm really heartened by the open sourcing of security tools by very skilled security teams, security engineering teams. The other issue, though, which I think is less addressed right now, is the actual integrity and quality of the code of all the other open source projects as a whole, and your ability to rely on them. Well, how do we do that? Think of the company that Jim mentioned that I was the CEO of, called Fortify. It was a commercially built product to scan code, look for vulnerabilities, and then provide ideas on remediation, and we would sell that typically to commercial enterprises, and they could enforce rules. You had to go to training, you had to use this tool, your output would be evaluated based on what the tool told you about your code. The code could not go into production until a certain level of security was achieved, just like we've always had a certain level of quality, a certain level of performance. But in this open source world, what's the responsibility of the developer, given that there isn't this kind of uber enforcement taking place? So it's responsibility and/or measurement as well. Is every developer gonna have the same sense of responsibility? Do they think, my code is going to end up as a module that's powering a battleship and has national security implications? Probably not. So I think the question is, how do we drive more visibility?
How do we enable the organizations that are building on these open source components to have better visibility into the quality and sustainability of that code? So I love the concept of source code vulnerability scanning, and there are all kinds of other things that can be done, including dependency scanning of libraries, trying to see the distribution, which ones are vulnerable and which are not. There are companies like SourceClear basically building businesses on this, and Black Duck is moving in that direction as well. That's all really exciting. The question is how we pull this together so that if you're building an open source application, or a larger one, you can basically say: I've only used components that have a tier one trust score of some kind. Then back off and look at some kind of composite metric on the components. Is this actively being maintained? In other words, do I see commits happening on a regular basis? Has this gone through vulnerability scanning? Have we done a full dependency check on this thing? Is there maybe a corporate sponsor or somebody more accountable? I think there is an opportunity for the open source community to really come together and put some metrics, KPIs, other things around these reusable components, so that for the individuals and companies building on these libraries and tools there is more transparency as to the trustworthiness. What about these commercial attacks that are happening, like ransomware and things like that? What are your thoughts there, and how does the community respond to those kinds of attacks? I'm gonna go back to economics. It's about attack surface and the ability to make money. Ransomware plus Bitcoin is awesome for attackers, because you can completely disintermediate an entire, very complicated criminal supply chain and go directly from compromise to profit. It's a trend that we're not going to see reverse itself.
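The composite metric Patrick describes earlier, regular commits, vulnerability scanning, a full dependency check, an accountable sponsor, could be rolled up into a single trust score. This is only an illustrative sketch: the signal names, weights, and tier thresholds below are invented for the example and are not any established standard.

```python
from dataclasses import dataclass

@dataclass
class ComponentSignals:
    """Illustrative health signals for one open source component."""
    commits_last_90_days: int    # is it actively maintained?
    vuln_scan_passed: bool       # has it gone through vulnerability scanning?
    deps_fully_checked: bool     # full dependency check completed?
    has_corporate_sponsor: bool  # is somebody more accountable behind it?

def trust_score(s: ComponentSignals) -> float:
    """Roll the signals up into a 0-100 composite score (weights are made up)."""
    score = 0.0
    # Credit commit activity, capped so hyperactive repos don't dominate.
    score += 40.0 * min(s.commits_last_90_days / 30, 1.0)
    score += 30.0 if s.vuln_scan_passed else 0.0
    score += 20.0 if s.deps_fully_checked else 0.0
    score += 10.0 if s.has_corporate_sponsor else 0.0
    return score

def tier(score: float) -> str:
    """Map the score onto coarse trust tiers, 'tier one' in the discussion above."""
    return "tier-1" if score >= 80 else "tier-2" if score >= 50 else "untrusted"
```

A component with steady commits, a passing scan, and a checked dependency tree would land in tier one even without a corporate sponsor; the point is not the particular weights but that the signals are cheap to collect and easy to publish.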
I think we're going to continue to see both destructive attacks and also these direct monetization attacks, ransomware-related attacks if you wanna call them that. How does that affect the open source community? Right now, I would say prepare for it, because so far it's been targeting the Windows desktop and end user environments. You go where the money is. But let's assume that some clever security companies, and maybe Microsoft, figure out how to restrict that ability, maybe at the file system level, so that you have reversing capabilities that negate this, et cetera. If that happens, the attackers are gonna shift elsewhere. So what's next? Like I said earlier, I really think that Linux especially has won in the contemporary data center. I don't even remember the exact number, but a large percentage of even the machines running in Microsoft's Azure cloud are Linux machines. So attackers are going to go where the money is, and who knows, maybe they'll move upstream, away from attacking the desktop and individual end users, to attacking the infrastructure and holding it for ransom. So again, the accountability of open source developers: it's like, manage your vulnerabilities. But there's an old adage, which I think comes from Microsoft, which is: if the bad guy can run his code on your system, it's not your system anymore. And the question of the impact is really how creative the bad guy is. Ransomware is just one instantiation of creativity where, like I said, they've disintermediated the criminal supply chain and have gone directly from compromise to profit. Interesting. We've read in the news about Spectre and Meltdown, and I'm just curious what your views are on what happened there and how the response was; the notification has been a little controversial. I think there are a couple of interesting trends here to look at.
One of them is that there's a shift toward more scrutiny at the hardware level, which I think is good. I'm glad researchers are paying attention to this, and it is also causing more scrutiny of the hardware manufacturers themselves. I don't have inside knowledge of how everything was negotiated with Intel and AMD and the other chip manufacturers for some of these vulnerabilities. But clearly, I think there are gonna be some very interesting lessons learned, because it doesn't seem like there was a lot of muscle memory there to make that a very smooth process. It's concerning that certain open source projects, like OpenBSD for example, who are very security focused, were left out of that negotiation ring, the circle of trust basically, which is very interesting. I think that needs to be reevaluated, as well as, again as an outsider, the quality of the design. I would basically put a call out to the open source community to hold themselves accountable by putting metrics and KPIs in place. But I think maybe this will be a turning point to look at the hardware manufacturers as well, because they have such an amazing amount of power and impact on overall system security, and maybe to drive a higher degree of transparency into what some of those risks are, and even into the process they're using for building some of these new chipsets. The other side of this, maybe less so on the pure hardware side, is something in the middle that we're also seeing, which is attackers maybe trending toward firmware, BIOS, whatnot. It's like the UEFI BIOS. If you look at the BIOS from X number of years ago, it was this horrible little, not command line interface, but text interface. Nowadays, you have these rich graphical interfaces for computer BIOSes.
You've got to wonder how many lines of code are driving that, what kind of attack surface is there, and what opportunities does that open up? When I reflect back on my time as a CISO of large environments, good God, I would not want to have to touch the BIOS. The whole concept of proactively going out and updating and patching BIOSes is like, let's put our fingers in our ears and hum la la la so we don't have to think about it, because just managing patches at the OS and the application layer was tough enough. But I think we're slowly being forced into expanding our set of priorities to focus more on the hardware as well: having knowledge around hardware-layer vulnerabilities, having knowledge around BIOS versions and what opportunities those create. And there is evidence, again especially at the nation-state level, that within the supply chain and other places, BIOSes are being corrupted. And if we go back to what I said about there being a destructive element to some of these attacks, what better way of causing damage than to just break systems by corrupting the BIOS in a way that can't be recovered easily? Yeah. We're gonna have time for a few questions, so if you think of something you'd like to ask Patrick, we have about nine minutes left. I have a couple more questions, and then we'll open it up for a few from the audience. One question I have that's off script, sorry, is: there are companies represented out here, and you've had a ton of experience as a CISO, Chief Information Security Officer. What advice would you give to a company, just at the 30,000-foot level, on how to think about protection? How do you think about detection? How do you think about remediation in today's world? So my general bit of CISO advice is: prioritize everything. We talk about the attack surface being almost unbounded, almost infinite basically. That's kind of true.
So you do have to really understand your business risk, prioritize, and be willing to ignore stuff that quite frankly just isn't that important. As part of that prioritization, work with the business so they understand, and basically say: look, here's what I'm gonna be able to focus on and the risk I can address, and here's the rest of the universe that we know about but are not gonna be able to work on, for lack of resources and other constraints. So having an honest conversation with the business around priorities and risk management would be my number one bit of CISO advice. The other thing, and maybe this will be a little more open source focused: I'm really excited about the whole DevSecOps movement. I don't know how much everybody here knows about it, but it's basically the concept of integrating security into the CI/CD pipelines, security not being outside of development and outside of operations, pointing at the problems, but being part of the process for both software development and operations. That is really exciting, and I'm seeing companies, gosh, I'm blanking right now, but there are a number of companies that are basically creating enforcement along that pipeline, where you can integrate a variety of tools, a variety of checks: make sure everything passes before anything is pushed, make sure you look at all the library dependencies, that they're updated, and that everything is tested. So there's Cybric, that's the company I'm thinking of, and there are a number of other ones as well. So there's a good movement going on that's trying to create a sense of sustainability, especially for development on top of open source systems, using these DevSecOps philosophies. And my last question, which concerns all of us in this room as individuals, is: what do you do to protect yourself? What can we learn from that? The number one thing you can do as an individual is use two-factor authentication.
I'm a huge fan of YubiKeys. Okay, thank you for the applause. Now seriously, if you look at the data, what do individual compromises come down to? It's not crazy zero-day nation-state attacks. It's not ninjas coming in and trying to kill you in the middle of the night. It's the simple stupidity of reusing the same password across multiple sites. One of those sites happens to get hacked, and the bad guys then test that password they have access to across all the other known websites. We saw a lot of evidence of this testing activity; it's always going on. And then they have access to your accounts. So I'm not even gonna tell you to use smart passwords. Just turn on two-factor authentication wherever you can. And for those sites that don't support it, make sure you use really solid random passwords. I personally use a tool called LastPass, which generates them and manages them and pushes them into browsers. You can argue that's a risk in itself, but it's a risk I'm willing to take when I combine it with two-factor authentication. That's the number one piece of consumer advice: use two-factor. Great, I use LastPass too. So do we have any questions? Mr. Zemlin. Yeah. You're next. I'm gonna hog these questions. No, I just have one question, because you keyed in on something. Keep in mind that we talk about this idea of open source projects that are then productized; companies will sell those, create value, and then reinvest back into these projects. A lot of people here are in charge of productizing open source, and in that supply chain they're indemnifying, they're taking responsibility. But now you're saying, well, hey, that upstream has to take responsibility too. And there are some people in here who are in those upstream projects who might counter-argue and say, hey, the license is pretty clear here: as is, no warranty, right?
And it sounds like, you know, Mårten Mickos was here from HackerOne on Tuesday and was saying it's everyone's responsibility. It sounds like we've moved from sort of the law, which is as is, no warranty, into this social norm of, listen, we are all in a pretty bad spot. What can folks who are in industry here productizing open source do to help these upstream projects be aware of this responsibility, provide incentives for them to do this, to kind of help them in that world? What ideas do you have around that? Wow, incentivizing them. You want to take that one? Well, to me, the irony of that whole situation is that if you're developing an open source component, you're very prideful that it works, right? You are very focused on functionality. And the reality is, in today's world, functionality and performance have to go side by side with security. So you can incentivize, but there has to be this mind shift that it doesn't work if it's not secure. That's right. It can do what it's designed to do over and over again, but if someone can hack into it and take advantage of it, then it doesn't work. And to me, that's the irony of the situation: it doesn't work unless it's secure. I think my comment is that step one, before I even think about incentives, is just having knowledge. So looking at the dependency tree that's there, looking at whether there are old versions of libraries embedded. We, as a venture firm, do analysis of source trees as part of diligence, and we even find security companies where it's like, why are you running an old version of this library? In some cases, the muscle memory, if you want to call it that, or the process maturity around maintaining currency and going back into all of these different open source projects just isn't there. And the tool set may not be there to enable them to do that.
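The dependency-currency check described here, flagging embedded libraries that lag behind their latest known releases, can be sketched in a few lines. The package names and version data below are hypothetical placeholders; a real tool would parse the project's actual manifest and query a package index or vulnerability advisory feed instead of hard-coded dictionaries.

```python
# Hypothetical pinned dependencies, as parsed from a project's manifest.
pinned = {"openssl-wrapper": "1.0.1", "libfoo": "2.3.0", "parser": "0.9.2"}

# Hypothetical "latest known" versions from a package index or advisory feed.
latest = {"openssl-wrapper": "1.0.2", "libfoo": "2.3.0", "parser": "1.1.0"}

def parse(version: str) -> tuple:
    """Turn a dotted version string like '1.0.2' into (1, 0, 2) for comparison."""
    return tuple(int(part) for part in version.split("."))

def stale_dependencies(pinned: dict, latest: dict) -> list:
    """Return the names of dependencies pinned behind the latest known release."""
    return [name for name, ver in pinned.items()
            if name in latest and parse(ver) < parse(latest[name])]
```

Run against the sample data, this flags the two lagging libraries while leaving the current one alone; wired into a diligence script or a CI job, a check like this supplies the "step one" knowledge before any incentive question even comes up.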
And that's why I was saying I'm actually really enthusiastic about what I'm seeing with DevSecOps, where it's changing some of that dynamic, so you have continuous updates of embedded components and visibility into the risks. Jeff Borek with IBM, thanks for being here. To follow up on your point about the, I don't want to say you characterized it as a myth, but the old convention that many eyes on open source make for a secure ecosystem. Can you pull the thread on that just a bit more and get to the next level of detail? Do you feel that's because it's ultimately the sheer volume of open source that's creating a problem that many eyes can't keep up with? The question is, who are those many eyes? There are obviously efforts like Project Zero at Google, where people are looking at very significant open source projects and basically saying, this is good for humanity, we're gonna invest in that, which is phenomenal. But I would say let's up-level it. One of the biggest challenges in the security world right now is that there really aren't enough skilled, qualified individuals, especially within the space of application security. And those that are there, especially in the Bay Area, are highly compensated by the companies they work for. So if we peel this many eyes argument back, I would ask: who has the skills to look at, analyze in depth, verify, and find bugs in open source products? And then, who is incented to do so, and who actually does it? It's probably, if I think about it as a Venn diagram, a relatively small group of individuals.
So I do think that maybe, taking the lead that Google has set, the industry that is relying on and building on open source has to create some economic incentives: take these individuals, because it does come down to that, and tools like Fortify, and basically say, hey, our contribution to the open source community, and maybe IBM has a role to play in that, is to actually pay for people to look at this. And clearly it's maybe not everything in the open source universe; maybe it's the most core projects, the ones that everybody relies on. You can do the dependency analyses to identify which ones are actually being used, and you can apply some science to figure out where you put your resources. But in my mind, I think that's the call. One more question. Hi, I'd like to follow up on that. You mentioned earlier that you thought there was some advantage to obscurity in security. I'd like to take issue with that, mainly because of one of the things you just said, which was that when you were doing due diligence, one of the things you did was analyze the source code. If you're using proprietary components, there's no way to do that. So I agree with you about the many eyes: people only look at your code if they're interested or if they're trying to break it; nobody volunteers to go and do a security review of something. But with open source components, what that means is that they have an advantage, in that people who do have security tools and who want to examine the code for flaws can do so. And the security tools, I'm hoping, are getting better and better. My big hope is that the intersection of machine learning with vulnerability analysis will actually create a massive automated scanning machine that will find the vulnerabilities before the bad guys do, though probably the bad guys are already doing this. But proprietary components simply can't take advantage of that.
The only way you can take advantage of that is by exploring the source code. I think I was very careful with my wording, to not step into the middle of this debate and get stabbed on the way out, by saying it was a DHS report that made that statement, that conclusion. Now, this has been a long, ongoing battle, and both sides, I think, have very valid points. But again, for what it's worth, from an attacker's perspective, it's maybe slightly more difficult for me to attack a binary and try to find vulnerabilities and do black-box testing than if I had the source to test against. So again, I'm not gonna step into the middle of this debate; that's why I referenced the external report. And I also said, basically, regardless of where the reality is, I don't know if it necessarily matters. The open source community has to look to itself and ask, how do we improve the quality overall? All right, thank you very much, Patrick. Thank you all for your attention. Thank you.