Hi, I'm Priyanka Sharma and I'm Director of Technical Evangelism at GitLab. Today we are hosting GitLab Connect SF in partnership with General Catalyst. We're super excited that you're all here. We're being recorded and this is going to go on the interwebs, just FYI. The theme for today is Zero Trust Security. As many of you know, security is a hot topic, especially in the age of cloud native, which, trust me, I hear a lot about. I serve on the board of the Cloud Native Computing Foundation and it's an important aspect of companies modernizing their software workflows. So today we're going to have conversations around that, and there are folks from multiple companies here today. So everybody should mingle, hang out, learn from each other. The agenda for today: first we'll start with a kickoff talk by Dr. Steve Herrod, who is a general partner at General Catalyst, followed by a lightning talk by Jim Zemlin, who heads the Linux Foundation. And then we'll have a panel with these awesome panelists right here where we'll talk about zero trust security. Through the panel, folks can ask questions and we'll all interact and talk to each other. If there are things discussed on the panel for which you have a comment, that's also welcome. I think of this as an interactive experience and less of a broadcast. So with all that said, let's get started. I'd like to welcome Steve Herrod, who is a GP at General Catalyst, has a background in computer science, and was the CTO of VMware earlier in his career, among many other accomplishments. You're very nice. I just wanted to say a few things quickly as we welcome the panelists here. First of all, thanks for coming to General Catalyst. We're in Palo Alto if you're ever down there, and we're in San Francisco, Boston, and New York. I lead our investments in developer tools and enterprise infrastructure, kind of anything that's extra techie. So it's a good crowd to be with.
I guess this is also nice because we're on the eve of RSA. So many people here are going to RSA, like, apologies, I guess, it's such a tough event. There's 800-plus companies that are going to be there this year. And part of it is this is one of the biggest growing areas of trouble, obviously. That's why it's a board-level issue. It's also one of the growing budget areas, because it is still a problem. But I think it's also really neat, and I think that's what the group here will talk about. Really, as you try to improve the software development life cycle, which is obviously GitLab's mission, how can you actually make security, which is usually the biggest inhibitor, actually become part of the flow? So I'm anxious to hear how you all think about it. There are easy themes that come out of each RSA if you've ever gone to these; it's gone through a bunch of different phases over the years. This year, I think the predictions are everyone will say they're doing AI. So that'll be the first prediction. Actually, I heard that the funny quote now is, if it's written in PowerPoint, it's probably AI; if it's written in Python, it's ML. So that's the way to know. So I think AI will be a big deal. Zero trust architecture, as this group will talk about, is absolutely something. It's kind of trust no one, even if they're in your perimeter. So that'll be a big theme. And then cloud native apps and the way that they're deployed, whether it's in containers or, in the future, as serverless comes on board. Really big focus, and people don't really know yet exactly how the security world will work for that. So our friends at Twistlock and other places can talk about that. So anyway, the whole point of these events is for you all to talk to each other and meet some interesting people. So I hope you do that. And thanks for coming. Well, thank you so much, Steve. We appreciate your remarks. And it's so nice to be here at General Catalyst.
I really appreciate you hosting. Now I'd like to welcome Jim Zemlin, executive director of the Linux Foundation, which doesn't need any introduction. They are leading the charge on open source across the world. Every time I talk to Jim, he's either in one country or another, rarely ever in San Francisco. It's like the mission is going far and wide. And the Cloud Native Computing Foundation is also a part of the Linux Foundation. So they're very deeply involved with modernizing software workflows. So with that, Jim, please come and talk to us about security and open source. All right, thank you. Thank you. So you've all heard of Linux, I know that. But how many people here know that the Linux Foundation has a lot of projects beyond Linux, including the Cloud Native Computing Foundation? Were people even aware of that? All right, this is a good crowd: San Francisco, a venture firm, GitLab, you know. So, yeah, I mean, we started out as the Linux Foundation, but really took a lot of the practices that we learned in collaborative development from Linux. Obviously, Git was another outgrowth from Linus Torvalds, who works at the Foundation. And we are glad to see that there's a lot of commercial value being captured from Git and that project. But we work on the Kubernetes project, Node.js; Cloud Foundry is a project of the Linux Foundation. We're the largest certificate authority in the world; Let's Encrypt is one of our projects. So we have hundreds of open source projects and thousands of developers working on them. A couple of very quick trivia questions and I'll talk about application security. We gather about 40,000 developers every year at open source events all over the world. Our KubeCon Barcelona is expecting 12,000 people, I think. Yes, yes, we're a little freaked out about space.
But the growth curve of the interest, attendance, and just quality of the attendees at these events, just in my tenure at the LF, which admittedly has been long, is stunning to see. And I think what it is, is a shift in purchasing patterns and influence towards developers and towards open source that's just manifesting itself in our experience at the Linux Foundation. And one of the fun things about the Linux Foundation, which I was talking to Steve about earlier, is I get to see all kinds of projects, from the world's most successful projects to projects that may be struggling. And that's something I think about application security as it relates to open source, that all of us have sort of a collective interest in, and in many cases, a collective responsibility to try and understand. The most successful open source projects, whether it's Linux or Kubernetes, tend to have a very similar set of economics, labor patterns, and sustainability patterns. You start with incredible code, but you don't end there. That code is generally worked on by a diverse set of stakeholders, whether it's a company, or people from different countries, or different development perspectives. But the main thing is there's a second step of productization, where a Git begets a GitLab or a GitHub, a Kubernetes begets a Heptio, or a Kubernetes service on any of the popular clouds. And those products, in the productization process, provide tremendous innovation feedback. What does it mean to scale the Alibaba cloud on the most popular shopping day in China? You don't get those requirements working from your PC at home. It's a very important innovation feedback loop into the upstream project. In addition, it kicks off a ton of profit. There is a lot of money being made, either directly or indirectly, by companies based on open source.
So we're proud at the foundation that last year, two companies based directly on technologies started by Linus Torvalds, Git and Linux, were collectively purchased for $40 billion. Now we're waiting for those deals to completely close so we can get our portion. But it's that monetization, it's that value capture that then creates reinvestment back in the open source project. Not in the form of working at the Linux Foundation, or giving to the Linux Foundation; I was joking about that. It's largely the labor economics of the developers who work at a VMware or an Amazon cloud or wherever it is, who are working back in Kubernetes or Linux and creating more value that begets better products and services, more profit, better code. And that's really how the economics generally work. And in that cycle, you get the benefit of people who, as they productize, care about security. And I would argue that even in that very effective positive feedback loop, we can improve application security practices in those healthy, sustainable projects. And that's a big part of what the foundation is focused on right now, and what we think will improve our collective privacy and security. Things like just making sure that there's a responsible disclosure policy, there's a security mailing list, that you have good test coverage. Again, for people who are in application security, this is very basic, and it may shock you that many open source projects, and I'm not gonna name one, Linux, do not have the best test coverage in the world. And we need to continue to work on that. Fuzzing the code regularly produces a lot of bugs, but those bugs are important to fix in order to reduce the attack surface of the software, which is where a lot of intrusions end up coming from. Third-party audits of very, very critical code bases are something that's really important. Bug bounty programs; I noticed there's a HackerOne t-shirt right there.
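As an aside for readers, the fuzzing practice Jim mentions can be sketched in a few lines. This is a minimal toy: `parse_header` and its documented `ValueError` are hypothetical stand-ins for real code under test, and a real project would use a coverage-guided fuzzer such as AFL or libFuzzer rather than pure random inputs.

```python
import random
import string

def parse_header(data: str) -> tuple[str, str]:
    """Toy parser under test (a stand-in for real code you would fuzz)."""
    key, _, value = data.partition(":")
    if not key:
        raise ValueError("empty key")  # documented, expected failure mode
    return key.strip(), value.strip()

def fuzz(iterations: int = 10_000, seed: int = 0) -> list[str]:
    """Throw random inputs at the parser and collect any input that
    crashes it with an *unexpected* exception type."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 40)))
        try:
            parse_header(data)
        except ValueError:
            pass  # expected: empty keys are rejected by design
        except Exception:
            crashes.append(data)  # a real fuzzer would save these as test cases
    return crashes

# Any crashes found become regression tests once the bugs are fixed.
print(len(fuzz()))
```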
I don't know if you work at HackerOne, but we work with Mårten, who helps us facilitate bug bounty programs for critical open source projects. Making sure that application security is part and parcel of the process, and, you know, GitLab obviously cares very much about this, is something that we really wanna promote in the big open source projects that we all care about. And that's something you're gonna see us do more and more. The interesting thing is that most big, successful open source projects, from a security perspective, tend to be generally good in the same way. That's that positive feedback loop. They have the means and the resources to collectively fund an audit and to do meaningful application security practices. The question that I think is even more interesting is, what do you do with the long tail of projects that fall in the intersection of important to our security and privacy, and totally screwed up beyond belief? This is really the most difficult part of understanding all of our collective security from a software perspective, because this software is so widely deployed in almost everything we use every single day. And what I'm talking about is not just doing a dependency analysis on the software you're running, which you should, and then doing a vulnerability analysis of that and so forth. That's gonna pick up npm packages, things from the major package management systems, RubyGems and so forth, where the things you're pulling in may be the wrong versions or out of date, and there are some vulnerabilities because of that, and so on and so forth. There's a whole industry now of software composition analysis tools and products from companies that do that kind of work.
But the question that I think is even more interesting is something the Linux Foundation is working on with Harvard's Laboratory for Innovation Science, the Institute for Defense Analyses in Washington, and some others: trying to answer some simple questions. What is the world's most important software? By package, by version number. By some algorithm that measures criticality: is it network-facing, is crypto involved, where is it in the stack in terms of criticality? Understanding that, ranking that, so you can say, starting with number one and all the way down to, let's say, 3,000 or 4,000, here's the world's most important software. Second question: who wrote this stuff? Does anybody know? I know, it's actually kind of disturbing. We don't actually quite know. I think with modern version control systems like GitLab or GitHub, where fortunately a lot of these projects live, you have relatively good provenance of where code is coming from. True identity may be a little bit more opaque; I'd like to see some things that solve for that in the future. But for older projects that are critical, and everybody knows these names, the OpenSSLs, the ntpds of the world, it's less clear who's working on it. They're not in places like GitHub. They're in, like, super old versions of Bitbucket and stuff like that. There's nothing wrong with Bitbucket. But they're just in a mishmash of things. I was telling Steve earlier, all the good projects are generally good in the same way, and all the troubled projects are very Tolstoy-esque: they're just screwed up in their own unique, individual ways. And I'll give you some examples, and then I'll stop, on the third question that we really ask, which is: is it secure? So: what is the world's most important software, who wrote it, and is it secure? Is this helping us? Is this hurting us?
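For readers, a toy sketch of the kind of criticality ranking Jim describes. The weights, field names, and example packages here are invented purely for illustration; they are not the actual methodology of the Harvard/Linux Foundation census work.

```python
def criticality_score(pkg: dict) -> float:
    """Toy criticality heuristic: more exposure and more dependents raise
    the score, more active maintainers lower it (illustrative weights only)."""
    score = 0.0
    if pkg.get("network_facing"):
        score += 3.0
    if pkg.get("handles_crypto"):
        score += 3.0
    # How much of the world sits on top of this package, capped at 3 points.
    score += min(pkg.get("dependents", 0) / 1000, 3.0)
    # Fewer hands on the code means higher risk, so maintainers subtract.
    score -= min(pkg.get("active_maintainers", 0), 2) * 0.5
    return score

packages = [
    {"name": "openssl-like", "network_facing": True, "handles_crypto": True,
     "dependents": 50000, "active_maintainers": 2},
    {"name": "left-pad-like", "network_facing": False, "handles_crypto": False,
     "dependents": 20000, "active_maintainers": 1},
]
ranked = sorted(packages, key=criticality_score, reverse=True)
print([p["name"] for p in ranked])  # most critical first
```

The real challenge, as the talk notes, is not the scoring arithmetic but getting trustworthy inputs: dependency counts, deployment footprint, and maintainer activity.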
And what we've discovered in that long tail of projects like an OpenSSL or an OpenSSH or an ntpd is that there's this unique cast of characters that maintains this software. OpenSSL is sort of the perennial example, where the Linux Foundation post-Heartbleed, this is when everybody's privacy was essentially exposed on the internet due to a vulnerability in OpenSSL, raised, I raised a $6 million fund to go help these projects in 48 hours. People were freaking out, like, oh my God. And I remember talking to people and saying, OpenSSL is maintained by Steve Henson and Steve Marquess, these very dedicated individuals who have cryptography experience in a very unique space. Another way of saying it is that the internet is secured by two guys named Steve. And you just see policy makers go, oh my God, I knew open source was bad, and I always said it. Sadly, that's not an exaggeration. What I just said is true, and I've got more stories that, yeah, I won't share tonight. But what I think is interesting, and what I think is a call for the industry and for open source projects, is akin to something that Microsoft did. So now that Microsoft is a great member of the Linux Foundation and we love them so much, I'm now going around talking about everything I've learned from Microsoft, and this is actually something where the open source community can take some good lessons from Microsoft, and I will say there's a lot of impressive people there. Bill Gates writes this letter, I think it was 2003, 2004: stop all software production. We're not gonna release any more products. We have a real, meaningful security problem here. It is bad. Customers, I believe, were even saying, hey, we're not gonna buy anything more. It's that bad. You're just exposing all of our private data. It's just terrible.
And a gentleman named Steve Lipner, who's a wonderful guy, retired now, an application security specialist at the time, went and made every employee at Microsoft, at least on the technical side: we're gonna look at every line of code. We're gonna take application security classes, or maybe secure coding classes they called it at the time. We are gonna go through and make sure that we have a good way of doing this. We're gonna have patch Tuesdays. They literally researched what is the best day to release bug fixes, so that people aren't at the end of the month, where they're kind of sloughing off, or the beginning of the month, where they're super busy; that day is the day when these patches are gonna get applied. And it was really an effective thing, mainly because if you didn't do that, you were gonna be fired. People kind of wanted their jobs. So the question to all of us, and then I'll turn it over to the panel, is: in a world where in an open source project you cannot be fired, where you lead through influence, where even though all of the developers tend to be professionals in the good projects, and maybe in some of the long-tail projects, it's still not a thing where I can go to the OpenSSL people, or I guess I could say that to Linus, but probably not, even though I really am his boss and he doesn't listen to me ever, it's not that kind of world. And so the question is, what do we do about it? And at the foundation, we believe the answer is we have to create a culture of application security and of secure coding practice for all of these projects. And we've devised lots of different ways, and are still experimenting and still learning and sort of still stumbling along to do this. We created a thing called the Core Infrastructure Initiative Badging Program. People like to have badges, on LinkedIn or GitHub or GitLab or wherever.
And so this is something where, in order to get the badge, you have to show: do you have a security mailing list? Do you do responsible disclosure? Do you have, not just theater, but meaningful application security practices in your code? And I think now 2,000 projects have graduated from that. It's a requirement for Cloud Native Computing Foundation projects, both to get into the sandbox and to graduate. And we do it for all Linux Foundation projects, but we need to bring people with us by creating incentives and this culture. I think there are interesting things going on from software composition analysis vendors as well. I'll do a quick shout-out to a General Catalyst portfolio company, Tidelift, that's doing some things to try and improve some of the long-tail economics around certain open source projects. All of these things, I mean, one's an investment thesis, but in our case they're all these little experiments that we're doing to get us collectively to a state of solid application security practices for code. I think zero trust computing could be part of that, literally in the process of how these projects get made. I will never forget a terrifying call I got that kernel.org had been hacked and compromised. The gentleman who did that has actually gone to prison for it. We caught him, but it was disturbing. Thank God Git was structured in such a way that we could go back and check every patch that was ever committed, ever, and rebuild the entire data center from scratch. But just understanding the systems where the code is developed, who wrote it, why they wrote it, is it secure, and what projects are important, I think is a really important question. And for any of you who are super smart about technology and all this, those sound like simple questions, but please help me noodle on them.
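A note for readers on why the kernel.org recovery Jim describes was possible: Git's object model hash-chains history, so every commit ID depends on the full content of everything before it, and tampering anywhere invalidates all later IDs. A toy illustration of that property, using SHA-256 over a simplified "patch log" rather than Git's real object format:

```python
import hashlib

def commit_id(parent: str, patch: str) -> str:
    """Each entry's ID covers its parent's ID, so altering any earlier
    patch changes every ID after it (the property Git relies on)."""
    return hashlib.sha256(f"{parent}\n{patch}".encode()).hexdigest()

def build_log(patches):
    """Compute the chained IDs for a sequence of patches."""
    ids, parent = [], ""
    for patch in patches:
        parent = commit_id(parent, patch)
        ids.append(parent)
    return ids

def verify(patches, ids):
    """Recompute the chain and compare against the recorded IDs."""
    parent = ""
    for patch, cid in zip(patches, ids):
        parent = commit_id(parent, patch)
        if parent != cid:
            return False  # history was tampered with at or before this point
    return True

patches = ["add feature", "fix bug", "update docs"]
ids = build_log(patches)
print(verify(patches, ids))                      # True: untouched history
tampered = ["add feature", "add backdoor", "update docs"]
print(verify(tampered, ids))                     # False: tampering detected
```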
Because when you literally ask yourself what is the most important shared software, by vertical industry, package, version number, where are you gonna get that info? Who's gonna give it to you? Is it in production? Is it just stuff they bought? Where is it? It's actually really hard to get. And then when you start getting to provenance and who wrote it, I mean, we literally have tools doing essentially cyber archeology, trying to figure out who wrote this stuff, who owns what, when it got committed, did they change spacing, did they actually commit that line or not? So these are tough, intractable, difficult problems. I think they'll take years for us to figure out. But the Linux Foundation is committed to it. If you wanna get involved, we have our Core Infrastructure Initiative; all of our projects are engaged in this. In two weeks, and this is my shameless plug before I give up the mic, we're gonna have a meeting for all the open source CEOs, leaders of the industry, leaders of projects, down in Half Moon Bay, and we're gonna be announcing some new initiatives and tools to solve this very problem. So that's the challenge. I know HackerOne, who we're working with, and a whole bunch of others are on it. So I hope you all join in on trying to improve the security and privacy of all of us by improving the quality of software in these open source projects. Thank you. Thank you so much, Jim. Every time I talk to Jim, I learn so many new things about the history of the internet, the history of open source, the biggest problems we're dealing with. So I'm so glad that he came here and talked to us all today. I'm always inspired after talking to Jim, so I'll take a moment. Okay, I'm done. So with that, we'll start with the panel part of today's event. For the sake of the recording, I'll start by introducing myself and then the topic, and then we'll have some intros here and then get into it.
As I said before, ideally hold your questions till the end, but if you have a comment on something that's being discussed, you can raise your hand and suggest it. This is a collaborative thing. So I am Priyanka Sharma, Director of Technical Evangelism at GitLab and also a board member of the Cloud Native Computing Foundation. And today I'm hosting a panel on zero trust security here. Zero trust security, as we learned a little bit from Jim's talk as well as from Steve's, is about building a culture where security is a top concern, where every developer is thinking about it and there are protections in place at every layer as you go further in. So firewall after firewall after firewall is a simplistic way of saying it; don't freak out at me, guys. But the concept of zero trust security is what we're gonna unify on today with a very diverse panel of different security practitioners and vendors over here. So with that, let's get started. Andrew, would you wanna kick it off? Sure, so hi everybody. My name's Andrew. I'm the head of product and one of the co-founders of a small company called Scytale. We're about 24 engineers based out of San Francisco, and we focus on a particular part of, I guess, the zero trust problem, but a really important one, which is identity of software systems, or trust of software systems. How do I get all of the disparate parts of a software system or a distributed application to be able to actually talk to each other and trust that I'm talking to the right party? And we do that in a couple of ways. The first thing we've been working on is an open source project called SPIFFE, which is actually now part of the Cloud Native Computing Foundation. What SPIFFE allows you to do is to build your cloud native applications with best-in-class trust and application authentication and PKI built in. So it just becomes a function of the platform.
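For readers, the mechanic SPIFFE automates is mutual TLS: each workload presents its own certificate and verifies its peer's against a trust bundle. A minimal sketch using Python's standard `ssl` module, not SPIFFE's own API; the file names are placeholders, and in a real SPIFFE deployment the Workload API delivers short-lived certificates to each workload automatically.

```python
import ssl

# Server side: present our own certificate AND require the client to present
# one signed by a CA we trust -- mutual TLS, the core mechanic of zero trust
# service-to-service authentication.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED
# server_ctx.load_cert_chain("server.crt", "server.key")  # this workload's identity
# server_ctx.load_verify_locations("trust_bundle.pem")    # CAs we accept clients from

# Client side: verify the server, and present our own identity certificate too.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("client.crt", "client.key")
# In SPIFFE, instead of a hostname check you would match the peer's SPIFFE ID
# (a URI SAN such as spiffe://example.org/billing) against policy.

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)
```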
And then on top of that we have some commercial tooling as well. Once you've established trust across your cloud native application, you usually have to establish trust back to things that are a little less cloud native and be able to authenticate to those. And so we have tools to help with that as well. I love SPIFFE, I've given some talks about it. It's based on a Google project, right? It is, yes, that's the initial inspiration. Actually, the name SPIFFE comes from Joe Beda, who is part of the Kubernetes project. He had spent some time at Google. He started Kubernetes to take the ideas behind Borg, this internal Google system, and see what would happen if we created an accessible open source analog of it. And then he cast his mind around for other systems at Google that could get the same treatment. And he came across a tool called LOAS, which is this system at Google that runs on, I think, every machine running code at Google. And it's the foundation in many ways of Google security. And his big idea was to say, well, much as we did with Kubernetes and Borg, could we do the same thing with LOAS? And so that's really where SPIFFE came from. And now we've got a lot of other folks, not too far from this office actually, who've been contributing to it as well. So Uber have made a lot of contributions recently, Square are very involved with the project, Capital One and a few other folks. So it's taking Google's ideas and bringing them to the rest of us. Thank you. My name is Patrick Townsend. Townsend Security is the company. I'm founder and CEO. We specialize in protecting data at rest. So encrypting data, and doing the hardest part of that, which is protecting encryption keys. In today's world, doing encryption is really not hard. Almost all the basic languages have good crypto libraries that are readily available.
Protecting encryption keys is really the challenge. So that's where we come into play; we have products for that. Unless you've kind of been through this, it's hard to appreciate what Jim was saying. We're so dependent on open source. It's deeply in our products. We're dependent on it. We contribute to that community. We need that community desperately. The two Steves are people that we've worked with over the years; that was no exaggeration. We love them. They're way overworked. But we as a company typically come into play when there's a problem. When someone calls us, they're in trouble. Somehow they forgot to put a certain amount of security in their projects and now they're suffering from it, and we're gonna help. But it's so important to work security like encryption and key management into your whole DevOps process. And so we are GitLab customers. We appreciate that technology. We use a lot of these technologies, but security has to go right in there from the beginning, all the way down through. It is so hard to re-engineer these things. And I think there's a lot of convergence between zero trust and cloud native. If you're doing cloud native stuff, and we're right in the middle of this ourselves, you really have to adopt a zero trust model, because even if your project's going on premise, the architecture is cloud, and your customer wants to get to the cloud, or they think they want to get to the cloud anyway. So you have to adopt the zero trust model even if you don't think you'll initially be in the cloud. It's where people are going and it's what they want. So I'm totally in agreement with the notion of working security in at the beginning. It is so hard to patch that in at the end.
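For readers, the key-management pattern behind "protecting encryption keys" is commonly envelope encryption: each record gets its own data key, and that data key is itself encrypted ("wrapped") by a key-encryption key held only in an HSM or key manager. A toy sketch of the hierarchy; the keystream cipher here is deliberately simplistic and stands in for a real construction such as AES-GCM, which is what production systems use.

```python
import os, hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy keystream XOR cipher for illustration ONLY -- real systems use
    AES-GCM or similar via a vetted library, never a homemade cipher."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

# Envelope encryption: a fresh data key per record, itself wrapped by a
# key-encryption key (KEK) that in practice never leaves the HSM/key manager.
kek = os.urandom(32)                      # master key, held by the key manager
data_key = os.urandom(32)                 # per-record data-encryption key
ciphertext = toy_encrypt(data_key, b"cardholder data")
wrapped_key = toy_encrypt(kek, data_key)  # store wrapped key next to ciphertext

# To decrypt: ask the key manager to unwrap the data key, then decrypt.
recovered = toy_decrypt(toy_decrypt(kek, wrapped_key), ciphertext)
print(recovered.decode())
```

The point of the hierarchy is that rotating or revoking the KEK, or auditing its use, happens in one place, while bulk data never has to be re-encrypted wholesale.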
And I think actually we've seen companies, startups as well as mature companies, fail in their projects, maybe not the whole company, sometimes the whole company, but fail when they get down to the end and they didn't do the security piece right. And they try to close that big deal, and maybe it's a global bank, and suddenly they get a SIG questionnaire they've got to fill out with 80 security questions, and they can answer maybe two, okay. And then they're not in contention anymore; they're not able to compete. So it is important, and I'm just so glad that so many companies are focused on this and working it in. And we come at it from security. So we're learning our way into the DevOps area. We're obviously doing DevOps, but that's where our learning curve is, more in that spot. So anyway, thank you. It's hard to say anything after Jim spoke, but that's where we come from. So I have a question. The two Steves, let's go back to that. Do they not sleep? What's the deal? I don't think they sleep. Steve Henson and Steve Marquess, again, are kind of the core technical people around OpenSSL. OpenSSL is everywhere. It's hard to overstate the importance of that open source project. They helped us do our first FIPS 140-2 validation. There were components where we used OpenSSL, and they were there helping us, you know, step by step, to get that done. They're just in so many products. It's hard to overstate how important it is. I hope there's more infrastructure there today. When they got into trouble, I think people piled in, right? There were a lot of folks who started contributing to help with that. We did a big grant. We didn't talk about that part. I'm sorry. That's okay. When we engaged the first time, there were the two Steves, and we talked with Steve Marquess. We couldn't talk to Steve Henson because he never stopped working on the code. But we work with Steve Henson. That's right.
He might be a mythical creature, I don't know. He's kind of like a machine. Awesome. Thank you. Hi, I'm Cindy Blake. I'm with GitLab and I'm focused on security. I'm in product marketing, but more of a security evangelist, and pretty passionate about application security. From a zero trust standpoint, as containers and Kubernetes kind of detach the application so it can run on any cloud, one of our goals is to be cloud agnostic, so that you can run your applications anywhere that you choose. And as you do that, it changes the whole security landscape, because now there is no perimeter anymore that you're protecting. So that's why zero trust becomes so important: protecting the data through encryption, protecting the applications that process that data. And what we do is we secure the applications in two ways. One is we help you, as the customer of GitLab, use our scanning capabilities to scan your code. And the other way is we protect the integrity of the software development lifecycle, so that, as Jim was saying, you wanna know who made the change, when did they do it, what did they change, all of those audit capabilities of the lifecycle. And I've got a whole page on it out there under compliance. If you're interested, you can see more details about how GitLab can help with the compliance end of things. But on the application security testing side, if you think about it, AppSec's been around for a very long time, 15 years or so; that's a long time in the security landscape, maybe longer. And yet we only have probably 20% of the folks out there using application security testing, and we're still having lots of security hacks that focus on the applications. So you have to kinda question. I think it was Albert Einstein who said, if you keep doing the same thing over and over, don't expect different results, something like that. I need to look up that quote again.
But it's really important to think about, is there a better way to do application security? And so what GitLab does is we are helping really shift left. And in fact, I'd call it a seismic shift, because lots of people are trying to take security tools and apply them to developers. But what we wanna do is really enable a whole change in the workflow, a change in how the security people are involved. So empower the developers to find and fix whatever they can, and then let the security folks be the ones that help with the exceptions. So let them come in and help with what the developers need help with, rather than, you know, I've got 10,000 vulnerabilities and the security person is just wading through all of those and prioritizing them. Get them out of that minutia. Let them focus on the value-added pieces. And as your DevOps environment is very iterative, your security environment needs to be just as iterative. So that's an area we're very passionate about and focused on. Yeah, I work with Cindy and I've learned so much from her about security, zero trust security, security and cloud native. So I myself, I'm a DevOps person, right? I have spent a lot of time in observability and hang out around the CNCF folks a lot. But I'm new to security, and with everything I've learned, I'm like, man, so if someone hacks into our system and takes all the stuff, it's our fault, and there's no police around here. Oh my gosh, this is reality. And then I was like, this is why Mr. Robot is this popular. Now I get it. It's become my go-to show. But I've learned all of that from Cindy and Kathy, who leads security at GitLab. And I think that thought process adds a lot to the product development. Finally, Kevin, do you want to introduce yourself? Sure. So my name is Kevin Lewis. I'm a principal solution architect at Twistlock, and we're a cloud-native security company.
We started primarily as a container security company, securing things through the entire development life cycle, from the build to the deployment, as well as the runtime. But over time, we've expanded out to cover pretty much anything that you can provision in the cloud, whether that's virtual machines, containers, your clusters, or even your serverless functions. And what we're seeing, and you guys have made a lot of great points about zero trust, is that when you start building things for the cloud, you can't trust anything. You don't know where the applications are coming from, who's provisioning resources, and you have so many different touch points of how you can be compromised. There's not one ingress point anymore. You've got virtual machines that can be compromised, that have access to Docker networks. You've got serverless functions that could be exploited, that you may not have secured properly, that now can access resources. And so you have so many different touch points, and part of that is, how do we secure that? And some of that is shifting things more downstream. So you put security closer to the actual applications and the things that you're using, rather than traditionally, where you have things far upstream protecting the perimeter. And so really what we focus on is how do we make that possible? And we feel that you have to integrate first with the development life cycle, which means integrating directly into the build process, whether that's with GitLab, or Jenkins, or Circle, or whatever you guys happen to use, to build better images, build better applications, and let the developers know upfront, you have vulnerabilities, you have things that you can fix. So shift security left, tighten the feedback loop with them, so you're fixing things before they ever even make it out of the build process. So that's kind of step one, how do you build more secure images and more secure resources?
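A build-stage gate like the one Kevin describes, scan the image during the build and fail the job when findings cross a severity threshold, could be sketched roughly like this. The `Finding` shape and severity names here are illustrative assumptions, not Twistlock's or GitLab's actual API; a real scanner would pull findings from its own vulnerability feed.

```python
# Sketch of a build-stage security gate: fail the pipeline when a scan
# of the built image reports vulnerabilities at or above a severity
# threshold. All names here (Finding, the sample results) are
# illustrative stand-ins, not a real product's schema.
from dataclasses import dataclass
from typing import Optional

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

@dataclass
class Finding:
    package: str
    severity: str
    fix_version: Optional[str]  # None when no patch is available yet

def gate_build(findings, fail_at="high"):
    """Return (passed, blocking): blocking lists the findings at or
    above the fail_at severity -- the ones that should stop the build."""
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f.severity] >= threshold]
    return (len(blocking) == 0, blocking)

# Example scan result for a just-built image.
findings = [
    Finding("openssl", "critical", "1.1.1k"),
    Finding("bash", "low", None),
]
passed, blocking = gate_build(findings)
```

In a CI job this would run right after the image build and exit nonzero when `passed` is false, so the vulnerable image never leaves the build stage.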
Second part is monitoring over time, scanning things like registries and your serverless repositories and understanding: are there new vulnerabilities that affect me today that weren't there during the build? I don't wanna have to wait until I scan things in production to understand that I have new vulnerabilities. So let's take a continuous approach where we scan and understand vulnerabilities on a daily basis, and send alerts to the right people so they can be fixed and you can push out new versions. And then finally in the runtime, really understanding the intent of applications, which process, network, and file system activities we expect our resources and entities to exhibit, and then alerting and enforcing on any anomalies in those behaviors. So really having that end-to-end protection, and understanding how we can empower developers but also empower the security professionals to be able to interject policies in an automated fashion, so developers aren't left solely responsible for the security of the applications; you can have other parties involved as well. Yeah, when I was first looking at cloud computing and cloud native way back, I realized how many exposed edges there are and how no one knows what's going on there. And I was like, how did the cloud providers convince anyone to use what they're offering? This is crazy. As I said, there's no police, right? And I realized, and Cindy and I have talked about this, how the business value of shipping fast, of using microservices, of going cloud native is so high that people have been willing to make that trade-off. And what that means is that security becomes just that much more important. And so all of these different approaches and different aspects are so critical to making the world more and more secure as it moves cloud native. So I wanna ask a question around Zero Trust that may be different for each of you. I rather expect it to be different for each of you.
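The runtime "intent" modeling Kevin mentions, learn which processes a workload normally exhibits, then alert on anything outside that baseline, might look conceptually like this minimal sketch. The class and phase names are made up for illustration; a real product would also model network and file system behavior, as he says on the panel.

```python
# Sketch of an allowlist-style runtime model: record the processes a
# container exhibits during a baseline period, then flag anything new.
# Purely illustrative -- not any vendor's actual implementation.
class RuntimeModel:
    def __init__(self):
        self.allowed = set()
        self.learning = True

    def observe(self, process):
        """Record (while learning) or check (while enforcing) a process."""
        if self.learning:
            self.allowed.add(process)
            return None  # no alerts while baselining
        if process not in self.allowed:
            return f"anomaly: unexpected process '{process}'"
        return None

model = RuntimeModel()
for p in ["nginx", "nginx-worker"]:
    model.observe(p)           # baseline phase: learn normal behavior
model.learning = False         # switch to enforcement

alert = model.observe("curl")  # e.g. an attacker pulling down a toolkit
```

The point of the sketch is the shape of the idea: the model doesn't need to know the exploit, only that the post-exploit behavior deviates from the application's declared intent.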
So all of the people on the panel here are taking one angle or another. There's a lot of overlap, but different approaches: encryption, tokens, scanning, et cetera. What aspect of building the Zero Trust security mindset or culture do you see as most important and least addressed in your customers? You don't have to take names, but I'd love to hear some anecdotes around that. Maybe we start with you. Yeah, sure. Well, I think there's one area that really throws most customers off. That is going to the cloud. The cloud service providers are all messaging constantly about security. I mean, you read it, they're constantly talking about how secure their platforms are. And they are. I think we're partners with all the major platforms, and there are some really great security people there. They go about this far, okay, and then they're really secure. Now you take your stuff into the cloud, and you have to own that piece of it. And I think this really throws a lot of people, and it's hard to get your arms around, and partly it's the messaging that the cloud service providers are doing. Most of the time when we engage, and we're in AWS and Azure, and we have been for quite some time, what we see is that people are not owning the Zero Trust requirement for the solutions they're taking in there. So I know that Microsoft and Amazon are securing their data centers, they do SOC 2, SOC 3, they're serious about all that stuff, and they have good teams. But when you lay your stuff on top of that, you have to own that Zero Trust stuff, and that is throwing most people off, and I think is probably the thing that we see most frequently lacking. And especially in a world where everybody is trying to get to the cloud. Right. So it's like too much trust. They're starting with too much trust. They're trusting the platform too much. Yes, absolutely. And it's not the fault of the platform.
I'm not trying to throw shade on Amazon or Microsoft or Google or anybody else. It's just that when you build on top of that platform, that's your foundation, and they own that; you just have to own your part. Right. And you can't make assumptions about that, even down to whether you're using the platform correctly. We've seen breaches around Amazon's S3 service, for example, where if you don't use the service correctly, you can get yourself into trouble. It's your own fault. Yeah, absolutely. It sounds so harsh when I say it. I'm sorry. Your own fault. Got it, got it. What about you? Kind of going off of what Patrick said, there's a lot of effort that still has to be done. The cloud vendors make it sound like if you're using cloud, you're safe. Similarly, people tend to think, oh, I'm using containers or microservices, so I'm more safe because my things are more partitioned. And yes, there's a benefit to that, but there's a downside too, because now, for containers, you've got your image, your registry, your traffic within the container, and all of those things need to be secured as well. So they become an additional point of entry that wasn't there before. And so people need to look at the cloud native environment a little bit more holistically, I think. And, you know, we've had a hard time getting them to look at application security. Now we need to get them to look at not only the application itself, but the infrastructure in which it resides and operates. And so I think that's gonna be the challenge going forward. I hear that. What about you, Karen? Yeah, I think similar. I think there's two kinds of things. Number one, focusing on containers and serverless functions, we give a lot of the responsibility to the developers to secure their applications. And, you know, most developers don't necessarily have that security background.
So I'm building an application, I'm building an image, I might even build a database image that's supposed to have encryption, and I may not have any experience in that. And now I've deployed it into the cloud. It has customer data or business data in it that can be easily exploited. So I think giving a lot of trust to the developers to secure their own images is one thing that needs more focus. And then, going on what you guys have said, there's that ability for anybody to deploy anything in the cloud. I mean, so many people have access to the AWS accounts. I can deploy something in production, just provision an EC2 instance and leave it there, and do we even know that it's there? Right, and so now it has access to our VPCs, it has access to our data, and it might just sit there, not being updated, nobody even knowing it's there until it's too late. And so I think that, number one, the developers have the responsibility of security, and there are ways and tools we can use to help make that easier, but also understanding what's out there, what's being secured, and then having policies and controls in place to ensure that we know what's being deployed into the cloud. Yeah, just to carry on the theme that every other panelist has raised, there's this gap between, on the one hand, application developers who are pushing to build in the cloud and build on containers and build on functions and get that agility, get things to market, but don't necessarily spend too much time thinking about security. But then you have security teams, and of course it's their day job to think about security, but the toolbox they have, and even the mental models security teams have, are geared around things like perimeters and firewalls and IP addresses, and it becomes very difficult for organizations to realize the benefits of moving to these great new technologies if they apply many of the old security models.
There are so many cases where you can deploy an application in 20 seconds now, but it still takes you two weeks to get a firewall set up so that you can actually do it. So there's a gap between the application developers, who want agility and want to be able to push for that, and these old models of security, and there's not a lot sitting in the middle right now. There's not a toolbox, there aren't well-codified design practices and patterns that people can lean on, or even a language that people can lean on, in order to solve for that. You know, I think part of that comes down to automation. You said having the developers be responsible for security when they don't really understand security, they weren't trained in it necessarily. But if you can automate and find vulnerabilities, for instance, if there's a vulnerable code library that you're using and the tool can find that there's a more current one, apply that as a patch and see if it works. You can run the pipeline, and if it works, great, then the answer to your developer is, we found a vulnerability and it's fixed, instead of, hey, here's a vulnerability and you need to figure out what to do with it. Yeah, I would just affirm what everybody else has said. You know, at our company, we get up thinking about security every day. That's where we live. The startups that we work with, the companies that we work with, they're trying to bring a solution to market. That might be a financial system or HR; they're trying to compete in their world, their focus is there, they're trying to get a solution out and make their company successful. They're not thinking about security, and so I'm with you: the more that gets embedded in our DevOps process, in the core modules, the better off we are.
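The auto-remediation flow Cindy describes, find a vulnerable library, try the newer version, rerun the pipeline, and only then tell the developer "found and fixed", can be sketched like this. The advisory table and the `run_pipeline` callback are stand-ins for a real vulnerability feed and CI run, not any actual GitLab API.

```python
# Sketch of the "found and fixed" remediation loop: for each dependency
# with a known advisory, try the patched version and report success only
# if the pipeline still passes with it. Advisory data is illustrative.
ADVISORIES = {  # package -> (fixed_in_version,)
    "requests": "2.20.0",
}

def parse_version(v):
    return tuple(int(x) for x in v.split("."))

def remediate(deps, run_pipeline):
    """deps: {package: version}. run_pipeline: callable taking a dep set
    and returning True if the build/test pipeline passes with it."""
    results = []
    for pkg, ver in deps.items():
        patched = ADVISORIES.get(pkg)
        if patched is None or parse_version(ver) >= parse_version(patched):
            continue  # no advisory, or already at/above the fixed version
        candidate = dict(deps, **{pkg: patched})
        if run_pipeline(candidate):
            results.append(f"{pkg}: vulnerability found and fixed ({ver} -> {patched})")
        else:
            results.append(f"{pkg}: vulnerable ({ver}); patch {patched} broke the pipeline")
    return results

# A stand-in pipeline that "passes" for any dependency set.
out = remediate({"requests": "2.19.1"}, run_pipeline=lambda deps: True)
```

The key design point from the panel is the last branch: the developer gets a verified fix where possible, and a concrete actionable finding when the patch breaks the build, rather than a raw list of 10,000 vulnerabilities.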
So it would be great if we could enforce those different components of security through DevOps, so that something comes out the other end much more secure and you don't have to think about it. Best case. Yeah, we all have too much to do, way too much to do. And I think that's a real opportunity as well for leveraging automation. Your security professionals can implement policies so that as you move through the lifecycle, whether it's in the build or even at deployment time, you have policies built by people who understand what they're trying to accomplish, and then let your DevOps people or your developers deploy applications through those automated pipelines with security built in. They're still getting feedback on the reason why something wasn't deployed; maybe you had a critical vulnerability that has a patch available, so let's block the deployment. So they get the feedback, but they aren't necessarily responsible for defining those policies, and that's where you can really leverage automation, in addition to everything we get from scalability and the force multiplier effect. So I wanna ask my last question and then I'll open it up to the audience. You're all providing some help around security, but you're also running your own systems. So you're consumers of security products, of thinking through the zero trust security model and all of that. And I would imagine you're much more sophisticated than the average company out there. As folks think about security, there are so many aspects, right? The diversity of what you all offer is an example of that. What kind of framework would you suggest people use when they're thinking through, okay, this is a good way to get a lot of coverage and feel secure, based on your own experience and your own products?
It's funny, when we talk about security frameworks, there's a whole language and ecosystem there that I think I'm probably the least qualified person on the panel to talk about, so I won't. But there's a lot of thinking around practices and policies you can apply. In startups, you get exposed to that one way or the other when you start to talk to real customers, and all of a sudden you get that 80-page security checklist, and no matter how prepared you are, it's usually a little bit eye-opening in some ways. We as a company went through that recently, and there were eye-opening questions and gotchas, but there were also cases where we were looking through those questions and breathing a sigh of relief, and it wasn't because we had thought of the question specifically, it was because, to Kevin and Cindy's point, we had been able to put in place some automation such that it almost became a non-issue; the tooling would answer the question for us. Which is a somewhat vague answer, but the more you can lean on automation, the more a lot of those security questionnaires or NIST questionnaires or FedRAMP questionnaires suddenly become, hopefully, a bit more tractable. So that means: look at a security questionnaire that's part of an RFP or something, and then automate to answer that checklist even though you might not be participating in an RFP. It's a good framework to use. It definitely is. I mean, I'm just trying to problem solve here. Got it, got it. Okay, what about you, oh, sorry, Patrick, you're... Oh, okay. Well, what Andrew said is right to the point, the automation is critically important. We do have some good frameworks. The Center for Internet Security provides a framework. The National Institute of Standards and Technology, NIST, provides frameworks.
The SANS Institute does a lot of work in this area as well. We live in the key management area, so it's a very narrow part of security, but NIST has provided an actual published framework for key management systems. People can use that to evaluate potential acquisitions. But the problem is, all of that was built without DevOps in mind. That's exactly what I was going to ask you. Is it kind of new-friendly? It's not, yeah. So there are all these ideas about how you stitch all that into a DevOps process, and the automation, and what Cindy was saying about really making this part of your standard practice and automating it so you don't have to think about it. We're just not there yet. It's so early days; it needs to happen. Yeah, I mean, OWASP is taking baby steps towards the whole DevSecOps thing, but it is very early, although I'm sure we'll hear lots of DevSecOps next week at RSA. Maybe that'll be the second highest term behind machine learning. It's there, yeah. Yeah, a shot every time they say it. Right, right. Very interesting. Yeah, drinking game. You know, I think there are some exciting things coming out. Our CISO, Kathy Wang, is really a thought leader around Zero Trust, and she's working not only on Zero Trust for GitLab itself, but with the analyst community as well, and we're excited that there's, I don't know if I can say who it is, but they're getting ready to do a really cool reference architecture around Zero Trust. It's both an open source version, so what could I put together for Zero Trust if I had no money to spend and it's all free, and then a paid version, and how would that look and what would it cost me and what would it give me? Super useful, I imagine. What do you guys think? Will it be useful? Yeah. Okay, this is good news. Yeah, I think the frameworks that you can leverage really depend on what you're trying to secure.
When you look at securing a container versus securing a virtual machine, or even setting up policies in your pipeline, there are different things you can use. The CIS, the Center for Internet Security, is fantastic. They have their Docker, their Kubernetes, their Linux benchmarks that you can use. One of the things that we actually do to make that easier on you is we have all those modeled out of the box, but we also rank them: we have an in-house research team that ranks them just like we do vulnerabilities, so you can focus on the critical and high aspects and the things that really matter, whereas maybe the things ranked as medium and low are nice to have and you should get to them. It really helps you focus if you're new to this environment or you don't have a huge team: these are the things you really should be doing. But to get back to the question, it really depends on what you're trying to secure. Definitely start with the CIS benchmarks, and then understand how to integrate security into each phase of the pipeline, whether that's build, deployment, or runtime. Having something in each one of those three phases is a good high level: okay, let's find something to secure our running stuff, let's find something to ensure we're monitoring things over time, and then primarily let's build better applications out of the build. With that, I'd like to open it up to the audience. Anyone have a question? Okay, there you go, yes. I'll repeat the question once you've asked it for the benefit of the recording. Sure, thank you for your contribution. I really appreciate this forum, this panel. My application actually spans both cloud and hardware platforms.
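The severity ranking Kevin describes for benchmark checks, surface the critical and high items first so a small team knows where to start, reduces to something like this. The check names and severities below are invented for illustration; they are not actual CIS benchmark items.

```python
# Sketch: partition benchmark results by severity so the must-do items
# (critical/high) are separated from the nice-to-haves (medium/low).
# Check names and ranks here are illustrative, not real CIS content.
CHECKS = [
    ("Do not run containers as privileged",         "critical"),
    ("Ensure Docker daemon auditing is enabled",    "high"),
    ("Restrict network traffic between containers", "medium"),
    ("Add a HEALTHCHECK instruction to the image",  "low"),
]

def prioritize(checks, focus=("critical", "high")):
    """Split checks into (must_do, nice_to_have) based on severity."""
    must_do = [name for name, sev in checks if sev in focus]
    nice_to_have = [name for name, sev in checks if sev not in focus]
    return must_do, nice_to_have

must_do, nice_to_have = prioritize(CHECKS)
```

Trivial as code, but it captures the workflow point: a team new to the environment works the `must_do` list down to zero before worrying about the rest.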
I was curious whether you have any experience or approaches to mitigating insecure processors and firmware, and whether you see some opportunities to automate pipelines for validating these, Purism reports theirs as open sourced, there are chips in that design pipeline. I was curious to know whether anyone's applied those sorts of approaches there. Do I have to repeat this question? I guess you do. Okay, who wants to take that? Well, we intersect with that. I mean, we have a hardware appliance, and so we follow that. We understand the Intel SGX challenge and other challenges that have to do with firmware as well as hardware. So these issues come up, and we track them very carefully. You've touched on a brand new area. I mean, getting DevOps to integrate at the hardware level is really a challenge, and maybe you guys have done this, but it's an issue, yeah. Yeah, we're not looking at that per se. I can tell you a lot of that's being handled at the firmware level. HP, for instance, has a new method that'll track whether the firmware itself has changed and whether it's fingerprinted back to its original. So I tend to think of that as more of a hardware piece, but maybe we should talk. Maybe there's a gap there that's not being filled. It feels like there's a set of disjoint technologies right now: there are things like secure enclaves and measured boot, and then software supply chains and signed binaries, and then there are things like TLS. So we have all of these different ways of verifying what goes into and out of our systems and whether or not those systems have been tampered with, but what we don't seem to have right now is a way of reconciling or unifying all of that. It's not something we're looking at either, but it's something that people are using the SPIFFE project to try and solve for. Someone look at this. Someone else look at this.
Well actually, I had a conversation with someone recently about zero trust, and I asked them what zero trust meant to them. This was a large financial institution, and they took a very traditional view of security, which was: if it's not behind a locked cage, and I know exactly the names of the people who own the keys to that cage, and I've vetted them, then I don't trust it. That was one trust model. And this person pointed to, this was in a conference room and there was a server at the back of the room. It was unsecured, it wasn't doing anything useful, and they pointed to that server and said to me: zero trust is being able to run the same things that I have in my locked data center, where I have to go through biometric screening and all the rest of it, jails and cages, to get in. That I can run something on that server with the same degree of integrity and safety that I could run something in my data center. And their point was that, again, all the technology is there to do that if we want to, but it's fractured; there are different pieces of it. It is things like secure measured boot. It is things like TPMs, and some of these things work better than others, frankly. All of these things of course have their own threat models; SGX is a great example. But they can be unified, they can be scored. There's got to be a way to do it, but no one really has yet. Someone, yeah, someone should. Awesome. Any other questions? Speak loudly so I don't have to repeat. Sorry. There's a notion out there of a zero day threat or a zero day vulnerability, something that is unknown in the wild. From a security practitioner's standpoint, how often are people really worried about those unknown kinds of things, or are they focused on the basic blocking and tackling of securing your application? And how many attacks out there are based on these zero day threats versus just baseline types of security protection?
All the big ones are zero days. Yeah. What a zero day means is it hasn't been documented, and so people don't know how to patch it. There's not a patch available for it yet. Lots of times those are discovered and they may be disclosed ethically, and that goes back to the vendors for them to create a patch, but sometimes it can be months and months before a patch is released. So your defense in depth is really important, and continuous security, scanning everything all the time, is important, and patch management. You wanna make sure that when those patches come out, you apply them. There was a real big attack, I forget which one it was, where there had been a patch out for some time and they just didn't apply it. Well, that happens all the time too. Yeah. That was Equifax, I think. No, go ahead. I was gonna say, defending against the zero days, we call that an unknown attack. So there are the known attacks, the vulnerabilities, things that are publicly disclosed, where I know whether or not Ubuntu has a patch for it, and those are things that we can go ahead and fix. When we start talking about the zero days, in that case, having policies and things in place that can help prevent those things from happening in the first place is something that we focus on. So if we take the container breakout, the runC vulnerability that's been going around lately for containers: we can't necessarily stop that exploit from happening, somebody taking advantage of it and running that image, if you don't have security in place. But you can put policies in place that say, I'm gonna whitelist the images and resources I'm gonna allow to run, I'm gonna prevent containers from sharing the host namespace, I'm gonna prevent root and privileged containers from running. So there are things you can put into place to help mitigate some of the zero day things.
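The deploy-time policies Kevin lists, an image allowlist, no host-namespace sharing, no privileged or root containers, could be sketched as a simple admission check like this. The policy shape and field names are illustrative assumptions, not any specific product's or Kubernetes' actual schema.

```python
# Sketch of deploy-time policies that mitigate zero days like the runC
# container breakout: allowlist images, block host-namespace sharing,
# and block privileged/root containers. Field names are illustrative.
ALLOWED_IMAGES = {
    "registry.example.com/app:1.4",   # hypothetical internal registry
    "registry.example.com/web:2.0",
}

def check_deployment(spec):
    """spec: dict describing a container to deploy.
    Returns a list of policy violations; empty means admit."""
    violations = []
    if spec["image"] not in ALLOWED_IMAGES:
        violations.append("image not on allowlist")
    if spec.get("host_pid") or spec.get("host_network"):
        violations.append("sharing host namespaces is not allowed")
    if spec.get("privileged"):
        violations.append("privileged containers are not allowed")
    if spec.get("user") == "root":
        violations.append("running as root is not allowed")
    return violations

bad = check_deployment(
    {"image": "docker.io/evil:latest", "privileged": True, "user": "root"}
)
```

None of these rules knows about the specific exploit; they narrow what an unknown exploit can do once it lands, which is the point Kevin makes about defending against the unknown attacks.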
Also, one of the things we do is our whitelist security modeling, where we know which processes should be running inside the container, which networking activity or file system activity to expect. Now, the result of the exploit is gonna be something: I'm gonna download an attack kit, I'm going to contact some other server to do whatever. So there's gonna be fallout from the actual exploit. Detecting and preventing that fallout is a way you can protect against the zero days, whereas for a known exploit, hopefully there's a patch against it. So there are different ways to protect against them. And I would just add to what Cindy said, the DevOps environment is critically important. We were affected by Heartbleed. We've got a product, it's got OpenSSL in it. Heartbleed happens, Amazon's on the phone: we know that you're running an exposed version of OpenSSL. How do they know that? Well okay, they know that, they know everything. But your ability to react depends on the automation and the DevOps platform that you have. So if you can identify that and you can get a patch, can you move it through your whole process? He's a GitLab customer. Yeah. It's true. But it's critically important that you have those kinds of things in place, that you're doing those kinds of things, to even address the zero day, yeah. Cool. Oh sorry, Raven, is there something? Oh, well, nothing important, but just a small observation. You know, another indirect way of helping to solve that problem is that you're seeing more and more particularly critical security functionality being moved out of applications and into the infrastructure, becoming the province of the infrastructure. Service mesh is probably a great example of the design pattern, right, where things like OpenSSL now aren't typically run as part of my deployed application binary that an application developer needs to reason about.
It's still a binary, but it's now one that's run by my infrastructure team. And so I am at least now in a position to have a single owner of that, who can hopefully be responsible for it, and when I find a vulnerability like Heartbleed, I can address it reasonably easily and systematically, rather than having to go to 100 different application development teams and beg and convince them to update the patch and rebuild. This is of course assuming there is a patch, so it's a bit more than zero day, but at least it gets you to a point where, when you do have a fix for something, you have an option to apply it. That's a great point. I think there's a trend now to embed security invisibly into the product. Okay, take it out of the hands of the end user, the end customer, who may not even have an IT team in house, or who has trouble getting cycles from their security team, and embed it in the application. So I think our partners who've embedded our technology have benefited from doing that, because they can address it. It's just immediately there, and they don't have the complexity. It simplifies things in many respects, and that just makes it faster to address too. That's a big parallel to the observability world, right? Where a lot of folks are putting the observability stuff into the service mesh, or whatever they use to generate the base framework of each microservice, just so that there's something out of the box people have, and it makes it a lot easier. Well, anyway, with that, we're 10 minutes to eight o'clock, so I'd like to get a round of applause for the panel. Thank you. Thank you. Thank you so much for joining us today. I learned a lot. I think the audience did too. I'd like to just thank all of you for coming. This event was brought to you by General Catalyst and GitLab, the first single application for the entire DevSecOps life cycle. Thank you so much for coming.
Yeah, let's get out of here.