So, playing defense: the reactive cloud native security battle. There are a lot of war analogies in that title: the two sides, defense versus offense; the reactive posture versus the proactive posture; the battlefield, security. And it's all being presented by someone who crunches numbers all the time, who calls herself a data scientist. So expect to see a lot of trends as they pertain to the cloud native security landscape. I'll be talking about qualitative and quantitative research results: our surveys, the data that we have been collecting and analyzing at root.io. But let's start the conversation with something else. It might sound like a diversion, but it's not, so bear with me for a moment. On Monday night, I came here and had dinner with old friends from MIT, from college. All of them are French; some of them live in Paris. And naturally, there was a lot of conversation around food. The pride that French people have in their food is unmissable, unshakable, and rightly so. And we talked about this iconic French pastry, the croissant, and what makes it perfect. Apparently, 85% of the croissants sold in Paris are not made fresh but frozen, which disqualifies that 85% from the perfect category. A lot of places sell croissants, but only if you're making them fresh from scratch can you call yourself a boulangerie; there needs to be a master pastry chef involved. So there's a lot of regulation around this. And again, this is from my French friends; I don't think I fully get the differentiation. But they talk about how, when you break the croissant, there's a certain texture to it, the honeycomb texture. They talk about the look: the look has to be imperfect. If you're looking through a window and seeing all these shiny and perfect pastries, you might not be getting your daily dose of perfect croissant that day. And it needs to taste buttery, it needs to smell fresh, and everything else, when it's made right. And it's not just that, either.
And there's more. I learned all of this on Monday night, and don't quote me on it, because a lot of this is just hearsay; but they live in Paris, so they should know. Every year, there's a competition run by the national baking association, and they select the best croissant. And every two years, there's a presidential medal involved: they pick the best craftsperson in the field, the Meilleur Ouvrier de France. When you win that award, you get to keep it right next to your name forever, for a lifetime. You wear it as a badge of honor, and you want to be known for the quality of your craft. So it is not just safe to eat; it is top quality. If only software was like that. This person here, Mr. Joël Defives, got that award, and this excellent bakery, the elite bakery that he has, is amazing. But why do you trust this bakery? Or why do you trust food in general, if you think about it? Not just in France, but at my local grocery store in Boston, I have a sense of confidence when I walk in. The system, somehow, built that trust in me, because there's a lot of transparency and governance. It's not that it is error-free; in fact, there's an inherent level of risk acceptance. But I know that everything in that entire upstream, from the flowers, to the bees, to the soap maker, to the dishwasher producer, is being tested. And when there is an issue, an infection, an outbreak, we know how to contain it really well. I come from a background in physical supply chains, and I know how remarkably good the visibility in those supply chains is. It's also a very resilient system: when there's an issue with a third party, say a farm being found at fault, there's a quick switchover to the next one. So there's a swift recovery built into that system. Imagine if software was like this. A lot of the software developers I know are like this excellent baker. They take a lot of pride in their produce, their craft. They want it to be top quality.
They want it secure and safe. But compare these two industries side by side, and I picked some numbers here: both the food industry and the digital economy are huge contributors to GDP year over year. And there's that inherent sense of trust we talked about in the food industry; not that it is error-free, but it is mostly considered safe. According to the World Economic Forum's Our Shared Digital Future report, only 45% of the world's population trusts the technology that powers our digital economy. Only 45% feel that technology will improve their lives. So the majority, 55%, has a certain distaste for, or distrust of, the system. But I think the tides are shifting. There's not just a sense of awareness; we've been talking about that for a while now. I can see it in the data that my company, my data science team, collects. We survey executives year over year, we see it in that data, and I'm going to share that with you too. And I also think that the guidance is going in the right direction. I'm sure a lot of you are aware of the new NIST Cybersecurity Framework; I see people nodding. It came out February 26th, and I think it's a big deal. For the first time, there's a huge emphasis on software supply chain security. Roles and responsibilities have to be defined well for suppliers, vendors, consumers, partners, the entire stakeholder spectrum. They talk about how suppliers have to be known and prioritized based on their criticality. They talk about planning and due diligence when you're getting into a relationship with a third party, which you would think would be expected, but they are underscoring the importance of it. And they talk about how this cannot be a one-time thing: it needs to happen over and over again, in a continuous cycle, so that relationship has to be very tight. They're actually saying that if an incident happens, it's a team effort.
Your suppliers and you work together to remediate that incident. Again, I find it remarkable; I think it's a big step in the right direction. But you might ask: what is the response in the industry? That's what I would like to share today, with the data that we have been collecting. For the last three years, my team and I have been running and publishing these reports on cloud-native container security. Year over year, we publish results on how things are trending, with the raw data. And every year, we also run a survey with top cybersecurity executives. This year, we partnered with Enterprise Strategy Group, a very well-known research group, and we surveyed executives, specifically cybersecurity, IT, and infosec executives, in medium to large companies. I have a slide at the end that gives you the demographics, but it's North America and Europe, medium to large companies, and top-level executives in the technology departments. It's a variety of sectors, both well-regulated and not; about 30% of these companies are in technology, but there's a variety of different industries. One of the first things I would like to share is that all these executives say it takes ample capital, human resources, and time to battle the vulnerabilities flowing their way. The first bar chart there says 65% of the organizations employ six or more full-time specialists just for vulnerability remediation. My background is at the intersection of AI and cybersecurity, and we talk a lot about automation and AI, but this field is ripe for innovation, I will say. Nobody is thinking about decreasing their spend: 91% said they are going to increase it. And when we asked about the time they're spending, about 40% said they spend a week or more per month just to prioritize vulnerabilities. I'm going to repeat that.
Forty percent are spending a week or more, per month, just to prioritize vulnerabilities. And that's just the beginning of the task, if you think about it. There's a lot more to be done. Once you prioritize, there are probably still a lot of negotiations with the third parties you're working with, then deciding, patching, making sure those patches go into production and everything works properly. It's just the beginning. But despite all of this resource allocation, it still seems to be an uphill battle to achieve these SLOs. This was one of the most eye-opening stats in the survey; when I saw it, I couldn't believe it. Only 12% of the organizations said that, despite all the effort I'm talking about, the specialized teams, the time, the increased spend, they feel they have been successful in achieving their desired service level objectives. 38% said there are moments of success, but there's a lot of room for improvement. And 50% said: we are greatly struggling; we are nowhere close to successful. So there's that constant feeling of being behind the curve. You're always reactive. In fact, 40% of the executives said their security departments use a reactive approach when managing vulnerabilities. Another 20% said it's a hybrid approach: they're trying to be proactive, but there are a lot of reactive moments as well, as expected. Now, this is not contained in any one isolated organization; there is a very intricate web of players. (Is the sound quality still OK? I'm good? OK, awesome.) We actually wanted to quantify that intricate web of relations between these different parties. So we asked: are you a software vendor, a producer? Are you a consumer? Are you both? Or are you working in isolation? Nobody said none. Everybody identifies as a producer or a consumer, as you can see, and a lot of companies define themselves as both a software producer and a consumer.
So we can say that we are building together. There is a constant exchange of software in the software supply chain across almost all companies. Let's quantify this a little further. We asked: how many software vendors are you working with in a typical month? 73% of the organizations said they are working with more than 10 software producers, and there's that 13% working with more than 50. I can't imagine getting containers, packages, libraries, all this software, from 50 different vendors and trying to integrate it into your own workflows. We also specifically asked how many containers they're deploying: 68% said more than 50 containers. So all these new releases, patches, and updates to these containers must be managed really well, you would think. And we asked: what is the pressure like of managing this web of different actors, the suppliers, your partners? 63% said getting containers from third-party producers makes it extremely difficult to triage vulnerabilities. They also said, which is obviously natural, that there's an inherited attack surface: 67% said having too many container images from third parties is increasing their overall attack surface. I don't know; I think the other 33% may simply not be aware that it is increasing their attack surface. Everybody is pressuring their software producers, saying please improve your SLOs so that we can keep track of everything. Again, the pressure is real. And you would think, with all of this complexity, you are trying to manage your own business value proposition, your own workflows, and then you're dealing with all these different players: how do you communicate? This is not going to be a surprise; a lot of you might even relate to it. There's a staggering reliance on inefficient communication practices between these pairs. And again, it is not just one consumer working with one vendor; sometimes you're working with 50 different producers.
And 75% of the respondents say they're managing vulnerabilities in these applications and packages being shipped to them through spreadsheets. 63% said ad hoc meetings. 70% said emails, lots of emails. Our most sensitive vulnerability information might as well be broadcast. And I don't even want to touch the security aspect here; just think about the friction, the inefficiency, of this communication. The AI age versus this. Naturally, when we asked, there was an almost unanimous consensus that there needs to be a centralized location, and that learnings should be transferable. I'm seeing this CVE from one vendor; we are resolving it; sometimes there are compensating controls; and I need to make sure that I transfer that learning to this other vendor. And sometimes, in that centralized location, somebody else before you has already deployed that software and decided that it is not problematic, so maybe you want to move on, as opposed to spending so much time on that piece of code. But let's take a step back and talk about the day-to-day struggles again. We asked about the evolving guidance. Executives said one of the top challenges in their organization's space is this evolving compliance and regulatory guidance. I just talked about NIST; I said it's a great step in the right direction, but at least at the beginning, it's going to create a ton of work. Already, 85% said they need to do extra work to be compliant with the new executive order, and almost everybody in the States says they are affected by it; a lot of European companies are in that batch as well. And what about the day-to-day of this vulnerability management work, the hustle and the fatigue of it? I double-checked this next stat, because I wasn't expecting to see it. We asked about vulnerabilities in production systems.
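The "centralized location with transferable learnings" idea could be sketched as a shared triage store keyed by CVE, so a decision made against one vendor's software is visible when the same CVE arrives from another vendor. This is a minimal illustration of the pattern only; the class, field names, and CVE/vendor identifiers are all hypothetical, not an actual product or API:

```python
from dataclasses import dataclass


@dataclass
class TriageDecision:
    """One hypothetical record of how a CVE was resolved for one vendor's artifact."""
    cve_id: str
    vendor: str
    status: str   # e.g. "fixed", "not_affected", "compensating_control"
    note: str = ""


class SharedTriageStore:
    """Sketch of a centralized triage location shared across vendor relationships."""

    def __init__(self) -> None:
        # All decisions ever recorded, grouped by CVE identifier.
        self._by_cve: dict[str, list[TriageDecision]] = {}

    def record(self, decision: TriageDecision) -> None:
        self._by_cve.setdefault(decision.cve_id, []).append(decision)

    def prior_learnings(self, cve_id: str, vendor: str) -> list[TriageDecision]:
        """Decisions already made for this CVE in other vendors' software."""
        return [d for d in self._by_cve.get(cve_id, []) if d.vendor != vendor]


# Usage: a CVE triaged against vendor-a's package informs triage for vendor-b.
store = SharedTriageStore()
store.record(TriageDecision("CVE-2024-0001", "vendor-a", "not_affected",
                            "mitigated by a network-policy compensating control"))
learnings = store.prior_learnings("CVE-2024-0001", "vendor-b")
```

The point of the sketch is the lookup in `prior_learnings`: instead of re-triaging the same CVE in a fresh spreadsheet per vendor relationship, the earlier decision and its compensating-control note travel with the CVE.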
36% of the respondents said they detect critical vulnerabilities in production, requiring immediate attention, every day. So every day you come in, and in your production system there's a new critical vulnerability alert. You cannot ignore it. It's potentially real, but there is a significant likelihood that it's a false positive. In fact, we asked about that too, and I have seen this question asked multiple times in my career: how many false positives do you deal with on a regular basis? In this report, 68% of the organizations said four in ten alerts are false positives. That's actually the lowest I have heard in my career; I've usually heard 60-plus percent. But even so: you're seeing a vulnerability in your production every day, and four out of ten times it's a false positive. No wonder there's that feeling of alerts and more alerts; 74% of the organizations find it very challenging. I'm not going to spend more time on any one of those details, but I would like you to understand the storyline. What is the real cost? So what? All these vulnerabilities, are they even real? Do they even affect something? Is there a real cost? When we asked the executives, I was expecting to see a lot of concern around data breaches, non-compliance, data loss, or maybe ransomware. But the resounding answer was no: the real problem is the hit on performance, team dynamics, and innovation, the opportunity cost of my team doing something more meaningful, more fun, versus being inundated with these false positives. I mentioned that a lot of effort is being put in, and I can quantify that in the data I'm looking at year over year. With the container security, cloud native security reports that we publish every year, I can see what the picture was like in 2021 versus today.
And what I can say is that in this data set, looking at top publicly available containers, 53% of all the packages in our database saw at least one update, up from, I believe, 40%. So a lot of package maintainers are updating those packages. And the average number of version updates from 2022 to 2023 increased from four to six, so a lot of new versions are being created; every time there's an issue, the package maintainers come in and patch. Also, across these top publicly available containers, the average number of days between releases is about 10. So those maintainers come in and create a new release almost three times a month. One more statistic, and I know there's a lot of detail here: last year, we saw about 2,400 new packages added to the set of containers we were tracking year over year. This year, it was only about 1,000. So maintainers are trying to bring in fewer new packages and maintain the existing ones. There's a lot of awareness around staying secure rather than bringing in all the new, shiny stuff. But what is the result of all of this robust upstream effort? It's still being outpaced by the influx of new vulnerabilities. And I know that's old news; year over year, we have been seeing this. I should say that this year, the maintainers of those containers removed three times more vulnerabilities than last year. So you're coming in, removing three times more vulnerabilities, but at the end of the day, there are a lot more vulnerabilities than before. If somebody just looked at that top-level number, it would look as if you had done nothing, or even done worse, even though you increased your effort threefold. So what is the bottom line here? What is the core problem? I think it boils down to a lack of trust, a collaboration issue.
If the suppliers in the system, the benevolent actors in the software supply chain, do not proactively propagate their security findings downstream, down the supply chain, I don't think there is a way out. It's not going to work. Consumers cannot come in and constantly verify and remove vulnerabilities from your software for you. I would like to eat my croissant and feel good about it; I would like to be in that peaceful mindset when I do anything with your software. Software consumers, at the end of the day, are buying software because it is most probably non-core to their business value proposition. They would like to just get it into their systems and focus on their value proposition, without thinking about issues upstream. So if these actors do not transparently share the issues, that asymmetric information will result in delayed software deployments. I mean, the absurdity of how consumers and producers communicate is, I think, unbelievable. It's arcane, primitive, non-optimal, human-centric, and very error-prone. Anybody who is publishing software needs to embrace a new pattern. And part of this is on consumers' shoulders as well: consumers, I think, need to learn to be more flexible, to maybe have an optimum risk acceptance level. Maybe it's not zero vulnerabilities; it's something else. And I'm hopeful, because there are all these things happening, like VEX, for example: a producer coming in and saying, you know what, I have investigated this issue in my environment; if you have these compensating controls, it's not going to impact you. Or: this is a non-issue for my customers; 10 of my customers accepted this, hence you can too, so I can accept it within my tolerance level. In general, I do think that we know what to do, but our tools, systems, and practices are very primitive.
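The VEX idea described above can be sketched as a consumer applying producer-issued exploitability verdicts to a scanner's findings. The status values below ("not_affected", "fixed") and the justification string loosely follow the OpenVEX vocabulary, but the dictionary layout is a simplified assumption, not the real document format, and the CVE and image names are made up:

```python
# Statuses under which a producer's verdict lets the consumer skip the finding.
SUPPRESSIBLE = {"not_affected", "fixed"}


def actionable_findings(findings: list[dict], vex_statements: list[dict]) -> list[dict]:
    """Drop scan findings that a VEX-style statement marks as not actionable."""
    # Index producer verdicts by (CVE, product) pair.
    verdicts = {(s["cve"], s["product"]): s["status"] for s in vex_statements}
    return [f for f in findings
            if verdicts.get((f["cve"], f["product"])) not in SUPPRESSIBLE]


# A consumer's raw scan result: two criticals against the same base image.
scan = [
    {"cve": "CVE-2024-1111", "product": "acme/base-image:1.2"},
    {"cve": "CVE-2024-2222", "product": "acme/base-image:1.2"},
]

# The producer investigated one of them: the vulnerable code never executes.
vex = [
    {"cve": "CVE-2024-1111", "product": "acme/base-image:1.2",
     "status": "not_affected",
     "justification": "vulnerable_code_not_in_execute_path"},
]

remaining = actionable_findings(scan, vex)  # only CVE-2024-2222 still needs triage
```

This is the mechanism that turns "a vendor's spreadsheet row" into a machine-readable verdict: one finding is suppressed with a stated justification, and the consumer's effort concentrates on the alert nobody has vouched for yet.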
At root.io, we want to make sure that software consumers and producers have transparent, secure collaboration, that they collaborate effectively. And all this new guidance, the NIST framework and the new VEX statements, for example, gives me hope about the future. I do think there is going to be a lot of innovation in this space, and I can't wait to see it. If you're interested in this topic, ping me on LinkedIn, or X if you're still using it, or send me an email; I would love to talk about it more. If you have any questions, please feel free to use the microphones in the aisles, or come find me later. I'll be at the conference for a couple of days. Thank you.