Good morning, good afternoon, good evening. Welcome to another edition of DevSecOps is the way here on OpenShift TV. I am Chris Short, executive producer of OpenShift TV. I'm joined by my friend and fellow Red Hatter Dave Muir. Dave, how's it going? How have you been? I've been great, actually. I just spent last week in the North Carolina mountains, so I got to escape the heat. We were just talking about the heat all around the country, but in the mountains it's nice. Awesome. So what are we talking about today? Well, today we have a great guest, Paul Novarese, and I'll give him a chance to introduce himself, but we're going to talk about SBOMs. So if you're not familiar with SBOMs, this will be a great session, especially because recently, on May 12th, an executive order on cybersecurity was signed. There's a section in there, section four, around the software supply chain, and a lot of it concerns SBOMs. So we're going to dig into that topic. And before I have Paul introduce himself, I just wanted to mention the series that you're watching. This is the first show you're seeing in our series. It happens every month, and you can see a schedule and some of the topics we've discussed previously. This month is application analysis month. What we've been doing is taking different security topics and focusing on one every month. We do two OpenShift TV shows. This one is DevSecOps is the way, where we bring in a thought leader, like Paul, and talk about that category. Our other OpenShift TV show is the OpenShift Commons briefing, and if you know Mike Wade, he runs those as well. That's usually with a partner of ours that has specific capabilities in that category. Then we try to publish three podcasts, and we also do a blog on that category, plus a couple of other pieces of content.
And we're going to be releasing a page that summarizes what we've done over the last couple of months so you can reference that as well. If you have any questions, thoughts, or concerns, please feel free to reach out to us; you can see the web link there and the email address as well. So we're excited to bring you all this content on a monthly basis. With that, I will stop sharing here and let Paul Novarese introduce himself. Go ahead, Paul. Yeah, hey guys. Paul Novarese here. I'm a Solutions Engineer at Anchore. I've been at Anchore for about a year now; before that I bounced around a few places. I was at Docker for about four or five years as an enterprise architect, and I was at Red Hat for almost 10 years before that as a technical account manager. So I've been doing a lot of stuff in the container space for the last six years or so. Nice. So former Red Hat, you've got Red Hat blood in you, which is always good. Yeah, yeah. Enjoyed my time there. I'm still in touch with quite a few of the guys from the team, most of whom are still at Red Hat, but they've spread out all over the place too. It's amazing where you see people go from Red Hat. It's a really good springboard for just about anybody's career, I think. So what are some of the things that you do now at Anchore? Lately I've been doing a lot of pre-sales technical work, but on top of that, curriculum development, training delivery, and, for lack of a better term, a lot of evangelism around stuff like the software bill of materials that we'll be talking about today, and around container security in general. There are a lot of people who, as they do their app migrations or as they launch new projects in containers for the first time, find that some of the security implications are not always super obvious, right? Both the positives and the negatives.
Things you need to look out for, lessons learned, helping them get their arms around how they need to approach it, what to be aware of, all of that stuff. Yeah, there are a lot of new security techniques and technologies, and so many different vendors out there that seem to be overlapping. I'm sure you see that as well. It's kind of tough; even if you look at Kubernetes and all the certified partners, it's really difficult to get your hands around who does what. Yeah, it's confusing, and there's a lot of overlap, especially in places that have become commoditized. Vulnerability scanning, for example, is kind of table stakes at this point; everybody is doing it. But then there's a lot of stuff around the deeper inspection of the containers, the images, the runtime inspection. So there are a lot of places where the boundaries are getting pushed out a little further. Vulnerability scanning, at this point, I wouldn't say is a solved problem, because there are some wrinkles there, but the velocity of the frontier is definitely in other places now. Yeah, and it seems like a lot of vendors that don't actually compete with each other all have vulnerability scanning. So tell us a little bit about Anchore, what you all specialize in, and the company in general. Yeah, so the company's been around since around 2015. Once Docker started taking off around that period, it was pretty obvious that containers presented both unique challenges and unique opportunities for security. Scanning containers is not the same as scanning applications on a bare metal system, but the way containers are packaged in these images presents a real big opportunity as well.
So you get a lot of opportunity to speed up the process because of the way containers are packaged and distributed. Anchore has been developing open source tools to inspect container images, catalog the things that are in them, and then evaluate them: apply policy rules around more than just vulnerabilities. Vulnerabilities are something we do look at, but we also look at how the image is put together, things like what licenses are being used inside the container, how you actually build the image layer by layer, things like that, which have a lot of impact on security. Even if you have zero vulnerabilities, you could have a very insecure container if you misconfigure it, just depending on what ports you expose or what packages you include, how you put it together, essentially. So there's a lot more to scanning those images than just the vulnerabilities. Yep. I've always wondered, though: Anchore with an E, where did that name come from? Yeah, that's a good question. I wondered that for a while too. Around containers, a lot of the technologies and companies have built on that nautical theme that Docker kind of started, I think. So Anchore, I guess, was to give you that connotation of stability and security, that you're well anchored and very stable. And I think the extra E on the end, from a practical point of view, makes it a little easier for SEO purposes; you've got to have a little unique wrinkle. So I don't think there's too much to it, but yeah, it was definitely a nautical theme, and that stability. That's really what we're aiming to give you, that warm fuzzy feeling. Well, good. So let's jump into software bill of materials, or SBOM. How would you define that term? Yeah, so there's a lot of discussion around it.
And I don't know if there's really a standard, platonic definition of an SBOM yet. There are a few competing formats out there, and there's definitely some discussion about what should be in an SBOM. But essentially, it's like the nutrition label on a can of soup: what's in the can, what should I expect? From product to product, there are going to be variations; some of those nutrition labels have more information than others. But there is a baseline of what's the bare minimum that needs to be on this label for it to be a software bill of materials. And I think we're actually going to see a little bit of that get codified here in the next few weeks. So Dave, as you mentioned, there was a recent executive order about cybersecurity, prompted by all of the supply chain attacks we've seen: SolarWinds, Codecov, et cetera. So we're really expecting to see some direction from the federal government in the next few weeks around what is the bare minimum that needs to be in an SBOM, and maybe some guidance around formats as well, because like I said, there are a few competing formats. So it's definitely in flux. Yeah. Exciting times. Very, very exciting. Yeah, in that executive order, they do try to define SBOM, but it's very generic. And you're right, there's no mention of formats. I think I remember hearing that they are looking at formalizing on three different formats, like SPDX or something like that. But yeah, it's going to be interesting, looking for the government to provide us some information on that. Yeah. And I think, like a lot of things in government, the executive order just provides the strategic direction, and then it's up to the bureaucrats to really hammer out the details. And like I said, I think we'll see the first draft of that.
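To make the "nutrition label" idea concrete, a minimal SBOM document in one of the competing formats Paul mentions, CycloneDX JSON, looks roughly like this. This is a sketch of the shape, not a complete or authoritative document; the component shown is illustrative:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "1.1.1k",
      "purl": "pkg:rpm/redhat/openssl@1.1.1k"
    }
  ]
}
```

The `components` array is the "ingredient list"; richer SBOMs add licenses, hashes, and supplier information per component.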
I mean, there's definitely an open call for comments right now, and sometime in July we're supposed to see that first glimpse of a proposal for a real standard. So yeah, it's happening fast. But SBOMs obviously aren't something new in this industry, and in application analysis we've been trying to beat that drum for a while now, right? That's one of the key things to secure your apps, and now containers. Yeah. And up until recently, it was really more of a means to an end, whereas now it's a front-of-mind thing: we have to have this SBOM. Before, the customers I was dealing with didn't really care what we were doing; we were using an SBOM because it made our job of providing them with an assurance that a container is compliant easier. But now it's, I need to have this SBOM because I want to be able to do business with party X or party Y, and I can't do it unless I can deliver an SBOM with my software. So it's now kind of an end in itself, along with all of the tactical advantages we get for analyzing the software. As for generating it, like you said, there are a couple of different competing standards right now. There's SPDX, there's CycloneDX, and there are a couple of others. And in the container space specifically, there aren't a lot of tools, especially open source tools, aimed at generating an SBOM for a container image specifically. There are tools out there that do it for a project directory or a code repository, but there are enough different wrinkles with images that a specialized tool becomes necessary. What are the challenges with an SBOM, I guess overall, but then if you think about containers versus applications, what are some of the challenges there that you see?
Well, yeah, that's a good question. A lot of tools out there just don't know how to deal with container images; that's really the simple answer. It's just a foreign concept to them; they haven't caught up, so they're really not built for that. The good thing about container images from our point of view is that, while people don't usually think of images as being immutable, they are if you look at the image digest. That just tells you: has this image changed, or has it been rebuilt? That's the key to how we can tie a specific SBOM to a specific image. You might reuse tags, or push a particular image from one repository to another, and it's not obvious that it's the same image. But if we look at things like the image digest, that gives us a very easy and accessible fingerprint that we don't even have to generate. You can generate fingerprints one way or another, a hash of a tar file or whatever, but all of that work is already done, and we don't even have to mess with it. So we're that much further ahead in speeding up the analysis process. The real key to what we do is that once we've seen an image one time, we can evaluate the image over and over again without even seeing the image again. If the vulnerability definitions change, or if the policy rules we want to evaluate the image against have changed, we can issue that evaluation really quickly without re-scanning the image, because we know that for image digest XYZ, this is what's in it. And that's a fact that will never change. Yeah. And there's a benefit, I think, of having an SBOM: you have the answer from your first scan, and a second after your first scan, things can change.
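The digest-as-fingerprint idea Paul describes can be sketched in a few lines: the digest is just a SHA-256 over the image's content, so identical bytes always map to the same cached SBOM, no matter how the image is tagged or which repository it lives in. This is a minimal sketch; the cache and manifest bytes here are illustrative stand-ins, not Anchore's implementation:

```python
import hashlib

def digest(manifest_bytes: bytes) -> str:
    # Registries compute image digests as a sha256 over the manifest bytes.
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

# Hypothetical SBOM cache keyed by digest: re-tagging or copying the image
# to another repository never changes the digest, so the cached SBOM
# still applies and no re-scan is needed.
sbom_cache = {}

manifest = b'{"schemaVersion": 2, "layers": []}'
key = digest(manifest)
sbom_cache[key] = ["openssl-1.1.1k", "bash-5.1"]

# Same bytes -> same digest -> same SBOM record.
assert digest(manifest) == key
assert sbom_cache[digest(manifest)] == ["openssl-1.1.1k", "bash-5.1"]
```

Re-evaluating against new vulnerability definitions or new policy then only touches the cached record, never the image itself.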
But you have that list, so that if a new vulnerability or something else happens, you already know right away, right? Exactly. We're consuming vulnerability definitions constantly, in real time; they're just flowing into our system. And like I said, policy rules can change. Those aren't usually changing hour to hour, but you might add additional policy rules: maybe you need to comply with the PCI DSS standard, let's say, or NIST 800-53 or NIST 800-190; those are pretty common industry standards. But let's say you decide, oh, we're going to go into business in Europe, and now all of a sudden we also need to comply with ISO 27001. So it's not a matter of the policies themselves changing necessarily, as much as your business needs changing, so that additional policy rules need to be looked at. And again, we can issue that evaluation of the image without having to do the intensive work of opening up the image and figuring out everything that's in it; we've already done that. So the new policy can be applied and evaluated essentially instantly. The goal is to be able to do it continuously. Scan the image as early as possible; that's the shift-left mentality that DevSecOps really is, and we're beating that drum. So we do that scan as soon as the image is built. Some of our open source tools even help you get a little bit of feedback before the image is built, while you're just noodling around on your laptop. But once that image has been scanned, then as you move it from dev to QA and QA to staging, and as you push it into production, or even as the containers are executing in production, the information that we know about it is changing, right?
As you mentioned, the vulnerabilities change, or we need to change gears on the policies, and we can just continually evaluate those and alert you when something has changed: this image was compliant, and now all of a sudden it's not. And all of this happens through APIs; we can send out notifications, and you can automate your response to that. A lot of times it'll require some human intervention, but as we get better at it, we can automate even more of that. Yeah. One of the debates I've heard in the past around images and bills of materials is around layers. Some folks say, well, I want to know what the first layer had, even if it was resolved in a second layer. Do you see use cases out there where it's valuable to understand the bill of materials for each layer, or is just the last layer good enough? It depends. There's definitely a lot of debate around that, because from a security point of view, if you have, let's say, compromised code in a lower layer, even if it's not present in the final layer, an attacker could in theory get hold of it and compromise something. So yeah, I think you need to look at both. Now, there's always the option of squashing an image before you publish it. That has its pluses and minuses as well. It really hampers our ability to do certain things. For example, for an image from a public repository, let's just say we're pulling the Jenkins image from Docker Hub, having that layer history there actually helps us a lot, because we're able to infer things. I don't have the Dockerfile for Jenkins Blue Ocean or whatever, but I can infer it from the layer history. So squashing it in that case actually makes my evaluation a lot less effective; it's just information that gets lost. On the other hand,
a squashed image is going to be faster to scan; there's less to go through. And in fact, we see a big performance hit, especially in images where there are deletions from one layer to the next. If you're just building up each layer, there's definitely a hit, since it's more work to do, but once you start deleting things and our engine has to resolve what's going on from one layer to the next, it really does slow things down. So there are pluses and minuses. I think the happy medium, for images you're building yourself, is being very aware of things like multi-stage builds. That can cut down on a lot of problems. It makes your images smaller in general, and smaller images tend to be more secure; there's less attack surface. So looking at those image construction best practices is never a bad thing. Those best practices were already there before we started looking at these things, and they still tend to hold up pretty well. And I think you're touching on the next question I was going to ask: how does this integrate or relate to DevSecOps and the methodologies and the tools there? I'm assuming you can generate an SBOM as early, as far left, as possible, right? But where are those assertion points? Yeah. So what we typically see in an enterprise is we will integrate with a CI/CD pipeline. As soon as the image is built there, we do a scan of it, upload the SBOM to our back end, and then issue evaluations. The evaluations happen in our policy engine in our back end. And essentially what we suggest to our users is, once that SBOM is collected and archived, to evaluate it all the time, every time you move the image. So if I pull an image, I can get an evaluation pretty quickly; I just hit our API and say, please give me an evaluation of image X. And I don't need to worry about keeping track of where it's been before.
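The multi-stage builds Paul mentions a moment ago look like this in practice: the toolchain lives only in the first stage, and the final image carries just the built artifact, so it is smaller and has less attack surface. This is a generic sketch; the base images and paths are illustrative, not from his demo:

```dockerfile
# Stage 1: build environment with the full Go toolchain
FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# Stage 2: minimal runtime image -- no compiler, no shell, no package manager
FROM gcr.io/distroless/static
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

Only the second stage ships, so layers full of build dependencies never reach production, and there are no cross-layer deletions for a scanner to untangle.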
I just tell it what image I'm about to look at, and it gives me the evaluation back. And like I said, since we've already scanned the image, which is the intensive part, the evaluation itself is extremely lightweight. So we separate the scanning of the image, analyzing it and making the SBOM, from the actual evaluation of the image: what are the vulnerabilities right now, with the definitions we have? What is the policy evaluation, given whatever rules are in effect today? Separating those two steps enables us to do that continuous evaluation, and that's really the key. So I can get that evaluation as I push into production, right before those containers are actually created: we have a Kubernetes admission controller. It can operate in a lot of different modes, but most commonly it checks for two things. One, have we seen this image before? That is a big gate: did this image go through the proper process? Has it gone through our pipeline, or whatever our process is? If it hasn't, it wouldn't have gotten scanned, and that would be a huge red flag; somebody's trying to deploy something we've never seen before. Whether it would pass or not, that's a big red flag. And then two, of course, does the evaluation have a passing grade, essentially? So you can get that last-second check. As opposed to, well, there's the opposite, maybe not the opposite, and I don't want to say competing either, because something like ACS in OpenShift does a lot of looking at code as it executes. That's definitely an important part of security. But our philosophy is, for what we do, we aim to be best in breed at making sure that everything is all-systems-go before you start executing it. Once it's executing, we'll let something like ACS take over, because that's what they do best.
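The two gates Paul describes for the admission controller can be sketched as plain decision logic. The function and data shapes here are hypothetical; the real controller asks the back-end API for the verdict instead of consulting a local dictionary:

```python
def admit(image_digest, known_evaluations):
    """Decide whether a pod's image may be admitted to the cluster.

    known_evaluations maps image digests to a policy verdict ("pass"/"fail")
    for images that already went through the scanning pipeline.
    """
    # Gate 1: have we ever seen this image? If not, it skipped the pipeline.
    if image_digest not in known_evaluations:
        return (False, "image was never scanned -- did it bypass the pipeline?")
    # Gate 2: does the latest policy evaluation pass?
    if known_evaluations[image_digest] != "pass":
        return (False, "image is known but fails the current policy")
    return (True, "admitted")

evals = {"sha256:abc123": "pass", "sha256:def456": "fail"}
assert admit("sha256:abc123", evals)[0] is True
assert admit("sha256:def456", evals)[0] is False
assert admit("sha256:unknown", evals)[0] is False
```

Note that an unknown image is rejected outright, before its contents are even considered; that is the "never seen before" red flag.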
But both of those are important pieces; I don't want it to sound otherwise. In fact, I've been having quite a few conversations with some other people at Red Hat, and the two pieces really complement each other well. There's always a car analogy, right? I kind of think of it as: brakes help you prevent or avoid accidents, but if you get in an accident, you need the seatbelt and airbags. So just because I've got one doesn't mean I'm going to get rid of the other, by any means. Yeah. At Red Hat, we call that a layered approach, or defense in depth. But I like your car analogy. I hate car analogies, but they're the ones that people always relate to; they're tangible. Yeah. They're not perfect, but they get the job done. Yeah. And it sounds like we might be going to a demo soon. Before we do that, I just want to ask, because we've been talking about vulnerabilities and policies: this is more than just a vulnerability scan, right? It's not just a list of vulnerabilities. Can you talk a little bit more about that? Yeah. So vulnerabilities are definitely a big piece of our evaluation, because for a lot of people, that's all they think about when they think about security: if I don't have any CVEs, then I must be good. But it's definitely not where we stop; it's just the beginning for us. The policy evaluation is really the key. And that can be, like we said earlier, things like: depending on what I'm going to do with the software I'm building, is it for an internal project? Is it for something on my website? Or is it software that I'm going to actually be shipping to customers? I might have three different policies on what open source licenses I can use in those three cases.
So we can look at the components there and say, are those components in line with our policies? Or it could be something related to the current mania around supply chain attacks: where are the components that I'm sourcing coming from? So when I do a pip install numpy or whatever, am I getting what I expect? Or am I being redirected, what do they call that, a dependency confusion attack? That's a big point of concern: am I being redirected to a different repo that has compromised components in it? Or maybe I want to use nginx in my project and I fat-finger it and type nginz, and I get some kind of compromised image. Typosquatting has come to container images, right? Yeah. So those are the kinds of things we can validate: where all of those components that are in the image came from. Because if someone is malicious and giving you some kind of compromised package, they're going to use a version number that would indicate it doesn't have any vulnerabilities; they're not dumb. So a simple CVE scanner is not necessarily going to tell you everything. It's going to tell you some things, but you still have to be able to trust the provenance of those components, and we can check things like that. All of that is captured in the software bill of materials. Even things like what ports you're exposing: in my Dockerfile, I can expose port 22, right? Or what packages are installed in there. Maybe sudo doesn't have any vulnerabilities right now, but I don't want sudo installed in my container images. It's amazing how many people have it in there, and it'll pass a CVE scan, but there's still no reason to have it in there. So you can audit those kinds of things.
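The nginx-versus-nginz fat-finger Paul describes is easy to catch mechanically: flag any requested name that is one edit away from an allow-listed name but not on the list itself. A minimal sketch, with a made-up allowlist:

```python
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical allowlist of approved image names.
ALLOWED = {"nginx", "redis", "postgres"}

def typosquat_suspects(name: str) -> list:
    # A name that is not allow-listed but sits one edit away from an
    # allowed name is a classic typosquatting red flag.
    if name in ALLOWED:
        return []
    return [ok for ok in ALLOWED if edit_distance(name, ok) == 1]

assert typosquat_suspects("nginz") == ["nginx"]
assert typosquat_suspects("nginx") == []
```

Real policies usually pair a check like this with hard source restrictions (only pull from the internal registry), since distance alone cannot prove intent.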
We'll look at a bunch of that stuff; there are a ton of things we can check. Essentially, any facts that we find out about the image, we can build rules around and say this is acceptable and this is not. So should we take a look? Sure. Yeah, let me throw my screen up here and see what we've got. Let's see, share. Yeah, there's that slide. I was just telling everybody before the show: we're in the process of shipping 3.1 as we speak; literally, engineering is going through the checklist, and I'm watching them on Slack. And of course, this morning I updated my demo environment to the 3.1 images, and it caught on fire instantly. I've got it all up and running again, but I run into this and never learn the lesson. I don't know how many times I've had to do it at a conference or a meetup or something, and I'm like, oh, I want to tweak one more thing. In fact, my son is in this LEGO league, where they program a Mindstorms robot and they have this obstacle course set up. Of course, at their last competition, someone on his team literally decided to start from scratch and rewrite the program, and of course the thing did not work at all. But, you know, sometimes this guy is good. Yeah, this guy is definitely on his way to being a professional. Anyway, I might come back to a couple of slides just to illustrate some points. But here what I've got is a few little sample containers running in OpenShift: basically nginx, and then a couple of trivial containers that are just running a beacon. Basically, I just needed something running in here so we can take a look at what they actually have in them. So this is the Anchore Enterprise web UI.
One thing I always tell people: we have 100% API coverage; everything that I'll show you here, we can do through our API. This is the enterprise UI; the open source version, called Anchore Engine, doesn't have a web UI, so you have to do everything through the command line or through the API. But the API is basically the same. So anyway, I'll just open up a couple of these images and we'll look at what's actually in them. This is the policy compliance here; I'm actually going to come back to this and go to the software bill of materials. These tabs here for the image, metadata, contents, et cetera, are the software bill of materials that we're talking about: every fact about the image. What is it based on? How big is it? The image digest, the image ID, the contents of that image, and not just what RPMs are in there. This list of RPMs is just a fraction of the information we've actually got on those RPMs; there's a bunch of other stuff in there, and obviously it wouldn't all fit in a web UI. But we also catalog every file in the image, along with permissions, checksums, all of this stuff. Language artifacts, well, in this case, I don't think there's a whole lot; this is kind of a simple image, and I don't have anything installed in it. And then the changelog: if I have multiple versions of an image, I can do diffs across time. I can compare those two SBOMs and say what has changed from one to the other. So for forensics, this is a big plus. And then the build summary. This would be things like the manifest and the Docker history, which is the layer history. This is how I infer a Dockerfile for images that I don't have a Dockerfile for. And you can see what's being done at each layer here, like what's being installed.
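The diff-across-versions idea is straightforward once each SBOM is just a list of components: compare the two sets. A minimal sketch with made-up package lists:

```python
def sbom_diff(old: set, new: set) -> dict:
    # What changed between two SBOMs of the same image over time?
    return {
        "added":   sorted(new - old),
        "removed": sorted(old - new),
    }

v1 = {"openssl-1.1.1k", "bash-5.1", "curl-7.76"}
v2 = {"openssl-1.1.1n", "bash-5.1", "curl-7.76", "jq-1.6"}

delta = sbom_diff(v1, v2)
assert delta["added"] == ["jq-1.6", "openssl-1.1.1n"]
assert delta["removed"] == ["openssl-1.1.1k"]
```

For forensics, the same comparison over the full file catalog (paths plus checksums) answers "what exactly changed in this rebuild?" without ever re-pulling the images.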
Then, when we get to the evaluation, it's things like: what are the vulnerabilities in this image? And again, this is an up-to-the-second view. This is not what the image looked like when I scanned it; this is what I know about it using the current vulnerability definitions. And then, of course, the policy compliance: how do I turn this into a pass or a fail? So you can see here we've got a couple of things wrong with this image. The root user: I didn't define a user for the image to run as. Things like this aren't quite as important in Kubernetes, since you can set them in your deployments, but still, belt and suspenders, right? Then things like: are there secrets in the image? So I, for example, left an AWS access key and an SSH private key in here. And we're just searching for these; they're essentially just arbitrary regular expressions, and you can see the regex here that I'm searching for. As I scan the image, I look for these, and we can add whatever we want. There's an AWS access key as well. These are things the developers should be using Vault or Kube secrets for, but a lot of times, while they're prototyping, these end up in a file somewhere and they forget to remove them before they build the image. So we can check for those kinds of things, which, again, you wouldn't see with a CVE scan. Right. And then there's a bunch of vulnerabilities in here as well. So that's what your basic software bill of materials plus the evaluation looks like. And this, again, is generated on the fly when I pull up the webpage. If I go back here and say, show me this image, it will re-evaluate; it's evaluating it right there. So boom, it just re-evaluated it.
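The secret checks are, as Paul says, just regular expressions run over the image's files. The well-known AWS access key ID shape, for instance, is `AKIA` followed by 16 uppercase alphanumerics. A minimal sketch; the sample key below is AWS's published example, not a real credential:

```python
import re

# AWS access key IDs follow a fixed, well-known pattern.
AWS_ACCESS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_secrets(text: str) -> list:
    # Scan one file's contents; a real scanner walks every file in the image.
    return AWS_ACCESS_KEY.findall(text)

config = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\nregion = us-east-1'
assert find_secrets(config) == ["AKIAIOSFODNN7EXAMPLE"]
assert find_secrets("nothing to see here") == []
```

Adding a new check is just adding another pattern (SSH private key headers, API token prefixes, and so on) to the list the scanner runs.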
All right, so a question for you, because typically when people think about a software bill of materials, they might only be thinking about the dependencies that you pull in. But I think you made a good point here, and a little bit earlier, that it's not only that, right? It's also some of the files that are installed in an image, like sudo or whatever; you need to understand those as well. Right, right. There could be a lot of stuff in here: just general files lying around that are not coming in from a package manager of some sort. We're going to look at things like Node or Ruby gems or Python packages, but I can also just put arbitrary files in an image, and all of that would slip through that kind of scan. So I can still look at those, and I've got a comprehensive view of every file in the image. I can look at things like checksums, which is a good way to fingerprint things like crypto miners. That's another big hot topic: stuff like XMRig or Monero miners that people will try to pull in. In a sense, they're not very sophisticated; people are just like, if I've got access to an EC2 instance, I'm just going to run this crypto miner. They don't spend a lot of time customizing it or anything, but the plus side is that the fingerprints of those things stick out like a sore thumb, and we can find them. I might have one in this image, actually. But in any case, I'll show you how we can find those. Policy rules: we talked about these, but what do they actually look like? Let me open this up so you can see. Out of the box, we ship a CIS Docker Benchmark policy. We have some NIST policy bundles. We've got one that we just recently developed for FedRAMP, et cetera. I've been working on one for DevOps in general that I've been using. Things like, you know, we talked about licenses.
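Fingerprinting a known binary like a dropped-in miner is just comparing file checksums against a denylist. A sketch; the "known bad" hash here is computed from placeholder bytes, not a real miner binary:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical denylist of checksums for binaries we never want in an image.
KNOWN_BAD = {sha256(b"fake-xmrig-binary")}

def flag_files(files: dict) -> list:
    # files maps path -> file contents; flag anything whose checksum
    # matches a known-bad fingerprint, regardless of its filename.
    return sorted(path for path, data in files.items()
                  if sha256(data) in KNOWN_BAD)

image_files = {
    "/usr/bin/bash": b"real shell",
    "/tmp/.hidden/miner": b"fake-xmrig-binary",
}
assert flag_files(image_files) == ["/tmp/.hidden/miner"]
```

Because the match is on content, renaming or hiding the file does not help the attacker; only modifying the binary itself changes the checksum.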
Vulnerability checks are really easy. Again, they're kind of table stakes in a way, but we can do different things with them and build a multi-stage set of criteria around them. In this case, I only flag a vulnerability if it is greater than or equal to high severity and it has a fix available. And you can get even more specific than that — you can look at things like the CVSS scores, or how long it's been since the advisory was published, et cetera. There's a bunch of knobs, so you can get as sophisticated as you want around those kinds of things. But again, vulnerabilities are almost the least interesting part of it, right? So, the image typosquatting that we mentioned: I can do things like make sure I'm not pulling my base images from Docker Hub, and instead say you can only use this internal Harbor repo. Or build rules around package typosquatting — the same kind of thing happens with Python. In this case, I just make sure that if you're using Python, you have to install a particular package that has its own hardening around what Python does when it pulls down packages. Then there are checks on the Dockerfile as they build the image, looking at how the image is put together: are they pulling down code from arbitrary GitHub repos rather than using the code in my internal repo? If they're running git clone in there, something's probably not right. So we're checking for those kinds of things too, and again, that's beyond what a CVE scan can find. Then you see the secret scans we talked about, and crypto miners, which, like I said, have pretty stable fingerprints. They're really easy to catch.
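The multi-criteria rule Paul describes — only flag findings at high severity or above that also have a fix available — is easy to express in code. A sketch of that logic; the severity ordering and field names are illustrative, not Anchore's actual policy schema:

```python
# Illustrative severity ordering, not Anchore's schema.
SEVERITY_RANK = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def should_flag(vuln: dict, min_severity: str = "high", require_fix: bool = True) -> bool:
    """Flag only vulnerabilities at or above min_severity, optionally requiring a fix."""
    severe_enough = SEVERITY_RANK[vuln["severity"]] >= SEVERITY_RANK[min_severity]
    has_fix = bool(vuln.get("fix_version"))
    return severe_enough and (has_fix or not require_fix)

vulns = [
    {"id": "CVE-2021-0001", "severity": "critical", "fix_version": "1.2.3"},
    {"id": "CVE-2021-0002", "severity": "high", "fix_version": None},      # no fix: skipped
    {"id": "CVE-2021-0003", "severity": "medium", "fix_version": "2.0"},   # too low: skipped
]
print([v["id"] for v in vulns if should_flag(v)])  # ['CVE-2021-0001']
```

The extra knobs Paul mentions (CVSS score thresholds, advisory age) would just be more predicates ANDed into the same decision.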
I mean, we're looking at things like checksums on files — binaries that have a particular checksum, or directory structures they create. Any number of things we can look for in here. So there's a ton of different things out there. And of course, we also have a feature called mappings that allows you to apply different sets of these rules in different situations, because you may have two or three different projects that have completely different regulatory requirements. So we can just say: for images in repo X, apply this set of policies; for images in repo Y, apply that set of policies. You can get it all done in one big bundle. And all of this is JSON. Essentially, this entire thing we call a policy bundle is just a big JSON file, so it's very easy to flow it out of our API, make changes to it, and then flow it back into the API. It's really GitOps-friendly, essentially. And then we're kind of venturing into — like I said, we're not a runtime monitoring tool like ACS or Falco or something like that, but we do keep track of where the images we scan are actually executing. In this case, in this namespace, I've got these images running, and I can do things like show me all the vulnerabilities in my production environment, and I could filter on a particular CVE if there was a zero-day or something like that. So there is this kind of view into where the things we've scanned actually are out in the real world. We don't just cut it off once the image is in production. How are we on time there? We're good. Yeah, you're good. Plenty of time. 20 minutes left. Yeah, that's good. Yeah.
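The mappings feature — applying a different rule set depending on which repo an image comes from — amounts to matching the image reference against an ordered list of patterns. A minimal sketch; the registry names and policy set names are invented for illustration, and the real policy bundle is Anchore's own JSON format:

```python
from fnmatch import fnmatch

# Ordered mappings: the first glob that matches the image reference wins.
# Registry and policy names below are hypothetical.
MAPPINGS = [
    ("registry.internal/payments/*", "pci-policy"),
    ("registry.internal/fedramp/*", "fedramp-policy"),
    ("*", "default-policy"),  # catch-all
]

def select_policy(image_ref: str) -> str:
    """Return the policy set name for the first matching repo pattern."""
    for pattern, policy in MAPPINGS:
        if fnmatch(image_ref, pattern):
            return policy
    raise LookupError(f"no mapping matched {image_ref}")

print(select_policy("registry.internal/payments/api:v3"))  # pci-policy
print(select_policy("docker.io/library/nginx:latest"))     # default-policy
```

Since the whole table is plain data, it serializes naturally to JSON and round-trips through an API — which is the GitOps-friendly property Paul is pointing at.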
Yeah, that's the quick and dirty view of what that software bill of materials looks like in the product, how we use it, what we do with it. Yeah, that's interesting. Sorry, I was just going to mention, back to my earlier question about there being more to an SBOM than just the dependencies: I believe, and I actually read it again, the definition given in the executive order seems like they're only thinking about dependencies. So we might need to educate our president on what else... I'm going to go sit down with Joe, yeah. But no, I think you're right. I think we're ahead of the game in that regard. We're definitely looking at a lot more, and all of the stuff I've seen mentioned as things that will probably be in that first standard, we are already collecting. A lot of it is around the dependencies, the licenses, and so on. But we want to have complete knowledge of the image. Everything about it, we want to catalog — even down to things like, let me flip over to this, this is my other back end here — even down to things like, when we look at the image, we want to know where it's coming from. What are the base images? So if I look at this RHEL image — I need to resize my fonts a little bit — in this case, looking at those layers, and this goes back to the conversation we were having earlier about looking at layers versus a squashed image: if you have a layered image, we can detect the base image based on those layers. In this case, this image is based on UBI 8, and I know that because I've seen UBI 8 and I know what its layers look like, and this image has those same layers at the beginning of it. So we know for a fact that that's what this is based on.
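Detecting a base image from shared leading layers, as described here, is essentially a longest-prefix match over layer digests. A minimal sketch with made-up digests (real ones are full SHA-256 values):

```python
from typing import Optional

def detect_base(image_layers: list, known_bases: dict) -> Optional[str]:
    """Return the known base whose full layer list is a prefix of the image's layers."""
    best, best_len = None, 0
    for name, layers in known_bases.items():
        if image_layers[:len(layers)] == layers and len(layers) > best_len:
            best, best_len = name, len(layers)
    return best

# Made-up digests for illustration.
known = {"ubi8": ["sha256:aaa", "sha256:bbb"]}
app = ["sha256:aaa", "sha256:bbb", "sha256:ccc"]  # app layer added on top of UBI 8
print(detect_base(app, known))  # ubi8

squashed = ["sha256:zzz"]  # squashed image: one layer, ancestry lost
print(detect_base(squashed, known))  # None
```

The squashed case shows why the conversation keeps coming back to layering: once the image is flattened to one layer, the prefix match has nothing to work with, and you fall back to heuristics like reading `/etc/os-release`.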
And that gives us the ability to do things like — see, if I click on it, it just takes me to the scan for UBI 8. Once we have that, and we can say this image is based on that image, I can differentiate between a vulnerability that was introduced by the base image and a vulnerability that the developer put in with whatever they were doing. And likewise for policy violations. So that helps us with the remediation workflow: who do I assign this to? If it's a base image that we maintain internally, I can open a GitHub issue or send a Jira ticket to that team. If it's something the developer did, I know how to assign that bug, essentially. Right. So that's a big advantage, too. And we can only do that if the images are not squashed, because once you squash them, it's just one layer for the entire image and there's no real ancestry left. You can still do things like say, oh, it's RHEL 8.4, which we get from things like the /etc/os-release file, but you can't be definitive about what the ancestor image is in that case. So you do lose a lot of information. We can be pretty sure in a lot of cases doing forensic work, but you can't be 100% sure. So are there challenges that you see with other Linux distributions, or different programming languages, maybe, in generating the SBOM? Well, maybe not in generating the SBOM itself, but in really being effective with it. I mean, there are definitely some distributions that are better than others about things like their advisory feeds. RHEL is definitely extremely good and up to date. Some others are not so good, and in those cases where the feed is deficient, all the information we have is usually the NVD feed — there are a couple of other feeds as well — or we're relying on a third party to update that information. And in that case, we can only be as good as the information we consume.
And if the publisher of the distribution is not maintaining that feed, you're going to get a lot of false positives. Yeah, absolutely. So the evaluation we generate is going to be less than perfect. But the good thing is, if that OS publisher does get their act together, then all of a sudden the existing scans you have become more valuable. As soon as we consume a better-quality feed, the evaluations get better automatically. We don't have to re-scan all the images, because we did the scan and we know what's in there; we just don't know the quality of those components until the feed is up to snuff. Yeah, that's a really good question. And you mentioned CI/CD integrations — assuming it's API and command line, is that technically how it would work? We have plugins for things like Jenkins or GitLab, but essentially they're just massaging the API. Everything is done through the API. For GitLab, what that looks like is the developer gets something in their security dashboard. I set up a pipeline for them that basically just builds the image and then hits the Anchore API right here and scans the image. And then, if you have the plugin, it just shows up in their security dashboard. So you can see in here, okay, there's a few images; I can go to the vulnerability report and see what's in there. This is a little bit new, but in Jenkins you see all of the actual findings as well. And it's not just those two, right? I don't want to single them out.
We work with Azure DevOps pipelines, we work with CircleCI, we work with Codefresh — basically, we try to work with any CI/CD tooling out there and push all of that stuff we evaluated, both the vulnerabilities and the evaluation. They're just JSON; they flow out of the API, and then the plugin can do something with them and present the vulnerabilities or the policy violations to the developer right there. They don't have to go to our web UI to see this stuff — it's the exact same information they would see if they did go into the web UI. So they can just fix it, rebuild the image, and move on. And that's really the idea, right? Get this information to the developers as early as possible, because it's easier for them to fix when they haven't moved on to another project, they haven't lost the context, they just know what's going on. They're in that mode. It's a lot cheaper to fix, too. I've seen some of the estimates of how much it costs to fix a defect: in development versus in QA can be 10x, which — I don't know, that seems like a pretty big difference. But when something's in production, they're saying it can be a hundred times more expensive to fix the defect. I think it was something like $80 to fix a defect in dev versus something like $7,400 to fix it in production, when you take into account the time it takes away from making money. Right, right, right. Yeah. I mean, yeah, if you have to take your website down, it's pretty impactful. Right, right. Oh, yeah. I've seen all kinds of crazy numbers on, if this application goes down, we lose X number of dollars per minute, hour, whatever. That's what I tell people: if your whole infrastructure goes down, you need to know that number.
Because I remember working for a newspaper company, and it was something like $64,000 a minute. And what about your reputation? It was the third-largest publisher in the nation, right? So that's a lot of money. Right, right. Or people come to your website and it's busted — what are they thinking? That's hard to recover from. Something gets stuck in the mind of somebody, and that's all they associate you with: oh yeah, their website was broken that one time. I went to track my package from a major e-commerce retailer and it didn't work, you know. Yeah. Just thinking about the pipeline: is there anything you would recommend developers do before CI/CD at all, or in CI/CD? Absolutely. Yeah, that's a really good question. So we have open source tools out there now — we just released two new tools a few months ago, Syft and Grype, the Anchore Toolbox. We've kind of split the functionality: Syft generates these SBOMs, and you can point it at an image, but you can also point it at a project on your laptop. So you can start doing some of these policy checks before you even commit code into the repo. And then Grype takes that SBOM and generates the list of vulnerabilities, and it can also work on things that aren't images. So you can get that first pass of feedback before you push. You don't have to wait for the image to build and then the scanner to run. They're written in Go, they're really fast, so you can get that feedback very quickly and fix a bunch of stuff before you've committed any code at all. And then your first push, hopefully, will be the only one.
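One way to use these tools before CI/CD ever runs is a small pre-push check: generate an SBOM with Syft, scan it with Grype (something like `syft dir:. -o json | grype` — check the tools' docs for the exact invocation), and fail if anything serious shows up. The sketch below gates on a simplified stand-in for Grype's JSON report; the field layout is an assumption for illustration, not Grype's exact schema:

```python
import json

def gate(report_json: str, fail_on=("High", "Critical")) -> list:
    """Return IDs of findings severe enough to fail a local pre-push check."""
    report = json.loads(report_json)
    return [m["vulnerability"]["id"] for m in report["matches"]
            if m["vulnerability"]["severity"] in fail_on]

# Simplified stand-in for a scanner's JSON report.
sample = json.dumps({"matches": [
    {"vulnerability": {"id": "CVE-2021-44228", "severity": "Critical"}},
    {"vulnerability": {"id": "CVE-2020-0001", "severity": "Low"}},
]})
failing = gate(sample)
print(failing)  # ['CVE-2021-44228']
```

Wired into a git pre-push hook, a nonzero exit when `failing` is non-empty gives the developer the same feedback the pipeline would, minutes or hours earlier.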
You'll never have any bugs in committed code ever again. Yeah, you impress your teammates — you didn't break the build, right? Yeah, yeah. I mean, there are all kinds of games around bug bounties or whatever; it might upset the economy on those things. Obviously it won't fix every problem, but you can get a lot of stuff fixed really quickly that way. Yeah. So we would absolutely recommend that people do these scans as they're coding, as they're prototyping, as they're doodling on their laptop, before they push into a repo and the auto-build kicks in. Yeah, nice. Very good. We have a handful of minutes left. Anything else you want to show? Yeah, I think the only other thing — we did run a survey just recently, because supply chain attacks are really where we're focused right now. I think I even have a slide on it. Yeah. Something like 64% of people — and these are major enterprises — have had some kind of impact from a supply chain attack. Now, I'm not saying Anchore will solve every single one of these, but our mission is to basically reduce that number and get it as small as possible. And I think a lot of organizations really have heard that; they're paying attention. Supply chain attacks are really front of mind with a lot of these enterprises. So we definitely want to help people protect themselves and reduce the frequency of these problems. Like I said, we're never going to eliminate every problem out there, but we want to reduce the frequency of them — how often they occur — and the severity of them when they do occur. That's the main thing. If we can do those two things, I think we'll make the world a better place. Yeah, that's it. Cool.
Definitely appreciate talking about it with you guys. It was a really good conversation. Yeah, likewise. Thanks for stopping by on this DevSecOps is the way show. Yeah, I definitely enjoyed it. It's always good to talk to somebody else, bounce ideas around, and get y'all's perspective on this too, because sometimes I feel like I'm in an echo chamber here. But you guys see a little bit larger slice of the industry than I do, so it's good to see what else is going on out there. Right. Awesome. Well, I think that does it. I guess — Chris, shoot, I had a slide up. Oh, you did? Yeah, I just shared this real quick. Okay. Can you all see that? It's coming. All right. Yeah. So again, this is our DevSecOps is the way monthly show. This month was app analysis, and there are a couple more days to go in the month. But next month we'll be talking all about data: data controls, data encryption. And we'll have, again, another DevSecOps is the way, a second OpenShift TV show, as well as a couple of podcasts. So definitely look forward to that as well. But with that, I think I'm good. I want to thank Paul again for joining from Anchore. I thought it was a great show. And Chris, as always, thank you. Oh, no problem. Thank you. Really appreciate it. And great content, Paul. Thank you very much. All right, guys. Take it easy. See you out there. Thank you. Bye.