All right, it's 11:31, so I will go ahead and get started. Thank you for coming to this. I know there's a ton of talks out there, and I appreciate those of you who have showed up to learn about Ortelius and tracking DevOps and security data, something that is near and dear to my heart and that I have thought about for quite some time. And it's an important topic, especially as we venture into things like AI. What's AI without data, right?

So we've had a security awakening. I'm sure most of you are more than aware of it. Here are just some numbers to ponder to get us started. 742%: the astonishing growth rate of malicious supply chain attacks. It's kind of crazy, actually. 88% of boards consider cybersecurity a risk of doing business, which means they're actually talking about it at the board level, not just at the lower levels. And then this is my favorite number: 65 to 80% of companies say they need more visibility around application security. Now, sometimes when we talk about visibility, we think about observability and using observability tooling to track transactions. But that particular number goes farther than just what things look like in my Kubernetes cluster. What visibility do we have into our software supply chain? Who's creating it? Who's building these software components? What's the provenance? Why are we using them? Who's using them?

Before we get too far: I am Tracy Ragan. I am the CEO of a little company that I always call the company that could, DeployHub. I have served on the board of the Open Source Security Foundation, I helped start the Continuous Delivery Foundation, and I helped start the Eclipse Foundation. So I've been around open source for some time, and I guess I am kind of a perpetual volunteer when it comes to these open source projects.
I'm currently on the CDF TOC, and I'm the Ortelius community organizer, and I also co-founded a company called OpenMake Software with my partner, Steve Taylor, who is sitting here in the front row supporting me.

So what are we really talking about when I talk about Ortelius and the process of tracking data? In a monolithic world, we may not have seen this as big a problem, because when we run our builds, we run a full build of all the components. We have supported builds in the past that run for four to six hours, some even longer. But when that build ran, it generated logs and everything in that build directory, and if somebody needed to know information about it, we had all those logs in that build directory. SBOMs — some people might be generating them, some people may not. I recently wrote an article with Vincent Danen from Red Hat called "SBOMs: So Far, So Good, So What?" And that's kind of the point here: we have information trapped across all of these build systems, all these workflows — Jenkins workflows, CircleCI workflows. We have tons of data that's under the covers, sitting in these logs, trapped, as I call it, across all these workflows.

And in a microservices environment, you're doing that even more, right? Microservices decouple a monolith, and so every single microservice has its own SBOMs, its own CVEs, its own deployment configuration, its own inventory. And that is where we start struggling with understanding our supply chain. We also have to think about the fact that we're generating a lot of data, and we work very hard to generate that data, but we often don't do anything with it. Now, if you think about some of the new AI coming up — and Christy did a nice talk on AI during the keynote — if we think about AI in terms of DevOps, we simply don't have the data to really build AI systems.
If we look at Copilot — if you haven't played with it, you should — Copilot uses GitHub as a source of information to create those code snippets. What do we have in the DevOps world? We don't have a central place, even within each individual organization, to track this kind of data. So Ortelius is about tracking this kind of data: SBOMs — and there's a lot of information in SBOMs — release versions, the drift across a cluster, or between two different clusters running the same microservices that are probably not the same versions. Ortelius is looking to untrap that information, pull it into a central data point and dashboard so that we can start using it. Once we get that data, we can start doing some more interesting things with it.

So when we talk about log visibility and what Ortelius is tracking, it's tracking all the hard work you guys are already doing. It tracks things like, again, your SBOMs, regardless of what you're using — Syft or something else. It's gonna pull in any of your versions: any time a new version of a microservice, or any component for that matter, comes up, it aggregates the data up to what I like to call the logical application. So if you're building microservices right now and you're being asked to generate an application-level SBOM, how do you do it? Do you take all of the SBOMs from all of the microservices and say, here you go? I don't know if the auditors are asking for that. I think they're asking for application-level information. If you are an SRE and you get a call that says, hey, your logon's not working — well, we didn't write that. Who did? Where's the ownership of that? Something as simple as understanding who to call if a microservice breaks in your application can be challenging. It would even be hard to answer the question: what version of that microservice are you using for that version of the application?
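To make the application-level rollup concrete, here is a minimal sketch of aggregating per-microservice SBOM data up to a logical application view. The data shapes and names are invented for illustration — Ortelius's real schema will differ.

```python
# Hypothetical sketch: roll component-level SBOM package lists up to an
# application-level view, keeping ownership traceable. Not Ortelius's
# actual data model -- purely illustrative.

def aggregate_sboms(component_sboms):
    """Merge per-microservice SBOMs into one application-level package set.

    component_sboms: dict mapping component name -> list of
    (package, version) tuples taken from each component's SBOM.
    Returns a dict mapping (package, version) -> sorted list of the
    components that consume it, so you can answer "who uses this?".
    """
    app_view = {}
    for component, packages in component_sboms.items():
        for pkg in packages:
            app_view.setdefault(pkg, set()).add(component)
    return {pkg: sorted(owners) for pkg, owners in app_view.items()}

sboms = {
    "cart-service":  [("log4j-core", "2.14.1"), ("guava", "31.0")],
    "login-service": [("log4j-core", "2.17.0")],
}
app_sbom = aggregate_sboms(sboms)
# ("log4j-core", "2.14.1") traces back to ["cart-service"]
```

The point of keeping the component list per package, rather than just a flat union, is exactly the ownership question above: who to call when one package breaks.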
So Ortelius tracks that kind of information as well. It tracks a blast radius: because it's tracking this information and all these relationships, from the SBOM all the way up to the logical application and the environments it's running in, we're starting to see, if you make a change to one microservice, what the blast radius is gonna be. This microservice is a high-risk microservice because it's gonna impact 15 of the most important applications we're running within our organization. It's that level of information that begins to build an organization's security profile.

So Ortelius can gather data from any source, and we are adding, as quickly as the open source community can build them, these abilities to pull this information in. We're pulling in whatever we can. We can pull in information from GitHub Actions, for example, from AppCo, from Syft as I mentioned. We track things that are happening in Quay and Docker Hub. And today we announced some very exciting news for Ortelius: Red Hat has joined the project to contribute what's called a universal object reference. A universal object reference is basically an artifact repository that can manage any kind of artifact. Ortelius can track the data on any type of artifact, and believe it or not, there are three main types of artifacts: containers; database SQL; and files — pure files, a Salesforce Apex file, any type of file. Files kind of cover the rest of the world. You've got containers, you've got databases, and you've got file systems. So we track that information, and the project the Red Hat team contributed is called Emporous; it will sit on the back end, and I'll talk about that in a minute.

Ortelius has been incubating at the CD Foundation for the last two years. We are super thankful to the CD Foundation for supporting the project. They understand the importance of the data that we're gathering and tracking, and it is the next step in the process. I'm a big fan of CDEvents.
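A blast-radius query of the kind described here is, at its core, a reverse-dependency lookup. The sketch below shows the idea with made-up data; the real system works over its relationship graph rather than an in-memory dict.

```python
# Illustrative blast-radius lookup: given which applications depend on
# which components, find every application impacted by changing one
# component. Names and data shapes are invented for the example.

def blast_radius(dependencies, component):
    """Return the set of applications impacted by changing `component`.

    dependencies: dict mapping application name -> set of component names.
    """
    return {app for app, comps in dependencies.items() if component in comps}

deps = {
    "storefront": {"cart-service", "login-service"},
    "mobile-api": {"cart-service"},
    "reporting":  {"billing-service"},
}
impacted = blast_radius(deps, "cart-service")
# a change to cart-service touches both storefront and mobile-api
```

Counting the size of that set is what lets you flag a component as high-risk: the more applications in the result, the bigger the blast radius.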
Imagine the amount of event processing that we could do if we had proper data. So data is part of the puzzle, and centralizing the data is going to be important. If we think about ourselves as a community — and when I say that, I'm talking about the CD Foundation — as a community that's building a broad solution, what we have to think about first of all is: how do we get rid of all the one-off scripts? How do we start using data? How do we start building policies, and what can we do to fix the interoperability problems? CDEvents is fixing the interoperability problems. Ortelius is worried about gathering the data, and there are CNCF tools out there that are helping with building out the security profiles.

So I say: you have the data, let's make it actionable. What is actionable about that kind of data? First of all, you've got to centralize all of the DevOps and security data. One is not good without the other. Certainly, you might generate an SBOM, but what you need to be able to do is track where that particular component is running. Again, in a monolith it may be a little easier, but it gets more complex as we decompose. So we have to combine the security, SCA, and DevOps information to have the complete picture. One piece is not enough to tell the story; it's just a chapter in the story. How do you view open source packages across your entire organization? When Log4j showed up, how many organizations scrambled to figure out where it was running, what had to be rebuilt, and what versions? All of us did. Ortelius answers that question. How do you version microservices? Of course you can check them into Git or whatever repo you want to, but there's more than just the microservice itself. It's the microservice and all of the data that's connected to it that needs to be versioned together. How do we see the logical application?
And that's the other thing Ortelius is doing: creating a way for teams to create a logical application that, once it's created, gets a new version every time an underlying service gets updated, so that you know you have a new release candidate even though you didn't run a build. And that's what happens in a microservices world. In the monolith, we have a process that runs. We have a build. We have a new release for our build. It may or may not get released, it may not get deployed, but at least we know we built it. Our CI/CD system, Jenkins, ran our build and created our new release. In a microservices environment, you may have an underlying component that you're dependent upon. It got rebuilt, but that doesn't mean you knew about it. And it doesn't mean that you had deployed it; it just means that it got deployed out there, and now you have something broken and you may not know why. And that's why a release number for a logical application is critical.

Ortelius has a CLI, a command line interface. And if I were to say where it sits in the process — if we had the left-hand side dev and the right-hand side ops — it fits kind of on the ops side. Initially you do your Git commit signing and your source and repo scanning. And then you start creating your image, and that's when Ortelius is interested in the data. It starts getting interested at the point in time that you build: you create your image, you get your SBOM, you push that out and register it so that we can start tracking the changes in your logical application, where it's been deployed, and what it's consuming. Because we're gonna take that SBOM information, bring it in, and continue aggregating that data up, creating a pretty massive set of data that now you can build some logic around. You can build some policies around.
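As a rough illustration of the post-build registration step described here, this sketch bundles the build-time evidence (Git commit, image digest, SBOM) into a single record of the kind a CLI might send to a REST API. All field names and the shape of the record are hypothetical — the real CLI defines its own schema.

```python
# Hedged sketch of the evidence a post-build registration step might
# assemble before sending it to a tracking service. Field names are
# invented for illustration, not the Ortelius CLI's actual payload.

def build_registration(component, version, git_commit, image_digest, sbom_path):
    """Bundle build-time evidence into one registration record."""
    return {
        "component": component,
        "version": version,
        "git": {"commit": git_commit},      # ties the artifact back to source
        "image": {"digest": image_digest},  # what actually got pushed
        "sbom": sbom_path,                  # evidence generated at build time
    }

record = build_registration(
    "cart-service", "1.4.2",
    git_commit="a1b2c3d",
    image_digest="sha256:deadbeef",
    sbom_path="cart-service-1.4.2.spdx.json",
)
```

Capturing the Git commit and the image digest in the same record is what later makes it possible to walk backwards from a deployed artifact to its source, as Steve describes in the Q&A.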
A policy might say: if this microservice is used by more than 20 application teams, it has to go through a more stringent release process. That would be a simple policy.

So the architecture — this is an image of what the new architecture looks like that we're currently working on. It includes a UI that the Emporous project will use. So Emporous will be a backend that will store the objects themselves. In the past we've not been a repo — we're just a data collector — and Emporous is gonna add that back in. We also got a grant from Ripple to create an immutable SBOM ledger. So the team is working on that, and if any of you out there are interested in helping an open source project, we would love to have you. There are bounties, because we got grant funding, so if you wanna get paid for your work, there are bounties out there to help with some of the pull requests. In the middle there is an ArangoDB, because there's other information we also have to handle, like login and security information and tracking what we call domains. And most of it's gonna be pushed through the Ortelius REST APIs.

So we talk about gathering evidence and what we do with it. One of our most popular white paper downloads is on how to version microservices, and that's because people have trouble with drift. What I'm showing you here is a list of components and their versions. This happens to be the cart service; it has multiple versions, and it also has what we call a domain. So, for example — dream with me into the future here of what Ortelius could be doing — Ortelius could be managing Kubernetes, right? And all the components underneath Kubernetes. Kubernetes would be our domain, or we could even have a higher-level domain, the Linux Foundation. And underneath the Linux Foundation we could have Kubernetes, we could have Argo, we could have Jenkins.
And maybe Jenkins and Argo are one under CNCF and one under the CDF. Those are your domains. So that's how Ortelius tries to organize the data: through a domain-driven design. Each component is assigned to a domain, and you can then look through your domain and say, this is part of my stack, and this is what I wanna start tracking as part of my logical application. So the way Ortelius is designed, it has the ability to really track your stack.

Now, once we have the components, we start pulling in information about those components, like CVEs, for example, or their license consumption — any of the data we get from the SBOM — and once we have that data we can generate the CVEs. This information is useful because, if you're an application team and you're about to take on a component that you didn't create, you need an easy way to find out if everything is going to be approved moving forward. Are you able to use those licenses within your organization? You also might wanna know if there are outstanding CVEs before you start consuming it. And we know that happens all the time, right? You get a build out there, it's clean, and a day later you have a bunch of CVEs because something new was found.

So oftentimes I get asked the question: how do you aggregate the data up? Ortelius is not psychic. I would love for it to be, but it's not. It's smart, but it's not psychic. Application teams either use a TOML file to define their application, or they can use an application designer. You might see this in similar deployment tools where you have to define what your components are and whether there's a flow logic to it. We allow you to do that. Most companies use the TOML file. Once that information is there, we have the components, we have the versions, we know what the application base version is, because the application team told us. From there on out, we automate the rest.
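The application definition described here — a team declaring its components and versions in a TOML file — might look something like the following. Every field name below is invented for illustration; the real file format is defined by the project.

```toml
# Hypothetical application definition file -- field names are
# illustrative, not Ortelius's actual schema.
[application]
name   = "storefront"
domain = "acme.retail"          # the domain this application belongs to

[[components]]
name    = "cart-service"
version = "1.4.2"

[[components]]
name    = "login-service"
version = "2.0.1"
```

Once a file like this establishes the base version and the component list, the tooling can take over: every later component update produces a new logical application version without anyone editing the file.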
Any time one of those underlying components gets updated, all the microservices data comes up to the application level, and we create a new logical version of your application. So now it looks like a CI server, right? That's exactly what it kind of is. It's replacing what CI servers used to do, in the sense that you ran a new build to get a new release number; we're automatically creating a new release number for you every time that happens. Now, why is that important? It's important because every time you get a new version of your application, even though you may not have built it, you have new SBOMs and you have new CVEs. And Ortelius is tracking that for you and showing you what your logical application looks like every time an underlying microservice gets updated. So a new version of a microservice means a new version of your application. A new version of your application means you have new SBOMs and new CVEs.

Now, why is that important? I could stand up here and give you a big long list of reasons. But just go back 18 months, to December of 2021, when we were all asked: tell me where Log4j is running — and tell me now, because I'm the director and I'm panicked, because now we know everybody can get out to the operating system through an exploit. Ortelius, because it's gathering all that information, can do that. And it does it simply by a package search within the database, and that package search will return whatever you might be looking for. It will tell you all the way down to the application and the component and the environment that it's running in. So you could do a package search based on environments, based on applications, or based on components. And that's where we begin to generate kind of hardcore, organizational, security-level profiles. Because with the evidence all in one place, we can now start searching it.
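The Log4j-style package search is straightforward once the evidence is centralized. A minimal sketch, with invented row shapes standing in for the real database:

```python
# Illustrative package search over centralized SBOM evidence: find every
# environment/application/component running a given package, the way a
# Log4j audit requires. Data shapes are invented for the example.

def find_package(evidence, package):
    """Return (environment, application, component, version) rows for
    every occurrence of `package` in the centralized evidence."""
    hits = []
    for row in evidence:
        for pkg, ver in row["packages"]:
            if pkg == package:
                hits.append((row["environment"], row["application"],
                             row["component"], ver))
    return hits

evidence = [
    {"environment": "prod", "application": "storefront",
     "component": "cart-service", "packages": [("log4j-core", "2.14.1")]},
    {"environment": "qa", "application": "storefront",
     "component": "cart-service", "packages": [("log4j-core", "2.17.0")]},
]
hits = find_package(evidence, "log4j-core")
# both rows come back, including the vulnerable 2.14.1 still in prod
```

Because every hit carries the environment alongside the application and component, the same search answers all three questions at once: where it runs, who owns it, and which versions need rebuilding.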
We can start reporting on it, and we can start doing something about it. I have never been a big fan of KPIs, to be quite honest, for trying to track the success of things. But when it comes to our software, we should be able to track these simple levels of detail. The reason we can't is that, in the past, we've had a very bad habit of doing everything with scripts, and scripts generate logs, and logs sit around in the build directory, and maybe we check them in and maybe we don't. As long as we continue to do that, the most important core data is gonna sit around, and we're never gonna be able to see it, use it, and start building some really cool AI systems on top of it. These are the kinds of tools that data is gonna come from.

The kind of data we're talking about is as simple as the drift of a single microservice. What is the drift? Tell me how many versions I'm running across all my different clusters, or even in my different namespaces. Simple, simple data that we need to start aggregating up. Now, a lot of this data is out there, scattered between all these different tools. You might have a deployment tool that's showing you that. You might have Spinnaker sitting here showing you that. You have stuff that's in GitHub. But we haven't centralized it, and until we can centralize it, it's really difficult for us to start using it.

So ultimately, the goal of Ortelius is to really start building upon the data, centralizing the data, so that we can take the next step and start building more intelligent systems around DevOps. And that is ultimately the goal, because we've been doing this for a long time, guys. We have been writing scripts for so long, and we still haven't really solved some of our bigger problems. We have a lot of scripts that we need to retire over the course of the next few years. That's one of the reasons I'm really into CDEvents.
The CDEvents Project Summit, I think, is happening today during lunch. I highly recommend you go and listen to what they're doing, because CDEvents has the ability to disrupt how we do CI/CD altogether. And CDEvents — just like CloudEvents — has to have something to act upon, and the data is what it's gonna act upon. So I've been pushing CDEvents now for a couple of years. The project's going really well. They're starting to get their first big release ready for everybody to take a look at. It will be the way we solve the plugin problem, and the way we start automating without somebody doing it by hand. And we wanna get there — where we can just rely on the data. We don't have to go look at the data and then tell the system what to do; the system can look at the data and then do what it needs to do. If we can have self-driving cars, folks, we can do this in DevOps.

Now, we talked about a blast radius — something as simple as trying to figure out what your application is consuming and in what version. We're tracking that information, and we provide some interesting graphical maps to do so. And with Emporous, we're super excited to welcome the Red Hat team. They have some really deep ideas on how to expand the use of a repository to make it universal. And because Ortelius already supports files, DB objects, and containers, it was a perfect match between what the Red Hat team is working on in this universal object reference and what Ortelius is already doing with the data. Vincent Danen, who is VP of Red Hat Product Security, talks about how important these types of tools will be to assist security and operational teams in enforcing supply chain policies. And that's eventually what we're getting to.

Thank you so very much. Again, Ortelius.io is where you can go to learn more. And again, if you're an open source contributor, please come and talk to us, because we would love to have you on the team.
And you can always reach me at tracy-ragan-oms on LinkedIn; reach out to me and I'm always happy to schedule time and chat about what we need in the open source contributor world. I've already had some of you reach out to me, so much love. Thank you. Can I answer any questions?

I know this is a disruption in our model. Totally is. Yes. I'm gonna let our CTO answer that question. If you could, yes, speak up.

So we'll integrate there at the build level, and that's usually through our CLI. Let's say you do a Docker build and then you push that image over to your registry. After the image has been pushed, we're gonna go out and talk to Ortelius and tell Ortelius what we just did. And what that gives us is the information about the Git commit — there's a bunch of Git information we can gather at that level: what lines changed, what branch you're on, all that fun stuff. And then that ties to the artifact you created. Because right now, if you look at artifacts, you have no way to get backwards to the Git world unless you go through and embed it into the artifact in some way. So that is the first level where we do that integration: right around build time.

The second part of the integration is when you deploy. Where did this thing end up going? After you deploy it, we wanna record what went where. And with that, we're able to map together all your dependency relationships. So we know that this service was deployed out to this environment — let's say it went out to QA — and we know which version went out to QA. And now, because we know the version that's running in QA and the version that's running in production, we can calculate the drift between the two of them. So those are the two main points where we do the integration. Things like ServiceNow, Jira, any of those types of tools — those are gonna be inputs that we gather, usually at build time.
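The QA-versus-production drift calculation described here reduces to comparing the component versions recorded for two environments. A minimal sketch, with invented names:

```python
# Sketch of an environment drift calculation: compare the component
# versions recorded for two environments and report the differences.
# Names and data shapes are illustrative, not the real schema.

def drift(env_a, env_b):
    """Return components whose versions differ between two environments.

    env_a, env_b: dicts mapping component name -> deployed version
    (None when the component is absent from that environment).
    """
    return {
        comp: (env_a.get(comp), env_b.get(comp))
        for comp in set(env_a) | set(env_b)
        if env_a.get(comp) != env_b.get(comp)
    }

qa   = {"cart-service": "1.5.0", "login-service": "2.0.1"}
prod = {"cart-service": "1.4.2", "login-service": "2.0.1"}
drift(qa, prod)
# only cart-service differs: 1.5.0 in QA vs 1.4.2 in prod
```

The union over both key sets matters: it also catches a component that exists in one environment but was never deployed to the other, which is drift too.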
We'll gather information about, okay, this is the Jira ticket being worked on for this fix, and we'll associate that information at build time as part of that version of the artifact we're tracking. Does that make sense? No, we're gonna have a conversation over mics.

Yeah, okay, so within our org, we actually have, within our ServiceNow, our listing of all of our integrations and all of our consumers and dependencies, which are all managed internally anyhow. And the mapping between them — what's currently in production, including the drift and so on — is always a challenge. So as part of a release, what we've been looking at is how we actually identify all those, and through this tool it would be useful for security purposes, but also just for change management — identifying those and actually doing an API call, for instance, out of ServiceNow.

Yeah, so on the producers and consumers, when you're talking about RESTful APIs, PubSub, that type of thing: when we gather that information, we're gonna version it. So we know that this version of this microservice, for example, has published these endpoints, and this is what the endpoints look like. Now, those endpoints could be different from the ones running in production because they've added a new parameter, for example. So we wanna keep track of all that history as things go through time, so we can understand and calculate the drift based on that. So the information you have in your ServiceNow about your producers and consumers would be inputs that we would gather and version as part of the process.

And Steve will be doing a demo in the demo theater — I think it is tomorrow. Tomorrow after lunch. And we will be having an Ortelius Project Summit with some of the contributors there, and that's tomorrow at lunch. I think your demo is today. No, it's tomorrow. It's tomorrow. They're at the same time. Okay.
So look for — yeah, is that tomorrow at four? Sorry, I have to scroll back down to see what date it was on. Yeah. So does that answer your question? I think you're today at four, Steve. 3:05, 3:40. Here it comes. You find it? I thought I did. You got a question while she's scrolling? Four o'clock, demo theater, today. All right, four o'clock. Yeah. So he can show you more of that. And tomorrow at lunch is the Ortelius Project Summit — please come to that. And before you guys leave, I have Ortelius shirts and jackets. I only have a few left and they kind of run small, so I'm going to leave them out here. I'm wearing a large. I'll keep it to two.

What does CVE integration look like, say with Snyk or some of the other tools? What do you see people doing?

Right. So for our CVE integration, what we've done is we've separated the SBOM from the CVEs. What I mean by that is, at build time, we'll go through and take the SBOM that was generated and persist that as a version in our database. Now we know all the packages that you're consuming and their versions. And then we go over to OSV.dev and cross-reference to see if there are any CVEs outstanding for those. Now, because we separated the two, if a new CVE is found later, we'll be able to report on it. So it's not a point-in-time CVE report; it's more a view of what's happening as CVEs are found day to day. And right now there are some pull requests in progress around notifications for that, because that really becomes the problem — you don't want to have to go bring up Ortelius to see what the CVEs are for the day. So we need to be able to automate the notifications around that, based on groups. Awesome. Okay, one out of 10.
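Separating the stored SBOM from the CVE lookup, as described here, means the vulnerability check can be re-run against OSV.dev at any time. The query shape below follows OSV's public v1 API (`POST https://api.osv.dev/v1/query`); the parsing helper and the sample response are illustrative, and the HTTP call itself is left out so the sketch stays self-contained.

```python
# Hedged sketch of cross-referencing one SBOM entry against OSV.dev.
# build_osv_query follows the OSV v1 API's documented request body;
# vuln_ids and the sample response are invented for illustration.

def build_osv_query(name, version, ecosystem):
    """Build the JSON body for a POST to https://api.osv.dev/v1/query."""
    return {"version": version,
            "package": {"name": name, "ecosystem": ecosystem}}

def vuln_ids(osv_response):
    """Pull the advisory identifiers out of an OSV query response."""
    return [v["id"] for v in osv_response.get("vulns", [])]

query = build_osv_query("log4j-core", "2.14.1", "Maven")
# an actual response for this package includes the Log4Shell advisory:
sample_response = {"vulns": [{"id": "GHSA-jfh8-c2jp-5v3q"}]}
vuln_ids(sample_response)
```

Because the stored SBOM is the input, re-running this query tomorrow against the same package list is exactly how a clean build can "grow" CVEs a day later.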
And then my other question was: could you use the API to, say, write a Kubernetes admission controller that calls back to Ortelius and says, you know, this version has CVEs I don't want, and rejects the deployment?

That's gonna be more at, like, an OPA level. So we'll have the information, and OPA can come out and query us to say: I'm getting ready to deploy this version of this container out to Kubernetes — what does its security profile look like? So that's where that would fit in. Ortelius isn't going to be the policy agent, but it's gonna give inputs into it. It's the data driver, right?

Yeah, that's what — we do that with OPA today, but we talk to Snyk's API. So I was wondering if I could just rip that out and use Ortelius's API instead. Yeah, exactly.

Any others? Well, I have to run to another talk. So thank you again. Thank you everybody. I appreciate you guys being here.