We're good. OK, we can get started. Welcome, everybody, and all of you in the virtual world as well. Sorry you couldn't be with us. I'm glad we have a virtual version of this, because it certainly democratizes the education. If you're in South Africa or India, it's hard to get yourself to Austin. So I am Tracey Ragan. We're going to talk about application-level SBOMs. How many of you are doing microservices yet? Oh, really? OK. We've got some people who have actually started playing with microservices. That's great. They're coming, if you haven't started yet. You may have been using containers, so you put your monolith in a container. Generally the next step is to start thinking about decomposing, and that's really what we're going to talk about today, those challenges, with an introduction to Ortelius. We've all seen this. I have to reiterate it. I'm sorry I didn't just put the link here, but you can download this, and the PDF should have the link in it. If you've not read the Linux Foundation's report on software bills of materials and cybersecurity readiness, and you're in this room, that should be homework for you. Jim Zemlin talks about how important it is in terms of cybersecurity to know your ingredients, and that's really what we're talking about: knowing the ingredients. I had a chat with Scott earlier about peanut butter, and I think it's a really good example of the supply chain and what's facing us. If somebody can do a recall on peanut butter, we certainly should be able to get that down to the software world, right? And Jim Zemlin says SBOMs will play an essential role in building more trust and transparency in how software is created, distributed, and consumed throughout the supply chain. And then, of course, we all know about this, but I will reiterate it.
I believe in September now, if you are writing software for the US government, you have to tell them what's in your peanut butter. So we are finally at a point where the US government says, you know, maybe if you're giving us code, we should know what's in it. What they're really asking for is to know the vulnerabilities. I am Tracey Ragan. I am the CEO of a little company called DeployHub. Hey, Lori. I'm also the community director of Ortelius. I sat on the board of the Eclipse Foundation many years ago, got super excited about open source, which I still love. From there, I was invited to be on the board of the Continuous Delivery Foundation, and now I sit on the board of the OpenSSF as what they call a member rep. So if you're members of the OpenSSF and you'd like to chat with me, I'm always open, I'm all ears in terms of what's needed. I'm also an ambassador of the DevOps Institute. I work a lot with Jayne Groll and her team, which I think is a great team for education around DevOps. And I have about 20 years of experience in what we have really called lifecycle management, and we're still talking about lifecycle management. We can call it anything we want, but it's really about lifecycle management. You can find me at tracy-regan-oms on LinkedIn, and if you want to chat, just message me and I'll send you my calendar. All right, so let's take a quick look at the difference between monoliths and microservices. Monoliths, we've been doing. Microservices are now something we're starting to do. In a monolith world, one of the most important things to understand in terms of the software supply chain, which is our discussion for this afternoon, is that when you have a monolith, you have somebody who's a buildmeister. I have a product called Meister that manages builds. So you have a buildmeister, and that buildmeister does a lot of the decision-making for you in terms of what goes into your build.
They write build scripts, they determine where they're pulling code from, they decide where the libraries are going to be pulled from, it's compiled, you create intermediate objects, and you link those intermediate objects into some sort of a binary. You have class files that go into a JAR file, you have OBJs that go into an executable or a DLL. So what we do there is all the supply chain management at the build, and this is where SBOMs are generally created. In a microservices world, you have a lot of builds, and builds aren't necessarily statically linked. In fact, you probably use Python to create a build image, a container image, and then that container image gets deployed. So now where are we building the SBOMs? We're building them at the container level, right, as opposed to the full application level. So when we were in monoliths, we would generate an SBOM at our build point, and now we are still generating SBOMs at our build point, but the build point is creating a smaller object. So we have SBOMs for every object that we create in a microservices world. Today we're going to talk about that challenge. Some of the challenges of doing this: pretty much when you lose the North Star of a release candidate, like an application version, some things start to stand out. We use this term toil; I just say struggle. Teams really struggle to understand what an application is in a microservices world, and what versions they're using. When you don't do a monolithic compile and create an application version, you have a whole new world at the application level. What that causes is you don't have a single software bill of materials report for the release, because you have many of them. You don't have a single list of your CVEs and your license obligations, because you have many of them, and you can't necessarily map them back to your known vulnerabilities, because you have many of them.
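Since the problem here is rolling many per-container SBOMs up into one application view, here is a minimal sketch of that aggregation. Everything below is invented for illustration: the package lists, the component names, and the flat dictionary shape; real SBOMs would be CycloneDX or SPDX documents with much richer fields.

```python
# Hypothetical sketch: roll per-component SBOMs up into one application-level
# list of packages, de-duplicating by (name, version). Data is invented.

def aggregate_sboms(component_sboms):
    """Merge per-component package lists into a single application-level list."""
    seen = set()
    merged = []
    for component, packages in component_sboms.items():
        for pkg in packages:
            key = (pkg["name"], pkg["version"])
            if key not in seen:
                seen.add(key)
                merged.append({**pkg, "introduced_by": component})
    return merged

component_sboms = {
    "cart-service": [{"name": "log4j-core", "version": "2.14.1"}],
    "shipping-service": [
        {"name": "log4j-core", "version": "2.14.1"},
        {"name": "jackson-databind", "version": "2.13.0"},
    ],
}
app_sbom = aggregate_sboms(component_sboms)
```

The point of the sketch is simply that the application-level report is derivable once the component-to-application relationships are known; the hard part, as the talk says, is collecting and maintaining those relationships.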
So while, as a feature team, you may be creating a JAR file for your feature, you also have feature teams that you're consuming, and you have to include those feature teams' SBOMs in your feature JAR file, right? So now we've got to figure out a way to collect the data and aggregate it up to the application level. This man is brilliant. I speak to him maybe once or twice a year. He writes this thing he calls Tyler's Muse, and he writes something called the developer-led landscape. I always read it because it's very insightful. He always predicts what's coming, and he's always spot on. In the early part of this last year, he talked about this idea of service governance. Right before lunch, there was a presentation that talked about dependencies and the need for service governance. In the last talk we just had, we learned about MITRE ATT&CK, which is again a way of creating service governance. So the idea of having a single, simple location to start collecting the DevOps intelligence data around individual microservices is what these service governance catalogs are all about. And all of them have a different role to play. Maybe we should start aggregating all of it into one place. But this kind of intelligence is what is needed, because software systems are so complex. We really are creating massive Death Stars out there when we're talking about using all the dependencies from the open source world and how they connect to each other and how they're consumed. We have to have a way to start managing those, and we can't do it in our heads like our buildmeister used to when they created an application build. So Ortelius is an open source project incubating under the Continuous Delivery Foundation. It was added as an incubating project at the end of 2020, so it's been there for a little more than a year.
The Linux Foundation has been super supportive of this idea of a service governance catalog, and we're super happy to be part of the family. So what is Ortelius? Ortelius is a microservice catalog. We've all heard of service catalogs, and a lot of organizations use them. This is a microservice catalog, and that catalog builds some basic relationships and information. First of all, it tracks everything as a component. Let's say you have a database update. That's a component. You might have a microservice. That's a component. You could have a Salesforce Apex file. That's a component. Any object can be a component. Now, in our old world, we would take those components, package them up, and create them as one big release. In our new world, we're going to be moving these components across the pipeline every day, all the time. So what Ortelius does is it tracks components. It tracks the component-to-application relationship, which allows it to understand who's consuming each component and give you the blast radius. So if you're going to do a deployment, it can predictively say, you know, if you deploy that, you're going to hit five different application teams. You may want to let those application teams know they have a change coming across. It tracks both the logical application and the component inventory by having a feedback loop from the deployment, whether that's a Helm chart or something like Spinnaker that you might be using to do the deployment. And it tracks at the point in time where it is woken up, and we'll go through how it's woken up. At the point in time it's woken up, it's going to grab any of your service and application CVEs, it's going to get your SBOM for each component, and it's going to aggregate that information up. We have a really awesome team, and I would be remiss not to celebrate them. This is our board. We're super happy to have Brian Dawson. He's from Ripple.
He recently joined our board, along with folks from Red Hat, as well as Siddharth Parikh from NatWest Group, which is a very large bank in Europe. They've been giving us a lot of information about their struggles, and we've been incorporating that into the product. And these are some of our unsung heroes in terms of development: Phil Gibbs, who works for Red Flag Alert and created what we have as our domain structure, all the way down to Adam Gardner, who's been working with Keptn and building some integrations into that. We also enjoy Brad McCoy, who was our CDF favorite contributor and was announced two weeks ago here, as well as Saim Safdar, who does all of our outreach. So if you ever hear from somebody by the name of Saim Safdar reaching out to you to do a podcast, please say yes. So who are the users of this catalog? Who is interested in really taming this microservice environment? There are about five categories, and we're going to go through some of them. First, there are the DevOps teams. How many of you would consider yourself DevOps in this room? Yeah, that's a good number of you. What DevOps teams probably most importantly need to understand, if you have a microservice architecture, is: where's your drift? In other words, I have a microservice, I have 70 clusters, and that's not an unreasonable number of clusters, or even 20 clusters with 70 namespaces. What version of a service is running in each, and is it the same version? That's what we call drift. We also worry about sprawl, meaning how many versions of a login routine have been written, and where is each being used? This is a common problem whereby each team will write their own service, and you're not actually reusing microservices; you're creating your own, and you're still doing monolithic development in a microservice architecture. Microservice developers need the ability to announce the usage or the availability of a microservice to minimize microservice sprawl.
I often equate it to two things. One is a junk drawer. Everybody has a junk drawer in their kitchen. I have one; I know you guys have one. Oftentimes what I see organizations doing is writing microservices and throwing them out to the cluster, but not necessarily organizing them in a way that they can be easily found and shared. The second thing I often equate it to is the ability to organize Legos. If you're a parent and you have lots of Legos around, I see a lot of moms who are really tidy; they like to organize the Legos based on shapes and colors, and those shapes and colors are easy to find when you're trying to put something together. Basically, what Ortelius is doing is organizing the Lego pieces by shapes and colors, and it versions everything that you create with those Legos. Application teams need a way to understand what their application is consuming. They know at the high level what they're consuming, so they can tell us in the beginning what the high-level pieces are. But at that point they sometimes don't know what the transitive dependencies below their top structure are. We're providing that data for them so that they have a way to see it. Security teams, how many of you would consider yourselves from security? It's interesting, most of you are from DevOps and security, and I would suspect that. We need to get the application teams and the microservice developers in the room as well, because they're the ones that really need to adopt and push this kind of technology. But security teams use the dashboard to centralize the vulnerability information and the SBOM information based on each microservice, rolled up at the application level.
And then support teams. If you're on the support team, I know we don't often bring the support team into the conversation, but in today's world, in a microservice environment, if you are a customer you don't call in and say your cart-service microservice isn't working; you say, I can't order my product off of your clothing store. So if you're a support person, you've got to figure out what happened, and in a microservices world that can be more challenging. Managing SLOs is part of the reason for a catalog, because it gives you ownership. So let me just walk through how Ortelius works, and please ask questions as we go along. There are things that we call products and producers. Components are our products, and they are created by producers. Producers register their microservices. Now, how many of you have heard of Backstage? A couple of you. Backstage is another kind of catalog where you register your microservice, and it automatically generates your DevOps pipeline for you: you get to select which template it's going to use, and it uses that template to generate the pipeline. It makes it super easy for you to onboard a new microservice if you're a developer. We've been talking about integrating with that, because then we'd already have all that information about the microservice. So it's a catalog that collects who wrote a microservice and what it does. We do that in a similar way, and we assign it to a domain so that we know what bin it goes in. Is it a red triangle or a yellow triangle? Once it's been registered, application teams can create a base version of their application package. Now, an application team has been creating application packages for quite some time, and when I say application, I mean, if you're a bank, it's the teller application or the fraud application or mortgage or auto loans. They create that by assigning what microservices they consume at the highest level that they know about. Now, that's all fine with the world.
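As a rough illustration of the registration step just described, here is a sketch of the kind of record a catalog might capture: owner contact details, the domain bin a component belongs to, and a starting base version. The field names and schema are invented for illustration, not Ortelius's actual data model.

```python
# Hypothetical registration record for a component: who owns it, which domain
# bin it lives in, how it's deployed, and a base version for later bumps.
# All fields and values are invented, not Ortelius's actual schema.

catalog = {}

def register_component(name, domain, owner, email, deploy_method):
    """File a component under its domain with owner and deployment metadata."""
    record = {
        "domain": domain,
        "owner": owner,
        "email": email,
        "deploy_method": deploy_method,
        "version": "1.0",  # base version; new builds will create new versions
    }
    catalog[f"{domain}.{name}"] = record
    return record

rec = register_component(
    "cart-service", "hipster-store.store-services",
    "Tracey Ragan", "tracy@deployhub.com", "helm",
)
```

Keying the catalog by the domain-qualified name is what later lets two services with the same short name coexist without confusion.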
We have a base version of our component. We have a base version of our application, just like you would check code in. Now what happens is a new version of the microservice comes along. Ortelius is woken up at that point, woken up by the pipeline. If it's woken up at the earliest point in the pipeline, which would be at the container image build, we can pull in as much information as possible, and we're really just hoarding data, because we're hoarding the DevOps intelligence we need to do that kind of predictive analysis of what a blast radius is going to look like, for example. So Ortelius gets woken up at that point in time, we grab the SHA, we grab the Swagger info, we grab the SBOM (we are consumers of SBOMs, we do not generate them, we integrate with things like CycloneDX), and we create a new version number of that component. So now we're starting to track the changes in a component. And if we track the changes in a component, we can also track what's happening at the application level. So now we're tracking new versions of each application as soon as an underlying component has been updated. This is your new continuous integration build. Because we're not building a monolith, we're not building an application release anymore; we're building individual components that are built into container images. We're aggregating that up, and we're showing you what version your application release is at, based on the changes that may be happening. Now, you as the clothing store developer may not know that the shipping service was ever updated. You might get a call, but you didn't make a change. We're showing that information; we're showing what changed. And when we do that, we can start generating some reports. We can show the blast radius, we can show the application components and what each is consuming, and we can aggregate the application-level SBOM, which is really the discussion here. Yes, thank you, thank you.
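The continuous-versioning idea above, where an update to an underlying component produces a new logical application version without any monolithic build, can be sketched like this. The version format and function names are invented for illustration.

```python
# Hypothetical sketch: when an underlying component changes, bump the logical
# application version that consumes it. Version scheme is illustrative only.

def bump(app_version):
    """Increment the minor part of a 'major.minor' style version string."""
    major, minor = app_version.split(".", 1)
    return f"{major}.{int(minor) + 1}"

def on_component_update(app, apps):
    """Record a new application version because an underlying component changed."""
    apps[app] = bump(apps[app])
    return apps[app]

# The clothing store team made no change themselves, but a consumed
# component was rebuilt, so their logical release candidate moves forward.
apps = {"clothing-store": "1.291"}
new_version = on_component_update("clothing-store", apps)
```

This is the "new continuous integration build" in miniature: the trigger is a component event, not an application compile.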
The question is, can you clarify what the blast radius is? We've used this term many times to describe many things, so I'm glad you've asked me to clarify it. When we talk about blast radius, we talk about the impact a single microservice has on all applications consuming it. So in this case right here, and I probably have a pointer, we have the shipping service, and it impacts the candy store, the toy store, and the clothing store, along with the versions, because we know what versions they are currently dependent upon. So we know that this shipping service is going to increment the candy store, the toy store, and the clothing store another level up. It's creating a new release candidate in the same way it would if you had a new library for the shipping service: you recompiled and relinked your application because you had a Jenkins build running, it did that for you, and you have a new version of the application that you release out. You don't do that anymore with microservices; instead, the shipping service got updated, and now we're going to produce that blast radius that says, hey, you know what, maybe those three applications should be retested, because they're going to get that change. So let's just walk through how it works. We talked about the Lego bins. Here are our Lego bins. We have what we call a domain-driven design, and when you start doing microservice development, you really should be looking at your domains and understanding what you need to work on, what your problem spaces are, or what I like to call solution spaces, trying to be positive, right? In this case, it's a little blurry up here, so I'll just walk through some of it. We have the Hipster Store, which is the application that we're creating, and it has subdomains: Dev, Test, and Prod. That's what maps to your pipeline and allows all the automation.
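The blast-radius lookup just clarified can be sketched in a few lines, assuming the catalog stores a component-to-application mapping. The store names mirror the talk's example; the data structure itself is invented.

```python
# Hypothetical sketch of a blast-radius query: given the component-to-application
# relationships the catalog tracks, list every application a component change
# hits. The mapping below is invented, mirroring the talk's store example.

consumers = {
    "shipping-service": ["candy-store", "toy-store", "clothing-store"],
    "cart-service": ["clothing-store"],
}

def blast_radius(component):
    """Return the applications impacted by a change to the given component."""
    return sorted(consumers.get(component, []))

impacted = blast_radius("shipping-service")
```

Before a deployment, this is the list of teams you might notify or the applications you might retest.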
We also have a purchase processing catalog, and we have a store services catalog, and under those we have subdomains. In purchase processing, we have currency, checkout, cart, payment, and shipping, but we also have a cart service in the store services catalog over here. So now we have a cart service in both, and this happens on a fairly regular basis. What the domain structure allows you to do is have a fully qualified path name based on the domain, so everybody knows whether they're using the cart service from purchase processing or the cart service from store services. These are the kinds of real-world problems that people run into when they start writing microservices: organization of those services. Once we have the domains defined, your producers register their components. It could be a database component, it could be a microservice, it could be an Apex file; we even have people using mainframe components as part of this. But they create a base version of that. This is what I was saying: we could actually connect into Backstage and pull some of this information out, which I think is a really good strategy, because Backstage is a sister project at the Linux Foundation, and we should have those common integrations between these open source tools. When somebody registers a microservice, we're going to pull some important information out. You can see up here, for the demo, of course, I registered it, Tracey Ragan, and I gave my tracy at deployhub.com email, so if this service breaks, that's where they should contact me, or they can catch me through my phone number or PagerDuty, or, you know, if I had Discord or Slack, you could put that up there as well. It also has something important about how it's being deployed. It's being deployed using a Helm chart. So we know now that the Helm chart's being used.
We can also list our key-value pairs there, and if I could scroll down, you could see what key-value pairs you should be using. We'll think about adding secrets to this, and there are other things that you could add. The Git commit, though, is kind of important. We track the Git commit and we track the SHA, because that's how we know what version this is running at. So, as I was saying, there are basically three types of components that solve all the problems of the world right now. There may be another type that comes along. Application file covers everything that you can imagine. A container has specific information that we have to collect, and a database object, or component, has different information. For example, for a database object, you need to have a rollback script in case we need to roll that particular change back. So you need a rollback SQL script. Once you have your components registered, your application teams can start finding them and using them. Application teams create a package like they're used to. This might look like something from Harness or something you may have seen before, but it allows them to identify how their application is built at the highest level. This would really be your new application build. You define your JAR file, your particular feature that your team is working on to build the clothing store, and anything underneath it. For example, I'm pointing out the cart service here. So the microservice developer creates their build. At the point in time that build is done, they're going to pull things out of version control or use artifact repos, like JFrog. They're going to pull their SBOM and CVE scanning in. That's going to go into our global catalog. And at the back end, we then begin to track the application-level SBOM, versions, difference reports, and CVEs. So you have data at the component level, and you have data at the application level.
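The difference reports just mentioned can be sketched as a comparison of the component versions behind two logical application versions. The data and the report shape below are invented for illustration.

```python
# Hypothetical difference report: compare the component versions behind two
# logical application versions and show what changed. All data is invented.

def diff_report(old, new):
    """Return components added, removed, or upgraded between two app versions."""
    changes = {"added": [], "removed": [], "upgraded": []}
    for comp, ver in new.items():
        if comp not in old:
            changes["added"].append(comp)
        elif old[comp] != ver:
            changes["upgraded"].append((comp, old[comp], ver))
    changes["removed"] = [c for c in old if c not in new]
    return changes

# Two snapshots of the hipster store's component inventory.
hipster_1_290 = {"cart-service": "149", "shipping-service": "12"}
hipster_1_291 = {"cart-service": "150", "shipping-service": "12"}
report = diff_report(hipster_1_290, hipster_1_291)
```

This is the view that tells the clothing store developer "you didn't change anything, but the cart service underneath you did."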
If you look at the component level, you can see where the cart service has been deployed. You can see that the cart service was deployed to the hipster store cluster at version 149, and then later on a new version of it was deployed to the hipster store cluster at version 150. Those are our internal versions; you can format the version any way you'd like, and that's a common question we get asked. Secondly, you can see over here that the hipster store application has been installed to the hipster store twice. Now, if we drill deeper down, and I didn't drill deeper down in that, you would be able to go into the hipster store and see its difference report. But we're going to just see its difference report in this format, because apparently I decided this would be easier. It's going to show you, at the application level, its components and what it's using, and you can show it over time. At this level, you can see the difference report for the 1.291 version of the hipster store. A cart service created it, and it was deployed to the hipster store cluster. And again, the blast radius. Now, it's also going to gather the SBOM and the vulnerability information. So first you get SBOMs and vulnerability data at the component level, and this is important to remember: when you go into microservices, you're creating SBOMs at the lowest level. The SBOM and the CVE data is not at the application level, not at the teller system or the fraud level; it's at the login level. It's at the very lowest service level. Once we have that, we already know all of the relationships, so we can aggregate that SBOM information all the way up to every application that's consuming it. Now the application team has the information they need to report on what their SBOM looks like and, as a result, what their CVEs look like. This is simply aggregating up. Yes, sir? Your CVE? Yes. So the question is, how do you get the CVE data?
You have the SBOM data that gets generated. Right now, in this case, I don't know which CVE database they're pulling from, but at the point in time that the SBOM is created, we're going to go look for the CVEs and populate that for you. So as soon as we have the SBOM information, we can then go get the CVE information. We update it every time a microservice has been updated, and every time there's a change, we create a new version. So now you have differences over time that you can start tracking. And right now we're actually looking at taking this information and adding it to a blockchain. So you would have an immutable SBOM with CVEs over time that nobody could actually manipulate. We would track the fixes as soon as a new image was created, because that's when we'd go grab the new information. But we wouldn't be able to take the historical CVE and say, hey, you've got to go fix this particular piece. However, what we can do is provide a way to say, show me where Log4j is running, because we are tracking who's using it. So we can say, show me where Log4j is running, in every cluster, in every namespace, in every application. You have that information so that you can react, so you can do what you need to do. I hate using the word react, but that's kind of what we do. We react to problems. You can react to it so you can fix it. I think it's an interesting thought, though, to be able to automatically go out and report, if we were polling the CVEs on a regular basis, to say, now we've found a vulnerability, and this is where you're going to find your problems, because we have that data. So early adopters are telling us we're saving them about 50% of their time in terms of redundant coding. I think this problem of redundant coding will continue for a while. But if any of you were at the OpenSSF day, I was on a panel, and one of the areas I think we should be looking at is autonomous coding.
We're going to have bots that code for us very, very soon, and that's probably going to solve a lot of problems in terms of sprawl. But that doesn't mean it's going to happen right away. It just means that we're potentially going to be getting a lot more code pushed through this assembly line, and we need to do an even better job of managing what that code is doing. Now, just from a tracking-SBOMs level, this saves about one to two hours per deployment. In a monolithic world, if you're doing a deployment once a week, that's not a whole lot of time. But if you're in a microservices world and you're trying to do deployments all day long, that can add up when you try to track all those SBOMs and aggregate the information. And to be honest, when we talk about software supply chain security, we have this data. We have had SBOMs for 30-plus years. We just have never done anything with them. We have not made them an important piece of collateral that we need to distribute across teams, and we haven't been talking about what the CVEs are within them; we've only actually had that capability in the last five years. So we have some work to do with the data that we generate in our DevOps pipeline. Aggregating that information, pulling it up, and taking that DevOps intelligence and using it is really what we're facing from a DevOps and security perspective. And visibility is really what we're talking about. Not observability, because observability is something that happens after you do a deployment: you're looking at what happened at the deployment, and you're using that to determine what your changes were. We're talking about providing visibility before you do a deployment. This data is available to you. You know your blast radius before you ever deploy. You don't wait and say, this is what it impacted. And another conversation I love having in the DevOps space is that we are at a point of flux when it comes to our pipelines.
We have to start thinking about having agile pipelines. We've got agile developers, and we talk about our pipelines being able to push code across. But when I go into a bank that's got 52,000 developers, they'll have 52,000 pipelines. If you're interested in that, I would suggest learning more about CDEvents; the CD Foundation is really going to start pushing the CDEvents conversation. It's kind of like CloudEvents with a listener. You wouldn't have to define an imperative pipeline for every single release, for every single microservice out there. Instead, you would have a listener that would already have defined what it needs to go through, and all of your changes to your pipeline would happen at the listener level. So, for example, if you've discovered you're not generating SBOMs as part of your standard pipeline, you're going to have to go visit every single workflow file, every Jenkins workflow file, for example, to update it. That's not how we are agile. That's very static, and we have to move beyond that. So what we're really working on is trying to facilitate the evolution of the pipeline in the easiest way possible, so that you can still have those Jenkins pipelines out there and we're going to add some benefit to them. But to be honest, you might even be hearing about Jenkins using CDEvents so that you're not having to write plugins anymore. So we have some work to do on the pipeline to really be able to support a more secure and agile supply chain. And that is it for me. Please, if you're interested, go check out Ortelius. Let us know what you think. We are super driven by the community. We actually spent a lot of time with the Jenkins community and asked, how did you build your community? What can we do to make things right and make things easy for people? One of their biggest pieces of advice was just to be as transparent as possible. And we have recruited folks from all over the world.
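The listener idea is easy to sketch: instead of editing every pipeline, you register one handler for a build-finished event and add SBOM collection in that single place. The event name and payload below are invented placeholders, not the actual CDEvents vocabulary.

```python
# Hypothetical sketch of event-driven pipelines: one registered listener
# handles every pipeline's "build finished" event, so adding SBOM collection
# is a one-place change. Event names and payload fields are invented.

handlers = {}

def on(event_type):
    """Decorator that registers a handler for an event type."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def emit(event_type, payload):
    """Deliver an event to every registered handler, collecting results."""
    return [fn(payload) for fn in handlers.get(event_type, [])]

@on("build.finished")
def collect_sbom(payload):
    # The one place to add SBOM collection for all pipelines that emit this event.
    return f"collected SBOM for {payload['component']}"

results = emit("build.finished", {"component": "cart-service"})
```

Contrast this with editing 52,000 Jenkinsfiles: the policy change lives at the listener, not in each pipeline definition.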
We have folks in South Africa, Brazil, Chile, Pakistan, India, and the UK, of course. We're trying to get some folks from Turkey; it would be fun to have some folks from Turkey, they're brilliant programmers. So please join us if you're interested in solving this problem. You can find us at ortelius.io. Here's our information for our Twitter and LinkedIn accounts. And again, there's my information if you'd like to reach out and chat any more about this topic, or about how you might get involved in solving this problem. On that note, I will thank you all for coming to my session. I so much appreciate it. I know there are a lot of sessions going on right now, so thank you for choosing mine. And I sound like Southwest Airlines. Are there any questions, other than the two awesome ones I just had? Yes, sir. So the question is, in terms of your difference reports, do you see more value in discovering that a package left or that a package was added? So which direction? That's a good question. I've really never thought about it, but I think the most important part is knowing that the package showed up at some point, right? Somewhere along the line it showed up. And with more data, for example, if we were able to start that blockchain, we would have a very interesting way to take a snapshot and see the changes over time, so you know at which point in time it showed up. I think the more important thing is understanding when that package showed up. Because when you talk about vulnerabilities and security, what you really want to understand is, at what point in time did that vulnerability start impacting you? If you're a bank, you need to understand that to say, we know we have a gap of time, this three-week period, that we were exposed. We should be looking at what happened during that three-week period. So it's important to know it in the beginning, but I think it's really important to know if it showed up and you didn't expect it.
It just showed up: you just did a release of your service, and now you have a new package there, and you have no idea how it got there, because it was being pulled in by a transitive dependency that you had in your service. So I think that knowing when it showed up, and having the timeline, is the most important part. It is a big challenge for us, it really is. So the question is, if you have an application with 20,000 containers, how do you track that visually? We're working on it. We have graphs; we've been using different graphs to display that data. We'd like to grow up to have graphs as clean as Datadog's, because I think they do an amazing job of displaying that information, but we do it based on visual graphs. Now, believe it or not, the more we look at it, the more we think that maybe it's not a visual thing, maybe it's a report. I hate to go backwards, but in some cases, data in a table format is easier to understand than trying to track it in a visual way. So the question is, if you have a vulnerability like Log4j in a container and you have it in a lot of places, how do you start tracking that? That's why the versioning is important, right? Because you could have a version of Log4j running that doesn't have any exposure. And do you want to update it? No, you don't necessarily need to update it. So this is why we've built the versioning into the process; this was the hard part of building it. Now, keep in mind, you're tracking a logical application version, and within that, you are tracking the component versions, and within that, you're tracking the package versions. When we have the package versions exposed like that, that's when we can start giving you queries and say, show me this range for Log4j and everywhere it's been installed. Now, if we are not connected completely to the CD pipeline, I have to say, we don't have the information about where it's been installed.
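The "show me Log4j everywhere it's installed" query just described is essentially a join over the package, component, and deployment inventory that the catalog accumulates. A sketch, with all data invented for illustration:

```python
# Hypothetical sketch: because the catalog knows which package versions each
# component carries and where each component is deployed, "where is log4j
# running?" becomes a simple query over that inventory. All data is invented.

inventory = [
    {"cluster": "prod-east", "namespace": "checkout",
     "component": "cart-service", "packages": {"log4j-core": "2.14.1"}},
    {"cluster": "prod-west", "namespace": "shipping",
     "component": "shipping-service", "packages": {"log4j-core": "2.17.0"}},
    {"cluster": "prod-east", "namespace": "currency",
     "component": "currency-service", "packages": {"jackson-databind": "2.13.0"}},
]

def find_package(name, vulnerable_versions):
    """List every cluster/namespace running a vulnerable version of a package."""
    hits = []
    for entry in inventory:
        version = entry["packages"].get(name)
        if version in vulnerable_versions:
            hits.append((entry["cluster"], entry["namespace"],
                         entry["component"], version))
    return hits

hits = find_package("log4j-core", {"2.14.1"})
```

Note that the 2.17.0 deployment is not flagged, which is the point made in the talk: a version of Log4j can be running with no exposure, so the query has to be version-aware.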
But if you're connecting us to those, remember, I showed you the domain that had Dev, Test, and Prod; we're connected into that, and when we know something's been deployed, we're going to complete that feedback loop, and then we'll be able to ask that same question based on a cluster. So we could report on any namespace in that cluster that is consuming that particular low-level version of Log4j in a range. That's why that DevOps data is so important, data that we've for the most part been leaving in SBOMs, or a text file, sitting on a file system somewhere where the build ran. We're trying to bring that information into a database and be able to do logical things with it. And on that note, I have to say thank you again for the questions. We have run out of time. I hope you've been enjoying the show, and thank you, everybody in virtual land, for visiting. I stole your peanut butter reference, Scott.