All right, everyone, welcome to the very end. Thank you for sticking around with us. This talk is on leveraging SBOMs to automate packaging, transfer, and reporting of dependencies between secure environments.

Some quick introductions: my name is Ian Dunbar-Hall, and this is Jared Heck. We're members of the Lockheed Martin Software Factory. We are the organization within the company that owns the DevSecOps experience, so we do a lot of internal tooling, and we own the integration between container registries, package repositories, and other things.

So keep an open mind. I think the premise of this talk really revolves around one concept. We hear a lot about SBOMs over the week, and we're often talking about the SBOM as a build artifact: something you made through analysis with Trivy or Syft or something else, and it's used for compliance or tracking. I'm going to ask you to think about it slightly differently. Think of it as a packaging definition. We have this kind of agnostic packaging format that allows you to specify a PURL, and that PURL could be of many different types. So when we talk about how to package something and move it between different environments, it's really nice because it doesn't tie you to one specific package format; not, say, a requirements.txt file for Python or a Maven or Gradle file for something in the Java realm.

So what is our problem? Well, just a show of hands here: who here has worked in a classified or other secure environment? Okay, this is why we're giving this talk. So, number one, it sucks being a developer in a disconnected, strict environment, right? When I say a strict environment, I'm talking specifically about classified or some sort of air-gapped environment. You don't have access to the general internet. How do you pull in your package dependencies? How do you get them updated? Well, what do you do? You do something that's incredibly slow, right?
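To make the "SBOM as packaging definition" idea concrete: a PURL (package URL) names a component with one ecosystem-agnostic string. Here is a minimal sketch of parsing the common PURL shape in plain Python; a real pipeline would more likely use the packageurl-python library, and this ignores the spec's qualifiers and subpaths.

```python
# A PURL encodes ecosystem, name, and version in one string, e.g.
# pkg:pypi/requests@2.31.0 or pkg:docker/library/alpine@3.19.
# Minimal parser for the "pkg:type/namespace/name@version" shape only;
# the full spec also allows qualifiers (?arch=...) and subpaths.

def parse_purl(purl: str) -> dict:
    if not purl.startswith("pkg:"):
        raise ValueError(f"not a purl: {purl}")
    rest = purl[len("pkg:"):]
    ptype, _, remainder = rest.partition("/")
    name_part, _, version = remainder.partition("@")
    *namespace, name = name_part.split("/")
    return {
        "type": ptype,
        "namespace": "/".join(namespace) or None,
        "name": name,
        "version": version or None,
    }

print(parse_purl("pkg:pypi/requests@2.31.0"))
print(parse_purl("pkg:docker/library/alpine@3.19"))
```

Because every ecosystem collapses to the same string shape, one transfer tool can treat a Python wheel, a container image, and a Git repo as the same kind of line item.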
You submit a package to get approved, it goes to some queue, you generate a lot of paperwork around it, and by the time you're done six months later, an update has arrived and you're ready to go and start all over again. You're doing a lot of rinse and repeat and a lot of manual paperwork.

So the question becomes: is there a better way? Can you automate the packaging and the delivery of those build dependencies into a strict environment? Can this be done using existing tooling? How many of us write custom scripts all the time for a one-off project? Is there a way to do this where you have a common workflow for multiple teams through a single data flow? If you have 15 teams working in a strict environment, you don't want them all doing their own data flow, their own sneakernet, and their own disks into that environment. And often you're not just going to pull things from the general internet; you want to collect things from trusted sources within an intranet, or something like Iron Bank or some other trusted repository.

Well, that's where Hoppr comes in, so we're going to be talking a lot about Hoppr. It's an open source project we've been working on, and it's a framework for defining, validating, and transferring build dependencies between environments using software bills of materials. For us, that means we are heavy users of CycloneDX. You've heard a lot this week, probably, about the executive order. Well, if we're already going to produce SBOMs anyway, can they be the thing that we use to move stuff between environments, and maybe package it up to give to a customer? Hoppr provides this well-defined solution, and it's repeatable. We have SBOMs.
Ideally they're semantically released, and then you can reprocess things over and over again to pull stuff in.

So I'm going to talk a lot about Hoppr, but there are two other projects that are pretty key to this all working. One of them is Renovate. If you're not using Renovate or Dependabot, highly recommended: you have tooling out there that will look at your upstream dependencies, tell you when something's changed, and whether there's a patch, minor, or major update. We use Renovate pretty extensively in-house. Renovate is excellent for us because it goes out and looks at Python packages, Maven, Gradle, Go; it doesn't really matter. And it's easily configurable for internal sources too, so you can use Renovate to look at other projects in, for instance, a private GitLab instance.

We also use semantic-release for pretty much everything. So how are you versioning all your stuff? Well, semantic versioning is probably the best way to do it, and semantic-release is really nice for us; it ties in really well to our pipelines. At the very beginning it will parse through all your commit messages and look for conventional commits. You say "fix: whatever" for patches and "feat: whatever" for minor updates, and then those are used as part of the semantic version.

All right, well, let's pair those two together. You're probably not handwriting SBOMs. You're probably taking, say, a Python requirements.txt file and using maybe the CycloneDX tooling to generate an SBOM around it. Well, semantic-release can release that, and that SBOM can be consumed as your packaging format to bring your build dependencies across environments. So if I'm a team working in a classified environment and I need up-to-date versions of certain packages, well, I can use these two tools to figure out when things change and define my standard for that transfer.
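The conventional-commit rule they describe maps commit prefixes to version bumps. A minimal sketch of that mapping follows; semantic-release itself handles many more cases (scopes, BREAKING CHANGE footers in bodies, configurable rules), so this is the shape of the idea, not the tool's behavior.

```python
# Map conventional-commit messages to a semver bump, roughly as
# semantic-release does: "fix:" -> patch, "feat:" -> minor, and a
# breaking-change marker ("!" after the type, or "BREAKING CHANGE") -> major.

def next_version(version: str, commits: list[str]) -> str:
    major, minor, patch = map(int, version.split("."))
    bump = None
    for msg in commits:
        head = msg.split(":", 1)[0]
        if "BREAKING CHANGE" in msg or head.endswith("!"):
            bump = "major"
            break
        if head.startswith("feat"):
            bump = "minor"
        elif head.startswith("fix") and bump is None:
            bump = "patch"
    if bump == "major":
        return f"{major + 1}.0.0"
    if bump == "minor":
        return f"{major}.{minor + 1}.0"
    if bump == "patch":
        return f"{major}.{minor}.{patch + 1}"
    return version  # no releasable commits

print(next_version("1.4.2", ["fix: handle empty sbom"]))            # 1.4.3
print(next_version("1.4.2", ["fix: typo", "feat: new collector"]))  # 1.5.0
print(next_version("1.4.2", ["feat!: drop python 3.8"]))            # 2.0.0
```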
So what does this allow us to do? It's that multi-team, single-process idea. I'm going to show a couple of diagrams to walk through this concept a little bit, and then I'm going to turn it over to Jared, who's going to walk through a demo.

The big thing here is that if you have these SBOMs that different teams are producing, they're all being updated with Renovate and then semantically released, you can build on that and create a transitive inclusion process: you have, say, a master project that references the inputs from these other teams, which could reference other teams. You begin to have a generic process for bundling things together to be included in bigger deliverables.

So I'm going to jump over to this diagram. I have three diagrams; this first one is just talking about data flow. I don't think this is going to shock anyone who's worked in this realm. We typically break things into three areas. You have the general internet, where you might have Docker Hub, Iron Bank, Quay, and other sources of container images, and then you have multiple package repositories.
You might also have stuff from GitHub, especially if you're working with Go modules or Helm charts that are coming from GitLab or GitHub. We pull things into intranet instances for proxy caching, and that's where we do a lot of our scanning; we look for security and other types of vulnerability information there. And then we're probably going to transfer things across a diode using a GitLab pipeline.

So for us, we're talking about teams defining SBOMs. They're going to be pulling their stuff from internal references and using that SBOM to define the "what" that needs to be transferred. I think most of us probably still do a decent amount of sneakernetting, but you can use sneakernet, S3 buckets, CDSs, or, you know, data diodes. On the other side, you've got this package and you've got to do something with it. Well, what are you going to do? You want to transfer the container images to whatever your secure container registry is, the packages for Python or whatever to your package repository, and mirror and merge any Git repo changes.

Hoppr, as the thing that runs in a pipeline, has a couple of inputs and a couple of outputs. I talked about CycloneDX SBOMs as the thing that defines the "what" to transfer, but that alone doesn't really help us in our situation, right?
If I just give you an SBOM that has a whole bunch of PURLs, well, you need to know where to pull things from. You may not want to pull things from the general internet; you may want internal package repositories or container registries. That's where the manifest comes in, and that's our definition of the "where" to pull from. We say, for a specific package URL type, say PyPI, pull from this internal Nexus instance; if you have a Docker PURL type, pull from this ordered list of container registries. That allows you to have an enterprise registry first and maybe a team-specific registry second, and it will cascade as it goes out and finds things.

Next up, you can see here in the middle, we have our package collectors. We have those SBOMs and those manifests, and the instruction there is: go and find all the stuff. We've written a plugin architecture here where you have our base core plugins, which cover about eight PURL types, I think, and you can add other ones for anything custom. Once you bring it in, you can augment and filter that SBOM.

One of the big problems we have often seen is that you have an SBOM and it's lacking additional metadata that you may need for a security approval. You may want to have CVE scoring; that's what hoppr-cop does. We're looking at other things like Scorecard quality: all things you may want to give to an authorizing official. At the other end is a series of plugins. We have two out of the gate.
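As a rough illustration of that "where" definition, a manifest along these lines maps each PURL type to an ordered repository list. The field names below are a sketch, not the authoritative Hoppr schema; check the Hoppr documentation for the exact format.

```yaml
# Sketch of a Hoppr-style manifest: the SBOM says WHAT to collect,
# the repositories section says WHERE each purl type may be pulled from.
# Field names are illustrative; consult the project docs for the real schema.
kind: Manifest
metadata:
  name: team-a-delivery
  version: 1.0.0
sboms:
  - local: ./generated/sbom.cdx.json   # CycloneDX SBOM produced in CI
repositories:
  pypi:
    - url: https://nexus.internal.example/repository/pypi-proxy/
  docker:
    # ordered: enterprise registry first, team registry as fallback
    - url: https://registry.example.com
    - url: https://registry.team-a.example.com
```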
There are others that could be easily added. One of them produces the giant tar file of the things that need to be transferred. The other one targets a Nexus instance: you can have a Nexus instance spun up, transfer all the stuff you defined in SBOMs into that Nexus instance, snapshot it as a container image, and move that to the high side.

We have a feature flag where, as part of this, you can generate in-toto attestations for each stage and for all the files that have been collected and processed as part of it. We also snapshot the SBOM as it changes over this period of time, so you can see exactly at what stage and where it was modified.

And again, just to bring it home: our primary goal here is one data flow, one security team, and multiple teams each with their own ability to specify what they want. You have the control, as a team member, to say, "Hey, I need to update these four packages." You don't need to go talk to someone to get permission to do that.
You just add to your SBOM, and you say where to find it. The security team, which has dependencies on those (and Jared will show you how that works), can then bring that stuff in, do additional testing if need be, and then bring it over to the high side. Also, because we allow you to specify a set of places to go find things in a manifest, you can easily override it as a security team and say, "Well, I'm actually only going to allow you to pull things from these four package repositories and these container registries."

So what are the hurdles here, and what are we solving? Well, often you see incomplete SBOMs; they don't have all the information you need for a security approval, so we're looking at how we use augmenters and filters to add the additional metadata that someone would want to see for that approval. Because it is a plugin architecture, if you wanted to generate additional documentation, say in, I don't know, Excel, because no one here has ever done that, you possibly could. It's a plugin architecture: you have an SBOM, we provide a model for the parts, and then you can generate your own documentation. We do that with HTML reports. That allows you to work with a legacy approval process, if you have an AO who isn't there yet and who would accept, say, a policy of some type to bring things into a classified network.

Then there's the ability to restrict where things come from, and the ability to detect when things change. Renovate really does that for us.
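The augment-and-filter step can be pictured as a simple pass over SBOM components. The class-free sketch below is hypothetical: it is not Hoppr's actual plugin API, and the function names, `cve_db` stand-in, and component shape are made up for illustration.

```python
# Hypothetical augmenter/filter pass over CycloneDX-style components.
# NOT Hoppr's real plugin interface -- just the shape of the idea:
# augmenters attach extra metadata (e.g. CVE lists), filters drop
# components a security team won't accept.

def augment_with_cves(component: dict, cve_db: dict) -> dict:
    # attach vulnerability info keyed by purl (cve_db stands in for
    # whatever scanner output hoppr-cop-style tooling provides)
    component = dict(component)
    component["cves"] = cve_db.get(component["purl"], [])
    return component

def filter_denied(components: list[dict], denied_types: set[str]) -> list[dict]:
    # drop any component whose purl type is not allowed in the enclave
    return [c for c in components
            if c["purl"].split(":", 1)[1].split("/", 1)[0] not in denied_types]

sbom = [{"purl": "pkg:pypi/requests@2.31.0"},
        {"purl": "pkg:npm/left-pad@1.3.0"}]
cve_db = {"pkg:pypi/requests@2.31.0": ["CVE-2023-32681"]}

kept = filter_denied(sbom, denied_types={"npm"})
enriched = [augment_with_cves(c, cve_db) for c in kept]
print(enriched)
```

The point of the plugin split is that the security team's policy (the filter) and the enrichment (the augmenter) compose over the same SBOM model without either knowing about the other.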
We don't handle change detection in Hoppr itself, but Renovate is a great tool to detect things, and as changes get detected, the next semantic release gets generated using semantic-release, which then kicks off another pipeline. So as things change downstream in SBOMs, they all percolate up to the security team, which can then do that transfer automatically. And lastly, this allows us to have a one-way delivery into an environment, but a clear understanding of what's in that environment from an SBOM perspective, and you can do it all on the low side.

So we talked a lot about these advantages; I'm just going to highlight a couple of them. The biggest one being: hey, we have CycloneDX SBOMs for everything that's in that environment. That allows us to work with a lot of existing tooling out there. I'll name one of my favorites: Dependency-Track. If you want to track dependencies, or track issues that may arise in a secure environment, on the low side you have this inventory of everything as an SBOM, and that can then be tracked in Dependency-Track. If you want, with Hoppr you can augment it with additional metadata like CVEs, and then ideally, sometime down the road, the validation of attestations and everything else for all these components. All of this is done in a pipeline; there's no manual interaction.
All right, so I'm not going to show a demo; Jared's going to show the demo, but here are some of the key features and things to look for as we go. We're going to talk a little bit about the dependencies between teams and a security team, showing how there's a tree structure. We'll briefly show an attested CVE report from hoppr-cop, and then we'll show a little bit about attestation creation, and specifically a layout file for it, so you have a way of verifying all the in-toto links. And lastly, we'll show the bundle. This demo does work in Gitpod, so if you scan the QR code you can actually pull it up yourself and play with it, and that's how we're going to demo it. So you're welcome to play with this.

Cool. All right, so we're in Gitpod here. What we did with Gitpod is we put the commands right here at the top, so that's what I'm going to run, and then I'll walk through some of what these pieces are doing while the pipeline kicks off.

Okay, so what we're doing here, from top to bottom: we're generating in-toto keys, a project owner key and a functionary key. We're then generating the layout, and the layout is over here. We generate that layout based on the transfer instructions as well as the SBOM, so we say, "this is what we expect to see," and it gets generated based off that command. And then in-toto uses that to verify that what we thought we were going to do actually happened over the course of the operation. We then run the bundle, which is what it's doing right now.
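What in-toto does under the hood can be caricatured as: each step records hashes of what it produced ("link metadata"), and verification checks the final products against the layout's expectations. The sketch below is drastically simplified and unsigned; real in-toto signs the layout with the owner key, signs each link with a functionary key, and authorizes which functionary may run which step.

```python
# Toy version of the in-toto idea: a stage records the hashes of the
# files it produced; verification checks the final artifacts against
# those recorded hashes. Real in-toto adds keys, signatures, and
# per-step authorization on top of this.
import hashlib

def record_link(step: str, files: dict[str, bytes]) -> dict:
    return {
        "step": step,
        "products": {name: hashlib.sha256(data).hexdigest()
                     for name, data in files.items()},
    }

def verify(link: dict, files: dict[str, bytes]) -> bool:
    actual = {name: hashlib.sha256(data).hexdigest()
              for name, data in files.items()}
    return actual == link["products"]

bundle = {"bundle.tar.gz": b"fake tar contents"}
link = record_link("bundle", bundle)                  # low side: record
print(verify(link, bundle))                           # high side: True
print(verify(link, {"bundle.tar.gz": b"tampered"}))   # False
```

That is why the layout matters for a cross-domain transfer: the high side can check that the tar it received is byte-for-byte what the low-side pipeline attested to producing.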
It's collecting our dependencies; it's going to run hoppr-cop, which runs Gemnasium and Trivy and Grype and scans everything, and then we get our reports in the tar. You can see it kicking off right down there.

Ian talked a lot about the multi-team concept, and really that is defined in our manifest, so let's grab this. In our manifest, we've got some metadata up there, but really what we're looking at here is: grab a local SBOM; that's the stuff we want to augment here. And then you can see that our includes are referencing two different external teams, these guys here, that have manifests that also define that team's or that project's dependencies, the lists of whatever they want to bundle up. What we're able to do, then, is say: go grab from this team, we expect it in this format, we expect an SBOM with it, give us our references, pull that all together, and then let us basically zip it up and package it up so that we have an artifact to deliver.

Down here, with the repositories that you see, this is really our mechanism for enforcing the location that you're getting your dependencies from. At the end of the day, within an organization, there are some sources that you trust or have higher faith in, versus allowing just that free-form pulling, which you can get by adding a no-strict flag. But by default you need to define those repositories, so that we can be very clear about what our sources were when we gathered them.

Let's bring that back up. We'll go here real quick.
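The repository enforcement described above, an ordered list of trusted sources with the enterprise registry first and a team registry as fallback, is just a first-match cascade. A minimal sketch, with hypothetical function names and a `probe` callback standing in for a real registry API call:

```python
# First-match cascade over an ordered repository list, as described for
# the manifest's per-purl-type repositories. `probe` stands in for an
# actual HEAD request / registry API lookup.

def resolve(purl_type: str, name: str,
            repositories: dict[str, list[str]], probe) -> str:
    for repo in repositories.get(purl_type, []):
        if probe(repo, name):       # does this repo carry the package?
            return repo
    raise LookupError(f"{purl_type}:{name} not found in any allowed repository")

repos = {"docker": ["https://registry.example.com",
                    "https://registry.team-a.example.com"]}
# pretend only the team registry carries this image
have = {("https://registry.team-a.example.com", "tools/widget")}
probe = lambda repo, name: (repo, name) in have

print(resolve("docker", "tools/widget", repos, probe))
```

Note the strictness property: if a package exists only on some unlisted public source, resolution fails rather than silently reaching outside the allowed set, which is exactly what a security team wants by default.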
We're going to look at the transfer; transfer.yaml is really what in-toto used earlier to understand what was going to happen. In here we have stages: for example, we have a collect, a report, and a bundle. Those names are free-form, just as long as the structure stays the same. Inside, we define that in the collect we want to run our Git plugin, our raw plugin, and our Docker plugin; we're going to generate our report with hoppr-cop, and these are the different scanners that we want to enable; and then down here at the bottom it's pretty simple: where's my tar file going, what should it be named, and so forth.

Coming back up here, you can see that we went through, we ran, we collected, and we successfully passed, which is always a good thing for a demo. And with in-toto we then look at the layout, we do a validation and verification, and we ensure that what we thought was going to happen happened. You can see the links in here, right? We've got our generated link metadata, and we use that to check that the pieces are where they should be. So in-toto is pretty cool. I think that's it.

Yeah, you can see our SBOM come from hoppr-cop with our references. And here you can see, looking inside that tar, it grabbed all our different pieces from Git; we've got our SBOMs in there, we've got our consolidated and our delivered. There are two different SBOMs that get put into the tar file. Right now they're going to be identical, but in day-two operations, where we want to send a delta, that's where the consolidated looks at what the original was, and the delivered is the deltas: we only want to bring over that smaller subset instead of the entire package, especially when the deliverables can be pretty big.

All right, so looking ahead. Do we need to go back to the QR code? Give you a chance to get it. Looking forward, we're working on developing, for Hoppr itself...
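The consolidated-versus-delivered split they mention boils down to a set difference on component identities: ship only the PURLs that are new or changed since the last transfer. A sketch:

```python
# Day-two delta: the "consolidated" SBOM is everything known on the high
# side; the "delivered" SBOM is only what changed since the last transfer.
# Comparing by purl is enough for a sketch; real tooling would also
# compare component hashes.

def delivered_delta(previous: list[str], current: list[str]) -> list[str]:
    # purls present now that were not in the previous consolidated SBOM
    prev = set(previous)
    return [p for p in current if p not in prev]

previous = ["pkg:pypi/requests@2.30.0", "pkg:pypi/flask@2.3.2"]
current  = ["pkg:pypi/requests@2.31.0", "pkg:pypi/flask@2.3.2"]

print(delivered_delta(previous, current))
# only the updated requests package ships; unchanged flask does not
```

Because a new version means a new PURL string, a version bump naturally shows up in the delta without any special-casing.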
We're working on developing unified report generation, to allow multiple plugins to define what kinds of reports should be part of the generation and part of that bundle. We're looking at expanding component validation: doing signature validation with Rekor, further attestation validation with in-toto, as well as some additional component validation during the collect phases to verify the SHAs and ensure that the pieces we got are what we expected per the SBOM. We're looking at doing an additional plugin with the OpenSSF Scorecard; that data is really cool, and it would be nice to be able to package it in as a report to provide to an ISSO or security representative on the other side. We're currently in progress on unbundling and installation on disconnected networks. Those can be a little bit challenging, because those networks also define where pieces go; they don't always go to the more traditional locations, and they're often diversified across those systems. And then, going forward, we're looking at additional implementation of some of the work on verifiable SBOMs.

Some of our inspirations: Sigstore, pretty strongly; a lot of their tools are awesome and we really enjoy working with those pieces. CycloneDX: we're part of the industry working group with them, and they've been a great community to engage with, as has in-toto. And then other cool projects in this area: Zarf from Defense Unicorns, and Witness from TestifySec. Those are also great tools to go check out.

So the question is: how do you update SBOMs? How do you track those? Sure. So we're not checking whether SBOMs are stale in any way, but what we encourage, and what we're doing in-house, is using Renovate quite heavily, right?
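The collect-phase hash check on that roadmap, verifying that what was downloaded matches the hashes the SBOM records, is straightforward to sketch. CycloneDX components carry hashes as `{"alg": ..., "content": ...}` entries; the helper name here is illustrative.

```python
# Verify a collected artifact against the hash its SBOM component
# records -- the kind of collect-phase check described on the roadmap.
# CycloneDX components carry hashes as {"alg": "SHA-256", "content": hex}.
import hashlib

def matches_sbom_hash(data: bytes, component: dict) -> bool:
    expected = {h["content"] for h in component.get("hashes", [])
                if h["alg"] == "SHA-256"}
    return hashlib.sha256(data).hexdigest() in expected

artifact = b"wheel bytes"
component = {
    "purl": "pkg:pypi/example@1.0.0",
    "hashes": [{"alg": "SHA-256",
                "content": hashlib.sha256(artifact).hexdigest()}],
}
print(matches_sbom_hash(artifact, component))      # True
print(matches_sbom_hash(b"tampered", component))   # False
```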
So if there are dependency updates that should happen, we would assume that a new SBOM gets generated, and that's what would be used to trigger a new pipeline and push a new bundle across into our environment. We haven't really considered whether there's a need to say, "Okay, this SBOM is way too old, maybe we need to filter it out," but it's a great idea. I think that's a call to action here: this is a very extensible Python code base, it's all plugins, and we let you customize and build your own plugins very easily; that would be a great community addition. Yeah, we haven't experienced that problem yet, but it's definitely one to keep in mind.

I think the bigger problem we have right now is discoverability, with the OpenSSF Scorecard being path-based and assuming that you're on GitHub. Since PURL is our biggest identifier, how do you transition from a PURL to the Scorecard URLs? That's been tough to figure out. So yeah, that's a tough one.

I think we are, right now, assuming that you are defining your transitive dependencies, and we're not going to discover them for you; granted, again, Hoppr could facilitate the addition of your transitive dependencies. I'm not sure if it's in this example, but typically, if you're using Python or something else, we force those transitive dependencies into the SBOM so that they're tracked as part of the final deliverable. So we assume that you're going to figure out your transitive dependencies and include them in the SBOM, and most tooling does allow you to capture that; it allows you to snapshot that.