Hi Michael. Hello. A few minutes early. How's it going? Good, thank you. I've had more contributions to the document, which is good. Cool, we'll go through that. There's been some really good discussion at a number of other meetings, and some other references we can add in here too. Cool. So, you found this on the CNCF Slack? And I— am I audible? Yes, we can hear you. Oh, sorry, I wasn't sure. I found this on the CNCF Slack; I'm a member of the in-toto team at NYU, so I thought this would be relevant to what's happening here. Awesome, welcome. I think I found the right place. Definitely. Andres, hey, how's it going? Good, thank you. Almost done with your day, aren't you, Jonathan? Ready for the weekend? Yeah, just another six hours and I'll be good to go. Anything exciting going on this weekend? I'm actually taking Presidents' Day off. Everyone else in the US is off, so I decided to take a day off too. There you go. How about you? I've got my boat out of the water, so I'm going to give it some love. I have to put a new radio in it, the other one got water damage, and then do some polishing. Nice. Have a couple of cold beverages and just enjoy the downtime. I was jealous just for the fact you're going outside. We're still locked down in the UK, to be fair; we can pretty much go nowhere, so you've already got us beat. Yeah, I have about an acre and a half, so that's not going to stop me from going outside. You say rest and downtime, but I'm sure as you're sipping that beer, every other sip supply chain security will come back. I'm going to try not to. Trying is the keyword. I'm going to listen to a podcast, to tell you the truth. So, you're absolutely right: I updated the GitHub issue, because it needed at least a summary of the issue. I pulled most of your links in there, Jonathan, to make sure the calendar invite is discoverable and that we have the links to the notes. That was on me; I don't think anyone else has access to add it. Well, thanks. No problem. I think we can kick off, but frankly we can keep it relatively free-flowing. What I was going to do is go back through the document and see if there's anything specific to add or anything people wanted to bring up. Before that, just to raise to people's attention: there was an interesting call with the OpenSSF, the day before yesterday or yesterday, where they were discussing reproducible builds, which I think would be interesting at least to add as references and to contemplate, certainly around the higher-security end of this. They have a chap called David Wheeler who has done a lot of interesting work in the past, including a PhD thesis. And this is at the Linux Foundation? Correct. That was in what context? The OpenSSF, I believe so, yeah. The conversation was mostly a general update from the chap that runs the Reproducible Builds organization. It was a really interesting conversation; I think they have a video recording of it available you can check out, and also reproducible-builds.org. I know in-toto has done some work there with some of the rebuilders and testers. They showed something yesterday on the community call, pretty interesting.
Let me find the reference; I saw them down there. Do you want to talk to that? Yeah, actually, I was presenting the rebuilder work at the community meeting yesterday. Oh, so, yeah. So yes, we are working with the reproducible builds community. We're working on using in-toto attestations for the results of rebuilders, and I administer an Arch Linux rebuilder at NYU. And I think the URL was reproducible-builds.org; I can drop that in the chat for others who are interested in the project. Yep, just added it to the references too. The reason for bringing that up is that I don't believe we'd necessarily discussed it as part of our conversation so far, and I wanted to see if there's interest and support for adding it as best practice, potentially for some of the higher-security cases. You know, we had the two personas, the low and the high. Yeah, I think when you get down to it, when you threat model everything out, really the only way you can make some reasonable assumptions about the security of your software is if you rebuild it on N nodes and those hashes match, right? The attacker would have to compromise N nodes for that to fail. Yeah. And on my end, this is from a few years ago, but even not-fully-reproducible, mostly reproducible builds have helped us out a lot in the past, just being able to get some reasonable guarantees. And I'm wondering about those rebuilders. I've been doing a lot of research into DBOM the past couple of days, and they have this concept of a DBOM node. I'm wondering, if you had a rebuilder, could you have it publish to a public DBOM node? Then you could almost run it as a community effort; you could open source some of the security for open source software, to make sure that the builds are actually happening the way we want them to. So the DBOM node would also rebuild the software and publish the attestations via the DBOM channel? Well, wherever, yeah. It wouldn't be the DBOM node software that rebuilds it, right; it would be something like the in-toto rebuilder they've been working on, and then it publishes that to the DBOM node, which distributes it via a public channel. Yeah, okay. How far along is that in-toto rebuilder? I'm sorry, I'm a little confused by that question. Do you mean a specific instance of a rebuilder, or the various ways of tying the attestations into the rebuilders, or, as Cole said, the rebuilder that is being worked on? Right, all of that. Is it something readily available, so that if we're documenting best practices we can tell people, hey, here's a complete solution you can reference if you're looking for reproducible builds, or is it something that will eventually become available? I don't track the project too closely. Okay, so we're working closely with the Debian side of reproducible builds, and we also have an apt transport to perform the verification. We're also working with the Arch Linux rebuilders, which use a project called rebuilderd. And I think in the last couple of weeks there's been some interest from a core member of the Qubes project; they've been working on using in-toto attestations within what they call rpmreproduce, and so on.
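To make the threshold idea above concrete, here is a minimal consumer-side sketch, assuming each independent rebuilder publishes a small JSON attestation containing the SHA-256 digest it observed; the file layout, field names, and threshold value are invented for illustration and are not part of any existing rebuilder tooling.

    import hashlib
    import json
    import sys
    from collections import Counter
    from pathlib import Path

    THRESHOLD = 2  # require at least two independent rebuilders to agree

    def local_digest(artifact: Path) -> str:
        # SHA-256 of the artifact we actually downloaded.
        return hashlib.sha256(artifact.read_bytes()).hexdigest()

    def rebuilder_digests(attestation_dir: Path) -> list:
        # Hypothetical attestation format: {"package": "...", "sha256": "..."}.
        return [json.loads(p.read_text())["sha256"] for p in attestation_dir.glob("*.json")]

    def threshold_met(artifact: Path, attestation_dir: Path) -> bool:
        ours = local_digest(artifact)
        agreeing = Counter(rebuilder_digests(attestation_dir))[ours]
        return agreeing >= THRESHOLD

    if __name__ == "__main__":
        ok = threshold_met(Path(sys.argv[1]), Path(sys.argv[2]))
        print("threshold met" if ok else "THRESHOLD NOT MET")
        sys.exit(0 if ok else 1)

The point, as noted above, is that an attacker now has to compromise at least that many independent builders rather than a single build machine.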
There was some mention of this on the community call yesterday as well, and I can track down those resources so I can give you a picture of where the in-toto team is trying to plug into existing rebuilder infrastructure. Gotcha. So there are a few references that, if people want to productionize it, they can look up. Yeah, I'm going to track those links down now. I think it shows that reproducible builds should be included, particularly as we have that detail and people here who can contribute. Yeah. So I've added the reference; we might as well put it in there in the middle. My tentative question is this: it's pretty mature in Debian, and they do some awesome work there, but I'm wondering how prevalent it is in the rest of the industry. What we're talking about, as we've discussed before on this call, is that this document is best practices now, and then here we're trying to fill the gaps with some of the future work, like the DBOM work you and I were talking about and some of the SPIFFE and in-toto work. Yeah, I think we should definitely reference it and put it in here. I'm just wondering how mature it is; if we present it as best practice while some of the infrastructure around it is still being built out, maybe it's better framed as an identified gap, or rather something to be thought through, not necessarily something that's widely adopted yet. Well, I think reproducible builds as a concept is different from the actual implementation, right? The implementation is fairly new, and I think they're working through some of the issues and some of the design. It looks to be working really well; I'd have to take a deeper look at some of the source code they've been writing, but I think we can list it as, hey, this is some of the current work going on, but you should strive to achieve reproducible builds, and actually test reproducible builds in your environment, whether that's just another Jenkins job on a different set of infrastructure, for high-security workloads. This is Cameron; I'm sorry I joined late, kind of late to the party. It's good to meet you all. Well, it sounds like you're talking about the CI model, so to speak, in all of its forms. It's really the whole supply chain, right, from the ingestion and security of the dependencies and your source code, through how you're building that product, through to how you're distributing it and sending out evidence of what's in that build, effectively SBOMs and such. So it's really looking at it from an end-to-end perspective. What we're also looking at is what the best practice is for producing those builds; that's how we've ended up on the reproducible builds concept, and there was some great conversation on the OpenSSF call a few days ago. So, I come from the SUSE and openSUSE world; I'm an employee of SUSE, so I can give you some insight into what's going on in that open source community from a supply chain perspective, and also Rancher, since SUSE owns Rancher now as well. So we've got a convergence of supply chains coming in here, and two different companies doing it two different ways, and there are going to be some big changes coming soon in the openSUSE community.
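To ground the earlier suggestion about actually testing reproducible builds in your own environment, for example with a second Jenkins job on separate infrastructure, here is a rough sketch that simply builds the same source twice with a pinned SOURCE_DATE_EPOCH and compares output digests; the build command, repository, and artifact path are placeholders, not a real project.

    import hashlib
    import os
    import subprocess
    import tempfile
    from pathlib import Path

    BUILD_CMD = ["make", "dist"]      # placeholder build command
    ARTIFACT = "dist/app.tar.gz"      # placeholder output path

    def build_once(repo_url: str) -> str:
        # Clone into a clean directory, pin timestamps, build, and hash the output.
        env = dict(os.environ, SOURCE_DATE_EPOCH="1613088000")
        with tempfile.TemporaryDirectory() as workdir:
            subprocess.run(["git", "clone", "--quiet", repo_url, workdir], check=True)
            subprocess.run(BUILD_CMD, cwd=workdir, env=env, check=True)
            return hashlib.sha256((Path(workdir) / ARTIFACT).read_bytes()).hexdigest()

    def is_reproducible(repo_url: str) -> bool:
        # Two independent builds of the same source should be bit-for-bit identical.
        return len({build_once(repo_url) for _ in range(2)}) == 1

    if __name__ == "__main__":
        print("reproducible" if is_reproducible(".") else "NOT reproducible")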
You will see a convergence of Rancher Labs and openSUSE coming together to create some brand new supply chain methodologies, so it'll be very interesting to see what happens with that. Our current process uses a tool called the Open Build Service; I don't know if you've ever heard of it before. It's actually used by the Linux Foundation to build all of their Linux distribution stuff. It is a complete supply chain tool: it builds RPMs and builds entire Linux distributions. You can use it to build Ubuntu, Red Hat, CentOS, and many other flavors of Linux. SUSE uses it; it's our bread and butter. It's what builds SUSE Linux Enterprise Server, and it's what builds openSUSE Tumbleweed and Leap. There's a lot of security built into that supply chain. It's interesting from an authorization standpoint when you see people building out their own supply chain to handle things like Maven builds, which are very insecure in nature because you can inject from various other repositories across the internet. Our new build chain from the openSUSE community will incorporate these different build environments, from Maven to Python to Ruby, and build something more secure as a supply chain. We're kicking off an internal meeting about all this next week, in fact, where we'll be talking about the next generation of Linux Enterprise and what that's going to incorporate as a build and supply chain methodology. It'll be very interesting. Interesting. Is there much you can tell us about the security elements of the build service? Yeah, so from the openSUSE perspective, we do use Jenkins in there, but the build service has its own build environment that scales out across a Kubernetes cluster, and it can build for multiple architectures. It uses, what's the tool called, I forget the name of it, but it allows for multiple-architecture builds no matter the binary type. It does some very limited stuff with Java today; it's now expanding that even more. That's been a crutch of the environment for many years. But it does an extremely good job of connecting to many of the different build sources out there, from GitHub to GitLab, and there are many others it plugs directly in with. It has some source code checking; let me pull up some of those pages on how that's actually working today, and I can show you a diagram of our build environment. I guess one of the interesting things, Cameron, is, from the document, and maybe you were able to take a look at it: are there any items or recommendations we are missing so far, even at a high level, that are already implemented in the build service and that you'd recommend adding? Okay. I haven't gone through the whole doc yet, but I will get there. That's certainly something I can spend this afternoon on, and I can give you some feedback on whether there are areas that might need some improvement. There's definitely room for improvement; I think we'll continue to work on it, and it's starting to get some more meat on the bones of that skeleton document. There are a couple of people offline putting chunks of pages together and reviewing them before they submit, but at least the high-level titles are in there. So if there's anything specifically missing, please do call it out. Yeah. Anyone else on the line got experience using the Open Build Service?
An additional note on the Open Build Service is that it can also do container builds. Once you build your application into the container, the output can go directly to a registry, which you can inject into your pipeline to do scanning as well. But the idea behind the Open Build Service is that you do all that scanning before it actually reaches the registry. So you're scanning all your binaries, you're scanning your RPMs to make sure the RPMs don't have any outstanding CVEs that have not been patched. There are various checks within that environment to make sure it's completely up to date within the pipeline. So does that scanner evaluate CVEs within the open source components that are ingested into that pipeline? Yeah, so the build service pipeline doesn't actually scan for CVEs, because we have a separate tool that does the CVE scanning, and it kicks out a report to our developers. Since we're in the business of making sure that binaries are secure and compliant with respect to CVEs, that's part of our business, right: making sure that our binaries are secure and have the latest security patches applied. We're doing a lot of the indemnification, if you will, on those binaries, so we have a security team that is constantly checking. We have a tool that plugs directly into our build service that's constantly checking binaries, and it spits out a report on a daily basis, in real time. So you can look at the report, and it will automatically create bug reports for us based on security vulnerabilities it finds by scanning against the CVE database. And then we're taking that list, doing checks, and writing new code to patch the CVEs that come out, in real time. It's an interesting process from our security team's perspective. It's a lot different from, say, a consumer's supply chain: a consumer might scan a different database, maybe using a third-party tool that has its own database, or getting it from MITRE or some other location. So from a consumer perspective, scanning might be a little bit different. But what we're trying to do, as a delivery company, is deliver the Linux sources, deliver all the binaries that are capable of being inside a container, deliver all the binaries for programming languages and your buildpacks, all those kinds of things. We're making sure we're doing all the checks up front so that your buildpacks are completely hardened, to the point that they have all the right security patches and everything available before you actually start injecting your source code into that buildpack. There are a lot of different methodologies behind all this behind the scenes, which I'm sure you've all discovered; there are a lot of ways to actually do this. And SUSE is trying to make this better for the consumer side, so you don't have to inject some elaborate supply chain security method into your own software supply chain; you can actually pull down a trusted source for a buildpack or your programming language of choice, pair that up with a build service source that has all those binaries ready to go, and then take your source code and put those components together.
And then you kick out your container or your RPM or whatever it might be, whatever your distribution point is. The whole idea around the build service is that it's a built-in process: you only have to worry about the source code you're writing within your corporation, and you're doing source code scanning at that point, while everything underneath should be taken care of by a company like SUSE, or Red Hat, or whoever's providing you those binary sources in a secure way. That's what we're trying to solve. Because the biggest concern, you know, we see them all the time; you posted one up here the other day, and I was like, yeah, I've been telling people for years that this is a problem: a supply chain attack, whether through Ruby or through Python, can happen so easily if we are not doing our due diligence to secure the source it's coming from. So SUSE will be doing a lot of work in that area to ensure we have the best supply chain possible for our consumers. So, thinking from the security side, I guess the main thing is looking through the different best practices we have to see which are actually picked up. I'm wondering, as you're going through that pipeline, are you rebuilding the source libraries that people reference within their source code? Yeah, so there are some validations that go on. In terms of where we get those binaries, we have hundreds of developers working in many different communities, handling the relationships with the source code owners, the maintainers, those who are writing that source code. We've got developers helping in those communities. And if you can imagine, when you take Linux as a whole, you have thousands of binaries, and we have hundreds of developers connecting with those thousands of maintainers across the world, so it's a very heavy feat to accomplish. From a SUSE perspective, and also in our openSUSE community, we have quite a few members of the community who do outreach and who are maintainers of the source code that gets dropped into our build service, to the point where they maintain the build environment or the build sources for those projects. It's not necessarily coming directly from the upstream community; our community members work directly with those community maintainers. Some of those maintainers don't know anything about the build service; it's our community members and developers who go out, maintain that relationship, and maintain those build sources. They'll typically maintain a subset of, say, a hundred different projects, some of them, and they're just maintaining the build on certain projects, making sure that the software builds properly and that the CVEs are updated. So there's a lot of due diligence that goes on there; it's very interesting. And then, from a pure checking standpoint, some of those get checked for specific security concerns. If it's an enterprise developer and it's a product that is consumed by the enterprise side, those communities will actually be checked for security by our security team.
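For the consumer side of the CVE checking Cameron describes, here is a minimal sketch that queries the public OSV database (osv.dev) for advisories affecting an exact package version; SUSE's internal scanner is obviously a different, much richer system, and the package used here is just an example.

    import json
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"

    def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
        # Ask OSV whether any published advisories affect this exact version.
        query = {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
        request = urllib.request.Request(
            OSV_QUERY_URL,
            data=json.dumps(query).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response).get("vulns", [])

    if __name__ == "__main__":
        # Example dependency check; print advisory IDs for an old requests release.
        for vuln in known_vulnerabilities("requests", "2.19.1"):
            print(vuln["id"], vuln.get("summary", ""))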
We'll put it through some internal tools at SUSE that do extra security scanning beyond what the openSUSE community does. The way it sits today, that actually happens in tandem, so the majority of the openSUSE software today is actually the same software as the enterprise product; there are very few pieces that are different. You can install openSUSE Leap 15.2 and it is literally identical to SLES 15 Service Pack 2; there are some very minor differences, and the pieces that differ are the ones that have licensed software in them. Can I ask, Cameron, with that build setup, do you have the ability to attest to how the build has been put together? Yeah, yeah, let me see. We've got some slide decks out there that talk about how our build service is put together; I'll find some of those. It would be useful to send out links, and then we can take a look at them offline and maybe add them to the reference section at the bottom. Absolutely. And I think some of our build service maintainers have actually put out some videos that talk about it too, which are very interesting to go through. You're not sure if it uses in-toto or some mechanism like that to provide attestations within the build process? I don't think it uses in-toto, with in-toto being as new as it is; it may be something else. Yeah, let me find those presentations. Okay. Looking through the document, one of the things I think makes sense to include is reproducible builds; are there any volunteers to take that on as a section? We'll get to the rest of the sections we've just gone through as a group. I don't know if anyone has specific in-depth knowledge of reproducible builds; I'm interested, but it's fairly niche, so at the moment probably not. And your active doc is the one that's in the Slack, right? That's correct. Yeah. Okay. And what section are you looking at right now? We're just going through it, but I think we've got a number of people who have taken up particular sections, PKI and such. I just added a placeholder, at the moment, just below software factory. I also added that PhD thesis to the references; it's actually really detailed, very interesting stuff. One thing I'm trying to do, Jonathan, is go through some of the recent publications, like that cloud security white paper, where I think there's a lot of stuff we can just grab and summarize, or go into detail where appropriate. There's a lot of source material there, as well as that SPIFFE/SPIRE book; there's a lot of really good stuff we can draw from. So I'm going to be doing that today, trying to grab all that prior art. And then I wanted to talk about DBOM a little more, maybe next week. I reached out to Chris Blask from Unisys, and he wants to talk; he's been doing a lot of work in that area. I'll get some notes from him to see what the current state is, and then I'll have that info for the meeting next week, and maybe try to convince him to join this group too. Yeah, that'd be great. I've spent a good bit of time talking to Chris about some of the work there. I think it certainly belongs in that document, right, and the reality is we need some way of distributing the SBOM.
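On the earlier question of providing attestations from within the build process, here is a rough sketch of what recording a single build step as signed in-toto link metadata might look like, assuming the Python reference implementation's in_toto.runlib API; exact function signatures and key-loading helpers vary between in-toto releases, and the key path, step name, and build command are placeholders.

    # Sketch only: assumes the in-toto Python reference implementation (in_toto.runlib)
    # and securesystemslib key loading; APIs differ between releases.
    from in_toto.runlib import in_toto_run
    from securesystemslib.interface import import_rsa_privatekey_from_file

    # Placeholder: the functionary's signing key for this step.
    signing_key = import_rsa_privatekey_from_file("functionary_key", password=None)

    # Record the inputs (materials), run the build, record the outputs (products),
    # and sign the resulting link metadata for the "build" step.
    link = in_toto_run(
        name="build",
        material_list=["src/"],
        product_list=["dist/"],
        link_cmd_args=["make", "dist"],
        signing_key=signing_key,
    )

    # Hand this attestation on alongside the artifact so the next consumer
    # can check who built what from which inputs.
    link.dump("build.link")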
Yeah, and I think it needs to be decoupled too, right, because if we just say, okay, blockchain, that doesn't work for everybody; the channels need to be flexible. I think DBOM addresses that in a nice decoupled way. I don't think we can get too far down it, because that work is just not as far along as I'd like it to be to really put in here, but what are your thoughts on that? It's open for discussion, but I think it hasn't necessarily been picked up widely in the community, I'd say, from what I can see. However, there are some pretty significant initial deployments, from what I've been able to see in industry, that I think are interesting. So, I think, even if we just identify it as a functional issue: we have the SBOMs, how do we securely transport them, and how do we ensure that the SBOM material is associated with the source code, or rather the binary, you're looking at? How do you securely distribute that? That's a problem. I'm not sure it's a solution that currently exists that people are moving forward with. I guess, back to that earlier conversation: is it a best practice? I don't think it's really been picked up yet. Maybe it's a gap where there's a promising solution and people are trying to figure out whether that's the way to fix it. Yeah, so distribution is definitely a gap, right. Yeah, secure distribution of SBOMs. It certainly looks promising. I think on the SBOM work, there's a lot of good work out there from SPDX and CycloneDX and how people are using them; that's pretty well understood, and I think there are still conversations about the appropriate file formats and how those tools are going to be used to actually create those SBOMs and how we attest to that, but there does seem to be a gap around how to distribute them securely. So I think best practice right now, and I'm pulling some guidance from a white paper I saw the NTIA reference, would be to use existing secure channels and develop internal standards and policy around those, right? Yeah, I think that makes sense at the minute. That brings up the other point, which is maybe we extend this document a little more; we've got a high security level and a reasonable, lower one. Yeah, but we also need to mark some of this stuff, maybe put it into that second document: this is draft, but it's a suggestion for how to fill the gap. Yeah, or even an appendix that has some of those use cases and possible solutions. In the main document, the "oh, this would be cool if this happened" material, does it really belong in a white paper where we're going to give high-level guidance to executives? Probably not yet. Maybe we turn the white paper into a book. Well, that's almost a fear. Is that what's happening already, Jonathan? Well, it's a somewhat complicated space, right. Are we going to start writing this up and disappear forever? Let's produce the raw content and then we'll see how we slice it up; it might be a series of publications, it might be one big one, it might be an internal living document online. Right. And that's why I think that, for the gaps, the areas where we're starting to build new functionality and identify the issues,
that's the bit we maybe end up pulling out into that second document, and then we can solidify the best practices and publish those, while flagging that these are some gaps and we're going to continue separately to fix them. I think that's a great idea. Yeah, with that, I started adding boxes for how far we'll go into each section and what we'll put there. I think we can agree beforehand what the scope of each section is, so we don't end up over-rotating and putting in more than we intend, or we'll just disappear and never come back. Yeah, and we'll remove those after the fact; it gives us a placeholder for the metadata, or the meta-discussions, of each section. I think we're pretty light on what is the back end of the supply chain, how we distribute software. We just don't have a lot of content there, versus the huge amount on ingesting and validating dependencies and source code, Git commits and so on, the chunk of material coming into the software factory, what you can trust, etc., bearing in mind that most of the time you're going to be a producer and a consumer in some way or another. Yeah, so we need a bit more on the back end. Yeah, so we have to talk about internal repositories, external repositories, and that trust. And I think we should also add the difference between commercial software, standalone software, and a library, because they have different considerations. If a company is building an open source piece of software inside the company, the SBOM and DBOM scenarios will be different from an SBOM received for commercial software or a COTS solution from a vendor. And maybe for standalone software too, even if it is open source software delivered as a Docker image or something like that; I think there is a slightly different scenario for SBOM and DBOM in those cases, especially if it is a software development company consuming open source libraries and building them internally. I don't know how the DBOM can be aligned to the same scenario as a company just buying software from a vendor; they may get the SBOM from the vendor itself, right, but for open source software we can't expect all the open source libraries to have SBOMs, so we may have to generate the SBOM inside our build, sorry, inside our software factory, instead of expecting an SBOM from upstream libraries. Doesn't that jump back to rebuilding the thing from source and creating the SBOM? Yeah, that is also related, and rebuilding, yeah. But even without rebuilding a piece of software we can still generate an SBOM, right? In Java, even if you are just building your application, you can still generate an SBOM of your transitive dependencies without rebuilding them; but ideally we should rebuild, and then we have more accurate information on all the transitive dependencies. Can you not mention in there whether or not you do or don't rebuild? Sure. Do you want to maybe add that in there at the front?
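To illustrate the point about generating an SBOM inside your own software factory rather than expecting one from upstream, here is a toy sketch that emits a minimal CycloneDX-style JSON document from a resolved dependency list; real generators (the CycloneDX tooling for Maven, pip, and so on) capture far more, and the dependencies listed here are invented for the example.

    import json
    import uuid
    from datetime import datetime, timezone

    # Invented example input: the transitive dependencies your build resolved.
    RESOLVED_DEPENDENCIES = [
        {"name": "guava", "version": "30.1-jre",
         "purl": "pkg:maven/com.google.guava/guava@30.1-jre"},
        {"name": "slf4j-api", "version": "1.7.30",
         "purl": "pkg:maven/org.slf4j/slf4j-api@1.7.30"},
    ]

    def minimal_cyclonedx_bom(components: list) -> dict:
        # Bare-bones CycloneDX 1.4 JSON structure; only the skeleton fields.
        return {
            "bomFormat": "CycloneDX",
            "specVersion": "1.4",
            "serialNumber": "urn:uuid:" + str(uuid.uuid4()),
            "version": 1,
            "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
            "components": [{"type": "library", **c} for c in components],
        }

    if __name__ == "__main__":
        print(json.dumps(minimal_cyclonedx_bom(RESOLVED_DEPENDENCIES), indent=2))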
That's what I wanted to ask: should we categorize the different types of software in the supply chain? In a supply chain it can be anything from a small library to a full-fledged piece of software running on a server or in a container. They may have different behaviors and different ways you can securely procure them; they may not all have the same attributes or requirements. Maybe, when we mention DBOM, we can say that in a commercial scenario you can expect the DBOM data to be shared with partners through whatever channels you have, but from an open source library perspective it might be a different scenario, something like that. I don't know if we need to categorize, in the bigger picture of the supply chain, whether we are considering only the open source supply chain, only standalone software like a Linux distribution, or libraries like Java's Google Guava or Go libraries; they may need different treatment. And we need to be practical, right? We can't expect to have SBOMs for all open source libraries, at least for the next five or ten years, I believe. So, do you want to recommend some categorization at the front? I'm just asking; I want to hear everyone else's thoughts on it. There are several common aspects among all these things, but there can be different treatment, maybe specific to DBOM, or specific to SBOM generation, things like that. Any other thoughts? Yes, I think, Finnaud, maybe make a couple of suggestions: go into the document and put a question or a box, as Anders is doing, and potentially open that up as a question we can dig into. Okay. If you can supply some thoughts around it, we can discuss it next time. Okay. So, what I was trying to highlight, from the document standpoint, is that the bit I think is lighter is after the build. If we're deploying the software, if it's an open source library, how do we get it to open source consumers? If we're providing this guidance to someone who's consuming open source software, obviously they're using it for something, they're deploying it to other customers; how are we actually going to deploy that? It's kind of the inverse of how we're ingesting it: making sure we provide SBOMs, provide signatures for the work we're providing, possibly send the data about what we're building into some sort of transparency log. That's the area I feel is pretty light. If you look at the document, everything looks fine up to the point where we get there. Sorry, I don't really follow; I'm not sure I completely understood this.
Are you saying there needs to be more discussion about how to actually distribute an SBOM or other related metadata that's generated, or is it more about the software itself? Right, and that's one of the aims of this exercise: we've built something, but if the person building this thing is effectively an open source provider, you've got your library, what are the recommendations for how you distribute it? What's our best practice for how you distribute your open source library or your product? One thing is: please provide in-toto data and an SBOM. I mean, if we're talking about package repositories where these libraries are uploaded, and I don't mean to get too pluggy about the work we're doing at NYU, but my mind automatically goes to TUF, the update framework, for when you're uploading these libraries into a repository; that repository should then distribute to consumers using something like TUF. So that's the direction I went to right away. From a similar point of view, say we're an organization and we deploy software in our air-gapped environment; someone somehow handed me an artifact, or I got it from somewhere and passed it to someone else in my team who's going to do the deployment. What would be the guidance? Yes, this has a TUF or Notary signature, but how does the person go about validating the provenance? We put a lot of focus on maintainers of software and what the best practices are, but for the end consumer, if this landed in an artifact repository or some catalog, how can they validate it? What's that process? I think we're doing a great job up front, but how can people really tell, if they're just going to pull something, that they've done that extra, somewhat hygienic, step to validate where it came from? How do they check it? Is it as simple as checking an MD5 checksum, or doing something else? What is the attestation they can perform? I think, Jonathan, that's where you're trying to get at. Yeah, but I still see that as the front part. It's difficult, right: if we're writing this document and we are the consumer, this is advice to the consumer. Okay, I'm going to build my product, so I've got to validate my inputs, I've got to validate my dependencies, everything you just said; I don't know where this thing came from, someone gave me some random software in the middle of the factory, and this is advice on how you validate that, and so on. The point I'm now getting to is literally at the bottom of our document: if, as a consumer of this document, I need to send my code somewhere else and publish it, what recommendations are we giving that person on how to do that securely, so that consumers further up the supply chain have a fighting chance of validating our software? The way I look at it is, you're in the middle of a supply chain; you're not necessarily at the end. So what are the checks the end user performs, and how can we facilitate those by providing that data? It could be: we're going to build our products, we're going to contribute an SBOM, we're going to make sure we sign the relevant artifacts, and we distribute it securely into a package manager, for example. It's almost the back end, because the way I look at it, in a supply chain, unless you're at the very end or the very start of the chain, you're both a consumer and a producer of software, and this is... we've done a lot
of consumption; now we look at the production side and giving it to someone else, and that's the bit, at the bottom of our document, where we don't actually have a lot yet. In all likelihood it's the inverse of what we've done on the ingestion side. Does that make sense? I'm curious what your thoughts are there; I hate to put you on the spot, but following up on that, what does that latter part look like from your perspective? So, yeah, this thing has TUF: how does someone verify it if you're passing it to a colleague of yours? Yes, this was built using in-toto, the artifact has a binary signature; how do they verify the binary signature? Should they accomplish that using tooling, or should they do it manually? Sorry, I wasn't sure if you directed that at me. Yeah, you had earlier thoughts on it. Right, I actually started thinking some more. I'm not the most experienced person when it comes to TUF; I joined quite a bit later and I'm more involved on the in-toto side of things, but I wanted to think about this a bit more: how to accomplish this for the intermediate steps in the software supply chain, rather than just at the end. Because when we talk about TUF, I think we usually talk about distributing software right at the end of the software supply chain. And I guess the question is, if you're an intermediary, how do you verify what was handed to you from the previous step in the supply chain? Jonathan, I'm actually looking at the other side, where you're about to send it to someone else: what technologies do you make sure you've implemented, and what do you distribute, so that the person next in the chain is able to validate that stuff? And we've already hit a couple of them: generate an SBOM, right. I think that's what Jonathan is getting at, because in the paper we need to cover everything; it would be a disservice if we do all these great things up front but then an attacker can just say, hey, take this software, it has a DBOM and something TUF-like. I actually think... I'm sorry, I think I cut someone off. Go for it, I want to hear what you have to say. I actually think this is where in-toto comes into the picture, because if you perform something and you're about to hand it off, you'd also be generating an in-toto link for whatever you did in that particular step. And while we've focused on verification workflows for once all the steps have been performed, where we usually have one root layout for the entire supply chain and link metadata corresponding to each step, I'm actually wondering about some kind of, not full-scale verification, but at the very least a check that the link metadata for the step you were just handed was signed by the authorized party for the previous step, and so on and so forth. I also wonder if we could capture these transitions between two steps in their own little layouts, to ensure that the right person handed off software to the next step, and so on. You know, when I'm bringing artifacts into an air-gapped deployment, we get the Docker image at the end of the build process, then I do a docker export, write down the hash manually, and burn that
image onto a CD, then walk into the secure facility and give it to somebody else, and they verify those hashes match, physically. That's the current process; that's what exists right now for secure and air-gapped environments. There may be some places that have automated that, but I just don't know of anything else out there where we can really guarantee the security of verifying those hashes. There's no build transparency server out on the internet that we can go query yet, and the DBOM stuff isn't there, so I think that's the best way to do it now. If we're talking about actually moving the artifacts around, we have a lot of repositories for that: we have Artifactory, you can set up your own satellite server if you're doing different types of artifacts, so we can talk about that. But as far as distributing the SBOMs and the attestation information, I just don't think there's a way other than manually moving it. Is that where you're trying to get at, Jonathan? Yeah, but it's on the back side, right. At the least we can maybe provide the contract: okay, there's no standard way of shipping this stuff right now, but the best practice would be at least to generate it, and even if it's just available at an endpoint, or someone writes the thing down, this is what you should do; then if you want to automate that practice, it's up to you to figure out how. Yeah. Mike, what do you think? Me? I'm mostly just listening in, still trying to get up to speed, because it's been a few years since I've operated on the supply chain side. That's cool; do chime in, I think we need to cover all perspectives. Now, like Cole, I still think about Solorigate, or whichever name you want to go by. If you do an MD5 checksum on the final artifact and you tell the people deploying this thing that this is what you're going to validate, you're still spreading the thing around, right? If you put that onto the CD and say, hey, just check that, we did build this thing ourselves and here's the supply chain log for it, you're not mitigating that attack. Oh right, that was a horrible process: I built that on my own machine, not even a secured build server, so right there, there are ways I want to automate that process and make it better, of course, but that was the process we had. Yeah, there's history there. The conclusion of the white paper is really raising those questions, questioning how we do things today and the need for something better. But I think maybe that's it: if you take this, when you're linking this chain, okay, we've clearly got a gap here, but this is the contract, or at least the endpoint. You've got your piece of software, and this is your contract to the next link in the chain. There's a gap, we don't know how to get it there; it might be Cole taking your software, writing it onto the CD and writing the MD5 on the top, but do that, do an SBOM, take the in-toto data and at least make it available somehow. Yeah, and I think in-toto makes that whole process I just talked about secure; that's the part I didn't have two years ago when I was doing that process. Today I would say, okay, let's implement in-toto for this, so at least I know, when we are making that handoff, that
we can verify the signatures and the artifacts along the way. So if a developer does need help at the terminal, they can use that; I think we could point them to these projects that fill these gaps as current work in progress, and say, hey, this exists, even if there's no commercial or supported tooling to do any of it yet. I think there's an added dimension: you're almost making the assumption that the machines that built that software are properly secured, but that's almost a separate concern entirely. There must have been defense in depth and least privilege on that machine, the kernel must have been protected, the machine must have been properly attested, memory should have been encrypted. We often just presume people are doing those things, but we should say, hey, these are all the other things that are not supply chain directly, but you should have strong security on all your nodes and all the software that executes there. And that, to me, is the software factory itself, right: if I'm building that thing, I'm going to have all of that; I'm going to have strong protection within that pipeline to the nth degree, to make sure I know what's being built, and I've got solutions to monitor everything built along the way. Yeah, and we have a lot of great reference material, so I'm going to be plugging that in today, from the work Andres and his team did on the SPIFFE/SPIRE book, as well as that cloud security white paper. I think that is a great body of work we can just plug in. Totally. We've got huge amounts of great work already, and then the really interesting part is that you send that thing to someone else, and at the moment it's handed over on a cold CD with a piece of paper to the next link in the chain. Yeah, but it's about doing all these things in conjunction, right, because if you're doing just one of them it's not enough. And it's okay that there are gaps; we all know there are gaps, so this will, if anything, help as a call to action within our community to say, hey, let's work on these things together. I'm going to add a comment on that section right at the bottom, which effectively says: the gap is in transmission, but do everything above this. Yeah, Mike, you've come off mute a few times. Yeah, I was just going to say, to what you had mentioned before: I know in the past one of the big things for us was everything you mentioned about securing the build servers, which I think is huge, and I think one of the big open questions is, let's say for an open source build, how do you guarantee that they are following the protocol you've outlined? Yeah, so I'm going back even from build to source, right. There can be an npm or PyPI package, a jQuery package or something, which may have a back door inside it, and even if it is built following a secure process, that back door can still execute. So my point is that, at least in the software factory, above some security threshold, I think we should rebuild everything from source that we can't trust, and we should generate our own SBOMs and in-toto attestations, whatever we need. We shouldn't just accept it because it's open source and it has
an in-toto attestation and an SBOM. Attackers can still publish a library with all these things attached; they know we just want to see an in-toto attestation and a software bill of materials. So that's what I'm thinking: maybe we need some kind of threshold for a higher security requirement, where these libraries should be rebuilt in the software factory and re-attested, or something like that. Whatever other internal security they want to do, static analysis or dynamic analysis, they can do all that testing, have their own threshold, and certify it internally inside the company for use as a further supply chain ingredient in other software. You make a great point, and we should try to capture that in writing. But for that library vulnerability to be exploited, you must have had your network penetrated, or someone exfiltrated a credential, gained access to one node on the edge, and started performing lateral moves. So I think, yeah, we shouldn't skimp on that either: move away from embedded credentials and long-lived keys, move to identity-based systems, have short-lived credentials, have mTLS end to end. These are imperatives, because yes, there are going to be zero-days, and software is still written by humans, but we should automate, delegate to the machine, and enforce least privilege at every single layer of every step. Yeah, definitely. I mean, at some point in the future, in my opinion, every open source software maintainer should generate an SBOM for their library, should do proper attestation, should authenticate securely, and shouldn't use weak authentication, things like that. That's something we can expect in the future, and we can also give advice in the white paper for the open source community. But for consumers, if they have a higher level of security requirement, that's what I'm thinking: maybe we shouldn't trust anything; we definitely need to go back to the source, see what it is, and then rebuild, sign, and attest. I think we've covered that part, or at least called that part out, in the document, and then there's how we secure the build of that software; that's the massive chunk of the software factory section that should have a lot of detail we'll be able to dump in there. Hopefully we will cover that, but it's basically just how to secure a system. Jonathan, I've got to drop off; it's my holiday. With all the things I'm curious about: we need to get the TOC sign-off, but if we make the deadline for the KubeCon EU maintainer track sessions, that was February 7th, I might be able to pull something off: a KubeCon maintainer track session, a 35-minute slot, where the group shares the progress on the white paper. We might need to finish by then. Yeah, it sounds interesting. Yeah, definitely, I think this is something that should have a wider audience, and by then I think we really can start wrapping it up. But if a tree falls in the forest, right; so we need to do all the promotion, we need t-shirts, we need mugs, some sort of baby Yoda, we need a logo with a mascot. I'll get my five-year-old son to draw us a picture. Done. No, I agree, this is good, because I think
it's important to get the content out there; I just want to make sure it's nice and tight before we publish it, but it makes sense to me. Cameron's thinking out loud; he's got some ideas. Yeah, I'm sure I'll have more ideas as I go through and read things. Cole, you're doing the software factory piece, digging into that later on today? Yeah, I'm digging into that on the boat today; I'm going to go find a hot cup of coffee and a nice place to sit and start typing. Do you have a mobile hotspot on the boat? Do you get reception from the boat? Some places I do; it's pretty good coverage, but I actually just pre-ordered Starlink, so when that comes, maybe by next fall, I'll be camping out for a week or something, working from the woods. We'll see how that works. It's a dream. Maybe by next fall we'll be out of this thing: a supply chain working group offsite. This has been a drag; I'm an introvert, but even I'm starting to get stir crazy. I'm just extremely jealous of the whole boat thing and the Starlink thing; I'm maybe last in line for that. Thanks very much, everyone. Systems on boats and autonomous boats, I like it. Bye.
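As a postscript to the air-gapped handoff Cole described earlier (docker export, write the hash down, carry the image in on removable media), here is a minimal sketch of the receiving side's check; in a fuller process you would also verify the signatures on any accompanying in-toto link metadata and SBOM rather than a digest alone, and the paths and expected digest here are placeholders.

    import hashlib
    import sys
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        # Stream the file so a large exported image does not need to fit in memory.
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: verify_handoff.py exported-image.tar <sha256 recorded by the sender>
        image, expected = Path(sys.argv[1]), sys.argv[2].strip().lower()
        actual = sha256_of(image)
        if actual == expected:
            print("OK: digest matches the value the sender recorded")
            sys.exit(0)
        print("MISMATCH: got {}, expected {}".format(actual, expected))
        sys.exit(1)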