I'm ready as well. All right, as I said, hello, everybody, and thank you all very much for coming to this presentation. My name is Aaron Conklin. I'm presenting with Don Vosberg. Hey, Don, thank you very much. And we are product managers at SUSE, here to talk about securing the open source software supply chain. So I'm going to cover the risks of supply chain attacks. I want to talk about the relevant certifications that are related to securing the supply chain. We're going to talk about SUSE's software delivery pipeline, and then finally maintaining security at web scale with the SUSE Manager offering. But first, I want to do a history of supply chain attacks. I'm going to introduce some of the key concepts in securing the software supply chain and then cover some of the additional facets of supply chain security. We're a pretty small group. There are a number of slides here, but I feel like we can probably do questions as we go rather than wait till the end. So if something comes up, please feel free to kind of interrupt as we go, and we'll try to answer as best we can. We have a gift waiting for you. So think of what you might want to ask. And our colleague, Sanjit, will bring you a t-shirt. So think of good questions. Thank you, Don. All right. Supply chain attacks are increasing. 2020 and 2021 witnessed one of the largest supply chain attacks in history, on SolarWinds. I think most of us have heard of that. As companies better protect their runtime environments, the supply chain is becoming kind of the ripe target for attack. Originally, supply chain attacks really required the resources of nation-states in order to execute. Because of the kind of expertise they have brought to the table, these tools and processes are starting to percolate out through the market. Well, not through the market, but through the business. I'm saying the wrong word again. Not a problem. I'm nervous as heck. But yeah, the attacks used to require the resources of nation-states.
But best practices and toolkits are beginning to percolate through the ecosystem. That's the word I was looking for. Sorry about that. A secure software supply chain allows a company to secure, document, and trace sources and binaries, build environments, and the update mechanisms of a specific workload throughout its entire functional lifecycle. The focus is on product support through that lifecycle. It's great to secure it once it's there, but these are not really immutable products anymore. These are being updated quarterly, monthly, weekly, potentially daily, depending on the attacks that are out there. You need to have something that is much more responsive. You can't just trust that initial bit of security. Well, why do we care and why should you care? Well, as I said, supply chain attacks are increasing. The number of software packages needed by most businesses continues to grow and grow and grow. And the risks aren't just technical risks. Social engineering is ever present. Now, each additional person responsible for software security in your organization increases the total surface area. So how do you kind of bring that down while still maintaining your overall security? I think the answer there is partnering. So because we have so many people in this room, our risk plane has increased? As an individual company, yes. I mean, the more people you put towards that, the bigger the risk is. Do I get a t-shirt for asking? You can have two for the first one. We got it done. But no, it's a really good question. And by partnering with a company that can handle that for you, you kind of take that burden off your own organization and put it onto a company that is actually focusing on that as part of their core business. If this isn't part of your core business, you shouldn't be spending a quarter, a half, or all of your time on the actual security of the product.
You should be working with the partners that can handle that bit for you, so you can build, on top of that foundation, what you're actually trying to deliver to the market. All right, what is security about? A couple of key concepts then. Confidentiality: you have to make sure that only authorized entities can access the information that's being provided. Integrity, let's do it again. It comes back. Yep, I know. There we go. Integrity: only authorized entities can modify that information. Both of these are really tied up with the concepts of identity, roles, and access controls. We'll see those come up again as we go through the supply chain itself. Availability: the information is accessible when and where it's required. Availability is often at odds with the steps needed to ensure confidentiality and integrity. So how do you make those trade-offs while still ensuring that your products are secured? Authenticity. You just unplugged the network cord, right? That's the way I handle most of my security at home, but then you're not an exceptionally functional platform. That kind of ruins availability, doesn't it? Just a little bit. So it's kind of that trade-off on where do you draw that line. I think I was talking about authenticity. Is that information original? This is a huge bit, right? Especially when dealing with the open source world, you have input coming in from a lot of different organizations. How do you ensure, A, that the initial packages are correct, that they are the packages you are expecting, and then, as they are continuing to get patched over time, how do you make sure that there's not a man-in-the-middle attack getting malicious code into your code base? That's authenticity. Non-repudiation: the information's content is indisputable. Now, that really gets down to signing keys.
And how can you trust that the information that is being provided by your partners, the packages that we're providing here: can you tell that they are the right packages for your environment and that they are secured the way you are expecting? That gets to accountability: the information can be sourced to a specific origin. Now, these three topics more directly define the chain of trust required to ensure that the information itself is not a threat to the operating system. So how do you make sure that the code you are bringing in isn't, A, going to immediately break your environment, and, B, open up a back door that could be used to exploit you at a later date? Finally, anonymity: the origin's details are inaccessible to non-authorized entities. So this is a bit more about protecting the original source. If that information isn't needed by the end users, how do you make sure to keep that masked from the rest of the organization? Again, kind of working through a partner is a way to abstract a lot of your users from that while still providing that level of security out to them. So aren't accountability and anonymity at odds then? Absolutely. Because you can't be anonymous and accountable at the same time. No. With many of these, you'll see a bit of an ebb and flow between the openness needed for a functional environment and the closedness needed to provide that level of security. And where you need that to fall is likely going to change based on the individual needs of your organization. So that initial planning (and again, we'll kind of get to that when we talk about the supply chain itself), that initial planning to understand where those gives and takes need to happen, and how much risk you can tolerate versus how much flexibility you have for your actual application to run, is an important part of kind of picking how you're going to proceed with these security contexts. Excellent question.
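The signing-key idea behind authenticity and non-repudiation can be sketched in a few lines. Real package signing uses asymmetric GPG keys, so consumers can verify without ever holding the signing secret; as a stdlib-only stand-in, this hypothetical Python sketch uses an HMAC to show the mechanics of producing and checking a detached signature. (A shared-key HMAC gives integrity and authenticity between two parties, but not true non-repudiation, which needs asymmetric keys.)

```python
import hashlib
import hmac

def sign_package(payload: bytes, key: bytes) -> str:
    """Produce a detached 'signature' (hex digest) for a package payload.

    Stand-in only: real package signing uses an asymmetric GPG key."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_package(payload: bytes, signature: str, key: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    expected = sign_package(payload, key)
    return hmac.compare_digest(expected, signature)
```

Any tampering with the payload after signing changes the digest, so verification fails; that failure is exactly what the supply chain relies on to catch a man-in-the-middle swap.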
I'm going to skip through this next paragraph because you kind of covered that with your question, which brings us to slide number seven. Now that I've introduced the combinatorially explosive issue of information security, as we talked about, I know, right? I'll pull that in slightly and focus on software security itself. So we're going to be talking about product configuration. We're going to be talking about source code and binary code, other security-related and customer-related information, and finally, signing keys. You'll see this more when I get into the supply chain, or how we've secured our supply chain itself, but I wanted to make sure to at least introduce the key concepts before we move on. I'll relate these back to SUSE's software supply chain initiatives later on, after a bit of a diversion into the world of security certifications. All right, so how do we handle that combinatorial explosion? By working with trusted sources, of course. But how can we grow to trust those sources? The answer is a robust framework for security certification, and I think we've picked the right ones. OK, a reminder. The focus here is confidentiality, integrity, and availability of the software supply chain. And again, those are a little bit at odds, so how do you find the right mix? At SUSE, we focused on the following standards and certifications. Common Criteria, particularly EAL4+ certification, and NIAP, which handles Common Criteria within the United States. DISA's Security Technical Implementation Guides, the STIG hardening guides, if you've heard of those. The FIPS 140-3 standard, which is for any cryptographic work; specifically, we're talking about signing keys here. And the CIS Benchmarks for best practice configuration. And I want to focus in on that third bullet. A key approach is to maintain zero-trust thinking as you design your application environments and the security posture you take when operating within them.
By removing the assumptions of security within your network, companies can make attacks from within the organization harder to succeed. And with a supply chain attack, those internal attacks are likely where things are going to come from. I'll delve deeper into each of these certifications in the following slides, but it's worth highlighting how relevant these certifications are to many of the key industry verticals many of us are targeting. And these here are just an example. Solutions that cover government and healthcare use cases likely would work with FinOps as well, just as one example. So the point here is these are highly relevant across multiple industries, which is important when you're trying to pick the right security context. It worked that time. This is going to be a little wordy, so I apologize for the reading. The Common Criteria certification. I'm not going to read the whole slide. I'm not going to read the slide, but I do have a lot of words that go along with it. I hate reading slides, so you'll see a summary of that, but thank you for asking. The Common Criteria certification provides a detailed framework within which technology vendors can map the security functionality of their products to the needs of users. Additionally, it provides a means for independent testing labs to evaluate the security claims made by vendors of their products. Common Criteria allows for a number of different levels of certification. I'm going to talk through some of them real quick. So level one, functional testing: when a user does A, B occurs as expected. EAL1 would be relevant in a scenario where the target's functional consistency is important, but overall the security risk is deemed low. So again, this really starts with your analysis of what your overall security context is. What do you need your vendors to provide? Is it really just functional testing? Does the product work the way it says it does?
An easy way to go here is EAL1. Level two adds validation of the structure of the software. This is relevant when more security is needed, but maybe the complete development record is unavailable. Often in the open source world, you'll see EAL2 certification, because we don't always know kind of what's coming in from the open source community, but you can still test the structure. You can still test the overall functionality. Does the software work as expected? Is it designed as expected? But the whole of the supply chain itself may remain at least partially opaque. Level three adds in an evaluation of the design and build methodologies followed by the vendor. So here you're looking more into the actual supplier, not just what they supplied. EAL3 is where processes used by the vendor begin to be put to the test. However, again, the overall security context is still moderate. Level four ups that security ante further, and this is likely the most stringent level of security achievable by an existing software solution, because you already have packages in there. You didn't necessarily build it with a validated methodology to begin with. So there's a measure of unknown that you go in with, but EAL4 still allows you to validate what is there, at least according to the criteria that you're setting. I'll go into more detail about SUSE's implementation of an augmented version of EAL4 in the next section. Level five begins to drive very specific development requirements into the source code. So now you're defining not just what's being built and how it's being built, but the actual development process that the organization is following in order to deliver on that.
Level six layers in additional levels of verification, and level seven finally includes a very robustly defined design and development process, at what is expected to be a quite high cost, to deliver a very formally defined, designed, and tested software package. You'll potentially see this in more custom-designed software packages for very specific use cases that were built from the ground up, potentially for an individual company. As you start to move into a more generic operating system or container package that is more globally accessible... Doing it again. It'll be back. Got it. As you get into something that is more kind of worldwide accessible, levels two through four is where you tend to find software packages, if they're following the Common Criteria guidelines. Covered that bit. How do you say that German word there? Sicherheitszertifikat. Deutsches. I don't know. I don't speak German myself. Deutsches, absolutely. IT... I can't pronounce the rest of it. Sicherheitszertifikat, yeah. I don't know. I could try to find out for you, though. It's on your slide. I thought you would know how to say it. Come on, man. No, no. Thank you for heckling me from the front, though. I appreciate that. I'll have to do the same once you're up there. You definitely do not get a t-shirt. A key point to the relevance of Common Criteria is its global reach, right? We're not just talking about an individual company's security standards. We're not just talking about an individual nation-state's security standards. This is something that is becoming more and more of a global standard. Authorizing and consuming members represent five continents, 15% of all countries, and 15 of the top 20 countries by GDP. It's relevant to the industries you target. We kind of talked about that earlier. And the nations within which you target them.
So again, you want something that is globally relevant, so that you're not having to redo your security standards when you try to sell something in Turkey, or when you try to sell something in Spain, or when you try to sell something in Qatar. You can know that by using something like Common Criteria, you are still defining for a very broadly accessible group of countries and group of customers. NIAP. So as I mentioned before, NIAP is the US body that manages Common Criteria for the United States. Specifically, they manage a program for developing the protection profiles that are used to describe the security functionality of a software package. In SUSE's case, we utilize the Protection Profile for General Purpose Operating Systems, because this is an operating system we're dealing with, which was released in April of 2019. The NIAP protection profile is used as part of an EAL1 certification. SUSE also uses a protection profile from Germany's Federal Office for Information Security, which is how that German name is pronounced in English, which allows us the EAL4+ certification that we bring with the SUSE Linux Enterprise operating system. As you may notice, U.S.-based validation is very focused on functional testing. We recommend looking beyond function and form into the structure of the upstream businesses themselves, and EAL4+ helps provide that level of trust. Covered a good bit already. Any initial questions? Okay, moving on. DISA STIGs. The DISA STIG frameworks act as the security best practice guides for both the design of our products and the operations of the people, processes, and tools used to create them. DISA provides hundreds of these Security Technical Implementation Guides, which cover how to implement your security protocols, how to identify weaknesses and points of attack in your code, and how to properly configure your hardware and software to minimize your attack surface areas.
Anyone interested in selling their software to the U.S. government, particularly to the Department of Defense, must become intimately familiar with these frameworks, or work with partners that are. Actually, even then, you'll probably have some that apply to your level of the business, but how many fewer will you have to deal with by kind of going with a partner that can handle that? Okay, good. Next, another U.S. agency involved: the National Institute of Standards and Technology. They provide a set of standards called FIPS that was mandated by the Federal Information Security Management Act. SUSE is most interested in the FIPS 140-3 standard, which is focused on cryptographic modules. These standards define in great detail how to maintain secure key storage and key processing, a foundational component of software security, and this gets into the authenticity and non-repudiation bit that I mentioned earlier. The aforementioned standards are defined and managed by government bodies here in the U.S. and abroad. All great for what they are, but somewhat lacking in open source credentials. The Center for Internet Security, CIS, brings those open source credentials into the mix for certifications. CIS standards are developed via a public-private partnership under the auspices of CIS, a nonprofit organization dedicated to building security benchmarks using industry best practices. These benchmarks are the only consensus-based, best practice security guides developed with inputs from government, business, and academia. So again, the worldwide acceptance bit is important here, right? Unless you happen to only be working in one region, one geo, one country, you need something that is more globally relevant, so you're not having to reinvent that wheel every time you open a new market. This is a way to make that happen.
As mentioned before, there are costs associated with implementing these standards and practices: costs in time, costs in human resources, and in the opportunities lost while key employees are focused on supply chain security. You can mitigate some amount of those costs by partnering with a vendor who is focused on the secure software supply chain, allowing you to leverage those certifications and that level of trust as the foundation, I think I said that earlier, as a foundation upon which you can build your own secure operations. So let's delve further into how SUSE secured its software supply chain. All right, but first a quick reminder about those relevant security targets. You're going to see these come up in each of the following slides, showing which part of the overall information security context each phase relates to. So I wanted to make sure to cover both of those and the types of information relevant to the software supply chain. These will come up regularly as we continue to step through SUSE's experience. All right. I like to say that our goal is to make community source software enterprise grade. Our process begins with an in-depth review of all code sources that have been identified, either by the user community, that's right, y'all, or by our engineers and architects. As these candidate packages are judged to be both safe and relevant, we define the package list for the release and begin efforts to build that version, service pack, or patch. Then these are put through numerous automated and manual tests, as defined by the certification frameworks and our own internal best practices, before being signed and prepared for delivery, especially via SUSE Manager. It is the totality of these steps that helps produce and maintain our secure software supply chain, and I'm going to go through most of those now. Perfect. At the beginning of the process, we plan the build and build the plan, really.
This phase is very much about the overall configuration of the product. As you can see, we're focused on the integrity, availability, authenticity, and accountability of the code. Is it what we say it is? Is it going to be available for folks? We have to continue to support these, potentially for 10 to 15 years. Will we be able to keep getting those patches and keep making them available to you? Authenticity: can we report on the source that we're getting it from? And can we get back to those original sources when there's new work to be done? That's the accountability piece. This is also where we start to line up the configuration benchmarks that will apply and how we will ensure they're tested later on in the process. So the planning phase here is super important, or you're going to be spending a lot of extra time trying to figure out how to test it and then how to get it built. Perfect. Now we begin to ingest that community code. The origin of the code influences the next steps greatly. Code sourced from within SUSE can leverage the security practices within our organization. We're still going to treat it overall as zero trust, so the types of tests we're going to do are going to be the same, but a lot of that work is done during the development process if it's happening in-house. We don't necessarily have to test it at the end because we've secured the middle, right? External code may be a completely unknown entity. These facts will influence the type of testing done and when it will be done. This speaks to the vital importance of both integrity and authenticity, and we'll work back through the community sources to build that level of trust, at least as much as can reasonably be done. Sometimes it simply won't be, or that package is just available in the wild and we're going to bring support in-house for kind of ongoing support of it. It really depends on the individual piece of software. And I love that thing.
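The ingestion decisions just described, where the code came from, whether we can trace back to the original source, whether it was built in-house or pulled from the community, boil down to keeping a provenance record per package. A minimal sketch, with an entirely hypothetical schema and field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PackageProvenance:
    """One traceable record per ingested package (hypothetical schema)."""
    name: str
    version: str
    upstream_url: str   # where the source was fetched from
    source_sha256: str  # digest of the source tarball as ingested
    license: str
    in_house: bool      # developed internally, or pulled from the community

    def review_scope(self) -> str:
        # Zero trust: everything gets the same tests, but external code
        # also gets its history and community reviewed up front, while
        # in-house code was reviewed during development.
        return "full-ingest-review" if not self.in_house else "in-development-review"
```

A record like this is what lets you answer the accountability questions later: where did this come from, and can we get back to the source when new work is needed.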
And the overall kind of story here is that with all the different packages and all the different potential sources of those packages, there can be a myriad of approaches that you'll have to take to figure out how to validate and qualify each individual one. Again, that just adds to a level of workload that you don't necessarily need to be carrying to support your own business. Again, an opportunity to partner. Now additionally, we take into consideration the lifecycle of the code itself. Supported packages in a SUSE distribution can live for a decade or longer, depending on the need for long-term support, and we have to be confident that the entity or community providing that can maintain the code base, or that we're able to take over and offer that support in place of those original providers. But source code isn't what powers a business. Only paying attention to the source code misses the heart of the software supply chain, which is the software binaries. Binary code testing confirms the compiled code remains true to the original source. So this step is very much about the availability of the code, and a little bit about non-repudiation as well. Does it do what it's meant to? Does it generate unexpected outcomes that might point to a deficiency or even a malicious bit of code? This phase highlights the importance of a solid testing plan built off of detailed industry standards, which we talked about in the last section. The consistency driven by these test standards helps limit the variables within the test, helping to ensure that any issues uncovered are not due to some random configuration changes on the part of the testing organization, and that it is truly either a defect in the code or an actual malicious bug that we need to remediate. As we move towards the right of the graphic, we're now in the build phase, and here's where SUSE's Open Build Service is leveraged to great effect.
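That "compiled code remains true to the original source" check is, at its core, a digest comparison: build reproducibly, hash the artifact, and compare against the digest recorded at the first trusted build. A hypothetical sketch of that comparison (real pipelines record these digests in signed metadata rather than passing them around as strings):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large binaries need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def binary_matches(path: str, recorded_digest: str) -> bool:
    """True if the rebuilt artifact matches the trusted build's digest."""
    return sha256_of(path) == recorded_digest
```

If an independent rebuild from the same sources yields a different digest, either the build wasn't reproducible or something changed the inputs, and either way it warrants investigation.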
OBS allows us to build and distribute binary packages from sources in an automated, consistent, and reproducible way. OBS was designed to operate across multiple hardware architectures. Someone correct me if I'm wrong here, but I believe the answer is we can do it on x86. We can do it on Power. We can do it on System Z. We can do it on Arm. I think that's it. Yeah. For now. For now, but there are always options to expand. You used to have Itanium back in the day, but how many of you are running Itanium anywhere in your enterprise? There you go, okay? This means it's totally secure, because no one's running it. Right? It'll never be hacked. Here. OBS was designed to move across... Sorry, lost my place. Oh, right. Hardware architectures, and we can test against multiple operating system distributions. So this is if we're using something at the application layer, not in our OS itself, but it allows us to quickly test new code against a wide array of hardware and software configurations. Given what happens out in the ecosystem, you have to know that you're going to be secure across all those different platforms. That takes a pretty expensive lab, doesn't it? Well, I mean, it's more the time and effort of the people, not the lab itself. I mean, the equipment is going to cost a measure of money, but it's really the expertise out there that's harder and harder to find, in order to build that automation and keep that rolling through and keep it updated as new hardware comes out. So yeah, there's definitely a hardware cost in the lab itself, but it's the people running the lab that are the real expense. It's the OPEX, not the CAPEX, that'll really hit you with this process. All right, finally, the software packages that pass all the tests and complete the build process are signed and prepared for delivery through our software repositories, preferably via the SUSE Manager application. More on that in a few slides.
Thank you, Don. You may have noticed that this is the phase where non-repudiation is finally addressed. Those signing keys are SUSE's assurance that the package is what it says it is and no more. This is also then published on SUSE.com/security. Granted, not the individual bits of the source code, but the metadata that goes to show that these tests have been passed and that you can use as a check against the individual packages. Now, this isn't a check that you can do in an automated fashion yet, although we've been getting some feedback on that today from folks. That's feedback that's going to go back to the overall build service team, because if we want people to use this, we need to make it a little bit more verifiable in an automated manner. That's kind of where the world's going. So yeah, more on that in a few slides. You may have noticed... sorry, I said that bit already. I'm going to skip on to the next one because I've already said that. Perfect. The steps we just mentioned were very much focused on the new release of a software package. But software configurations are becoming increasingly dynamic, especially in the general operating system space. We've committed to a multi-year lifecycle for every one, with the option for many more years of support and software maintenance. This requires us to be proactive in an ongoing review of code. So this isn't a kind of one pass through the system and then we're done. There's a constant review here for any of the software packages that we are supporting and maintaining, to ensure that they're still staying secure, and if any new bugs come out or there's an issue, we are remediating those through the same process and getting that out there. So yeah, this requires us to be proactive in an ongoing review for code management, in light of that ever-evolving world of hardware and software exploits.
We must also be industrious in the creation and delivery of fixes to any issues uncovered, and responsible in how we share the details of our discoveries, both kind of within the security-focused groups and to the consumers of our products. So, as it says there, responsible disclosure. How do you make sure to get what you know out to the other organizations that are working on security, without kind of publishing an exploit that could then be abused between the time that you notify people and the time you finally have a remediation plan in place? It's a constant balance. Again, it's kind of better to work with a partner that is focused on doing that than to try to do that yourself. I'll cover these last two slides rather quickly, and then Don will take you through the SUSE Manager application in much greater detail. I don't want to steal your thunder, Don. That's okay. However, I will say that we have to close the loop on customer deployments and ongoing management. Ensuring compliant use of SUSE-signed software both helps ensure the best experience for the end user and helps ensure that our software's reputation remains stellar and isn't tarnished by non-compliant use. So again, kind of how that software is being used is another part of what we're checking throughout this entire lifecycle. Our compliance team could go on about a lot more than that. Yes, sir. A question: has anything about the supply chain changed from what it was founded on, anything modified because of the evolving supply chain attacks? So honestly, I wouldn't say that we've changed a lot about how we're securing our supply chain because of that, because we were already sensitive to the threat and taking these steps in advance of the attacks increasing. So this is something that we saw coming, and we have been trying to take steps, A, to secure ourselves, and to spread the word that this is something that needs to be addressed.
So we haven't made a huge change, other than we're pressing this message even harder, to try to make this important to other people the way we find it important to ourselves, because we see those attacks increasing. But the steps we're describing here are the steps that should be taken to help mitigate the threat of those supply chain attacks in light of the fact that they're increasing. Yeah, great. Well, that's not great. It's something that's becoming more and more of a risk, but we think we've picked the right set of certification frameworks to protect against this, and we have implemented those policies inside of our organization to ensure that we're not going to be the source of that attack, like actually as a man in the middle ourselves, and that the code review, the package selection, and then the testing that we do ensure things coming in from the open source world also won't be a vector. So yeah, a long way to say we haven't actually had to change much ourselves. What that really did was validate that the steps we were taking in advance of those attacks increasing were the right steps, and ones that should be leveraged by other companies. Do you have a follow-up? Anything else I can answer? Great question, though, so thanks for that. So I just said that we want to make sure it's not tarnished by non-compliant use. This includes kind of the ongoing support and management of the software. It's not just something that has to happen during the deployment phase, right? As new packages come out, as new bugs are identified, as new exploits appear and new patches are made, the same process has to happen all over again. All right, I like to think you've all heard enough from me at this point, a little faster than I was hoping to go, but we'll see how this goes. I would like to turn the mic over to Don now to keep talking about supply chain security at scale. Thanks.
I'm actually going to start with this slide that Aaron shared with us, because how the customer operates matters too. As a software vendor, we encourage people to follow best practices. What might have been secure at the time of release of RHEL 7, if you're still running that same code unmodified today, your likelihood of being vulnerable to potential problems is higher. So how you operate and how you choose to apply security patches and fixes really matters. That's where a tool like SUSE Manager comes into play. We'll talk a little bit about what kinds of things SUSE Manager can help you with. Obviously, disruption to business is one of the reasons why people choose not to apply security fixes and patches or to update, because, heck, it's still running. I installed it five years ago and it's not broken, so why fix it? Well, as you have seen, things like that become time bombs. It's only a matter of time until some type of disruption occurs, and hopefully it won't just be hardware crumbling; it could be other things too. And it's costly to administer these things. If you have to SSH into every single server you want to update, and you started out with 20 servers, and now you have 100, and now you have 1,000, now you have 10,000, and you have edge deployments with hundreds of thousands of devices, it gets very expensive. And when servers are unavailable, nobody likes it; everybody makes a phone call. I used to work for a company where I managed wide area network connections, and each time we had a global outage, I could count on the vice president calling me and saying, okay, I want to tell you how much it costs us for every 30 minutes that we are down. Nothing you need to do is going to be too expensive. You fix it now, and I don't care what it costs. So that happens. The tool we use to help people manage some of that is SUSE Manager.
It's there to help you ensure that those disruptions are minimized, and that you can manage your packages and software while maintaining the integrity of that software supply chain. While we call it SUSE Manager, in a conference context like this we start with our open source upstream project, which is Uyuni. Uyuni is a city in Bolivia. If you've ever seen pictures of Uyuni on the internet, there are all these salt flats; it's just beautiful. Go look at some of the photos. It's kind of fun, because we use Salt a lot in our delivery mechanism in SUSE Manager; our clients are connected as Salt minions. Uyuni is community supported, with monthly meetings every last Friday of the month. It runs on openSUSE Leap, which correlates to our enterprise distribution, so they're actually binary compatible, both built on the secure software supply chain build service that Aaron was referencing earlier. And it's a rolling release, which is actually slightly ahead of where SUSE Manager is, so you can try it out and use it. And why would I do that? Because I want to have security for my entire environment. Maybe I don't normally run SUSE things, but this is how it fits. And some of the things Aaron was discussing earlier, like how do I know whether I'm compliant: getting profiles like OpenSCAP profiles on my Linux system validated against MITRE's Common Vulnerabilities and Exposures (CVE) database, so I know exactly where I stand. And it's not just the scanning part; it's also the remediation piece that allows me to do that. With SUSE Manager, we can manage all of your enterprise-ish Linux distributions. Now, don't come and tell me that you need one that's not on this list. Well, you might, but most of them we have accommodated.
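To make the "scan, then remediate" idea concrete, here is a minimal, purely illustrative Python sketch (not SUSE Manager's actual implementation) of triaging OpenSCAP-style rule results into compliant rules and rules that still need remediation. The rule IDs and severities are made up; a real scan would come from a tool such as `oscap`.

```python
# Illustrative sketch only: models triaging an OpenSCAP-style compliance
# scan. Rule IDs, results, and severities below are hypothetical examples.

def triage_scan(results):
    """Split XCCDF-style rule results into already-compliant rules and
    rules needing remediation, the latter grouped by severity."""
    compliant = []
    needs_fix = {}
    for rule in results:
        if rule["result"] == "pass":
            compliant.append(rule["id"])
        else:
            needs_fix.setdefault(rule["severity"], []).append(rule["id"])
    return compliant, needs_fix

scan = [
    {"id": "rule_sshd_disable_root_login", "result": "fail", "severity": "high"},
    {"id": "rule_audit_rules_enabled", "result": "pass", "severity": "medium"},
    {"id": "rule_password_max_age", "result": "fail", "severity": "medium"},
]

compliant, needs_fix = triage_scan(scan)
print(compliant)   # rules already compliant
print(needs_fix)   # rules to remediate, grouped by severity
```

The point of the sketch is the workflow, not the code: the scan gives you a concrete, prioritized remediation list instead of a vague sense of exposure, and the remediation side of the tooling can then act on that list.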
From the beginning, SUSE Manager's design was not just to be a point tool to update SLES; we wanted to make it something where our customers' entire Linux portfolio could be updated with one tool. So recently we've added support for some OSs that are new for us. You'll see a few of these: all the SLES stuff, all the SUSE things, and RHEL and its derivatives, pretty much. Whether you pay Red Hat for your subscription or you use Oracle Linux or another derivative like Rocky Linux or AlmaLinux, those are supported within SUSE Manager and Uyuni's framework. We've also added Debian and Ubuntu support, and we include the importation of the patch metadata for those distributions. So you can actually see more than just "okay, I've got 100 packages that need to be updated": I have patch information, so I can correlate and know exactly which vulnerabilities I'm exposed to. And that's a dynamic thing; it changes over time, so be careful. The asterisked ones are coming soon, because those distributions are newer and it takes us a bit to process all of that. In addition to managing any Linux, we can also manage anywhere. Wherever you want to put it: on-premises or in the public clouds. For all the major hyperscalers, we actually have SUSE Manager system images out there in their repositories for you to build infrastructure on public clouds. And if you have a mixture, you can put proxy servers in any or all of them. Functionally, it's identical, so you don't have to relearn your administrative processes depending on where things are located. You can actually have server infrastructure that's a hybrid combination of all of the above. I don't have Itanium in there. Give me back your t-shirt.
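That step from "100 outdated packages" to "these specific CVEs affect me" is what the imported patch metadata enables. A minimal conceptual sketch of that correlation, with all advisory names, versions, and CVE IDs invented for illustration (real errata formats differ per distribution):

```python
# Illustrative sketch only: correlating installed package versions with
# patch (errata) metadata to list applicable CVEs. Advisory names,
# versions, and CVE IDs are hypothetical.

def applicable_cves(installed, errata):
    """Return CVE IDs from advisories whose fixed package version is
    newer than what is installed (simple tuple comparison here)."""
    cves = set()
    for advisory in errata:
        for pkg, fixed_version in advisory["fixes"].items():
            if pkg in installed and installed[pkg] < fixed_version:
                cves.update(advisory["cves"])
    return sorted(cves)

installed = {"openssl": (1, 1, 1), "curl": (8, 5, 0)}
errata = [
    {"advisory": "EXAMPLE-2024:0001", "cves": ["CVE-2024-0001"],
     "fixes": {"openssl": (1, 1, 2)}},
    {"advisory": "EXAMPLE-2024:0002", "cves": ["CVE-2024-0002"],
     "fixes": {"curl": (8, 4, 0)}},  # installed curl is already newer
]

print(applicable_cves(installed, errata))  # → ['CVE-2024-0001']
```

Because the errata feeds change over time, the same inventory can yield a different CVE list tomorrow, which is why the speaker stresses that this correlation is dynamic.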
Content lifecycle management allows you to do staging of content so you can validate changes before you introduce them into your production environment. You can create whatever stages are important to you and migrate the content through those stages, from testing to production, regardless, again, of the distribution you're using, and do it with very granular filters. Align it with your patch cycles: monthly, quarterly, whatever your patch cycle is, you can align it with that. We even have templates that automatically handle things like the default modularity in RHEL AppStream repositories, so those get flattened out and you deliver the packages in a timely manner wherever they're needed. And finally, we have a number of customer success stories, including this one from Office Depot, where we significantly helped them save in IT management and in deploying patches and the like. We have a number of customers throughout the US, but these guys are one of our favorites on the SUSE Manager team. And with that, I think we're at the end of our presentation, so what questions do you have? Anything else you want to know? Yes, that's an Aaron question. Sorry, I don't have a wonderful answer for you right now. I am newer to the organization, so I'm still learning, but what I can do is get your information. I know how to take that question back and get an answer for you. My hope is that I can gather these questions in, and then we'll be responding back over, say, the next week and a half to get all of that answered. So please make sure to get with me before we head out of the building, and I will get that answer for you. Anyone else who's interested, please feel free; we'll get your information as well. Make sure you get them a t-shirt. Yeah. Yes. Yeah, SOC 2 and HITRUST, in my own personal experience, have come up more in my MSP experience than with the operating system itself.
To my mind, things like Common Criteria and NIAP are kind of upstream from HITRUST or SOC 2 compliance. What you'll find is that a lot of the issues you have to answer for in a SOC 2 or HITRUST audit are going to show up in the Common Criteria requirements or in the CIS Benchmarks. So using our operating system doesn't make you SOC 2 compliant, and it doesn't make you HITRUST compliant, but it gets you a decent way there for the software itself. Now, there's a ton of other operational pieces that tend to go along with that. That's your operations; we can't necessarily answer for it. But we haven't needed those certifications in-house, and I do think a lot of that is covered by the other OS certification frameworks that we've used. They build off each other. It might not be a bad idea to put something together that lays out where these things overlap. But to my mind, we're more the bricks that you begin to build a HITRUST foundation on top of. Awesome, thank you. Thank you, everybody, for your participation, and have a great conference. Please stop by our booth; we'd love to speak with you.