Now, we have an entire DevSecOps stage at Commit this year, but I would be remiss if I didn't mention security at least once today on the platform stage. I've talked quite a bit about delivering value, but if your software is not secure, it is not valuable. Security is quality, and any DevOps platform worth its salt should be able to automate your security tests into your DevOps life cycle, and do that early and often. For our next talk, I'm happy to invite our sponsor VMware to talk about doing exactly that: shifting security left and building it into every aspect of the life cycle. When you're done today, please check out the rest of our DevSecOps speakers on replay, but for now, let's check in with VMware.

Well, hello and welcome to our session, Embracing DevSecOps for Modern Applications. My name is Henri Venenblok. I'm one of the executive technical advisors on the Tanzu value advisory team, and with me I have Andres Vega.

Henri, great to be here with you today. I'm Andres Vega from VMware. I'm responsible for product security across the Tanzu portfolio, and outside of my work I'm also a technical lead for the CNCF Technical Advisory Group for Security. We'd love to take the conversation offline; a great place to find us both is on Twitter, and you see our Twitter handles on the screen. Feel free to send any question, thought, or feedback that this presentation elicits.

Absolutely, we love the engagement, so do reach out on social media. The goal of DevOps and modern applications is really rapid release cycles to deliver capabilities, and security and compliance can often slow an organization down on its way to achieving this. In this session, we're both very passionate about secure supply chains and DevSecOps.
And what we really want to do today is share with you how you can apply those practices to cloud native application development: how you deliver software continuously while maintaining security in your supply chain from start to finish. So thanks for joining us; why don't we just dive straight in?

According to the 2020 State of the Software Supply Chain report, we're seeing an incredible increase in the use of open source software; third-party libraries are used more and more. This increase has been driven by a universal desire for faster innovation, and we've seen it accelerate massively over the last year during the pandemic as well. The other thing we've seen, according to this report, is that 21% of enterprises experienced open source software breaches; that's a 430% year-over-year growth in cyber attacks on open source software. Now, slow-performing organizations are slowing down their innovation because they're trying to introduce manual, compensating processes to deal with the security risk, and they don't really have a good means of making updates in a conducive way. So we pose the question: how can we increase speed while also increasing our security capabilities?

In the end, as I stated, it's all about creating velocity in your organization. Slow release cycles are not only a disadvantage in a competitive economy; they also create toil in addressing security vulnerabilities, because vulnerabilities stay in production longer when it takes you a long time to remediate. And the scary part is that fewer than 40% of companies deploy software more frequently than monthly, so a month-long exposure window is very significant. So how do we want to address these things?
But before we do, we came across this great quote from Neal Ziring: at the end of the day, it's all about imposing costs on the malicious actors. Classically, that would mean putting a lot of firewalls, a lot of hurdles in place. But our hypothesis is that you can actually impose more cost by continuously changing your posture: what you have in the infrastructure and how you operate it. And to really do this, you need to follow some of the DevOps practices.

So why don't we dive in a little closer on what we really mean by specific outcomes? To do this, let's take a look at the DevOps feedback loop and articulate what outcomes each set of personas is trying to achieve and what measures are being used; these are critical to really see whether you are moving the needle. High-performing organizations do this intuitively. First, from a development perspective, we want to build new customer experiences, and we measure that by increased developer productivity. Now, this might not be congruent with what operations wants: they want to take advantage of the innovation offered by the cloud, creating scalability and simplifying operations while also maintaining security, and the classical outcomes there have been meeting SLOs and reducing mean time to recovery. However, there's a new actor that also has a set of practices and outcomes they want to achieve, and that tends to be the security and compliance side of the house. They want to manage this growing volume of software vulnerabilities, and this persona is really looking at another set of outcomes: securing the whole software supply chain and applying those DevSecOps practices.

In this talk, we also want to cover the how, and how we can help enable you. But before we do, we first want to ground ourselves in a set of principles that we've found are critical to achieving these things.
So as we look at the key principles for applying these practices, first and foremost, we think identity and authentication need to span the whole supply chain as well as your systems; a lack of identity creates more exposure. Second, you need transparency in your system so that you can reason over what you're putting into the environment, what has happened, and what attestations exist, and there are some great examples of work already being done with software bills of materials, creating transparency in what we have. Third, we need to automate more and more; the human element needs to be removed. Again, making it very hard and increasing the cost for the bad actors also means creating automation to simplify this, using things like the 3 Rs: repair, repave, and rotate your environments using declarative means. Next, we think the infrastructure truly needs to be treated as immutable, and the same goes for the applications: they are declaratively defined, so you can identify drift inside your organization, and you can always go back to a particular state because you've declaratively said what that state should look like. And last but not least, we want to establish zero trust. This is the notion that instead of thinking of assets and resources as having implied trust based on their location, there is no implicit trust between entities; rather, trust must be established based on dynamic evidence, and we'll be walking through that as well.

So now that we've talked about some key principles, let's dive into how we're going to do that. Andres?

Thank you. It's important to underline that these principles do not exist in isolation from one another; they're tightly interwoven.
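The value of declaratively defined state, as described above, is that drift becomes a simple diff between what you declared and what you observe. Here is a toy sketch of that idea (all field names are illustrative, not from any VMware product):

```python
# Minimal sketch: detect configuration drift by comparing a declared
# (desired) state against the observed state of an environment.
# Field names here are illustrative only.

def detect_drift(desired: dict, observed: dict) -> dict:
    """Return a map of key -> (desired, observed) for every mismatch."""
    drift = {}
    for key in desired.keys() | observed.keys():
        want = desired.get(key)
        have = observed.get(key)
        if want != have:
            drift[key] = (want, have)
    return drift

desired = {"image": "app:1.4.2", "replicas": 3, "run_as_root": False}
observed = {"image": "app:1.4.2", "replicas": 3, "run_as_root": True}

drift = detect_drift(desired, observed)
# Because state is declared, remediation is reapplying `desired`
# (the "repave" of the 3 Rs) rather than hand-editing the live system.
```

Because the desired state is the source of truth, the response to drift is mechanical: repave back to the declaration rather than patching the live system.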
In order to arrive at zero trust, we must fundamentally be able to ingrain constructs into our systems that help us determine their trustworthiness with certainty, and that starts with a strong foundation of identity. So let's go back to the left, to identity and authentication. Clearly, there are keys and certificates we've been using for quite some time. In recent high-profile attacks, we see the entry point is an exfiltrated credential: an attacker gains this key material and then reuses it to perform lateral movement and code injection. Now, if we change these embedded secrets, these embedded credentials or shared key material, to be short-lived, we significantly reduce their utility in the event of an exfiltration. Short-lived here typically means an hour or less; this can be configured based on the requirements of the environment, but it does help eliminate a very large range of attacks.

Now, a problem with short-lived keys is: how do you provision and deliver these credentials to every workload and environment when they are dynamically scheduled and elastically scaled? How do you do that across technology boundaries, platforms, and machines that commonly come and go and scale up and down? Start from the premise that if we're reasoning about a multi-cloud or hybrid cloud environment, and we have a complex distributed system that spans some of these boundaries, then for any two components to trust each other, they must be able to confirm each other's identity and ensure that the messages exchanged between them haven't been tampered with.
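To make the short-lived-credential point concrete, here is a toy token scheme (this is not SPIFFE/SPIRE or any production format; the issuer key and field names are invented for illustration). The token carries its own expiry, so an exfiltrated copy becomes useless after the TTL:

```python
import base64
import hashlib
import hmac
import json
import time

# Toy illustration of short-lived credentials: a signed token carries
# its own expiry, so stolen copies lose value quickly.
SECRET = b"issuer-signing-key"  # hypothetical issuer key, not a real scheme

def issue(subject: str, ttl_seconds: int = 3600, now: float = None) -> str:
    """Mint a token valid for ttl_seconds (default one hour)."""
    now = time.time() if now is None else now
    body = json.dumps({"sub": subject, "exp": now + ttl_seconds}).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def verify(token: str, now: float = None) -> bool:
    """Accept only untampered, unexpired tokens."""
    now = time.time() if now is None else now
    body_b64, sig = token.rsplit(".", 1)
    body = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # forged or tampered
    return json.loads(body)["exp"] > now  # expired tokens are rejected

tok = issue("payments-service", ttl_seconds=3600, now=1000.0)
```

The interesting property is the second check in `verify`: even a perfectly valid signature stops working once the expiry passes, which is exactly why short lifetimes shrink the attacker's window.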
A great example of an implementation codifies the key attributes we see on the screen: short-lived credentials; platform agnostic, so any workload can receive an identity regardless of its form factor; key material that has been cryptographically signed and can be cryptographically verified; and programmatic issuance. It also gets us over the hurdle of: well, I have an API key or I have a certificate, but in order to protect the certificate I must encrypt it, and now I have a decryption key, and I should store that decryption key elsewhere securely, and how do I do that? I encrypt it again, and it's turtles all the way down. So we must move from the paradigm of proof of possession to recognition, analogous to a fingerprint or a retina scan, and with that solve credential zero wherever a workload pops up.

The SPIFFE and SPIRE projects, both at incubation level in the CNCF, are a set of APIs and associated tooling that provide a uniform language for describing a service identity across a wide range of workloads, orchestration systems, and providers, and at different layers of abstraction: if you have confidential computing capabilities, if you have TEEs and TPMs as examples of those, if you need to introspect the different layers of your provider's IAM, your container orchestration framework, and the kernel itself in order to verify those identities. Identities are only issued if workloads meet the shape and size you expect them to be, and the workload is provided with that identity and the key material it can use to authenticate to other systems. Now, this is done at multiple layers.
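As a small aside on what that uniform language looks like: a SPIFFE identity is a URI of the form `spiffe://<trust-domain>/<workload-path>`. The sketch below only checks the basic shape (the official SPIFFE libraries do full spec validation; the example IDs are invented):

```python
from urllib.parse import urlparse

# Sketch of the shape of a SPIFFE ID: spiffe://<trust-domain>/<path>.
# Real validation lives in the official SPIFFE libraries; this only
# checks the basics so the structure is visible.

def parse_spiffe_id(spiffe_id: str):
    """Return (trust_domain, workload_path) or raise ValueError."""
    parts = urlparse(spiffe_id)
    if parts.scheme != "spiffe":
        raise ValueError("not a spiffe:// URI")
    if not parts.netloc:
        raise ValueError("missing trust domain")
    if "@" in parts.netloc or ":" in parts.netloc:
        raise ValueError("trust domain must not carry userinfo or a port")
    return parts.netloc, parts.path

# A hypothetical workload in the "prod.example.org" trust domain:
td, path = parse_spiffe_id("spiffe://prod.example.org/payments/api")
```

Trust boundaries can then be modeled per trust domain: a service only accepts peers whose IDs resolve to a trust domain it federates with.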
I'd encourage you to look at the references at the end of the presentation. There are attestations within the ontology of the projects for verifying the infrastructure a workload runs on, ensuring it's trustworthy, it's not rogue, it is infrastructure that belongs to you and hasn't been tampered with, as well as the workload itself, be it a container or a function; so there are two layers, node-level as well as process-level introspection. Now, this is fairly general purpose. Once we have this in place, it solves more than the credential-zero problem we talked about: you're not only cross-authenticating between workloads, you can authenticate from a workload to a secret store, or from a workload to a database, without using any username or password, doing direct authentication with the certificate. The framework supports both JWT-based as well as X.509-based identities. It can also do secretless authentication to your cloud providers: if you're talking to AWS, to RDS, or to a Lambda function, you can use your SPIFFE identity with an IAM binding and receive in exchange an STS token with which you can talk to third-party services outside of the identified infrastructure, and you can model trust boundaries and trust domains accordingly.

So that's SPIFFE and SPIRE in a nutshell; I hope that's piqued your attention. Let's look at the next level up, at what we can start to model once we have that strong bedrock of identity. It lets us reason holistically about privilege and enforce least privilege, which means that every function or user in the system that initiates a task, as well as the task itself, should operate with the least amount of privilege necessary to complete the job. Now, supply chains are complex, and software factories are complex. More cloud adoption means a proliferation of cloud native solutions, and more cloud native solutions means more moving parts in producing software.
And more moving parts means more tooling, more picks and shovels. So we can agree that supply chains can be overly complex, but it helps us as engineers to break them down into discernible parts and implement defense in depth that's measured against an organization's level of risk and assurance. At every stage of a logical pipeline, or a logical supply chain extending a little further, there are some key things to think about. First, secure the source code: know who's in your Git repo, enforce MFA, and sign your commits. Next, secure the dependencies: scan them, and generate software bills of materials that can be carried along with the artifacts. Then harden and secure the pipeline itself. Move away from "well, release engineering came up with the system and it's being handed off to security to come and harden it"; you want to make it intrinsically secure from the onset, leveraging state-of-the-art technology to preserve integrity and confidentiality across every single step, where constructs like identity and others we're going to talk about come into the picture. You want to secure the artifacts produced from source as the byproduct of the pipeline itself, and start moving towards reproducible builds. And ultimately, what goes into production? Those are the crown jewels, the keys to the kingdom. You want to gate what makes it into production based on the metadata and all the insights and telemetry you have around your artifacts: if you see that a particular artifact has a severe vulnerability per its SBOM, you might want to enforce some gating criteria at that point. There are different ways to do binary authorization, but it's something we want to put in your mind to think about and strongly consider.
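The gating idea just described can be sketched as a simple promotion check: given the vulnerability findings attached to an artifact's SBOM, allow promotion only if nothing exceeds the organization's severity ceiling. The severity names and threshold policy below are illustrative assumptions, not a real binary-authorization API:

```python
# Sketch of a promotion gate: decide whether an artifact may be promoted
# to production based on vulnerability findings from its SBOM scan.
# Severity names and the threshold policy are illustrative assumptions.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def may_promote(findings, max_allowed="medium"):
    """Allow promotion only if every finding is at or below the ceiling."""
    ceiling = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= ceiling for f in findings)

clean = [{"id": "CVE-2020-0001", "severity": "low"}]
risky = clean + [{"id": "CVE-2021-44228", "severity": "critical"}]
```

In practice this check would run as an automated step in the pipeline, using the SBOM that was generated earlier and carried along with the artifact.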
So let's look next at the other supply chain tools you'll want to assess and evaluate as part of your toolkit. Ultimately, DevSecOps is about giving developers the autonomy and agency to develop as fast as they want, but to do so securely, giving them a range of options and the technology they want to secure applications up front. There are several screenshots in this picture. On the left side of the slide, we showcase a screenshot of the Harbor registry, and we have VMware Carbon Black Cloud, which leverages a considerable amount of open source. There is other great tooling, not pictured here, like Google's Open Source Vulnerabilities database, OSV.dev. It's all driving towards creating experiences that improve the signal-to-noise ratio, boosting the signal by leveraging telemetry and producing actionable insights. There are a number of other considerations within a pipeline, which we see on the right-hand side: the latest and greatest Linux Foundation and CNCF projects, such as The Update Framework (TUF) for signing and verification, and in-toto for supply chain attestations, giving a clear understanding of the inputs, outputs, and expected steps to be carried out, with high fidelity and confidence that there hasn't been any deviation from architectural intent. Then there is the umbrella of projects under Sigstore, which leverages tamper-evident transparency logs and ledgers: in the event of a compromise, it's readily apparent, known to the world, that something off has occurred, while during regular business, knowing that everything is intact and as expected becomes trivial to ascertain. So these are some of the noteworthy projects we wanted to highlight and want you to start looking into.
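The tamper-evidence property behind such transparency logs can be illustrated with a toy hash chain (real systems like Sigstore's Rekor use Merkle trees and public verifiability on top of this; the sketch only shows the core intuition that editing history breaks the chain):

```python
import hashlib

# Toy hash-chained log: each record's hash covers the previous record's
# hash, so rewriting any past entry is detectable. This is only the core
# intuition behind transparency logs, not a real Rekor implementation.

GENESIS = "0" * 64

def append(log, entry):
    prev = log[-1]["hash"] if log else GENESIS
    h = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append({"entry": entry, "hash": h})

def is_consistent(log):
    """Recompute the chain and flag any rewritten entry."""
    prev = GENESIS
    for record in log:
        expected = hashlib.sha256((prev + record["entry"]).encode()).hexdigest()
        if expected != record["hash"]:
            return False
        prev = record["hash"]
    return True

log = []
append(log, "artifact sha256:abc signed by builder")  # hypothetical entries
append(log, "artifact sha256:def signed by builder")
```

During normal operation the chain verifies trivially; after a compromise, any rewritten entry fails verification, which is exactly the "readily apparent" property described above.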
This is by no means an exhaustive list, but helping make all this innovation consumable is our primary goal, which takes me to the next slide. You've done quite a bit within development; now, tying it all together as we gear up to the right, you do admission control and policy as code, declaring your organization's compliance and regulatory objectives and using gatekeeping to enforce them. Rather than blocking applications at the moment of a scaling event, you know upfront whether something should be allowed, because it meets the bar to run in production. Here we look at validating and mutating admission webhooks and Gatekeeper, and at writing declarative policies around these so you can reason about different risk levels. Consider whether a workload has an SBOM: if this particular environment is not highly regulated, the absence of an SBOM may be fine; but if it's subject to PCI, then without an SBOM, without a detailed understanding of the composition of that workload, it shouldn't be present there, and the request gets turned down. This is a feedback loop, incentivizing and driving better development outcomes.

Henri, if you would take me to the next and last slide before I pass it back to you. Something I mentioned briefly is that there's great technology, but part of the problem is that it's hard to use and hard to consume. There are certainly end-user organizations that are doing a great job at it and lighting the path for others. We have partnered with many of these end-user organizations, our customers, to make sure that we codify these principles and best practices, much as we have done with Kubernetes: externalize and generalize them and make them applicable to the outside world, so we drive to a world where our applications are safeguarded.
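The SBOM admission policy just described boils down to a small decision function. The sketch below uses invented field names, not the real Kubernetes AdmissionReview schema or Gatekeeper's Rego language, but the logic is the same:

```python
# Sketch of the admission decision described above: a workload without
# an SBOM is tolerated in unregulated environments but rejected in
# regulated (e.g. PCI) ones. Field names are illustrative only.

def admit(workload, environment):
    """Return (allowed, reason) for a workload in a given environment."""
    if environment.get("regulated") and not workload.get("sbom"):
        return False, "denied: regulated environment requires an SBOM"
    return True, "allowed"

ok, reason = admit({"name": "api", "sbom": None}, {"regulated": True})
```

In a real cluster this decision would be rendered by a validating admission webhook such as Gatekeeper, with the policy expressed declaratively so it can be reviewed and versioned like any other code.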
We're putting a lot of effort into producing templates and code examples so that you can progressively arrive at what's modeled in this picture, in a crawl, walk, run manner. This is entirely modular and pluggable; some of the parts you see in the picture are optional, but at this time they're the most mature and the ones we advocate for. It covers all the way from source to production: the different steps, triggering your testing, handling metadata, handling the references between an artifact and its metadata and between an artifact and its signature, helping you reason about all of it in a very logical way, and giving you the constructs to realize the benefits from it. So we're here from VMware Tanzu once again, helping you make supply chain security approachable and consumable, and happy to help you get started. Henri, with that, I'll pass it back to you to close us out.

Great, that was very insightful. And maybe to echo that: we're really on this mission of making things composable, creating the bills of materials, being able to continuously build your software, providing gatekeeping mechanisms on what gets deployed, and reasoning over what should really be running. I'm also very excited about the Tanzu portfolio and our partnership with GitLab, and you can come check things out on our website as well. We've provided some key links to the individual projects, and a lot of our projects are open source and available for you to start experimenting with, really pulling the thread of building a secure supply chain. With that, I want to thank everybody for attending this session. We're looking forward to the engagement afterwards, so do reach out to us. Thank you so much, and have a wonderful conference.