Hello everyone. We are really excited to be part of the proceedings, and welcome to our session, Deja Vu: Let's Think About Security Again. My name is Shripad Nadgowda. I'm a Senior Software Engineer at IBM Research. My current research focus is on driving innovations around DevSecOps, and I'm the Chief Architect of Code Risk Analyzer, the DevSecOps solution we delivered to IBM Cloud last year. And I'm very happy to be joined by Paolo. Hi, my name is Paolo Dettori. I'm a Senior Technical Staff Member at IBM Research in Yorktown Heights, New York. My current research interest is in cloud technology, containers, and container orchestration. I've been working for the past few years on several projects related to Kubernetes technologies, such as IKS, the IBM Kubernetes Service, and IBM MCM, the IBM Multicloud Manager for Kubernetes. In the past few months, I've been working with the Crossplane community, and I'm currently a maintainer of the Crossplane provider for IBM Cloud. Thanks, Paolo. So we are seeing a lot of momentum, a lot of interest across the community and across the industry in embracing Crossplane. So we believe it's time for an intervention: we need to take a step back and look at the security implications of this. We have seen that misconfiguration is still the number one cloud vulnerability, according to the National Security Agency. And there is a high cost associated with such misconfigurations, and it is growing; we have listed a survey which shows that the cost can grow into the trillions of dollars. More recently, Gartner has published a report estimating that by 2025, 99% of cloud security failures are going to be the responsibility of customers rather than the service provider. We'll put these numbers in perspective when we discuss the role of developers and their importance in Crossplane. Now let's revisit the security landscape from the vantage point of four different actors.
First, we have the cloud provider. Cloud providers are focused on improving the experience for cloud users. There is tremendous momentum in onboarding new services on the cloud; across AWS, Azure, and IBM there are more than 150 products and services. We looked at some of the service configurations exposed to the user, based on their respective Terraform module inputs and variables. And these input parameters, the configuration parameters, can number in the twenties, thirties, or even sixties. It is very easy for users or developers to make mistakes when you have so many inputs to care about. Again, there are different modalities the cloud provider enables for accessing and changing configurations: the user interface, the CLI, APIs, and some programming constructs built on top, like Terraform and Crossplane. So this adds a new layer of complexity, because now we need to secure these additional layers as well. Now let's think about the security director. As a security director, they need visibility and awareness across the security standards; they need to evaluate the posture of their cloud infrastructure and their cloud workloads. There are industry standards like NIST, Kubernetes CIS, and Docker CIS. They cover various security controls across workloads, across Kubernetes core services, and across cloud services. But if you look into the numbers, they are huge; the sheer number of controls means we need automation. And there are emerging standards like OSCAL that encode the control-related information from the security standards in a machine-readable format, so it can be consumed and easily automated. There are enterprise solutions available, like AWS Security Hub or the IBM Security and Compliance Center, that provide central management of compliance across organizations and regulatory guidelines. Now let's look at the developer, because the developer is the one person who is programming the cloud.
Now, if we revisit our days with OpenStack, I remember writing a lot of custom shell scripts to provision virtual machines and configure networks. We have evolved from that. Now we have standard programming constructs like Terraform, Ansible, and Crossplane to standardize this cloud programming model. Even DevOps has ensured that we have consistency across build, test, and deploy practices across industries and communities. And now there is the emerging practice of DevSecOps, which essentially puts the developer at the center of security practices. We don't expect developers to be security experts, and we want to keep it that way. But we want to enable developers to identify security problems early in the development process. So we want to automate the security checks and embed them into the existing development practices. We don't want to invent new development practices or new tools for developers to learn; whatever their existing practices are, we need to strengthen them with the security process. Also, we need to translate these complex security measures that developers hardly understand and provide them with actionable recommendations: you don't need to care about the CM-6 configuration control in NIST; if you just change this variable name, add this value, or change this firewall setting from on to off, you'll be compliant. That's the kind of feedback we want to give developers. Now, as a community, putting that Gartner number in perspective again, that 99% of failures are going to be blamed on the customer side, we need to empower developers with the right set of tools, and we need to educate developers on the importance of DevSecOps practices. That's how, as a community, we can help and grow. Now, let's revisit the current DevSecOps practices across infrastructure as well as application code.
Now, if you look into Git repositories, all the artifacts present in a Git repository can be broadly classified into four different categories. We have build artifacts, which include your Dockerfile or package manifests, that say how your application is going to get built. We have deployment artifacts, like deployment YAMLs and Helm charts, that dictate how your application is going to be deployed. Then there are configuration artifacts, like ConfigMaps or network policies. And finally there are infrastructure-as-code artifacts, like your Terraform, Crossplane, or Ansible files. We have existing DevSecOps solutions that are embedded into this Git workflow. Whenever a developer makes any change, a security CI pipeline is triggered. It accesses these source artifacts and performs various checks. On the build artifacts, we determine all the dependencies, all the open-source packages; we determine whether there are any vulnerabilities, what licenses they are using, and what the risk of those vulnerabilities is. On the deployment artifacts, we analyze whether there are any misconfigurations, like you haven't set some resource limits, or you're running as privileged when it's not required; we also measure the risk of such misconfigurations. We check the configuration artifacts to see whether there are any application misconfigurations: if I'm deploying some application, have I set the right configuration for it? Am I using the right protocol? Am I using the right certificates? And finally, for the infrastructure artifacts, like our Terraform or Crossplane files, we can evaluate them and identify any misconfigurations or security holes early in development, before we even use them for provisioning. And then once our artifacts pass through this CI pipeline, we diverge, right?
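The deployment-artifact checks described here usually boil down to concrete manifest changes. As a minimal sketch (all names, images, and values are illustrative, not taken from the talk), a container that would pass the resource-limit and privilege checks might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: query-service                 # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: query-service
  template:
    metadata:
      labels:
        app: query-service
    spec:
      containers:
        - name: query-service
          image: registry.example.com/query-service:1.0   # illustrative image
          securityContext:
            privileged: false                 # don't run privileged unless required
            allowPrivilegeEscalation: false
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:                           # missing limits would trigger the CI check
              cpu: 500m
              memory: 256Mi
```

The actionable feedback from the pipeline points at exactly these fields, rather than citing the abstract control they implement.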
We have a separate CD pipeline for infrastructure, like IBM Cloud Schematics, that takes our infrastructure-as-code artifacts and provisions the associated resources on the cloud. And then for the application, we typically produce some intermediary artifacts, like images in the registry, and then we use CD pipelines, like Argo CD or Tekton, to deploy our applications on the cluster. We typically have a second layer of protection in the form of an admission controller or Gatekeeper, where we perform enforcement checks. And once our application is deployed and our cloud services and infrastructure are provisioned, we have separate, continuous monitors to see whether new security issues pop up or malicious activities happen on the cloud or on the workload. All these signals from our continuous monitors get fed into a centrally managed compliance dashboard, where a security director can easily evaluate the overall security posture of the infrastructure. Now we'll see how Crossplane is affecting this existing DevSecOps workflow. Thank you, Shripad. So the first observation is that with Crossplane, application and infrastructure can share the same Kubernetes resource model, the same KRM. That basically allows us to use the same tool chain to build a pipeline for both application and infrastructure deployments. So the DevSecOps pipelines for apps and infrastructure are now the same; we can use the same tools, because I don't need to introduce different tools. For example, if I'm doing a deployment with Terraform, I have to use the Terraform CLI, but here I can just use the existing kubectl CLI for deploying both application and infrastructure.
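To make the shared-KRM point concrete: with Crossplane, a piece of cloud infrastructure is itself a Kubernetes object, applied and inspected with the same kubectl used for applications. A sketch of an IBM Cloud service instance as a managed resource follows; the API group and field names follow the experimental provider-ibm-cloud examples as we remember them and may have changed, so treat them as illustrative:

```yaml
# A Cloud Object Storage instance expressed in KRM, so the same GitOps
# tooling (kubectl, Argo CD, Tekton) that deploys the app can deploy it.
apiVersion: resourcecontrollerv2.ibmcloud.crossplane.io/v1alpha1
kind: ResourceInstance
metadata:
  name: example-cos                  # illustrative name
spec:
  forProvider:
    name: example-cos
    serviceName: cloud-object-storage
    resourcePlanName: standard
    resourceGroupName: default
    target: global
  providerConfigRef:
    name: ibm-cloud                  # references the provider credentials config
```

Because this is an ordinary Kubernetes resource, the same CI checks that scan deployment YAMLs can scan it in the same pull request.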
In addition, we now have the possibility of a single pane of glass, where we provide clear feedback to developers about potential issues in the deployment of the application, in its configuration and vulnerabilities, as well as in the infrastructure configuration. So we can see that Crossplane is certainly simplifying the model. If we go to the next chart, we'll see how the pipeline Shripad was illustrating earlier, where we had two separate pipelines, one for infrastructure and one for application, can now be merged into one single pipeline, because I'm sharing the same tool chain. In this case, I don't need a separate Terraform deployment pipeline, for example; I can have one single pipeline, because I'm using the same model and the same tool chain, so I can do things here using, for example, Argo CD or Tekton. I don't need to introduce something different. This actually simplifies things and makes them more accessible to developers, who also gain the ability to provision the infrastructure they need based on the application requirements. And this is really what we see as application-centric provisioning and configuration of infrastructure when we start using Crossplane in this picture. Next. As an example scenario, we can see how we can actually leverage all these ideas and concepts with Crossplane. We have here a cloud-native app with three microservices: a UI front end, a command microservice, and a query microservice, following the standard CQRS pattern. And on the back end, we have Cloud Object Storage, which is essentially an S3-compatible service from IBM Cloud. For this particular example, we're using the IBM Cloud provider, but of course we could use any other cloud infrastructure provider with Crossplane.
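The merged pipeline can then be expressed in one CD tool. A minimal Tekton sketch is below; the task names `devsecops-scan` and `kubectl-apply` are hypothetical placeholders standing in for whatever scan and deploy tasks a team actually uses, not tasks shipped with Tekton:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: app-and-infra                # illustrative name
spec:
  workspaces:
    - name: source                   # the cloned Git repository
  tasks:
    - name: security-checks
      taskRef:
        name: devsecops-scan         # hypothetical: scans build, deploy, config,
      workspaces:                    # and infrastructure artifacts together
        - name: source
          workspace: source
    - name: deploy
      runAfter: ["security-checks"]
      taskRef:
        name: kubectl-apply          # hypothetical: one apply covers app + infra,
      workspaces:                    # since both are KRM objects
        - name: source
          workspace: source
```

The key design point is that there is no second, Terraform-specific pipeline: the deploy task handles application and infrastructure manifests identically.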
And we are actually running this on a Kubernetes cluster where we have the Crossplane runtime installed, with the IBM Cloud provider, and we also have the Crossplane Helm provider. So we do the deployment of the Helm application using the Helm provider, and we use the Crossplane resources from the IBM Cloud provider, and a Composition, to configure the Cloud Object Storage and the other related resources that I need for my configuration. Now let's take a look next at what the insights I can get actually look like. First, a few words on the IBM Cloud provider. This is still an experimental release; we released it last year, and we are currently maintainers of this project in the Crossplane community. It provides a number of features, such as support for the IBM Resource Controller API, which allows provisioning a number of IBM Cloud services from the IBM Cloud catalog; Go templating to shape credentials based on application needs; and support for the IBM Cloud Databases API, which is used to configure certain characteristics of IBM Cloud database services, such as auto-scaling, scaling, IP allowlists, etc. We also currently support the IAM API, so we can configure security constraints around the services. We also allow importing existing cloud services into the provider, and on the roadmap we are currently adding support for more IBM Cloud APIs and looking at generating the provider from OpenAPI definitions. Let's take a look now at the kind of insights I can get when I run that example application through my single pipeline. First of all, I can get information about the configuration of my deployment, and in this particular case I'm getting, as a comment in my PR, that there is some issue with the resource limits for my container.
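The object store in this scenario would typically be requested through a composite resource claim backed by a Composition. A hypothetical sketch follows; the group `example.org`, the claim kind, and the Composition name are all invented for illustration:

```yaml
# Hypothetical claim: the app team asks for "an object store" and the
# Composition (authored by the platform team) maps it to IBM Cloud resources.
apiVersion: example.org/v1alpha1           # hypothetical XRD group
kind: ObjectStore                          # hypothetical claim kind
metadata:
  name: cqrs-backend
  namespace: demo
spec:
  compositionRef:
    name: ibm-cos-composition              # hypothetical Composition name
  writeConnectionSecretToRef:
    name: cos-credentials                  # secret consumed by the command and
                                           # query microservices at runtime
```

This is the application-centric angle: the claim expresses what the application needs, while the Composition encodes how (and how securely) it is provisioned.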
So as a developer, I will know that I need to adjust those resource limits so that I can pass the checks. Then I also get a report on vulnerable packages that I'm using in my application, in my microservices, so I have to update those packages to the latest versions. And finally, and this is the interesting part that Crossplane enables, I can get, in the same pane of glass, information about the security of my infrastructure configuration. All this configuration, of course, is somewhat dictated by security controls, such as the NIST security controls or other standards that the compliance officer sets up for me as a developer. And then I get a clearer indication of exactly what I need to do in order to be compliant with those controls. For example, in this case I need to make sure that the COS service allows only a certain range of IPs to invoke the service, and I have to make sure that I set up LogDNA for logging and tracking the API calls to the service, and so on. So this gives me very clear feedback as a developer, and then I can actually make sure that my application is compliant with the security controls that were set up. Thank you. Thanks, Paolo. So we have been thinking about what a sustainable security model would be, right? One thought is: what if we make a secure default configuration for every cloud service? That would essentially mean that, as a developer, whenever I create some cloud resource, it is secure by default. It's basically mandatory for me to provide all the necessary security controls or security configurations, and the only configurations available are those that disable the security guardrails. So as a developer, I'm well aware whenever I'm making any security misconfiguration, right? And whenever I'm doing it right, it basically happens out of my sight; it's oblivious to me. So I think this oblivious security is going to be very important.
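One way to picture this oblivious-security idea in Crossplane terms: the Composition behind a claim always provisions the secure configuration (IP allowlist, activity tracking), and the claim surface only exposes explicit opt-outs. A hypothetical sketch, with the group, kind, and fields invented for illustration:

```yaml
apiVersion: example.org/v1alpha1           # hypothetical group
kind: SecureObjectStore                    # hypothetical claim kind
metadata:
  name: orders-store
spec:
  allowedIPs:                              # required field: provisioning fails
    - 192.0.2.0/24                         # without it (RFC 5737 doc range)
  disableActivityTracking: false           # the only path to an insecure setup
                                           # is an explicit, visible opt-out
```

With this shape, doing the secure thing requires no security expertise, and doing the insecure thing leaves an unmistakable trace in the pull request.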
Now, if you look into the stack of standards that are available: for virtual machines, we have standards like CIS and the security controls that guide us on how we should configure virtual machines and our operating systems. Then we have the Kubernetes plane on top of that, where we have the CIS benchmark, which guides us on how we should configure our worker nodes, our master nodes, and our core services. For our workloads, we have the Docker CIS benchmark, NIST, PCI DSS, and many other security controls. For cloud services, we again have some NIST security controls. So essentially we have security controls across the stack, but what about Crossplane? This is essentially the question we are trying to ask about Crossplane and the runtime providers: do we consider them workloads on top of Kubernetes, subject to the workload security profile? Are they part of the core Kubernetes control plane, subject to the Kubernetes CIS security profiles? Or do we actually need a separate category to model a security profile for Crossplane? With that, we would like to conclude our presentation. Thank you for your time; we are open for questions. Thank you. Thank you, everyone.