Welcome to the Cloud Foundry Summit. It's good to be back here. I was here last year presenting on UAA, but my role has changed now. I'm still doing some IAM, but I've taken on a broader role with platform security. On the Pivotal front, we've formed something called the Security Leadership Council, and I'm part of that, and Ramiro is here with me as well. He leads everything networking and the security aspects too, and we have John Field, who is not here. So the three of us are responsible for security of the platform in all its shapes and forms. I've been with Pivotal for four years, and it's been an interesting journey. My background is in enterprise security; I spent nearly 14 years in the industry working on various security products. But this has been a growing and learning experience, like learning how to do product management in open source.

So we are here to talk about the security roadmap for Cloud Foundry. I do have some things which are Pivotal specific, but I'll make sure I call them out during the presentation. If you have any questions, please hold them for the end; I'll leave some time for Q&A. I've put the talk into various sections, so I'll start off with platform IAM and move to the other sections from there.

On the platform IAM front, I wanted to start off with the announcement that we've certified UAA as an OpenID Connect IDP, so we are now officially listed on the OpenID Connect certification website. What this means is that we, as a certified provider, can interoperate with other identity providers and relying parties in the industry. We've certified not just UAA, but also the Pivotal proprietary version of it, which is the Pivotal Single Sign-On service, so both of them are listed on the website.
This is coming at an interesting time, because if you look at Kubernetes and how identity integration is done there, you can basically bring your own identity provider, but it needs to be OIDC compliant, so the certification plays well for us there. From a UAA and Kubernetes integration standpoint, UAA brings in its proxy capabilities. What that means is that if you have existing enterprise user stores, which could be enabled via SAML or OpenID Connect or LDAP, UAA is able to proxy the user identity from there and present a consistent identity to Kubernetes. This comes in very handy when you are dealing with a heterogeneous enterprise identity system; UAA brings some stability to that heterogeneous environment by presenting a common layer.

From a UAA standpoint, the integration is pretty straightforward: you specify the various OpenID Connect parameters. One thing I wanted to call out here is that UAA supports discovery, so that comes in handy; there are not a lot of configurations to be specified manually. Beyond that, there are two important features. First, UAA is able to propagate external group memberships, so when you are setting up access for your Kubernetes cluster, you can map your Kubernetes roles to external group memberships in SAML or LDAP. Second, we've introduced ID token refresh. Previously, only access tokens could be refreshed, but since this is an OpenID Connect integration, if you are using the Kubernetes command line interface and you plug in the ID token, for longer-lived access you can now plug in the refresh token as well, and the ID token can be refreshed through the Kubernetes flow. So those are the updates on the IAM side.
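As a rough sketch of why discovery saves manual configuration: a relying party only needs the issuer URL, and the standard well-known document supplies the rest of the endpoints. The hostname and endpoint values below are illustrative placeholders, not an actual UAA deployment.

```python
# Minimal sketch of OIDC discovery from a relying party's point of view.
# The UAA hostname is hypothetical; the well-known path is standard OIDC.

def discovery_url(issuer):
    """Build the standard OIDC discovery document URL for an issuer."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

def extract_endpoints(discovery_doc):
    """Pull out the fields a relying party (for example a Kubernetes API
    server configured with an OIDC issuer) typically needs."""
    return {key: discovery_doc[key]
            for key in ("issuer", "authorization_endpoint",
                        "token_endpoint", "jwks_uri")}

# Example discovery document (abbreviated; values are illustrative only).
doc = {
    "issuer": "https://uaa.example.com/oauth/token",
    "authorization_endpoint": "https://uaa.example.com/oauth/authorize",
    "token_endpoint": "https://uaa.example.com/oauth/token",
    "jwks_uri": "https://uaa.example.com/token_keys",
    "scopes_supported": ["openid"],
}

print(discovery_url("https://uaa.example.com/"))
print(extract_endpoints(doc)["jwks_uri"])
```

In practice the relying party fetches the document over HTTPS once and caches it; the point is that only the issuer URL has to be configured by hand.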
I'm doing a more detailed talk tomorrow around IAM. I'm not sure what time it is, but I'll try to update you toward the later half of the presentation; that will be a more involved feature update of everything we've done on the UAA side.

Another project I wanted to introduce is Perm, the permissions project, which got spun off, I think, more than a year back now. It was created with the intention of standardizing fine-grained authorization. What the permissions project gives you is a standard API for defining roles and for evaluating roles, and this was done initially with the intention of standardizing role-based access control for Cloud Controller. We started off with that work, but it had to be put on hold because of the V2 API. In Cloud Controller there are two versions of the API, V2 and V3. We started integrating Perm to provide a consistent authorization experience on the V2 API and hit some roadblocks, so we decided to first perform a complete V3 migration before Perm is integrated. That aspect of the project is currently on hold until the V3 migration can happen.

While that is happening, we've been able to add other features to Perm: the API itself is authenticated and authorized, and there are auditing and logging capabilities as well as backup and recovery; it's a BOSH release, so we are able to leverage those capabilities. We are also planning to add features around how roles can be derived from external group mappings, so that's some future work in that arena. Future-wise, where we see Perm is basically as a common authorization plane across CFCR and CFAR, because if you look at platform IAM today, we are doing it in a very siloed manner.
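The two halves of such an authorization API, defining roles and evaluating them, can be sketched as below. This is a hypothetical illustration of the pattern, not Perm's actual API or data model; all names are invented.

```python
# Hypothetical sketch of a fine-grained authorization engine in the spirit
# of the permissions project: roles are named sets of permissions, and
# evaluation checks whether any role assigned to an actor grants a
# (action, resource) pair.

class Authorizer:
    def __init__(self):
        self.roles = {}        # role name -> set of (action, resource)
        self.assignments = {}  # actor id -> set of role names

    def define_role(self, name, permissions):
        self.roles[name] = set(permissions)

    def assign_role(self, actor, role):
        self.assignments.setdefault(actor, set()).add(role)

    def has_permission(self, actor, action, resource):
        return any((action, resource) in self.roles.get(r, set())
                   for r in self.assignments.get(actor, set()))

authz = Authorizer()
authz.define_role("space-developer", {("app.push", "space-1"),
                                      ("app.read", "space-1")})
authz.assign_role("alice", "space-developer")
print(authz.has_permission("alice", "app.push", "space-1"))  # True
print(authz.has_permission("bob", "app.push", "space-1"))    # False
```

The value of standardizing this split is that any component (Cloud Controller or otherwise) can delegate both role definition and the evaluation check to one shared service.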
So we have the Cloud Foundry Application Runtime, wherein you have a UAA which is responsible for plugging in the identity, but the roles and tenancy are very much baked into the platform experience of the application runtime, and then we have a services marketplace which relies on the same authorization. On the other side, we have CFCR, and again it comes with its own roles and tenancy model, and its own UAA. So it's a very siloed approach. From a customer standpoint, it's a very split experience, because to a customer these are just product abstractions; an operator or security admin should be able to drive consistency in terms of setting up identity providers and setting up tenancy.

What we see as the future is this: external identity doesn't change much, it still comes from an existing system, but on the platform front we want to create a common IAM experience, with UAA playing the role of identity server and Perm that of fine-grained authorization engine. We want to take the tenancy and the roles one level above and make them global, such that you can define your tenancy, by which I mean orgs and spaces, across platforms. Today you define them within the CFAR experience, but if you take it a level above and think about the operator experience, you can set up roles that cut across these platform abstractions. So you can define a role that gives you access to both the application runtime and the container runtime. There is also work underway to convert the marketplace into a global marketplace, so anything authorization and authentication related will also be driven through that common layer, and the marketplace will appear as a common entity to both platforms.

So let's switch gears and get to credential management.
I think everyone here is aware of CredHub, so I don't have to make that introduction, but what we've been doing in the last few months is increasing the adoption of CredHub with different service brokers. A quick primer on the role of CredHub: previously, if applications were to interact with services, the service credentials themselves were proliferated in different areas within the platform. They lived as application environment variables, so application developers had access to them, and the Cloud Controller API was also holding onto a record of those credentials. That's not a great experience from a security standpoint, and that's where CredHub brings in the value. All these credentials become references: only CredHub knows what the actual credential is, and everyone else, the CAPI, the application environment, holds onto references. On the Pivotal front we have Spring Cloud Services, MySQL, and RabbitMQ, and the credentials for those services can be references in environment variables; at runtime, CredHub, along with the service broker framework, is responsible for plugging in the actual credentials. That means application developers and any other parties have no knowledge of the credential itself. From an adoption standpoint, we are making sure that all the services out there in the marketplace use this mechanism for credential handling.

On CredHub enterprise readiness, most of the pieces are already in place. The CredHub API itself is authenticated with mTLS, and we've added support for UAA tokens. On the authorization front, CredHub's authorization model is access control lists, but we are improving it to add namespace-based authorization as well.
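The reference pattern can be sketched as follows. The "((path))" reference syntax mirrors CredHub's style, but the store here is a stand-in dictionary and the service path is invented; this is not the real CredHub API.

```python
# Sketch of the credential-reference pattern: the app environment holds
# only a reference, and only the credential store can resolve it at
# runtime. The store below is a stand-in for CredHub, not the real API.

import re

STORE = {"/my-service-broker/my-db/credentials":
         {"username": "app", "password": "s3cret"}}

def resolve(env_value):
    """Replace a ((path)) reference with the stored credential, if any."""
    match = re.fullmatch(r"\(\((.+)\)\)", env_value)
    if not match:
        return env_value           # plain value, pass through unchanged
    return STORE[match.group(1)]   # only the store knows the real secret

# What the app environment actually contains is just the reference:
vcap_entry = "((/my-service-broker/my-db/credentials))"
print(resolve(vcap_entry)["username"])
```

The key property is that anything inspecting the environment or the Cloud Controller record sees only the reference string; resolution happens inside the trusted interpolation step.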
What that means is, especially when you are running CredHub as a service, you want to create buckets: a set of applications gets a certain kind of access to its own credentials in CredHub, and other apps should not be able to get the same level of access. With namespaces you can create those boundaries, so sets of applications have limited access to their own credentials within CredHub. This is important when running it as a service, and it's work in progress right now.

On the auditing front, CredHub now generates audit logs in a format called CEF, the Common Event Format. This is followed by other Cloud Foundry components as well; CAPI and UAA, for example, both follow the Common Event Format.

Roadmap-wise, we are planning to add support for more encryption providers. Today CredHub supports the Luna HSM, and in future we plan to interoperate better with the IaaS key management solutions: AWS, Azure, and GCP each have their own key management solution, and we are planning to integrate with them better. Since CredHub is responsible for credential management, another thing we want to do is provide a better experience around rotation of credentials, and we want to start with platform credentials first. So one feature we would like to deliver is zero-downtime rotation for all platform credentials; that's something you can expect from the product. Finally, from a Kubernetes interoperability standpoint, Kubernetes defines a gRPC interface for encryption and decryption of secrets, and CredHub is planning to adopt that same interface. So if there is a provider that works in the Kubernetes ecosystem, CredHub will be able to leverage it or interoperate with it in a better way, because it will be using a consistent API interface there.
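For context on the CEF format mentioned above: a CEF event is a pipe-delimited header followed by key=value extensions. The sketch below shows that general shape; the vendor, product, and field values are illustrative placeholders, not the actual fields CredHub emits.

```python
# Illustration of the Common Event Format (CEF): a fixed pipe-delimited
# header (version, vendor, product, device version, signature id, name,
# severity) followed by space-separated key=value extension fields.
# Field values here are invented for illustration.

def cef_line(vendor, product, version, signature_id, name, severity, **ext):
    header = f"CEF:0|{vendor}|{product}|{version}|{signature_id}|{name}|{severity}"
    extension = " ".join(f"{k}={v}" for k, v in ext.items())
    return f"{header}|{extension}"

line = cef_line("cloud_foundry", "credhub", "1.9.0",
                "GET", "credential_access", 0,
                requestMethod="GET", src="10.0.1.5")
print(line)
```

Because the header positions are fixed, SIEM tools that already parse CEF from other components can consume these audit logs without custom parsing.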
Okay, moving on to networking. One of the features we've added lately is the dynamic egress policy. If you're aware of application security groups (ASGs), these are policies you create when applications have to interact with external services, and one of the drawbacks was that every time you applied an application security group, the applications needed a restart. So when you set up that policy and want it applied, the apps get restarted, and potentially there could be downtime. With the dynamic egress policy, one of the main benefits is that no restart is needed: any time you set up an egress access policy for the app, it is applied without restarts, so you don't get that downtime experience. Another thing is that application security groups could only be applied at a space level, so self-service, with app developers defining their own policies, was not possible; with this new model, egress policies can be defined at an app level also, so you get that flexibility. Finally, ASGs predate the container-to-container (C2C) security policy, but the dynamic egress policy is based on the C2C concept, so you get a single source of truth for your C2C policies as well as for your outbound egress policies. Overall you get a better experience, a single source of truth, and no downtime for apps.

From a networking extensibility standpoint, from a Pivotal perspective, we have integration with NSX. That is a multi-cloud solution and gives you advanced capabilities around network segmentation, distributed firewalls, and load balancing, so overall it's a great experience if you're using NSX.
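The shape of an egress rule as described above (like a firewall rule, but bound to an app rather than a space) can be sketched like this. The field names and values are illustrative, not the actual policy API schema.

```python
# Sketch of an app-scoped egress rule and its evaluation: destination
# range, port range, and protocol, as in a firewall rule. Field names
# are illustrative only.

import ipaddress

def matches(rule, dest_ip, dest_port, protocol):
    """Check whether outbound traffic is allowed by a single egress rule."""
    in_range = ipaddress.ip_address(dest_ip) in ipaddress.ip_network(rule["destination"])
    return (in_range
            and rule["ports"][0] <= dest_port <= rule["ports"][1]
            and rule["protocol"] == protocol)

rule = {"app_guid": "web-frontend",     # applied per app, no restart needed
        "destination": "10.10.0.0/16",
        "ports": (5432, 5432),
        "protocol": "tcp"}

print(matches(rule, "10.10.3.7", 5432, "tcp"))    # inside the range: allowed
print(matches(rule, "192.168.1.1", 5432, "tcp"))  # outside the range: denied
```

Because the rule is data that the enforcement point re-reads, changing it takes effect without restarting the workload, which is the operational win over restart-bound ASGs.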
But we want to play well in the open source community, and we want to make sure we are able to support options beyond NSX. So if you want to bring your own SDN, we want to make sure you get a similar experience to what you could get with NSX. This is still in its initial stages, and we have not taken any particular steps in that direction yet, but we plan to. We would like to hear from you about which SDNs you are using in your environment, so we can prioritize what to work on next.

A few updates on compliance. We finally have a STIG, a DISA Security Technical Implementation Guide, for Ubuntu. This is a benchmark; Red Hat had one for a very long time, and now we finally have one for Ubuntu, which is packaged as the stemcell with BOSH. That benchmark can now be downloaded; I've put the link in there, so you can get it from the official website. In the past couple of months, we've also documented the NIST controls, so for all the NIST controls that apply to the platform, you can find detailed documentation on how we comply. And the PCI Compliance Framework has been out there for some time. Future-wise, there are a few things we are investing in. We plan to provide a compliance dashboard, though that is PCF only. The other thing we are investing in is compliance tests, which are basically to prove the compliance of your components against different controls. There is a test framework called InSpec, and we are experimenting with how InSpec test suites can be applied to the different BOSH releases which make up the platform, so they can give you the state of compliance against various controls. Okay, so that was all about platform security.
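InSpec controls are written in a Ruby DSL; purely to illustrate the underlying idea of a compliance test, here is a Python sketch of checking a component's configuration against controls and reporting pass/fail. The controls and settings are invented, not from any actual STIG or BOSH release.

```python
# Sketch of the compliance-test idea: each control names a setting and
# an expected value, and running it against a component's configuration
# yields pass/fail per control. Controls and settings are invented.

def run_control(control, config):
    """Return (control id, passed?) for one compliance control."""
    actual = config.get(control["setting"])
    return control["id"], actual == control["expected"]

controls = [
    {"id": "ssh-disable-root", "setting": "permit_root_login", "expected": "no"},
    {"id": "audit-enabled",    "setting": "auditd",            "expected": "on"},
]
config = {"permit_root_login": "no", "auditd": "off"}

results = dict(run_control(c, config) for c in controls)
print(results)  # {'ssh-disable-root': True, 'audit-enabled': False}
```

Aggregating such results per release is what gives an operator "the state of compliance against various controls" rather than a one-off audit.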
Let's switch gears and look at what's happening on the application security front. If you look at application security with the platform today, it's pretty split in terms of what you can achieve. For consistency in applying application security, you have to deal with the various language frameworks. For example, with Spring you have Spring Security; for .NET you have the Steeltoe framework, which gives you a very similar experience to Spring Security and Boot for applying authentication and authorization patterns. That's good, but it puts a lot of burden on the developers, because any time you have to do anything security-wise, your application developers have to do it, and they are limited to what is available within the framework; it's a very split experience. From an enterprise standpoint, it's very hard to drive a consistent security policy. If your infosec team is laying out guidelines like "all applications need to connect to an enterprise user store" or "all application-to-service interaction should be mTLS," those kinds of things are very hard to drive at a platform level, because you're expecting app developers to make them happen.

So one thing we're looking at is how to make application security consistent. If you're aware of the Istio service mesh architecture, that is something we would like to bring into the Cloud Foundry ecosystem, because it provides a declarative way of expressing security policies. Instead of application developers having to code a lot of security into apps using different frameworks, security becomes declarative: you define it as a policy, and it gets enforced outside the application in a consistent manner. That's something we would like to support for Cloud Foundry as well, and we'll be doing that with the Istio integration.
Istio is a service mesh architecture that provides a control plane, which includes Pilot and Citadel. The control plane pushes policies, in this case security policies, to the sidecars, and the sidecars enforce them. Pilot is the component responsible for pushing a consistent application security policy. Another piece of the Istio architecture is Citadel, which is responsible for provisioning an identity for each application. Today, with the Cloud Foundry Application Runtime, Diego is able to push an application instance identity; in Istio, Citadel does something very similar: it pushes a service or application identity that can be used for secure communications. Finally, there is an extension point through a component called Mixer. Mixer is a plugin model: if you have an external security or policy system and you want to use it to augment your security policy, you can use Mixer to extend your capabilities.

What I've laid out here are the different use cases we could potentially achieve with a Cloud Foundry and Istio integration. The first one is service-to-service authentication, or transport authentication: basically being able to push a mutual TLS (mTLS) policy across all applications. This is made possible, again, through sidecars: Citadel pushes a certificate to the services or applications, and the sidecars negotiate mTLS. In addition to mTLS, we can do an authorization check as well: whether application A can talk to application B can also be checked within the sidecars. So when the client side invokes a request to another application or service, the client-side Envoy can make an authorization check for the target service access.
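The client-side check just described can be sketched as below: before forwarding a request, a sidecar consults the policy the control plane has pushed to it and decides whether the source workload may call the target. The policy shape and service names are illustrative, not Istio's actual configuration model.

```python
# Sketch of a client-side sidecar authorization check: the control plane
# pushes an allow-list of source -> target edges, and the proxy consults
# it before forwarding a request. Policy shape and names are invented.

def allowed(policy, source, target):
    """Return True if the pushed mesh policy permits source -> target."""
    return target in policy.get(source, set())

# Policy as the control plane might push it to a client-side proxy:
policy = {"web": {"orders", "catalog"},
          "orders": {"payments"}}

print(allowed(policy, "web", "orders"))    # permitted edge
print(allowed(policy, "web", "payments"))  # no rule permits it: denied
```

Doing this check in the proxy, on an identity established by mTLS, is what lets the platform enforce "A may call B" without any application code being involved.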
The policies themselves are flexible, in that they can be applied at a per-application level, for a group of applications (which in the Kubernetes world is referred to as a namespace), or for the entire deployment or mesh. So you can say: I want mTLS across the entire deployment, or I want mTLS only for a group of applications, or I want mTLS for a very specific app-to-app or service-to-service interaction.

The second type of policy is the end-user authentication policy. If you are using JWT-based authentication, one thing you have to do as an app developer is validate your tokens. That comes as part of the framework, but with Istio that validation can happen within the sidecar. In Istio you can define a policy which says, I want to trust tokens coming from a certain issuer, so you specify the issuer. We were able to test this with UAA tokens, so you will see that the issuer is UAA in there. You also specify how to validate the token signature: you give the URI for downloading the token keys. One good thing about this policy is that you can specify more than one issuer, because you may be getting tokens from more than one provider.

Finally, there is service and end-user authorization. Istio follows a role-based access control design for service-to-service and end-user-to-service authorization, so it's very similar to Perm in that way. You define a role, which is a group of permissions to access services, and then you define a binding, which associates that role with a subject; the subject could be a user or it could be a service. Here I have examples of how to set up that policy. You have a service role which says that services with a certain name pattern (it supports regular expressions) can be accessed, and which methods can be accessed.
And then finally, you can put constraints based on request parameters as well: you can say, I want to allow only requests with certain headers. When you bind that role to a subject, you can specify more filters at that point too: filters based on the user identity coming in the token, or filters based on the service identity. It's not that you could not do these things before; you can still achieve authorization between applications, or between users and applications, with the different language frameworks, and Spring gives you the capability to do most of this. But again, how do you do it in a consistent manner across all language frameworks? That's what the sidecar approach gives you.

So that's pretty much the different areas we are investing in. Happy to take any questions.

For the egress policies, how does that work? Does it follow the Kubernetes network policies, and does it support layer seven policies as well, or is it just IP address and port combinations?

I believe the egress policy is a network policy. If you want layer seven controls, you would be doing something like token-based authorization; that's not something you can do within the egress policy, and for that you will have to use IAM controls. But the mechanism is fairly straightforward: if you've seen how C2C policies are set up, it's like a firewall rule, so source IP, dest IP, port, protocol; it's similar to that.

Any other questions?

You were mentioning unification of the marketplace between the application runtime and the container runtime. Are there design docs being shared with the community?

That's a great question. Just to repeat it, the question was about the services marketplace being shared between the two platforms, and whether any docs have been shared. It's in very initial stages.
I think it's being worked on by the team right now, with the plan to eventually share it with the community, but no actual work has started on it. The thoughts are there, though. We might have time for one more short question.

With proxies now in the path: in the old times I could just fire up a tcpdump and see what's going over the wire, but now there are new start and end points. Are there any ways to debug this new setup, or logging?

With respect to observability and debugging: the sidecars emit a lot in terms of logs and metrics that can be collected, so you can know the state of the system. So yes, you can do a lot there, because the sidecars themselves report their current state, the current state of policy, and the current state of request and response processing. That covers the sidecars themselves; as for the communication between the applications, that can be observed as well, because it flows through the sidecars. Eventually, what we envision is that every ingress and every egress request flows through the sidecar, and since it flows through the sidecar, you get your observability there: what the current state of the policy is, what has been enforced, and what can talk to what.