Okay, hi everyone. I'd like to thank you all for joining us today. Welcome to today's CNCF webinar, self-service of cloud services for Kubernetes applications. I'm Christy Tan, Marketing Communications Manager here at CNCF, and I'll be moderating today's webinar. We would like to welcome our presenter today, Lewis Marshall, cloud native delivery advocate at Appvia. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop in your questions and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all your fellow participants and the presenter. Please also note that the recording and slides will be posted later today to the CNCF website at cncf.io/webinars. With that, I'll hand it over to Lewis to kick off today's presentation. Take it away, Lewis. Thank you, Christy, and thank you CNCF for hosting this webinar. So, self-service of cloud resources for Kubernetes applications. A little bit about myself before I kick off: I'm a site reliability engineer, developer and tech evangelist, so we'll see how that goes. At Appvia, I've had over 28 years now of development experience, always with an operations focus. I started off with 8086 assembly on that little machine at the top right there, and more recently I've been concentrating on Go, Kubernetes and cloud with Appvia. So, self-service of cloud resources for Kubernetes applications specifically. What we're going to try and cover today: an introduction and why this isn't necessarily easy; a bit about the problem domain with a specific focus on developer experience; and self-service, a description of what that is and what that looks like in reality. 
A quick demo of using custom resources in Kubernetes to self-serve cloud resources; a summary of the industry and a little bit about Appvia's approach, as we have an open source product trying to tackle this very issue; and then a summary and some questions, where it would be good to get some feedback. So, introduction: why isn't this easy? What we've learned from doing self-service for many years is that the industry is moving rapidly in this space, and there are a lot of products in the industry with a focus on cloud infrastructure per se. Many of them have a developer-specific focus or slant, but the developer solutions, or solutions that come close to the developer domain, seem to be being replaced in the industry. There's a movement within the industry away from self-service and simplicity, back towards operations and cloud domain specific solutions; we'll cover a bit more about that later. So we've had to implement a solution that's custom, and wrap some of what exists in the industry, to get a good outcome for our open source product. So let's have a little look at the problem domain from a developer focus. A developer works with Kubernetes, but before even that they'll deal with containers. They're used to using containers as a method for shipping their application itself, but also for testing their application with its dependencies; obviously containers make it very easy to consume a lot of application dependencies like databases, message queues, etc. But fundamentally the developer concern is around the libraries that they build into their application, and the containers that they may use locally to provide the access that those libraries need over a protocol. They don't really deal with the cloud directly per se in that space, and as soon as they do, the velocity of using cloud services slows their development cycle down. Cloud consumption is often brokered and gated and requires specialist knowledge, which is why that velocity is affected. 
What does the developer need when it comes to consuming application dependencies? An application will be shipped to a Kubernetes cluster, and here we have a picture of that, but it depends, through a code library that wraps a protocol, on a dependency; here we depict a database. But why do we need managed services at all? It's really about removing operational overhead from a team. The crown jewels are in the state of what your application delivers on, so it's about having as many eyes on that service as possible. The reliability could dramatically be improved by consuming something from a cloud provider that provides that same thing across the entire industry. So reliability, and ideally simplicity, would be provided. So what's the problem? Here we have an application in Kubernetes and a developer who's responsible for that application. Basically there's a DevOps or a different mindset required for the operations side of the piece when it comes to cloud services, and there's at least a mind shift between those two types of operation. Why does that mind shift exist? It's really about the way you traditionally deliver those two very different solutions. Developers create their containerized applications; they're familiar with that, potentially extraordinarily familiar with creating those containers and then shipping them to Kubernetes, which then orchestrates them in a Kubernetes cluster. But an operations person or a DevOps professional may be using configuration management tools. Here we have an example of Terraform talking to Amazon Web Services, or any specific cloud provider, which would then broker and manage an instance of a cloud service or a cloud resource that the application may depend on. So let's talk a bit more about self-service, because why is it even important, why do we need this? When it comes to cloud services, they are a dependency of an application, and we would want to reduce lead time. 
Lead time is reduced when there is no separate team and no manual intervention required; there's no manual process around a separate set of steps to procure those cloud services. But fundamentally, when you've got two different mindsets, or two different tooling paths, or worst case two whole different pipelines to deliver updates, application updates have to be coordinated and communicated with cloud updates. So there's a brittleness issue that can occur due to those separate concerns and the coordination effort that may be required to deliver updates. And fundamentally, why self-service? We want to enable an agile process. Enabling agile delivers a reduced cost fundamentally; the reason it does is talked about often elsewhere, so I'll touch on this lightly, but reducing defects in a code base, or the code base's dependencies, early in the cycle is vitally important, because the cost of remedying those same defects later on, when they're in front of customers or in production, is vastly increased. So, reducing cost. Now, there's a perceived risk in some circles around self-service, so we want to ensure that we can limit that risk by providing an informed choice. At this point I want to introduce an analogy between our developer operations and mobile phones. In a mobile phone shop, we procure not just a mobile phone but an operations piece to go with it, a service, often referred to as a plan, which condenses best practice and simplifies it, so that all the operational concerns around backup, the cost of running, and other security issues can be reduced to a simple choice. Now, limiting risk with regards to running costs: there will be a perceived risk if you don't trust your staff. 
We're all in a climate right now where we're having to trust staff a lot, but the staff who fundamentally put together the code that has a business focus should arguably be trusted the most in an organization. There's a further reduction of cost with the agility and reliability concerns that I've already spoken to. Further, we think the security risk is reduced by having an informed choice and packaging that up as a product and a service that can be easily consumed, not communicated, or lost in communication, with human error maybe being introduced. So developer self-service: what would that look like in our ideal world? We should be able to procure a cloud dependency of our application with a simple service description; we should be able to give that instance a name and refer to a plan, as if we're just picking it off the shelf. It should be very straightforward and shouldn't limit our agility. So we post a request to something which then talks to any cloud provider and provides back exactly what the application actually depends on, which at this point is access configuration; that typically comes down to an endpoint, the network connectivity details, and credentials, the means of accessing a resource securely. So, what does that look like in practice? There are a lot of industry assumptions around Kubernetes resources being the best way to deliver this. Native Kubernetes resources are well documented and they manage resources very well, typically pertaining to applications generally, so the application domain is well understood by a developer who's created their application and shipped it in a container; most of what they would look up around Kubernetes makes sense. And the idea of reconciling an intended state, rather than describing exactly how you move a container from node A to node B, is very desirable when it comes to cloud resources. But in practice, when Kubernetes is extended to relate to cloud resources, 
it's not that simple, because there's a lot of domain specific knowledge required in the operations space, and in the worst case the developer may not quite understand what the benefit is to them of having to change persona and understand those other concerns. So a custom resource for a cloud service can look just like any other Kubernetes resource, and there are a couple of examples here with operators the industry is moving towards, one from Google and one from Amazon, both with an almost identical requirement around a redis dependency for an application. We can use familiar tools to create and procure and ship those application dependencies, but fundamentally the parameters between those two couldn't be more different, and in this case they both expose operations domain knowledge that requires study of a cloud provider or, in the worst case, of the specific infrastructure that may be required. So how do we scale custom resources and make them make a bit more sense? Well, at the moment we have different specifications for each cloud, there's domain specific knowledge required, and there's no consistency for the developer. Almost more importantly, there's no guidance about security and best practice, about which plan to pick and how to use it. Fundamentally, there's no high-level abstraction over those Kubernetes resources and those operators. At this point it's worth delving into history, as it informs where we are today. One of the most mature aspects of providing this capability in Kubernetes is the service catalog project, which is a means to wrap the Open Service Broker API in Kubernetes. It's a Kubernetes-native project, born out of a facility that existed and was invented by Cloud Foundry in the industry. 
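As a quick aside on that earlier point about cloud-specific custom resources: a hedged sketch of the same redis dependency expressed through each vendor's style of operator might look roughly like the following. These are illustrative only; the Google example follows the Config Connector's published RedisInstance shape, while the Amazon example is an approximation of an operator-managed ElastiCache resource, and exact API groups and field names vary by operator version.

```yaml
# Google Config Connector style: a Cloud Memorystore redis instance.
# Field names follow the RedisInstance resource; values are examples.
apiVersion: redis.cnrm.cloud.google.com/v1beta1
kind: RedisInstance
metadata:
  name: app-cache
spec:
  region: europe-west2
  tier: STANDARD_HA     # GCP-specific high-availability tier
  memorySizeGb: 1
---
# Amazon operator style (approximate): the same logical dependency,
# but expressed in ElastiCache terms the developer must already know.
apiVersion: elasticache.services.k8s.aws/v1alpha1
kind: ReplicationGroup
metadata:
  name: app-cache
spec:
  engine: redis
  cacheNodeType: cache.t3.micro   # AWS-specific instance sizing
  numNodeGroups: 1
```

Both declare "a redis for my application", yet nothing in the parameters carries over between clouds; that is the missing high-level abstraction being discussed here.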
The service catalog does provide custom resources, but its production readiness status, with regards to self-service specifically, is questionable, because the industry is changing and moving its focus; maybe the Cloud Foundry requirements don't match where the cloud operators are going with Kubernetes specifically. But it does provide service broker plans, which are a way of simplifying access to a cloud service and, ideally, providing a best-practice point of consumption if configured and set up right. It can also vet, or reduce the scope of, which services, and which particular plans around those services, are available in a particular cluster. So I'll take one layer down to go over that quickly. We have service brokers, which are code that already exists and can be deployed in multiple systems, but in this case we're showing how they integrate with the service catalog project in Kubernetes by publishing plans. The developer can then request a service from that service catalog, which passes the request on to a service broker, which talks to a cloud provider, which provides status information back to the service broker, which allows the application to be wired up to use it. A concrete example of that would be using the Amazon Web Services Service Broker, with production and development plans for the Relational Database Service being published by the service broker. The developer can choose a production-ready relational database service, which generates a service instance request to the service broker using the Open Service Broker API; the cloud provider can then be managed by that service broker, which provides the details about a specific instance, and its status, back to the service catalog, which can then provision the details for consuming it in what's called a service binding. So I'm going to do a quick demo of what that looks like, covering the use of plans and how we consume that application configuration. 
On the right here we've got a view of consuming an S3 bucket with two resources in Kubernetes: one called a service instance, which has the name of a plan and a service and, in this case, no parameters, as we're quite happy with the defaults provided; and then a service binding, which relates to the shape of how our application will consume that specific service, with a secret name. So the developer can request both the service instance and the service binding, which together describe what we need and how our application will be shaped to consume that configuration. So without further ado, here's a demo I prepared earlier, a bit of webinar magic. First we have a look at secrets; so far there's just the default token for the service account in this default namespace. We'll have a quick look at an S3 service catalog instance description, the same one that I showed earlier with no parameters, and more importantly the service binding, how our application will be wired up to consume it. At this point I can go ahead and apply that to the cluster and, a bit of magic later, we can see that the service instance exists within the cluster. Now, although the service instance description is in the cluster, the actual service is obviously being created and managed by the service catalog and the service broker in the cloud provider. We'll watch that resource, with a little bit of magic to speed things up, and we'll see the status go to true, which means the cloud service should have been provisioned. At this point we have a look at the service instance, maybe clear the screen, look at it again, and we can see that its status has turned to ready. What we now want to look at is how to get at that from within our application. We can see that a secret name has been referred to by our service binding, which also says it's ready, so if we have a look at secrets, as if by magic, we have a secret with the name that we specified in our service binding. 
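For anyone following along, the two service catalog resources from that demo would look roughly like this. The resource kinds and fields are the service catalog's v1beta1 API; the class and plan names here are examples, since the real ones depend on what your broker publishes (you can list them with `kubectl get clusterserviceclasses,clusterserviceplans`).

```yaml
# ServiceInstance: ask the broker for an S3 bucket via a published plan.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-s3
spec:
  clusterServiceClassExternalName: s3   # example class name
  clusterServicePlanName: production    # example plan name
  # no parameters: accept the plan's defaults
---
# ServiceBinding: materialise the access configuration (endpoint,
# credentials) as a Kubernetes secret the application can consume.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-s3-binding
spec:
  instanceRef:
    name: my-s3
  secretName: my-s3-secret
```

Once both report ready, the secret named in `secretName` holds the bucket details and credentials, which is what the application deployment then wires into environment variables.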
So now all there is to do is consume that secret in a Kubernetes deployment. We have a Kubernetes deployment YAML here with a reference to the secret and specific environment variables that will be familiar to anyone who's consumed Amazon services from an application, and we can see that the deployment gets ready. Now, instead of running a fully fledged application at this point, I'm going to use a Kubernetes debugging technique to connect my laptop to an interactive shell session inside the workload in the Kubernetes cluster. At this point we can have a look at the bucket name environment variable, use the Amazon Web Services command line interface to see that there's nothing in that bucket, then copy something that exists in every container, or most, to that bucket and see that it exists. So, that was a real quick walkthrough of the Amazon service broker and, hopefully, a tangible overview of what that actually looks like in practice; but it covers up a lot of complexity. Here we have a product comparison across open source capability in the industry. We have the Amazon Web Services Service Broker there with loads of application services. Now, they're dependent on CloudFormation templates; there's a bunch of default templates provided, but really you need to look at those in detail, or at least understand them, for production use cases. And you've got to understand a lot more besides about how that's actually wired up in practice when you're self-serving across multiple clusters or scoping credentials to particular workloads. But more importantly, the future is what's uncertain in this space. We like the idea that it's got plans; for self-service, providing plans is a very good high-level abstraction, so for a developer it's a nice thing. But Amazon have stated that their service operator is their intended direction, and that's at a very early stage as a project. 
So it's a bit too early to comment in any more detail; I would say there's a minimum viable product on a branch for the latest iteration of that project. There are other service brokers that exist for Azure and Google Cloud Platform, but both of those large cloud operators have put a lot of resources into slightly different directions going forward. Google have the Google Config Connector, which is referred to in their Anthos product suite and has been core to Anthos. But what's important to unpick here is that, although Anthos describes itself as being multi-cloud, the Google Config Connector is certainly focused only on Google Cloud Platform. So for self-service, it's not as simple as it could be. Crossplane is worth a mention here; it's provided and backed by Microsoft and Alibaba Cloud, and it's got some really interesting ideals around application focus, with traits and a new initiative around the Open Application Model, but more of its resources pertain to cloud infrastructure than applications at this point. So it's an interesting one to watch. For Terraform, there are new operators, both from HashiCorp and Rancher Labs, but both of those are extraordinarily early, and because of their infrastructure focus and the domain specific language, they don't really relate to developer self-service. So that's an overview with regards to developer self-service, leaving a developer maybe a little bit bewildered about what they could deploy to their cluster and consume within their team. At this point I want to touch on the cloud vendors' direction. They have motives that are clearly commercial, and quite rightly so, and they want to support many customers. Their focus is reliability at scale, which is a fantastic reason to want to consume managed services in the first place. But because of that holistic, lowest common denominator approach, the complexity has to be put on the customer, so the self-service aspect isn't served very well. 
And also, they're not really driving at multi-cloud; obviously they've all got their own reasons and tie-in products, so they don't necessarily save the customer time and money with regards to self-service of cloud resources here. So, Appvia has an open source product called Kore, which centers around self-service of Kubernetes clusters, and we're iterating into providing cloud services, so a developer would be able to request a service plan, a high-level abstraction, from Kore, which would broker that to different approaches as we filter the industry and provide a set of concrete, production-level ideals in an open source product; then, at the point of consumption, the application can get that simply. So I'm going to do a quick demo now of using a user interface to self-serve plans and provide that simplicity, and then run an application to consume them. Here we have the open source project Kore running the interface. Now, we think it's important to have a distinction around the operations side, which can set the plans: an open source project where the community can curate what good plans are, and then again a particular organization can further define what is in a plan and what that looks like. We're going to consume that from a team, where the agility and work have been organized in advance. We have an EKS cluster here, and we want to provide a cloud service. Under the hood we provide a custom resource definition and command line interface, so familiar tooling and techniques can be used; but while you're discovering how to use those, a user interface gives a big uplift for a developer, who can look at the parameters and decide whether they suit a particular requirement. 
So here we're going to create a cloud resource for S3, and we're going to get details back about that cloud resource in a secret, in a specific Kubernetes cluster, in a specific namespace; all the developer needs to provide is a service name, how they're referring to this, along with a service plan. That's providing a cloud service instance for development. Now, it's possible to use one instance for multiple environments for some types of services; in this case we want to provide a discrete bucket for each instance of our application, but with the same service name. It's easy to keep these the same when the plans provide an abstraction point where all the configuration can be done once, in advance. At this point we want to get access to that, and the benefit of a user interface is obviously that it allows a rich experience for the developer, where they can find out how to consume that within Kubernetes, what the resource names are called, etc. So now we're going to deploy an application to Kubernetes; it looks very similar to before, and we have a secret reference there. And we're going to consume a Kubernetes cluster by getting access via the user interface here, finding out how to do that, switching back to the CLI, and setting up single sign-on. We have a switch of context to get access to that specific cluster and the namespace where we want to deploy an instance of the application. So now we can deploy that application; we didn't really need to know much up front. We have CRDs that have been created behind the scenes; we can look at how they're consumed using kubectl later, and then put those in GitOps, or however we work within any particular organization. We've created an application there, and we can see it's already running; I've done that before. And now what we're going to do is connect our desktop to the running pod, the running container, within that Kubernetes cluster, 
in order to use a web browser to access it, without worrying about ingress and other types of resources at this point. So as a developer, it's very straightforward to consume S3 when we have a rich user interface to do that, and then get access to an application which has picked up the relevant configuration from whatever technology in the background has provided it. And we can then persist some state, so before we finish up there, we create a couple of objects, which are persisted. That's a quick overview of some of the thoughts and ideals we think are important in this space; I hope some of this resonates. Okay, so in summary: we think the current solutions that exist across the industry in this space have an operations-heavy focus, and they relay a certain amount of complexity back, which can hamper agility. We like the idea of a high-level abstraction, and plans as a terminology makes a lot of sense; it provides simplicity for developers and hopefully increases agility. But almost more importantly, you can condense all the best practice into a plan and provide oversight, good practice, audit and compliance, around ISO standards and what have you, with regards to those plans. So I'll invite you to look that up for yourselves; I won't go into the details and will skip quickly over this slide. I would say we provide the Kore project, and it'd be good to answer any questions at this point; I'll leave the contact details and Git repository there for your access. Okay, thank you very much. That's the end of my portion. If I could now have the questions. Yeah, thanks so much Lewis for a great presentation. As Lewis mentioned, we now have time for questions. Just a reminder: if you have a question that you'd like to ask, please drop it in the Q&A tab at the bottom of your Zoom screen and we'll get through as many as we can. We have a few that have populated here, so I'll read them aloud. The first one here is from an anonymous attendee. 
It says: can you import existing S3 buckets as service instances into the Kubernetes cluster? That is a brilliant question, and it's one that a lot of projects in the industry have skirted around. Indeed, if you're providing a new way of getting access to cloud resources, it's probably best to migrate to something that provides that uplift, rather than asking something that condenses a whole bunch of best practice to change its shape suddenly to bring in something that may or may not be compliant. It's a good question, though, but there's no magic bullet in that space. Okay, the next question is, and I apologize if I'm butchering this, from Vladislav. He's asking: who manages a service broker? Is it the corresponding CSP, collaborating with a Kubernetes SIG, or an independent vendor? So at the moment in the industry, delivering the service brokers, and how they're wired up in a specific cluster or in an operations cluster and what that looks like, is a problem for the individual organization. But there is some good technology there, not just from a particular broker, but across a whole bunch of operators. In our case, in the Kore product, we provide plans and automation to deliver all of that technology stack, so that at the point of consumption the developer doesn't need to worry about it. We provide building blocks in Kore to deliver all of the above so that you don't need to worry about it as an operational concern; that's something we want to take on. Okay, we have another question here from Daniel. Daniel says: thanks for a nice presentation. Is there any offering overlap with the VMware Tanzu Application Catalog self-service? He says: I believe there is an open source version of it as well. So, not really directly. I mean, I've been speaking about cloud services specifically, and Tanzu has an application catalog which talks across cloud and in-cluster. 
And it's a similar area to the one we're addressing, but I think our focus is certainly on driving down complexity, to make it absolutely simple at the point of consumption for the developer. So it's slightly overlapping, as with many, many of the products in the industry, but at the point of use the simplicity is where we think a product like Kore stands apart; it's an open source project, and please do engage with the community, to test it and make sure that we're on track in that space. Okay, well, it looks like that's all the questions that have been submitted so far. And... oh, here we've got another one. Apologies, Vlad, if I'm saying your name wrong. Vladislav is asking, as a follow-up to his previous question: the Kore approach is excellent and clear, but what about the service brokers you've compared in the table? So, under the hood, in our first iteration we have a feature-gated capability, which is alpha at the moment, where we're using the Amazon service broker, but we're populating the specific plans and specific CloudFormation templates that we know to be good and to fit operational concerns that we have tested. We also have some operators from the Google Config Connector that we want to integrate with in that space. So I guess the simplest answer is to say: yes, some of the technology in that table we would want to use; we want to orchestrate and automate how it's delivered and provide a higher level of abstraction, so that regardless of whether it's a service broker providing the specifics of a plan, or an operator, you get the same experience as a user, of course. Okay, yeah. Well, it looks like those are all the questions today. Thanks again Lewis for a very informative presentation. That's all the questions and time that we have for today. Thanks again to everyone for joining us; a friendly reminder that the webinar recording and slides will be online later today. We look forward to seeing you at a future CNCF webinar. 
Have a great day and stay safe. Thanks everyone.