the people dialing in to settle down a bit so we can actually get started. Okay, we'll go ahead and get started. First, I want to thank everyone who is joining us today. Welcome to today's CNCF webinar: Accelerate Containerized Application Delivery Using Kubernetes on the AWS Cloud. I am the Kubernetes community manager at Red Hat and a CNCF ambassador, and I will be moderating today's webinar. I want to go ahead and welcome our team of presenters, who are all from SUSE: Brent Smithurst, the Cloud Application Platform marketing manager; Troy Topnik, the Cloud Application Platform product manager; Andrew, the technical marketing manager; and Kevin Ayers, the cloud solutions architect. Hello, all. Yep, a power team from SUSE. Thanks, Josh. And as I said, I'm Josh Berkus. You've seen me in some of these other webinars and you will see me again. Now, there are a few housekeeping items. First of all, you may have noticed if you dialed in that you are muted and you cannot unmute yourself. The way to ask questions is through the Q&A icon at the bottom of your Zoom screen. Open that up and it will open a panel that allows you to ask questions. Some of the questions will be answered in text on the screen by the panelists or by a member of the CNCF team; others will be answered out loud at the end of the presentation. You can ask questions in the chat window, but we don't pay as much attention to the chat window, which means those questions might not be seen. So make sure that if you have a question that you really want answered, you ask it in that Q&A panel. If you do have technical problems with participating, like the Q&A panel not working for you, then go ahead and chat directly with Taylor Wagoner, who will try to fix that for you.
One other important thing: this is an official webinar of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. In short, please be respectful of all of your fellow participants and presenters. With that, I will turn it over to the SUSE team to kick off today's presentation. Brent, you want to take it away? Great. Thanks, Josh. Hello, everybody. The team here at SUSE is really excited to talk to you and demonstrate our solution to you today. I'll kick things off in just a minute by giving you a brief introduction to SUSE Cloud Application Platform and the challenges it solves for our users. We'll follow that with some more technical information, including a general solution architecture and a demo by Troy and Andrew, so you can see for yourselves how the platform works. Finally, Kevin will wrap things up before questions by telling you how the platform operates on AWS specifically. He'll demonstrate a Quick Start we have available to get you up and running and using the platform as easily as possible. The reason SUSE Cloud Application Platform exists is pretty simple. Most of our team has a background in providing tools for developers, and we noticed that while Kubernetes is the dominant container management platform for operators, it doesn't offer much for developers who write and create containerized applications. However, there is a complementary technology out there that does offer that, and our team has a lot of experience with it. That's why SUSE Cloud Application Platform chose to use the Cloud Foundry Application Runtime to add functionality for developers on top of Kubernetes. You may have heard about Cloud Foundry in the past, and you may have some preconceived notions about it. Whether that's the case or not, I'd just like you to hear us out, because what we offer is unique.
It's a 100% open source project that containerizes the Cloud Foundry Application Runtime and runs it inside of any Kubernetes distribution. There are no virtual machines involved, and best of all, there's no BOSH. What the product does is allow developers to containerize, deploy, run, and manage an application using a single cf push command from the CLI, or even by specifying a Git repo in the UI. From there, it automatically identifies and pulls in the required language libraries, frameworks, and other dependencies via a technology called buildpacks, which are open source and are now part of the CNCF. There are also open source service brokers to automatically create and bind services to applications. And then once the application is deployed, the platform automates lifecycle management of the app by assigning appropriate resources, managing routing, load balancing, scaling it up and down as required, and much more. So the platform eliminates manual IT configuration and helps accelerate innovation by getting applications to market faster. That's sort of marketing speak there. I had to throw that in, but it is true. That's what it does. Developers serve themselves and can get apps to the cloud in minutes instead of weeks, all while staying within IT guidelines and without relying on scarce IT resources to perform manual configuration at each step of the way. And again, it does it all within Kubernetes, which more and more organizations are already using. The key features and benefits include boosting developer productivity with that one-step application deployment I talked about. Again, you just push an application with a command; the platform automatically configures the environment, provides the dependencies, binds the services, and deploys the app as a container, which is then automatically managed and scaled inside Kubernetes. It increases efficiency by running in lightweight containers instead of resource-hungry virtual machines.
It consumes a fraction of the memory footprint of other distros, while being as fast or faster to recover and scale. It's fault-tolerant and self-healing, with high availability for all critical components; the platform monitors the health of all containers and automatically restarts failed ones. And finally, it maximizes ROI by using industry-leading open source technologies. So again, it's 100% open source, and includes SUSE Linux Enterprise, Kubernetes, Cloud Foundry, the Stratos UI, some bits from Helm, and some other things. But I don't want you to just take my word for all the benefits. There are studies by the Cloud Foundry Foundation that show how organizations dramatically decrease the time it takes to deploy apps after they adopt Cloud Foundry. Before, a typical user deployed and configured cloud apps manually, or using custom install scripts or configuration management tools. Under those workflows, 51% of respondents required more than three months to deploy a new cloud application; only 16% said it took them less than a week. But after moving those apps to the Cloud Foundry Application Runtime, those times dropped dramatically. Now 46% of respondents report cloud app development cycles of under a week, including 25% who report it takes less than one day. The other key finding of the survey was that organizations can now deliver more apps in less time. More than a third of users save a few months or more per development cycle, and 10% report saving more than six months. Nearly a quarter of users report saving $500,000 per application development cycle. On average, they reported savings of 10 weeks of time and $100,000 per development cycle. What that means is that an app-a-year company can transform into an app-a-week company, and that has a compounding effect when it comes to saving time and money. So that's why SUSE Cloud Application Platform uses Cloud Foundry.
It's a certified distribution that is containerized and very easy for Kubernetes users to get started with, compared to other, non-containerized distros, and it has a much smaller memory footprint. Again, it's 100% open source, and of course it uses SUSE Linux Enterprise. So now I'll hand it over to Troy Topnik. He'll give you some more technical info and then kick into a demo with Andrew. Troy? Yeah, thanks, Brent. So when we were designing the solution, we were very mindful of the fact that there are different audiences involved in using Kubernetes. There are the operators, who are interested in creating a fabric to run all their application workloads on, but there are also the users that have to interact with it to get their applications there, to actually stand things up to deliver business value. And we understand that there's a variety, sort of a spectrum, of familiarity with Kubernetes, and different organizations, and different parts of an organization, are going to choose to deploy applications differently. So we wanted to be sure when we made the solution that people could use the cf push experience for a more streamlined developer path to deploying typical web applications, but we also wanted to make sure we left room for people with more DevOps experience, people who weren't afraid of the Kubernetes learning curve, to be able to deploy applications directly to Kubernetes. So Cloud Application Platform always makes sure that both the Kubernetes API and the Cloud Foundry API are exposed, and that we've created an interface for this. Additionally, though SUSE has a Kubernetes distribution, SUSE CaaS Platform, we recognize that some organizations and customers already have Kubernetes, or get Kubernetes from a cloud provider, like Amazon EKS. So we made this flexible, so that you can choose to install it on a completely SUSE stack or mix and match.
You can install just SUSE CaaS Platform, dealing only with the Kubernetes API and the variety of development and deployment tools that are available from the community, or you can combine it with SUSE Cloud Application Platform. Alternatively, if you want to run it on any Kubernetes, we can do that too. We have deep support for Amazon EKS, Azure AKS, and Google GKE, as well as CaaS Platform. But if it meets the minimum requirements, SUSE Cloud Application Platform will run on any Kubernetes. Another cool component of this puzzle is the Open Service Broker API. This is an API that is used to expose services either within Kubernetes or to Cloud Foundry applications. Using this, we can expose to users of our platform either services that we provide through our minibroker, which deploys service instances using Helm, or services that tie into the host Kubernetes; maybe the cloud service provider has an existing service broker, such as the AWS Service Broker. So if you run it on Amazon EKS, you can tie into all of the hosted services on Amazon, like Amazon RDS, EMR, etc. These are available really no matter where you run your applications, provided you have a good connection and network line of sight. So here's a very simplified view of SUSE Cloud Application Platform running on some Kubernetes. We have roles in the system which are inherited from Cloud Foundry; they're part of any certified distribution of Cloud Foundry. We have a broker, for example our minibroker; UAA, handling authentication for the system; volume management; a router to control ingress to the applications; the CC API, the Cloud Controller API, which is sort of the brains of the operation; and logging subsystems as well. And what is shown above this is interesting in terms of the history of Cloud Foundry. We have these things in the darker green, which are called Diego cells, and they have traditionally been the things that run applications within this context.
Diego is a container scheduler that predated Kubernetes, and it has now, in some sense, been superseded by Kubernetes. So what we're doing at SUSE is working to transition from running Diego as a scheduler within Kubernetes to integrating a new technology called Eirini, which I'll talk about in a second, which runs the applications as Kubernetes-native pods. Cloud Foundry then just becomes a gateway to deploying your cloud native applications on Kubernetes. These projects are driven largely by SUSE; they are happening upstream in the Cloud Foundry Foundation to move the Cloud Foundry Application Runtime into a future that is completely Kubernetes native, as that's where the industry has gone. SUSE already has, obviously, a cloud application platform which is deployed to Kubernetes using Helm, but we're also part of an upstream project to integrate a Kubernetes operator to make lifecycle management of Cloud Foundry inside Kubernetes much easier. This project has a lot of components which could be generally useful to the wider Kubernetes community, but right now we're focusing on making the management experience for operators of Cloud Foundry excellent inside Kubernetes. The Eirini project, which we work on with IBM, is the component I mentioned earlier, which replaces the Diego scheduler with Kubernetes. Now, this actually involves some other subtle changes within Cloud Foundry to give it an orchestrator provider interface, so in the future it will be possible to substitute other scheduling engines. There's been some experimentation done, for example, with Knative for scheduling Cloud Foundry applications. So Eirini is bringing the Cloud Foundry Application Runtime fully into the Kubernetes community, and all of the Cloud Foundry vendors are actively involved in this project. But the one that we are probably most proud of, because it works so well and it was very warmly accepted by the community, is Stratos.
Stratos is a web UI that was designed from the ground up to be a multi-API, multi-endpoint, multi-cloud control interface, initially for Cloud Foundry, but now for other cloud native APIs such as Kubernetes, and we'll even show some Helm integration coming up. For those of you who are interested: we've had this product out for a number of months now, and 1.5 was just released the other day with a few useful additions. There are Terraform scripts for not only deploying it on AWS, Azure, and GCP, but actually setting up Kubernetes on those platforms appropriately to run Cloud Application Platform. In the Stratos UI, we've added a Helm interface so you can browse charts and deploy them; we'll show that. App Autoscaler features are now exposed in the interface, which allow you to set parameters for scaling applications based on throughput, CPU usage, memory usage, that sort of thing, or on a schedule, to scale application instances up and down to cope with load. And we've added more to the Stratos Metrics component, which is a deployment pushed alongside Stratos to provide a Prometheus database that tracks the metrics over time for display to the users. Without further ado, I think we should get into showing this. I'd like to invite Andrew Gracey to bring up his screen and walk us through the experience of using this. Right. So I'm hoping to show today just how easy it is to deploy an application using Cloud Application Platform. The first thing that I'm going to show is from the command line. What I have downloaded is a 12-factor application, just a demo app showing how to do 12-factor. And the only modification that we have to do is add in this manifest.yml. So let me go ahead and show what that is. It's a very short manifest, basically showing: hey, here's the name.
You can add in memory and disk quotas, what stack you want to use, as well as what buildpack you want to use. Once you have that set up, all you have to do is type in cf push, and it will take care of the rest for you. There's no messing around with YAML, except for the manifest. You don't have to worry about Kubernetes deployments or anything. Yeah, basically it will do everything for you. So it's good to mention exactly what's happening here. Go ahead. I was going to say: what it does is it tars up your directory and pushes it over to Cloud Application Platform, which then unzips it and runs a buildpack on it. The buildpack will pull down all of the dependencies and turn that into a droplet, which then gets built into a container, and then Kubernetes will take that container, pull it down, and run it, based on the settings that it has. Buildpacks are kind of like a generic pipeline. Whereas ordinarily with a CI/CD system you would build a pipeline specific to your app, here we have a buildpack that can adapt to applications of a certain type with different dependencies. So it will install the correct version of Ruby and the correct versions of the gems automatically. It can either detect what kind of application you're pushing, or you can specify in the manifest YAML file which buildpack to use. And it's already running. So that's all we had to do. And there's a huge amount of customization that you can do inside of that manifest to really make it exactly what you need it to be. So I'm going to go ahead and copy out this route and show that it is indeed running. And there we go: we've got our twelve-factor app running. This command line can do a bunch of different things; it can show you what you're actually running, and it has a huge number of features. For the purposes of this demo, I'm actually going to switch over to our graphical UI, which is Stratos.
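As a rough sketch, a minimal manifest.yml like the one Andrew describes might look like this. The app name, quotas, stack, and buildpack below are illustrative placeholders, not the exact values from the demo:

```shell
# Write a minimal Cloud Foundry app manifest; name, memory/disk quotas,
# stack, and buildpack below are illustrative placeholders.
cat > manifest.yml <<'EOF'
applications:
- name: twelve-factor-demo
  memory: 256M
  disk_quota: 512M
  buildpacks:
  - ruby_buildpack
EOF

cat manifest.yml
# With the manifest in place, deploying from the app directory is just:
#   cf push
```

The buildpack line is optional; as Andrew notes, the platform can usually detect the application type on its own.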
So here what you see on the screen is our Stratos UI, the homepage for it. We can favorite all of the pieces that we have. And this is actually connected to several different clouds: we've got two EKS clusters running, as well as a Cloud Foundry that's running in the IBM cloud. So you can actually manage multiple clusters through this one UI. I'm going to click through and show a few of these pieces. If I can just sidebar on this a little bit, Andrew: this is an adaptive UI that will show you, A, what you're connected to, and B, what you're allowed to see through the permission controls of the system. So we've connected to a number of Cloud Foundry endpoints, or two Cloud Foundry endpoints. To one of them we're connected as a regular user, with limited permissions; we can only do basically what you saw Andrew do from the command line: deploy applications, request services. But to the other we're connected as an administrator. So it can tell, based on the credentials you used to connect to the various endpoints, what permissions you have, and it will expose only those in the interface. Likewise, the items you see on the left-hand side, like the Cloud Foundry component, Kubernetes, and Helm, are exposed based on what endpoints you have connected. So let's go to the top-level applications view here. We actually see an aggregation of all of the applications that were deployed to all of the endpoints that we specified. In the top left corner, where it says Cloud Foundry, we can click on that and take a look at a specific endpoint. This is one that Andrew set up that is running Eirini, so with Kubernetes-native application scheduling. The other one is one that I set up, also on Amazon, and that's running the Diego architecture. And it's the same: the user experience is the same, the API interactions are the same. And we can dig in.
Let's take a look at one of these applications to see what's there. So this is the twelve-factor one that he just set up. We can see how many instances we're running. Go ahead and select Instances there. We can easily scale, either with the autoscaler or manually. We can SSH into application instances to debug things; this is particularly useful for debugging. Which it's not doing right now; I think this is because you just scaled it. I did not pray to the demo gods. Okay, click back to the twelve-factor view; we'll show it in a different app. We can see the routes that are assigned. Now, this system will automatically assign routes based on a base domain. As an organization manager, you can add other domains to this, but we're just going to add another hostname, so we've got another route to it. This is excellent for A/B testing and quickly surfacing things. We can actually deploy multiple applications with the same route so that we can switch between blue-green deployments; that's a technique that's often used. The log stream: this is the same as you would see from the CLI after he does a push. If we'd clicked over here, we would have seen the staging process, but now we see what's happening with this application. If we hit this URL, we would see more logs. Anything from that application that goes to standard out or standard error will appear here. We can also set a configuration parameter to add specific log files to this stream. And this is available through the web UI or the CLI. Services: we could add a service here, I think. Take a look at the marketplace services. Let's go to a different app, perhaps. No, we can show marketplace services. Okay, good. So Andrew has actually hooked this up very cleverly to OpenFaaS to show how you can surface function-as-a-service containers as services exposed by an Open Service Broker API. This is genius, I think.
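The route-based blue-green switch Troy describes can also be sketched with the cf CLI. The app and domain names below are hypothetical, and the commands are echoed rather than executed so the sketch is safe to run anywhere:

```shell
# Blue-green via shared routes: map the production route onto the new
# ("green") app, then unmap it from the old ("blue") one.
# App and domain names are hypothetical placeholders; commands are
# echoed, not executed.
run() { echo "would run: $*"; }

run cf map-route blog-green example.com --hostname blog   # green starts serving traffic too
run cf unmap-route blog-blue example.com --hostname blog  # blue stops receiving traffic
```

While both apps are mapped to the route, the router load-balances between them, which is also how you can do a gradual cutover.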
I don't know if we can demo this, but typically this is used for connecting an application to a database, and we can show, for example, the currency app later that has this connection. We can show creating the MongoDB blog. You can create applications through here: if you want to hook up to GitHub, or GitLab, or really any Git, or application archive files, you're able to deploy directly through Stratos. So I'm going to deploy onto my own Cloud Foundry from GitHub directly. If I click through here and type in my namespace, I can see that it will auto-populate all the projects that I have inside of my GitHub; this is done through the GitHub API. And I'm going to use the MongoDB blog. If I click on Next, it will show me and allow me to pick which commit I want. I'm going to pick the latest, because I know it works. And you saw the manifest earlier for the 12-factor app; this actually allows you to come in and put an overlay on that, so you can change any of the parameters that you need. So I'm just going to give it a different name. Let's see: blog demo. Scroll down. And you'll see that it pulls from Git and then does the cf push exactly as if you were on the command line. As soon as it gets to where it knows that it's going to start coming up, we can switch to the summary view. We can see the same logs in the log stream view that we saw in the deployment view. We're actually going to need a service for this application. We should do this quickly, because we're running out of time. But we can select from the marketplace the service that we need; in this case, we're going to need MongoDB. 409 should work. And we should give it a refresh. Yeah, there we go. Okay, that's what we need. Just call it... yeah.
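From the CLI, the same marketplace flow is a create-and-bind pair of cf commands. The service name, plan, instance name, and app name below are hypothetical, and the commands are echoed rather than executed since they need a live Cloud Foundry endpoint:

```shell
# Provision a MongoDB instance from the marketplace and bind it to an app.
# Service name, plan, instance name, and app name are hypothetical;
# commands are echoed, not executed.
run() { echo "would run: $*"; }

run cf create-service mongodb default blog-db   # broker provisions the instance
run cf bind-service blog-demo blog-db           # credentials injected into the app's environment
run cf restage blog-demo                        # restage so the app picks up the binding
```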
And now that will not only start, in this case, an instance of MongoDB via Helm; it will actually pass the connection credentials returned by that deployment to the application and expose them in a special VCAP_SERVICES environment variable, which the app can consume. And the buildpacks, generally speaking, know how to modify the application to connect to this database. If one doesn't, then there are some simple techniques you can use to expose those credentials to the app. So let me go ahead and show the Helm features real fast. Yeah. So we've connected this to a few different Helm repositories: the upstream stable repo and the SUSE charts repo. We can browse them here, and we can try installing Aerospike. It'll pull in the readme, so you can see the documentation for any given Helm chart. We can select which of the clusters we're connected to we want to deploy to, and give it a name. We'll put it in the default namespace for now. And this is cool: those of you who have used Helm before know the blank-page problem of where to start. Well, here you can just copy in the default values YAML, so you can see what options are there for you to edit and modify. We'll leave this one as it is and just install. Questions from the field? There are some questions in the Q&A; we'll get to those at the end of the session. All of them are good, and we might switch back to make sure we cover those. There's the deployment. Click on the notes so we can see the output of Helm: that's how to connect to it, how to use it. You can actually insert these into VCAP_SERVICES if you want to. We can see the pods that are running, but let's look at a more complex application, maybe one that's already been deployed, such as SCF. SCF is short for SUSE Cloud Foundry, and it's the main component of Cloud Application Platform. We can see... oh, this is the console itself. Click SCF. There we go.
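For apps the buildpack can't auto-configure, reading the credentials out of VCAP_SERVICES is straightforward. The sample payload below is made up for illustration; real payloads are keyed by service broker and nest more deeply, so the exact shape will differ:

```shell
# VCAP_SERVICES holds bound-service credentials as JSON. This sample
# payload is made up for illustration; real ones vary by broker.
export VCAP_SERVICES='{"mongodb":[{"credentials":{"uri":"mongodb://user:pass@mongo.example:27017/blog"}}]}'

# Crude extraction of the first "uri" field; a real app would use a
# proper JSON parser in its own language instead of sed.
MONGO_URI=$(printf '%s' "$VCAP_SERVICES" | sed -n 's/.*"uri":"\([^"]*\)".*/\1/p')
echo "$MONGO_URI"   # prints the connection string the app should use
```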
So those are the containers that are actually providing the platform functionality. And we can see, my favorite part: you can dig into the pods and the values, see what values we deployed with, and see the services that it's exposing within Kubernetes. I just want to click on Endpoints so we can wrap this up and show how easy it is for people to install it on AWS. The design of the system is meant to be extensible. We started with Cloud Foundry, we moved on to Kubernetes, we've recently added Helm, and we're adding more and more metrics functionality as we go. We're really aiming for this to be a true hybrid-cloud, multi-cloud, multi-API interface, so you can manage all your cloud applications in one place. And I think it's coming along nicely. I hope you enjoy it. So why don't we hand this back over? We can come back to this later if we have questions about specifics. Why don't we hand this over to Kevin to talk a little bit about how we've worked with AWS to make it easy to get started on this in that public cloud. Thank you, Troy and Andrew. Let's see, let me grab the deck and reshare it. Sorry. Brent, can you continue to advance slides, please? Here we go, Kevin. Appreciate that. So for about 10 years, we've been working hand in hand with AWS to help literally thousands of businesses move their enterprise workloads to the cloud. We collaborate with AWS solutions architects and build certified, well-architected solutions, which means, among other things, that they follow industry best practices for security and high availability. Today, obviously, we've got an ideal platform for deploying and running mission-critical microservice workloads on the Elastic Kubernetes Service, which is AWS's native Kubernetes service. We also have a long history with SAP workloads on AWS and a very mature support process in place. This is really key when you work with SUSE and AWS together: it's a very well-built, well-supported solution.
Let's go to the next slide, and I'll take a peek at this architecture. We had an opportunity to build this Quick Start with AWS solutions architects, and, very simply, it deploys Cloud Application Platform across three availability zones using auto scaling groups, deploys one or more bastion hosts, and uses Elastic Load Balancing, as well as a number of other native services, including the AWS managed Kubernetes service, EKS. The solution deploys using SUSE Linux Enterprise Server worker nodes, which is key. At the end of this session there are links; please do follow those links, look at the Quick Start architecture, and get a deeper dive. Next slide. Let me show you a quick demo of this Quick Start. If you go to aws.amazon.com/quickstart and filter by containers and microservices, we're right there. Go ahead and launch that. Viewing the deployment guide is key; I encourage everyone to peruse that. It's very interesting. We're going to skip down to how to deploy, and this is a way for you to get Cloud Application Platform running in 45 minutes or less. Notice, well, let me move my Zoom control somewhere else. Notice I'm already logged in, so when I go to launch into a new VPC, I choose a geo that I know I have available resources in, and the CloudFormation template is already populated. I'm going to click Next and name it, then insert a couple of key parameters that are required. I've already created a domain in Route 53, but you can also use your own domain through your external registrar. I grab an SSH key I've already created, choose three or more availability zones, and enter a remote access CIDR. I encourage everyone to limit this as much as possible; I'll use something narrower instead of my whole block. And there are a number of configurable parameters. If you want a little more power in your worker nodes, you can size those up.
Because this is Diego-cell-based in its current form, there are two auto scaling groups, one for Diego nodes and one for application worker nodes. We can double those if we want, et cetera; we'll just go with the defaults for now. If you have additional EKS admins who you want to be able to remotely manage the EKS cluster, you can add their resource names there. I'm going to say yes to enable Stratos, and add some tags so I can find this later. We do a couple things and create the stack. It takes about 45 minutes, a little less. I'm going to move over to a stack that I've already created. Let me just refresh this. So this is a CAP stack that I created just recently. We can simplify the view and remove the nested stacks. There are two parent stacks that are called: one is CAP, Cloud Application Platform; the other is the EKS control plane, which is the managed EKS service. We can take a look at the resources that are built, but the key component is the outputs. The outputs give you your Stratos console endpoint, your CF API endpoint, et cetera. There's our config path, the bucket, and all of that's right there. We can take our Stratos console endpoint, move over to that, and log into Stratos. So it's a very simple way to get up and running with Cloud Application Platform on AWS. Let me drop this share so we can go to the last slide. Brent, do you want to share? Great, thanks. So, what are the next steps? We definitely encourage everyone to check out the Quick Start guide and apply for AWS credits for qualified POC and pilot projects. Visit the solution space. The Cloud Application Platform documentation is very interesting; the deployment guide covers a number of things, as Troy mentioned, beyond AWS and EKS. It covers other clouds, it covers on-prem, et cetera. So that's a good resource, along with some of the open source projects that are involved in this.
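The same Quick Start launch can be scripted with the AWS CLI instead of the console. The stack name, template URL, and parameter keys below are guesses for illustration only; the authoritative names are in the Quick Start's deployment guide. The command is echoed rather than executed, since running it needs AWS credentials and the real template:

```shell
# Launching a Quick Start CloudFormation stack from the CLI.
# Template URL and parameter keys are illustrative guesses; see the
# deployment guide for the real ones. Echoed, not executed.
run() { echo "would run: $*"; }

run aws cloudformation create-stack \
  --stack-name cap-demo \
  --template-url https://example.s3.amazonaws.com/cap-quickstart/template.yaml \
  --parameters ParameterKey=DomainName,ParameterValue=cap.example.com \
               ParameterKey=KeyPairName,ParameterValue=my-ssh-key \
               ParameterKey=RemoteAccessCIDR,ParameterValue=203.0.113.0/24 \
  --capabilities CAPABILITY_IAM
```

After creation, `aws cloudformation describe-stacks` surfaces the same outputs Kevin shows in the console, including the Stratos and CF API endpoints.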
And for any questions at all, go ahead and email us: aws@suse.com goes to the entire cloud solution provider team at SUSE, and we'll answer those questions. So let's leave this up and go to questions. And I thank you all. We've had some really interesting questions, which I've been trying to answer online by typing them out, but I want to call out some of the ones that I've tried to answer here so we can talk about them. Stoyan asks: is there tenant isolation? Can devs have access to the UI with a limited view of their resources only, and look at logs for troubleshooting purposes? Yes, an enthusiastic yes, and this is one of the great things that using the Cloud Foundry Application Runtime brings to Kubernetes, because role-based access controls are tricky in Kubernetes and it's hard to do this kind of thing there. Every user in a Cloud Foundry system, or a Cloud Application Platform system, will be assigned permissions within an org, and permissions within a space within that org. That's the organizational structure for people within a Cloud Foundry system. Within an org, you can have an org manager that sets permissions on what members can do. The total system administrator can assign certain permissions to that org manager, and that org manager, in turn, can assign permissions to the members of that organization and delineate which spaces they have access to. Basically, within a space, all permissions are shared: two developers that have access to the same space will be able to perform the same operations on applications within it, if they both have developer permissions. There's also an auditor permission, if you just want to be able to check on the metrics, for instance. So there's very granular control of the permissions users have when they get into the system, and very good isolation between the containers within that system; they have very limited network egress within the cluster.
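The org/space permission model Troy describes maps to a handful of cf CLI commands. The org, space, and usernames below are hypothetical, and the commands are echoed rather than executed:

```shell
# Granting Cloud Foundry org/space roles; org, space, and usernames
# are hypothetical. Commands are echoed, not executed.
run() { echo "would run: $*"; }

run cf create-org acme                                     # tenant boundary
run cf create-space dev -o acme                            # workspace inside the org
run cf set-org-role alice@acme.io acme OrgManager          # can manage the org and delegate
run cf set-space-role bob@acme.io acme dev SpaceDeveloper  # can push and manage apps
run cf set-space-role eve@acme.io acme dev SpaceAuditor    # read-only view of the space
```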
Connecting to services has to be explicitly allowed in a security group within the system. So there are a lot of tools you can use to preserve tenant isolation. Stoyan also asks: are there memory and disk quotas? Again, these can be assigned per org or per space. What about CPU quotas? That's actually not available in the API. You saw we had some CPU usage scaling criteria, so we can monitor CPU, but it's not part of quotas in the system at the moment. We could look at bringing that upstream, because I know it's a concern: noisy neighbors, meaning one application that would hog all the CPU to the detriment of other tenants. There are things you can do when you set up a system to minimize the chances of that happening, but it's not actually exposed in the API. Stoyan again (thanks for all these great questions) asks: what's the storage back end? Whatever you've set up with Kubernetes. In the case of EKS, it's... Actually, I'd like to expand on Stoyan's question here: in answering it, can you also go a little more into how Cloud Foundry works with CSI, the container storage interface for Kubernetes? I haven't seen activity within the Kubernetes community on any specific CSI drivers for Cloud Foundry, so I'm wondering how those work together. We inherit it from the Kubernetes cluster, and somebody can jump in and correct me here, because my understanding might be a little limited, but in every case I've encountered, we just use the storage class that is set up in Kubernetes. So on EKS, we're using gp2. We can set it up with NFS, for instance, on another Kubernetes. The decision of which storage class to use, and how that storage class is implemented, is made at the Kubernetes level, so it's not directly related to this; we have to be flexible with what we can use. However, if you have multiple storage classes, you can pick the one that you want to use. So it's a pass-through, then.
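To make the quota and storage answers concrete, here is a sketch using cf CLI v6 and kubectl. The org, space, and app names and the sizes are invented, and both tools need a live target, so the sketch exits early without them:

```shell
# Exit cleanly where the CLIs are not installed; illustration only.
command -v cf >/dev/null 2>&1 || { echo "cf CLI not installed; illustration only"; exit 0; }

# Org-level quota: total memory, per-instance memory, routes, service instances.
cf create-quota small -m 4G -i 1G -r 20 -s 10
cf set-quota acme small                  # apply it to the "acme" org

# Space-level quotas work the same way within the targeted org.
cf create-space-quota team-small -m 2G
cf set-space-quota dev team-small

# Disk is capped per application at push time (-k), memory with -m.
cf push my-app -m 256M -k 512M

# The storage back end is whatever StorageClass the cluster exposes:
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not installed"; exit 0; }
kubectl get storageclass                 # on EKS the default is typically gp2
```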
What do people do if they're on a local or bare-metal cloud? You can set up, for example, SUSE Enterprise Storage to be exposed as a storage class. We discourage people from using hostPath in all but the simplest single-node deployments, so that's not really useful for this scenario. But yes, you can connect to networked storage; for example, Ceph can be exposed as a storage class. Thank you. Roy asks: do you need to set up an AWS EKS cluster before you install SUSE Cloud Application Platform? The short answer is yes. The longer answer is that we've made Terraform scripts to do that for you. You set some parameters about what kind of cluster you need, and they roll all the way through from setting up a cluster on EKS (or AKS or GKE) to doing a Helm install and the final configuration of Cloud Application Platform. An anonymous user asks which Kubernetes distributions can be used. I think we covered that: we do all our testing on CaaS Platform, GKE, and EKS, but we can support others. Your Kubernetes just has to match our minimum requirements, and if you have a look in the SCF repository linked on that last slide, there's a test script to check whether a Kubernetes cluster is going to be compatible with Cloud Application Platform. Devandra asks about running Cloud Application Platform on Fargate. We haven't tried that yet. That's an interesting question; it's the first time it's come up, and I'm not sure it would work, but that's just because we haven't looked at it. Another anonymous question asks about the performance overhead of running CF on Kubernetes as opposed to CF on bare metal, if I understood the question correctly. It's hard to benchmark that, because it's actually very hard to install Cloud Foundry on bare metal; BOSH can do it, but we don't typically see it in the field. We do have a lot of comparisons of the performance of Cloud Foundry deployed on VMs versus Kubernetes.
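The Terraform-then-Helm flow described here can be sketched roughly as follows. This is not the actual script contents: the variable names are invented, and the chart and repository names are the ones SUSE's CAP documentation used around this period, so verify them against your release:

```shell
# Exit cleanly where the tools are not installed; illustration only.
command -v terraform >/dev/null 2>&1 || { echo "terraform not installed; illustration only"; exit 0; }
command -v helm >/dev/null 2>&1 || { echo "helm not installed; illustration only"; exit 0; }

# Step 1: stand up the Kubernetes cluster (variable names are hypothetical).
terraform init
terraform apply -var cluster_name=cap-demo -var region=us-west-2

# Step 2: install CAP with Helm (v2-era syntax; chart names per SUSE docs).
helm repo add suse https://kubernetes-charts.suse.com/
helm install suse/uaa --name susecf-uaa --namespace uaa --values scf-config-values.yaml
helm install suse/cf --name susecf-scf --namespace scf --values scf-config-values.yaml
```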
Cloud Application Platform running on Kubernetes is more performant at smaller scales, and they move to parity as the systems get much larger. Those are the ones that I'd answered; there were some other questions that just came in, so I'll try to take them on the fly here. Is pricing for the EKS control plane still in place, or is it going away? I don't know if I understand that. Kevin? I don't think that's a question we can actually answer; it's an AWS pricing question. There is a cost for using the EKS control plane, the managed EKS master service, but again, that goes back to AWS, and it's not a significant cost in terms of the overall solution. I want to get back to another question from Stoyan: what do the app endpoints translate to in EKS? For instance, is it an ELB? If so, is it shared among all apps, an individual ELB per app, or is it configurable? The ELB fronts a component called the Go Router. That's what it is right now; it will be replaced at some point in core Cloud Foundry with Istio and Envoy, once that is as performant as the Go Router. The ELB hands off to the Go Router, which in turn does host and path management for the system, so the Go Router handles all ingress into the system. Hopefully that answers the question. Okay, Sajid has a good question; I just don't know if I can answer it. Does EKS facilitate provisioning multiple Kubernetes master nodes to avoid single points of failure, as far as the Kubernetes cluster is concerned? Again, that's an AWS question, but I think it does. You're consuming Kubernetes as a service, and they have certain guarantees about how highly available it is; I think behind the scenes there is some high availability built in. But you're right, it is an Amazon question. Okay, I have a question about compatibility with AWS ECS, if there is any. Again, it's like Fargate: we haven't actually tried it yet, so we can't answer that. I don't believe it's compatible. And then Stoyan has another question here.
I don't actually understand the question, so I'm hoping that you do. Out of the box, what does CAP do to prevent privilege escalation within the pods on my cluster? I use PSPs to set limits properly; does CAP rely on the underlying Kubernetes cluster configuration, or is it manageable via CAP? It relies on the underlying Kubernetes cluster configuration, but there are additional restrictions on what an application within Cloud Foundry can do. That's an entire webinar in itself: explaining the security groups and the configuration of the base container image that Cloud Foundry uses to build these applications. And although we didn't show it, you can deploy discrete container images directly to the Cloud Foundry API, if you allow that permission; but generally, all of these things are pushed as code and built on a base container, and there's a lot the system does to prevent breakout from those containers and to guard against malicious actors within them. If you dig down, it's all running in containers, and especially with Eirini, it's going to depend on how well you've set up your Kubernetes cluster to manage that. Hopefully that answers it; that's answering it to the best of my ability. We can follow up later if you've got specific questions or if we need more depth there. Yeah. Okay. Well, we are actually at the end of the hour, so that's kind of perfect timing in terms of getting to the questions. Thank you very much for the presentation, and everyone, thank you for joining us. We will have news on the recording and slides after the presentation. Otherwise, there will be another webinar in this time slot tomorrow; I'd have to check the schedule to see what the topic is, but there will be another one for anyone who wants to dial in. Thanks, everyone. Thank you, Josh. I also want to do one last plug to get people to the solution space, the last link in the presentation.
Sorry, the first link at the top of the page; that will provide more information. So, thanks, all.