Okay. I'm going to go ahead and get started because these 30-minute sessions are relatively short, and for the first couple of minutes I'm going to be doing a lot of just introducing myself anyway. So, thank you so much for joining today. I am going to talk about what we have, until this morning, been calling Kubo, and I will probably still call it Kubo, because otherwise I'll trip over the new words that were announced this morning. So: Cloud Foundry Container... not Engine... Runtime. There we go. Container Runtime. My name is Cornelia Davis, and I work at Pivotal. I'll tell you more about myself in just a moment. At the moment, my role is that I am driving the go-to-market for Pivotal Container Service, which is the commercialized version of Kubo. So forgive me, I will try to stay in the open source here, but I may, again, trip over that and use the Pivotal Container Service or PKS name as well. So, let me tell you a little bit more about me. I have been in the industry for a long time; I have the gray hair to show for it, about 30 years. I have always been a developer. I didn't come from the operations side of the house at all. And I say I wasn't ops because, I'll tell you honestly, when I first started working with Cloud Foundry, I believed everything everybody said about PaaS being all about the developer. And then when I actually started working with customers, because the role that I'm in is really about taking our products out to the customers and then bringing what we learn from those early customers back to the product, I realized very, very quickly that platform as a service, and now the container service as well, is as much an operations product as it is a developer product. So, I know here at CF Summit we keep talking about the developer, but there's a lot of value on the ops side as well. I have learned ops and even paired on our cloud ops team. My background is that I've been working in web architectures for a long time.
More than 10 years; I was one of the first people talking about RESTful services, back in my EMC days when I worked in the corporate CTO office. And I have been working with cloud-native applications for five years in the context of Cloud Foundry. So, I was still at EMC, before the Pivotal spin-off, working on emerging tech, and the emerging-tech area I was working on was Cloud Foundry, a little more than five years ago. And so, like I said, I work for Pivotal. And more recently, just a little shameless plug, I am working on a book with Manning called Cloud Native. In fact, tomorrow at around lunchtime, the MEAP version, with the first three chapters, will be available at the Pivotal booth. All right. So, what I want to talk about today starts with a slide that you may have seen; in fact, my colleagues Fred Melo and Megan from Google are presenting on Kubo at the same time, which is unfortunate, two Kubo talks in the same slot, and they probably have this slide as well. The whole point here is that for the last four or five years, we have been running cloud-native applications, and that was really one category of workload that existed in your enterprise. But every one of you in the enterprise has workloads that you need to run and take care of that are not cloud native, right? Anybody running non-cloud-native workloads in your environment? So the whole point here is that there are different types of workloads, and we want to provide the environment for all of them. That's what the announcement this morning was about, in that cool little snazzy video we saw at the beginning of the keynotes that showed those different dial tones. Now, you have all of these different workloads, and what we've given you in the past was this.
And when I say we, I mean the Cloud Foundry Foundation as well as Pivotal and various other providers: what we gave you was an application platform. And if it didn't fit on the application platform, well, we also gave you IaaS. We gave you something at the infrastructure layer, and that's BOSH, right? So Cloud Foundry has actually always been more than the Elastic Runtime, more than the application service; it's always had BOSH as well. So this is what we used to give you. And now the whole idea is that we are shifting those abstractions around. IaaS is still there under the covers, but we've shifted things so that we are no longer saying: if your workload doesn't run on the application dial tone, on Onsi's haiku ("here's my code, run it in the cloud for me, I do not care how"), if it's not using those abstractions, you had to go all the way down to IaaS. We're giving you other abstractions now as well, containers being the abstraction we're going to talk about today. So that's the grounding. One of the first things I want to point out about Kubo is that we've often talked about it as a way of saying: well, if it doesn't run on the ERT, if it doesn't run in the app platform, you can run it on Kubo. What I'd like you to do instead is think about it from the perspective of: what am I running on infrastructure as a service today, or virtualized infrastructure, maybe even bare metal, what am I running down at the infrastructure layer that I could be running on some higher-level platform, because I get more value out of that? It's really about that arrow that I want to talk about today. So okay, we have this container platform. I'm not going to go over the details of what Kubo is, but it essentially gives you managed Kubernetes. We will talk about some of those values, but it's BOSH-managed Kubernetes clusters.
So how many BOSH fans in the room? A handful of you. I am definitely one of the biggest BOSH fangirls on the planet. In fact, when I first started working with Cloud Foundry five years ago, I wasn't working with the ERT at all; I was working with BOSH. One of the first things that I did was build the very first, very early BOSH release for GemFire. This was, like I said, five years ago. It wasn't any good, because I'm not a very good programmer anymore, I don't spend enough time cutting code, but it proved the point. So I have been a BOSH fangirl for a very long time. So that's what Kubo is: BOSH-managed Kubernetes clusters. And again, we'll talk about those things. But okay, I have a Kubernetes cluster. Great. What I want to do today is really focus on the workloads, not on the management of the cluster. All of that is great and wonderful, and we'll talk about those values, but it's only really relevant when you start thinking about what workloads you're running on your Kubernetes clusters and what value you're getting out of that. When I started organizing this talk, and by the way, this is based on customer conversations; I've been out there talking with some early customers for the last three or four months, and this all reflects some of those conversations. When I talk to them about the workloads they're running, those workloads fall into two different categories. There's code that they, as an organization, are developing. We'll talk a little more about that in detail, but it's code that they own; they have the source code in some repository somewhere. And then there's a whole other set of workloads that they want to run in a containerized environment where somebody else owns the code. Specifically, I'm talking about things like an ISV providing software: MongoDB, CouchDB, Spark, Elasticsearch, those types of things.
So I want to talk about those two different categories of workloads. And let's start with code that you develop. Now, if you know Pivotal, you'll know that we like two-by-twos. We do a lot of two-by-twos in a lot of different places, and it actually is a really helpful framework for organizing things, for looking at these workloads as well. So I've got two different axes here. On the vertical axis, what we have is the architectural style of the application. The workload that I want to run can range anywhere from being fully cloud native: totally stateless, totally ephemeral, nodes can come and go, IP addresses and identities don't matter, all of that stuff, and microservices, of course, small components. On the opposite end of the spectrum, we have what I'm going to call a traditional workload, which tends to be larger bodies of code. They tend to be more statically deployed, not quite as resilient to changes in the topology, those types of things. On the horizontal axis, I have the frequency with which you're changing that code. So you have code that you're cycling all the time: you've got weekly iterations, you're doing a push into GitHub several times a day, each push runs some unit tests, once a day we run the integration tests, and maybe we're even releasing that software to our users on a weekly basis, or every two weeks, or maybe every day. So we've got the very frequent changes on the far right-hand side, and on the left-hand side, you've got infrequently changing code. You might see where I'm going with this, because I'm betting that you have a lot of traditional workloads that you're not really doing active development on, but they're still providing value and you're still running them in production. Yes? Okay, so we'll talk about each one of those.
Now, let me lay out what went into each of those categories as I started to organize this talk. When it comes to fully cloud native with frequent changes, we're really going after developer productivity as well as operational efficiency. And we're here at Cloud Foundry Summit, right? This is the traditional Cloud Foundry product, what we're now calling the Application Runtime. Okay, so that's the traditional Cloud Foundry setting. No argument there, and I'm not going to talk about that in much detail. In the upper left-hand quadrant, we've got cloud-native stuff, so it runs well on Cloud Foundry, but maybe it's not in active development anymore. This goes back to my earlier point: Cloud Foundry isn't just for the developer, there are all sorts of operational benefits as well. And so there, again, arguably, the Elastic Runtime, or the Application Runtime as we're calling it now, provides a great platform for that operational efficiency. Now, let's talk about that lower left-hand quadrant, which is: hey, I've got code that I'm not cycling a whole lot, and it's more of a traditional architecture. There, what we've done in the past is run that on infrastructure directly, and now we're going to be running it on the Container Runtime. I'll go into that in more detail. And then the interesting one is the other side, which is to say: I've got a traditional application, let's say. It's not broken down into microservices, but I do want to iterate on it a little more frequently. Now, we know from a DevOps perspective that if I've got large monoliths where I have to do integration testing, maybe I can't release every day, but it's more frequently than I've done in the past. Maybe I'm releasing once a month.
And there, developer productivity is still important, but there's a little parenthetical remark down there that says one of the differences, of course, between the Application Runtime and the Container Runtime is that the things you have to have to support that agility, that developer productivity, you've got to bring yourself. We'll talk about that more in just a bit. All right, so I'm not going to talk about the Elastic Runtime anymore; I'm going to push that off to the side, and we're really just going to talk about these two categories. So let's start with the lower left-hand quadrant. This is where I'm running traditional code that you have developed, with infrequent changes. Now, what are you doing for those infrastructure deployments today? I'm going to postulate that you do something that looks a little bit like this. The developer writes the code, then you have some kind of an approver that says, all right, this is ready to go for security approval. So it goes to your CSO office. Your CSO office does whatever they do: maybe code scans, some manual code reviews, those types of things. They give it their approval, and now it moves on to the compliance office. The compliance office does its compliance checks and hands it over to QA, and maybe the order is a little different for you; that's not the point. But you can see here that we have an approval process. And then before I roll this out into production, and by the way, maybe QA is doing that final performance testing, I also have to involve the change management folks. So I'm getting approval from the final QA: yep, it passed those final QA tests. Change management says, yep, everything's cool, and you can deploy that into operations. And this whole process is generally happening as you're deploying into the infrastructure directly.
So we've got all of this happening against the infrastructure, this very long, drawn-out process. Who has a process that looks roughly like this? You might call it promoting into upper environments. Yep. So it looks something like that. And again, you're generally doing these things, dev, testing, et cetera, in the infrastructure. The interesting thing about these workloads when it comes to containerization, and this is super important, is that I'm not going to ask you to retool your entire process. For those of you who have been working with Cloud Foundry, and maybe you're even working with Pivotal or some of the other Cloud Foundry-related vendors that are here, what we've been doing for the last many years is trying to get you to change an awful lot. We said: we want you to change your process, we want you to change your architecture, we even want you to change your organizational structures. We talked about all of this change, and you've implemented it for a subset of your workloads. But again, there are all of these other workloads. Do I really need to retool everything just to be able to run those? What I'm going to suggest here, and what we're working with customers to do, is: don't retool all those processes. Take that same process, and instead of running it against infrastructure, run it during the creation of the image. All of those approvals happen so that by the time I get my image at the end, it's approved and it's ready to go. Now I just need a place to run it, and that's where the Container Runtime comes in. So that's the first thing to realize: we're not asking you to retool everything to be able to use this platform. All right. So to summarize: it's minimal disruption to the current process.
And again, the reason we can get away with this is that that manual, or maybe slightly automated, process takes a long time. So this only really comes into play for things that are infrequently changing. Because if it takes six weeks to go through the process, well, it's going to take six weeks to generate the next Docker image, right? All right. So then you might ask: why do I get any value out of doing that? I'm not asking you to change your processes a ton, but I am asking you to run them against a Docker image instead of against infrastructure. So I'm asking for some changes. Is it worth it? The answer is: it's absolutely worth it. Because the whole point with PKS, sorry, with the Container Runtime, is that you get all of the benefits, all of the magic that you got from BOSH. You get health management, logging, and scaling; actually, that comes from Kubernetes. Kubernetes has some of the capabilities that we've had in the Elastic Runtime for the last four or five years, so some of those things exist for your workloads on Kubernetes. You have something watching the health of your workload, doing log aggregation, those types of things. Definitely valuable. Multi-cloud: when you were doing this against infrastructure, you had to select the infrastructure first. Now you have the ability to create a Docker image, and that Docker image can run on the Container Runtime on a number of different clouds, so you have the ability to do cloud bursting, those types of things. You get kernel and Kubernetes upgrades. So when Kubernetes, and I forget who talked about this, it was in a different session yesterday, when Kubernetes has, for example, a vulnerability or some type of update, we will do those rolling upgrades for you. And I'll talk more about this in just a moment. And then there's consolidation. So we are seeing this.
We're hearing from customers that even though they had infrastructure as a service, which gave them some level of consolidation, they didn't need independent physical devices anymore, they could use a single host and support multiple workloads on it with multiple VMs, they're finding that they get even more density on those hosts by going to containers. So those are all the values you get by moving from infrastructure directly into containers. But there's a but. There's almost always a but. And I'm going to speed up a little because I'm going to run out of time here; I only have about 10 minutes left and about 20 slides to go. The point is that the application may not be changing all the time, but the things that are running underneath the application, the Kubernetes version itself, the stemcell that it's running on top of, might be changing much more rapidly. So can I really roll things like the Kubernetes version without getting in lockstep with the application team? Even though maybe I'm releasing the application every quarter, can I do more frequent releases under the covers? And there, I might be asking you to do something more than you do today. Because it turns out that the real killer feature here is that you need to have a test suite. So I'm going to quickly show you a couple of animations that I've used in talking about the Application Runtime over the last several years. What we have here is three different environments. These are three different Cloud Foundry instances, or three different orgs or spaces. And I've got my application team, which is now able to just commit code into the CI environment, the dev environment. Every once in a while, they do integration tests in the acceptance environment. And when the business says, okay, let's release it to production, they release it to production. And so the cycle goes.
So that's what Cloud Foundry is designed for: supporting this very agile developer workflow. Now, there's the platform team that's responsible for what's under the covers: the platform itself, stemcell upgrades, handling CVEs, those types of things. What this separation of duties did, the way that it's architected, is allow the platform team dealing with a CVE to say: hey, let me check out that CVE in a staging environment. Here's the important part: run the smoke tests for my applications in that staging environment. If the smoke tests pass, then I can go ahead and deploy in a rolling-upgrade fashion. That's what allows me to roll it out to production without having to sync with the app teams. You need the same thing here. What enables this workflow, and you have to enable it even if your application is infrequently changing, if you want the platform team to be doing frequent updates, is those smoke tests that the platform team can run in their staging environment before they roll out into production. All right. So that's the lower left-hand quadrant. Now, let me talk about the lower right-hand quadrant for a moment. Notice that this is: I've got traditional code that is frequently changing. What that means, therefore, is that I'm frequently changing the image. Remember, I said that in creating that image before, if it took six weeks to go through those approvals, no problem: I was only changing the image every six weeks. Now, I'm frequently changing the image. So how do you make sure that those images are okay to run in your production systems? I'll tell you honestly, two years ago, at the height of the Docker, Docker, Docker, Docker, Docker, now we're Kubernetes, Kubernetes, Kubernetes, which is a lot harder to say in sequence, but Docker, Docker, Docker, Docker, I would ask people: well, why do you want to use Docker?
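To make the smoke-test idea concrete, here is a minimal sketch, assuming each application exposes a health endpoint (the URLs and the `/health` path are hypothetical), of the kind of check a platform team might run against staging before committing to a rolling upgrade:

```python
import urllib.request
import urllib.error

def smoke_test(endpoints, timeout=5):
    """Hit each health endpoint; return a list of (url, passed) pairs.

    A platform team would run this against staging after applying a
    stemcell or Kubernetes upgrade, and only roll the upgrade to
    production if every endpoint reports healthy.
    """
    results = []
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results.append((url, resp.status == 200))
        except (urllib.error.URLError, OSError):
            results.append((url, False))
    return results

# Hypothetical staging endpoint; nothing listens on port 1,
# so this app fails its smoke test and the rollout would stop.
checks = smoke_test(["http://127.0.0.1:1/health"])
print(checks)
```

In practice this would be one stage of the platform team's pipeline, gating the production deploy, so the app teams never have to coordinate on upgrade timing.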
And they would say: oh, because the developer can just give me the container and it'll just run. Great. And I'd say: really? You're going to let your developers hand you containers and you're going to run them on your production systems? And they were like: oh, right. Good point. So that's really what I'm talking about here, this recognition. Nobody says that anymore. Everybody now recognizes what's involved in creating container images that meet those corporate requirements. But it's now up to you. So for example, on the Application Runtime side, you had an approved base image built into Cloud Foundry; what you need to do now is figure out what your standardized base images are, manage them, and manage access to them. Then what about runtime and other file-system dependencies? On the traditional Cloud Foundry side, you have an approved buildpack; your CSO office has gone through and approved that buildpack. On the other side, you've got to figure out how to do that yourself. You've got to come up with the pipelines, the Dockerfiles, all of those things. You're going to allow only approved images. On the Cloud Foundry side, you've got identity and access management keeping you from doing things that you're not allowed to do; on the right-hand side, you have to provide identity and access management yourself. You also might want container image scanning, and many, many more things. So the real point here is that, yes, you get these types of things. So why would I want to do that in that lower right-hand corner? Well, the beautiful thing is that you still get all of those values, so it may be worth going through that work. But this is no longer the case where I'm saying you don't have to change a ton. I'm saying if you're in that lower right-hand quadrant, you have a lot of work to do.
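As one small illustration of "allow only approved images", here is a hedged sketch, with hypothetical registry paths, of the kind of allowlist check a pipeline or an admission hook might apply before an image is admitted to the cluster:

```python
# Hypothetical corporate registry prefixes; only images that have been
# through the approved pipeline land under these paths.
APPROVED_PREFIXES = (
    "registry.corp.example/approved/",
    "registry.corp.example/base-images/",
)

def image_allowed(image: str) -> bool:
    """Admit an image only if it comes from an approved registry path."""
    return image.startswith(APPROVED_PREFIXES)

print(image_allowed("registry.corp.example/approved/payments:1.4"))  # True
print(image_allowed("docker.io/library/nginx:latest"))               # False
```

A real setup would layer image scanning and signing on top of this, but the prefix check captures the basic idea: the platform, not the developer, decides which images are runnable.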
So you really want to look at that and see whether you can push things into the upper right-hand quadrant. Okay. So that takes care of that quadrant, and I have about five minutes to go. Perfect. That takes care of the left-hand side. So let's talk for just a few minutes about the right-hand side, which is the code that someone else develops. What's interesting here is that, initially, I didn't think I'd use the same two-by-two, but as I got things organized in order to share this with you, I realized that this two-by-two helps as well. I'm going to lay a couple of things over it. The first is something like Oracle WebLogic. It falls into the lower left-hand corner, in that Oracle releases this software, I don't know, what are they releasing it on, every six months or twelve months or something like that? So it's infrequently changing. And it's also traditional, and I'll go through what I mean by that; the architectural style is slightly different here, similar, but with some slight nuances. Then we have other ISV software like Spark or Elasticsearch, where they release software more frequently, for sure, because these are companies that were born in this more agile world. But they're also more cloud native, and I'll describe what I mean by that. Now, that doesn't necessarily mean they're so cloud native that they run on the ERT, on the traditional Cloud Foundry, but we'll talk about that in a little bit of detail. So instead of going along the frequently-versus-infrequently axis, I'm really just going to look at the spectrum of traditional versus cloud native. All right. So let's talk about cloud native. What I mean by cloud native here is that these are not necessarily independent, microservice-type applications where each of the microservices has a lot of individual autonomy and you're designing the system so that they can run independently.
These are more what I call cloud-native clusters, which is to say there are multiple components. Even Kubernetes itself has workers and masters and etcd. So it's a cluster of components, but really, those components are meaningless without the other components in the cluster. It's a cloud-native architecture in the sense, and in fact, let me go ahead and advance, in the sense that it does things like support nodes not having a fixed and rigid identity. When a node goes away and we bring it back, it can have a new identity and it will add itself back to the cluster with that new identity. That's what I mean by cloud native there. Or it might be availability-zone aware; we, or the platform, make it availability-zone aware so that you're deploying things across zones and you get some level of high availability that way. And you can have flexible cluster topologies; arguably, with some traditional software, you can't just add another node. So these are some of the things that define the likes of Spark and Elasticsearch. For data-centric workloads, you need things like persistence, so this completely stateless thing doesn't exist anymore. And I want to point out: no shared storage. What I mean by that is that on the left-hand side here, we see multiple instances of a workload, multiple instances of an app, all tied to the same shared storage. That is something that is not available to you in Kubernetes, because it really isn't cloud native. On the right-hand side, you see that each instance gets its own data. And here's a picture from Elasticsearch, where it's no longer the case that the data is handled just in the database: the application itself is handling the replication and distribution of that data, and maybe even some of the availability-zone awareness. So that's what I mean by cloud native. Now, why would we run these things on Kubo?
Well, number one, you might already have them containerized. And with all of this on the right-hand side, you get all the goodness of BOSH. So finally, let me talk about the very last thing, and I promise I'm almost done; I apologize, I'll go a couple of minutes over. Let's talk about that other workload, which is Oracle WebLogic. Arguably, Oracle WebLogic is not cloud native. Sure, they're trying to do a little bit more, but Oracle WebLogic is one of those systems where, if you lose a node, you have to bring it back with the same identity that it had before; it can't take on a new identity. If, for example, I've got the storage and I bring back the node and reattach the persistent volume, great; but if my node has taken on a different identity in the cluster, then everything's broken, because the cluster expected the same identity. So what am I talking about here? Let me get very concrete about what we mean by Oracle WebLogic. We've got BOSH. We've got the Container Runtime, which is managing the Kubernetes cluster. And then I'm talking about actually running Oracle WebLogic inside of a Kubernetes cluster. I have a couple of colleagues who have been experimenting with this and actually have it working. And once you do that, you just run your normal application inside of Oracle WebLogic Server. So going back to this notion that I'm not going to ask you to retool everything: I'm not going to ask you to make huge code changes, but you can run your stuff on the platform anyway. That's the whole point. Your same application code is still running on WebLogic. Now, you should be asking the question: why? What is the value? Why would I do that? Well, it turns out you get all the same values that you saw before. Now, arguably, WebLogic itself does some of this. For example, over the years they've added some health management, so if a node goes down, they bring the node back up.
They've added things like logging; in fact, that's one of the things they've always done. Maybe they allow you to scale some of those things. But most of the rest? Multi-cloud: can you just do multi-cloud for WebLogic Server easily? Nope. Upgrades of the kernel, and maybe the Kubernetes cluster underneath? Nope. Consolidation, like we talked about earlier? Also very difficult. And one of the interesting things we found is that there might even be some license consolidation to gain, some license benefits, because of the way the licensing model works for WebLogic under the covers. They tend to license on a fairly low-level infrastructure abstraction, so if that's shared by multiple instances across the top, you might even be able to save some licensing costs. Now, the difference between this and the Elasticsearch and Spark workloads I used as examples before is that because these things are much more traditional, and you have things like persistent identity, you need to leverage something in Kubernetes called StatefulSets. And this is taken directly from the documentation: stable persistent storage, stable identifiers, ordering, so starting things up in a very specific order. These are the types of things you needed with these traditional applications. I came from Documentum; I know very well that the docbase has to start before the docbroker does. So we're familiar with those things. The interesting thing is that StatefulSets replace what used to be called PetSets, which I thought was just a great name. A StatefulSet doesn't necessarily always mean persisted state; it just means it's something that you care for and feed a little more than true cloud-native stuff. And it's beta in 1.8. So this summarizes both sides of the spectrum: code that you write and code that somebody else wrote. And what I want to do is leave you with a call to action.
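To give a feel for what a StatefulSet actually provides, here is a minimal sketch of a manifest; the workload name, labels, and image are hypothetical, and the `apps/v1beta2` API group reflects the 1.8-era beta status mentioned above. It shows the three properties from the documentation: stable identifiers, ordered startup, and stable per-pod storage.

```yaml
apiVersion: apps/v1beta2        # StatefulSet beta API group in the 1.8 timeframe
kind: StatefulSet
metadata:
  name: docbase                 # hypothetical traditional workload
spec:
  serviceName: docbase          # headless Service gives each pod a stable DNS name:
                                # docbase-0.docbase, docbase-1.docbase, ...
  replicas: 2
  podManagementPolicy: OrderedReady   # pods start, and are replaced, in order
  selector:
    matchLabels:
      app: docbase
  template:
    metadata:
      labels:
        app: docbase
    spec:
      containers:
      - name: server
        image: registry.corp.example/docbase:1.0   # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/docbase
  volumeClaimTemplates:         # each pod gets its own PersistentVolumeClaim,
  - metadata:                   # reattached to the same pod identity on restart
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

If a node dies, the replacement pod comes back as `docbase-0` with its original volume reattached, which is exactly the persistent-identity behavior that traditional software like WebLogic or a Documentum docbase expects.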
So the Cloud Foundry Foundation announced the change in organization around this. Your call to action is: run workloads on Kubo. Because it's running those workloads that's going to help us tease out the edge cases. And when I say us, it is truly the Cloud Foundry Foundation: we have committers on Kubo from Pivotal, from Google, and from VMware, all committing on that open-source project at the moment. The only way we're going to get this right is by having lots and lots of workloads, finding those edge cases, and making sure that we are standing up Kubernetes clusters that can support all those workloads. All right. And with that, oh, and of course, when you do run those workloads, please share with everyone else. And with that, I am at the end of my time. I'm sorry, I almost never leave time for questions, but I'm going to stick around for a little bit; I don't have to rush out right away. And thank you for your attention.