Welcome everybody. My name is Doug Davis from IBM. I'm here to talk about the promise of cloud computing, and pretty much whether we've actually been successful in delivering on that promise for the community. Okay, so before we get into some of the nitty-gritty details, let's first talk a little bit about what I mean by the promise of cloud computing. Now, I'm assuming most people have seen a chart very similar to this, where we talk about the progression of things. You start out with bare metal, then we moved to virtual machines, containers, platform as a service, and then functions as a service. And through this entire migration, there's always been this promise that as you move to the right, you get a decreasing level of concern in terms of what you have to understand about the infrastructure. At the same time, you get the benefit of being able to focus more on your business logic. So you don't have to worry as much about the bare metal or the virtual machines, and in the functions space, all you have to do is write your code and magic happens under the covers. That's been one of the promises. Now, related to that, as you do the migration to the top right, we talk about this whole notion of breaking up the monolith. We have this whole notion of targeted scaling, meaning you can scale individual components of your application as opposed to the entire monolith itself, which of course means you get better resource utilization, squeezing more into the virtual machines or the hardware, and that's supposed to eventually reduce your costs. Now, as I said, a lot of this is about abstracting the infrastructure, and it's all about making sure your devs can focus on their code, not the infrastructure itself, which then leads to faster time to market, which is either saving money or making more money, depending on how you look at it.
So when I talk about the promise of cloud computing, that's really what I mean: this whole notion of abstracting away the infrastructure more than anything else. And I want to talk about whether we've really been successful in making that happen and what the benefits have been. So let's focus on the three areas at the top right of the graph: containers, platform as a service, and functions as a service. I'm focusing on these because we're talking about containers for the most part here. And let's get a little more specific. For containers as a service, let's use Kubernetes; for platform as a service, let's just pick on Cloud Foundry; and for functions as a service, let's use OpenWhisk. What I want to do now is talk about the features of each particular platform, and let's start with Cloud Foundry. Now, keep in mind, this is just a high-level list, and it's not meant to be a 100% accurate statement about Cloud Foundry itself. It's a little more generic; I just like using concrete names to give a little more grounding. So with many platforms as a service, like Cloud Foundry, you have a simplified user experience, obviously. Everybody knows Cloud Foundry is really, really good at that; the whole push-deploy model is wonderful. They're container-based. They're targeted at microservices and small-ish tasks, and they're meant for stateless workloads. Under the covers, the platform will manage things like the load balancing and your endpoints automatically for you. And as I mentioned, there's the push model, where it does the build for you with the buildpacks; that's all wonderful. There's the pay-for-use model: in a hosted environment, it will only charge you for what you're using. And the infrastructure will scale automatically with BOSH and other tooling.
Now, what you don't necessarily get with something like platform as a service, or Cloud Foundry, is access to the advanced features. Some of them are there, depending on the platform, but in many cases they actually try to hide them from you, because that's the whole point: they want you to just give them your service code and they'll host it for you. Now, let's switch over to Kubernetes. A little bit different beast. Obviously, it's still container-based: microservices, stateless workloads, all that stuff. But there are a lot of other features that are either missing or that you have to kind of do yourself. So, for example, endpoint management, load balancing, things like that: they're there, but you have to manage them yourself, and it's not necessarily the easiest thing in the world. On the managed infrastructure side, there is auto-scaling and so on. Now, other things, such as the build, getting the container image itself: that's not there. It's not even part of Kubernetes itself; you have to go to third parties for that. That's why that one's completely blank. But obviously, you do have access to the advanced features; everybody understands all the wonderful things Kubernetes offers. Now, the one thing Kubernetes is really missing, though, is the simplified user experience. I think most people would admit that Kubernetes is not really the easiest thing to use. It's not designed for the end user; it's meant for an operator or an advanced user. Now, let's talk about OpenWhisk. What's interesting about that one is that it's actually very similar to Cloud Foundry, or platform as a service, in terms of features. You've got to kind of wonder why they didn't merge those two worlds, but for whatever reason, they didn't. You see a lot of the same features there, very heavily focused on the user experience.
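To make the "do it yourself" point concrete, here is a minimal sketch of what a developer typically has to author by hand just to run one stateless service on plain Kubernetes. The app name, image, and ports are placeholder values, not anything from the talk:

```yaml
# A Deployment to run the containers -- you must build and push the image yourself,
# since Kubernetes has no built-in build step.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # hypothetical image
        ports:
        - containerPort: 8080
---
# A Service for the endpoint management / load balancing the PaaS would have
# handled for you automatically.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

And this is still only part of the story: scaling, ingress, secrets, and so on are each additional resources you would manage on top of these two.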
However, with many serverless or functions as a service platforms, you get additional tooling, for example, event-driven tooling, meaning they will do things like help you subscribe to event producers and manage some level of orchestration of the events coming through the system. Those are things you don't necessarily get with Cloud Foundry or out-of-the-box Kubernetes. It will do other things for you too, like scale to zero; not all platform as a service offerings do that, and with Kubernetes you can get it, but you've got to do it yourself. Asynchronous tasks: not built into Kubernetes; you can do them as part of the application, but OpenWhisk and many functions as a service platforms offer that out of the box. Now, restrictive execution times: many functions as a service platforms limit how long you can actually run your functions for on each request. Technically, there's no reason for that other than it's the choice they made as part of their infrastructure, and you don't get those restrictions on things like Cloud Foundry and Kubernetes. It's just a different choice in how they chose to implement these things. Finally, you have the same thing with memory usage: you have more freedom with things like Cloud Foundry and Kubernetes, not so much with functions as a service. Our assertion is that you shouldn't necessarily be forced to choose between these various platforms. As you can see, there are differences between them, but most of the differences aren't there because the container infrastructure forces them; they're just implementation choices the platforms have made in how they expose things to you as a user. And when you're forced to make these choices, what that really means is more work for you. What do I mean by that?
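As a sketch of the "you can get scaling, but you've got to do it yourself" point: on stock Kubernetes you would typically declare a HorizontalPodAutoscaler like the one below. Note that the standard autoscaler cannot take a workload down to zero, which is exactly the gap the serverless platforms fill. Names and thresholds here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp          # hypothetical Deployment to scale
  minReplicas: 1         # stock HPA keeps at least one pod running -- no scale to zero
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU passes 80%
```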
As you break up the monolith, what happens if each component can't actually run on one particular platform? What if some of it is best suited for, say, a function? Other parts are best suited for Cloud Foundry because they're normal 12-factor apps. Others need some of the complexity or advanced features of Kubernetes. What if you have to split it across all three? That's more work, because now you may need a separate DevOps pipeline for each one. More work, more headache, more things to manage. Then what do you do if you need to integrate those workloads across the three platforms? You need to do that securely, and you don't want all the network traffic to go back out to the Internet every single time just because you want these components to talk to one another. That's not ideal. To sum it all up, you're going to have three times the learning curve and three times the management overhead for all of this. There are a lot of choices here, but those choices also mean a lot of work. I like to say that sometimes having too many choices isn't necessarily a good thing, and this is one of those times. So what do you do? One option, of course, is just to accept your fate. What does that mean? Well, assume Kubernetes wins. That's one way to go. You can actually see some of these platforms rebasing themselves on top of Kubernetes, but the problem is that while they're doing that, they're still limiting what the user can do. They're saying, well, you're still stuck in the platform as a service world; it's just that under the covers we use Kubernetes. That doesn't necessarily help the user. So if you do choose to go with Kubernetes, what does that mean? Well, you still have the problem that you're missing some features, like the build and asynchronous tasks built in.
You definitely don't get the simplified user experience. You have all those do-it-yourself features we talked about that you have to manage yourself. They're still there; you still have to worry about all that stuff. So choosing Kubernetes doesn't solve that; it just removes the choices. And you still have to deal with the complexity of Kubernetes. Anybody who's deployed stuff to Kubernetes knows you have to understand containers, pods, services, secrets, ingress, load balancers, and you don't just have to understand the concepts, you have to manage all of those yourself. Now, there is some tooling that will help you, Helm and stuff like that, and Istio can manage some of those things for you, but those aren't necessarily trivial to use either. They're not designed for the simplicity you might see with something like Cloud Foundry. And of course, you can also do things like blue-green deployments, but again, you're on your own to make that happen. So while Kubernetes has done a wonderful job of abstracting things, so you don't have to completely understand all the details of the exact implementation or deployment choices of the infrastructure, it does still expose that infrastructure to you. You don't get the same level of abstraction that I claim we were promised in that original promise chart I showed you before. And let's face it, as a developer, all I want to do is deploy my code. Even that's not 100% accurate, because technically I don't want to just deploy my code; as a developer, I just want to code. The entire notion of deploying and managing my application at runtime is something I have to do; it's a necessary evil. I don't want to spend all my time doing that. And so why should I? Now, even beyond all that, let's talk about a couple more things. First of all, what about other types of workloads?
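And here is one more piece from that list of concepts you have to both understand and manage: exposing the service to the outside world means authoring an Ingress resource yourself. A minimal sketch, with a hypothetical hostname and service name:

```yaml
# Routing external HTTP traffic to a Service -- something a PaaS does for you
# automatically, but on Kubernetes is yet another resource you own.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
  - host: myapp.example.com      # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp          # must match a Service you also manage
            port:
              number: 80
```

And even this assumes an ingress controller has already been installed and configured in the cluster, which is itself an operator-level task.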
What about things like batch computing and the orchestration of batch computing? Now, Kubernetes has Jobs, and that's sort of the beginning of batch computing. But realistically, it doesn't have the orchestration and the stuff around it. You can use third-party tooling to bring that in, but it's not there by default. And again, it's more things that you as a developer have to manage and learn. And I'm sure there are other platforms that I'm not even thinking about that we'd have to integrate and concern ourselves with. Are there other options? It just becomes a jumbled-up mess, basically. Then we have other things that I've kind of touched on, like: how do you even get the images to deploy into Kubernetes in the first place? Kubernetes doesn't have that by default. You can bring it in through third parties, but it's not there by default. Again, more learning. And finally, we have things like event orchestration. What I mean by that is: let's say the things coming into your applications aren't just messages, they're actually events. By that I mean maybe you want the system to help you manage your subscriptions to the event producers. As the events come into the system, maybe they need some sort of fan-out because they need to go to more than one service, or maybe you need some sort of filtering. All those kinds of things that you see in some functions as a service or serverless platforms aren't there by default inside Kubernetes. You might be able to find them or write them yourself, but they're not there by default. And I'm sure there's lots more, okay? But with that, let me stop my venting and complaining a little, and let's move on to some of the progress that's been made in this space. First of all, I want to talk about what I call community chatter.
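For the batch point, this is roughly what the built-in Kubernetes Job primitive gives you: run N copies of a container to completion, with retries. Anything beyond that, like dependencies between jobs, workflows, or richer scheduling, is where the third-party tooling comes in. The names and counts here are illustrative:

```yaml
# The extent of "batch" in stock Kubernetes: a Job that runs pods to completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report       # hypothetical batch task
spec:
  completions: 5             # run the task 5 times total...
  parallelism: 2             # ...at most 2 at a time
  backoffLimit: 3            # retry a failed pod up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/report-worker:1.0   # hypothetical image
```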
If you actually listen to what people have been talking about in the community, you can hear some grumbling about these things I've mentioned. I've heard many people talk about how the line between some of these platforms is becoming very, very blurry, and I hinted at one of those in particular at the beginning of this presentation: the whole distinction between platform as a service and functions as a service is very, very blurry, and you saw that with my chart, with the table. I mean, after all, these are all just containers. So why do we have to make such a big deal about which of these as-a-service things you deploy to? It's kind of silly, okay? But of course, you also hear people complaining about Kubernetes, not from an infrastructure or feature perspective, but from a usability perspective. As I said, most people do not think of Kubernetes as very usable, and most people don't necessarily want to expose Kubernetes to their users, but for whatever reason, that's the way it turned out. And so the way I like to think of it is: Kubernetes is not the end game here. It may be the end game for today from a technology perspective, but it's not the end game in terms of what we should expose to our users, okay? So that's one piece. You're hearing grumblings in the community, and that's a good sign, because it means people are starting to realize that there's a problem here. Now, the other thing that's happened in the community is that a project called Knative has been spun up. Knative is a simplified user experience on top of Kubernetes. It doesn't aim to expose all the features of Kubernetes; it's mainly aimed at deploying 12-factor applications. Some people call it serverless; it depends on your point of view. But the simple command line that I show you here is just a quick example.
So from the Knative command line, I do a service create, pass in the name of the application I want, and then pass in the image. Now, if you look at that, all I did was pass in two bits of information, the application name and the name of the image, and that's it. Under the covers, Knative managed everything else for me from a Kubernetes perspective. It's going to manage the pods, the replica sets, the deployments, the ingress, the load balancing, the networking; you name it, it manages it for me. So it is managing all of these, what I would call modern or advanced, features under the covers automatically, so I don't have to manage them myself. I'm mentioning this because, one, I work on the project and I'm very excited by it, but also because I think this is an admission that we've sort of let our users down, and this is an attempt to try to fix it. Now, as much as I love Knative, it is just a stepping stone, and it doesn't aim to solve all the problems we talked about previously in the slide deck. In particular, a lot of what they talk about is focused on low-latency applications. So, for example, it doesn't necessarily do a good job of processing a request that takes, say, three hours to run. We're working on stuff like that, and not everybody's on the same page on whether we should or not, but at least the framework is there, so maybe we can start expanding into that space. So, as I said, Knative is a stepping stone to something bigger. Let's talk a little bit about what that next step might actually be. This slide talks about "what if." What if, as a developer, I decide what I want to use as the input into the system? Let's say I want to start with just a container image, or let's say I want to start with source code, and maybe the source code is part of a function, maybe it's part of an application, maybe it's part of a batch job. It doesn't really matter; it's just source code.
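For reference, a command along the lines of `kn service create myapp --image <image>` boils down to roughly the manifest below, which is the entire thing the user has to provide; Knative derives the deployments, revisions, routes, and autoscaling from it. The name and image are placeholders:

```yaml
# A complete Knative Service definition -- compare this to the Deployment,
# Service, HPA, and Ingress you would author by hand on plain Kubernetes.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myapp                                   # hypothetical app name
spec:
  template:
    spec:
      containers:
      - image: registry.example.com/myapp:1.0   # hypothetical image
```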
Okay. Now, the other bit of information: what if I also want to provide just the runtime semantics for where I'm going to deploy this thing? And by this, I don't mean things like the YAML that says what my load balancer should look like, or what it should do in terms of scaling and network traffic splitting, all that stuff. No, no, no. What I mean here is: give us the high-order requirements that you want for your running application, and the infrastructure will manage the rest for you. It will figure out how to make that happen. You just give us the high-level requirements. Okay. So with those inputs, what if you could then hand them off to your platform? Now, at the core of the platform, you obviously need some sort of workload manager. It's going to manage and host your application, whether it's a 12-factor app, a RESTful API application, batch, asynchronous, high or low latency, whatever. As long as it's containerized, this thing will run it for you. And you don't need to understand the things in the green box; let's just say at the core there's this engine that's hosting your application. Okay. So you take the image as input. If you give it source code, it will generate the image for you through some mechanism, maybe through a Docker build, maybe through buildpacks, whatever. And then the last bit of input is the runtime semantics. All of that is input into this core engine that just knows how to make it all happen at that point. Okay. Now, obviously your application in many cases, not all, but many, will need to get input or messages coming in to actually process. So this infrastructure should manage the networking, auto-scaling when you get a lot of load, traffic splitting, stuff like that. It should manage all of that for you. Okay. Now, what if those messages aren't just, quote, messages, but are actually events?
Well, as I talked about earlier, what if you need some sort of event management and orchestration, help in managing your subscriptions to these event producers? What if the events come in and need some sort of filtering? What if you need them routed to multiple places inside your infrastructure? Let the infrastructure manage these things for you. You just give us the semantics you want at a high level, and magic happens under the covers. And obviously, if you're in the cloud, then you may want to talk to managed cloud services, again managed by the infrastructure. And of course, throw in security and compliance, because everybody knows you need those, right? So, what if this is where you could go with all this stuff? This is sort of the dream state, at least from the IBM perspective, in terms of all these various platforms and choices. The point here is that as a developer, you should only be concerned with the stuff in the purple box: your inputs, meaning a container image, if you already have one, or just source code, plus the runtime semantics you want. Okay. The entire question of which as-a-service platform you should choose basically becomes moot. It should not be a question you have to ask yourself or even think about, nor should your platform provider force you to choose. Okay. Now, this isn't just a pipe dream. IBM actually has an offering out there that just went into beta. And I know, I'm not trying to make this a product pitch, but the point of showing you what we have is that IBM believes in this so strongly that we actually have an offering out there today that people can use and play with. Okay. So, what is Code Engine? Code Engine, as I said, is a managed, unified platform for hosting all your containerized workloads in the cloud. Okay. As of today, it's in beta, so it's free.
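As one concrete illustration of that kind of filtering and routing being declared rather than coded, this is roughly what it looks like in Knative Eventing: a Trigger that subscribes a service to a broker and filters by event attribute, with the infrastructure handling the delivery. The broker, event type, and service names are hypothetical:

```yaml
# Declarative event routing: "send order-created events to the order-processor
# service" -- no subscription-management code in the application itself.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-to-processor
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created   # hypothetical event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor             # hypothetical consumer service
```

Fan-out falls out of the same model: declaring a second Trigger against the same broker sends matching events to a second service as well.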
When we go GA, you will only pay for what you use. And the main purpose here is to get your developers back to coding, not managing infrastructure. Okay. Now, I won't go through the long list of things in the blue box, but you can see they're all the things we talked about before: all the things I said you shouldn't have to worry about, but may still want access to, are still there. Okay. Now, the important thing I do want to point out before I move on is that it is based on Kubernetes. And some people will look at that and say, well, okay, that's great, it's built on Kubernetes and you dumbed down the user interface, but that means I have limited choices now. Well, not true. Okay. We are built on Kubernetes, but we still allow you to get to Kubernetes if you really, really need to. Our goal is that you never have to. We have a simplified user experience; we have a wonderful UI; we have a wonderful command line. But if for some reason you need to go around us, we don't stop you. You can still use kubectl if you really, really want to, which means your Code Engine workloads can seamlessly work with your existing Kubernetes workloads. Okay. So this isn't about limiting your choices. It's about saying: only deal with the complexity when you really, really need to, in those advanced scenarios. You're not forced to deal with the complexity of Kubernetes right out of the box. And that's the benefit of Code Engine. So it's there. Go ahead and play with it. We'd love to hear your feedback: cloud.ibm.com/codeengine. So, just to wrap this up, because I'm pretty much out of time, let's talk about the next steps and what we'd like you as a community to think about going forward. First, as infrastructure providers, I think we all need to step back and realize that we may have forgotten our original goal and target audience.
Okay. There are times when it feels like we're producing technology for the sake of technology and not necessarily for the benefit of the user. As I said, Kubernetes is wonderful, but it's not necessarily for the end user. Okay. We need more collaboration. We need more projects like Knative that are not proprietary, that are shared across the community and focused on harmonizing these platforms, because this whole notion of a unified platform, to me, goes with this whole notion of removing obstacles for the end user. Okay. And this is also about reducing vendor lock-in. If we can get shared infrastructure, focus on the usability side of things, and remove all these artificial choices from the end user, it benefits the end user and it makes our lives easier as infrastructure providers. And then there are also these integration and interoperability projects, things like CloudEvents. If you don't know CloudEvents, go look at it. It's specifically designed to help integrate these platforms together. Okay. Again, we need more projects like that to make life easier, not just for the end user, but also for integrators, so that they can make life easier for the end user. And finally, as end users, you're not off the hook here. I think you really need to take a step back and, as you use these various platforms, look for opportunities to push back on us. Okay. Ask why you're being forced to do all these various steps just to host your application. Okay. In other words, you should demand less complexity, less friction to get your job done, and less of a learning curve. Okay. You should be talking to your platform providers and saying, look, I want my developers to develop, not become IT or operations experts. Okay. So demand it of your platform. If you don't complain, nothing will change. We need to hear from customers, because we all do listen to customers, believe it or not. Okay.
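To give a flavor of what CloudEvents standardizes: every event carries a small set of common context attributes, so any compliant platform can route and filter it without understanding the payload. A sketch of a CloudEvents 1.0 event, rendered here as YAML for readability (on the wire it would typically be JSON or HTTP headers), with hypothetical type, source, and data:

```yaml
# Required context attributes: id, source, specversion, type.
specversion: "1.0"
type: com.example.order.created      # hypothetical event type
source: /orders/service              # hypothetical producer URI
id: a1b2c3d4                         # unique per source
time: "2021-01-01T12:00:00Z"         # optional attribute
datacontenttype: application/json    # optional attribute
data:                                # the domain payload itself
  orderId: 1234
```

The `type` attribute here is the same kind of value an eventing platform's filters would match on, which is exactly how these interoperability pieces fit together.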
And with that, thank you very much for your time. Thanks for listening to me ramble. As I said, please, if you get a chance, go look at IBM Code Engine. It's out there to try to address these concerns. We'd love to hear your feedback, and we'd love it if this were the first step in the rest of the community taking this journey with us. And again, thank you very much for joining.