How do we get started? Good morning. How's everybody doing? A little bit louder. It's always tough to be the first breakout session after a keynote, so how is everybody doing? Thank you. That's a little bit better. This is going to be interactive, so I'm going to make people raise their hands. We're going to do calisthenics so I can get a feel for everyone. So first, I actually want to ask the audience, to understand how many of you have heard of Cloud Foundry or have used Cloud Foundry, so raise your hands. All right, good. Most of you, fantastic. That's good news. So my name's Chip. I head technology for the Cloud Foundry Foundation, which, as of late January, is actually stood up. It had been a work in progress for quite some time. So we're now official. We're now an independent foundation with a number of member companies, similar to the OpenStack group. So today, we're going to talk a little bit about the power of two. And actually, it'll be the power of many, as we spend some time thinking about both the business context within which Cloud Foundry is working and some of the technology behind Cloud Foundry, how the two integrate, and how we integrate with some other projects that are further down the stack. So this might seem like the type of thing that you hear all the time, but we are, in fact, at the dawn of a new era as it relates to computing. A few decades ago, entire companies had about as much computing power as we have on our phones today. We're hyper-connected. This is changing the nature of business. It's changing the nature of innovation. It's changing the nature of competition between companies. And that's very important, because a lot of the consumerization that's occurring within the technology industry is putting pressure on companies to respond with agility, respond quickly, and keep up with the shifting expectations that are out there for us.
So how many of you are familiar with the MIT Sloan School of Management? Anybody heard of them? Anybody read their type of article? So for decades, at least 20 years, the Sloan School has been talking about the way for a company to succeed long term being to find a sustainable competitive advantage. And this is the type of thing that's driven corporate strategies. It's driven the way that boardrooms think about product investments. It's the way that CEOs have been interviewed: what are their thoughts on how to create sustainable competitive advantage? But the Sloan School doesn't say that anymore. They've actually said that it's now impossible to maintain a sustainable competitive advantage. And instead, we need to be thinking in terms of continuous innovation. So continuous innovation: this is not just about technology, although technology is one of the leading reasons why you can't sustain that competitive advantage anymore. This is really how you're going to capture the opportunities that present themselves to you as a company. This is how you're able to respond as another firm might be taking advantage of technology, iterating faster than you are. This is all really, really important. And frankly, the continuous innovation notion is exactly what Cloud Foundry is very focused on. So how many of you have heard this term before? Water-scrum-fall. So how many of you do agile development? Yeah? All right, keep your hands up. Keep your hands up. How many of you deploy continuously to production? Keep your hands up if you do that. Everybody else, drop them down. So we're down to a handful of people. So you're trapped in a water-scrum-fall model right now. And that is absolutely affecting your ability as engineers, if you're developing applications, to get what the business needs done and out the door. You're still trapped.
And what we think you need right now is the pairing of cloud-native applications with that continuous delivery of business value. Again, that's the continual innovation. Now, we're here at OpenStack Summit. And there are a lot of projects and a lot of market pressures that are leading different groups to collaborate in different ways on projects that are solving pieces of the puzzle. We have OpenStack. We have the Xen Project. There's Open Compute. We've got OPNFV, Docker, obviously. Language frameworks like Node. All of these are open source projects. And they're driven by the basic concept that to deliver continuous innovation, we need to be in a position where, as an industry, we're sharing resources, not where we're competing, but in places that allow us to have a platform we can use to move forward. And that pressure continues. It's driving the creation of projects like Docker, like Rocket. Automation, obviously, has been around for a long time, and it's part of the solution. And then you have cluster management solutions like Kubernetes and Apache Mesos. There's a lot of work happening out there right now. And a lot of the challenge the enterprise is going through is: how do you take all of this new technology, take all of the business problems that we've got, and come to some sort of a solution that's going to apply the technology to help the business move forward? What companies are doing is going through a selection process to determine what their data center stack is going to look like. Now, what's really interesting is that we're seeing this is open from top to bottom. Whether you're talking about that Open Compute layer, or projects like Node.js or other language frameworks, open by default is allowing us to compete in other ways, but also collaborate to bring the whole industry forward.
What we're actually striving for is this concept of a cloud-native application platform. So a cloud-native application platform is exactly what we just talked about. It's a solution that allows us to deploy applications in a way that's going to allow for continuous delivery. It's a solution that ties deeply into parts of the stack underneath and into capabilities that are going to ride on top. So let's think about this for a minute from the user expectations perspective. When you take something like infrastructure as a service and pair it with an app-centric platform, you're able to service really three different categories of use cases. So the IaaS layer is going to give you better SLAs on infrastructure deployment, more flexibility, faster time to get the infrastructure stood up. You're going to get higher availability out of your infrastructure, and that's all fantastic. But the application developers are really looking for faster time to market for the apps themselves. They're looking for how you get that type of agility, that iterative deployment, out there. How do you leverage open source software, the new frameworks that you want, the new technologies that you want to deploy, without being stymied by an IT department saying, this is the language that we know how to support in production, and so this is what you have to write everything in? You don't get to take advantage of newer technologies. And then the ops side has a problem too. If the developers are busy speeding up their delivery and using new technologies, that presents a real challenge. They're looking at it from the perspective of continuous delivery, ensuring no downtime, focusing on instant scaling, and making sure there's consistency and automation in every environment. So let's just think about the unit of value for a minute.
A simplistic way to view this is to say that for infrastructure as a service, the virtual machine is the unit of value that you get. It provides you with an operating system that you can then deploy your applications on. It allows you to do a level of orchestration of the infrastructure, but really you end up in a position where you need to add something like Chef or Puppet, and you need to think about how you're going to deploy the application within that virtual machine. And so it's not quite in line with what the application developers need. But the unit of value for an application platform is one where the containers are transparent. You don't really care what's happening underneath your app. The life cycle is dealt with. You've got system changes expressed in a very declarative way, so that you're no longer scripting and dealing with idiosyncrasies. You're simply declaring the state that you're looking for. So one way to look at this would be to say, we've got developers and they're producing a whole bunch of artifacts. They write code in their IDE. Everybody's heard the joke before: it works on my laptop, so what's the problem in production? So the goal for an app platform is to be able to take that artifact, the result of the application development pipeline, and turn it into a running production architecture. That seems very simple. You need some load balancing. You have an app. You need some type of stateful service underneath it. And you need to be able to run it across a dev environment, a test environment, staging, and production. But really, that's just the day-one problem. The day-two problem, the day-three problem, is how you handle the complete life cycle: the application build, the push, the maintenance, the updates, and then the retiring. Applications have a lifetime. That lifetime is getting smaller. And that's OK. So it needs to be part of the cycle.
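To make that "declare the state, don't script the steps" idea concrete, here's a minimal sketch in Python. Everything here is illustrative, not actual Cloud Foundry code: the spec names and the artifact format are made up for the example.

```python
# A minimal sketch of the "declare the state you want" idea: the same app
# declaration is reused across environments, and the platform computes what
# to do. All names here are illustrative -- this is not Cloud Foundry code.

from dataclasses import dataclass

@dataclass(frozen=True)
class AppSpec:
    """Desired state for one app, declared once and reused per environment."""
    name: str
    artifact: str      # the build output, e.g. a droplet or container image
    instances: int     # ideally the only thing that differs per environment

# Same artifact, same declaration -- only scale differs from dev to prod.
dev  = AppSpec(name="orders", artifact="orders-1.4.2.tgz", instances=1)
prod = AppSpec(name="orders", artifact="orders-1.4.2.tgz", instances=12)

def reconcile(spec: AppSpec, running: int) -> int:
    """Return how many instances to start (+) or stop (-) to match the spec."""
    return spec.instances - running

print(reconcile(prod, 9))   # the platform needs to start 3 more instances
```

The point of the sketch is that the developer only ever edits the declaration; nothing in it says *how* to start or stop an instance.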
So how many of you are familiar with the microservices pattern? Yes? OK, great. Of course, you should be. OpenStack is based on that. This is actually a very important trend. And we've been spending a lot of time with enterprise IT shops that are attempting to figure out how they change their development approach from the monolithic approach to the microservices pattern. And it's a really fascinating transition they're going through. Huge amounts of education, but they're all coming to the realization that they need a platform that's going to solve for this. Who here has heard of Martin Fowler? A couple of you? OK, great. So Martin does a good job of looking at the microservices pattern. Now, it's not a revolutionary pattern. It's more of an evolution. But there are some emergent properties and emergent requirements that come out of microservices. First, you have the need for rapid provisioning of infrastructure. We talked about that at the IaaS layer; that's going to solve the rapid provisioning problem for us. Second, he describes basic monitoring, but I generalize a bit more and say that's basic operability of your application. You need, by default, an operable environment. You need logging and metrics. You need self-healing. Third, he describes rapid application deployment. That's how you take your application pipeline and ensure that it's fully automated end to end, hitting all of your various environments, and pulling yourself into production. And last, and probably most important, is the people aspect. We've been describing this in the industry as the DevOps culture. Obviously, that's a very well-known term today. But people are what make most of this work. You need a platform, but you're trying to enable people. You're trying to enable your application operations team, your platform operations team, but most importantly, the developers and their line-of-business partners. So how many of you have looked at 12factor.net before?
I told you there were going to be some calisthenics, right? Lots of hand waving around. Good. So some of you have seen this before. Fantastic. So 12-factor applications are kind of another way to describe the microservices pattern. It's a slightly different take. It talks about what the factors are that describe an app that can be considered cloud native. And so I encourage you to take a look at it. But you can really summarize it in these basic points. You use declarative formats to set up your automation. You want to minimize the time and the cost for new developers to be onboarded. You need a clean contract with the underlying operating system for portability. You need to handle deployment on a modern cloud platform. And you need to minimize your divergence from dev to test to production. Really, the only difference between production and any other environment should simply be a function of scale: how many instances are there? How many users are you servicing? That is the ideal state for a 12-factor app. And that seems like a lot, but it's actually far from enough. Simply having a solution that solves for those five demands isn't enough. You have a lot more that you need. You need to be dealing with things like cross-service configuration. You need to handle automatic healing. You need to handle HTTP routing on the front end, or really any traffic routing on the front end. Scaling up, and scaling down as well. Automatic failure recovery. And then it needs to simply work tomorrow. How many of you deploy an application and think to yourself, well, this is great, it's working today. But what about tomorrow? What's managing it for me tomorrow? And that's what a cloud-native app platform is going to do. Now, I love containers. They're a wonderful ingredient. They're absolutely fantastic. I like Docker. I think that the Linux features that allow us to do containerization are really important.
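Two of those 12-factor points can be shown in a few lines of Python: configuration comes from the environment rather than from code, and the gap between dev and production collapses to a handful of values. The variable names here are my own illustration, not anything prescribed by 12factor.net.

```python
# A small sketch of two 12-factor ideas: config lives in the environment,
# not in code, and environments differ only by configuration. The variable
# names are illustrative, not prescribed by 12factor.net.

import os

def load_config() -> dict:
    """Read everything environment-specific from environment variables."""
    return {
        # backing services are attached resources, located by URL
        "database_url": os.environ.get("DATABASE_URL", "sqlite:///dev.db"),
        # scale is the main real difference between environments
        "web_instances": int(os.environ.get("WEB_INSTANCES", "1")),
        # the app binds to a port the platform assigns, never one it hard-codes
        "port": int(os.environ.get("PORT", "8080")),
    }

cfg = load_config()
print(cfg["port"])
```

The same code runs unchanged on a laptop and in production; the platform just injects different values.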
But the point that I'll make here is that they're an ingredient to a platform. They're critical. They're very interesting. But they're simply an ingredient, and they're part of the solution. You need a cloud-native app platform. Now, of course, I wouldn't be standing here if I didn't believe that Cloud Foundry was that type of platform for you. And there are others. And you could also build your own by bringing a number of these pieces together if you'd like. But we believe that Cloud Foundry is a platform solution that solves for continuous innovation. So that was a lot of businessy talk. And I'm assuming you'd want to spend just a few minutes maybe talking about some technology, right? Yeah? All right. So let's do that. So here's the architecture of Cloud Foundry. And for those of you that said that you're familiar with Cloud Foundry, a lot of this should be something that you're used to seeing. But let's go top down. End users: we know the story here. End users are mobile. End users are consuming everywhere. They're using their browsers. They're using their phones. But what we're seeing is that in addition to the users, there's a much higher number of industrial internet or IoT devices that are consumers of these applications. At Cloud Foundry, we're seeing companies like GE, Lockheed, Honeywell. They're all trying to figure out how to solve that industrial internet problem in a consistent way, where the mobile application experience and the browser application experience are all managed consistently by the same application teams. Moving down a layer, we have what we call the elastic runtime. This is what you normally think of when you talk about cluster management. So cluster management of the customer applications: this is where you get that 12-factor app design pattern being employed. Today, we support a buildpack approach to deploying application code. A buildpack, in very simple terms: I have some code.
I'd like to hand that code to the platform and let it figure out how to build it, stage it, containerize it, and deploy it. I'll talk a little bit about some of the things we're doing in this space, where we're bringing in newer technologies and newer ways to deploy workloads into the runtime. So below the runtime, you have services. And services can be a number of different things. They really could be anything. But the majority of services that we support are stateful services, where persistence is really the key: your databases. Whether you're talking about a Hadoop cluster, a simple Percona-based MySQL cluster, or Riak CS, the services layer is really important to the developers. And it's also part of an enterprise's job to make sure that they've curated, or purchased, an effective set of services that are going to build the foundations for all of the applications that are going to live in the environment. We also support user-provided services, which we think is pretty important. What that lets the end application developer do is bridge this platform with legacy environments. We all have existing applications that exist today. There might be RESTful interfaces into them. You might have a mainframe sitting somewhere with some sort of proxy that's handling access in and out of it. You need to be able to connect Cloud Foundry apps to that type of legacy dataset. And then below the services, we have operations. And to us, this is something that's really important. The project that takes responsibility for the operations layer is named BOSH. And what BOSH lets you do is deploy the elastic runtime and the services in a consistent way. So you're dealing with a pool of infrastructure; hopefully, an IaaS is underneath it. You're able to ask that infrastructure layer for enough virtual machines and disk volumes to support the services and the runtime environment.
But that BOSH layer effectively lets you describe, in very declarative terms, how large your environment's going to be and how many instances of each service will exist, and it responds to the provisioning requests that come through. It also deals with things like logging, scaling of the overall environment, and health monitoring of all the services and the runtime components themselves. And then last, I talked about the infrastructure layer. So we're here at OpenStack. OpenStack is one of the three officially supported infrastructure targets that Cloud Foundry can talk to. Amazon, of course, and VMware, vSphere, and vCloud as well. And then we've got an extension capability that lets us pull in other infrastructure-as-a-service environments. That includes companies like IBM, who have integrated with SoftLayer. And it works very well. It gives us a consistent way to let the app platform operators live on any infrastructure environment they choose. So what are we doing now? The biggest project happening right now within Cloud Foundry is a rewrite of the elastic runtime. We'll dig into that in just a minute. But the key thing to take away from a capability perspective is that we're going from a purely buildpack model to a model where we can also accept Docker containers. We can also accept Rocket containers. The idea is that we need to allow the different application delivery pipelines to produce whatever artifact is going to be output, and then take that and take responsibility for scaling and managing it, as well as routing traffic into it. For that rewrite, which we call Diego, a lot of effort was put into thinking about the architecture of Cloud Foundry, which was a lot more tightly coupled than it is in this picture, and tearing it apart, continuing to iterate through the process of teasing different components apart. And what we end up with is an architecture that looks something like this.
Now, we could spend the time to walk through each one of these in a lot of detail. But needless to say, you have the Diego block, the, I guess it's showing up a little bit purple, but the blue block here, where you have the cells. In old Cloud Foundry terminology that would be called a DEA, or a Droplet Execution Agent. Today we're calling them cells. There are a number of internal components here. This is allowing us to support not just Linux containers; we can support containerized solutions for Windows. In fact, we've got a number of different approaches to that in the ecosystem as well. And it's a very general-purpose container environment. We call that Garden. And then there's a Garden Linux or a Garden Windows implementation of it. That's where you would see something like Garden Docker, Garden Rocket. And we've been able to allow the containers to live within the environment, within the cell, in a way that can accept whatever that artifact is going to look like. So below it, we've got things like the BBS and the brain. This is really what's managing the overall cluster. It's tracking desired state versus actual state. And that's probably the hardest problem when we're dealing with infrastructure automation. So everybody here should understand that. Once you describe what you're looking for, that's great. Then you have this thing that's real, and it exists. And those two are supposed to match. But we know that you're going to have changes to desired state that need to be reflected in the real state. And you know that you're going to have occurrences in the real state that have to be reflected in what you expect, in what you're tracking as the desired state. So the thing that we've figured out, though, is that deploying platforms can be pretty hard. How many of you have actually deployed something like Cloud Foundry before? Yeah, well, not that many. But how many of you deployed OpenStack? All right.
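That desired-versus-actual tracking can be sketched as a convergence loop: on every pass, diff the two states and emit the corrective actions. This is a toy illustration of the general technique, not the actual Diego/BBS code, and the process names are made up.

```python
# A toy convergence pass for the desired-vs-actual problem: compute the
# actions that bring reality back in line with the declaration. This is a
# sketch of the general technique, not the actual Diego/BBS implementation.

desired = {"web": 4, "worker": 2}          # what the operator declared
actual  = {"web": 4, "worker": 2}          # what is really running

def converge(desired: dict, actual: dict) -> list:
    """Compute the actions that bring actual state back to desired state."""
    actions = []
    for proc, want in desired.items():
        have = actual.get(proc, 0)
        if have < want:
            actions += [f"start {proc}"] * (want - have)
        elif have > want:
            actions += [f"stop {proc}"] * (have - want)
    return actions

# The real world drifts: a cell crashes and takes a web instance with it...
actual["web"] -= 1
# ...and on the next convergence pass the brain notices and corrects it.
print(converge(desired, actual))   # ['start web']
```

Changes flow both ways: an operator editing `desired` and a crash mutating `actual` are handled by the same diff on the next pass.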
Platforms are hard. A lot of product companies do a great job of trying to smooth out the edges. But complex distributed systems are just fundamentally difficult to deploy, especially when you're talking about data center scale. And so for the Cloud Foundry project, we said there is a need to take the same type of experience and give it to a developer on their laptop, or give it to, let's say, a collection of developers at a work-group scale. And we wanted to make that a lot simpler. So everyone here, of course, has used Vagrant before, right? Vagrant, or how many of you use Terraform from HashiCorp? No? All right. Developers love those tools. With one command, you get an environment up: either a virtual machine with maybe a bunch of components running inside of it, or, with Terraform, entire clusters deployed with one command out on an OpenStack-based cloud. And so this is where we've created Lattice. It's basically Cloud Foundry by subtraction. You take that full Cloud Foundry architecture, and you delete a bunch of the things that make it enterprise grade. And instead, you solve for the developer problem. And you say, well, you need Diego. Again, that's the cluster manager. You also need routing for inbound traffic. And you need log aggregation. That's it. With a single command, you get a great cluster manager that can be deployed on really any infrastructure cloud that Terraform would support, or you can run it on your local laptop. And you get the same type of experience you'd get with a full-blown Cloud Foundry environment. So why do you care? I've been kind of yammering at you for a little while. So why do you actually care about this? It depends on who you are. I think there are three reasons. First, if you have OpenStack installed, this is going to help unlock some of the investment.
The promise of infrastructure as a service is that level of agility that you're looking for: the virtual machines being deployed very quickly, access to the infrastructure in an elastic way. But to really unlock that value, you have to hand it all the way to the developer in a way that lets them simply code, deploy, and scale. And so that is how we believe you unlock a lot of that investment in what you're doing with OpenStack. The second is that I spent a lot of time talking about continual innovation. The reason for that was that that's where the business value is. My favorite quote in the last few months was from Michael Coté, who used to be with 451 Research. He's bounced around the industry; he's at Pivotal right now. And my favorite quote from him recently was, "Great job deploying those servers," said no CIO ever. How many CIOs actually said, great job deploying those servers, I've just achieved our quarterly business objectives? None. This allows you to get closer to the business value, where the business users actually see value. It's the applications themselves. And then last, we're deployed all over the place right now. So if you're sitting in this room and you're a SaaS provider, you're an ISV, I highly recommend you take a look at BOSH-enabling your software. We're in a ton of Fortune 500s right now: GE, Lockheed, Humana, Allstate. The list goes on, and it's really big on Wall Street. So there's a lot of value to be added there if you're a SaaS provider. So then we'll step back a minute. I mentioned continual innovation up front. That's the purpose of Cloud Foundry. We're very focused on that. With the Sloan School saying that this notion of sustainable competitive advantage is going away, that really puts some companies in a tight spot. The reality is that the percentage of companies that are in the Fortune 500 today that will be around in two decades is pretty slim.
There's been a lot of churn in the Fortune 500 recently. And we're going to continue to see that accelerate. And that is a function of disruption, largely driven through technology, that's forcing us into a situation where you have to have continual innovation to survive. You can't have that sustainable competitive advantage. So I guess just to wrap up then, and I will take some questions if anybody has any. At Cloud Foundry, we're focused on really three main things. And this is the future that we believe in for the Cloud Foundry software. So first, we see a world where there's a ubiquitous and flexible substrate deployed across public and private environments. It's likely going to be coupled with OpenStack and a number of other open source projects underneath, with a lot of open source software frameworks that we support above. This will be ubiquitous. It's going to be deployed across a large number of telecoms. Second, there's this concept of portability. We're working very hard to create a certification program that tells an application developer: if I write my application and I consume these types of services and I deploy it into a Cloud Foundry product or service, that same experience will exist if I go to another product or another service. And that's going to give us portability. It's going to give us a level of trust in the environment that we're working with. And then last, vibrant and growing. I mentioned the ISV story and why it's important for ISVs to take a look at how you deploy into Cloud Foundry. We actually have a very vibrant and growing ecosystem right now, but we expect that to continue to grow.
And that's our main focus moving forward: how do we help the ISVs deploy into this type of environment to give the IT departments the same consistent experience, whether you're talking about a bespoke application on one hand, or a product that you've purchased from someone on the other. And with that, thank you very much. Pretty brief. I'm willing to take questions if anyone has any. There's a microphone, though, so. Duncan, how are you? "If you flip back to the slide where you had the new activity in the elastic runtime... yeah, there. So if you introduce things like Docker and Rocket, is there going to be some kind of blurring between that and the services that you're trying to provide underneath the elastic runtime?" That's a very good question. So what's the difference between the elastic runtime experience and that services experience? What I can tell you right now is that persistence is best achieved through the services implementation. There are a number of reasons for that. But we are exploring right now, within that elastic runtime, how we can use the same type of scheduler, the same cluster manager, to handle stateful services. They behave very differently. And I think we all know that volume management is critical. So there are questions of: do we tie into the underlying IaaS and expect it to completely solve the problem? Do we do network-attached storage? Do we do some other type of trickery? So there are some challenges, and there are a couple of exploratory projects going on right now to see if we can unify the theme of persistence across the two layers. Any other questions? OK. A little bit early, but thank you very much. Appreciate it.