So I guess we'll get started. I don't know if these have been starting on time or late. It's really, really interesting for me to be here. And I'm hoping that you guys will get a little bit of enjoyment out of a little bit of where the road started for everyone in this room, I think, and the people that have grown Cloud Foundry into what it is today, which is an amazing feat. And then I'm also going to talk a little bit about where I think things are going. Sometimes I get those things right. Usually I get the timing wrong, but I'm happy to share what I see as coming in the next wave and where things are in what I call the technology chaos soup that we're in right now. So the future is easy. Naming things is hard, and timing things is hard. Believe it or not, Cloud Foundry, at least as it started, is nearing seven years old. And I think I was talking to Abby and Chip coming in, and she said there's about 1,600 and some people here at the conference, which is pretty amazing. A bit about me. Moved out here in 1990. Did a lot of work at TIBCO, which is a vendor here. Designed and architected all their high-speed messaging systems for about 12 years. I was at Google for about six. I created and architected Cloud Foundry, at least the very beginning, nothing like what it looks like today. And I did that while I was at VMware. And so we'll kind of go down a little bit of a trip on how things got started and some fun facts and such. So one of the interesting things coming out of Google was the motivation for what was termed Project B29 at the beginning. And I'll explain why it was called that. But at Google, not only were distributed systems and all that stuff that I thought I knew at TIBCO kind of turned on their head, we were in the full swing of web as an application framework, not just static pages. Gmail was born kind of when I was at Google.
And it started this massive revolution that moved into mobile and now it's going to go to IoT, where everything can be an endpoint for visualizing, collecting data, interacting with things. But what was interesting was that when we first got into that wave, and this will date me very quickly, I remember the notion of deploying and distributing software was, what zip level did I have to use to get my software on one floppy? Yes, I am old. But that's what it was, right? And that's all I cared about. I didn't want two floppies. I wanted one. And web development all of a sudden was very, very hard. And midway through my time at Google, things like Ruby on Rails came out, Django, and all these things trying to make it easier. And they did, they made it tremendously easy to do those things, like have a very opinionated framework to kind of scaffold out a web page with a little web server, all on your laptop. When you tried to transition that to something that people would call production, things got really kind of difficult. And although when I was at Google I wasn't involved in Google App Engine, I was noticing that, and Heroku. And I was noticing a couple things. One, it's a serious problem. How do you deploy these things faster? Ruby on Rails, Django, other frameworks, I think Spring and Spring Boot type stuff nowadays, make it easy to develop, but deploying it, at least at the time, was hard. And at the time, when I was entering my sixth year at Google, Paul Maritz was asked to take over VMware. And he came calling, mostly for Mark Lucovsky, but a little bit around me as well. And he says, hey, you should join VMware. And I said, why would I do that? He says, because I just want you to do something cool, which was funny, because that's how I got hired at Google. That was my job description, just do something cool. And so I thought about it for a while. And I actually decided to join VMware because of Paul.
And Mark Lucovsky and the guy that I worked with very closely at Google, Vadim Spivak, went over there. And Mark Lucovsky, if you don't know, is very famous for lots of things. But the one that really should resonate is he was one of the core developers on Windows NT, which kind of sets the tone for modern APIs. And he also had a project at Microsoft called Hailstorm. So when we got to VMware, he wanted to recreate Hailstorm, so he kind of went off and did that. What I did was I sat down and started thinking about how could we actually take something like Heroku, maybe a little bit like Google App Engine, although Google App Engine was so opinionated, didn't have databases, didn't have anything, and bring it to the enterprise and adopt things that the enterprise might like: a relational database, Java, right, at the time. And so that was my pitch to Steve Herrod and Paul Maritz. I said, hey, I want to go and build a PaaS for the enterprise. And he said, what's a PaaS, right? And I started pointing to Heroku and pointing to Makara, which turned into OpenShift, believe it or not, when Red Hat bought that. And so we started then on the path of designing Project B29. Now B29 is interesting because the name actually comes from Mark. That was his one contribution in the early days. At Microsoft, all the secret projects were named after buildings, where the project was actually being housed. VMware's campus in Palo Alto does not have a building B29, but we just thought that was kind of an interesting play on stuff. And that's how that kind of came about. But again, I mentioned that it's almost seven years old now, at least from the original thing. And again, it's gone through an amazing journey. But it started in October of 2009. It took me about two days to design what we presented to Paul and Steve and Todd. And it was originally Vadim and I coding. It took about three months for the first prototype that was actually running to show to Paul and Steve.
Now, what was interesting about this was this architecture, which is actually the one that came straight from my notes in the two-day design. It probably looks radically different to people in the core Cloud Foundry world these days, and yet it might look relatively familiar. There's a notion of an intent state. What are you trying to do? Well, I'm trying to run this thing. And of course, that's the cloud controller. And then the health manager was saying, well, I'm going to match what's really going on with what the intent is and try to fix things up if they skew. The, and apologies, this is why the talk's called naming things is hard, I'm the worst at it, the droplet execution agent, or DEA, which I know is almost gone. I think it's on life support, to be replaced by Diego. And then we had the notion of services. And the idea, which I pitched to Paul when he said, what's a PaaS, was that all developers care about are apps and services. They don't care about machines, virtual machines, memory, CPU, or anything like that. And so services and such, in addition to the droplets that would be running, which were the apps, were the key components. And again, my drawing is very, very bad and it's getting worse. But Vadim actually drew this at first. And so this is some of the early days. So what was really interesting was that, at the time, VMware didn't know what to do with the notion of open sourcing something. They really didn't. And so we had talked about this thing called GitHub. And we were doing stuff on Dropbox. And I had pushed, along with Mark, and Steve Herrod also supported it, to open source whatever this thing was going to be called. Again, it was called B29 to start out with. And one of the interesting side effects of that was that when we actually made the decision, the lawyers said, well, all of the history up until now, we've got to get rid of that. We don't know what's in there.
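The health manager's job described above, matching what's really going on against the intent and fixing things up if they skew, is essentially a reconciliation loop. Here's a minimal illustrative sketch in Ruby (the original implementation language), not the actual health manager code; the `desired`/`actual` hashes and the `:start`/`:stop` commands are invented for illustration.

```ruby
# Sketch of a health-manager-style reconciliation pass.
# desired: app name => number of instances the cloud controller intends.
# actual:  app name => number of instances the DEAs report as running.
# Returns the corrective commands needed to converge actual onto desired.
def reconcile(desired, actual)
  commands = []
  desired.each do |app, want|
    have = actual.fetch(app, 0)
    if have < want
      (want - have).times { commands << [:start, app] } # missing instances
    elsif have > want
      (have - want).times { commands << [:stop, app] }  # extra instances
    end
  end
  # Anything still running that is no longer desired at all gets stopped.
  (actual.keys - desired.keys).each do |app|
    actual[app].times { commands << [:stop, app] }
  end
  commands
end
```

For example, with a desired state of three `web` instances but only one running plus two orphaned `old` instances, `reconcile({"web" => 3}, {"web" => 1, "old" => 2})` emits two starts and two stops.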
We're afraid you're going to do something bad to the community from just the comments in GitHub. And they also said, oh, by the way, we're going to have a private repo. And then the lawyers are going to watch everything in the private repo and then bless it to get promoted to the public repo. And this is 2009, which is kind of interesting. So the video actually is kind of interesting, too. This is the only, I think, artifact from the original code base. And you kind of see, I did a lot of work remotely from home. So that's why you see two of my heads flying around. But if you watch it in slow-mo, you'll see a lot of things that still have an influence on what actually exists today. And I think when I watch this, I feel pushing the envelope with VMware to open source it was the right thing. I think lots of good projects start with a small number of people. What makes them great, though, is the ecosystem that develops around them and the team that actually drives that ecosystem. So a huge hats off to the people at Pivotal and all of the partners, IBM, SAP, I'm sure I'm missing lots of them, that have driven this ecosystem to what it is. Because it's pretty amazing. And that has nothing to do with me. I was just the one that kind of started twiddling bits early on. And so if you have time, look at this. It's pretty interesting from a historical perspective. Some fun facts, or some myths, as I call them. We were not part of SpringSource. So Chris Richardson had a small project that was an automation framework for AWS that Rod Johnson liked. And he bought Chris's company about three months before Paul Maritz bought Rod's company. We had joined, myself, Mark, and Vadim, about three months prior to that transaction. And Java was not the first language that I implemented in the DEA. The first one was Ruby, because I picked Ruby to write it in originally. And again, I apologize for that. Just make sure we're on the same page.
I still actually really like Ruby to tinker around in. It's just not a production language. There's too many dependencies. We were struggling with figuring out how to update it. And that, plus the virtual machine infrastructure provisioning, was the kickoff for the meeting in Palm Springs that created BOSH, which was a design by Mark, myself, and Vadim. DEA was named by me at 2 AM. We were getting really, really tired. We were joking that clouds make droplets. So why don't we call it a droplet execution agent? And I was sure somebody from marketing, by the way, we had no marketing, would change it. And it's still probably around in some form or fashion today. Vadim is a lot younger than I am. And I'm going to show a picture. And you might have seen this picture from the Wired photo shoot, where I'm on one end and Vadim, who's six foot eight, is on the other end. He loves Java. And so he started coding in Java after we got the original design done. And he was on the Cloud Controller. And I was on the scheduling algorithm, DEA, the router, and such. And I was outpacing him. And it wasn't, trust me, from any talent of mine. It was just Ruby. You can get a lot of stuff done faster than in Java. But if you ever run into Vadim, he's an amazing person to know. And he works for Steller, which does kind of online storytelling on iPhone and Android. But he got so frustrated that he went home, learned Ruby, because he didn't even know it, learned it on a Saturday, figured out what he was going to do on Sunday. And by Monday afternoon, he had rewritten everything, and then he was blowing by me at light speed. He also was the engineering manager on the original BOSH implementation. Good, good guy. I think this is dead correct: it is the first open source project ever from VMware. Now Pivotal, obviously with the foundation, has actually embraced that. It's totally transparent, right? But again, we had shadow repos to start out, but it was worth the time.
And the original CLI was named VMC, and it was actually written by Oleg, who worked with Vadim building BOSH as well. And we're lucky enough to have him with us at Apcera. Some more fun facts. The first customer was actually Salesforce. And so what was interesting is we actually met with Benioff, and what Benioff was describing, and actually Parker was describing, was very interesting to me. It was the notion of a SaaS ecosystem that had advanced so far that they had to allow their customers to write full-fledged applications. So think about this: you have a website first, you have some of the customer's data, then you have APIs, right? Then you have service APIs, then you have a scripting language called Apex, and that still wasn't enough for some of their more advanced customers. Yet what they didn't want is to have someone, say, me, I'll pick on myself, write an application, deploy it on Amazon, not monitor it, have it fall over, and then blame Salesforce for it. And so VMforce was born, to essentially allow a landing spot for these applications that were purposely built, written in Java, to consume service and data APIs for Salesforce in the force.com platform. I think Benioff liked it so much, but he really did not like the notion of sharing revenue with Paul and VMware. So at one point we got a call about eight o'clock at night and Paul calls me and he goes, hey, Marc wants to talk in the morning, what's up? Is the system up? And I had the whole system running on Rackspace on my own credit card. I had a monitoring system on it, and I was looking and I'm like, no, it's up, it's fine. I don't know what's going on. Well, what was going on was that he didn't want to share revenue, so the next morning he announced that they bought Heroku, which was pretty interesting.
RabbitMQ: I actually proposed buying RabbitMQ's parent company, and bringing Alexis Richardson into VMware, because RabbitMQ was actually selected as the first messaging backplane. I'm a messaging guy from the TIBCO days. But it's interesting, and I think you're seeing a trend now, where enterprise messaging systems might not actually be the best fit for these new microservices-type systems. And there were things that, to be honest, I didn't understand about Rabbit and what it was trying to do, but the general sense was that it was trying to bend over backwards to do something that it thought the client app, which was the original B29 code base, was trying to do, and it kept locking the system up. So I got frustrated with that and I created NATS, if anyone knows what that is. I had played with some code, I had some Ruby code lying around, and I wrote it in a weekend and swapped out Rabbit and put NATS in. The first app was not Java Spring; I said it was Ruby. It wasn't a Ruby on Rails app either, it was just a Sinatra app. And that's how I got introduced to Blake Mizerany, who created Sinatra, early on. When Rod Johnson and Spring came into the fold, Paul quickly pointed out that the number one citizen for App Cloud (which, by the way, was our first choice for the name; he said you gotta stop calling it Project B29, we're not at Microsoft anymore) had to be Spring, right? It has to be Java and Spring and all this other stuff. And as we were getting close to launch, we did kind of a video launch of it that I think is still on YouTube somewhere. The last weekend before the launch, I tried to pick the coolest two things at the time that I thought, for lack of a better word, cool kids would like. And it was Node.js and MongoDB, so I slapped those two things in less than four days before we launched.
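The thing that made NATS a natural messaging backplane for the B29 components is its simple subject model: subjects are dot-separated tokens, and a subscription can use `*` to match exactly one token or `>` to match everything remaining. A minimal Ruby sketch of just that matching rule, not the real NATS client or server, and the example subjects are hypothetical:

```ruby
# Minimal sketch of NATS-style subject matching. Subjects are dot-separated
# tokens; in a subscription pattern, "*" matches exactly one token and ">"
# matches one or more remaining tokens.
def subject_match?(pattern, subject)
  pat, sub = pattern.split("."), subject.split(".")
  pat.each_with_index do |token, i|
    return sub.length > i if token == ">"        # ">" swallows the rest
    return false if i >= sub.length              # subject too short
    return false unless token == "*" || token == sub[i]
  end
  pat.length == sub.length                       # no trailing subject tokens
end
```

So a health manager could subscribe to something like `dea.*.heartbeat` and hear every DEA, while `dea.>` would catch all DEA traffic regardless of depth (again, those subject names are illustrative, not the original wire protocol).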
And it was really interesting because we could track which languages were being used, which services were being used, and they actually did really, really well. The monitoring app was also built last minute, which wasn't too fun. It was kind of a, oops, we better actually do something about this. And that's kind of what it looked like. And there's still pictures online somewhere. Our first marketing person was actually Jerry Chen. And Jerry and I are standing at some conference somewhere and that thing's in the background. It was very simple. We would just monitor the state of things. And those things that were black would turn bright red if anything was wrong. And so that time when Marc Benioff called Paul asking about what's wrong with the system, that's what I was looking at. I'm like, nothing's wrong with the system, it's fine. It was the business model that was off by a little bit. This is the Wired photo in 2011. We had just launched, and this was kind of like a coming-out party. I think, if I recall correctly, three or four months after we had launched, Wired wanted to do an article about us. And what you see there is, Jerry's not there. Jerry had brought in James Watters to kind of drive marketing and the ecosystem. And I think all of us can agree he's done an amazing job at that. I mean, look at all of us kind of sitting around here. But Mark Lucovsky's next to James. Vadim, again, six foot eight, is on the end. Patrick Bowman, I think his name was, was next to Mark. I can't remember the other guy's name. I'm bad at names, great with faces. Yeah, yeah, good guy. He's at Facebook, I believe, now. So good, good stuff. And this was already two years after we had started the original code base. So it's amazing how fast things feel like they're going. And then when you actually look back in the rear view mirror, it takes a while for these things to kind of develop.
And we'll talk about what else has been developing in just a second. So, what we got right. And I had a lot of fear and loathing in Sunnyvale about this talk. So I will term it what I think I got right, but I might be wrong. It was about apps and services, not VMs and machines. I think that still resonates very well. Reduce the opex spend and speed up deployment, right? There's a tremendous amount of dollars being spent in trying to deploy these applications. And I think Cloud Foundry and all the success stories I've been seeing on Twitter and such kind of prove that that has really been an amazing success story there. The distributed systems architecture: I think there's lots of changes that go on there, but I think for the most part it did well. I think in the launch video, I launched 100 Node apps in nine seconds or something like that, which was good. Run on any infrastructure. This seems kind of obvious now, but I can promise you, I was not a very popular person inside of VMware when I said that to the principal engineer council. They did not like me whatsoever. I think it's the right decision. And I think what we see now is a lot of Cloud Foundry deployments, whether they're on-premise, on OpenStack, or VMware vSphere, but also on Amazon, I think Azure now, and I'm sure Google, if it's not already running there. And so I think we got that right, but wow, that was painful early on. Got beat up a lot. I thought in general, and I don't know if it's still true, the notion of stemcells was right. We had this notion that there's one base image and it had kind of everything it needed to be anything, a DEA, a Cloud Controller, a router, or whatever, and we could just send it a message saying, hey, we want you to be this, right? And then, of course, the open source, which we've talked about. So again, fear and loathing in Sunnyvale, what I got wrong: Ruby as an implementation language. Again, I like Ruby. I like Matz. He's a good friend of mine.
But for production systems, it's really nice when deployment is more of a copy and SCP versus run these Chef scripts, or pick your flavor of the month, to get all the dependencies actually in place for you. At the time, I did layer 7 ingress only for the router. That's HTTP. I know that Cloud Foundry's added layer 4. And I think that's more of this notion that it started out as apps and services, but what I didn't say is that, at least in the original thought process of putting the system together, most services would run outside. And if you look at some of the original Cloud Foundry, there's no way for you to deploy, like, a MySQL database, but they were there and they were available through the services system, right? The services gateways and such. But that meant that most of the applications we were gonna deploy would be web apps. So I only did layer 7. I think layer 4 within Cloud Foundry is, and I'll talk about it in a second, kind of where the future things are going, where these platforms are now ingesting a lot of the services. They're not meant to only live on the outside, right? And then you need to go to TCP and sometimes UDP, obviously, for things like SIP. I think Justin Smith gave a keynote here, amazing security guy. I didn't think hard enough about security and trust in the early systems. That's what actually kind of spurred the creation of Apcera, the company that I work for now. I had bought into the too-opinionated thing to get things sped up. And this could be very controversial, right? But I believe that people are willing to give up their opinions if they can get something that they feel they can't get anywhere else or any other way. But I think when you can give them what they want and give them their opinion back, you see things like Docker, right?
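Layer 7 ingress means the router gets to look at HTTP-level information, most importantly the Host header, and pick a backend droplet instance from that; a layer 4 router only ever sees a TCP or UDP connection and can't inspect headers at all. A hedged sketch of the layer 7 idea, not the actual Cloud Foundry router; the class name, the registration API, and the round-robin choice are all invented for illustration:

```ruby
# Sketch of layer 7 (HTTP) ingress: choose a backend droplet instance for a
# request based on its Host header, round-robin across the app's instances.
class Layer7Router
  def initialize
    @routes = Hash.new { |h, k| h[k] = [] }  # host => ["ip:port", ...]
    @cursor = Hash.new(0)                    # per-host round-robin position
  end

  # DEAs (or apps) announce where an app's instances are listening.
  def register(host, backend)
    @routes[host] << backend
  end

  # Given parsed request headers, return the backend to proxy to, or nil.
  def route(headers)
    host = headers["Host"]
    backends = @routes[host]
    return nil if backends.empty?
    backend = backends[@cursor[host] % backends.length]
    @cursor[host] += 1
    backend
  end
end
```

Usage would look like registering two instances for `app.example.com` and watching successive requests alternate between them, while an unknown host yields nil (a 404 in a real router).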
The only opinion Docker cares about is what the developer's trying to do, even stuffing everything under the sun into a two-gig Docker image that takes 20 minutes to download, right? But it's there, right? And it's part of our present and our future. And then this is the other controversial one. I don't think it's bad that BOSH is open source today, but at the time you've got to realize the environment I was in. We had designed it for Vadim and a team, which again was essentially him and Oleg, to both build it and run it, to deploy these installations for us. VMware customers are used to looking at vCenter and clicking buttons. BOSH didn't have any buttons, didn't have any GUI stuff, right? It was all CLI stuff. And I think it was amazingly well designed and implemented by Vadim and Oleg. But my fear at the time, when I said I didn't think we should open source it, was that the audience who could actually use it for what it's good at would be fairly small. And the lesson that I tried to bring, whether it was right or wrong, was the same one I learned at TIBCO. So at TIBCO I designed a messaging system. I said this thing can do anything you want and it is blazingly fast. And after about a year on Wall Street I realized that it's kind of like a very sharp chef's knife. If you give it to a chef they can do amazing things. If you give it to someone like me I'll chop my fingers off, right? And Wall Street was doing that with these early messaging systems. They'd be sitting there writing programs going, for i equals zero, i less than a million, i plus plus, send a message, and they'd run it. And they'd go, that couldn't have been right, and they'd run it again. And they'd run it again. And then their phone would ring and the network ops would call them and say, you just blew out every single one of our routers. What are you doing? Stop that. And so I was taking that position on BOSH. But again, I think I was wrong in that.
I think it's a good thing, and I heard the BOSH sessions were very, very well attended. So, what you all got right. And again, this is where I think great projects are more about the second half of their lifetime than the first, right? It's about the great ecosystem, the independent and powerful foundation, along with the different company aspects and interest from the likes of IBM and SAP and, of course, Pivotal. The developer tooling is amazing. And in that video, I don't know if you saw, Romney Vossen was in there, and he came in as one of the first people to say we need to just plug this directly into Eclipse. And I said, why would you want to do that, right? And it was just me not understanding the power of what that would bring. As of the last probably year and a half, two years, the Spring Boot and Netflix model, and doubling down on that ecosystem and what that means, I think has been incredibly beneficial to everyone. And then of course the microservices, cloud-native mantra, which you can call SOA 3.0, you can call graph systems 10.0. But I think what's interesting, and we're gonna get into this in just a second, is we're kind of at that tipping point now where it makes sense. Before, it didn't make a lot of sense, because if you took one thing and made it 10, it actually made my job harder and it took me longer to get things done, right? I think that's gone away. That's all great, but where do we go from here? So again, PaaS started, well, B29 slash Cloud Foundry started seven years ago. Heroku was even a little bit before that. We're almost in the 10th year of PaaS. But we have these things like Docker, right? And Kubernetes and Mesos. We don't have just Puppet and Chef anymore. We've got Ansible and Salt and all these other different things. And so we've got to figure out a way, in my opinion, to make sense of the chaos. And it's not just me looking at the ecosystems and seeing chaos, it's customers struggling with which decisions to make.
Not all the time, but it has been for like the last three or four years, especially with PaaS kind of coming up. And I don't know if you guys remember this saying, oh, it's gonna solve world hunger. And this is the normal thing. Every technology goes through this: it's the greatest thing since sliced bread, then the trough of disillusionment, then oh my gosh, what's it actually good for, type stuff, right? So we had PaaS, back to IaaS++. Then this thing called Docker came out of nowhere, which actually came out of dotCloud. That's just how Solomon actually provisioned a lot of his early PaaS, which was also on the scene with Cloud Foundry and Makara, which again turned into OpenShift. And so there's just a lot of stuff. We've got cloud native, we've got bare OSes, which I know sounds weird, but there are a lot of deployment methodologies that I see on a daily basis which are: give me a bare OS and give me a Chef script and I'll figure out how to produce something that's runnable for you. Obviously the web apps, now IoT apps, the mobile apps, complex .NET, VMs, IoT, big data. Continuous integration and deployment, in my opinion, is gonna go through a renaissance, probably at the end of this year. Maybe not, again, I'm really crappy at timing, but I think the time is now where we're gonna see a resurgence of how we actually go through the CI/CD process. Config management, right? We've got HashiCorp, Mitchell's company, coming on with Nomad and Vagrant and all kinds of stuff that are gaining a lot of momentum. And again, this notion of Kubernetes coming out and gaining a lot of momentum. The question I think we all should ask is why? Why did that happen? And of course, underneath the covers, which is a little bit grayed out, it's probably down to the big three in cloud platforms. I don't think that'll be the same big three in five years. I know that's probably not a popular opinion either, but I do believe that.
We've got OpenStack, VMware and some other things, and bare metal's gonna come back into vogue. By the way, if you look 10 years out, what's gonna be interesting is everybody thinks everything's going from hardware to software, virtualized, and everything's going from the edges into a public cloud. I'm old enough to know that everything goes in cycles. So in probably about two years we're gonna go the other direction. Everything's gonna move to the edges and we're gonna drive things into hardware. And I think the only reason I'd say that without everyone laughing at me is because of the big news around AlphaGo, which was actually mostly run, not trained but run, all on ASICs, not even FPGAs. So interesting stuff on the hardware side too. But from my perspective, when you look at all this chaos, it's not that I want fewer choices. I actually think choices are good, but I think we have inconsistent interfaces and boundaries, and we lack trust across how we would put these technologies together. And so, at least for me, I look at, what are the buckets of technologies? And I have three buckets. I always oversimplify things; it helps me think through things. I believe there's a bucket for infrastructure provisioning, one for workload orchestration, and one for artifact-to-workload. Now what's interesting is that workload orchestration before PaaS didn't really exist, because it was part of infrastructure provisioning. You provisioned the infrastructure, you got a machine, that's what your workload was. We had no idea what was beyond that. PaaS kind of brought that in and made it kind of all-in-one, but now all of a sudden Kubernetes is starting to say, hey, maybe there's a difference between artifact-to-workload, which is give me something and I'll create something that's runnable, and it doesn't have to be a singleton, it can be a system of workloads, versus actually deploying those out, stitching them all together from a networking, trust, and policy perspective, and running them.
And at least for me, I apply the 80-20 rule. And the 80-20 rule is: take any technology and say, what is the 80% use case benefit to me, the customer, me, the end user? Not the person that wrote it, or the vendor, or anything like that. And when you do that, I think you start to see some natural alignment into those buckets. And the only reason I bring that up is because I think that then defines where we want to look at the opportunities for standardized interfaces, standardized boundaries. And so standardization of these interfaces, for me, is crucial for the ecosystem at large. And I think, again, maybe not popular, but I don't think verticalization will succeed. I think it'll fail, because we need that specialization. I don't think fewer options are better. I want more options. I just want a way to kind of, for lack of a better word, Lego-brick those things together. I want this and I want this, and I know how they plug together. I don't think vendors should have to implement the complete stack. Again, if you take that 80-20 rule, I think that you should concentrate on exactly what you're good at and have fringe 20% benefits left and right, depending on where you sit in the spectrum. And again, the interoperability is key, at least from my perspective. So I think I'm gonna echo, and Sam Ramji and I have talked about this, and I think I've had discussions with others from the Cloud Foundry Foundation, around an open cloud ecosystem. How do we actually define something that everyone can get behind and actually make sense of? And I think, from my perspective, it's centered around about six things. Could be 12, could be two, I don't know. These are just from my kind of assessing the ecosystems, watching all the technologies, and being involved with them and the customers. So, one: an intent description. I wanna run this app connected to these three services. I want you to run it at an SLA so I have 10 of these in Amazon. We have to figure out a way to standardize that so it actually makes sense.
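That intent description, run this app, connected to these three services, at an SLA of 10 instances in Amazon, can be captured as a small declarative document. There is no standard for this (that's the point of the talk), so the field names below are entirely hypothetical, just one possible shape to make the idea concrete:

```ruby
require "json"

# Hypothetical declarative intent description: "what to run", not "how".
# No standard defines these fields; this is one invented shape.
INTENT = {
  "app"       => "orders-api",
  "instances" => 10,                         # the SLA: keep 10 running
  "provider"  => "amazon",
  "services"  => ["mysql", "redis", "nats"]  # bind these three services
}

# A platform would validate an intent before trying to act on it.
def valid_intent?(intent)
  intent["app"].is_a?(String) && !intent["app"].empty? &&
    intent["instances"].is_a?(Integer) && intent["instances"] > 0 &&
    intent["services"].is_a?(Array)
end

puts JSON.pretty_generate(INTENT) if valid_intent?(INTENT)
```

The design point is that the document says nothing about VMs, memory, or CPU, only apps and services, which is exactly the original B29 pitch restated as data.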
Right now we have 15 different ways to kind of do that. Container runtimes, right? The OCI, I think, has been doing a great job, not only on that, but they've picked up the workload image format as well now too, which means that someone who does artifact-to-workload will produce something that anyone can run if they want to. I think that's incredibly powerful. It doesn't mean that you have to pick that option, but I do believe it is incredibly powerful. Orchestration and deployment: how you actually orchestrate and deploy these intents. What's the standard interface for that? Obviously storage and networking, blocking-and-tackling stuff, CNI and things like that. And then, for me, one of my bents is policy and governance, and the only reason I mention that is not because it's a fun, sexy word; it's actually the exact opposite. But when I was at Google, I felt like I was empowered to do anything I wanted to do and that Google trusted me. Google didn't trust me. Google couldn't care less about what I was doing. Google trusted the Borg. And at the time, the Borg was extremely rudimentary. It had just gotten off of the previous system. And all it knew how to do was, I'll try to do what you say, unless you monkey with search or ads. And if you do that, I'm gonna bonk you on the head and you're gonna lose. That was it. But Google trusted that rule. And so then I felt empowered to kind of do whatever I wanted to. I think this notion of boundaries and consistent interfaces that allow you to Lego-brick things together also has to breed in the notion of, how do we trust that system? So for me, it's time to kind of work together from an ecosystem perspective. Again, maybe not the most popular thing to say, but I do believe that's kind of where we all wanna go, because I believe there is a specific benefit to PaaS-style deployment of applications. I do believe there's a benefit to leave me alone and let me pack everything I want inside of Docker.
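That Borg rule, "I'll try to do what you say, unless you interfere with search or ads, and then you lose", is just priority-based preemption, and it's why the policy layer can be what earns trust. A toy sketch of the rule as described in the anecdote; the capacity model, the protected-workload list, and the `admit` API are all invented here:

```ruby
# Toy sketch of "trust the scheduler, not the user": protected workloads
# (search and ads, per the anecdote) preempt best-effort ones when the
# cluster is at capacity. Everything else is admitted only if there's room.
PROTECTED = ["search", "ads"].freeze

# running:  list of job names currently placed
# capacity: max number of jobs the cluster can hold
# job:      job asking for admission
# Returns the new running set.
def admit(running, capacity, job)
  return running + [job] if running.length < capacity
  # At capacity: a protected job may bonk a best-effort job on the head.
  victim = running.find { |j| !PROTECTED.include?(j) }
  if PROTECTED.include?(job) && victim
    (running - [victim]) + [job]
  else
    running  # rejected; protected workloads are never displaced
  end
end
```

The empowering part is exactly that the rule is tiny and enforced uniformly: anyone can submit anything, and the system, not a human gatekeeper, guarantees the invariant.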
I do believe that I should be able to make those choices and make another choice in how those things are being deployed and orchestrated and even provisioned from an infrastructure perspective. So thank you, hopefully that was helpful and useful and hopefully a little bit enjoyable. I don't think we have time for any Q and A but I will hang around and I'm happy to answer any questions that you might have.