Live from Austin, Texas, it's theCUBE. Covering OpenStack Summit 2016. Brought to you by the OpenStack Foundation and headline sponsors Red Hat and Cisco. Now here are your hosts, Stu Miniman and Brian Gracely.

Hi, welcome back to theCUBE, here in Austin, Texas. This is the sixth of your shows, 7,500 people here. Many of them were sitting in the keynote this morning, where our next guest, Craig McLuckie, product manager with Google, was on. He had Alex Polvi from CoreOS on there, talking of course about how Kubernetes is helping to expand the OpenStack experience. Craig, first of all, thank you for joining us. Welcome back to theCUBE.

Thank you so much.

All right, so for those that didn't catch it, give us kind of the thumbnail. What were we talking about in the keynote, this whole OpenStack, just another application, as far as you guys are concerned?

So what we went through in the keynote today was a demonstration of OpenStack running on Kubernetes. Kubernetes, if you think about it, is just a nice way to run a distributed system. It's a distributed systems environment. And if you think about OpenStack, in many ways it's a fancy distributed system. And so the demo today that Alex did was OpenStack packaged up in containers and running in a cluster environment. That had some really nice properties for the OpenStack operator, in that it made the platform far more robust. So Alex showed how you could tear down one of the OpenStack services and it would spring up on a different node. It also made the operations of OpenStack far easier in terms of versioning, updating, and managing the life cycle of the various OpenStack components.

Yeah, so obviously people know about Kubernetes. It came out of Borg at Google, so obviously it's done a lot of things at scale, and people are very, very excited about it. Talk about the Kubernetes plus OpenStack integration. Where are those things coming together?
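The self-healing behavior Alex demoed — tear down a service and it springs up on a different node — comes from a reconciliation loop: a controller compares desired state against observed state and reschedules anything whose node has gone away. The sketch below is purely illustrative; the function and service names are hypothetical, not actual Kubernetes or OpenStack APIs.

```python
def reconcile(desired_services, placements, healthy_nodes):
    """Return new placements with every desired service on a healthy node."""
    new_placements = {}
    for svc in desired_services:
        node = placements.get(svc)
        if node in healthy_nodes:
            new_placements[svc] = node  # still running, leave it alone
        else:
            # node died: pick the healthy node hosting the fewest services
            target = min(
                healthy_nodes,
                key=lambda n: sum(1 for v in new_placements.values() if v == n),
            )
            new_placements[svc] = target  # "springs up on a different node"
    return new_placements

if __name__ == "__main__":
    desired = ["keystone", "nova-api", "glance"]
    placements = {"keystone": "node-1", "nova-api": "node-2", "glance": "node-2"}
    # node-2 fails; its services get rescheduled onto the survivors
    print(reconcile(desired, placements, healthy_nodes={"node-1", "node-3"}))
```

Running the loop repeatedly against live cluster state is what keeps the declared set of OpenStack services running regardless of individual node failures.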
How are the two organizations working together, the CNCF and the OpenStack Foundation? Talk a little bit about that.

Sure. So let me give you an example from my previous life. The first product I worked on at Google was Google Compute Engine, which is a simple infrastructure-as-a-service offering that just provides basic primitives for folks that want to run workloads: you can get a virtual machine, you can get a disk, you can get a network. That's very much like OpenStack today, so Compute Engine is a decent analog for OpenStack. And when we built Compute Engine, it was built on top of Borg, which is our internal infrastructure management framework. And because it was built on Borg, we got a lot of nice properties. It made it a lot easier for us to deal with operating at scale, to update and version and provision all of the subsystems that were actually running Compute Engine.

And so when I think about OpenStack, it's a very close analog to what we had inside Google. Today you see a lot of folks building and managing OpenStack deployments, and they're starting to experience some issues. One of the key issues is the challenge of achieving very high levels of operational efficiency. One of the nice things about cloud native computing is that it provides a more effective way to pack and schedule and make intelligent decisions around where things should run, so you get more out of the infrastructure. The OpenStack community is really hungry to drive up the aggregate efficiency of OpenStack, and bringing in a technology like Kubernetes is a great path to do that. The other thing we see a lot of folks struggling with is the operational life cycle. OpenStack is a very complex system with a lot of moving pieces, and they just want a better way to deliver, integrate, and manage those pieces.
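The efficiency argument — "pack and schedule and make intelligent decisions around where things should run" — is essentially a bin-packing problem. The toy first-fit-decreasing packer below illustrates why smarter placement means fewer machines; it is a sketch of the idea only, not the algorithm Kubernetes' scheduler actually uses.

```python
def pack(workloads, node_capacity):
    """Place workloads ({name: cpu_demand}) onto as few nodes as practical.

    Returns a list of nodes, each {"free": remaining_cpu, "apps": {name: demand}}.
    """
    nodes = []
    # Largest demands first tends to pack tighter (first-fit decreasing)
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for node in nodes:
            if node["free"] >= demand:  # first node with room wins
                node["free"] -= demand
                node["apps"][name] = demand
                break
        else:  # no existing node has room: provision another
            nodes.append({"free": node_capacity - demand, "apps": {name: demand}})
    return nodes

if __name__ == "__main__":
    # Four workloads totaling 14 CPU fit on two 8-CPU nodes
    for node in pack({"api": 3, "db": 5, "cache": 2, "worker": 4}, node_capacity=8):
        print(node["apps"], "free:", node["free"])
```

Driving up utilization this way, across thousands of workloads, is the aggregate-efficiency win the OpenStack community is after.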
And so by introducing Kubernetes and cloud native computing paradigms, they get all of that utility. Now, if you think about the relationship between the two foundations: the Cloud Native Computing Foundation was established to promote these cloud native patterns — container-packaged, dynamically scheduled, microservices-oriented solutions — and it's not intended to create a closed system. Its charter was really the promotion of these patterns and these technologies. OpenStack is an obvious place that can benefit from this approach to managing applications at scale. And so our aspiration for these two foundations is, one, to introduce some of the cloud native computing technologies into OpenStack, so that the OpenStack community benefits from the efficiency, performance, and reliability that cloud native computing brings. But also that OpenStack supports the broader cloud native paradigm, because OpenStack's done a great job, for instance, of integrating the network and storage components. We don't necessarily want to go and replicate all of the awesome work that community's been doing. So we'd love to find ways to use a lot of what OpenStack's done in terms of the on-prem cloud pieces, and then create these blended solutions that introduce the technologies in a complementary way.

One of the things — it was surprising to a lot of people when Google said, hey, we're in on OpenStack, we're going to be part of the foundation. People went, oh, okay. People tend to look at the large web-scale big three and say, they have their own technology, and OpenStack was kind of private cloud. You've talked about it from a hybrid cloud perspective. Google does a very good job of running open-source software and turning it into a service, whether it's SQL or Kubernetes or containers.
Can you ever imagine a time when the Google Cloud Platform will say, I can give you essentially OpenStack as a service and help with that hybrid? Or is that the wrong way to think about how Google thinks about OpenStack for hybrid cloud?

I can't speak to specific strategic futures, but I don't think that scenario is too far-fetched. I'd love to get to a point where the demo that we saw today would run on something like Google Container Engine. And I don't think it's too big a stretch to imagine a world where we could package up a lot of the OpenStack tools and create these blended experiences, where you have on-prem components running on native OpenStack or on a Kubernetes-hosted base, and then you can burst to the cloud — either consuming cloud services directly into an OpenStack environment, or potentially even extending that management out to cloud-based resources. So I can't comment on specific strategic futures, but I would say it's not that far-fetched.

Okay, so Craig, it's interesting. I've noticed in your comments sometimes, you describe Google stuff and you call it simple. And of course, when we talk about OpenStack, it's a little bit more complex. Simple for Google is not necessarily something that everybody else thinks is simple. I'm curious, when you look at what you're doing in OpenStack — the phrase "Google's infrastructure for everyone else" sometimes gets used — how do we think about the usability, and what kind of skill sets people have to have, because most people aren't Google?

No, it's actually a very good point. One of the things I get asked a lot is, why didn't you just open source Borg? Why didn't you just make all of the Google tool chain available to the rest of the world? And the reality is that even if we did open source Borg, no one could actually deploy it.
And even if someone actually successfully deployed it, they would find the learning curve unappealing. When we hire engineers from the outside world, it takes them six months to learn Google's infrastructure. At the end of it, it's like they've developed a superpower: they're able to deploy these applications, and it makes even the most complex things tractable. You can build these really complex systems. But the learning curve is astronomically steep and very difficult to deal with. And one of the things that we find very attractive in working with the OpenStack community, with a lot of these open communities, with technologies like Docker, is reducing the learning curve — not just for engineers that are using our technologies, our cloud products, but also for our own engineers. The learning curves are too high; it takes too long for a new Google engineer to get spun up. And so we will be actively working with communities that have built a great set of experiences. We hold the Docker community in very high regard. It created a very elegant and neat set of experiences for the developer, and our aspiration is to extend that into the operator experience. Docker's done a really great job of reducing barriers to entry for developers and creating very fast on-ramps for the use of new technologies. Our aspiration is to take that great first-five-hours experience, which you get with Docker, and create a great next-five-years experience as you try to build and operate these systems. So it is absolutely front of mind for us. We want to make the experience better for everybody, and for our own engineers as well. And we do recognize that, left to our own devices, we would probably create systems that are perhaps less simple than the rest of the world would like. But we think that working closely with partners and being relentlessly focused on the end user will get us to the right place in the end.
Yeah, so your team — you've got a number of people in the evangelism or advocate space, Sarah Novotny and Kelsey Hightower, people who have been on the show, people who are really well-known. What feedback do you get from them as they're out in the community, in terms of some of the interesting challenges and application types people are building in these new distributed environments — things that take advantage of Kubernetes that maybe you couldn't do before? Has anything popped to mind where you go, that's a cool application, that's a really interesting challenge?

Yeah, there's a lot of work to be done to make Kubernetes generally accessible to everybody. When we started off, we were really focused on the web-style application and really nailing that solution, scaling up to 100 nodes. As we move past that, we're starting to think about much more scalable clusters: beyond a thousand nodes, up to 10,000 or 50,000 nodes. And then we're also starting to think about much more stateful workloads. As we look to the next wave of applications we're focusing on, stateful services are really important. It turns out it's great that you can build these 12-factor apps and run them, but the reality is most enterprises have a lot of stateful workloads. So our immediate priority for the next release, the Kubernetes 1.3 release, is doing a much better job of supporting the more stateful set of workloads. And then beyond that, we're starting to get a lot of interest from the community around much more optimal scheduling for niche workloads. Some folks in the telco industry are running a lot of real-time, or near-real-time, processing workloads, and they have some very specific scheduling constraints around the co-location of services. So as we start to diversify, we'll look at that.
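The telco co-location constraint described above is what schedulers call an affinity rule: a workload declares that it must land on the same node as a partner service. A minimal sketch of how such a constraint narrows placement — all names and the scheduler interface here are hypothetical, not a real Kubernetes API:

```python
def schedule(workload, affinity_with, placements, nodes):
    """Place `workload`, honoring an optional same-node affinity constraint."""
    if affinity_with is not None:
        partner_node = placements.get(affinity_with)
        if partner_node is None:
            raise ValueError(f"affinity target {affinity_with!r} not placed yet")
        candidates = [partner_node]  # constraint narrows the choice to one node
    else:
        candidates = nodes           # unconstrained: any node is fair game
    chosen = candidates[0]
    placements[workload] = chosen
    return chosen

if __name__ == "__main__":
    placements = {"packet-gateway": "node-a"}
    # the near-real-time analyzer must sit beside the gateway it taps
    print(schedule("rt-analyzer", "packet-gateway", placements, ["node-a", "node-b"]))
```

Real schedulers generalize this to soft preferences, anti-affinity, and topology zones, but the core idea is the same: constraints filter the candidate node set before placement.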
So I think the cardinality here is web, stateful services, big data, and then we'll start pursuing more real-time and other workloads.

So Craig, I'm wondering if you can comment on how Google thinks about using and participating in open-source communities. Of course, Google's done tremendous things — not only using open source, but contributing huge ideas. MapReduce: we probably wouldn't have so much of the big data analytics space if it wasn't for what Google did there. But I've heard people say, we're all running on what Google was doing 10 years ago. So Kubernetes is out there, you're here at OpenStack. How does Google look at these communities, in terms of contributing to and consuming open source?

So it's interesting. As we've shifted from being an internet company to becoming a cloud company, our posture with respect to open source has shifted pretty drastically. Initially we would contribute open-source technologies as a way to accelerate the community and to help smooth our own processes as the upstream communities came to understand what we were trying to accomplish. But as we've become more of a cloud company than just an internet company, we're looking at open as a major competitive advantage for us. So if you think about the way that we built Kubernetes, we released Kubernetes into the wild very, very early in its life cycle. We put a lot of time and thought into getting the basic domain model right, but Kubernetes, the project, was by and large built in the open. It's the first hosted cloud product that was built from the ground up in the open, with the engagement of an open-source community. And it created a tremendous amount of power for us, because our feedback loop was almost instantaneous.
Historically, if you're a cloud company, you'll go through a long development cycle, produce something, take it to some early-adopter folks, and they'll look at it and say, I don't like it. Then you have to rework the whole thing. It creates these very slow cycles of development. When you're working in open source, the feedback cycle is near instantaneous, because you're directly collaborating with your end customer to make sure the technology meets their needs. So that's been huge in terms of advancing the development cycle.

The second piece of becoming a cloud company has been recognizing that our customers don't want to be locked in. We could go ahead and create this big, beautiful bespoke technology, but the reality is none of the customers we're working with want to be in a single cloud environment. They either want to be in a hybrid world, where you have some on-premises workloads and some hosted public cloud workloads, or they want a relationship with multiple public cloud providers. Creating bespoke solutions gates our own adoption. So that's the second big piece of it.

And then the third piece is, when you look at where we're going from here, we're looking to replicate the model that we brought to market with Kubernetes in almost every other domain. If you think about our big investments in machine learning with TensorFlow, we're running that as an open project, with work being done in the open-source community to help people understand it and make sure the product's right. If you look at our next generation — the successor to a lot of the MapReduce technologies, it's called Dataflow — it's also being built in the open, as an open-source project, with the engagement of the community. And so we're looking to replicate this model because, one, it just works better; two, it avoids lock-in; and three, frankly, we just get a lot of acceleration from the community.
If you look at Kubernetes, 25% of the code base was written by Red Hat. Another 25% was written by a broad array of contributors with a very diverse set of personalities. And that diversity of view and that diversity of experience really creates a stronger technology base.

About a month ago, you were out at the GCP Next conference — the first big conference for the Google Cloud Platform, at least from a public perspective. Diane Greene was there; she's starting to set the direction. What was the feedback you got from the market, from customers, that maybe you expected, or that surprised you?

I think the biggest signal I got back from that was: wow, Google's all in on cloud. The one thing that conference did is really set the tone of our intent, the focus that we have, and our relentless pursuit of becoming a cloud company — of moving beyond being an internet company to actually becoming a cloud company. And so the overwhelming narrative I've heard from customers is, wow, Google's really serious about this cloud thing. And that's been great. Beyond that, the reception's been really good. There were a lot of good announcements and generally very positive uptake, and a lot of inbound customer interest related to that.

All right, Craig McLuckie with Google. Really appreciate you coming back and sharing everything about Google's participation in OpenStack and beyond. We'll be back with lots more coverage here from OpenStack Summit 2016 in Austin, Texas. You're watching theCUBE.