Live from Silicon Valley, it's theCUBE, covering Google Cloud Next 17. Welcome back to theCUBE's coverage of Google Next 2017. 10,000 people are in San Francisco. SiliconANGLE Media, we've got reporters there as well as the Wikibon analysts. I've been up there for the analyst event, some of the keynotes, and we're getting thought leaders, partners, really getting lots of viewpoints as to what's happening, not just in the Google Cloud, but really the multi-cloud world, and that's why I'm really excited to bring back a guest that we've had on the program before. Craig McLuckie, who four months ago was with Google, but he's now the CEO of Heptio, and he's also one of the co-creators of Kubernetes, which anybody that's watching the event definitely has been hearing plenty about. So welcome back to the program. Thanks for joining us. Thanks for having me back. Yeah, absolutely. I know you guys, you were part of a little event that kind of went before the Google Cloud event, brought in some people in the cloud ecosystem and talked about a lot that was going on. Maybe start us off with what led you to kind of pop out of Google? What is Heptio, and how does that kind of extend what you were doing with Kubernetes when you were at Google? Certainly. So Heptio is a company that has been created by my co-founder, Joe, and myself to bring Kubernetes to enterprises. And the thing that really motivated me to start this company was the sense that there was not an unfettered Kubernetes company in existence. I spoke to a lot of organizations that were having tremendous success with Kubernetes. It was transforming the way they approached infrastructure management. It created new levels of portability for their workloads, but they wanted to use Kubernetes on their own terms in ways that made sense to them. And most every other organization that is creating a Kubernetes distro has attached it to other technologies.
So it's either attached to an opinionated operating system, or it's attached to a specific cloud environment, or it's attached to a PaaS. And it just didn't meet the way that most of the customers I saw wanted to use the technology. I felt that a key missing part of this ecosystem was a company that would meet the open source community where it is and help customers that just need a little more help. A little more help with training, better documentation, support, and the tools they needed to make themselves successful in the environments that they wanted to operate in. And that's what motivated Joe and me to start this company. Yeah, and it's interesting, as you look at the biggest contributors, probably, you know, Google's there, you've got Red Hat, you know, you've got, as you said, people that have their viewpoints as to where that fits. I think that helps the development overall, but, you know, maybe you can help us unpack there. Why do you want it separate? Is there, you know, that opinionatedness, what's inherently suboptimal about that? I think part of the key value in Kubernetes is the fact that it supports a common framework in a highly heterogeneous world. Meaning you can mix together a broad variety of things to suit your needs. You could mix together the right operating system and the right hosting environment with the right networking stack. And you could run general applications that are then managed and performant in a very efficient and easy-to-use way. And, you know, one of the things that I think is really important is this idea that customers should have choice. They should be picking the infrastructure based on the merits of the infrastructure. They should pick the OS that works for them. And they should be able to put together a system that operates tremendously well.
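To make that infrastructure-agnostic framing concrete, here's a hedged sketch (not something discussed in the interview): a Kubernetes Deployment is a declarative description of a workload, and nothing in the description names a cloud provider or operating system, which is exactly what lets the same object run on GKE, AWS, Azure, or on-premises clusters. The names `web` and `example.com/web:1.0` below are hypothetical placeholders.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 3) -> dict:
    """Build a minimal Kubernetes Deployment spec as a plain dict.

    The spec describes *what* should run (image, replica count, labels),
    not *where* -- no cloud, OS, or machine type appears anywhere in it.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {
                            "name": name,
                            "image": image,
                            "ports": [{"containerPort": 8080}],
                        }
                    ]
                },
            },
        },
    }

manifest = deployment_manifest("web", "example.com/web:1.0")
# kubectl apply accepts JSON as well as YAML
print(json.dumps(manifest, indent=2))
```

The cluster's control loop then converges actual state toward this declared state on whatever infrastructure sits underneath, which is the "logical infrastructure abstraction" being described.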
And I think it's particularly critical at this juncture that a layer emerges that allows customers and service providers to mix together the set of things that they want to, you know, use and consume in a way that's agnostic to the infrastructure and the operating environment. I see the mainstream cloud providers taking us in some ways back to the world of the mainframe. If you think about, you know, what we're starting to see with, you know, companies like Amazon, who are spectacularly successful in the market, is this world where you have this deeply vertically integrated service provider that provides not only the compute, but also a set of core services and, you know, almost everything else that you need to run. And at the end of the day, it's getting to a point where a customer has to kind of pick their service provider. And, you know, in the mainframe days, as you say, nobody got fired for buying IBM. But it was also suboptimal from an ecosystem perspective. It inhibited innovation in many ways. And it was the emergence of Wintel, that sort of Windows and Intel ecosystem, that really opened up the vendor ecosystem and drove a tremendous amount of innovation and advancement. And, you know, when I think about what enterprise customers want and need today, they want that abstraction. They want a safe way to separate out the set of services that run their business, the set of technologies that they build and maintain, from the underlying infrastructure. And I think that's what's driving a lot of the popularity of Kubernetes, this idea that it is a logical infrastructure abstraction that lets you pick the environment that you operate in purely based on the merits of the environment. Yeah, it's been a struggle. I mean, I know through my entire career in IT, we've had that discussion of, do I just, you know, standardize on what we have? Because, you know, the enterprise today, absolutely, every time I put a new technology in, it doesn't displace, it adds to it.
So, I talk to lots of customers still using mainframe. They're using the Wintel stuff. They're using public cloud. They're using, you know, yes, and, and, and. Therefore managing it, orchestrating it, doing all those pieces is difficult. The challenge when I put an abstraction layer in, one of the big challenges is, you know, how do I really get the full value out of the pieces that I had? You know, Sam Ramji said that when he was at Cloud Foundry, they were trying to make it so that you really don't care which, you know, cloud, whether it's on-premises or public cloud environments. And he said one of the reasons he joined Google was because he felt, you know, if you went least common denominator or something, there were things that Google was doing that nobody else can do. So, there's always that balance of, you know, can I put an abstraction layer in or virtualize something and still take advantage of it? Or, you know, do I just go all in with one vendor? I mean, IBM back in the day, you know, did lots of great things to make it simple, and the cloud guys are trying to make it simple. Amazon, of course, you know, no doubt that they're trying to vertically integrate everything; they'd like to do, you know, all your services. So, you know, where do you see that balance? And, it's interesting, does it serve customers best to be able to say, okay, you know, you can take the mess that you have, and, you know, is this a silver bullet to help them solve it? No, I think it's a really good point. And, you know, consistently as I look through history, a lot of the platforms that people have pursued that created this sort of complete decoupling introduced this lowest common denominator problem, where you had to trade off a set of things that you really wanted with the capabilities of the platform.
And, you know, I think that absolutely, in some cases, it makes a tremendous amount of sense to invest in a vendor-specific technology. So let's take an example out of Google Cloud: Spanner. You know, Cloud Spanner is literally the only, you know, globally consistent, well, right now it's regionally consistent, but it's literally the only globally consistent relational store available. There is nothing like it. The CockroachDB folks are, you know, building something that emulates some of the behavior, but without the TrueTime API, that sort of atomic clock, you know, crazy infrastructure that Google's built, it adds very little utility. And so in certain applications and certain workloads, if what you really want is a globally replicated, highly consistent relational data store, there is literally only one provider on the planet that will deliver it, which is Google. However, you might look at, you know, something that Amazon provides and, you know, they may have some other, you know, service perhaps you like, you know, perhaps you've already built something on Redshift and you want to be able to use that. Or Microsoft might, you know, offer up some other technologies that make sense to you. And I think it's really important for enterprises to have the option. There are times when, for a given workload, it makes a tremendous amount of sense to bet on a vendor. You know, if you're looking to run something that has, you know, deep machine learning hooks or needs some other science fiction technology that Google's bringing to the world, it makes sense to run that on Google. For applications that are potentially integrated into your productivity suite, if you're an Office 365 user, it probably makes sense to host it on Microsoft. And then perhaps there's some other pieces that you run on Amazon. And I don't think it's going to be, you know, pick one cloud provider and live in that static world forever. I think the landscape is constantly evolving and shifting.
And one of the things technologies like Kubernetes provide is an option, an option to move, an option to decide which specific services you want to pull through and use in which application. Recognizing that those are going to bind you to that cloud provider in perpetuity, but not necessarily pulling the entirety of your IT structure through. Yeah. Craig, I'm curious, you know, when I look out at kind of the people that commentate on this space, one of the things they say is, you know, Kubernetes is interesting, but this whole hybrid cloud thing, I mean, you know, kill all the on-premises stuff, public cloud's really where it's at. You know, I know when I talk to most companies, they've got plenty of on-premises stuff. You know, most, you know, infrastructure that is bought, there's still a lot of it going on-premises. So companies are sorting out what applications go where, what data goes where. Diane Greene said only about 5% of the world's data really is in the public cloud today. What's your view on kind of that, you know, on-premises, public cloud piece and Kubernetes' role there? Yeah, I think it's a great question. And I have, you know, had some really interesting conversations with CIOs in the past. I remember in my very earliest days, you know, poo-pooing the idea of the private cloud and having a really intense CIO look across the table at me, like, you will pry my data centers from my cold, dead hands. He literally said that to me. And so there's certainly a lot of passion in the space. And I think at the end of the day, one has to be pragmatic. You know, first of all, one has to recognize that if you're an organization that has bought significant data center footprint, you're probably going to want to continue to use that asset that you've acquired. That said, you may not want to use it in perpetuity.
If you're a company, and most large companies are also naturally heterogeneous, meaning, you know, as you go through an acquisition, the acquired portion of your company may have a profoundly different IT portfolio, you know, may have a different set of environments. And so I think the world certainly benefits from an abstraction layer that allows you to train your engineers with a certain set of skills and then be highly decoupled from the infrastructure environment you run in. And I think, you know, again, Kubernetes is delivering some of that promise in a way that I think really resonates with customers. Yeah, absolutely. And even, right, you know, we've been telling people for years, stop building data centers. You know, there's very few companies that want to build data centers. Even, you know, yes, Google talks about their data centers, but, you know, Amazon gets their data center space from lots of other players there. But, you know, if I stop building data centers today, I'm going to have them for another 25, 30 years. And even then, what am I going to own myself? I talk to plenty of the big financial guys, you know, they're not going to move all of their information. You know, they want to have it under their control, whether it's their own data center or a, you know, hosted, managed environment there. So, you know, we're going to be living with this, you know, multi-cloud, you know, thing for a long time. There is another thing that I don't think people have fully internalized yet, which is, in many ways, the way that cloud providers' data centers are structured is around power sources. At the end of the day, it's around cheap power and cooling. As you start looking at the dynamics of what's happening to our energy grid, it's no longer quite as centralized as it was. And it starts to beg the question, does it make sense to, you know, think about, you know, smaller units that are more distributed?
Does it make sense to start really thinking about edge compute capacity? The option to deploy something really close to your customers if you need low-latency entertainment scenarios, or the option to, you know, push a lot of capacity into your distribution center if you're running, you know, heavy IoT workloads where you just don't want to put all that data on the network. And so I think that, you know, again, certainly I think that people underestimate the power of Amazon, Microsoft, and Google. You know, people that are still building data centers today don't realize quite how remarkable the vendors at that scale are in terms of the ability to build and run these things. But I do think that there are some interesting options in terms of regional locality, data sovereignty, and edge latency that legitimize other types of deployment. Yeah, and you talked about IoT. You know, edge computing absolutely is something that, you know, comes up a lot. There at AWS re:Invent last year, you know, Amazon put their serverless solution, using Greengrass, out at the edge, because there's, you know, tons of sensors where I might not have the networking, or I can't have the latency, so I need to do the compute there. How do things like, you know, serverless at the edge and IoT play into the discussion of Kubernetes? I think it plays really well insofar as, you know, Kubernetes, it's not intrinsically magic. What it has done is created a relatively simple and, it turns out, pretty reusable abstraction that lets you run a broad set of workloads. I wouldn't say it's exactly cracked the serverless, you know, paradigm in terms of event-driven, low-cost-of-activation computing, but that's something that can certainly be built on top of it.
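To make "event-driven, low cost of activation" concrete, here's a minimal, purely hypothetical sketch (not anything Heptio or Kubernetes ships): handlers are registered per event type and a handler only runs when a matching event arrives, rather than a process sitting idle per function. That routing shape is the core of what a serverless layer built on top of a cluster provides; the event name `sensor.reading` and the handler are invented for illustration.

```python
from typing import Any, Callable, Dict

class EventRouter:
    """Hypothetical event router: map event types to handler functions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], Any]] = {}

    def on(self, event_type: str):
        """Decorator that registers a handler for an event type."""
        def register(fn: Callable[[dict], Any]) -> Callable[[dict], Any]:
            self._handlers[event_type] = fn
            return fn
        return register

    def dispatch(self, event_type: str, payload: dict) -> Any:
        # The handler is activated only when an event arrives --
        # nothing runs (or costs anything) between events.
        handler = self._handlers.get(event_type)
        if handler is None:
            raise KeyError(f"no handler for {event_type!r}")
        return handler(payload)

router = EventRouter()

@router.on("sensor.reading")
def summarize(payload: dict) -> float:
    # Edge-style use case: aggregate sensor readings locally instead of
    # shipping every raw data point over the network.
    readings = payload["values"]
    return sum(readings) / len(readings)

print(router.dispatch("sensor.reading", {"values": [1.0, 2.0, 3.0]}))  # 2.0
```

On a cluster, each handler would typically be packaged as a container and scaled up from zero by the platform when events flow; the sketch only shows the dispatch logic, not the scheduling.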
The thing that it does do is it provides you the ability to manage an application as if it were software as a service in a location that is remote from you, by providing you a very principled, automated framework for operations. All right, Craig, last thing I want you to do is give us an update on Heptio. You know, how many people do you have? How are you engaging with customers? You know, what does the business model look like for that? What can you share? So we're currently 13 people. We've been in business for four months, and we've been able to hire some really amazing folks out of the distributed systems communities. We are at a point where we're starting to provide our first supported configurations of Kubernetes. We don't position ourselves as a distribution provider. We rather like to think of ourselves as an organization that's invested in helping you get the most out of the upstream community. Right now, our focus is on training, support, and services. And over time, you know, if we do that really well, we do aspire to provide a more robust set of product capabilities that help organizations succeed. For now, the thing that we focus most relentlessly on is helping customers manage down the cost of supporting a cluster. How do we create a better way for folks to understand what a configuration should look like and where they're likely to encounter issues? And if they do encounter those issues, helping them resolve them in the lowest-friction and least painful way possible. All right, and any relationships with the public cloud guys? What do you work with when you talk about, you know, OpenStack, Amazon, Google, Microsoft? You know, what's the relationship and how do those work? So we announced the first joint quick start for Kubernetes with the Amazon folks last Tuesday. And that's been going pretty well. You know, we're just getting a lot of positive feedback around that.
And we're now starting to think more broadly in terms of providing supported configurations on premises and then on Microsoft. So Amazon for us was the obvious starting point. It felt like an under-supported community from a Kubernetes perspective, insofar as, you know, Microsoft had our friend Brendan Burns, who helped us build Kubernetes in the first place, and he's been doing some great work to bring Kubernetes to the Azure Container Service. What we really wanted to do was to make sure that Kubernetes runs well on Amazon and that it is naturally integrated into the Amazon operating model. So, you know, CloudFormation templates, and we have a really principled way to manage, maintain, upgrade, and support those clusters. All right. Craig McLuckie, co-creator of Kubernetes and CEO of Heptio. Really appreciate you coming here to our Palo Alto studio, helping us as we get towards the end of two days of live coverage of Google Cloud Next 2017. You're watching theCUBE.