From Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. Okay, welcome back everyone. This is theCUBE's live coverage here in Seattle for KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman, breaking down all the action, talking to all the top people, influencers, executives, startups, vendors, the foundation itself, but we're here with two co-leads of Kubernetes at Google, legends in the Kubernetes industry, Tim Hockin and Brian Grant, both with Google, both co-leads on GKE. Thanks for joining us, legends in the industry. Kubernetes has still had a short life, but being there from the beginning, you guys were instrumental at Google in building out and contributing to this massive tsunami of 8,000 people here. Who would've thought? Yeah, it's pretty crazy. It's amazing, it's crowded. It's a little overwhelming. It's almost like you guys have celebrity status here inside this crowd. How's that feel? It's a little weird. I sort of don't buy into the celebrity culture for technologists; I don't think it works well. Yeah, we agree, but it's great to have you on. Let's get down to it. The rise of Kubernetes has certainly been dramatic. It's now pretty mainstream. People look at it as a key linchpin for the center of CloudNative, and we see the growth of Cloud. You guys are living it with Google. What is the importance of Kubernetes? Why is it so important, fundamentally, at its core, that it has so much impact? What is the fundamental reason why it's so successful? I think fundamentally, Kubernetes provides a framework for driving migration towards CloudNative patterns across your entire operational infrastructure. The basic design of Kubernetes is pretty simple and can be applied to automating pretty much anything. And we're seeing that here.
There are at least half a dozen talks about how people are using the Kubernetes control plane to manage their applications or workflows or functions, things other than just core Kubernetes containers, for example. And on CloudNative, one of the things I'm involved with is the Technical Oversight Committee of the Cloud Native Computing Foundation, so I drove the update of the CloudNative definition. If you're trying to operate with high velocity, deploying many times a day, if you're trying to operate at scale, especially with containers and functions, scale is increasing and compounding as people break their applications into more and more microservices. Kubernetes really provides a framework for managing that scale, and for integrating other infrastructure that needs to accommodate that scale and that pace of change. I think Kubernetes speaks to the pain points that users are really having today. Everybody's a software company now, right? And they have to deploy their software, they have to build their software, they have to run their software. And these things build up pain. When it was just a little thing, you didn't have to worry about internet scale and web scale; you could tolerate it within your organization. But more and more, you need to deploy faster. You need to automate things. You can't afford to have giant staffs of people who are running your applications. These things are all part of Kubernetes' purview. And I think it just spoke to people in a way where they said, I suffer from that every day, and you just made it go away. And what's the core impact now? Because now people are seeing it. What is the impact to the organizations that are rethinking their entire operation, from all parts of the stack, from how they buy infrastructure, which is now actually cloud, down to deploying applications? What's the real impact?
I think the most obvious, the most important part here is the way it changes how people operate and how they think about managing systems. It's no longer scary to update your application. It's just a thing you do. And if you can do it with high confidence, you're going to do it more often, which means you get features shipped and bugs fixed, and you get your rollouts done quicker. And it's amazing, the result that can have on the user experience, right? A user reports a bug in the morning and you fix it in the afternoon, and you don't worry about that. Yeah, you bring up some really interesting points. I think back 10 years ago, from a research standpoint, we were looking at how the enterprise could do some of the things that the hyperscale vendors were doing. I feel like over the last 10 years, every time Google released one of its great scientific papers, we'd all dig into it and say, oh, hey. When I went to the first DockerCon and heard how Google was using containers, and when Kubernetes first came out, it was like, oh wow, maybe the rest of us will get to do something that Google's been doing for the last 10 years. Brian, maybe bring us back a little bit to Borg and how that led to Kubernetes. Are the rest of us still just doing whatever Google did 10 years ago? Yeah, Tim and I both worked on Borg previously, Tim on the node agent side, and I worked on the control plane side. One lesson we really took from Borg is that you can run all types of applications. People started with stateless applications, and we started with that in Kubernetes because it's simpler, but really it's just a general management control plane for managing applications. And with the model of one application per container, you can manage the applications in a much more first-class way and unlock a lot of opportunities for automation in the management control plane.
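The confident, frequent rollouts Tim described earlier are what a Kubernetes Deployment's rolling-update strategy automates. As a hedged sketch (the names and image tag here are illustrative, not from the interview):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # never take down more than one pod at a time
      maxSurge: 1         # allow one extra pod during the rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.2.3  # bumping this tag triggers a gradual rollout
```

Changing the image tag and re-applying the manifest replaces pods incrementally, which is what makes the morning-bug, afternoon-fix cadence routine rather than scary.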
So at Google, several years ago when we started, Google had already gone through the transition of moving most of its applications to Borg, and it was after that phase that Google started its cloud effort, while the rest of the world was doing VMs. When Docker emerged, we were in the early phases, as Tim mentioned in our keynote yesterday, of open sourcing our container runtime. When Docker emerged, it was clear that it had a much better user experience for the way folks were managing applications outside of Google, and we just pivoted to that immediately. Yeah, when Docker first came out, we took a look at it, my node agent team in Borg, and we went, you know, it's kind of like a poor man's version of the Borglet. And we sort of ignored it for a while because we were already working on our open source effort. We were open sourcing it not really to change the world and make everybody use it, but more so that we could have conversations with people like the Linux kernel community. When we said we need this feature, they'd say, well, why? Why do you need this? And we could actually demonstrate for them why we needed it. And when Docker landed, we saw the community building and building and building. I mean, that was a snowball of its own, right? And as it caught on, we realized we knew where this was going. We knew that once you embrace the Docker mindset, you very quickly need something to manage all of your Docker nodes once you get beyond two or three of them, and we knew how to build that, right? We had a ton of experience here. So we went to our leadership and said, please, this is going to happen with us or without us, and I think the world would be better if we helped. I think that's an interesting point. You guys had to open source to do collaboration with Linux, to get that flywheel going for you, out of necessity. And then Docker validated the community acceptance of, hey, we can just use containers and a lot of magic will happen.
It hit the second trigger point. What happened after that? Did you guys have a debate internally? Is this another MapReduce? What's happening? Should we get behind this? I knew there was a big argument, or debate I should say, within Google. At that time, there were a lot of conversations about how we should handle this. And that was around the time that Google Compute Engine, our infrastructure-as-a-service platform, was going GA and really starting to get usage. So we had an opportunity to enable our customers to benefit from the kinds of techniques we had been using internally. I don't think the debate was whether we should participate; it was more how. For example, should we have a fully managed product? Should we just open source it? Should we do managed open source? Those were really the three alternatives we were discussing. Well, congratulations, you guys have done great work and certainly had a huge impact on the industry. And I think it's clear that the motivation was to have some sort of standardization, de facto standard, whatever word can be used, to kind of let people be enabled on top or below. Kubernetes is great. I guess the next question is, how do you guys envision this going forward? As a core, if we're going to go to decomposition with low levels of granularity, tying together through the network and cloud scale, and the new operating model, to have confidence in this, how does the industry maintain the greatness of what Kubernetes is delivering and bring new things to market faster? What's your vision on this? I talked a little bit about this this week. We put a ton of work into extension points, extensibility of the system, trying to stay very true to the original vision of Kubernetes, right? It is a box, and Kubernetes fits inside the box, and anything that's outside the box has to stay outside the box. And this gives us the opportunity to build new ecosystems.
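One of the extension points Tim is alluding to is the CustomResourceDefinition API, which lets ecosystem projects add their own resource types without changing Kubernetes itself. A minimal, hypothetical example, using the 2018-era v1beta1 API (the `CronTab` type and group name are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true    # this version is exposed by the API server
      storage: true   # and is the one persisted in etcd
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```

Once applied, `kubectl get crontabs` works like any built-in resource, which is how projects like Istio and Knative layer on top of Kubernetes while staying outside the box.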
You can see it in the networking space and you can see it in the storage space, where whole cottage industries are now springing up around doing networking for Kubernetes and doing storage for Kubernetes. And that's fantastic. And you see projects like Istio, which I'm a big fan of. It's outside of Kubernetes. It works really well with Kubernetes, it's designed on top of Kubernetes infrastructure, but it's not Kubernetes, and it's totally removable; you don't need it. And there are systems like Knative which are taking the serverless idea and up-leveling Kubernetes into the serverless space, and it's happening all over the place. And we're trying to, sort of pretty fanatically, say no, we're staying this big and no bigger. You know, from an engineering standpoint, it's much simpler if I just build a product and build everything into it. All of those connection points, and I go back to my engineering training, every connection point is going to be another place where it could fail. And I've got all these APIs, there are all the security issues and things like that. But what I love about what I've heard here, with some of the learnings we've had in open source, is that these are all individual components, and most of them can stand on their own. They don't even have to be with Kubernetes, but all together you can build lots of different offerings. So how do you balance that? How do you look at that from a design and architecture standpoint? So one thing I've been looking at is how we ensure compatibility of workloads across Kubernetes in all different environments and different configurations. How do we ensure that the tools and other systems built in the ecosystem work with Kubernetes everywhere? This is why we created the conformance program, to certify that the critical APIs that everybody depends on behave the same way.
And as we try to improve the test coverage of the conformance suite, we're focusing on the areas of the system that are highly pluggable and extensible, right? So for example, the kubelet on the node has a pluggable container runtime, pluggable networks, and now pluggable storage systems with CSI. So we're really focusing on ensuring we have good coverage of the pod API, for example. In other parts of the system, people have swapped things out in the ecosystem, whether it's kube-proxy for Kubernetes services or the scheduler. So we'll be working through those areas to make sure they have really good coverage, so users can deploy, say, a Helm chart or their Kustomize configuration, or however they manage their applications, and have that behave the same way on Kubernetes everywhere. I think you guys have done a great job of identifying this enabling concept. I mean, what is good enabling technology that now lets others do innovation around it? I think that's a nice positioning. What are the new problem areas that you guys see to work on next? And obviously things are developing in the ecosystem. You mentioned Istio; service meshes, people see value in that. Security certainly is a big conversation we've been having this week. What new problem areas or problem sets do you see emerging that need to be tackled and knocked down right away? The most obvious, the thing that comes up in sort of every conversation with users now, is multi-cluster, multi-cloud, hybrid, whether that's two clouds, or on-prem plus cloud, or even across different data centers on your premises. It's a hard topic, and for a long time, Kubernetes was able to sort of stick our fingers in our ears and pretend it didn't exist while we built out the Kubernetes model. And now we're at a place where we've crossed the adoption chasm, right? We're into the real adoption now, and it's a real problem. It actually exists and we have to deal with it.
And so we're now looking at how it's supposed to work. Philosophically, what do we think is supposed to happen here? Technologically, how do we make it happen? How do these pieces fit together? What primitives can we bring into Kubernetes to make these higher-level systems possible? And would you consider 2019 to be the year of multi-cloud, in terms of the evolution and trying to tackle some of these things, from latency on? Yeah, I'm always reluctant to say the year of something because... That's part of the media business. Someone has to get killed, someone dies, someone's winning. It's the year of something. It's the year of the Linux desktop. But... VDI, just saying. I think multi-cluster is definitely the hot topic right now. It's certainly almost every customer that we talk to through Google, and there's tons of community chatter about how to make this work. I mean, you've seen companies like NetApp and Cisco, for instance, and how they've been getting a tailwind from Kubernetes. It's been interesting, right? I mean, you need networks, okay? They have a lot of networks. They can play a role. And so it's interesting how it's designed, how a lot of people can put their hands in there without kind of mucking up the main... Yeah, I think that really contributes to the success of Kubernetes. The more people that can help add value to Kubernetes, the more people have a stake in the success of Kubernetes. Both users and vendors and developers and contributors. We're all stakeholders in this endeavor now. And we all share common goals, I think. Well, guys, final question for you. I know we're up against a break on time. Thanks for coming on, I really appreciate the time. Talk about an area of Kubernetes that most people should know about, but might not. In other words, there's a lot of hype around Kubernetes, and it's warranted, it's a lot of buzz.
What is an important area that's not talked about as much, that people should know more about and pay attention to within the Kubernetes realm? Is there an area that you think is not talked about enough, that should be focused on in conversations, in press, or just in general? Wow, that's a challenging question. You know, I spend a lot of my time on the infrastructure side of Kubernetes, the lower end of the stack, so my brain immediately goes to networking and storage and all the lower-level pieces there. I think there are a lot of policy knobs that Kubernetes has that not everybody's aware of, whether those are security policies or network policies. There's just a whole family of these things. And I think we're going to continue to accrete more and more policy as more people come up with real use cases. And it's hard to keep all of that in your mind, but there's really valuable stuff down there. For programmability, it's like a holy grail, really. Brian, thoughts on that? I put you on the spot there. Yeah, no. I think about this question of how people should change what they were doing before if they're going to migrate to Kubernetes. You know, to operate any workload, you need at least monitoring, and you really need CI/CD if you want to operate with any amount of velocity, right? So when you bring those practices to Kubernetes, should you just lift and shift them into Kubernetes, or do you really need to change your mindset? And I think Kubernetes really provides some capabilities that create opportunities for changing the way some things happen. I'm a big fan of GitOps, for example: managing the resources declaratively, using version control as the source of truth, and keeping that in sync with the state in the live clusters. I think that enables a lot of interesting capabilities, like instant disaster recovery, for example, or migration to new locations.
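The policy knobs Tim mentions include the NetworkPolicy API. As a hedged sketch (the namespace, labels, and port are illustrative), a manifest like this restricts ingress so that only frontend pods can reach backend pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:          # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:  # only pods with this label may connect
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that enforcement depends on the cluster's network plugin supporting NetworkPolicy, which is exactly the kind of pluggability Brian's conformance work has to account for.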
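The GitOps pattern Brian describes, desired state in version control reconciled against live cluster state, can be sketched as a toy diff loop. This is an illustration of the idea only; the function and resource names are made up and don't correspond to any real tool's API:

```python
def diff_states(desired, live):
    """Return the actions needed to converge live state onto desired state.

    desired: resources as read from the Git repository (source of truth).
    live: resources as reported by the cluster.
    """
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))      # in Git, not in cluster
        elif live[name] != spec:
            actions.append(("update", name))      # drifted from Git
    for name in live:
        if name not in desired:
            actions.append(("delete", name))      # in cluster, not in Git
    return sorted(actions)

# Desired state, as it would be read from the Git repo:
desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
# Live state, as reported by the cluster:
live = {"web": {"replicas": 1}, "worker": {"replicas": 1}}

print(diff_states(desired, live))
```

Because the repo is the source of truth, pointing this loop at an empty cluster rebuilds everything, which is the "instant disaster recovery" property Brian mentions.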
So there are a few folks here who are talking about that and giving that message, but we're really at the early stages there. All right, well, great to have you guys on. Thanks for the insight, we've got to wrap up. Thanks, Brian, thanks, Tim, appreciate it. Live coverage here with theCUBE at KubeCon and CloudNativeCon 2018. I'm John Furrier with Stu Miniman. We'll be back after this short break.