from Seattle, Washington. It's theCUBE, covering KubeCon and CloudNativeCon North America 2018. Brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome back, everyone, to the live CUBE coverage here: three days at Seattle's KubeCon and CloudNativeCon, the conference put on by the Linux Foundation. theCUBE's been here from the beginning, breaking down all the action. 8,000 people, double the attendance of the last one. Now on a global scale, seeing great traction in China and other areas around the world. The cloud is global. I'm John Furrier with Stu Miniman. Our next guest is Kelsey Hightower with Google, former conference co-chair, now out in the wild on his own, super dope, playing with all kinds of new technologies. Great to see you, thanks for coming on.

Proper use of the word dope, by the way. So congratulations there.

Yeah, I'm an attendee. I still have a keynote on Thursday, but I do get to enjoy the floor like everyone else.

So what's new? There's a lot of pressure now; every year there's more and more people here, so there's a lot of pressure to get all the action packed in, but the growth has been pretty phenomenal. You've been looking at serverless. We saw some tweets; you mentioned the super dope serverless stuff. You've got serverless, you've got a lot of stuff going on within the CNCF, you've got Kubernetes at the core. A lot of people are calling it the Kubernetes stack or the CNCF stack. Is it really a stack, or is it more of an operating model? Because with cloud there are stacks involved, but how do you describe it? This is a point of clarification: Kubernetes isn't necessarily a stack, is it? How do people use it? What's the current state?

Yeah, I think when people say stack, you think about the LAMP stack, right? Linux, Apache, MySQL. It's a way of packaging these ideas: this is something that worked for me, it may work for you.
You say that enough times, and then you say things like the Kubernetes stack as a quick shorthand for Kubernetes and building on top of it. I think from the engineering perspective, when you look at Kubernetes and all the gaps that the CNCF is trying to fill these days, it's all the stuff you're probably building yourself. Someone else is building it, and now we have an outlet. If you're working on a service mesh, for instance, you have an outlet to give it to the rest of the world, get open governance, and get some contributors. So I think what we're seeing now is that the CNCF is kind of the place people go to figure out: is someone building the thing that I've already started building, and can I stop and just download that and go?

It's been a very successful open source community. Obviously it's been end user led, which has been great, and it's been community led, not so much vendor led, though vendors have been participating, which has been great. But now Kubernetes is going mainstream; the rise of Kubernetes is undeniable, no one can really deny that. And other end users are now coming in, either to participate or to consume Kubernetes. How is that going in your mind? What's going on in the landscape? Because people want multi-cloud, they want hybrid, they want choice. How are end users coming into the ecosystem to consume Kubernetes and the variety of goodness around it, and what's going on there? Can you give some color around that adoption?

I think regardless of the industry buzzwords like multi-cloud and hybrid and all that, Kubernetes is good on its own. It solves a lot of problems that your previous tools didn't solve, so people are gravitating toward it regardless. When you start to talk about portability, yes, it's nice to have two different environments and have the same tools work in a similar way between those environments. That's working well.
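That portability shows up concretely in the manifests themselves. As a rough sketch (the names, image, and ports here are hypothetical), the same Deployment and Service can be applied unchanged to any conformant cluster, whether that's GKE, EKS, AKS, or on-prem; only the load balancer each cloud provisions behind the scenes differs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                       # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0  # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer              # each provider supplies its own implementation
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```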
The people that started three years ago, that were doing it themselves, are finding the value in treating it as a service. We saw this happen with DNS and email. People are saying maybe the value isn't in running it myself. So now you see the vendor ecosystem understand where the value is. For a lot of the cloud providers, it's running Kubernetes, patching it, updating it, upgrading it, so then you can go focus on the other parts on top. That's where I think we are as an industry, and then there are gaps to fill. That's where you see things like Knative, people building CI/CD tools on top. That's just where the new opportunities are. I think we've matured. People know what Kubernetes is; they know where the value line is for Kubernetes. Now they're looking to their partners or vendors or community to layer the new stuff on top.

Yeah, Kelsey, you bring up a great point there, because understanding that line between what I should do myself and what I can buy or consume as a service is really tough for people. I always ask IT departments: what do you really suck at? Because there's somebody else that probably does it better. A year ago when I talked to users at the show, they were really downloading stuff and putting the pieces together themselves. And when you asked them why, it was: well, the Azure stuff hadn't matured, it had just released; Amazon, I'm not sure where they're going with it. It feels like a lot has changed in the last year. You did Amazon the hard way a little over a year ago. What has changed over the last year? Are we ready for that?

It's like in Linux, where everyone used to build their own Linux distro; you took pride in it. You used Gentoo and Slackware. And then you're like, I'm tired of that, so you go get Red Hat or Ubuntu and call it good, right? And then you go focus on the other things. So naturally, Kubernetes, as an early project, had lots of gaps. You could fill those gaps by gluing together open source yourself.
But now most of the managed services fill in the gaps by default. You click a button in GKE and a thing comes up. It's secure, it has most of the pieces you need, it's integrated. You're like, all right, I'm done with that part.

The other thing, we talked a year ago: there are lots of companies here that are involved in Kubernetes. We've got over 70 that are conformant, and then you've got the service providers. From what I hear, people aren't trying to differentiate with Kubernetes, and that's probably a good thing. It's something that's going to be baked into the platform, something you're going to consume with the other services a provider offers. What do you say?

If you make it too different, then it won't work; it'll be a different thing, and you lose most of the benefits that we're all talking about here: the ability to learn a set of abstractions once. Kind of like we do in Linux: if you start changing the system calls on Linux, then it's not Linux anymore. It's a different thing.

So just to clarify, though: if I'm running in one cloud that has their Kubernetes and I want to go to another, is it similar enough? Can I make that move? Do I need a vendor-independent version?

I think up to this value line of run this container, ship the logs somewhere, give me a way to secure access, give me a load balancer, that's pretty standard. What isn't standard is, how do I do CI/CD on top of that? There are different opinions on how to do that. If I'm in Google Cloud, we have IAM one way. Azure does IAM a different way, and same thing for Amazon. So there are things around networking and security that are going to be different based on the environment you're in. Same for on-prem. And that's where you start to look for help. If I go to Google, I'm going to use GKE, maybe, instead of running it myself on just a bunch of VMs. So that's where you see that little divide.

Is that going to be custom work?
This is a great point. Security, for instance, just to pull that out there: is that going to be automated and seamless, or is that going to be a work area that's always going to have to be differentiated or coded or managed?

So for example, we had the big vulnerability recently in Kubernetes, a big CVE. It affected everyone running Kubernetes. That's a thing we handle as a vendor. For GKE people, we upgraded automatically for them and said, hey, there's a CVE, it's going to be really scary when you read about it; but hey, you're patched, we've taken care of you. So I think people will still look for that relationship. Will it always be custom? At the app level, that's a different story: when you run your container and you want to access the things in your environment. If you're in Google Cloud, you may want to talk to Spanner; you're going to need an IAM set of credentials. That's a little out of scope for Kubernetes, so that's going to be integration work that the provider will do.

So the holy trinity of the computing industry has always been storage, networking, and compute, and it certainly changes with cloud and all the goodness that comes out of it, from serverless on down. So containers are interesting. We always love containers, but I've heard conversations recently where it's like, hey, I'm going to treat containers as not quite a first-class citizen because they don't meet my security boundary; I'm going to put a VM around that and run it under the covers, say with Lambda. Is that feasible? Is that an option? I've heard talk about it. Is anyone doing that? Is that an alternative? Is this going to introduce new elements?

It's not a threat, right? In Kubernetes, by default, we chose to build on top of Docker: industry momentum, great developer workflow. But you're right, it made a security trade-off. We know VMs are a much tighter security boundary that people are comfortable with.
In that world, at that time, VMs were too slow for what we needed to happen. Thanks to Intel and others who pulled the thread of, let's make VMs faster. Recently you heard the announcement of Firecracker, right? It's a derivative of crosvm, the Chrome OS VM, and it's optimized for these kinds of workloads: containers and serverless. So now we go from 10 or 20 seconds to 100 milliseconds. Now it makes sense to have this become an underlying thing. Now that we have the speed, maybe people say, hey, we can take the security without sacrificing the performance.

That's the trade-off, yeah. So, you pulled on the thread, you mentioned Firecracker. There's still this tension between what's happening in Kubernetes and serverless. We saw Knative as a hot topic. It's probably natural that there's some tension there, because it's like, oh wait, why do you need to learn any of this stuff? Serverless will just offer it as a service and make it easy, and you don't need to learn all that container stuff. What do you see?

If you're a Kubernetes user and you really think about the very broad definition of serverless: I'm not managing the database, I'm using a managed, serverless database. Storage? I'm using S3 or Google Cloud Storage, serverless. Your load balancer? Also serverless. So for most people in the Kubernetes ecosystem, networking is serverless, storage is serverless, their database is serverless. The only thing you can say isn't serverless is the compute component; everything else is. So now people are looking at serverless as a spectrum: how serverless are you? If you're on-prem and you buy a server and you rack it and install Kubernetes, you're less serverless; you're probably not serverless at all, no matter what you do. Now, if you put a lot of work in, you can probably put a serverless interface on top, and this is what Knative is designed to do for people.
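That serverless interface on top of Kubernetes can be as small as a single manifest. As a rough sketch (the service name and image are hypothetical, and the API version varies by Knative release), a Knative Service asks the platform to handle the deployment, routing, and scale-to-zero for you:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                             # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example/hello:1.0 # hypothetical image; "put your code here"
```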
So maybe you have an organization that supports multiple businesses inside of your org. They may not know anything about Kubernetes. You just tell them, hey, put your code here, it will run. Oh, that feels serverless. You can provide a serverless experience. So the delta then becomes: what can we do between a container and a function? The foundation of my keynote is exactly that. What does it mean to take a container and put it into Lambda? What do you have to change? In my presentation, I don't even rewrite the code. There's a small shim between the two worlds, because you're already using managed services around it. We're not talking about throwing away Kubernetes and starting our entire architecture over; we're swapping out the compute layer. One is a subset of the other. Lambda is about events and functions. Kubernetes is about containers: run them however you want. You want to run it when an event comes in? That's Knative. You want to run it as a batch job? Run it as a Job. You want to run it as a long-running service? Run it as a Deployment. That's all we're really talking about here. When we break it down, we're just talking about compute.

So you talk a lot about automation in the CI/CD area, where that differentiation in value is. As automation goes faster, what does Kubernetes look like when it becomes automated away? Because I don't want to manage anything; why even have managed Kubernetes? It should just automate automatically. You mentioned the patching. So in an automated world, is Kubernetes just running under the covers? How does Kubernetes look down the road, in your mind, when automation comes in?

I've been in this game maybe over 15 years, and one thing holds true: most developers want to focus on the business logic. We hire them because that's their skill set. When they check in code, it would be really nice if we could take it from there and get it where it needs to be. That's been the holy grail.
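That container-to-Lambda shim can be tiny when the business logic is kept separate from its entry points. Here's a minimal sketch in Python (the function names and event shape are illustrative, not Kelsey's actual keynote code): the same logic is wrapped once for a Lambda-style event and once for a container-style HTTP server, with no rewrite in between.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


def greet(name: str) -> str:
    """Business logic: knows nothing about its deploy target."""
    return f"Hello, {name}!"


def lambda_handler(event, context):
    """Lambda-style shim: adapts an event dict to the same logic."""
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": greet(name)}


class Handler(BaseHTTPRequestHandler):
    """Container-style shim: adapts an HTTP request to the same logic."""

    def do_GET(self):
        name = parse_qs(urlparse(self.path).query).get("name", ["world"])[0]
        body = greet(name).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# In a container you'd run the long-lived server instead:
#   HTTPServer(("", 8080), Handler).serve_forever()
```

Swapping the compute layer then means picking which thin wrapper ships, not touching `greet` itself.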
We see it in mobile. You build an app, you put it on the App Store, Apple gets it to every device on the planet. Done, right? Now it's the server side's turn to do this. Whether you're doing serverless functions, Kubernetes, VMware, or Linux, if you have CI/CD in front of any of that, the developer can still have the same experience: I check in code, and you're picking a different deploy target. If you did that five years ago, and you understood it, and you were using, let's say, Mesos or just VMs, you bring in Kubernetes and you don't even have to change this part of the equation. This is why I tell most people to just focus on this end game. My keynote last year was about this being the end game, because this is your culture, this is your change management process, this is your discipline, and this is just a target where that compute goes.

All right, we've got two minutes left. I want to get your thoughts to share with the audience who's not here; there's a big waiting list. I know there's some LobbyCon going on all around Seattle; people flew in. It's a great place, too; you can actually have some good LobbyCon meetings around the lobby area. So what's happening here, in your mind's eye? Now you're not in the throes of running the event; you're out in the wild here with us and everyone else. What's the top story? What's going on? What's the vibe? What are you extracting out of all this activity as the top-level story here?

I think everyone's fighting for their place. If you're a security vendor, you kind of know where your line is, right? I've got this Twistlock shirt on. They want to play in a world where they integrate closer to the developer workflow, not just on the infrastructure side. If you're selling load balancers, service meshes are a thing; where do you fit in? The lines are getting a lot clearer. Kubernetes is starting to say, maybe we should stop here, right? Maybe service meshes should take it from here, and that's where Istio comes in.
Traditional vendors can now play in this well-defined space. On the storage side, where do you integrate? Now we have CSI, the Container Storage Interface. If you're in that space, you know where you fit into the puzzle. You don't need to have your own Kubernetes distro. Two years ago, everyone was trying to come out with their own Kubernetes distro so they could have an anchor. Now you're like, ah, now I know where to play. And now we also know what's missing. After years of doing this, people look back and say, ah, there's a lot of stuff missing, and it's okay now to go create something new.

So, clear visibility into the landscape. What about the impact to end users? What's notable in your mind, in terms of highlights, for end user organizations really going through this, quote, digital transformation? It's very cloud-based, of course, but there's certainly change and impact. What are your thoughts on the end user?

We're using some of the same words now. Forget the technology piece; now we can all start to talk about the same thing. When we say container, we're now talking about the same thing. When we start to talk about sidecars, whether that's a service mesh Envoy sidecar or something that adapts your existing code to the new world, now that we're using the same language, we can actually talk. The traditional enterprise can talk to the startups and have a meaningful conversation.

That's awesome. Any other observations, in terms of the size of the show? There's a lot more activity. It feels a little bit like re:Invent, swimming through the crowds. Swag's hot.

It's 8,000 people here, and it feels like there are more users that have known nothing about Kubernetes. So even though we're about five years in, it reminds me we're just getting started. A lot more work to do.

But congratulations on all the work you've done, Kelsey.
Really appreciate you taking the time every year to come on theCUBE. We love having you on. Great commentary, great keynotes; very entertaining. Thanks for coming on. Appreciate it.

Awesome, thank you.

I'm John Furrier, with Stu Miniman, here with Kelsey Hightower, breaking down all of KubeCon and CloudNativeCon. The cloud tsunami is just beginning, certainly changing businesses, changing open source, changing IT on a global scale. We're here with coverage for three days. We'll be right back with more after this short break.