Live from Barcelona, Spain, it's theCUBE, covering KubeCon + CloudNativeCon Europe 2019, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners.

Welcome back to theCUBE here in Barcelona, Spain. This is KubeCon + CloudNativeCon 2019. I'm Stu Miniman, and my co-host for two days of wall-to-wall coverage is Corey Quinn. Joining us on the program, we have two gentlemen from Red Hat. To my right is Josh Berkus, who is the Kubernetes community manager, and sitting to his right is Dejan Bosanac, a senior software engineer. As I said, both are with Red Hat. Gentlemen, thanks so much for joining us.

Well, thank you.

All right, Josh, you're a community manager in the Kubernetes space. So what brings you here to KubeCon? And maybe explain the shirt to us, so we can be educated on how to properly refer to this city the way its residents do.

So, obviously I'm here because the community is here, right? A very large community. We had a contributor summit on Monday with a couple hundred people, about 300. One thing, when we talk about community in Kubernetes: there's the general ecosystem community, and then there's the contributor community, and the latter is more what I'm concerned with, because even the contributor community by itself is quite large. As for the t-shirt, speaking of community, we like to do special t-shirts for the contributor summits, and I designed this one. Despite my current career, my academic background is actually in art. This is obviously a Miró pastiche. One of the things I actually learned by doing this was, I did a different version first that said "Barça" on it, and one of the folks from here said, well, that's the football team; when they abbreviate the city, it's actually "Barna." That was news to me. I am today years old when I found that out.

So thank you very much for that.

Yes, and it was an additional four hours of drawing for me.
All right, so a while back, I had a tweet go out that I knew was going to be taken in two different ways, and you were one of the first people to come back on it in the second wave. Everyone first thought I was being a snarky jerk, which, let's be honest, fair. But what I said was that in five years, no one is going to care about Kubernetes. And your response was: yeah, that's a victory condition. If you don't have to think or care about it, that means it won, in a similar way that a lot of things have slipped beneath the level of awareness. I'm curious what both of you think about that idea. I'm not saying Kubernetes loses in the marketplace; I don't think that's likely at all. But at what point do people not have to think about it anymore, and what does that future look like?

Yeah, one of our colleagues noticed yesterday that this conference in particular is not about Kubernetes anymore. You hear more about the whole ecosystem, a lot of projects around it. It has certainly grown beyond Kubernetes itself. You see all the talks about service meshes, the things we're trying to do for edge computing, and so on. So it's not just Kubernetes anymore; it's the whole ecosystem of products and projects around it, which I think is a big success.

Yeah, and taking a somewhat longer view: I can remember compiling my own Linux kernels. I can remember doing it on a weekly basis, because you honestly had to, right? If you wanted certain devices to work, you had to compile your own kernel. Now, on the various servers I use for testing, demos, and development, I can't even tell you what kernel version I'm running, because I don't care, right? And for core Kubernetes, like I said, if we get to that point of not needing to care about it, of only needing to care about it when we're developing something, then that looks like victory to me.
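That "not needing to care" outcome depends in practice on API stability and predictable deprecation windows. As a rough sketch (hedged: the function name is invented, the release numbers are illustrative, and this simplifies the real upstream policy), the Kubernetes rule that a deprecated GA API version must keep being served for twelve months or three minor releases, whichever is longer, can be modeled like this:

```python
# Simplified model of the upstream Kubernetes deprecation rule for GA APIs:
# a deprecated GA API version must remain served for 12 months or 3 minor
# releases after deprecation, whichever is longer. The cadence and release
# numbers below are illustrative, not an authoritative schedule.

def earliest_removal(deprecated_in: int, months_per_release: int = 3) -> int:
    """Earliest minor release in which a GA API deprecated in release
    `deprecated_in` may stop being served, under the simplified rule."""
    by_releases = deprecated_in + 3                           # three releases later
    by_months = deprecated_in + -(-12 // months_per_release)  # ceil(12 / cadence)
    return max(by_releases, by_months)

# At a quarterly cadence, twelve months is four releases, so that bound wins:
print(earliest_removal(16))  # e.g. deprecated in 1.16 -> removable in 1.20
```

With the roughly quarterly cadence of 2019, the twelve-month bound dominates; on a slower cadence the three-release bound catches up, which is exactly why a provider like EKS has to track upstream's schedule rather than invent its own.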
Yeah, Josh, does the core contributor team have milestones that say, hey, by the time we get to 2.0 or 3.0, Kubernetes is invisible?

Well, it's spoken of more in terms of GA and API stability, because really, if you back up and ask, what is Kubernetes? What Kubernetes definitely is, is a bag of APIs. A very large bag of APIs, we have a lot of APIs, but a bag of APIs. And the less those APIs change in the future, the closer we're getting to maturity and stability, right? Because we want people building new stuff around the APIs, not modifying the APIs themselves.

Yeah, well, to that end, last night, Barcelona time, a blog post came out from AWS where they set out a formalized deprecation strategy for their EKS product to keep up with the releases of Kubernetes. Now, AWS generally does not turn things off, ever, which means that 500 years from now, two trunkless legs of stone in a desert will be balanced by an ELB Classic, and we're never going to be rid of anything they've ever built. But if nothing else, you've pushed them to formalize a deprecation strategy that follows upstream, which is awesome. It's great to start seeing a world where you don't have to support older versions of things as your user base and your community inform you. It's nice to see providers breaking from their model to respond to what the community has done. And I can't imagine that's anything other than an unqualified success for you.

All right, so Dejan, when we talk about the dispersion of technology, few issues get people as excited these days as edge computing. So tell us a little bit about what you and the community are doing in the IoT and edge space.

Yeah, so we noticed that more and more people want to run their workloads outside of one centralized data center. The big term for the last year was hybrid cloud, but it's not just hybrid cloud.
People coming from the IoT space also want to containerize their workloads and put the processing closer and closer to the devices that are actually producing and consuming that data, and to the users. There are a lot of use cases that should be tackled that way. And as you said previously, Kubernetes won developers' hearts and minds. The APIs are stable, everybody's using them, it will be supported for decades. So it's natural to try to bring all these tools and platforms that are already available to developers to tackle these new challenges. That's why last year we formed the Kubernetes IoT Edge Working Group, trying to start with the simple questions, because when people come to you and say "edge," everybody thinks something different. For some, it's an IoT gateway. For others, it's a full-blown Kubernetes cluster at some telco provider. So we're trying to figure out all these things and form a community, because as we saw previously in the IoT space, complex problems like these are never solved by a single company. You need open source, you need open standards, you need a community around it, so that people can pick and choose and build a solution to fit their needs.

Yeah, so as you said, there is that spectrum of offerings, everything from the telco down to, okay, is this going to be something sitting on a tower somewhere, or the vast proliferation of IoT devices, which we could spend lots of time on. So are you looking at all of these? Or are you saying, okay, we already have a telco working group over here, and we're going to work on the IoT piece? Where are we? What are the answers and starting points for people today?

So we have a single working group for now, and we try to bring in people that are interested in this topic in general. As one of the guys said, edge is everything that's not running in the central cloud, right?
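One concrete reason for pushing processing toward the devices is bandwidth and latency over constrained links: an edge node can reduce a window of raw sensor samples to a summary and ship only that upstream. A toy sketch, where all names and numbers are invented for illustration:

```python
# Toy illustration of edge-side processing: aggregate a window of raw sensor
# readings locally and send only a compact summary to the central cloud,
# instead of every sample. Names and numbers are invented for illustration.

from statistics import mean

def summarize(window: list[float]) -> dict:
    """Reduce a window of raw samples to the summary an edge node uploads."""
    return {
        "n": len(window),
        "min": min(window),
        "max": max(window),
        "mean": mean(window),
    }

samples = [20.1, 20.3, 19.8, 20.0, 35.2]  # five local readings, one upload
print(summarize(samples))
```

The same shape of workload, a small filter or aggregator running next to the device, is what a constrained-device Kubernetes distribution is meant to schedule and manage.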
So we have a couple of interesting things happening at the moment. The Huawei folks have the KubeEdge project, they're present at this conference, and we have a couple of sessions on that. It's basically trying to tackle the device-edge kind of space: how to put a Kubernetes workload on a constrained device, over a constrained network. And we have people coming from Rancher, who provide their own, again, resource-constrained Kubernetes distribution. We see a lot of development here, but I think it's still early days, and that's why we have a working group: something around which we can build a community and work over time to shape things and find the appropriate reference architectures and blueprints that people can follow in the future.

Yeah, I think there's been an awful lot of focus at this show on Kubernetes, but it is KubeCon plus CloudNativeCon. I'm curious what you're seeing in these conversations. Something you alluded to as well is that there's now a bunch of other services factored in. From conversations, it almost feels like this show has become "Kubernetes and friends," but the level of attention being paid to those friends is increasing dramatically, and I'm curious how you're seeing this evolve in the community particularly, but also with customers, as this entire ecosystem continues to evolve.

Yeah, well, part of it is out of necessity, right? When Kubernetes moved from dev and experimental into production, well, you don't run Kubernetes by itself, right? Some of it you can run with existing tooling (cloud providers, that sort of thing), but in other areas you discover that you want new tools.
For example, one of the first areas where we saw expansion was monitoring and telemetry, because it turns out that the monitoring and telemetry you built for 100 servers does not work for 20,000 pods. There's just a volume problem there. So then we had new projects like Heapster and Prometheus, and new products from other companies, like Sysdig and that sort of thing, looking at that space, right? Because you can't be in production without monitoring and telemetry. One of the areas I'm personally involved in is storage, and there we've had the Rook project go, in pretty much a year and a half, from being open sourced to now being a serious alternative solution if you don't want to be dependent on cloud-provider storage.

Please tell me you're giving that an award called Rookie of the Year. I do not apologize for that one. One thing that does resonate with me, though, is the strategic approach you've taken: instead of building all of this functionality into Kubernetes and saying, you will do it this way or you're going to be off in the wilderness somewhere, it's decoupled. I love that pattern. Was that the design from day one, or was it a contentious decision?

No, it wasn't. Kubernetes started out as kind of a monolith, right? Because it was like the open source version of Borg Lite, which was built as a monolith within Google, because there weren't options. It had to work with Google's stuff, right? If you're looking at Borg, they're not worried about supporting all this other stuff. But from day one of Kubernetes being a project, it was a multi-company project. And if you look at OpenShift, and OpenShift's users, and OpenShift's stack, it's different from what Google uses for GKE. Honestly, the easiest way to support multiple stack layers is to decouple everything, right?
And that's not how we started out, right? Cloud providers, that was one of our problems: cloud providers were in-tree, storage was in-tree. Networking was the only thing that was separate from day one. All this stuff was in-tree, and it didn't take very long for that to become unmaintainable.

Right. I've been following you and running into you on the conference circuit for years, and one of the talks I gave for a year and a half was "Heresy in the Church of Docker": we don't know what your problem is, but Docker, Docker, Docker, Docker, Docker. I gave a list of 12 or 13 different reasons and things that were not being handled by Docker. I've now sunset that talk, largely because, one, no one talks about Docker anymore and it feels a bit like punching down, but more importantly, Kubernetes has largely solved almost all of those. There are a few exceptions here and there, because it turns out, sorry, nothing is perfect and we've not yet found containerization utopia, surprise, but it's really come a very long way in a very short period of time.

Well, a lot of that is the decoupling, because you can take it two ways, right? One is that, potentially, as an ecosystem, Kubernetes solves almost anything; some things, like IoT, are in a lot more of an alpha state than others. And then if you look at just core Kubernetes, what you would get off the kubernetes/kubernetes repo if you compiled it yourself, Kubernetes solves almost nothing. By itself, you can't do much with it other than test your patches.

Right, in isolation, the big problem it solves is really limited to "I want a buzzword on my resume." There needs to be more to it than that.

And I think that's true in general, because look at why Linux became the default server OS, right? It became the default server OS because it was adaptable, right?
Because you could compile in your own stuff, and because we had POSIX and kernel-module APIs that made it easy for people to build their own stuff without needing to commit it to the Linux kernel.

All right, so I'd like to get both of your thoughts on the storage piece, because storage is a complex, highly fragmented ecosystem. Red Hat has many options out there, and I thought the keynote this morning did a really good job of laying out the options, but it's a complex, fragmented stack with a lot of different options, and with edge computing, the storage industry as a whole, even without Kubernetes, is trying to figure out how that works. So maybe we start with you.

Yeah, I don't have any particular answers for you today in that area, but I want to emphasize what Josh said earlier: these APIs, and this modularization that's been done in Kubernetes, are among the big important things for edge as well. Because people come and ask, should we revamp things, or should we just reuse what is basically a very good, very well-designed system? That's the starting point: why we want to start using Kubernetes for edge computing. But for the storage questions I'd hand over to Josh.

So your problem with storage doesn't have anything to do with Kubernetes in particular; it's that, like you said, the storage stack and ecosystem is a mess. Everything is vendor-specific; things don't even work semantically the same, let alone the same by API. So all we can do in the world of Kubernetes is make enabling storage for Kubernetes not any harder than it would have been in some other system.
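The decoupling the two of them describe, moving integrations out of tree behind small stable interfaces (CSI for storage, CNI for networking, out-of-tree cloud providers), boils down to a registry pattern: core code looks a driver up by name instead of compiling it in. A minimal sketch with invented names; the real interfaces are gRPC-based and far richer:

```python
# Hypothetical sketch of the out-of-tree plugin pattern: instead of compiling
# every storage/cloud integration into the core, the core exposes a small
# interface and looks providers up by name at runtime. All names and methods
# here are invented for illustration; the real mechanism for storage is CSI.

from typing import Callable, Dict

PROVISIONERS: Dict[str, Callable[[str, int], dict]] = {}

def register(name: str):
    """Out-of-tree drivers self-register instead of patching core code."""
    def wrap(fn):
        PROVISIONERS[name] = fn
        return fn
    return wrap

@register("example.com/fake-block")
def provision_fake(volume_name: str, size_gib: int) -> dict:
    # A real driver would talk to a storage backend here.
    return {"name": volume_name, "size_gib": size_gib,
            "driver": "example.com/fake-block"}

def provision(driver: str, volume_name: str, size_gib: int) -> dict:
    """Core-side lookup: the core never needs to know driver internals."""
    if driver not in PROVISIONERS:
        raise KeyError(f"no driver registered for {driver!r}")
    return PROVISIONERS[driver](volume_name, size_gib)

print(provision("example.com/fake-block", "pv-001", 10))
```

The design payoff is the one Josh names: vendors can ship and maintain drivers on their own schedule, and the core stays small enough to stabilize.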
Right, and look, the storage industry would say, no, no, it's not a mess; it's just that there's a plethora of applications out there, there's not one solution to fit them all, and that's why we have block, we have file, we have object, all these various ways of doing things. So you're saying storage is hard, but storage with Kubernetes is no harder today? We're getting to that point?

I would say it's a little harder today, and we're working on making it not any harder.

Excellent. Well, Josh and Dejan, thank you so much for the updates. Always appreciative of the community contributions; we look forward to hearing more from the contributors, and as the edge and IoT groups mature, look forward to hearing updates in the future. Thank you.

Thank you, thank you.

For Corey Quinn, I'm Stu Miniman, back with lots more coverage here from KubeCon + CloudNativeCon 2019 in Barcelona, Spain. Thanks for watching theCUBE.