Live from Seattle, Washington, it's theCUBE, covering KubeCon and CloudNativeCon North America 2018, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners. Okay, welcome back, everyone, to theCUBE's live coverage at KubeCon and CloudNativeCon here in Seattle for the 2018 event, 8,000 people, up from 4,000 last year. I'm John Furrier with Stu Miniman, my co-host. Our next guest is Daniel Berg, Distinguished Engineer at IBM Cloud Kubernetes Service. Daniel, great to have you on. Thanks for joining us, good to see you. Thanks. I'll say, you guys know a lot about Kubernetes, you've been using it for a while, Bluemix, you guys did a lot of cloud, a lot of open source. What's going on with the service? Take a minute to explain your role, what you guys are doing, how it all fits into the big picture here. Yeah, yeah, yeah. So, I'm the Distinguished Engineer over top of the architecture and everything around the Kubernetes service. I'm backed by a crazy, wicked, awesome team, right? They are amazing. They're the real wizards behind the curtain, right? I'm just the curtain, that's basically all it is. But we've done a phenomenal amount of work on IKS. We've delivered it, we've delivered some amazing HA capabilities, highly reliable, but what's really great about the service we provide to all of our customers is that we're actually running all of IBM Cloud on it. So all of our services, the Watson services, the cloud database services, our Key Protect service, Identity Management, Billing, all of it, it's all moving to containers and Kubernetes, and it's running on our managed service. So just to make sure I get it all out there, I know we talk to a lot of other folks at IBM, I want to make sure we lay it all out. You guys are heavily contributing to the upstream, as well as running your workloads and other customers' workloads on Kubernetes within the IBM Cloud. Unmodified, right?
I mean, we're taking upstream and we're packaging it, and the key thing we're doing is providing it as a managed service with our extensions into it. But yeah, we're running it, and we've hit problems over the last 18, 20 months, right? There's lots of problems. Dan, take us into it. People always wonder what happens when this reaches real scale. What experiences can you share with us? Well, when you really start hitting real scale, real scale being, I don't know, 500, 1,000, a couple thousand nodes, right? Then you're hitting real scale there. And we're dealing with tens of thousands of clusters, right? You start hitting different pressure points inside of Kubernetes, things that most customers are not going to hit, and they're gnarly problems, right? They're really complicated problems. One of the most recent ones we hit is scaling problems with CRDs. Now, we've been heavily promoting CRDs, customizing Kubernetes, which is a good thing. Well, it starts to hit another pressure point that you then have to work through. Scaling of Kubernetes, scaling of the master, dealing with scheduling problems. Once you start getting into these larger numbers, that's when you start hitting these pressure points. And yes, we are making changes, and then we're contributing those back up to the upstream. One of the things we've been hearing in the interviews here, and obviously with the coverage, is the maturation of Kubernetes, great, check. You guys are pushing these pressure points, which is great because you're actually using it. What are the key visibility points where you're seeing value being created, and what are some of the key learnings you guys have had? I mean, you're starting to see some visibility around where people can create value in the stack, or not stack, but in the open source. And then the learnings you guys have had. Right, right, right.
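The CRDs mentioned above extend the Kubernetes API with custom resource types, and each one adds load on the API server at scale. As a rough sketch of what a CustomResourceDefinition manifest contains, here it is built as a plain Python dict, using the `apiextensions.k8s.io/v1beta1` API that was current in late 2018; the group and kind names are hypothetical examples, not IBM's:

```python
# Sketch of a CustomResourceDefinition manifest as a plain dict.
# Group/kind names below are illustrative, not from the interview.
def make_crd(group: str, kind: str, plural: str) -> dict:
    """Return a v1beta1-era CRD manifest (the API current in late 2018)."""
    return {
        "apiVersion": "apiextensions.k8s.io/v1beta1",
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "version": "v1alpha1",
            "scope": "Namespaced",
            "names": {"kind": kind, "plural": plural,
                      "singular": kind.lower()},
        },
    }

crd = make_crd("example.ibm.com", "ClusterConfig", "clusterconfigs")
# The CRD's metadata.name must be "<plural>.<group>" or the API server
# rejects it -- one of the invariants that starts to matter when you
# are running tens of thousands of clusters.
assert crd["metadata"]["name"] == "clusterconfigs.example.ibm.com"
```

Every CRD registered this way gets its own REST endpoints and storage in etcd, which is why heavy CRD usage becomes one of the pressure points described above.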
I mean, for us, the key value here is, first of all, providing a certified Kubernetes platform, right? I mean, Kubernetes has matured. It has gotten better. It's very mature. You can run production workloads on it, no doubt, and we've got many, many examples of it. So providing a certified managed solution around that, where customers can focus on their application, not so much the platform, is highly valuable, right? Because it's certified, they can code to Kubernetes. We always push our teams, both internal and external: focus on Kubernetes, focus on building a Kube native experience, because that's going to give you the best portability, whether you're using IBM Cloud or another cloud provider, right? It's a fully certified platform for that. Yeah, Dan, you know, it's one thing if you're building on that platform, but what experience do you have taking big applications and moving them on there? A year or two ago it seemed like it was sexy to talk about lift and shift, and most people now understand, really, you just can't take what you had and expect to take advantage of it. That might be part of the journey, but I'm sure you've got a lot of experience there. Yeah, we've seen almost every type of workload now, because a lot of people were asking, well, what kind of workloads can you containerize? What can you move to Kubernetes? Based on what we've seen, pretty much all of them can move. And we do see a lot of the whole lift and shift, just put it on Kubernetes, but those folks really don't get the value. We've seen some really crazy uses of Kubernetes where they're on Kubernetes, but they're not really, what I call, Kube native. They're not adhering to the Kubernetes principles and practices, and therefore they don't get the full value. So they're on Kubernetes and they get some of it, okay, we're doing some health checking, but they don't have the proper probes, right? They don't have the proper scheduling hints.
They don't have the proper quotas. They don't have the proper limits. So they're not properly using Kubernetes, and therefore they don't get the full advantage out of it. What we're seeing a lot, though, is that customers do that lift and shift, but ultimately they have to rewrite a lot of what they're doing. And this is true of cloud and cloud native: at the end of the day, if you truly want to get the value of the cloud and cloud native, you're going to do a rewrite eventually, and that will be full cloud native. You're going to take advantage of the APIs, and you're going to follow the best practices and concepts of the platform. And containers give you some luxury to play with workloads that you maybe don't have time to migrate over. But this brings up a question we hear a lot, and I want to get your thoughts on it, because the world's getting educated very fast on cloud native and re-architecting, re-platforming, whatever word you want to use, re-imagining their infrastructure. How do you see multi-cloud driving the investment or the architectural thinking with customers? What are some of the things you see that are important for 2019? People are saying, you know what? My IT is transforming, we know that. We're going to have a little bit of public cloud, we're going to have on-premises, it's going to be a multi-cloud world. I've got to make investments. What are those investments architecturally? How should they lay that out? What are your thoughts? So my thought there is, ultimately you've got to focus on a standardized platform that you're going to use across those, because multi-cloud, it's here. It's here to stay, right? Whether it's just on-premises and off-premises, or on-premises and multiple cloud vendors, that's where everybody's going. Give it another six, 12 months, and that's going to be the practice.
That's going to be what everybody does. You're not on one cloud provider, you're on multiple. So standardization, community, massive. Do you have a community around that? You can't have vendor lock-in if you're going to have portability across all of these cloud providers. Standardization, governance around the platform, certification, right? Kubernetes has a certification process, so every version gets certified. So you at least know, I'm using a vendor that's certified, and I have some promise that my application is going to run on that. Now, is it as simple as, well, I picked a certified Kubernetes, therefore I should be able to run my application? Not so simple, right? And then operationally, you're running CI/CD over the top of all of it. You've got to have a common observability model across all of that: what you're logging, what you're monitoring. And what's your CI/CD process? You've got to have a common CI/CD process that goes across all of those cloud providers, all of your cloud environments. Dan, take us inside. How are we doing with security? It was one of those choke points going back to when containers first started, through to Kubernetes. Are we doing well on security now, and where do we need to go? Are we doing well? Yes, we are. I think we're doing extremely well on security. Do we have room for improvement? Absolutely, everybody does. I've just spent the last eight months doing compliance work. That's not necessarily security, but it dips into it quite often, right? But yeah, security is a central focus. Anybody doing public cloud, especially providers, we're highly focused on security, and you've got to secure your platforms. I think with Kubernetes, it's first of all about providing proper isolation, and customers need to understand: what levels of isolation am I getting? What levels of sharing am I getting? Are those well documented, and do I understand what my provider is providing me?
But the community is improving. Things we're seeing around Kubernetes, what they're doing with secrets and proper encryption, Notary with the image repositories and everything, all of that plays into providing a more secure platform. So we're getting there. Things are getting better. Well, there was a recent vulnerability that just got patched rather fast. It seemed like it moved really quickly. What did we learn from that? Well, we've learned that Kubernetes itself is not perfect, right? Actually, I'd be a little bit concerned if we didn't find a security hole, because that would mean there's not enough adoption, that we just haven't found the problems yet. Yes, we found a security hole. The thing is, the community addressed it, communicated it, and all the vendors provided a patch very quickly. Many of them, like with IKS, we rolled out the patch to all of our clusters, all of our customers, and they didn't have to do anything, and I believe Google did the same thing. So these are things where the community is improving. We're maturing, and we're handling those security problems. Daniel, talk about the flexibility that Kubernetes provides. Certainly, you mentioned earlier the value that can be extracted if you do it properly. Some people like to roll their own Kubernetes; others want a managed service because it streamlines things a bit. When do I want managed? When do I want to roll my own? Is there kind of a feel? Is it more of a staffing thing? Is it more scale? Is it more the application? Like, financial services might want to roll their own, and maybe you start to see differences by industry. What's your take on this? Well, obviously I'm going to be super biased on this. But my belief is that if you're going to be doing on-premises and you need a lot of flexibility, you need flexibility down to the kernel, you may need to roll your own, right?
Because at that point, you can control and drive a lot of the flexibility in there, understanding that you take on the responsibility of deploying, managing, and updating your platform, which generally means an investment that takes away from the critical investment of your developers in your business. So personally, I would say it's a massive investment. Look at what the vendors put into it. With IKS, I've got a large team. They live and breathe Kubernetes, every single release: test it, validate it, roll out updates. We're experts at updating Kubernetes without any downtime. That's a massive investment. Let the experts do it, and focus on your business. That's where the managed piece shines. That's where the managed piece absolutely shines. Okay, so the question about automation comes up. I want to get your thoughts on the future state of Kubernetes, because as we go down the cloud native DevOps model, we want to automate things away. Kubernetes might be some differentiation, but I don't want to manage clusters, I don't want to manage it, I want it automated. So is it automating faster? Is it going to be automated? What's your take on the automation component, when and where and how? Well, through the managed services, it's cloud native. It's all API-driven, with CLIs. You've got one command and you're scaling up a cluster. You get a cluster with one command. You can go across multiple zones with one command. Your cluster needs to be updated? You call one command and you go home. Sounds automated to me. And that's the only way we can scale it. We're talking about thousands of updates on a daily basis. We're talking about tens of thousands of clusters, fully automated. A lot of people have been talking the past couple of weeks about this notion that all containers might have security boundary issues.
Let's put a VM around it, maybe a lighter-weight VM. Is that just more of a fix? Because why do I want to have a VM? Where is it better than a native container? Is that a real conversation, or is it FUD? I mean, it is a real conversation, because people are starting to understand what the proper isolation levels are for their clusters. My personal belief is that you really only need that level of isolation, those mini VMs around your containers, in certain cases. Running a single container in a single VM seems overkill to me. However, if you're running a multi-tenant cluster with untrusted content, you'd better be taking extra precautions. First and foremost, I would say don't do it, because you're adding risk, right? But if you're going to do it, yes, you might start looking at those types of technologies. But if you're running an isolated cluster, with full isolation levels all the way down to the hardware, in a trusted environment, trusted meaning it's your organization, it's your code, I think it's overkill. Yeah, future Kubernetes, what happens next? I mean, people are hot on this. Istio, you've got service meshes, a lot of other goodness. People are trying to keep up with the pace of change, there's a lot of education. But it's not a stack like in the old days. I hear words like the Kubernetes stack, and the CNCF has a stack, but it's not necessarily a stack per se. Right, it's not. Clarify the language around what we're talking about here. What's a stack? What's not a stack? It's all services. Well, look at it this way. Kubernetes has done a phenomenal job as a project, in the community, of stating exactly what it's trying to achieve. It is a platform, a platform for running cloud native applications. That is what it is. It allows vendors to build on top of it, it allows customers to build on top of it, and it's not trying to grow larger than that. It's just trying to improve that platform overall. And that's what's fantastic about Kubernetes.
Because that allows us, when you see the stack, it's really cloud native: what pieces am I going to add to that awesome platform to make my life even better? Knative, Istio, a service mesh. I'm going to put that on, because I'm evolving, I'm doing more microservices, so I'm going to build that on top of it. Inside of IBM, we did a Cloud Foundry Enterprise Environment, CFEE, Cloud Foundry on Kubernetes. Why not, right? It's a perfect combination. It's just going up a level, and it's providing more usability, a different, prescriptive usage of Kubernetes. Kubernetes is the platform. When I think about the composability of services, it's not a stack, it's LEGO blocks. Yeah, it's pieces. I'm using different pieces here, there, everywhere. All right, well, Daniel, thanks for coming on and sharing great insight. Congratulations on your success running major workloads within IBM, for you guys and the customers. Again, it's just the beginning, Kubernetes is just the beginning, and you're at the center of it. Congratulations. Here inside theCUBE, we're breaking down all the action, three days of live coverage. We're at day one of KubeCon and CloudNativeCon. We'll be right back with more coverage after this short break.
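The "Kube native" practices Berg calls out in the interview, proper health probes, scheduling hints, and resource requests and limits, can be sketched as a pod manifest. Here it is built as a plain Python dict so the shape is checkable without a cluster; all names, ports, and values are illustrative, not taken from IBM's services:

```python
# Sketch of the Kube-native pod settings discussed above: health probes,
# resource requests/limits, and a scheduling hint. All values illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-app", "labels": {"app": "example"}},
    "spec": {
        # Scheduling hint: ask for nodes labeled with fast disks.
        "nodeSelector": {"disktype": "ssd"},
        "containers": [{
            "name": "app",
            "image": "example/app:1.0",
            # Requests let the scheduler place the pod sensibly;
            # limits cap what it can consume and make quotas enforceable.
            "resources": {
                "requests": {"cpu": "250m", "memory": "256Mi"},
                "limits": {"cpu": "500m", "memory": "512Mi"},
            },
            # Liveness: restart the container if it stops responding.
            "livenessProbe": {
                "httpGet": {"path": "/healthz", "port": 8080},
                "initialDelaySeconds": 10,
            },
            # Readiness: withhold traffic until the app is ready.
            "readinessProbe": {
                "httpGet": {"path": "/ready", "port": 8080},
            },
        }],
    },
}

container = pod["spec"]["containers"][0]
# A workload without these fields still runs on Kubernetes, but, as the
# interview notes, it forgoes most of what the platform can do for it.
assert "livenessProbe" in container and "readinessProbe" in container
assert container["resources"]["requests"]["cpu"] == "250m"
```

A lift-and-shift workload typically ships with none of these fields set, which is exactly the "on Kubernetes but not Kube native" pattern described above.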