From around the globe, it's theCUBE with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. Hi, I'm Stu Miniman and this is theCUBE's coverage of Red Hat Summit 2020. Of course, the event this year is digital. We're talking to Red Hat executives, partners and customers where they are around the globe, pulling them in remotely. Happy to welcome back to the program one of our CUBE alumni on a very important topic, of course, at Red Hat Summit: OpenShift. Joining me is Clayton Coleman, who's the OpenShift chief architect at Red Hat. Clayton, thanks so much for joining us. Thank you for having me today. All right, so before we get into the product, it's probably worthwhile that we talk about what's happening in the community, specifically Kubernetes and the whole cloud native space. Normally we would have gotten together. I would have seen you at KubeCon at the end of March, but instead here we are at the end of April, looking out to more CNCF events later this year. But of course, Red Hat Summit is a great open source event with a broad community, so we'd really love your viewpoint on what's happening in that ecosystem. It's been a really interesting year. Obviously, as an open source community, we react to this the way we react to everything that goes on in open source. People come to the community, and sometimes they have more time and sometimes they have less time. Just from a community perspective, there've been a lot of people reaching out to their colleagues outside of their companies, to their friends and coworkers and all of the different participants in the community. And there's been a lot of people getting together for a little bit of extra time, trying to connect virtually where they can't connect physically. And it's been great to at least see where we've come this year. We haven't had KubeCon, and that'll be coming up later this year, but Kubernetes just had the 1.18 release.
And I think Kubernetes is moving into the phase where it's a mature open source project. We've got a lot of the processes down. I'm really happy with the work that the steering committee has done. We handed off the last of the bootstrap steering committee seats to the new, fully elected steering committee last year, and it's gone absolutely smoothly, which has been phenomenal. The core project is trying to be a little bit more stable, to focus on closing out loose ends, and to be a little more conservative about change. And at the same time, the ecosystem has really exploded in a number of directions. As Kubernetes becomes more of a bedrock technology for enterprises and individuals and startups and everything in between, we've seen a huge amount of innovation in the space, and every year it just gets bigger and bigger. There are a lot of exciting projects whose authors have never even talked to somebody on the Kubernetes project, but they have built and solved problems for their environments without us ever having to be involved, which I think is success. Yeah, so Clayton, one of the challenges when you talk to practitioners out there is that just keeping up with the pace of change can really be difficult. Something we saw acutely was Docker rolling out updates every six weeks; most customers aren't going to be able to change fast enough to keep up. I'd love your viewpoint, both on what the CNCF says and on how Red Hat thinks about the product. So you talked about Kubernetes 1.18. My understanding is that even Google isn't yet packaging and offering that version. So there's a lag, and as we start talking about managing across lots of clusters, how does Red Hat think about this? How should customers think about it? How do we make sure that we're staying secure and keeping updated without getting run over by the constant treadmill of change?
The interesting part about Kubernetes is that it's so much more than just that core project, no matter what any of us in the core Kubernetes project, or in the products at Red Hat that build around it, go off and ship in layers on top. There's a whole ecosystem of components that most people think of as fundamental to building applications, deploying them, and running them, whether it's their continuous integration pipelines or their monitoring stacks. As Kubernetes has become a little more conservative, I think we've really nailed down our processes for taking that change from the community and testing it. We run tens of thousands of automated tests a week on the latest and greatest Kubernetes code. We give it time to soak, we fit it together with all those pieces of the ecosystem, and then we make sure that they work well together. And I've noticed over the last two years that the rate of "oops, we missed that in Kubernetes 1.17" moments, where people were already using something by the time anyone caught the problem, has started to go down. For us, it really hasn't been about the pace of keeping up with upstream; it's been about making sure that we can responsibly pull together all the other ecosystem components that are still much newer and, how do we say it, still in the exciting phase of their development, while still providing a predictable, reliable update stream. I'd say the challenge most people are going to see is how they bring together all those pieces. On OpenShift, we think of our goal as helping pull together all the pieces of this ecosystem, making some choices for customers that make sense, and giving them flexibility where it's not yet clear what the right choice is, or where different people could reasonably disagree. And I'm really excited. I feel like we've got our release cadence down, and we're shipping the latest Kubernetes after it's had time to go through review.
And I think we've gotten better and better at that. So I'm really proud of the team at Red Hat and how they've worked within the community so that everybody benefits from that testing and that stability. Great. I'd like to hear you dig in a little bit on the application side. What's happening with the workloads customers are running? What other innovation is happening around that space? And how is Red Hat helping the infrastructure team and the developer team work even closer together, like Red Hat has done for a long time? This is a great question. I'd say there are two key threads coming together. People are bringing substantial, important, critical production workloads, and they expect things both to just work and to be understandable. A lot of folks I talk to are making the transition from previous systems: they've been running OpenShift for a while, or they've been running Kubernetes for a while, and they're getting ready to move a significant portion of their applications over. In the early days of any project you get the exciting greenfield development, you get to go play with new technologies, but as you start moving first one, then ten, then a hundred of your core business applications from VMs or from bare metal into containers, you're taking advantage of that technology in a responsible way. And so the expectation on us as engineers and community members is to really make sure that we're closing out the little stuff. No bug is too small that it can't trip up someone's production application. So we're seeing a lot of that, whether it's something new and exciting like model-as-a-service for AI workloads, or traditional big enterprise transaction-processing apps. On the other side, on the development model, I think we're starting to see phase two, or Kubernetes 2.0, in the community, which is people really leveraging the flexibility and the power of containers.
These things aren't necessarily new to people who got into containers early and have had a chance to go through a couple of iterations, but now people are starting to find patterns that up-level development teams. Being able to run applications the same way on a local machine as in a production environment matters, because those production environments are there now. And so people are going through all of their tools and asking: does this process that works for an individual developer also work when I want to move it through my staging environments to production? New projects like Knative and Tekton, which are Kubernetes-native, are just one part of the ecosystem around development on top of Kubernetes. There are tons of exciting projects out there from companies that have adopted the full stack of Kubernetes and built this idea of flexible infrastructure into their mindset, and we're seeing an explosion of new approaches where Kubernetes is really just a detail, containers are just a detail, and the fact that there's this little thing called Docker running down at the heart of it, well, nobody talks about it anymore. That transition has been really exciting. There's a lot we're trying to do to help developers and administrators see eye to eye, and a lot of it is learning from the customers and users out there who've really paved the way, which is the open source way: learning from others and helping others benefit from that. Yeah, I think you bring up a really important point. We've been saying for a couple of years now that Kubernetes should get to the point where it's boring, and boring in part because it's going to be baked in everywhere.
We've seen everything from customers taking the code and spinning up a lot of their own stacks, to, of course, lots of customers who have used OpenShift over the years; and as they adopt public cloud more and more, they're using those services too. Can you talk a bit about how Red Hat is integrating with the public clouds, your architectural and technical philosophy there, and how that might differ from some other companies you might call a little more cloud-adjacent, as opposed to being deeply integrated with the public cloud? The interesting thing about Kubernetes is that while it was developed on top of the clouds, it wasn't really built from day one assuming a cloud underneath it. And I think that was an opportunity we missed; to be fair, we had to make the thing work first before we could depend on the clouds. When we started, the clouds were really hitting their stride on stability and reliability, and they were becoming the obvious choice. So some of what we've tried to do is take flexible infrastructure as a given, and assume that the things the cloud provides should be programmed for the benefit of the developer and the application. I think that's a key trend: we're not using the cloud because our administration teams want us to, we're using the cloud because it makes us more powerful as developers, it enables new scenarios, and it shortens the time between idea and reality. What we've done in OpenShift is build around the idea that OpenShift running on a cloud should take advantage of that cloud to an extreme degree. If infrastructure can be flexible, the machines in that cluster should come and go according to the demands of the applications on top of it. So we're giving a little bit more power to the cluster and taking a little bit away from the cloud.
But that also needs to benefit those who are running on premise, because, as you noted, our goal is a ubiquitous Kubernetes environment everywhere. The operations teams and the development teams and the DevOps teams in between need a consistent environment, and if you can do this on the cloud but you don't have that flexibility on premise, you've lost something. So what we've tried to do as well is think about those ideas that are, quote unquote, cloud native. That starts with immutable operating systems. It starts with everything being declarative and working backwards from the desired state: I want to have 15 machines, and then the controllers on the cluster say, oh, one of the machines has gone bad, let's replace it. On the cloud, you ask the cloud infrastructure provider's API for a new machine, replace it automatically, and no one knows any better. On premise, we'd love to do the same thing with both bare metal and virtualization on top of Kubernetes, so we have the flexibility to say: you may not have all of the options, but we should certainly be able to say this hardware is bad, or the machine stopped, so let's reboot it. A lot of that same mindset can be applied. If you need virtualization you can always use it, but virtualization as a layer on top benefits from the same things that all the other extensions and applications on top of Kubernetes benefit from. So we're trying to pave that layer: making sure that you have flexible, reliable storage on premise through Ceph and the Red Hat storage products, which are built on top of the cluster exactly like virtualization is built on top of the cluster, so you get cloud native storage mixed in; and working with those teams to capture their operational best practices.
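The declarative, controller-driven model Clayton describes (state a desired machine count, and let a control loop replace failed machines) can be sketched as a toy reconcile loop. This is an illustrative sketch only, not OpenShift's actual machine-api implementation; the `Machine` and `MachineSet` names here are simplified stand-ins for the real Kubernetes resources.

```python
# Toy model of declarative machine management: the user declares a desired
# state (replica count), and a reconcile loop converges actual state toward
# it, culling failed machines and provisioning replacements automatically.
from dataclasses import dataclass, field
from itertools import count

_ids = count(1)  # simple generator for unique machine names

@dataclass
class Machine:
    name: str
    healthy: bool = True

@dataclass
class MachineSet:
    desired_replicas: int
    machines: list = field(default_factory=list)

def reconcile(ms: MachineSet) -> None:
    """One pass of the control loop: drop failed machines, then
    provision new ones until actual count matches the declared count."""
    ms.machines = [m for m in ms.machines if m.healthy]   # cull bad hardware
    while len(ms.machines) < ms.desired_replicas:         # scale up to spec
        ms.machines.append(Machine(name=f"worker-{next(_ids)}"))
    while len(ms.machines) > ms.desired_replicas:         # scale down if over
        ms.machines.pop()

ms = MachineSet(desired_replicas=3)
reconcile(ms)                    # initial provisioning: 3 healthy machines

ms.machines[0].healthy = False   # a machine goes bad
reconcile(ms)                    # the controller replaces it, no human involved
```

The point of the pattern is that nobody imperatively "fixes" the bad machine; the loop simply re-derives actual state from declared state on every pass, which is the same mindset whether the machines come from a cloud API, bare metal, or virtualization.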
You know, I think one of the things that interests me is that anyone running an early version of Ceph 20 years ago had to have some approach to operating these very large systems at scale. Organizations like CERN have been using Ceph for over a decade at extremely large scale. Part of our mindset is that we think it's time to bake some of that knowledge into our software. For a very long time we've been building out and adding more and more software, but we always left the automation, the knowledge about how that software is supposed to be run, off to the side. So by taking that knowledge, and this is where we talk about operators, Kubernetes really enshrines this principle, we bake some of that operational knowledge into the software we ship. That software can rely on Kubernetes, OpenShift tries to hide the details of the infrastructure underneath, and our goal in the long run is really just to make everybody's lives easier. I shouldn't have to ship you a Ceph admin for you to be successful. And we think there's a lot more room here that's really going to improve how operations teams work with the software they use day to day. So Clayton, you mentioned virtualization, and of course virtualization is very prevalent in customers' data center environments today. Red Hat OpenShift oftentimes sits on VMware environments in those data centers. Of course, VMware recently announced that they have Kubernetes baked into their solution, and Red Hat has OpenShift with Red Hat Virtualization. Maybe without going into too much depth, and you probably have breakouts and white papers on this, what kind of decision points should customers be thinking about when they're deciding: do I do this on bare metal? Do I do it on virtualization? What are some of the high-level trade-offs when they need to make those decisions? I think the first one is that virtualization is a mature technology.
It's a known quantity for many organizations, and like any responsible architect or engineering team, those who are comfortable with virtualization don't want to stop using something that's working well just because they can. What I see in the transition companies are making is that some organizations without a big investment in virtualization don't see the need for it anymore, except maybe as a technical detail of how they isolate and secure workloads. One of the great things about virtualization technology that we've all become aware of over the last couple of years is that it creates a boundary between workloads and the underlying environment. That doesn't mean the underlying environment and containers can't be as secure, or can't benefit from those same techniques. And so we're starting to see in the community a spectrum of virtualization, all the way from big traditional virtualization down to streamlined, stripped-down virtualization wrappers around containers, like some of the cloud providers use for their application environments. I'm really excited that the open source community is touching each of these points on the spectrum. One of our goals is: if you're happy with your infrastructure provider, we want to work well with it. That's the pragmatic view; everyone's at a different step in that journey. The flip side is that no matter how fast you make the VM, it's never going to be quite as fast as a container, and it's never going to be quite as easy for a developer to run on their laptop. There's still a lot of work that we as a community need to do to make it easier for developers to build containers and test them locally in smaller environments. But all of that flexibility can still benefit from a virtualization underlayer, or from virtualization used as an isolation technology.
So projects like Kata Containers, and some of the work being done in the open source community around projects like Firecracker, take the same open source ideas and remix them at different points on that spectrum, which gives us a lot of flexibility. I would say I'm actually less interested in virtualization than in all of the other technologies that are application-centric. At the heart of it, a VM isn't really a developer-centric idea; it's an administrative concept that benefits the administrator, and developers can take advantage of it. But all of the capabilities you think of when you think of building an application, like scaling out, making sure patches are applied, being able to roll back, separating your configuration, and then the hundreds of other levels of complexity we'll add around that, like service mesh and the ability to gracefully tolerate failures in your database: these are where I think virtualization needs to work with the platform rather than dominating how we think about the platform. It's application first, not VM first. Yeah, no, you're absolutely right. The critique I've given for a number of years now is that with virtualization, the promise was: let's take that old application that probably should have been updated, shove it in a VM, and never think about it again. That's not doing good things for the user. So if that's one end of the spectrum, then way at the other end, trying not to think about infrastructure at all, you mentioned Knative. One of the things I've been digging into and trying to learn more about at Red Hat Summit has been OpenShift Serverless. So give us the update on that piece. That's obviously a very different discussion than the one we were just having about virtualization. How does OpenShift look at serverless?
How does that tie in if I'm doing serverless in Amazon, versus some of the other open source options for serverless? How should I be thinking about that? There are a lot of great choices on the spectrum out there. I love the word spectrum here, because Knative sits at a spot where, as the name says, it tries to be as Kubernetes-native as possible, which lets you tap into some of those additional capabilities when you need them. One of the things I've always appreciated is that the more restrictive a framework is, usually the better it is at doing that one thing and doing it really well. We learned this with Rails, we learned this with Node.js. As people have built simple development platforms over the years, the core function idea has proven to be a great, simple idea. But sometimes you need to break out of it: you need extra flexibility, or your application needs to run longer, or cold start is actually an issue. One of the things I think is most interesting about Knative, and I see customers and users think this way, is that it sits at a point that gives you some of the flexibility of Kubernetes and a lot of the simplicity of functions-as-a-service. There's going to be an inevitable set of simpler use cases, where an organization has a very opinionated way of running applications, and I think that flexibility will really benefit Knative, whereas some of the more opinionated serverless frameworks lose a little bit of it. So that's one dimension where I still think Knative is well positioned to capture the broadest possible audience, which for Kubernetes and containers was kind of our mindset: we wanted to solve enough of the problems that you can run all your software. We don't have to solve all those problems to such a level that there's endless complexity, although we've been accused of having endless complexity in Kubernetes before.
But we're just trying to think through the problems that everyone's going to have and give them a way out. At the same time, for us, when we think about productizing functions-as-a-service, it's about integration. It's about taking applications and connecting them, connecting them through Kubernetes. And so it really depends on identity, on access to data, and on tying that into your cloud environment if you're running on top of a cloud, or into your backend databases if you're on premise. That is where the ecosystem is still working to bring together and standardize some of those pieces, in Kubernetes or on top of Kubernetes. What I'm really excited about is the team. There's been this core community effort to get Knative to GA quality, and alongside that, the OpenShift Serverless team has been trying to make it dramatically simpler if you have Kubernetes and OpenShift: it's a one-click action to get started with Knative. And just like any other technology, how accessible it is determines how easily users can get started and build the applications they need. So for us, it's not just about the core technology; it's about whether someone who's not familiar with serverless or not familiar with Kubernetes can bring up an editor, build a function, deploy it on top of OpenShift, and see it scale out like a normal Kubernetes application, without having to know about pods or persistent volumes or nodes. These are some of the steps I've been really proud the team has taken. I think there's a huge amount of innovation that will happen this year and next year as the Kubernetes ecosystem really matures. We'll start to see standardized technologies for sharing identity across multiple clouds, across multiple environments. It's no good if you've got applications on the cloud that need to tie into your corporate LDAP but you can't connect your corporate LDAP to the cloud.
And so your applications need a third identity system, and nobody wants a third identity system. Working through some of this, these are the challenges I think hybrid organizations are already facing. Our job is just to work with them in the open source communities, and to partner with the cloud providers in open source, so that the technologies in Kubernetes fit very well into whatever environment they run in. All right, well, Clayton, really appreciate all the updates there. I know the community is definitely looking forward to digging through some of the breakout sessions and reading all the new announcements. And of course, we'll look forward to seeing you and the team participating in many of the Kubernetes-related events happening later this year. That's right, it's going to be a good year. All right, thanks so much, Clayton, for joining us. I'm Stu Miniman, and as always, thank you for watching theCUBE.