From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon North America 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners.

Hello everyone, and welcome back to theCUBE's ongoing coverage of KubeCon North America. Joe Fernandez is here, he's with Stephanie Chiras. Joe's the VP and GM for Core Cloud Platforms at Red Hat, and Stephanie is the SVP and GM of the Red Hat Enterprise Linux BU, two great friends of theCUBE. Awesome seeing you guys, how are you doing?

Good, it's great to be here, Dave.

Yeah, thanks for the opportunity.

Hey, so we all talked recently, AnsibleFest seems like a while ago, but we talked about what's new at Red Hat, really coming at it from an automation perspective. I'm wondering if we could take a view from OpenShift: what's new from the standpoint of helping customers change and operationalize their operations? Stephanie, maybe you could start, and then Joe, you can bring in some added color.

No, that's great. One of the things we try to do at Red Hat, clearly building off of open source, is this open hybrid cloud strategy we've been focused on for years now. The beauty of it is that open hybrid cloud continues to evolve, right? Bringing in things like speed and stability and scale, and now adding in other footprints like managed services as well as edge, and pulling that all together across the whole Red Hat portfolio: from the platforms, certainly with Linux and RHEL, into the platform with OpenShift, and then adding automation, which you certainly need for scale. It continues to evolve as the definition of open hybrid cloud evolves.

Great, thank you, Stephanie. And Joe, you guys have hard news here; maybe you can talk about the 4.6 release.
Yeah, so OpenShift is our enterprise Kubernetes platform, and with this announcement we released OpenShift 4.6. We're doing releases every quarter, tracking the upstream Kubernetes release cycle, so this brings Kubernetes 1.19, which itself brings a number of new innovations. Some specific things to call out: we have a new automated installer for OpenShift on bare metal. And that's definitely a trend we're seeing, more customers not only looking at containers, but looking at running containers directly on bare metal environments. OpenShift provides an abstraction, which combines Kubernetes on top of Linux with RHEL, really across all environments, from bare metal to virtualization platforms to the various public clouds and out to the edge. But we're seeing a lot of interest in bare metal, and this release basically increases the automation to install seamlessly and manage upgrades in those environments.

We're also seeing a number of other enhancements. OpenShift Service Mesh, which is our Istio-based solution for managing the interactions between microservices, being able to manage traffic against those services, being able to do tracing; we have a new release of that in OpenShift 4.6. And then there's some work specific to the public cloud, where we've started extending into the government clouds. We already supported AWS and Azure; with this release, we added support for AWS GovCloud as well as Microsoft Azure Government. Again, this is really important to our public sector customers who are looking to move to the public cloud, leveraging OpenShift as an abstraction, but want it supported on the specialized clouds they need to use with Azure and AWS.

So Joe, can we stay there for a minute? So bare metal, we're talking performance there, because you really want to run fast, right? So that's the attractiveness there. And then the point about Istio and the OpenShift Service Mesh, that makes things simpler.
Maybe talk a little bit about the business impact and what customers should expect to get out of these two things.

So let me take them one at a time, right? Running on bare metal, certainly performance is a consideration. I think a lot of folks today are still running containers and Kubernetes on top of some form of virtualization, either a platform like vSphere or OpenStack, or maybe VMs in one of the public clouds. But containers don't depend on a virtualization layer; containers only depend on Linux, and Linux runs great on bare metal. So as we see customers moving more towards performance- and latency-sensitive workloads, they want to get that bare metal performance, and running OpenShift on bare metal, with their containerized applications on that platform, certainly gives them that advantage. Others just want to reduce cost, right? They want to reduce their VM sprawl, the infrastructure and operational cost of managing a virtualization layer beneath their Kubernetes clusters. That's another benefit. So we see a lot of uptake in OpenShift on bare metal.

On the service mesh side, this is really about how we see applications evolving. Customers are moving more towards these distributed architectures, taking formerly monolithic or n-tier applications and splitting them out into lots of different services. The challenge then becomes, how do you manage all those connections? Because something that was a single stack is now comprised of tens or hundreds of services. So you want to be able to manage traffic to those services: if a service goes down, you can redirect those requests to an alternative or a failover service. Also tracing: if you're looking at performance issues, you need to know where in your architecture you're having those degradations, and so forth. Those are some of the challenges that people can overcome, or get help with, by using service mesh, which is powered by Istio.
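The traffic management Joe describes can be sketched as an Istio VirtualService. This is a hypothetical fragment, not from the interview: the `checkout` service, the `v1`/`v2` subsets, and the weights are illustrative, and a matching DestinationRule defining those subsets is assumed.

```yaml
# Hypothetical VirtualService for a "checkout" microservice: send 90% of
# requests to subset v1 and 10% to a v2 canary, and retry failed requests
# so traffic can fail over away from a struggling instance.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout
  http:
    - route:
        - destination:
            host: checkout
            subset: v1
          weight: 90
        - destination:
            host: checkout
            subset: v2
          weight: 10
      retries:
        attempts: 3
        perTryTimeout: 2s
```

Shifting the weights to 0/100 completes a cutover without touching the application; the tracing side Joe mentions is handled in OpenShift Service Mesh by its bundled Jaeger integration.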
And then, I'm sorry, Stephanie, let me get to you in a minute, but just one follow-up on that, Joe. So the real differentiation between what you bring and what I can just... if I'm in a mono-cloud, for instance: you're going to bring this across clouds, you're going to bring it to on-prem, and we're going to talk about the edge in a minute. Is that right, from a differentiation standpoint?

Yeah, that's one of the key differentiations. Red Hat's been talking about the hybrid cloud for a long time; we've been articulating our open hybrid cloud strategy. And even if that's not a strategy you may be thinking about, it's ultimately where folks end up, right? Because all of our enterprise customers still have applications running in the data center, but they're also all starting to move applications out to the public cloud. As they expand their usage of public cloud, you start seeing them adopt multi-cloud strategies, because they don't want to put all their eggs in one basket. And then for certain classes of applications, they need to move those applications closer to the data, and so you start to see edge becoming part of that hybrid cloud picture. What we do is basically provide consistency across all those environments. We want to run great on Amazon, but also great on Azure, on Google, on bare metal in the data center, bare metal out at the edge, on top of your favorite virtualization platform. That consistency, to take a set of applications and run them the same way across all those environments, is just one of the key benefits of going with Red Hat as your provider for open hybrid cloud solutions.

All right, thank you. Stephanie, we'll come back to you here. So, we talk about RHEL a lot, because it's the business unit that you manage. And we're starting to see Red Hat's edge strategy unfold, and RHEL is really the linchpin.
I wonder if you could talk about how you're thinking about the edge. I'm particularly interested in how you're handling scale, and why you feel you're in a good position to handle that massive scale and the requirements of the edge, versus, hey, we need a new OS for the edge.

Yeah, I think, and Joe did a great job of setting that up, it does come back to our view that this open hybrid cloud story has always been about consistency. It's about that language that you speak no matter where you want to run your applications, between RHEL on my side and Joe with OpenShift, and of course we run the same Linux underneath: RHEL CoreOS is part of OpenShift. That consistency leads to a lot of flexibility, whether it's through a broad ecosystem or across footprints. So now, as we have been talking with customers about how they want to move their applications closer to data, further out and away from their data center, some of it is about distributing your data center, getting that compute closer to the data or closer to your customers. It drives some different requirements, right? Around how you do updates, how you do over-the-air updates. And so we have been working in typical Red Hat fashion: we've been looking at what's being done upstream. In the Fedora upstream community there's a lot of work that has been done in what's called the IoT special interest group; they have been really investigating the requirements for this edge use case. So now we're really pleased that in our most recent release of RHEL 8, RHEL 8.3, we have put in some key capabilities that we're seeing driven by these edge use cases. Things like, how do you do quick image generation? That's important because, as you distribute, you want that consistency: create a tailored image, be able to deploy that in a consistent way, allow that to address scale, and meet the security requirements that you may have as well.
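The quick, tailored image generation Stephanie describes is done in RHEL with Image Builder, which works from blueprints. The blueprint below is a minimal, hypothetical sketch; the name and package set are illustrative, not from the interview.

```toml
# Hypothetical Image Builder blueprint for a small, tailored edge image.
name = "edge-device"
description = "Minimal tailored image for a fleet of edge devices"
version = "0.0.1"

# Keep the package set small: less to update, smaller attack surface.
[[packages]]
name = "openssh-server"
version = "*"

[[packages]]
name = "chrony"
version = "*"
```

With the composer-cli tooling this would be pushed and built along the lines of `composer-cli blueprints push edge-device.toml` followed by `composer-cli compose start edge-device rhel-edge-commit`; the `rhel-edge-commit` output type produces the ostree commit that feeds the rpm-ostree over-the-air update flow, though exact type names vary by release.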
Updates become very important when you start to spread this out. So we put in things to allow remote device mirroring, so that you can put code into production and then schedule it on those remote devices to happen with minimal disruption. And, as we all know now with all this virtual stuff, we often run into non-ideal bandwidth and sometimes intermittent connectivity with all of those devices out there. So we put in capabilities around being able to use something called rpm-ostree in order to deliver efficient over-the-air updates. And then of course you've got to do intelligent rollbacks: in case something goes wrong, how do you come back to a previous state? So it's all about being able to deploy at scale in a distributed way, be ready for that use case, and have some predictability and consistency. And again, that's what we build our platforms for. It's all about predictability and consistency, and that gives you flexibility to add your innovation on top.

I'm glad you mentioned intelligent rollbacks. I learned a long time ago, you always ask the question, what happens when something goes wrong? You learn a lot from the answer to that. But we talk a lot about cloud native; it sounds like you're adapting well to become edge native.

Yeah, I mean, we're finding that whether it's in the verticals and the very specific use cases, or whether it's in sort of an enterprise edge use case, having consistency brings a ton of flexibility. It was funny, I was talking with a customer not too long ago, and they said, agility is the new version of efficiency. So having that sort of language be spoken everywhere, from your core data center all the way out to the edge, allows you a lot of flexibility going forward.

Joe, I wonder if you could talk about, I just mentioned cloud native.
I mean, I think people sometimes just underestimate the effort it takes to make all this stuff run in all the different clouds, the engineering effort required. I'm wondering what kind of engineering you do, if any, with the cloud providers, and of course the balance of the ecosystem. Maybe you could describe that a little bit.

Yeah, so Red Hat works closely with all the major cloud providers, whether that's Amazon, Azure, Google or IBM Cloud, obviously. And we're very keen on making sure that we're providing the best environment to run enterprise applications across all those environments, whether you're running directly just with Linux on RHEL, or whether you're running in a containerized environment with OpenShift, which includes RHEL. So our partnership includes work we do upstream. For example, Red Hat helped Google launch the Kubernetes community, and together with Google we've been the top two contributors driving that project since its inception. But it also extends into our hosted services. We run a jointly developed and jointly managed service called Azure Red Hat OpenShift together with Microsoft, where our joint customers can get access to OpenShift in an Azure environment as a native Azure service, meaning it's fully integrated just like any other Azure service: you can tie it into Azure billing and so forth, and it's sold by Microsoft sales reps. We get the benefit of working together with our Microsoft counterparts in developing that service, managing that service, and then supporting our joint customers. Over the summer we announced a similar partnership with Amazon, and we'll be launching, we're already doing pilots on, the Amazon Red Hat OpenShift service, which is the same concept now applied to the AWS cloud. So that'll be coming out GA later this year, right?
But again, whether it's working upstream or whether it's partnering on managed services. I know Stephanie and team also do a lot of work with Microsoft, for example, on SQL Server on Linux and .NET on Linux. Whoever thought you'd be running .NET applications on Linux? But that's a few years old now. So again, it's been a great partnership, not just with Microsoft, but with all the cloud providers.

So I think you just let a little leak slip there, Joe. What's coming GA later this year? I just want to circle back to that.

Yeah, so we announced a preview earlier this year of the Amazon Red Hat OpenShift service. It's not generally available yet. We're taking customers who want early access into pilots, and then that'll be generally available later this year. Red Hat does manage our own service, OpenShift Dedicated, that's available on AWS today, but that's a service solely operated by Red Hat. This new service will be jointly operated by Red Hat and Amazon together; it'll be a service that we are delivering together as partners.

As a managed service. Okay, so that's in beta now, I presume, if it's going to be GA later this year?

Yeah, exactly.

And that's probably running on bare metal, I would imagine.

That one is running on EC2, so that's running on AWS EC2, exactly.

Can I run it on bare metal if I want to?

I mean, AWS does offer bare metal instances, and we do have customers who can take the OpenShift software and deploy it there. Right now, our managed offering is running on top of EC2 and on top of Azure VMs. But again, this is appealing to customers who like what we bring in terms of an enterprise Kubernetes platform, but don't want to operate it themselves, right? So it's a fully managed service: you just come in, build and deploy your apps, and then we manage all of the infrastructure and all the underlying platform for you.

That's going to explode, my prediction.
Let's take an example, a hard example: security. I'm interested in how you guys ensure a consistent security experience across all these locations: on-prem, cloud, multiple clouds, the edge. Maybe you could talk about that. And Stephanie, I'm sure you have a perspective on this as well from the standpoint of RHEL. Who wants to start?

Yeah, maybe I could start from the bottom and then pass it over to Joe. One of the things about security, and it's clearly top of mind for all customers, is that it starts at the very bottom, with the base selection of your OS. We continue to drive SELinux capabilities into RHEL to provide that foundational layer, and as we run RHEL CoreOS in OpenShift, we bring over that SELinux capability as well. But there's a whole lot of ways to tackle this. We've done a lot around our policies for CVE updates in RHEL, to make sure that we continue to commit to mitigating all critical and important CVEs, and to provide better transparency into how we assess those CVEs. So security is certainly top of mind for us. And then as we move forward, and Joe can talk about the security work we do there, there are also capabilities in containerization. We work all the way from the base up, doing things like these easy-to-build images, which are tailored so you can make them smaller: less surface area, better security. Security is one of those things that's a lifestyle, right? You've got to look at it all the way from the base and the operating system, with things like SELinux, to how you build your images, where we've now added new capabilities, and then of course in containers, where there's a whole focus in the OpenShift area around container security. Joe, anything you want to add to that?

Yeah, sure. I mean, I think obviously Linux is the foundation for all public clouds. It's driving enterprise applications in the data center.
Part of keeping those applications secure is keeping them up to date, and through RHEL we provide a secure and up-to-date foundation, as Stephanie mentioned. As you move into OpenShift, you're also able to take advantage of immutability, right? So now the application that you're deploying is an immutable unit that you build once as a container image, and then you deploy that out to all your various environments. When you have to do an update, you don't go and update all those environments; you build a new image that includes those updates, and then you deploy those images out in a rolling fashion. And as you mentioned earlier, you can go back if there are issues. So the notion of immutable application deployments has a lot to do with security, and it's enabled by containers. And then obviously you have Kubernetes and all of the rest of our capabilities as part of OpenShift managing that for you.

We've extended that concept to the entire platform. Stephanie mentioned RHEL CoreOS. OpenShift has always run on RHEL, and what we have done in OpenShift 4 is take an immutable version of RHEL. So it's the same Red Hat Enterprise Linux that we've had for years, but now, in this latest version, RHEL 8, we have a new way to package and deploy it, as a RHEL 8 CoreOS image, and that becomes part of the platform. In addition to keeping their applications up to date, customers need to keep their platform up to date: up with the latest Kubernetes patches, up with the latest Linux packages. What we're doing is delivering that as one platform, so when you get updates for OpenShift, they can include updates for Kubernetes, and they can include updates for Linux itself, as well as all the integrated services.
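The rolling, image-based update pattern Joe describes maps directly onto a Kubernetes Deployment. This manifest is a generic hypothetical sketch; the name, registry, image tag, and replica counts are illustrative.

```yaml
# Hypothetical Deployment: the app ships as an immutable, versioned image.
# Updating means changing the image tag; Kubernetes then replaces pods
# gradually according to the rollingUpdate strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep at least 2 of 3 replicas serving
      maxSurge: 1         # allow 1 extra pod during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.2  # never patched in place
```

Something like `oc set image deployment/web web=registry.example.com/web:1.0.3` rolls the fleet forward to a new immutable image, and `oc rollout undo deployment/web` is the "go back if there are issues" step Joe mentions.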
And again, this is how you keep your applications secure: making sure you're taking care of that hygiene, managing your vulnerabilities, keeping everything patched and up to date, and ultimately ensuring security for your application end users.

I know I'm going a little bit over, but I have one question that I want to ask you guys, and it's a broad question about trends you see in the business. We talk a lot about cloud native, and you look at Kubernetes, and the interest in Kubernetes is off the charts. It's an area that has a lot of spending momentum; people are putting resources behind it. But really, to build these modern applications, it's considered state of the art, and you see a lot of people trying to bring that modern approach to any cloud. We've been talking about edge; you want to bring it also on-prem. And people generally associate this notion of cloud native with elite developers, right? But you're bringing it to the masses. There are 20 million-plus software developers out there, and most, with all due respect, may not be the elites of the elite. So how are you seeing this evolve in terms of re-skilling people to be able to handle and take advantage of all this cool new stuff that's coming out?

Yeah, I can start. With OpenShift, our focus from the beginning has been bringing Kubernetes to the enterprise, so we think of OpenShift as the dominant enterprise Kubernetes platform. Enterprises come in all shapes and sizes and skill sets, as you mentioned. They have unique requirements in terms of how they need to run stuff and then bring that to production, whether it's in the data center or across the public clouds.
So part of it is making sure the technology meets the requirements, and part of it is working on the people, process and culture, helping them understand what it means to take advantage of containerization and cloud-native platforms and Kubernetes. Of course, this is nothing new to Red Hat. This is what we did 20 years ago, when we first brought Linux to the enterprise with RHEL. And in essence, Kubernetes is basically distributed Linux: Kubernetes builds on Linux and brings it out to your cluster, to your distributed systems, and across the hybrid cloud. So nothing new for Red Hat, but a lot of the same challenges apply in this new cloud-native world.

Awesome. Stephanie, we'll give you the last word.

All right. Just to touch on what Joe talked about, and Joe and I work really closely on this: as someone launches down this path, because it is magical what can be done deploying applications using container technology, we built the capabilities and the tools to build and deploy containers, leveraging things like Podman, directly into RHEL. And that's exactly so everyone who has a RHEL subscription today can start on their container journey, start to build and deploy. Then we work to help those skills be transferable as you move into OpenShift and Kubernetes and orchestration. So we work very closely to make sure that the skills building can be done directly on RHEL and then transfer into OpenShift, because as Joe said, at the end of the day, it's just a different way to deploy Linux.

Guys, you're doing some good work. Keep it up, and thanks so much for coming back on theCUBE. It was great to talk to you today.

Good to see you, Dave.

Yeah, thank you.

And thank you for watching, everybody. theCUBE's coverage of KubeCon NA continues right after this.
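The RHEL-to-OpenShift on-ramp Stephanie describes can be sketched as a Containerfile built with Podman. This is a hypothetical example: the base image choice and the application details are illustrative, not from the interview.

```dockerfile
# Hypothetical Containerfile: a small Python app on Red Hat's UBI base.
FROM registry.access.redhat.com/ubi8/ubi-minimal

# ubi-minimal ships microdnf as its package manager.
RUN microdnf install -y python3 && microdnf clean all

COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

`podman build -t myapp .` and `podman run --rm myapp` cover the local loop on a RHEL box; the same image, pushed to a registry, deploys unchanged to OpenShift, which is the skills transfer Joe and Stephanie describe.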