From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2021 Virtual. Brought to you by Red Hat, the Cloud Native Computing Foundation and ecosystem partners.

Hey, welcome back, everyone, to theCUBE's coverage of KubeCon and CloudNativeCon 2021 Virtual. I'm John Furrier, your host here in theCUBE. We've got Steve Gordon, Director of Product Management, Cloud Platforms at Red Hat. Steve, welcome to theCUBE, good to see you. Thanks for coming on.

Hey, John, thanks for having me on. It's great to be back.

Soon we'll be in real life. This is the Europe virtual show; I think the North America one might be in person. It's not yet official, we'll find out, but it's looking good so far. Thanks for all your collaboration. You guys have been a big part of the CNCF, and we've been covering it on theCUBE, as you know, since the beginning. But I wanted to get into the edge conversation that's been going on. First, I want to get this out there: you guys are sponsoring Edge Day here at KubeCon. I want you to bring that together for us, because this is a big part of what Red Hat's talking about. And frankly, for customers, the edge is the most explosive growth area. It's got the most complexity, it's got data, it's got everything at the edge. Everything's happening there. How important is Kubernetes to edge computing?

Yeah, it's certainly interesting to be here talking about it now and having a dedicated Kubernetes Edge Day. I was thinking back to one of the last in-person KubeCon events, if not the last, the San Diego event, where there was already a cresting of interest in edge and topics on the agenda around edge. It's great to see that momentum has continued up to where we are today, and more and more people are not only talking about using Kubernetes for edge, but actually getting in there and doing it.
And when we look at why people are doing that, they're really leaning into some of the same things that drew them to Kubernetes in general, which they're now able to apply to edge computing use cases: having a common interface to this very powerful platform that you can take to a growing multitude of footprints, be they the public cloud providers where a lot of people may have started their Kubernetes journey, their own data center, or these edge locations where they're increasingly trying to do processing closer to where they're collecting data.

You know, when you think about edge and all the evolution with cloud native, what's interesting is that Kubernetes is enabling a lot of value. I'd like to get your thoughts: what are you hearing from customers around use cases? I mean, you are doing product management, you've got to document all the features, the wish list. You have the keys to the kingdom on what's going on over at Red Hat. We're seeing just amazing connectivity between businesses with hybrid cloud. It's a game changer. I haven't seen this kind of change at this level since the late '80s, early '90s, in terms of inflection point impact. This is huge. What are you hearing?

I think it's really interesting that you use the word connectivity there, because one of the first edge computing use cases that I've been closely involved with and working a lot on, which then grows into the others, is around telecommunications and 5G networking. And the reason we're working with service providers on that adoption of Kubernetes as they build 5G basically as a cloud-native platform from the ground up is that they're really leveraging what they've seen with Kubernetes elsewhere and taking that to deliver this connectivity, which is going to be crucial for other use cases.
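The "common interface to any footprint" idea Steve describes can be sketched with a plain Kubernetes Deployment manifest: the same spec shape is applied whether the cluster runs in a public cloud, a data center, or an edge site, with only a placement hint differing. This is a minimal illustrative sketch, not a Red Hat reference implementation; the function name and the `topology.example.com/site` node label are assumptions for illustration, since real clusters define their own label taxonomy.

```python
# Minimal sketch: one Deployment spec reused across footprints; only the
# node selector (a hypothetical, illustrative label) changes per location.
def edge_deployment(name: str, image: str, site: str) -> dict:
    """Build a Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # Hypothetical label key; not an upstream standard.
                    "nodeSelector": {"topology.example.com/site": site},
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

cloud = edge_deployment("sensor-api", "quay.io/example/sensor-api:1.0", "cloud")
edge = edge_deployment("sensor-api", "quay.io/example/sensor-api:1.0", "factory-edge")
# Same interface, same spec shape -- only the placement hint changes.
print(edge["spec"]["template"]["spec"]["nodeSelector"])
```

In practice the same manifest would be applied to each cluster with `kubectl apply` or an OpenShift pipeline; the developer-facing interface stays constant while the footprint underneath varies.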
If you think about people trying to do automotive edge use cases, where they're increasingly putting more sensors on the car to make smarter decisions, but also things around the infotainment system using more and more data, or if you think about factory edge, all of these use cases build on connectivity as one of the core fundamental things they need. So that's why we've been really zoomed in there with the service providers and our partners, trying to deliver 5G networking capabilities as fast as we can, and the throughput and latency benefits that come with that.

If you don't mind me asking, I've got to go one step deeper. You mentioned some of these use cases, the connectivity. IoT was the big buzzword. Okay, IoT, it's an edge. It's operational technology, or it's a dumb endpoint, or a node on the network that has connectivity. It's got power, it's a purpose-built device, it's operating, it's getting surveillance data, whatever the hell it's doing, right? It's got edge. Now you bring in more intelligence, which is an IT kind of thing: state, databases, caching. Is the database too slow, is it too fast? So again, this brings up more complexity. Can you just talk about how you view that? Because this is what I'm hearing. What do you think?

Yeah, I agree. I think there's a real spectrum when we talk about edge computing, both in terms of the footprints and the locations, and the various constraints that each of those imply. And sometimes those constraints can be, as you're saying, a specially designed board which has a very specific chip on it and very specific memory and storage constraints, or it can be a literal physical constraint, in terms of "I only have this much space in this location to actually put something," or that space is subject to excess heat or other environmental considerations.
And I think when we look at what we're trying to provide, not just with Kubernetes but also Linux, it's a variety of solutions that can meet people no matter where they are along that spectrum: from the smallest devices, where maybe Red Hat Enterprise Linux, or RHEL for Edge, is suitable, to those use cases where maybe there's a little more flexibility in terms of what workloads I might want to run in the future, or how I want to grow that environment in the future as well. If I want to add nodes, then all of a sudden the capabilities that Kubernetes brings can be a more flexible base for them to build on.

So with all these use cases and the changing dynamics, and the power dynamics between operational technology and IT, which we're kind of riffing on: what should developers take away from that when they're considering their development? Whether they just want to be app developers programming the infrastructure, or they're tinkering with the underlying pieces, some database work, or they're under the hood, kind of full DevOps. What should developers take into consideration for all these new use cases?

Yeah, I think one of the key things is that we're trying to minimize the impact for developers as much as we can.
Now, of course, with an edge computing use case where you may be designing your application specifically for that board or device, that's a more challenging proposition. But there's also, increasingly, the case where that intelligence already exists in the application somewhere, whether in the data center or in the cloud, and they're just trying to move it closer to the endpoint where the actual data is collected. And that's where I think there's a really powerful story in terms of being able to use Kubernetes and OpenShift as the interface that the application developer interacts with, the same interface whether they're running in the cloud, maybe for development purposes, or when they take it to production and it's running somewhere else.

I've got to ask you about the AI impact, because every conversation I have, everyone I interview who's an expert or a practitioner, usually has a title along the lines of chief architect of cloud and AI. You're seeing a lot of cloud SREs, cloud-scale architects, meeting and also running the AI piece, especially in industries. So AI is a certain component that seems to be resonating. From a functional persona standpoint, people who are doing these transformations tend to have cloud and AI responsibility. Is that a fluke, or is that a pattern that's real?

No, I think that's very real. And I think when you look at AI and machine learning and how it works, it's very data-centric: what is the data I'm collecting? I'm sending it back to the mothership, maybe, for actually training my model, but when I actually go to process something, I want to do that as close as I can to the actual data collection, so that I can minimize what I'm trying to send back. People may not be as cognizant of it, but even today, many times we're talking about sites where that connectivity is actually fairly limited in some of these edge use cases.
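The "minimize what you send back" point can be sketched in a few lines: instead of shipping every raw sensor reading over a constrained link, the edge node reduces the data to a compact summary and uploads only that. This is a hedged illustration of the pattern Steve describes, not code from any Red Hat product; the function and field names are hypothetical.

```python
# Sketch: aggregate raw readings at the edge, ship only a small payload.
from statistics import mean

def summarize_readings(readings: list, site: str) -> dict:
    """Reduce raw sensor readings to a payload suitable for a limited uplink."""
    return {
        "site": site,                      # hypothetical site identifier
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

raw = [21.4, 21.9, 22.1, 35.7, 21.8]      # e.g. temperature samples at the edge
payload = summarize_readings(raw, site="factory-7")
print(payload)  # a few fields instead of the full time series
```

The same idea scales up to running a trained model at the edge and sending back only inferences or drift statistics, while the heavy training happens centrally.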
So what you're actually putting over the pipe is something you're still trying to minimize, while trying to advance your business and improve your agility by making these decisions closer to the edge.

What's the advantage for Red Hat? Talk about the benefits. What do you guys bring to the table? Obviously hybrid cloud is the new shift, everyone's agreed to that. I mean, pretty much the consensus is: public cloud's great, been there, done that, it's out there pumping away as a resource. But now enterprises are going to keep stuff on-premises, especially when you talk about factories or other environments that might need things on-premise. So it's clear hybrid's happening, everyone's in agreement. What does Red Hat bring to the table? What's in it for the customer?

Yeah, I would say hybrid is really evolving at the moment. Hybrid has gone through this transition where first it was maybe moving from my data center to public cloud, and I'm managing both of those through that transition, and maybe I'm on multiple public clouds. And now we're seeing this transition where some of that processing is moving back out again, closer to the use case and the data. And that's where we really see an extension of our existing hybrid cloud story, which is simply to say that we're trying to provide a consistent experience and interface for any footprint, any location, basically. And that's where OpenShift is a really powerful platform for doing this. It's got Kubernetes at the heart of it, but it's also worth considering, when we look at Kubernetes, that there's this entire cloud-native ecosystem around it, and that's an increasingly crucial part of why people are making these decisions as well.
It's not just Kubernetes itself, but all of those other projects, both directly in the CNCF itself and in that broader CNCF landscape of projects, which people can leverage. And even if they don't leverage them today, they know they have options out there for when they need to change in the future, if they have a new need for their application.

Yes, Steve, I totally agree with you. And I want to get your thoughts on this, because I was riffing with Brian Gracely, who works at Red Hat on your team, and we were talking about KubeCon plus CloudNativeCon as the name of the conference. He said it's a little bit more CloudNativeCon this year than KubeCon, inferring and implying that, okay, it used to be all about Kubernetes, Kubernetes, Kubernetes, and now it's like, whoa, cloud native is starting to come to the table, which shows the enablement of Kubernetes. That was our point: if Kubernetes does its job, it's creating a lever, some leverage, to create value. And that's being rendered in cloud native, and enterprises, not the hardcore hyperscalers and other early adopters, what I call classic enterprise, are coming in. They're contributing to open source as participants, and they're harvesting the value and creating cloud native. What's your reaction to that? Can you share your perspective on there being more cloud native going on than ever before?

Yeah, certainly. We've always thought, from the beginning of OpenShift, that it was about more than just Linux and Kubernetes, and even the container technologies that came before them, from the point of view that to really build a fully operational and useful platform, you need more than just those pieces. That's something that's been core to what we've been trying to build from the beginning. But it's also what you see in the community: people making those decisions as well, asking what are the pieces I need?
Whether it's fairly fundamental infrastructure concerns like logging and monitoring, or whether it's things like trying to enable different applications on top using projects like KubeVirt for virtualization, Istio for service mesh, and so on. Those are all considerations that people have been making gradually. I think what you're seeing now is a growing consensus in some of these areas within that broad CNCF landscape, in terms of, okay, what is the right option for each of these things I need to build the platform? And certainly we see our role as guiding customers to those solutions, but it's also great to see that consensus emerging in the communities that we care about, like the CNCF.

Great stuff, Steve. I've got to ask you a final question here. As you guys innovate in the open, I know your roadmaps are all out there in the open. Product management is about making decisions about what you work on. I know there are a lot of debates; Red Hat has a culture of innovation and engineering, so there are heated arguments, but you guys align at the end of the day. That's kind of the culture. What's top of mind if someone asks you: hey, Steve, bottom line, I'm a Red Hat customer, I'm going full throttle in the hybrid cloud, we're investing, you guys have the cloud platforms, what's in it for me? What's the bottom line? What do you say?

I think the big thing for us is, as I talked about, extending the hybrid cloud to the edge. And we're certainly very conscious that we've done a great job at addressing a number of footprints that cover the way people have done computing to date. And now, as we move to the edge, there's a real challenge to go and address more of those footprints, whether that's delivering OpenShift on a single node itself, but also working with cloud providers on their edge solutions as they move further out from the cloud as well.
So I think that's really core to the mission: continuing to enable those footprints, so that we can be true to that mission of delivering a platform that is consistent across any footprint, any location. And certainly that's core for me. The other big trend that we're tracking and continuing to work on, you talked about AI and machine learning, the other space we really see continuing to develop, and certainly relevant in the work I do with the telecommunications companies, is the accelerator space, where there are a lot of new and very interesting things happening with hardware and silicon, whether it be FPGAs, ASICs, and even data processing units. Lots of things happening in that space that I think are very interesting and going to be key to the next three to five years.

Yeah, and software needs to run on hardware. I love your tagline there. It sounds like a nice marketing slogan: any workload, any footprint, any location. Hey, DevSecOps, you've got to scale it up. So, good job. Thank you very much for coming on. Steve Gordon, Director of Product Management, Cloud Platforms at Red Hat. Steve, thanks for coming on.

Thanks, John, really appreciate it.

Okay, this is theCUBE's coverage of KubeCon and CloudNativeCon 2021 Europe Virtual. I'm John Furrier, your host from theCUBE. Thanks for watching.