Live from Boston, Massachusetts, it's theCUBE, covering Red Hat Summit 2019, brought to you by Red Hat.

Welcome back to theCUBE and our continuing coverage here at the Red Hat Summit. This is the sixth time around for us, and the fifth time for Stu Miniman, so he still gets almost the perfect-attendance gold star. First time for me, so Stu, I have a lot of catching up to do. Stu Miniman, John Walls, and Steve Speicher now joins us. He is Senior Principal Product Manager for developer tools at Red Hat. Steve, good afternoon to you. Thanks for joining us.

Thanks for having me.

Yeah, let's just talk first about development in general. There's a lot of give and take there, right? You're trying to listen: What are the needs? Where are the deficiencies? Where can improvements be made? But how much do you drive that on your side, and how much do you listen and respond to what you're hearing from the community?

Yeah, we do a little bit of both. A lot of it is responding to the community, and that's one of the areas where Red Hat has really excelled: taking what's popular and what's working upstream and helping move it along into a stable product or solution that developers can use. But we also have a certain agenda, certain platforms we want to present. We span everything from language runtimes to container platforms, so we have to drive some of those initiatives on our own to help fill the need, because we hear it from customers a lot: the things you're doing are great, but all these projects need to come together as a product, a unified experience. So we spend a lot of our time bringing those things together to help developers do those different tasks, and also to focus beyond just the Java runtimes, although we do hit Java a lot.
So you might have an end product in mind, and you realize there might be a gap in terms of development, and so you encourage, or you try to bridge, that gap a little bit to get to that end product? Is that what you're saying?

Yeah, so we do a lot of things to build the pieces so that people can build the experiences they want, because in the end, developers control their own destiny, their own set of tools. A lot of customers even have their own unique requirements, like tools they develop in-house for regulatory reasons and other things. So we have to, one, build the pieces, but also stitch the pieces together to give them that kind of out-of-the-box experience, because some customers really don't want to do that stitching. They just want a kind of turnkey solution, and then we may make some adjustments here and there.

But Steve, it's funny, it rhymes for me with what I saw 15, 20 years ago with Linux. A lot of changes, a lot of pieces, I want to take advantage of it, but boy, can somebody help me with this? And of course, Red Hat rode that wave pretty well. Today, Kubernetes is even more sprawling. There are so many different projects, so many pieces. Boy, it is complicated, and therefore: how do I take advantage of it? What do I need to know? What can my platform vendor do for me so that I don't have to manage all of that? I'd love it if you'd expand on that; give us a little compare and contrast. What's the same, what's different?

Yeah, so there are different aspects, I think, to developer experience. One thing that we talk about is "it just works." With Kubernetes, we spend a lot of time making sure it's hardened and works well, so you're not debugging and spending time on things that waste developer time. Beyond that, we also look at how we can build abstraction layers on top of it.
So we built a CLI tool called odo, which gives a streamlined developer experience for OpenShift. With it, the developer can really just focus on their application: they can deploy it and quickly iterate on changes before they commit to Git. Then they can have a similar experience in the browser with things like Eclipse Che, or CodeReady Workspaces, our commercial offering based on it, which actually uses the platform itself to do development, which is really super cool. So now you can have an IDE in the browser, and you can also have the workspace, with all your dependencies, everything you would normally have on your laptop, and you don't need to worry about it anymore; it's containerized and quickly spun up as a way to do development. It's really something enterprises enjoy, because they get quick satisfaction: they get the proprietary code off the laptop, they're using their container platform, and the application builds the same way it would build when deployed to production.

I mean, Steve, my background's on the infrastructure side, and the whole reason we have infrastructure is to be able to run our apps. The holy grail we've wanted is that my developers shouldn't need to think about the stuff underneath. We looked at virtualization, we look at containerization, and the nirvana of serverless, as they call it, is that I shouldn't have to think about that at all. How are we doing? Because at the end of the day, I talk to users who say, oh geez, I need to worry: what if something breaks, or I need to understand the security of my environment? What are you seeing and hearing from customers about their app development?

Yeah, so we hear different stories, like with twelve-factor apps: if you stay within certain parameters, you can have a lot of success. And that's still kind of true today.
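For reference, the odo inner-loop workflow Steve describes might look something like the sketch below. The command names follow odo's CLI of that era, but exact syntax and flags vary by version, the component type and names are illustrative, and a logged-in OpenShift cluster is assumed:

```shell
# Create a component from the local source directory
# ("nodejs" and "myapp" are hypothetical choices)
odo create nodejs myapp

# Build and run the local code in a container on the cluster
odo push

# Watch the source tree and re-push on change, for quick
# iteration before committing to Git
odo watch

# Expose the component with a URL
odo url create --port 8080
```

The point of the abstraction is that the developer never writes a Deployment or BuildConfig by hand; odo drives the platform objects behind the scenes.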
Serverless kind of takes that to the next level, where you can really just define a function, provide the code, and then things are really easy and you don't have to worry about various aspects. But even when you look at the various vendors, when you're working with different functions, it's still complex: oh, I need to provide the security, I need to make sure all these things are wired together, how do I log these things, how do I debug when things go wrong across this mesh? So it's getting better, but there's still a lot of work to do to continue to improve it, and you'll see a lot of innovation happening in that area, especially in the work that we're doing.

Yeah, what kind of give and take do you have there? Not only what is that community learning from you and the tools that you're providing them, but what are you getting back from that, other than advancing a project? In terms of expertise, in terms of understanding: maybe someone comes up with a new way to build a better mousetrap, an interesting idea, and you're like, wow, didn't think of that.

Yeah, and I think that's where the partnerships we've had with various companies come in: Google, obviously, starting out with Kubernetes, and then the Knative project last year. That really took a different way of looking at serverless and moving it forward, a different way of thinking about how we would do this on Kubernetes. You even abstract that API away, so it's just Knative and serverless, and Kubernetes becomes kind of an implementation detail behind it.
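To make that abstraction concrete: a minimal Knative Service manifest describes only the application, and Knative handles the Deployment, routing, revisions, and scale-to-zero underneath. This is a sketch; the image name is hypothetical and the exact `apiVersion` depends on the Knative release:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: example.com/demo/hello:latest  # hypothetical image
          env:
            - name: TARGET
              value: "world"
```

Nothing in the manifest names Kubernetes primitives like Deployments or Services of type LoadBalancer; that is the sense in which Kubernetes becomes an implementation detail.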
And so it's really interesting to see things like that, and then also the recent announcements with Microsoft and Azure Functions. People may be into the event sources there, and they want to make sure that the workloads they're running, the functions they're building, are running on good Kubernetes, and our Kubernetes is OpenShift, so it's really kind of completing the life cycle.

So I wonder if we could just step back, talking about Kubernetes and OpenShift specifically. You've got partnerships with Google, and they've got GKE and the Anthos stuff; you've got partnerships with Amazon, and they've got EKS. These things are not fully seamless and interoperable. I definitely hear some confusion in the marketplace: Kubernetes can run lots of places, but if you choose an implementation, that's your implementation, and you run that everywhere; I can't take all the various implementations and swap them for each other. So maybe you could help expand on that a little bit. What's the goal, where are we with the maturity here, and where do we need it to get? Because, boy, it definitely is a little bit complicated, at least from the seat that I sit in.

Yeah, so it's somewhat complex. I think it goes back to your point about the early days of Linux: you would say you'd have an application and it could run anywhere you have Linux, and that's kind of true. There are always certain security settings or packages you have enabled, and the same holds true in the Kubernetes world as well. You can lock it down a certain way, you can open it up a certain way, and so you see a lot of content that's delivered assuming certain privileges on the system, and other systems don't allow it.
And so I think more and more, through the API standardization and some of the work that's been done in conformance testing, it really helps people know that when they get their hands on an instance, it's a full-fledged Kubernetes, or at least that the part they care about most is working well. I see that continuing to evolve. I also see tools that abstract even more, like Knative, as I mentioned, for serverless workloads or functions themselves, and then even how CI/CD tooling works on top of that, natively understanding the platform and its requirements, to move those applications across the different systems. We have a lot of customers who run OpenShift Kubernetes as well as many other Kubernetes instances, so we have this requirement to stay conformant and let them make sure the workloads are portable, and that's an important part of moving it forward. I still think there's a lot of work to be done to make this a smoother process, but there are a lot of interesting things going on.

So any interesting trends with the workloads? It's one of the things we always look at: am I just taking the old workloads and running them in a new place, or are there new workloads? Anything jumping out at you from customers that you talk to?

Yeah, so, I know I've mentioned serverless multiple times; the whole idea of auto-scaling and only using resources when you need them is a big deal, so we see more and more of those small, single-purpose functions, all the way up to machine learning and big data, which just keeps growing on GPU resources. We've talked about running VMs in Kubernetes. When I first heard that, like four years ago, I laughed out loud, and then I realized, oh no, they're serious, this is something that happens. And yeah, it's becoming mainstream now.
So now pretty much everything fits within the same orchestrator for those workloads.

You're not laughing anymore, right?

No.

Because there are so many areas in which the concerns are certainly understandable. Security is one of those, and a lot of attention is being paid to automation these days, right, and there's a lot of opportunity there. Is there one area, or a couple of areas, where you'd say we have greener pastures, in terms of providing developers with more sophisticated or more effective tools than anywhere else, or an area where you could use that kind of a boost?

Yeah, I think the one thing that I see in this area is still a lot of fragmentation. I'm not sure I see a single way that things work. I'm seeing a lot of great work, like the Microsoft VS Code tooling pieces, and I'm citing that as an abstraction that brings certain things together. There's nice work going on with Microsoft on the Kubernetes plugin there, and we're collaborating with them to extend it for some of the OpenShift use cases. But that, I think, just moves us more toward meeting developers where they're at, and we'll continue to invest across the different sets of tools. The more I keep up with these lists of all the tools in the ecosystem, any time I present one, someone says, oh, I didn't know about those, but here are more that you didn't know about. So this just continues to grow, and people continue to innovate. I think it's exciting, because we continue to evolve it. So I don't think there's much in the way of narrowing down to a smaller set of things; I think it's going to continue to expand.

Speaking of expansion: at Microsoft Build yesterday, there was an announcement of, I believe it's KEDA, K-E-D-A, Azure Functions with OpenShift. Help us parse a little bit what that is.
Yeah, so what that's about is really taking Azure Functions and allowing those workloads to run on OpenShift, because they're targeted towards Kubernetes, and of course OpenShift is a Kubernetes distribution, so it allows that to happen. It also brings a unique autoscaler that allows workloads to run in a more serverless fashion. And it ties into some of the Azure event sources, like the message queues, the event bus, and Kafka. So now you can wire in your Azure pieces, and you can run your Azure Functions either on hosted Azure or on OpenShift.

Okay, and just to clarify, this is today separate from the Knative initiative that you were talking about earlier?

Yes, that's right. This touches on some of those points, though. The idea behind the project, which is an early dev preview that's now showing some progress, is to wire in some of the Knative serving pieces to allow running those applications on OpenShift, but also the Knative event sources, so you can take Kubernetes events, trigger your functions, and do some of these exciting things.

Yeah, can I ask you, you're doing sessions here at this show: how many of the people here are talking about serverless and looking at that bleeding edge, and are there other technologies where you find them spending a little bit more time in the tooling space?

It's a wide range. I'm always shocked by some of the customers who are on bleeding-edge Knative, like, oh, you know, we saw the 0.3 release out there and we'd really like this auto-scaling capability, because we're spending a lot of money running applications that aren't doing anything, so we like the better autoscaler that's out there. Others are really just trying to understand more about container technology. I was just talking to a gentleman after a session. He said, this is what we're trying to do.
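As an editorial illustration of how that event-driven autoscaling is expressed, early KEDA releases used a `ScaledObject` custom resource like the sketch below. Field names follow the initial `keda.k8s.io/v1alpha1` API and may differ in later versions; the deployment and queue names are hypothetical:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: queue-function
spec:
  scaleTargetRef:
    deploymentName: queue-function   # the function's Deployment
  minReplicaCount: 0                 # scale to zero when the queue is idle
  maxReplicaCount: 10
  triggers:
    - type: azure-queue              # an Azure Storage queue event source
      metadata:
        queueName: orders            # hypothetical queue name
        queueLength: "5"             # target messages per replica
```

The `minReplicaCount: 0` line is the "spending money running applications that aren't doing anything" fix Steve mentions: KEDA scales the function's pods to zero until messages arrive.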
We need to containerize applications; how do I build a CI/CD pipeline around it? So it's a wide range of things you see here.

You're certainly at the center of the inspiration, the innovation of the industry. I know it's an exciting place, and it's kind of something new every day for you, probably, right?

Oh, it is, yeah, especially when these big conference announcements come out.

Right, gear up, right?

Yeah, exactly.

Well, good job, Steve. Thank you for joining us here. We appreciate the time and wish you well down the road.

Thank you very much. I enjoyed being on.

You bet. Steve Speicher from Red Hat, joining us here for the first time on theCUBE. Good to have you, Steve. Good to have you with us as we continue our coverage from Boston at the Red Hat Summit.