Live from San Francisco, it's theCUBE. Covering Red Hat Summit 2018. Brought to you by Red Hat. Hey, welcome back everyone. We are here live in San Francisco at Moscone West. This is theCUBE's exclusive coverage of Red Hat Summit 2018. I'm John Furrier, co-host of theCUBE. This week, John Troyer, guest analyst. He's the co-founder of Tech Reckoning, an advisory and consulting firm around community. Our next guest is Matt Hicks, Senior Vice President of Engineering at Red Hat. He's going to give us all the features and specs of the roadmap and all the priorities. Thanks for coming on. Hey, thanks guys. So thanks for coming on. Looks like a successful show for you guys. Congratulations. Thank you. Paul Cormier was on earlier talking about some of the bets you guys made. And it's all open source, so those bets are all part of the community, with the community. But certainly there's a big shift happening. We're seeing it now with containers and Kubernetes really showing the way, giving customers a clear line of sight on where things are starting to fall in the stack. Obviously you've got infrastructure and application development all under a DevOps kind of concept. So congratulations. Thank you. It's been fun. I think Paul shared this: we started OpenShift in 2011. So it's pretty cool to be here now in 2018 and see how far that's come in terms of how many customers are using it and how successful they've been with it. So that's been great. You know, we always like to talk on theCUBE, we love talking to product people and engineers, because we always say the cloud is like an operating system. It's just all over the place. Decentralized networks, distributed computing. These are concepts that have been around. A lot of the Red Hat DNA comes from systems. You sell an operating system that you offer for free but also have services around it. It's a systems problem as we look at the cloud. Cloud economics.
So when you go look at some of the product and engineering priorities, how do you guys keep that going? What are some of the guiding principles that you have with your team? Obviously open source being an upstream project, but as you guys have to build this out in real time, what are some of the principles that you have? It's a great question. I'll try to cover it in two areas. I think the first for us is workload compatibility. Building new apps is great. It's fun, a lot of people can do it, and that's an exciting area, but customers also have to deal with apps they've built over 10-plus years, and so in everything we design, we try to make sure we can address both of those use cases. I think that's one of the reasons, we talked about OpenShift and how coupled it is to RHEL and Linux, it's for that. You can take anything that runs on RHEL and run it in a container on OpenShift, stateful or not stateful. That's one really key design principle. The other one, and this we've actually experienced ourselves, is the separation of roles and responsibilities. We run an OpenShift hosted environment publicly. I joke that anyone who gives me an email address, I'll run their code, and my operations team doesn't have to know what's inside of the containers. They have a really clear boundary, which is: make the infrastructure infinitely available for them and know that you can run anything on that environment. So that separation, when customers talk about DevOps and getting to Agile, I think that's almost as critical as the technology itself, letting them be able to do that. That's been a real theme here at the show. I've certainly noticed, sure there were technology demos up on stage, but also a lot of talk about culture, about process and planning maybe, or helping people. The role of Red Hat with OpenShift and the full stack all the way down is bigger now than it was with just Linux.
So for you and your team, I mean you're in engineering, as you work with the open source communities, surely it seems like you're having to deal with a much broader scope of responsibilities. Yeah, I started at Red Hat when it was just Linux, and part of it is Linux is big and it's complex, and that in and of itself is a pretty broad community, but these days we get to work with customers that are transforming their business, and that touches everything from how they're organizationally structured, to how we make teams work together, to how I make the developers happy with their rate of innovation and the security team still comfortable with what they're changing. And I love it. We're open source at our core, so I feel like I'm an open source guy, I always have been, and you're seeing open source drive a much wider scope of change than I ever have before. Let's talk about functionality product-wise, because again, we interviewed Jim Whitehurst yesterday and we had Denise Dumas on as well on the RHEL side, and we talked about security. These things are going on, and with OpenShift and with Kubernetes and containers, it makes your job harder. You've got to do more, right? So talk about what that means for you guys and how it translates to the customer impact, because it's more complicated. There are abstraction layers that are abstracting away the complexity, and the complexity is not going away, it's just being abstracted away. This is harder on engineering. How are you handling that and what's your approach? So I've looked at it as a great opportunity for us. I've been working with Linux for a long time and I was a big fan when we introduced SELinux, and for a long time, moving from traditional Linux hosting, getting operations teams to want to turn on SELinux has been a really tough climb. It'll break things, so they're not comfortable with it. They know they need that layer of security, but turning it on has been a challenge.
Then go to cgroups or different namespaces and they're not going to get there. With OpenShift, the vast majority of OpenShift deployments under the covers run with SELinux on by default, with customized policies, and everything's in control groups. Containers use Linux namespaces. So you get a level of workload isolation that was unimaginable five, 10 years ago, and I love that aspect, because you start with one aspect of security and you get much, much stronger. So it's our ability to, we know all the levers and knobs in Linux itself, and we get to turn them all and pull them all. I want to push on a spot, and it's not an insult to you guys at all, but we've heard some hallway conversations, just in a joking way, because everyone loves Linux, open source, we all love that. But they say nothing's perfect either; no software actually runs all the time, great. So one customer said, I won't say the name, when OpenShift fails, it fails big. Meaning it's very reliable, but it's taking on a lot of heavy lifting. There are a lot of things going on there because it's Linux; when it breaks, it breaks a lot, and you're trying to avoid that. But my point is that these are important components. How do you make that completely bulletproof? How do you guys stay on top of it so that things don't break? I'm not saying they do all the time, just saying his comment was more of an order of magnitude thing. Yeah, I think it's a couple of things. So we invested in OpenShift Online and OpenShift Dedicated, and those were new for Red Hat in running hosted environments, so we could learn a lot of the nuances. OpenShift Online is roughly a single environment. How do we make that never break as a whole? A user might do something in their app and make their app break; how do we not let the whole break? The second challenge I think we've hit is just skills in the market: it's not necessarily an easy system. There are lots of moving pieces there.
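The isolation Hicks describes rests partly on SELinux multi-category security: OpenShift assigns each project a distinct set of MCS categories so that even containers running as the same type cannot touch each other's files. As a rough sketch of that idea (the label values below are invented for illustration, not taken from any real system):

```python
# Sketch: parse an SELinux context of the form user:role:type:level,
# where the MCS level (e.g. "s0:c1,c2") carries per-project categories.
# Labels here are illustrative only.

def parse_selinux_context(context: str) -> dict:
    """Split an SELinux context string into its component fields."""
    user, role, type_, level = context.split(":", 3)
    sensitivity, _, cats = level.partition(":")
    categories = cats.split(",") if cats else []
    return {"user": user, "role": role, "type": type_,
            "sensitivity": sensitivity, "categories": categories}

def same_mcs_domain(ctx_a: dict, ctx_b: dict) -> bool:
    """Two workloads can share data only if their MCS category sets match."""
    return ctx_a["categories"] == ctx_b["categories"]

a = parse_selinux_context("system_u:system_r:container_t:s0:c1,c2")
b = parse_selinux_context("system_u:system_r:container_t:s0:c3,c4")
print(same_mcs_domain(a, b))  # False: distinct categories keep them isolated
```

The point of the sketch is only the mechanism: identical types, different categories, so the kernel denies cross-container access without anyone writing per-app policy.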
The deal with Azure and the partnership there, having managed service offerings, I think is really going to help users get into: I have a highly available environment, I don't have to worry about etcd replication of those components, but I can still get the benefits. And then I think over time, as people learn the technology and know how to utilize it well, we'll see less and less of the "it catastrophically failed because I didn't know I could make it highly available." Those are always painful to me. That's education. Yeah, yep. So Matt, there's a clear conversation here, very clear roles and responsibilities, even in the stack. I think even as recently as a year or two ago, people were having conversations about the role of OpenStack versus Kubernetes, and you were getting kind of weird, like what's on top of what, and even in terms of other parts of the stack. I mean here, it's very clear. OpenStack is about infrastructure, OpenShift on top of it, and even in terms of virtualization, containers versus VMs, the conversation this year seems more clear. As an engineer and an engineering leader, did the engineering teams roll their eyes, going, well, we knew how this was going to work out all along, or did you all also kind of come along on that journey the last couple of years? I think seeing the customer use cases refine a little bit while education builds has been great. We're engineers; we like clear separation and what each product's good at. So for us it's fantastic. OpenStack is great at managing metal. One of my favorite demonstrations was using OpenStack Director to boot bare-metal machines, put OSes on them, and lay OpenShift on top, and being able to share network and storage planes with OpenStack. Those things are great for me as an engineering lead, because we're doing that once, as well as we can, and it's nice in engineering if you get to optimize each side of the stack.
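The "catastrophically failed because I didn't know I could make it highly available" failure mode usually comes down to quorum math: an etcd cluster keeps serving only while a majority of members survive, so tolerating f failures requires 2f+1 members. A small sketch of that arithmetic (my own illustration, not Red Hat code):

```python
# Quorum arithmetic behind etcd-backed control planes like OpenShift's.

def quorum(members: int) -> int:
    """Majority of members needed for the cluster to keep serving writes."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while a majority still survives."""
    return members - quorum(members)

for n in (1, 3, 5):
    print(f"{n} members: quorum {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
# A single-member "cluster" tolerates zero failures, which is exactly how an
# environment that was never made highly available fails all at once.
```

Note that even numbers buy nothing: 4 members still tolerate only 1 failure, which is why 3 and 5 are the usual deployment sizes.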
So I think I have seen, as customers have done more with OpenStack and more with OpenShift, they know which product they want to use for what, and that has helped us accelerate the engineering work towards it. You mentioned skills, skills gaps and skills in general. How's the hiring going? Is there a new kind of DevOps rockstar out there? Is there a new kind of profile? Are there pieces of the stack that you want certain skills for? Is there generalism? Are the roles in engineering changing? Can you just add some color to that conversation, because we're talking about engineering now. It used to be called software engineering when I graduated college, and you became a developer. I don't know which one's better, but to me there's real engineering going on, which is using software development techniques. So what's the skills situation? For us, I think it is nice: you're seeing a lot of gravitation to Linux at the host level, and Kubernetes has helped at the distributed-system level, so obviously skills there play pretty well in general. I would say what we have seen is a stronger increase in having operational skills as well as development skills, and it's a spectrum. You're still going to have operational experts and algorithmic experts, but the blended role, where you know to some extent what it takes to run an application in production and you know something about both infrastructure and development, I certainly look for that on our teams. That's where I've seen customers struggle for years and years: in the handoff and the shift. Everyone can write functional apps; they usually struggle getting them into production, and it's really neither team's fault, it's in that translation, and these platforms help bridge that. People that have some skills on either side have become incredibly valuable in that. So that's where the DevOps action is, right, the overlay. It really is, yeah.
So what about the networking growth with DevOps? DevOps has always been infrastructure as code, at minimum; that's what I always talk about. There's always a network that gets beat on the most: I need better latency. And so networking, software-defined networking, is not a new concept; software-defined data centers are out there. What's new in networking that you could point to that's part of this new wave? Two geeky things that might not have been noticed. One is the work we've done on Ansible networking, which has been stunningly popular to me, and that was just the simplicity of Ansible: it just needs SSH and a minimal set of dependencies. Most switches out there have SSH running, and automation of switches and the actual gear itself was surprisingly not unified, and Ansible was able to fit that niche where you could remotely configure switches. That has grown and exploded, because if you think of "I'm going to do a DevOps workflow, but now I need to actually change routing or VLANs," you're often talking to switches, and being able to couple that in has been fun to watch. So I've loved that aspect. The other portion, when we combine OpenShift on OpenStack, is the Kuryr work, which we've talked about some. OpenShift is often described as consuming infrastructure that OpenStack provides, and the one exception was usually the networking tier; it's like we have to run an overlay network on it. When we run OpenShift on OpenStack, it can actually utilize OpenStack's networking to provide that instead of doing its own overlay. That is critical. So the policy comes in handy there. Is that fewer configurations? Where's the benefit? Both. On network topology, where you'd otherwise have two teams building different structures that may collide in the night, it goes from two teams down to one. And then the second is just the network access controls and isolation: it's done once.
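The switch automation Hicks mentions is, at its core, rendering desired state into device CLI commands and pushing them over SSH. A toy sketch of the rendering half, with a hypothetical IOS-like command syntax (this is my illustration of the pattern, not Ansible's actual module code):

```python
# Toy sketch of desired-state -> switch CLI rendering, the kind of step an
# Ansible networking module performs before pushing config over SSH.
# Command syntax below is hypothetical (loosely IOS-like).

def vlan_commands(desired: dict, current: dict) -> list:
    """Emit only the commands needed to converge current VLANs to desired.

    Both arguments map VLAN id -> VLAN name.
    """
    cmds = []
    for vlan_id in sorted(desired):
        if current.get(vlan_id) != desired[vlan_id]:
            cmds += [f"vlan {vlan_id}", f" name {desired[vlan_id]}"]
    for vlan_id in sorted(set(current) - set(desired)):
        cmds.append(f"no vlan {vlan_id}")  # prune VLANs no longer declared
    return cmds

print(vlan_commands({10: "web", 20: "db"}, {10: "web", 30: "legacy"}))
# -> ['vlan 20', ' name db', 'no vlan 30']
```

The idempotent diff (change only what differs, then prune) is the property that makes re-running the same playbook against a fleet of switches safe.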
It's been nice for me on the engineering side, where we put a ton of effort into the OpenStack community and a ton of effort into Kubernetes and the OpenShift communities, and we're able to pretty nicely combine those. We know them both really well. So take us through some inside baseball at Red Hat. What's going on internally within your group? I want to probe on developer and software engineer productivity. If, quote, DevOps works, the test is freeing up their time from doing mundane tasks, and you've got cool things like you said, and the network thing is pretty positive. This is going to free up some intellectual capital in engineering. So okay, if that's true, I'm assuming it's true. If it's not, then say it's not true. But it sounds like it's probably going to be true for you. What are you guys working on? What's next? Can you share something? Because you guys are doing your own thing, you're doing your own software. Is that intellectual capital being freed up on the developer side? Are they doing more programming? Are you seeing more creativity? What are they doing with that free time, those extra intellectual cycles? Like Matt's team barely has to work anymore. They're clipping coupons at the beach, you know. That's right. It's all running, we're busy. So a good creative example, and this was I think the second demo we showed: Red Hat Insights has been in the market for a while, and that was our "can we glean enough information from systems to get ahead of a support issue?" And this year we showed that it's not just known fixes, where we match to a knowledge base article, but can we infer fixes from peer analysis and machine-learning-type techniques? That's a classic example where we used the creativity and free time; that stack internally runs on OpenShift, running on OpenStack, using Red Hat Storage, and we're applying TensorFlow and other capabilities to do that.
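The "known fixes" half of the Insights flow Hicks describes, matching facts collected from a system against rules tied to remediation articles, can be caricatured as a rule engine. The rules, fact names, and article ids below are all invented for illustration; the real service's data model is not public in this transcript:

```python
# Caricature of an Insights-style known-issue matcher: collected system
# facts are checked against rules that link to remediation articles.
# Rule contents, fact keys, and KB ids here are invented for illustration.

RULES = [
    {"id": "KB-0001",
     "match": lambda f: f.get("selinux") == "disabled",
     "advice": "SELinux is disabled; workloads lose an isolation layer."},
    {"id": "KB-0002",
     "match": lambda f: f.get("etcd_members", 0) < 3,
     "advice": "etcd cluster too small to survive a member failure."},
]

def analyze(facts: dict) -> list:
    """Return (rule id, advice) pairs for every rule the facts trigger."""
    return [(r["id"], r["advice"]) for r in RULES if r["match"](facts)]

hits = analyze({"selinux": "disabled", "etcd_members": 3})
print([rule_id for rule_id, _ in hits])  # -> ['KB-0001']
```

The peer-analysis piece mentioned in the interview would sit beside this: instead of hand-written predicates, it would flag systems whose facts diverge from similar systems in the fleet.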
That was probably my favorite example at Summit, where if we weren't getting more efficient at what we worked on, we wouldn't have been able to stand up that stack ourselves, much less execute on it and show it live at Summit, doing the analysis across a hybrid cloud. But this is the whole point of DevOps. This is the whole purpose: being highly productive, using those intellectual cycles to build stuff and solve problems. Yeah, absolutely. Rather than provisioning servers or networks. That's right, yeah. Awesome. Well, hey, thanks for coming on theCUBE, really appreciate it. Thank you, guys. What are the priorities for you guys this year? What's the focus? Share your plans for the year. Yeah, I think it's similar to the last thing we showed today. We really want to make customers feel like they can deploy hybrid cloud, whether it's compute, applications, the services they need, down to storage, and it works, whether or not they're on premise. They know we're going to have the best combination we can. This year is about staying ahead of people on that path and making sure they're successful with it. We'll see you guys at OpenStack Summit in Vancouver. Thanks for coming on, Matt Hicks. Thanks a lot, guys. Senior Vice President of Engineering, Red Hat. I'm John Furrier, with John Troyer. Stay with us. We're on day three of three days of live coverage here in San Francisco at Red Hat Summit 2018. Stay with us. We'll be right back after this short break.