Live from Boston, Massachusetts, it's theCUBE. Covering Red Hat Summit 2019, brought to you by Red Hat.

Good to have you back here on theCUBE as we continue our coverage live at Red Hat Summit 2019. It's day three of our coverage, we've been with you since Tuesday, and now, just fresh off the keynote stage, joining Stu Miniman and myself: Chris Wright, the VP and Chief Technology Officer at Red Hat. Good job up there, Chris. Thanks for being with us this morning.

Yeah, thank you, glad to be here.

Great. Among your central themes, you talked about this new cycle of innovation, those components, and how they're integrating to create all these great opportunities. So if you would, just share with those at home who didn't have an opportunity to see the keynote this morning what you were talking about, how these pieces play together, and where that lies with Red Hat.

Yeah, you bet. So I think an important first concept is that a lot of what we're doing is laying a foundation, or a platform. I mean, Red Hat's focus is in the platform space. So we think of it as building this platform upon which you build and innovate. And what we're seeing is that a critical part of the future is data. So we're calling it data-centric; it's the data-centric economy. Along with that is machine learning, so all the intelligence that comes in: what are you divining, what insights are you gathering from that data? It introduces some interesting challenges around data and privacy, and what do we do with that data? I mean, we're all personally aware of this. You can see the Cambridge Analytica stuff, and we all have concerns about our own data. When you combine all of this together, techniques for how we can create insights from data without compromising privacy, we're really pushing the envelope into fully distributed systems, edge deployments, data coming from everywhere, and the insights that go along with that. So it's a really exciting time, built on a consistent platform like OpenShift.
So Chris, I always love getting to dig in with you, because that big trend of distributed systems is something we've been working at for quite a long time, but we fully agree: you said data is at the center of everything, and with that role of an even more distributed system, the multi-cloud world, customers have their stuff everywhere, and getting their arms around that, managing it, and being able to leverage and take advantage of that data is super challenging. So help us understand some of the areas where Red Hat and the communities are looking to solve those problems. Where are we, what's going well, and what's still left to work on?

Well, there's a couple of different aspects. Number one, we're building these big, complex systems. Distributed systems are challenging; distributed systems engineers are solving really hard problems, and we have to make that accessible to everybody, to operations teams. It's one of the things I think the cloud taught us: when you sort of outsource your operations to somebody else, you get this encapsulated operational excellence. We need to bring that to wherever your workloads are running. And so we talked a lot about AIOps: how you harness the value of the data that's coming out of this complex infrastructure, feed it through models, gain insights, and then predict. Really, ultimately, we're looking at autonomic computing, how we can create autonomous clouds, things that are operating themselves as much as possible with minimal human intervention, so we get massive scale. I think that's one of the key pieces. The other one, really talking about a different audience, is the developers. Developers are trying to incorporate similar types of intelligence into their applications. You're making recommendations, you're trying to personalize applications for end users. They need easy access to that data. They need easy access to trained models. So how do we do that?
How do we make that challenging, data-scientist-centric workflow accessible to developers?

Yeah, just some of the challenges out there: I think about 10, 15 years ago, you talked to people and it was like, well, I had my central source of truth and it was a database. You talk to most companies now and it's like, well, I've got at least a dozen different databases and all my different flavors of them, whether they're in the cloud or whether I have them in my environment. Things like AIOps are trying to help people get a handle on them. You talked a little bit in your keynote about some of the partners you're working with. So how do you bring these together and simplify them when they're getting even more and more fragmented?

Well, it's part of the challenge of innovation. I mean, I think there's a natural cycle. Creativity spawns new ideas; new ideas are encapsulated in projects. So there's a wave of expansion in any kind of new technology timeframe, and then ultimately you see some contraction as the clear winners and the best ideas emerge. In the container orchestration space, Kubernetes is a great example of that. We had a lot of proliferation of different ways of doing it; today we're consolidating as an industry around Kubernetes. So what we're doing is building a platform, building a rich ecosystem around that platform, and bringing in our partners who have specific solutions, whether it's the AIOps side of the house, talking to the operations teams, or whether it's giving developers easy access to data and trained models through some partners that we had today, like Perceptilabs and H2O AI. Bringing these partnerships to a common platform, I think, is a critical part of helping the industry move forward, and ultimately we'll see where these best-of-breed tools come into play.

Can you maybe help a little bit in terms of practical application?
You've got open source, where you've got this community development going on, and then people customize based on their individual needs. Oh well, great, right? How does the inverse happen, where somebody does some customization, comes up with a revelation of some kind, and that flows back to the general community? I mean, can you think of a time where maybe something, I'm thinking like the imaging at Boston Children's Hospital we saw, or HCA, actually related to another industry somehow and gave them an aha moment they maybe weren't expecting, where open source was the driver of that?

Yeah, well, I think what we showed today were some examples where, if you distill it down to the core, there are some common patterns. There's data, there's streaming data, there's the data processing, and there's a connection of that processed data, or a trained model, to an application. So we've been building an open source project called Open Data Hub, where we can bring people together to collaborate on what tools need to be in this kind of framework or stack. And as we do that, we're talking to banks, who are looking at anti-money laundering and fraud detection, and we're talking to these hospitals that are looking at completely different use cases, like HCA Healthcare, which is using data to reduce the amount of time nurses need to spend gathering information from patients and to clearly identify sepsis concerns. Totally different applications, similar framework. So getting that industry-level collaboration, I think, is the key, and having common platforms, common tools, and a place to rally around these bigger problems is exactly how we do that through open source.
So, Linux sits at an interesting place in the stack. As you talked about, we want commonality and everything like that, but we're actually at a time where there's a proliferation of what's happening at the hardware level. I'm an infrastructure and hardware guy by background, and it was like, oh, I thought we were going to homogenize everything and standardize everything, and it's like, oh well, you're showing off cool NVIDIA stuff, and when we're doing all these AI pieces there are all these new things in everything. You work from the mainframe through the latest ARM processors. Give us a little insight as to how your team's geeking out and making sure they provide that commonality, yet can take advantage of some of the cool, awesome stuff that's out there enabling that next wave of innovation.

Yeah, so I share that infrastructure geekiness with you, so I'm so stoked that we're in this cycle of hardware innovation. I'll say something that maybe sounds controversial. If we go back in time, just five years or a little more, the focus was around cloud computing and bringing a massive number of apps to the cloud, and the cloud had kind of a t-shirt-size, small-medium-large view of the world of compute. It created this notion that compute is homogeneous. It's a lie. If you go today to a cloud provider and count the number of different machine types, or instance types, they have, it's not just three. It's a big number, and those are all specialized: it's for I/O throughput, it's for storage acceleration, it's big memory, it's all these different use cases that are required for the full set of applications. Maybe you get 80% in a common core, but there's a whole bunch of specific use cases that require performance optimizations that are unique. And what we're seeing, I think, is that Moore's law and the laws of physics are colliding a little bit, and the way to get increased acceleration is through specialized hardware.
So we see things like TPUs from Google, we see Intel doing DL Boost, we've got GPUs and even FPGAs, and the operating system is there to give a consistent application runtime while enabling all those hardware components and bringing it all together, so the applications can leverage the performance acceleration without having to be tied directly to it.

Yeah, and I think you wrote about that, right, in one of your blog posts: how hardware plays this hugely important role. You also talked about innovation and change happening incrementally, and that's not how we tend to think about it. We think about big bangs, right? But you pointed out that in open source it really is step by step by step. We think about disruption as being very dramatic, and there's nothing sexy about step by step, yet that's how we get to disruption.

See, I kind of hate "innovation" and "disruption." They're buzzwords. On the one hand, that's what captures attention; on the other, it's not necessarily clear what they mean. I like the idea that in open source we do everyday incremental improvements, and it's the culmination of all these improvements over time that unlocks new opportunities. People ask me all the time, where's the future? What are we doing? What's going on? You know, we're kind of doing the same thing we've been doing for a long time. Think about microservices as a way to encapsulate functionality and share and reuse it with other developers. Well, object-oriented programming, decades ago, was really trying to establish that same capability for developers. So the technologies change. We're building on our history. We're always incrementally improving. You bring it all together, and yes, occasionally you can apply that in a business case that totally disrupts an industry and changes the game. But I really want to encourage people to think about: what are the incremental changes you can make to create something fundamentally new?
All right, I need to poke at that a little bit, Chris, because there's one thing: I look back at my career, a decade or two decades, and we used to talk about things like intelligence and automation. Those have been around my entire career.

Yeah.

When you look at today, though, you talk about intelligence, you talk about automation, and it's not what we were doing then; it's a matter of degree, what we're doing now. If we had seen it back then, it would have been like, oh my gosh, science fiction's here. So we sometimes lose sight, when we're going step by step, that some things are making step-function improvements. So, love your opinion there.

Yeah, well, I think it's a combination. I talk about the perpetual pursuit of excellence. You pick a field; we're talking about management, we've got data and how you apply that data. We've been working towards autonomic computing for decades. The concepts and research are old. The details and the technologies and the tools that we have today are quite different. But I'm not sure that's always a major step function. I think part of it is this incremental change. Look at the amount of processing power in a GPU today; that's a problem that industry has been working on for quite a long time. At some point we realized, hey, the vector processing capabilities in a GPU really suit the machine learning matrix-multiplication real-world use case. So that was a fundamental shift, which unlocked a whole bunch of opportunity in terms of how we harness data and turn it into knowledge.

Yeah, so are there any areas that you look at now, that we've been working at, where you feel we're getting to those tipping points, where the waves of technology are coming together to really enable some massive change?
I do think our ability to move data around, to generate data in the first place, move it around efficiently, have access to it from a processing capability, and turn it into a model, has so fundamentally changed in the past couple of decades that we're tapping into the next generation of what's possible. Having this holy grail of a self-healing, self-optimizing, self-driving cluster is not as science fiction as it felt 20 years ago.

It's kind of exciting when you talk about it. You've been there in the past and the present, but there's very much a place in the future, right? And what does that future look like? Just from that AI perspective, it's a little scary sometimes, too, to some people. So how are you going about, I guess, working with your partners to bring them along and accept certain notions that maybe five, six years ago might have been a little tough to swallow or to feel comfortable with?

Yeah, well, there are a couple of different dimensions there. One is finding tasks that computers are great at that augment tasks that humans are great at. The example we had today, I love that example, was: let's have computers crunch numbers, and nurses do what they do best, which is provide care and empathy for the patients. So it's not taking the nurses' job away. In fact, it's taking away the part that is drudgery. It's computation, and computers are great at that. I forget, what was the term...

We call it machine-enhanced human intelligence.

That's right. And there are a couple of different ways of looking at that. But the idea is that we're not necessarily trying to eliminate humans from the loop; we're trying to get humans to do what they do best and take away the drudgery that computers are awesome at: repetitive tasks, big number crunching. I think that's one piece. The other piece is really from that developer point of view: how do you make it easily accessible? And then the one step that needs to come after that is understanding the black box.
What happens inside the machine learning model? How is it creating the insights it's creating? There's definitely work to be done there, and work already underway, to help understand what's really behind the insights, so that we don't just blindly trust them. That can create problems when the data we introduce is itself already biased, and we assume that because we gave data to a computer, which is seemingly unbiased, it's going to give us an unbiased result, right? Garbage in, garbage out. So we've got to be really thoughtful about what the models are and what the data is that we're feeding them.

Makes perfect sense. Thanks for the time. Good job on the keynote stage again this morning. I know you've got a busy afternoon scheduled as well, so we'll cut you loose, but thank you again. Always good to see you.

Yeah, always enjoy being here. Thanks, guys.

Chris Wright joining us from Red Hat. Back with more from Red Hat Summit 2019. You're watching live here on theCUBE.