Live from Boston, Massachusetts, extracting the signal from the noise, it's theCUBE, covering Red Hat Summit 2015. Brought to you by Red Hat. Now your hosts, Dave Vellante and Stu Miniman. Downtown Boston, Stu, Back Bay at the Red Hat Summit. Stu and I are really pleased to be at home for a change. Chris Wright is here as the chief technologist at Red Hat, CUBE alum. Just had Chris on at OpenStack Summit. You guys were up there in Vancouver. Welcome back. Thank you. So, of course, different venue here. You guys obviously heavily involved in OpenStack. You're more than heavily involved at this event. So, congratulations. It's really exciting, a lot of great discussion going on. What's your take on the event so far? Ah, it's great to see the familiar faces and the new people coming in to check out what we're up to. This time around, I think we have a lot of interesting things to talk about with some of our newer products and how we're bringing products together to create solutions for customers. Historically, you look at a lot of the things we've done, we've invested in technologies, maybe as a point solution. Here we're really working together with our customers to identify how we pull all of our technology pieces together and provide comprehensive solutions for their data center. So, chief technologist, you were telling me off camera, your role is very broad, but you also go quite deep on specific topics. I suspect that you've probably forgotten more than I know on most of those broad topics, but we can go deep. But talk about your role a little bit at Red Hat. Well, my role is to help define our strategic technology vision. So, we're working on understanding where technology intersects with our product roadmaps and our customer needs, and really looking forward. Where are we not? Where do we need to be going?
What are the issues that our customers are grappling with today that we don't have great solutions for, and where are open source technologies emerging and bubbling up to the surface as great solutions to those problems? So, dial back a little bit. You know, coming out of the downturn, 2010, let's say five years ago, what was the conversation like? What were the trends that you were observing? I mean, obviously the proprietary versus open discussion you had a long, long time ago, but what was the conversation like just five years ago? The interesting thing about that time, especially because of the downturn, was that it was very much about cost of ownership. So, we came with a great feature-rich solution that was cost competitive with the proprietary solutions, and that cost edge, with the economic squeeze, was something that really helped push us along. It gave us a new toehold with different customers, and today the conversation is really shifting away from just that commoditization and total cost of ownership to how do you operationalize big complex systems. And you heard a little bit of that today, or yesterday in Jim's keynote, about the change in open source technology from a commoditization play to a place where real innovation is happening, and that's what's so exciting about this event in particular, where we're starting to showcase the innovation in the open source world and how we can bring that to products. So, innovation at scale. Innovation at scale, yeah. Stu and I had the pleasure of being in London with the guys at MIT, talking about the second machine age and how machines have always replaced humans. Now computers are replacing humans in cognitive functions, and we start thinking about the infrastructure for that next generation.
We're always talking cloud, mobile, social, but what does that infrastructure look like for the next generation of apps? How would you describe that? Well, for one thing it's highly distributed. It's designed around scalability. Everybody wants to operate their systems at the same large web scale that the massive web companies are doing these days, and to get to that distributed large-scale system you're building a system that's expecting failure all the time. You're building systems that can route around the failure. They're redundant and localized, but it's not this big massive huge redundant system that you have two of. You've got a large number of nodes. You're expecting things to change dynamically. You need to provision your systems to adapt to the current use cases, or the ways that you need to allocate your resources are changing quickly, and that's quite a bit different. So where we talked about commoditization five years ago, today we're talking about how quickly you can introduce new services into your data center. So that implies a lot of automation, more than a lot, I mean a highly automated environment. Are enterprises, in your opinion, ready for that, to give up the knobs and the bells and the whistles and the control that they have physically? I think it's a trade-off, right? So they have to see the benefit, and the benefit is how many IT people in your organization does it take to manage a number of servers? And when you start to see the multiplying effect of adding APIs and using programs really to operate your infrastructure, that's compelling. That really changes the discussion, and the CIOs are seeing the efficiency of: I used to have a 10-servers-per-admin ratio, and now I have a 100 or a thousand servers per admin, and that's hard to ignore. So the idea of infrastructure as code, and, as John Furrier likes to say, data as code, actually using the data to predict what's going to happen in the infrastructure. It's awesome.
That's kind of an emerging trend, so it's not really well established in the industry, but first we had to operationalize this complex system. So how do you automate the infrastructure? As the infrastructure grows and we're always pulling analytics data out of the infrastructure, the next thing we need to do is, I would literally call it, automate your automation. It's the learning stage: in a large-scale infrastructure, you can't interact with all of the components one by one as faults occur. You need to have the system pay attention to that and potentially learn from what's happening within the- So self-learning systems? Yeah. We've never been closer. So Chris, I'm wondering if we dig down a level deeper. We've seen the Linux operational model, and Linux specifically, starting to spread throughout other parts of the infrastructure. Red Hat has gone into some of the storage stack, some of the networking stack. You've got companies in both of those spaces, and we're talking about software-defined storage and networking and operational models. Can you walk us through what you've seen in that space the last couple of years? Well, software defined is a big component. So it's putting an API on something, so it's now available programmatically as a service. The underpinnings of Linux as a mature technology and a consistent way to do your management are, I think, really critical here, because now you have systems that, whether they're providing compute, storage or even networking, have the same common building blocks. And so if you're using something like Puppet or Chef for configuration management, you can use that across all of your infrastructure as we really continue to grow these storage and networking stacks outside of just the traditional compute side.
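The idea of one consistent, declarative management layer across compute, storage, and networking nodes, the Puppet/Chef-style pattern Chris mentions, can be sketched in a few lines. This is a minimal, hypothetical illustration, not any actual Red Hat or Puppet tooling; the node names, packages, and services here are invented for the example.

```python
# Sketch of declarative configuration management: one desired-state
# description applied uniformly to compute, storage, and networking
# nodes. All node/package/service names are hypothetical.

DESIRED_STATE = {
    "compute-01": {"packages": ["qemu-kvm"], "services": ["libvirtd"]},
    "storage-01": {"packages": ["glusterfs-server"], "services": ["glusterd"]},
    "network-01": {"packages": ["openvswitch"], "services": ["ovs-vswitchd"]},
}

def reconcile(node: str, actual: dict) -> list:
    """Compare a node's actual state to the desired state and return
    the actions needed to converge. Running it twice is a no-op the
    second time, which is what makes the pattern idempotent."""
    desired = DESIRED_STATE[node]
    actions = []
    for pkg in desired["packages"]:
        if pkg not in actual.get("packages", []):
            actions.append(f"install {pkg}")
    for svc in desired["services"]:
        if svc not in actual.get("running", []):
            actions.append(f"start {svc}")
    return actions  # an empty list means the node is already converged
```

The point of the sketch is that the same reconcile loop works regardless of whether the node provides compute, storage, or networking, which is exactly the common-building-blocks argument.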
So one of the big challenges we've had, especially when you look at storage and networking, is things move glacially slowly. I know when I do my roadmap presentations talking about networking, I draw decades out there. Just moving from one speed bump, you know, from one gig to 10 gig, we've been doing it for over a decade, and we've still got a lot of work to do there. So how do we move things forward? How do we push them? People tend to deploy something and never change it. And wait, that's just changing the components; changing the mindset and the people, we know that takes a lot of time too. Cultural shift is the really challenging side of this. So you talk about something like DevOps: you can discuss the tooling associated with DevOps, but it's the cultural shift in an organization that makes it happen, that's the challenging part. It's the same thing here. With the networking, I think one of the key differences is historically we had an industry that was focused on producing standards through a fairly long process and then providing multiple implementations of that same standard, all from proprietary, preferably from a vendor's point of view, vertically integrated stacks. As we move to a more open world where we can focus less on the standardization process and more on common code as a way to build a de facto standard, that helps us accelerate the process, and if we're building from common building blocks like Linux, where we already know how to operationalize it, we're giving ourselves a leg up to help move this forward with the industry. Okay, so can you speak a little bit to, I know NFV is an area we want to dig into. You know, a lot of the problem we had with SDN is it wasn't quite well defined.
I think NFV has a little bit more of a definition; it tends to be the telco service providers, certain specific application-focused deployments, as opposed to SDN, which was more of an operational model. Maybe give us your take on what NFV is and where Red Hat's play is there? Well, first of all, the two are interestingly related, and in the beginning it was often confusing which is which. NFV is really the service providers' effort to take appliances, function-specific hardware appliances, and move that network functionality into software that you could run anywhere on a commoditized compute, storage, network fabric in a cloud. SDN is a networking operational model, as you described it, that allows you to steer traffic through some infrastructure. With NFV, you've historically got boxes that are positioned in well-defined ways in your data center, so you can sort of cable your flows to a certain degree. Here you've got a very dynamic environment: your functions are moving around, being instantiated and moved and dropped quickly within your cloud environment, and to steer traffic through that dynamic environment takes something that's highly automated. So there's the SDN controller component. For us, NFV is first and foremost a platform to run these new applications. So we're providing the infrastructure for this platform. It's OpenStack, it's Linux, it's KVM, it's all these low-level building blocks to create a runtime environment for these virtual network functions. And then it's also an SDN controller. We work with a variety of industry partners, and then we're also focusing on some upstream projects around the SDN space like OpenDaylight. Okay, can you speak a little bit to, I guess, the requirements? You think of the enterprise: I needed mission critical, so I built it highly available.
When I move to a more distributed, software-based world, I still need some of those things, but I kind of feel like the hyperscale model is getting a little bit more enterprise-y and the enterprise is slowly moving along to get a little bit more distributed. What's your viewpoint on that? Well, I think one of the key things to think about is, if you're coming from a telco background, you're thinking in terms of five nines, six nines; it's about the availability of potentially a specific piece of hardware. As you scale out the system, you have to change how you think about it. There are faults happening at all points in time, somewhere in the system. You can't consider a single system as a five-nines or six-nines system the same way we did historically. We need to look at service-level availability. How available is the service? And that can be done through redundancy of all the different compute nodes in the infrastructure, redundancy at the application level, and awareness of how to load balance and steer across a number of different instances of the same service in your network. Pretty big shift, and that's something that will require the service providers to also change how they view their definition of reliability. There are also critical performance characteristics that you have when you're processing packets. It's not just about a cloud environment where packet processing looks like a web application: a few packets come in, you do some interesting work, some lookup in a database, you send some packets back out. If your application's job is to process packets exclusively, that's a very different workload, a very different performance profile on the platform, and we're spending a lot of our time optimizing a stack like OpenStack to host packet processing applications in the most efficient way possible.
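The shift Chris describes, from per-box five-nines to service-level availability, can be sketched as a toy load balancer that measures availability at the service level and routes around failed instances. This is a minimal illustration under assumed instance names, not any particular product's behavior.

```python
# Sketch of service-level availability: the service is considered up
# as long as any healthy instance remains, rather than depending on
# one highly redundant box. Instance names are hypothetical.
import itertools

class Service:
    def __init__(self, instances):
        self.health = {inst: True for inst in instances}
        self._rr = itertools.cycle(instances)  # round-robin order

    def mark_down(self, instance):
        """Record a detected fault on one instance."""
        self.health[instance] = False

    def available(self):
        """Availability is a property of the *set* of instances."""
        return any(self.health.values())

    def route(self):
        """Pick the next healthy instance, steering around failures."""
        for _ in range(len(self.health)):
            inst = next(self._rr)
            if self.health[inst]:
                return inst
        raise RuntimeError("no healthy instances")
```

The design point is that `available()` belongs to the service, not to any single node: losing `app-b` below leaves the service up and traffic flowing to the survivors, which is exactly the redundancy-plus-steering model described above.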
So when we talk to practitioners in the telco industry about NFV, they're maybe not as sanguine as some of the folks in the vendor community. We were talking about this off camera, and they'll say, NFV's okay, but it really solves a hardware problem, kind of like virtualization sort of solved the hardware problem. I have a software management problem that NFV doesn't really attack, and we need to see more of the roadmap developed out. Is that fair in terms of the roadmap, the white space of NFV, or is there maybe a misunderstanding there? No, I think there's some fairness there, and the reason is the initial focus has been very much at the low level. There's an orchestration issue or an integration issue, and that's not as well addressed in the current open source projects. So most of the focus has been at the very bottom: how do you provide virtualized infrastructure? As you go up the stack, you need the ability to manage your various software components and orchestrate where they're placed. You've got multiple data centers. In the NFV parlance, that's usually called MANO, and the MANO space is still more focused on vendor solutions and less on open source development efforts. So from an open source perspective, how can you solve this problem? We're doing a great job at the platform level, and we'll slowly move up the stack to help kind of flesh out the white space, as you described it, but I think it's a fair assessment that that's a big challenge. Operationalizing the software environment, that's the next step. So maybe talk about some of the things that as a technologist really excite you, Chris. I mean, when you look at all the innovations that are going on in mobile and cloud, and we're talking a lot about this whole new sort of programming model, the DevOps thing is clearly taking off. As you look out even further, what's exciting you? Well, first and foremost, what excites me is Linux is ubiquitous.
Open source is the common development model, and one of the things I love about DevOps is it takes what we've been doing in the open source world, which we call release early, release often, to the next level. And it's about integration, and how do you actually take code to production as quickly as possible, where you want to recognize your failures as quickly as you can so that you can revert them or roll forward through them and fix them. It's a similar kind of mindset to what we've developed in the open source development communities. So just the sheer notion that Linux is ubiquitous and open source is the common development model is really exciting, and you see in there a lot of point technologies that are emerging. So containers: awesome, really exciting technology, a ton of cool stuff going on there, a great level of enthusiasm in the industry around it. That's fun to see. I'm sure that that's a trend that will continue. It's not just a point fad. It's something that's going to really impact how we build our data centers and deploy our applications. To me, it's all these different building blocks of distributed systems, and we're trying to make distributed systems accessible to people managing data centers, and that means a lot. So, I wonder if I could follow up on that, if you don't mind. The interesting thing about that answer is a lot of that is cultural, and a lot of that culture came from the web scale and hyperscale guys, and so we always talk about how five, six, seven years ago they were doing sort of what the enterprise is doing now, so let's figure out what they're doing now and see. But that cultural shift, the whole DevOps mindset, is that a sort of permanent transference of knowledge, if you will, that will lead the enterprise to more innovation? Or, I'm sure we'll see a lot of borrowing, but will the enterprise close the gap with the web scale innovations is really my question.
Well, that's a place where Red Hat really focuses. So our customers, our enterprise customers, we deploy a lot of applications with our customers in environments that don't look like the modern web scale environment, and you hear the kind of Gartner bimodal IT world, where you're recognizing that these two different worlds exist at once. The question you're asking is, do we get to a place where that mode two is just the steady state? And I think there's a possibility that we have a cultural shift that supports that. There's always going to be a notion of more mature applications and more rapidly evolving applications. But can we get to a space where we can consume both of those with the same kind of agile mindset? That, I think, is possible. Yeah, the problem with bimodal IT, with all due respect to my Gartner colleagues, is that it's really two stovepipes, old and new. Which one do you want to be in? And so to me, bimodal IT is not sustainable. It's what you described. It's really the- And we want to build Nirvana, right? Towards the cultural shift that supports innovation. So you talked a little bit about containers, and there's two aspects that I was hoping you can comment on. One is just the speed at how fast things are changing. OpenStack's released every six months. How do I keep up with that? Docker's released every two months. And you know, it's just going faster and faster. How do we keep up? And the thing that goes with that is the problem in enterprises is a lot of times they deploy something, they don't want to upgrade it, especially networking. I mean, once I deploy that code and I get everything in good shape, don't breathe on it so it won't break. But you described the new model as needing to be dynamic and upgrading. Do you see a time where we're just automatically upgrading and getting the new features? I mean, something like CoreOS talks about. What are your thoughts on those two angles, the speed and the upgrades?
It's a part of how we're innovating. So we're innovating by changing rapidly, and one of the ways we can mitigate the risk associated with that is to introduce those changes quickly. So there are small incremental changes, so that you can very directly see the impacts on your infrastructure. The challenge is you need to have the right testing capabilities to actually validate something that looks like a real-world use case, so that when you go to production you have a high degree of confidence that you're not going to break the world. I do think we have the tooling and the know-how to get to the point where we can do these kinds of consistent upgrades. But it's a journey; we're not there. So we have so many discussions: open source is ubiquitous and Linux is doing great, but if I have a proprietary model, it's easier for me to upgrade. If I use an Apple device, I think it's like 85% of Apple devices are on the current version of code or up to N minus two. If I look at Android, it's like 4% of people, because there are just those interoperability problems. I know Red Hat solved some of that, but maybe comment on how do we move that discussion forward. Well, the Apple versus Android one is interesting, because there's a control thing where you have control of the hardware as well as the software. So on the Apple side, it's not surprising that you have this sort of common rollout. In our world, we map maybe more to the Android world, where our customers are deploying on a wide variety of hardware. They're picking up software at different points in the product cycle. So what we need to be able to do is help them get to the point where they can consume the changes as quickly as we can validate that they're stable and we're ready to support them. That's something that we've learned together as a supplier and a customer.
And that's a journey that we're on right now, and we're building the tools to deliver the changes as quickly as we can, as quickly as our customers can consume them. And I think it's a real challenge right now, where on the one hand, you've got tons of exciting innovation happening and the valley lights up with enthusiasm over the next thing, while the enterprise is still two things behind. So how do we get those great technologies into the enterprise? That's something that we're really working hard to facilitate. So we understand where the technology is in its maturation curve, and then how we can find the sweet spot and deliver it to our customers, and then update. Things that have been around for a while, we don't update as regularly. Things that are brand new, we have to update more rapidly, and that's just sort of practical reality. Yeah, I mean, so much of it, I think about the interoperability matrix of how stuff goes together, and that's something Docker's trying to help with. But with Linux, you've got the network effect of the community to be able to test the various pieces, and who maintains the various pieces of it. In the enterprise, as I said, if I take open source and I make some change, I own it forever; as opposed to, the value is if I can get it into the code upstream and get other people participating, at least there's some shared responsibility there. Yeah, huge, super important to get your code upstream. Upstream first is a critical mantra that we speak at Red Hat, which is addressing exactly what you're saying. The minute you make local modifications, you run the risk of putting yourself on a permanent fork that you own and maintain, and you've lost the ability to leverage that external development community that was critical in building the infrastructure. What do you see happening in organizations that are going through this cultural shift?
Actually, more importantly, maybe the parts of the organizations that aren't in that old stovepipe that I talked about. What are organizations doing? Are they doing enough to sort of train this new generation? I mean, the new generation, not so much; they're coming out of school with this mindset. But the existing resource pool of developers, are they able to pick up on this new culture? I mean, you guys provide a lot of training, I know; we're trying to get some folks on from Red Hat training. But what are you seeing in terms of the ability of the old dogs to learn new tricks? The ability is there, so we see that, which is great. If we didn't see that, we'd be really worried. This cultural shift means you're trying to understand: why would I make this change? I've lived in a world where I've been risk averse, and what's happened is you continue to get more and more pressure for new features from your line of business, and potentially fewer and fewer resources. So at a certain point, you really have to adapt your model, and that's sort of the tipping point. That's what's creating the momentum towards being able to take a risk-averse environment and turn it into a place where you're willing to understand that introducing change does introduce risk, but that it also brings value, and it's the value that's the important part of the equation. So you've kind of got, to really make it simple, when you look at application types, really three. I mean, you could have zillions, but to simplify it: you've got existing apps that are 15 to 20 years old that you want to get more agility out of, make them look more like new apps, leave them on premises. You've got apps that you're going to develop in the cloud. And then you've got existing apps that you want to move to the cloud. It's kind of what companies are doing; it really boils down to those three.
So do you buy that, and how does Red Hat sort of fit into that pattern? Well, for the app that you wrote 15-plus years ago, you're probably not touching it. So it's really about just keeping it running and providing infrastructure around it. It's giving it compute, storage, networking, and maybe some operational interfaces, so that it's easy to turn it on, replicate it, turn it off, whatever you need to do within your data center. Maybe a coat of paint. Right, freshen it up a little. For the other two categories that you described, we're building tools to help our customers build applications that are directly cloud-aware applications, which you can then deploy on your own infrastructure or on a public cloud, or to slowly build the individual pieces of your application out of some of these cloud services. So you don't necessarily have to take your whole application and rewrite it, but you may be able to immediately use a storage layer instead of writing directly to direct storage in the current environment that you have. So how do you piecemeal bring your application to a newer, more cloud-friendly environment? Or if you're writing brand new applications, you're writing them from the ground up in a cloud-aware way. All right, Chris, we're out of time, but thanks very much for coming to theCUBE. It was great to have you, thanks for sharing your insights. Okay, keep it right there, everybody. This is theCUBE, we're live at Red Hat Summit in Boston. We'll be right back after this short break. Keep it right there.