Live from Boston, Massachusetts, extracting the signal from the noise, it's theCUBE, covering Red Hat Summit 2015. Brought to you by Red Hat. Now your hosts, Dave Vellante and Stu Miniman.
Welcome back to Boston, everybody. This is theCUBE, we're here at Red Hat Summit, live. This is our second day of coverage, the third day of Red Hat. The keynotes started Tuesday night, more keynotes yesterday, more keynotes today, and we're going to have all the keynote speakers on. Lars Herrmann is here as the General Manager of Containers at Red Hat. Lars, welcome to theCUBE.
It's great to be here.
Love the title. When did you get that? I mean, containers have been around for a long time. Have you had that title for a decade?
It's actually the short version of the title. I'm also responsible for Red Hat Enterprise Linux and Red Hat Enterprise Virtualization as product lines. Containers fit nicely into this world because it is an OS technology, and we actually drive a lot of innovation in the container space out of what we do with Linux and the operating system.
You're paying all the bills here at Red Hat, some would say. You've got the biggest group here at Red Hat, which is the infrastructure piece, but containers are all the rage. Was it this week?
Yeah, it was this week. Three shows this week. We're all over the place.
So I wonder if we could start with containers. Containers have been around for a long time. The container lives in Linux, and all of a sudden Docker comes out, gets funded, takes off, explodes. Let's go through a container 101. What is a container? Why does it matter? Why should we care?
So simply put, a container from a technology standpoint consists of two elements. There is the ability to take an operating system instance and slice it into a number of execution environments, and that's been around for a long time. What's new is the second part: you can package an application in a container image and run it in such an execution environment. This is really what Docker introduced, and that's why there's all this excitement around Docker, because they combined the Linux container capability with that container-based image. Now, that's what it is. The implications are vast because...
So let me clarify. What you're saying is you can put stuff in the containers — content, processes — and actually do something inside that container. Is that correct?
That is absolutely correct. You package an application and all its runtime dependencies into this container image, and you run it as a process. Now what you have to consider, though, is what is inside that container: it's actually the upper part of the operating system from before. So what happens with containers is the operating system gets cut in half. There is the lower part, the container host, which runs all the containers, and there's the upper part, which packages the application with its dependencies. You get a lot of flexibility out of this, because now for each application you can choose which operating system components you want in there — programming languages, libraries, frameworks, all these kinds of things — and at the same time you can minimize the container host. That's one reason customers are excited about it: it makes everything lightweight, more nimble, and easier to manage. You get more flexibility, more granularity. So that's really the answer to the second part of the question, why it matters.
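As an aside for readers, here is a minimal sketch of that "package the app with its dependencies, run it as a process" idea, using the Docker SDK for Python — a tool not mentioned in the interview, chosen only for illustration; the image name and command are placeholders.

```python
# Minimal sketch: run an application image as an isolated process.
# Assumes the Docker SDK for Python ("pip install docker") and a local
# Docker daemon; the image name below is hypothetical.
import docker

client = docker.from_env()  # connect to the local container host

# The image bundles the app and its runtime dependencies (the "upper
# half" of the OS); the host only needs enough to run containers.
output = client.containers.run(
    "registry.example.com/myapp:1.0",   # hypothetical application image
    command="cat /etc/os-release",      # runs inside the container's userspace
    remove=True,                        # throw the instance away afterwards
)
print(output.decode())
```

The point of the sketch: everything the application depends on travels inside the image, while the container host itself stays minimal.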
It changes the application development paradigm.
It does, yes. As we say, it's the future of application delivery, and it starts with packaging: we have a unified method to package any application. That's a dream that as an industry we've had a number of times, but it has never really worked out, which is why what happened on Monday is so important, when the Open Container Project was brought to fruition. It's now under the Linux Foundation, an open project that we initiated with Microsoft and Docker, and lots of other companies joined, to drive the open standard around that packaging format so we don't fragment.
The second amazing thing about containers is that minimization I talked about, because it addresses the number one pain point in the enterprise. The number one task is patching existing systems — to maintain security, to address problems, to keep current with versions from vendors — and lots of innovation and technology has gone into making that easier. But if you can reduce the footprint, you're not just making patching easier, you're doing less patching, and less patching is always better than faster patching. So that has a significant effect in reducing the maintenance burden in the enterprise.
And the third thing is there's almost no overhead, so containers are actually designed for dynamic applications and infrastructure services. That makes them the perfect method to implement cloud, big data, and the modern set of applications. That's what makes containers exciting.
Yeah, Lars, in Marco's keynote this morning, he talked a little bit about understanding, if you've got security errata, how that patching happens. If I look at CoreOS, they've put forth the vision that I shouldn't need to patch; it's just going to upgrade in an automated fashion, kind of like we do for Chrome. Maybe I'm oversimplifying a little bit, but I want to get your viewpoint as to how much of it is just alerting and making things simpler, and how much should just happen in some kind of autonomic fashion.
So if I parse this a little bit, the concept of automatic updating through push instead of pull is, I think, generally viable. Even though there is always the risk that if a change happens uncontrolled, that change introduces some sort of damage — an incompatibility, or it breaks existing running processes. So typically in the enterprise, customers are not necessarily ready for that, I would say.
The second component is the complexity. Customers have invested in processes and tools to manage the complexity of the many applications they have, and security updates, specifically in the OS, are horizontal. When Heartbleed came out last year, almost every web application was affected, because you have SSL somewhere in there. So if you introduce this patch for SSL to solve Heartbleed, how can you be sure that all your applications are still working afterwards? The worst thing you want to end up with is that suddenly a third of your applications goes down for reasons you don't know and don't understand. So typically there is a need for some sort of testing and verification, and second, for the tools around it.
So I think we agree with the CoreOS approach of automatic updating of the aggregate — patch the aggregate. We've implemented this in RHEL Atomic Host as our optimized container host, and also in containers: you manage containers as the aggregate.
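A rough sketch of the "replace rather than patch" workflow discussed above, again using the Docker SDK for Python; the build directory, tag, and container name are hypothetical, and a real pipeline would run tests before swapping the instance.

```python
# Conceptual sketch of "replace instead of patch": when the base image is
# updated (e.g. for an OpenSSL fix), rebuild the application image and swap
# the running container, treating the container as a disposable artifact.
import docker

client = docker.from_env()

def rebuild_and_redeploy(build_dir: str, tag: str, name: str) -> None:
    # Pull the latest patched base layers and rebuild the app image.
    image, _logs = client.images.build(path=build_dir, tag=tag, pull=True)

    # Throw away the old instance and start a new one from the fresh image.
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass
    client.containers.run(image.id, name=name, detach=True)

rebuild_and_redeploy("myapp/", "myapp:rebuilt", "myapp")
```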
So in principle we agree. The automatic push, I'm not so sure about, because we see that the enterprise needs to have control over managing the change. What do we do for them, though? We make it very easy, starting from what Marco presented this morning — knowing what is actually relevant to you, that's where it starts — and then, second, giving them the tools that allow them to do that at scale.
All right, so Lars, one of the biggest announcements last year was baking containers, Docker specifically, in as a first-class citizen in RHEL 7. We've seen a whole lot of new, thinned-down Linux operating systems — we talked about CoreOS, Rancher's out there, Intel made an announcement. You've got Atomic. Can you help us understand traditional RHEL and Atomic, how those go together, and how you see that landscape?
So it's a great question. And to us, to be honest, it's actually not that new a concept. Our customers have been building minimal-footprint Linux variants out of RHEL forever, for the reasons I stated earlier: if you have less footprint, you have fewer dependencies, less attack surface, and less to manage. We just hadn't necessarily productized it in the early days. Now, in the container space, it becomes more natural to productize an optimized container host, because you have that line of separation: the container host ends at the ability to run a container. That isn't that new, so now we can all work against that line.
From a differentiation point of view, I would definitely highlight that RHEL Atomic Host is a trusted container host for the enterprise, because it is Red Hat Enterprise Linux 7. It is fully compatible — it is the same source code, the same binaries, the RHEL kernel. It is supported on the same hardware, virtualization, and cloud environments; it has the security certifications; it has the trust in the enterprise, which many of the alternatives certainly don't have.
The second thing is, we drive not only the container host, we drive a complete vision that we call the atomic application architecture, which spans from the container host infrastructure, to the cluster with Kubernetes — Ashesh was here talking about OpenShift — to the content of what goes into the container, which none of the other guys really have. I mean, what kind of container do you run on CoreOS, right? You don't get access to the lifecycle and security of RHEL 7 or RHEL 6 or JBoss middleware in that environment. All the way to the management tools you need to hold it all together.
Yeah, so I like the way you split where the operating system goes. On the OCP announcement, I'm wondering if you could help us walk the stack a little bit. You've got a new, thinner operating system, I've got that runtime there, and various management and orchestration sits on top. It's not like all the container guys have gotten together and said, hey, we agree on everything, we're not going to compete or try to get revenue in these spaces. Maybe you can help us parse that.
And if I could add — we have a CrowdChat going on, crowdchat.net/rhsummit, and we have some questions from the crowd. One of them, maybe you could answer as part of that: what is the impact of the OCP, and more specifically, between Docker and CoreOS, what does that all mean?
Okay, it's a great question.
So to take your suggestion and walk up the stack: we have identified — I've actually blogged about it a couple of times — four areas of open standards that we need in the communities, that we need in the container space.
It starts with how the isolation in the Linux kernel is done, and that is actually not very controversial. It's cgroups for resource management, it's namespaces, it's SELinux, and most offerings are building around those.
The second, and probably most important, one is the format, because the container format — what Docker introduced is the notion of an image-based deployment — is what all the tools are building around, and it is so important that we don't fragment on it. So the OCP very specifically targets the format and the associated runtime: the core primitives on a container host to launch a container, to start, stop, and pause it. That is the scope of the OCP — to establish these standards at the very low level of the platform where other technologies can build on them, without introducing incompatibility or breaking interoperability.
We articulate two other standards areas that we believe are equally important, but that I think right now are not subject to formal standardization. One is the ability to describe a multi-container application as a thing. Now, this is very closely tied to orchestration engines, so typically each orchestration engine has its own way of doing it. We made strategic investments into Kubernetes, so we would like Kubernetes, as the language, to become the standard for how you describe multi-container apps. We've also launched another project called Nulecule, which has an Atomic App implementation, which actually—
What's that project called, sorry?
Nulecule. It's a bit of a weird name, I apologize for that.
No, it's cool though.
The implementation is called Atomic App, and what it aims at is the ability to package a multi-container app, again, into a single package. So an ISV, for example, could package their application, which consists of five different container images that need to be instantiated together, as one thing. That's another area we're working on.
And the last area of standards is software distribution — container distribution. Docker introduced the registry protocol, which is conceptually very similar to many of the distribution technologies we have in Linux, like RPM and YUM, where there's a repository where you publish something and then you can search what's there and download it. We believe there needs to be a standard around the mechanics of that distribution, and then an open, federated namespace where it doesn't matter so much which registry you talk to — you can go to many different registries depending on your policy choices as to what you want to get.
So the OCP targets the most important of all, the format and the core runtimes — really the primitives. And I think that's intentional. If you see the way it was positioned, the way we designed it is to really focus on those things, not try to boil the ocean: enable competition on the technology for the best solution for orchestration, for management, for services, for a lot of the value-add we see in the emerging ecosystem, but establish the open standards that hold that large ecosystem together. And Kubernetes is a key to that.
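To illustrate the "describe a multi-container application as a thing" idea, here is a sketch in the shape of a Kubernetes pod manifest, written as a Python dict. The image names and ports are placeholders, and this is only one of several ways an orchestrator (or a Nulecule/Atomic App package) might express such a description.

```python
# Illustrative sketch: one "thing" that describes an application made of
# two container images, in the shape of a Kubernetes v1 Pod manifest.
app_description = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "guestbook", "labels": {"app": "guestbook"}},
    "spec": {
        "containers": [
            {   # web front end, one container image (placeholder name)
                "name": "frontend",
                "image": "registry.example.com/guestbook-web:1.0",
                "ports": [{"containerPort": 8080}],
            },
            {   # cache the front end depends on, a second image
                "name": "cache",
                "image": "registry.example.com/redis:2.8",
                "ports": [{"containerPort": 6379}],
            },
        ]
    },
}

# The whole dict is the single unit an orchestrator can instantiate.
print(app_description["metadata"]["name"],
      len(app_description["spec"]["containers"]), "containers")
```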
Is the community pretty comfortable with that? I mean, presumably behind it, everybody's rallying around Kubernetes.
So Kubernetes was introduced by Google to combine their container orchestration with the Docker format, right? That was kind of its mission, if you want, and certainly Google has been an active participant in all this work toward standards as well. Now, appc, on the other hand — when CoreOS came out with the appc specification, there were lots of really good ideas in there, which is why we sort of tried to be the negotiator, or a facilitator. We talked to Docker, we work in that community — we were the second largest contributor to Docker — and we were also engaging with and providing a maintainer to the appc specification. We kind of had a leg on both sides of the discussion, of the argument, and that helped to actually bring people together in the end.
And you're the second largest contributor to Kubernetes behind Google as well, right?
Number two, yes. And that's typically what we do. That's our model, right? We pick the communities that drive innovation and then take a leading position there, which puts us in a position to add value, drive value, and then we productize it for our customers.
So we had another question from the crowd on containers — we've been talking the whole time about containers, it's in the title, if I may. You kind of touched on it before when you were talking about Heartbleed, but the question is: what happens when I have 10,000 containers in my environment and a virus hits? You sort of touched on that before, but at that scale, people are nervous. Everybody talks about making containers enterprise ready.
Absolutely. The answer to this, as we drive it through our product strategy, is built around three pillars. The first is what we call container introspection. You need to know what is in your containers; if you don't know what's in your containers, you will not be able to manage them. So we build tools that help with that, and it starts with our products living inside containers, because that gives you the frame. The second pillar is the ability to automate change. Containers, as immutable software artifacts, are actually easier to manage than traditional software artifacts, because at least conceptually they're supposed to be throwaways: you just throw away the existing one, put the new version in place, done. And because containers are lightweight, you can do this at scale, which is the third point, right — bring this to scale.
So 10,000 — I think that number is actually not a lot. We're going to see way more than that, because effectively—
Millions and millions, I mean.
Millions and millions, yeah. And also, on OpenShift Online we have over two million applications up there. So the scale comes together quickly. What we are really driving in our products is to build around these three themes. Have the insight — so, for example, our system management product, Satellite, is going to be the central repository where you see what is actually in my enterprise, where it came from, whether it has problems. The second component is that automation. This is where, for example, OpenShift has an amazing capability called Source-to-Image, which monitors the source code repository and the container images from which the application is built.
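A minimal sketch of that first pillar, container introspection: enumerate the containers on a host and record which image each came from — the kind of inventory a system-management tool would aggregate. It assumes the Docker SDK for Python and a local Docker daemon, and is not a description of how Satellite itself works.

```python
# Inventory sketch: what is running on this host, and from which image?
import docker

client = docker.from_env()

inventory = []
for c in client.containers.list():            # running containers on this host
    attrs = c.attrs                           # full "docker inspect" document
    inventory.append({
        "name": c.name,
        "image_tags": c.image.tags,           # what the container claims to be
        "image_id": attrs.get("Image"),       # content-addressed image ID
        "created": attrs.get("Created"),      # when this instance was started
    })

for entry in inventory:
    print(entry)
```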
And when Source-to-Image detects a change, it can automatically apply the change all the way to deployment if you want. So you don't actually need a human interaction there — it can all be automatic the moment you make an updated image available. That's quite amazing.
So that's an example where cloud, in theory, can be more secure than the typical on-premise platform.
Absolutely, we totally see that. The number one obstacle to security in the enterprise, from a patching perspective, is not the availability of patches or the tools that are there. It is the reluctance to introduce the change, because change means risk. And therefore—
Don't touch anything.
Correct — if it's not broken, don't touch it. That's no longer acceptable, because the security risks are out there and the sensitivity is there. But there are huge backlogs; we see customers being way behind. If a fix like Heartbleed is in the news and the headlines, then okay, that goes quickly, but things that don't have names are typically not adopted as quickly. If we can now, with containers, build automated pipelines, we actually get to a more secure environment, because the patches get applied.
Yeah, so I'm sorry I interrupted you there. You were talking about introspection, the ability to automate, and then of course the third is scale.
The scale, absolutely, yeah — bring it to scale. And scale to us also means, in this vision of the open hybrid cloud, you run your containers on top of different infrastructure footprints. You can run them on bare metal, you can run them on data center virtualization, you can run them on private cloud or public cloud. From a management point of view, we want to enable consistency so that you patch and manage them the same way across all these footprints. That's also how you achieve scale.
Stu, I interrupted you before as well — jumped in with a crowd question.
Yeah, wow. So I'm wondering, Lars, if you could talk a little bit about what customers are saying about containers. Go to the Valley, go to DockerCon, and 97% of all people are using containers already. So what are you seeing here? We live out in the East, and how much has it... I mean, most people I think understand that it's more than just hype, the whole Docker thing. So where are we with the customers?
So first of all, it is amazing how much pull there is. It's really interesting for me — I don't think I've ever seen anything like it. I've been in open source for over 15 years, but I think I've never seen anything go from not on the radar to becoming the hot, shiny object in that short a time.
Yeah, I agree, the fastest I've ever seen in my career.
Yeah, but there are reasons for that. What we clearly see is what containers promise, and that's why there's all this excitement. They promise the holy grail for developers: consistency across development, test, and production; agility — everything is fast and nimble and lightweight; and portability, that mythical promise that you do one thing in one place and it will work the exact same way in another place. Now, all three are not necessarily 100% true, but at least from an experience standpoint, it comes across as true. That's the developer side, and a lot of the customer adoption right now is developer driven. Developers just use it: hey, that's a cool way for me to consume someone else's work, or to package something, or to move something from A to B. So there's a lot of early adoption — grassroots adoption, I would say.
It's not necessarily strategic yet, as in, we made this big choice and we want to containerize the data center. But what is interesting is that we of course also have very strong relationships with the infrastructure side of the house — that's where we sell tons of Red Hat Enterprise Linux and OpenStack and all the management products — and containerization plays to their problems too, because it actually helps them better automate and manage the existing applications. It introduces a level of standardization across the environment. So we also see projects driven by the ops guys to make their lives easier, where today they spend 80% of their time managing existing systems and they're trying to reduce that, right — reduce the cost, reduce the time.
Put this together inside an organization where normally the dynamic is that one guy wants to do something new and everybody else is either against it or doesn't have time to think about it — that problem we don't really have here as much, because suddenly there are multiple groups and perspectives with an incentive.
The second thing, and this is important to note: containers are certainly new, disruptive, and changing very quickly, but they're building on a very solid foundation. The capability for containers in the Linux kernel has been around for years, so there's a degree of maturity that actually makes it practical. If you go off and do something with Docker, most likely you have a great experience out of the box, and that's because it is really a fast-moving, relatively thin layer on top of massive technology that has been mature for a very long time. That also makes the experience great, right? It just amplifies the adoption.
The third thing I want to say — and then I will hand back to you — is that we've seen, in one year, customers go from learning about it to wanting to make strategic choices. And that is really interesting. The market seems to be at a point where customers want to set themselves on a strategic path: do I want to go the PaaS route? Do I want to build something myself? Do I want to optimize for hybrid cloud solutions? There are a number of ways, and it's still a little complicated because there's so much overlap between the technologies, but we definitely see that desire to go do something.
So it sounds like the basic education — we're almost through that phase, which is amazing given how fast it happened. Just anecdotally at the show: people that are testing it, anybody running it in production, anybody doing it at scale?
So I think, I mean, we had Amadeus here. That's probably the poster-child example, just because of the depth of the relationship and the work together. But I had lots of interactions here — I had a session yesterday about containers that was pretty full, a couple hundred people in there. I wouldn't say the education phase is over yet. I think it depends inside the organization: there's typically a group or a couple of groups that know, but then there are lots of other people who are still asking, how is that different from virtualization? That's still the frequently asked question. But I think we see a lot of early-stage, proof-of-concept-type engagements. We have seen a good pickup with RHEL Atomic Host since we made it generally available three months ago. But other than that, I would say in the enterprise — we literally just GA'd OpenShift 3 yesterday.
We only launched Atomic Host in March, which sounds like a bad thing, but it's actually amazing: a year ago we had just started working on this, and within a year we got these products to GA. Here we are, and we have them, but of course customers were waiting for the real products to come their way, so they'd have something real to work with. So we see that building up: the more we add these capabilities to the portfolio, the more pickup we see in our customer base.
The pace is amazing. I mean, I remember, Stu, with Hadoop, everybody said, oh, we've got to make it enterprise ready, and it actually went really fast. But relative to what's happening with Docker — people largely talk about, well, we've got to make it enterprise ready, and you're in that make-it-enterprise-ready business. That's what you do, and you're doing it. The cycle to get there seems to be compressing quite dramatically. Can you validate that?
So I would say we were building on a lot of stuff that was already there. Take, for example, RHEL Atomic Host. What is in RHEL Atomic Host? The core ingredient is obviously Linux — the kernel, so the RHEL 7 kernel. That's mature; we invested in it; of course we know how to productize an enterprise Linux kernel. Then you go in there: the resource manager, systemd, something we've been working on for many, many years for the services. Then the Docker piece — that's where we spent so much resource over the last 18 months: we joined the project and really did some heavy lifting, injecting enterprise features. How do you do storage? How can you make it more secure? We integrated it with SELinux, all these things. But again, we were building on technologies we already had, and integrating with other things. And then you get to Kubernetes as the orchestration engine, which certainly is very fast moving, but it's actually usable — it already has enterprise qualities to it. So it was possible because there was this ability to piggyback on the other technologies that were already there. And from an architectural point of view, there were actually nice integration points; that is not always the case when you drive a technology roadmap. So that made it easier.
Going forward, though — managing the complexity I talked about earlier, automation, scale — there are greater challenges ahead. We are now working containerization into our middleware offerings to create that marriage between microservices as the application paradigm and containers as the infrastructure paradigm. We're working it into our management products. And this is where you get into the next level of problem: how do you deal with infrastructure and applications that literally only run for maybe 10 minutes, yet your compliance requirements require you to be able to show, in an audit trail, what was running on which machine, and when? How do you do that? If you have that much volatility, that is a challenge I think we haven't even really tapped into yet.
Well, the crowd just brought that up. Somebody asked: who manages all these containers? Is everyone going to have their own Kubernetes? What's the management layer that's coming?
Yeah, we have the strategy, so we will bring it into our products, of course, and we have a roadmap that we show. We're working all these capabilities incrementally into our existing products.
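One hedged sketch of how the audit-trail problem for short-lived containers could be approached: subscribe to the Docker daemon's event stream and record every container start and stop with its image and timestamp. This is illustrative only, not a description of Red Hat's products; it assumes the Docker SDK for Python, and a real compliance setup would ship these records to durable, centralized storage rather than printing them.

```python
# Audit-trail sketch: log container lifecycle events with image and time.
import json
import socket

import docker

client = docker.from_env()
host = socket.gethostname()

# Only container lifecycle events; each event is decoded into a dict.
for event in client.events(decode=True, filters={"type": "container"}):
    action = event.get("status") or event.get("Action")
    if action not in ("start", "die"):
        continue
    record = {
        "host": host,
        "action": action,
        "container_id": event.get("id"),
        "image": event.get("from"),   # image the container was created from
        "time": event.get("time"),    # Unix timestamp from the daemon
    }
    print(json.dumps(record))         # stand-in for an audit log sink
```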
Because containers, to us, are a delivery method, but they don't fundamentally change what you do or what you manage. That's why we drive the content management into our Satellite product, and the actual state management into CloudForms, which is very interesting in that regard, because CloudForms is already a hybrid cloud management platform that gives you a single pane of glass across virtualization, private cloud, and public cloud deployments. Now we're adding containerization to it, so you get a single pane of glass from which you can manage your infrastructure — on bare metal, on virtual machines, or in containers — as well as your applications in any of these paradigms, across the data center and public cloud environments. So we're working on creating that unified view. And at the same time, we know there are lots of partner solutions, our ISV ecosystem, lots of specialized solutions to solve specific problems around log-file management or security or monitoring or you name it, right? So there's a whole ecosystem emerging, and it's still forming. A year from now, we will look at a much richer ecosystem.
All right, Lars, we're out of time. I'm sorry, we have to leave it there, but I'll give you the last word. You have a deep understanding of the business impact of all these technologies, so, fantastic segment. What's the message you want the audience to take away from Red Hat Summit this year, specifically from your group?
We bring containers to the enterprise to help organizations get to more agility, which is mostly an organizational problem, and containers allow you to redefine how you define autonomy in the organization. That's what makes it all exciting. There's a lot of value there.
All right, we'll leave it there. Lars Herrmann, thank you very much for coming on theCUBE. Keep right there, everybody, we'll be back with our next guest. Check out crowdchat.net/rhsummit, join the conversation. We're here live. This is theCUBE. We'll be right back.