Live from Austin, Texas, it's theCUBE. Covering DockerCon 2017, brought to you by Docker and support from its ecosystem partners.

And we're back. Hi, I'm Stu Miniman, joined by Jim Kobielus, and this is theCUBE, worldwide leader in live enterprise tech coverage. Happy to have on the program Scott McCarty, who does technical product marketing for containers at Red Hat. Thanks so much for joining us.

Thanks for having me.

All right, so obviously this is the big container show. Red Hat, when you talk about the number of contributors, you're one of the top contributors there. But first, tell us a little bit about your role at Red Hat, how long you've been there, some of your passions, what you work on.

Yeah, for sure. So I've been at Red Hat six years, and I started as a solution architect six years ago, came from a startup before that. So I've been in the operations space for a long time, did a lot of programming, background in anthropology and computer science.

You're dating yourself, it's not called programming because it's coding now.

I used to program, but what is this coding stuff? I'm dating myself.

Did you say anthropology?

I did.

Oh, you've got to connect that with Red Hat at some point in our interview here. It matters in the culture of things.

Yeah, the culture is important. So I understand a very wide swath of our portfolio from being a solution architect. And when Docker first started off, I got into containers, got pretty heavy into that and was excited about it. Then about two years ago, I moved into doing strictly technical product marketing, focused only on containers.

Okay, so talk to us about how containers fit into the Red Hat portfolio.
So containers are really something that touches every part of our portfolio. At the lower levels, the Linux layers are the actual nuts and bolts of what builds the containers and what the containers really are. At the other end of the stack, if you look at our storage and our middleware, containerizing those applications, then figuring out how to package them and make them work in a cloud-native way so they can operate inside something like OpenShift, there's a lot of work to be done there. So there's a wide swath of work around containers going on across our entire portfolio.

Yeah, in the keynote this morning, I liked the maturation of the use cases, because it sounds a lot like, remember the early days of Linux or the early days of virtualization?

Yes.

Once again, they put together a bunch of use cases and, oh, we're running applications, a wide variety of applications, in containers. So what are your customers seeing? Any cool use cases, things people are doing, anything new that they couldn't do before?

Well, I'll give you a little take on that. Over the last two years that I've been going out all over the world talking to customers, I've noticed there was a little bit of a disconnect, with the industry focusing only on the app dev side of things. Today, here at the show, we're talking about some of the other, more traditional use cases, traditional or non-cloud-native, or, we don't like to say the word legacy, but people say it. I would argue those have been a huge portion of what people were experimenting and playing with, but we don't talk about them. Also, I think there's a bit of a mode one, mode two mentality, but that limits the way we think about it to only production workloads. So I have some really funny use cases.
I'll give you some examples. Network scanning: there are some vendors that provide network scanning software, and a couple of months back I was up in Canada talking to a telco, and they mentioned they were actually putting a commercial network scanning package in containers. Because think about it: say you have a production Oracle database, and you go to the Oracle DBA and say, hey, I'm going to install this giant network scanning package on your server, and they're like, no, you're not doing that. A container makes it very easy to just bring that application down, do the network scanning, troubleshoot something, and then delete it and it's gone. That's just a tools use case, right? But it's something people have been doing for a long time, and nobody's really talking about it.

Another one affects business even more transformationally. Think about the way startups hire people. This happened to a friend of mine who's a CTO at a startup. They're interviewing a developer, and it's very common to send him home with a homework program, you know? So they send him home with a Ruby on Rails assignment, and he comes back with a GitHub repo that has a database schema file for Postgres and a working Ruby on Rails application. He also says, by the way, I have a Docker repo, you can go pull it down if you want to just run my program and see if it works. There are two hiring managers. The one hiring manager decides to try to rebuild it from scratch, and takes about two hours messing around trying to get the database schema to work, because he used a newer version of Postgres than she had on her laptop. You can imagine the dependency chaos that creates. The other hiring manager literally just said, okay, just docker run this thing, ran the container, and looked at the code.
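Both of those stories come down to the same ephemeral-container pattern. A minimal sketch with plain Docker commands, where the image names and flags shown are illustrative stand-ins rather than anything named in the interview:

```shell
# Ephemeral tooling: run a scanner once, leave nothing installed on the host.
# "scanner-image" is a hypothetical image name; --rm deletes the container
# the moment the scan exits, which is why the Oracle DBA never has to agree
# to a permanent install.
docker run --rm scanner-image scan --target db01.example.com

# The hiring example: instead of rebuilding the candidate's Postgres schema
# and Rails app by hand, just run the image he pushed.
# "candidate/rails-homework" is likewise hypothetical.
docker run --rm -p 3000:3000 candidate/rails-homework
```

The point of `--rm` in both cases is that the container carries its own dependencies (the exact Postgres version, the exact libraries), so nothing has to match what is already on the laptop or server.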
The one spent two hours getting it up and running; the other one spent five minutes. So now, if I can give you back the time of the most valuable people in your organization, these very technical architects making hiring decisions and trying to evaluate really critical core developers for your startup, if I can give you back two hours, and you have to interview 10 of those, that's 20 hours of your time. That's transformational. That's essentially digital transformation, but for a startup: we don't want to spend all this analog time doing it. And that's in addition to the traditional applications, databases, typical web servers, all of those things. Not just mode two or cloud-native, but also traditional workloads. And we've been seeing that for a long time.

I mean, this is similar to the virtualization journey. Like you said, everyone said it wasn't possible, and even two years ago I was saying, wait a minute, just wait for this, it'll happen. And we're seeing it happen. Anything in particular? We've made a lot of progress, but we're still working on storage; networking seems to be a little more mature than storage. What are you helping to work on at Red Hat, and what do you want to see going forward, so that when we come back a year from now, we'll say, oh, cool, we knocked down this barrier, or we're doing something even better?

So one of the things I'm excited about is the integration points between cloud infrastructure software like OpenStack, and even the cloud providers, and something like our OpenShift solution or Kubernetes. If you look at the storage and network interactions, today the networking is pretty mature, but the interaction is pretty static.
So say you have an OpenStack environment and you want to run OpenShift on top of it. You would go pre-provision a VLAN, a subnet for it, and then we build Heat templates to deploy OpenShift within that subnet. In the future, we're investing in Kuryr, and a year from now I'd like to see some really dynamic interactions happening between OpenShift and OpenStack. I'd like to see an administrator say, oh, I need to provision a new project, and that project needs its own network isolation. When that happens, OpenShift goes and talks to OpenStack, provisions a subnet that's set up with OVS and already configured, comes back and says, okay, cool, and then can provision a project inside that.

On the storage side, we've actually already got that going. We have what's called dynamic provisioning. If you need storage inside OpenShift and you have a persistent volume claim that needs access to storage, we have something called a dynamic provisioner that will go create that persistent volume, talk to the storage, and carve off a LUN of exactly the size you want, or an NFS share of exactly the size you want. So I'd like to see more and more of that dynamic provisioning happening between the infrastructure and the container.

Is that a capability that should be built into Kubernetes, or entirely independent of that?

So the current project is kind of neutral, but think of it as almost an interface that Kubernetes will be able to use to talk to all the networking providers. It's a neutral third-party thing. It could really be used by things other than Kubernetes.

I want to take us on to Project Moby, which was a really interesting announcement today.
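The persistent volume claim flow described above can be sketched as a small manifest. This assumes a cluster that already has a dynamic provisioner configured as the default; the claim name is a hypothetical example:

```yaml
# A persistent volume claim. With dynamic provisioning enabled, the
# provisioner sees this claim and carves off a volume (a LUN, an NFS
# share, etc.) of exactly the requested size, instead of an admin
# pre-creating persistent volumes by hand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

The claim states only what the application needs (size, access mode); which backend actually satisfies it is the infrastructure's decision, which is exactly the dynamic interaction being described for networking as well.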
To what extent would Red Hat consider possibly using that as a tool to build custom container applications for your own product family?

Probably the most interesting thing I found about the announcement was that it validates a strategy we already had around Project Atomic. If you look at Origin and Project Atomic and Fedora, and they mentioned Fedora, that model. I think it's a good model, and we appreciate it. I think there's some validation also around the idea of an immutable host and having control over the host. And honestly, I think it validates that Linux itself is not a commodity. There is actually something very technical there, and you do need to be able to drive features in that kernel to support the containers. I think containers made the kernel hot again in a lot of ways. So I think it's validation of that, and I think that's exciting.

At the beginning, we talked about culture a little bit. We've interviewed Jim Whitehurst; I've read his book, The Open Organization. When you come to a show like this, where today we talked about the developer, we talked lots about open source, and there's LinuxKit, there's the Moby Project, all these different things out in open source, what's your take on this ecosystem and what's going on in the industry?

I think ecosystems are harder to build than people first think. If I were to analyze the way open source works, there are open core models, which are, let's give enough away to get free marketing. Then there are open source models where we give away all the code but we don't really have a community; we don't really take patches, we just put it out there, use it however you want, that's fine. And then I think there's truly community-driven open source, which is what Red Hat really tries to focus on.
If you look at Fedora, it's truly a community. I think building and maintaining those takes a lot of nurturing, a lot of care, and a lot of love and feeding. I also think it takes a lot of discipline around allowing these best-of-breed ideas to happen the way they're going to happen, and then also fail if they don't work. And that can be tough. If you look at the model of a lot of startups, it's more, unilaterally make decisions, release it, see if it sticks, fail fast. The community-driven model is a lot harder to handle because consensus is harder to build. You've seen Jim talk about this. One of the dangers in an open organization of our size is consensus: finding consensus without going toward a completely consensus-driven decision model. That's hard, because you have to satisfy everybody in the community and make sure everybody's getting something out and everybody's putting something in. So it's tough.

It's funny, I remember at OpenStack for a couple of years it was, do we need the benevolent dictator of this ecosystem? Red Hat obviously is not a benevolent dictator of its community. Do you think Docker has a benevolent dictator of their community? Or is that person a visionary? That would be the positive view, from his perspective.

Yeah, the joking term in the community is the benevolent dictator. Benevolent dictator for life. I think some of the communities work that way. If you look at Python, or at Linux, they work that way. But if you look at bigger projects, and I'm going to date myself, but if you think about KDE and GNOME and some of those, there's no benevolent dictator. They're so big and so wide-ranging, with such wide differences in what people do with them, that I think it's hard to have that. There are visionaries within the group, and that's true even in the kernel.
I mean, if you look at what's happened, Linux essentially has other generals now; it's become a very big community, a very boisterous community. I think, again, that takes a lot of discipline and maintenance to make happen and keep alive.

All right, Scott, to take us on home, why don't you give us a little view of what Red Hat has going on this week. Of course, you have your big show, Red Hat Summit, coming up in a couple of weeks. We'll have theCUBE there. I'm excited to be there also, but talk a little bit about this week and what you're doing.

So this week we're excited because we have a bunch of things going on. I don't know if you've heard about Atomic Image, but we released Atomic Image.

It was not discussed in Brian's interview this morning. I'd love to hear a little bit about it.

So with Atomic Image, we've looked at some of the use cases around how people are consuming containers. I've blogged on this a lot and talked about it, and honestly, it gets pretty deep technically. It's about, like Solomon talked about today, image size matters, and there is definitely a hunger for smaller images. You don't want to have stuff that you don't need. But it's also a very fine balance. The challenge is that the typical way enterprises operate is they have a core build, and they add to that core build all the pieces they think should be everywhere, right? Say you need a fundamental core library like glibc. You wouldn't add that to every one of your different applications; you would add it once and then inherit it everywhere. That's the DRY model: don't repeat yourself. When you get into this DRY model, you've got to balance the size of that base image and its flexibility against how concise it is.
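The core-build pattern maps directly onto container image layering. A minimal sketch, where the registry, image, and package names are hypothetical stand-ins, not actual Red Hat images:

```dockerfile
# Base image: the "core build". Shared pieces like glibc and common
# tooling live here once, and every application image inherits them
# via FROM, the DRY model applied to images.
FROM registry.example.com/corp/base:1.0

# Application layer: only what this one app needs on top of the core build.
RUN yum install -y ruby && yum clean all
COPY app/ /opt/app/
CMD ["/opt/app/run.sh"]
```

The trade-off described in the interview lives in that `FROM` line: a fatter base image means more apps inherit what they need for free, while a leaner one keeps each final image small but pushes dependencies down into every application layer.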
Atomic Image, though, is meant for the very concise applications: we essentially released a very minimal image. If you look at, say, a small C binary, maybe all it needs is DNS resolution and a couple of other services from the OS, from the user space. It doesn't need much; it's a really small binary, and it wants a really small image to live on. So we released something called Atomic Image, really targeting those very specific use cases.

I remember when Project Atomic was launched, and it sounds a lot like what Docker announced with LinuxKit today, too. So maybe you could compare and contrast a little bit.

Yeah, so I would compare LinuxKit to Atomic Host, which we've had for a long time, which is the kernel and systemd and what runs the containers, right? But now we've released a different, smaller user space set to run on top of it.

It's like an agile minimum viable product: this is a minimum viable container for a particular function.

Yeah, exactly, like BusyBox or some of the smaller images that you want to play with.

And Scott, do you have a website or some documentation that you'd recommend people start with?

Yeah, absolutely. I think Project Atomic is a great place to start.

And that's in the blogs, I'm assuming, right?

It is. If you search for Atomic Image, you'll find a RHEL blog entry, so the RHEL blog is a good place to find some of that: rhelblog.redhat.com. And then also, if you look on redhat.com, our container catalog is a good place to go and get started. If you go to access.redhat.com/containers, you'll get to that.

Scott McCarty, it's great catching up with you. Next time we have you on, we've got to get the story behind fatherlinux as your handle.

Oh yes, yes, yes.

All right, we'll be back with more coverage here from DockerCon 2017.
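The minimal-image idea for a small, mostly static binary can be sketched as a two-layer build. The image and file names here are generic illustrations; the interview doesn't spell out the actual Atomic Image repository path:

```dockerfile
# A small static binary doesn't need a full OS user space, just a
# minimal base that provides things like DNS resolution.
# "minimal-base" stands in for whatever minimal image you use
# (Atomic Image, BusyBox, etc.); it is not a real repository name.
FROM registry.example.com/minimal-base:latest

# The whole application is one small binary; nothing else is layered in.
COPY healthcheck /usr/local/bin/healthcheck
ENTRYPOINT ["/usr/local/bin/healthcheck"]
```

Because the base carries only the handful of user-space services the binary actually calls, the resulting image stays close to the size of the binary itself, which is the "image size matters" point from the keynote.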
Thank you for watching theCUBE.