From the Computer History Museum in the heart of Silicon Valley, extracting the signal from the noise, it's theCUBE, covering OpenStack Silicon Valley 2015. Brought to you by Mirantis. Now, your hosts, John Furrier and Jeff Frick.

Hi, Jeff Frick here with theCUBE. We are live in Mountain View, California at OpenStack Silicon Valley 2015. This is the second year of the show and the second year theCUBE has been here, and it's really grown quite a bit since we were here last year. It went from one day to two days; I have yet to get the update on how many people are here, but there's a great buzz. OpenStack has really matured since last year, and the conversations have shifted into production deployment. Everybody's talking about production deployment. So I'm really excited to be joined in our next segment by Alex Polvi, CEO of CoreOS. Welcome.

Thank you, thank you for having me.

Absolutely. So you just came out of a talk, right? What was your talk, for the people at home that missed it?

Yeah, so I was talking about some of the myths people are running into around containers. You could almost call them memes. There are these ideas rolling around about containers that are causing hesitation for adoption, and I just wanted to address some of those.

So what were some of the big ones?

For instance, that legacy apps can't run in containers. I don't know what can't be containerized, actually, because at the end of the day it's just a process running on a server, like anything else. For whatever reason there's this perception that legacy applications can't work in containers. Another one is that your applications have to be completely stateless in order to run in containers. Again, we manage state, we manage our databases, we manage applications that store their files and so on, through various techniques on servers today.
You still have to do the same things with containers, but using a container doesn't mean it's not possible. So those are two of the four, I guess.

That's good to say. It's surprising to me that there are even two or four, because obviously containers are all the rage right now. Every header on every web page in the enterprise space or the open source space has a container ship with a bunch of containers on the back. Why is this? Because you talk to the naysayers and they'll say, you know, containers have been around for a long time; this is really a repackaging of something that's been around. Why is it getting so much momentum right now, and why is it such a game changer today when it wasn't before?

Yeah, I think everyone's ready for it at this point. Another example of this, I think, happened when AWS was released. When Amazon EC2 was released, it was a very big deal. There were conferences about it and all this excitement. And what really happened? Well, they booted a virtual machine for you and charged you per hour for it. Before AWS, there were all these VPS providers booting virtual machines where you'd at least pay monthly for them; maybe they weren't hourly. So there was a little bit of a pricing model tweak. But I think the big difference is that people were actually ready to consume compute in this way for the first time in a big way, and the market timing was right.

The other piece of containers, I think, is that they represent not just a way of packaging an application, but a way of running infrastructure overall, much more like what we've seen the hyperscale guys do. It involves a lot of distributed systems, high availability, failover, all these things built in. It's not just the application being packaged.
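The point about stateful apps can be illustrated with a hedged sketch: a hypothetical docker-compose file (service name, image, and paths are illustrative, not from the interview) that mounts a named volume so a database's files outlive any single container.

```yaml
# Illustrative docker-compose sketch: a stateful database in a container.
# The named volume "db-data" persists on the host, so the container can be
# stopped, upgraded, or replaced without losing state. Names are hypothetical.
services:
  db:
    image: postgres:9.4            # example image; any stateful service works
    environment:
      POSTGRES_PASSWORD: example   # illustrative only
    volumes:
      - db-data:/var/lib/postgresql/data  # state lives in the volume, not the container
volumes:
  db-data:
```

This is one of the "various techniques" the speaker alludes to; the container stays disposable while the data does not.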
And I think that side of things is where most of the excitement is. That's where you see projects like Kubernetes or Mesos or Docker Swarm, and all the vendors in the container space are duking it out for that part of the stack. That's really the most interesting part of the whole container thing, I think.

Right. And do you think it's a maturity of the software, of that kind of methodology? Is it just comfort with it, or is it because people are actually starting to change their businesses, change their development, change their business models to take advantage of this new development paradigm?

Yeah, I think it's a shift in where the importance in people's businesses lies, and the importance in their business is around the applications they're running, not the raw servers running behind the scenes. The server is becoming a fungible resource that's used to provide resources to the applications that are running. As things like cloud have helped commoditize compute, the focus on the application is becoming the thing, and we're just crossing that threshold right now. Things like Puppet and Chef kind of did that: they allowed you to define how to deploy your application against a set of servers. Now the container story overall flips it to the other side of the coin, saying, here's how my applications consume these resources, instead of the inverse, here are my servers, and how do I run an application on them? It's the other side of the coin, if that makes sense: thinking about the applications first. And I think it's really a timing thing, just where we are in the life cycle of the adoption of all these technologies. Everything goes one step at a time.

Right, right.
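The application-first framing described above can be sketched with a hedged example: a minimal Kubernetes Deployment manifest (names and image are illustrative, not from the interview) in which the application declares what it needs and the cluster decides which servers supply it.

```yaml
# Illustrative Kubernetes Deployment: the app states its requirements
# (image, replica count, resources); the scheduler picks the servers.
# All names here are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # run three copies, wherever capacity exists
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: example/web:1.0   # hypothetical image
        resources:
          requests:
            cpu: "250m"          # the app declares its needs...
            memory: "128Mi"      # ...not which server it runs on
```

Compare this with a Puppet or Chef manifest, which is typically attached to a particular server role rather than to the application itself.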
And then also just the portability, the transferability of developing on your laptop and running in AWS.

Right. One thing we missed with OpenStack and AWS, for instance, is API portability. I can't take my application that's programmed against the OpenStack API and run it against Amazon; they're two disjoint things. With the container, I actually have a logical unit that runs in both environments consistently. That kind of interop is one example of a benefit of this, and the only way it's possible is by thinking about it at an application level instead of a server level.

Right, it's funny you bring that up, because we covered the great AWS API debate a couple of years ago with Boris Renski and Randy Bias up in San Francisco at the OpenStack user group. Randy was really bringing up, why wouldn't we support that API? And that was pre the container explosion as a different methodology to do that. But at the end of the day, the customer just wants to deploy their workloads, and the method and/or location of that could change based on the life cycle of the actual application.

I think for it to be done right, you need essentially what both Rackspace and Google are trying to do, which is a big public cloud service provider supporting the same open APIs that you can get in your own environment. Rackspace did that with OpenStack, and Google is doing that with Kubernetes. I feel like Google has a better shot at it because they're not worrying about the hypervisor at all. Having interop between hypervisors is quite a difficult issue, but having interop at the container level is a lot more manageable, and that's what Google is doing with Kubernetes. Earlier today, Google GA'd Google Container Engine, which is based on Kubernetes. So it's their equivalent of the Rackspace cloud versus OpenStack.
And I think that has the true shot at portability. What would really do it is if AWS and Azure supported the Kubernetes APIs as well. Then you're really getting some meaningful vendor interop.

And your prediction, is that only a matter of time?

So first, we will do that through our own technology in Tectonic. Tectonic is our platform based around Kubernetes, and because all we need is compute resources, we can make it work consistently on Azure and on AWS and so on, without the providers doing anything. I just think to really get that true interop, you want the cloud providers to be on board with it, because what will happen is the cloud providers are going to present their own options. AWS is already doing the Elastic Container Service; Azure hasn't really made their bet on containers yet, but they're kind of embracing everything; Rackspace hasn't done anything yet either. But we will say, here's a set of APIs that work in all environments, including a pure open source one that you don't have to talk to us at all about if you want.

Because at the end of the day, these are big companies, right? And they're not just on one stack. It's really workload specific, job specific, application specific, so they're going to need to be able to interop those things over time.

Yeah, I think it's similar to what happened with Linux. Yes, there's still a need for Linux and Windows and other OSes out there, but by and large, on production web infrastructure, it's all Linux now. And how were you able to get onto one stack? It was by having a truly open, consistent API and a commitment to not breaking APIs that allowed Linux to become ubiquitous in the data center. Yes, there are other OSes, right?
But Linux, by and large, is what's being used for production infrastructure. And I think you'll see a similar phenomenon happen at this next level up, where you're treating the whole data center as a computer instead of treating one individual instance as a computer. That's what Kubernetes and Mesos and so on are doing, and I think there will be one that shakes out over time. We believe that'll be Kubernetes.

Yeah. And then potentially the whole suite of them just becomes one single compute resource, where there's intelligence that parcels it out based on some other intelligence.

So yeah, what's funny here is, virtualization took a physical server and carved it up into a bunch of little servers. What we're doing with containers, cloud native, the data center as a computer, is taking all these individual servers and turning them into one giant computer. So it's kind of the opposite.

It goes back, it's like going back to the mainframe days again.

And then, let's just pretend for a moment we're post that era. Well, now we need to have failover between these objects, these data centers that are acting like one giant computer. We need to start doing high availability across those, and so on. It's pretty interesting to think about what happens next after all of this. Whatever we do with a single server today, we're going to have to do with these big distributed systems of servers as well. I don't know, I'm meandering a little bit there.

So it begs the question, though: what are some of your long-term priorities, kind of long-term challenges? And what are some of the short-term hills you guys are looking to attack next to continue to move this evolution along?

Sure.
So we want companies to run infrastructure in this way, and the reason we want them to do that is because we think we can use that style of infrastructure to dramatically improve the security of their environments. We started CoreOS around the question: what could we do to fundamentally improve the security of the internet? Our observation was that this style of computing, which involves containers and distributed systems, unlocks the possibility of managing your infrastructure in a way that's radically more secure than was previously possible. So we are first helping invent that way of running infrastructure by building the components that don't exist in the world to do it. That's things like CoreOS Linux, things like etcd, our distributed data store, things like rkt, our container runtime, and contributing heavily to projects like Kubernetes that are a key part of this story as well. Then, in the long term, we plan to leverage those things to help companies be far more secure than they've ever been.

Some would argue, at least in the early days of cloud, that the cloud was less secure. That was one of the first big hurdles cloud had to overcome. But you're saying it's more secure in this method. Why, for the people that see it as the opposite? You know, I like control, I like my doors, I like knowing who's got the keys.

Sure. Our hypothesis is that the key to good security is actually managing updates, and the way you manage updates effectively is twofold. One, you have to make them as automatic as possible. Attackers are using automatic scanning and attacks to exploit your infrastructure. As soon as a vulnerability comes out, they'll write a little script that knows how to detect it, sweep the internet for it, and automatically hack everybody that has that vulnerability. So we believe the same methodology needs to be applied on the other side.
You need to be able to write a little script that fixes all your server infrastructure automatically for you, and doesn't require an ops guy to go and manually patch every single piece of infrastructure you have out there. We have to step up our game to that level. Now, to make that possible, you have to build infrastructure that embraces a bad update, because it's inevitable that you're going to deploy something to your server infrastructure that breaks it. You have to have a model for the way you're treating your compute that makes it okay to deploy a bad update. So it's sort of doubly solving the problem: it's not just making the automatic update possible, it's also making it okay when we screw up the automatic update.

And planning for that.

And planning for that. If that's possible, now you're truly in a position where you can do this. So it's the properties of how you're able to easily roll out and update your applications in a way that's safe that we think are very interesting about this style.

Yeah. So, getting towards the end of our time here, talk a little bit about this event, kind of what's the vibe? It's a brand new event. There are a lot of OpenStack events going on: we were just in Seattle last week, we were at Vancouver a few weeks ago, Tokyo's coming up. But talk about the evolution of the community within the OpenStack world.

Sure. I mean, OpenStack, we were talking about how long OpenStack has been around. It's been around five years.

Five years, five years.

Yeah, I mean, it was January 2011, I think, right? So early 2011, late 2010. I mean, wow, they've done a great job. We're running into OpenStack in pretty much every major enterprise; every major enterprise out there has touched it in some way. So it's been a great ride, I think, and congrats to the OpenStack team for doing that.

Yeah, excellent.
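The update-with-rollback idea described above can be sketched in a few lines. This is a toy illustration, not CoreOS's actual updater, and every name in it is hypothetical: apply a new version, health-check it, and revert automatically if the check fails.

```python
# Toy sketch of "embrace the bad update": try the new version,
# health-check it, roll back automatically on failure.
# Not CoreOS's real update mechanism; all names are illustrative.

def apply_update(current_version, new_version, health_check):
    """Switch to new_version; revert to current_version if the check fails."""
    active = new_version          # switch to the new image/partition
    if health_check(active):
        return active             # update succeeded, keep it
    return current_version        # bad update: roll back automatically

# Usage: a check that rejects any version marked "broken"
ok = lambda v: "broken" not in v
print(apply_update("1.0", "2.0", ok))         # -> 2.0
print(apply_update("1.0", "2.0-broken", ok))  # -> 1.0
```

The design point is that rollback is built into the deploy path itself, so an automated update script can run unattended: a failed update leaves the system on the last known-good version rather than down.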
Well, Alex, thanks for stopping by. Alex Polvi, the CEO of CoreOS, stopping by theCUBE at OpenStack Silicon Valley 2015. I'm Jeff Frick, you're watching theCUBE. We'll be back with our next segment after this short break.