From Austin, Texas, it's theCUBE. Covering OpenStack Summit 2016. Brought to you by the OpenStack Foundation and headline sponsors Red Hat and Cisco. Now here are your hosts, Stu Miniman and Brian Gracely.

Hi, welcome back to the Austin Convention Center. This is OpenStack Summit 2016. This is theCUBE. I'm Stu Miniman, joined by Brian Gracely. Happy to have on the program a first-time guest, Ajay Gulati, who's founder and CTO of a company named ZeroStack. Ajay, thank you for joining us.

Thanks for having me, guys, glad to be here.

All right, so Ajay, for those that don't know you, tell us a little bit about your background and what led you to start ZeroStack.

Sure, so prior to this, I was at VMware for a long time. I have worked on all layers of the VMware stack, have spent time on the hypervisor, vCenter, built a couple of features in all layers. And what I realized over time is that with the adoption of the public cloud, a lot of people loved the ease of use of public cloud, the fact that they didn't have to deal with a lot of hardware, software, putting things together, but many of them wanted it on-premises or in-house for different reasons. And the way a lot of companies were providing solutions in that space wasn't easy for customers to consume. And with that theme, we started ZeroStack to really make it easy for customers to deploy, consume, and manage a private cloud.

Yeah, I know the first time we met you in person was actually at AWS re:Invent and we were like, wait, no, no, no, you're an on-premises solution. And now here at OpenStack, of course, there's a lot of discussion on customers running that. Talk a little bit about what is actually in your solution and that operational model. I mean, that's been a big discussion this week and at lots of shows, as to how companies actually get the benefit of public cloud but actually run it in an operational model. It's not an easy thing to solve.
I think the way the traditional private cloud model has been is basically you take on the complete headache of getting the hardware, software, operations, management, all the pieces, and doing it yourself. And I think that model itself is fundamentally broken and it's hard for people to consume. At ZeroStack, what we have come up with is a new model of deploying a private cloud. It consists of two parts. One, there is a hyper-converged node that has the compute, storage, and networking as well as all the management software distributed across the nodes. Once you rack and stack the node, we actually have a SaaS layer, pretty much something like Salesforce, through which you actually consume the infrastructure. So you log into the SaaS layer like you're logging into a public cloud, but whatever operations you do, they actually happen on the infrastructure. And I think that model is very unique to what we do and it offers a lot of advantages as compared to a traditional private cloud model.

Just a clarification point, of course. You came from VMware, a lot of discussion about software, but there's hardware that you're offering as the full solution. Why'd you do that, and tell us a little bit about what makes up your total stack?

I think the reason the hardware is there is because it lets us have some control over the performance predictability and offer a complete solution, where we don't have to go to a customer and say, look, you need to bring the hardware, and now they have to figure out what hardware to bring in. And it's hard to have a huge HCL list as a startup company to begin with. Over time, I think we'll go to the model where we'll validate some other pieces of hardware from other vendors that customers can procure, put ZeroStack software on it, and build a cloud using that.

Yeah, so you're sort of part of the now infamous VMware mafia, lots and lots of ex-VMware people starting companies.
What's really interesting to me is almost everyone that I see that came out of VMware said, what we used to do was great, it served a purpose, but the operations side has got to get easier, it's got to get better. What do you hear as you talk to customers? Because obviously VMware operators sometimes want to feel like, I've got to get back on the GUI, I need to do things. You're now trying to make it easier for the business; how do the operators tend to view that, or how do customers respond to this idea that it becomes IT as SaaS as opposed to IT as a managed thing?

The way I think of VMware, I think VMware is a great technology company, and I think the way the company has progressed is bottom up, where you have a hypervisor that gives you some fundamental advantages in terms of resource utilization. Then you have vCenter, which has a lot of cool features for IT. And the company was essentially selling to IT and that was the primary focus. And I think over time, the company is moving up the stack from the hardware to the management of the hardware and hypervisor, and building cool features for IT like vMotion and others. Whereas if I look at AWS, I think they are going top down. They are starting with the apps and they are saying, look, the developer should be deploying the app and you should not worry as much about infrastructure. And I think that's where the big difference in philosophy lies: VMware admins used to love to do a lot of cool stuff on the hardware and infrastructure, but with cloud coming on board that is becoming more complicated, and people want developers to have self-service instead of making them go through IT. So I think there is a lot more pressure on IT to give something which is the next level beyond the basic virtualization and beyond some of the initial benefits of what VMware provided.
Yeah, so it's ultimately sort of that AWS ease of experience with the benefit that your data's local. It's your data, you can go see it, you can touch it, you can audit it, it's secure in ways that maybe it can't be in the public cloud.

Exactly, exactly. And you can also integrate with a lot of existing systems. So if you have, let's say, a lot of data sitting next to you and you want to have a cloud that can work on that data, then you have to have the cloud also sit next to it. It's much harder to ship 20, 30 petabytes of data to a public cloud and then start doing processing on that.

Right, right, great. Ajay, can you just step back for a second and give us some of the speeds and feeds on the company itself? How many employees do you have? What funding, and who are some of your major investors? And where's the product today? Is it shipping? Any customers actually in production?

Sure, so we started the company in June of 2014. So we are nearing about two years, and the company is somewhere around 35 to 50 people. And we are at Series B funding. We raised the Series A in June of 2014. That was about 6 million. We raised Series B in October of last year. That was about 16 million. So right now the company is pretty well funded and we are continuing to build the product. We did the general availability of the product about two months ago. And we are planning to release the next version in a month or so.

All right, so that's pretty rapid from company start to GA of the product. Can you talk a little bit about the stack? There's always that balance between how much you actually bake in, what features you have, the maturity of the model versus what's out there. I know you guys don't consider yourself hyper-convergence, it's really the public cloud operational model, but I hear those guys always comparing the software that they have and everything they built in.
So how do you respond to that, and what's in the stack, what kind of use cases really fit, and what things might be a little bit further down the road?

I mean, pretty much the company has spent all its time building software. And the software is designed to run on commodity hardware. It's just that we are providing a spec of the hardware where we have done all the testing. And in terms of the version one product itself, I think initially we focused on how do we build the cloud really fast. So we have built clouds for customers where we ship the box to them, they rack and stack it and they provide the IP address. And beyond that, you can go through the SaaS portal and you can get a cloud in less than 30 minutes. And that is something that people say is unheard of, getting a fully functional cloud running on a 2U box in less than 30 minutes. I think now we are focusing more on the application-level features, because one thing I noticed is that when we talked to customers, they said, okay, you have given us the cloud and you have given it really quickly, but now how do we port the applications on top of it? To handle that, one, we recently announced a feature called Z App Store. It's almost like the App Store on iPhone or Android, where you can go and you can say, I want this app. You click on it and it gets deployed on the phone. So we also have apps on the App Store like Jenkins, Hadoop, Cassandra, the ELK stack. And you can just click on that and you can deploy that fully functional app on the ZeroStack cloud within minutes. The second thing we are doing is making it easy for customers to port their apps across different platforms. So if they're running something on VMware that they want to port to the ZeroStack-based cloud, they can actually in a few steps migrate some of the workloads from that environment onto ZeroStack.
And I think that is helping us go to the next level in customer discussions, because now it's not just about building the cloud, it's about running apps on the cloud. And ultimately that's what people really want to do once they get a cloud to work with.

Okay, so the ability to do some migration, the ability to get sort of an App Store, almost like a SaaS within your local environment. What about, can you give developers just sort of raw resources? Can you give them VMs? Can you give them containers, or maybe a database as a service? Is that within the portfolio of how you're thinking about that as well?

Sure, so the developers already get a full self-service portal. So as part of the self-service portal, they can create VMs and volumes, attach the volumes to the VMs, they can create networks. These are private networks; they can put VMs on those networks. They can assign floating IPs to VMs to make them externally accessible. They can create load balancers and stitch them with the application that they are deploying. So all of these things they can already do with the platform. And I think with application templates, we are making it easier to deploy an app with a single click, rather than them having to stitch these things together with multiple different operations.

All right, so Ajay, we actually haven't talked about OpenStack yet. So explain how OpenStack fits into what you guys are doing.

So when we were starting the company with the goal of providing private cloud, we were thinking of what stack to integrate within the solution itself. And we actually looked at CloudStack, we looked at Eucalyptus, we looked at OpenStack, and given the maturity and adoption of OpenStack, we decided to go with that. So what we essentially have is customers get 100% OpenStack APIs on top of our platform. And what we have done is taken some of the core pieces of OpenStack and they are built in and baked into the platform.
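[Editor's aside] The single-click application templates described a moment ago amount to expanding one template into the same self-service operations a developer could perform by hand, in dependency order. A toy sketch of that idea, with entirely hypothetical names and template format (not ZeroStack's actual implementation):

```python
# Toy illustration (hypothetical format, not ZeroStack's real templates):
# one "app" entry expands into an ordered list of portal operations, so a
# single click replaces stitching network, VM, and load balancer by hand.
TEMPLATE = {
    "jenkins": [
        ("create_network", {"name": "jenkins-net"}),
        ("create_vm", {"name": "jenkins-master", "network": "jenkins-net"}),
        ("create_load_balancer", {"name": "jenkins-lb", "backend": "jenkins-master"}),
    ],
}

def deploy(app, execute):
    """Expand the template for `app` and run each operation in order.

    `execute(op, params)` stands in for the underlying portal/API call.
    Returns the list of operations performed, for auditing.
    """
    performed = []
    for op, params in TEMPLATE[app]:
        execute(op, params)
        performed.append(op)
    return performed
```

The point of the sketch is only the ordering: the network must exist before the VM, and the VM before the load balancer that fronts it.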
But then we have built a lot of stuff around it to make sure that you can get to a private cloud really quickly. It gives you OpenStack APIs and really makes the consumption easier for enterprises. I feel one of the challenges that pretty much every OpenStack Summit that you go to people talk about is the operational challenge of deploying OpenStack or consuming OpenStack. And I think that is something that we have solved using software that we have written.

Yeah. Help us understand a little bit the operational side of what you deliver. You talked about the cloud's up in 30 minutes. What goes on day two, day 365, in terms of not only you managing the system that runs under the covers, but are there going to be ways to help developers have a single click to do backup or restore? Those types of ugly but necessary operational things. How far does your operations extend and where does it start for the client?

That's a great question. So the way I look at OpenStack today is customers get OpenStack in two ways. One, you basically get a distribution from someone and you get a lot of professional services to support the deployment. And I think that's the primary way that most of the vendors have been providing it. What we have done is taken some of the work that those professional services would do and put that into software. So now our software essentially takes care of the deployment. It takes care of the self-healing. If any service dies, the software would detect it within a few seconds and automatically bring that service up on some other node. So part of the operational challenge is handled by the software which is built into the box. There is a second layer of operations that happens as part of our SaaS platform. So there we are collecting a lot of data in terms of health monitoring, events, stats. And now a single operator on the ZeroStack side can actually look at a lot of customer deployments and see that they are healthy and they are working fine.
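[Editor's aside] The self-healing behavior just described, detect a dead service and bring it up on another node, can be sketched as a single monitoring pass. This is a toy illustration under assumed names (`heal`, `is_alive`), not ZeroStack's distributed control plane:

```python
# Toy sketch of the self-healing idea described above: probe every
# managed service, and reschedule any dead one onto a different node.
# (Illustrative only; the real control plane is distributed and far
# more involved.)
import random

def heal(services, nodes, is_alive):
    """One monitoring pass over the cluster.

    services: {service_name: node} current placement
    nodes:    list of nodes in the cluster
    is_alive: probe function, is_alive(service_name, node) -> bool
    Returns the (possibly updated) placement.
    """
    for name, node in list(services.items()):
        if not is_alive(name, node):
            # Failed: restart the service on any node other than the one
            # where it died.
            candidates = [n for n in nodes if n != node]
            services[name] = random.choice(candidates)
    return services
```

In a real system the probe would be a heartbeat or health-check RPC, and placement would weigh load rather than choosing at random.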
So the economies of scale that you get in terms of operations are very similar to what a public cloud gets, where they have very few people operating a very large data center. But the difference is, in that case, there is one large data center operated by one large company. In our case, we are operating hundreds and thousands of micro clouds, and they are all connected through the ZeroStack SaaS layer.

So Ajay, I'm curious. When you're talking to customers, we've been digging in, what do customers really care about and what is really sticky in the environment? When it comes to the compute layer, a lot of times if I build a solution, some customers obviously have preferences and there's definitely differentiation there. But if it's an x86 platform and I build a good solution, when I'm going to deploy on AWS, I don't really think about what they're using. Hypervisor even, we're starting to see, VMware of course has a strong position in the marketplace, but there are some solutions out there; it's just a feature of what we're doing. I'm curious where OpenStack sits in the discussion. Does it fit in the discussion? When we're discussing it, OpenStack seems to come up later in the conversation. Does it even come up in the conversation with the customers, or is it all about that pain point and an operational model?

So I would say typically our first conversation with a customer is more around the private cloud, their use case and their IT, what are they trying to do with the next evolution of their IT? And OpenStack doesn't come up in the first discussion. Then we go to the technical demo typically, and at that point customers are like, okay, now tell me what kind of APIs are you providing? And at that point we say, look, we are providing 100% pure OpenStack APIs. And I think at that point people ask about the box as well, saying, oh, what kind of hardware is that? Is that something very proprietary?
Are you guys giving me something that is very unique that I cannot use for something else? And I think at that point we go into these details, but once people realize that it's essentially commodity hardware, it's all Intel-based, and we are adding a lot of software secret sauce on it, they are mostly comfortable with that.

Yeah, how does a customer, so a customer obviously starts with a certain amount of capacity in terms of nodes. How quickly, if they were to call you up and say, hey, look, we think we've got a project coming, maybe we want to do a little bit of an analytics project or something that's bigger, how long between them calling you and that additional capacity being up and running?

That's a great question. So one thing I would like to mention is what we offer is pretty much the world's smallest private cloud, if I look at the smallest size. We have a 2U node, it has four servers, and it's a fully functional, highly available private cloud. So you can literally start with that for a small project for a team. Now once you realize that this is something where you need more, or you need to add capacity, first of all, the SaaS layer actually does capacity planning. So we are collecting data and we do projections on the data to say you would run out of your CPU in, let's say, 27 days, you would run out of memory in 60 days, and we give those projections and we tell the customer that, look, you should add more boxes. And the solution is designed to be scale-out. So once you get the new box, you pretty much rack and stack it, it would show up in the SaaS layer as unconfigured, you say add to my cloud, and it becomes part of the cloud.
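[Editor's aside] The "you would run out of CPU in 27 days" projection described above is, at its simplest, a trend line fit to recent utilization samples, extrapolated to the point where usage hits capacity. A minimal sketch of that kind of calculation (illustrative only; ZeroStack's actual model is not public):

```python
# Minimal capacity-projection sketch: fit a least-squares line to
# (day, used) utilization samples and estimate how many days remain
# until the resource is exhausted.
def days_until_exhaustion(samples, capacity):
    """samples: list of (day, used) pairs; capacity: total available.

    Returns days from the last sample until projected exhaustion,
    or None if usage is flat or shrinking.
    """
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    slope = (sum((d - mean_x) * (u - mean_y) for d, u in samples)
             / sum((d - mean_x) ** 2 for d, _ in samples))
    if slope <= 0:
        return None  # no growth trend: no exhaustion projected
    intercept = mean_y - slope * mean_x
    last_day = samples[-1][0]
    return (capacity - intercept) / slope - last_day

# Example: CPU usage growing by ~1 core/day toward a 64-core cluster.
history = [(d, 30 + d) for d in range(10)]  # days 0..9, usage 30..39
print(round(days_until_exhaustion(history, 64)))  # prints 25
```

A production system would smooth noise and handle seasonality, but the shape of the answer, "N days until you need another box," is the same.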
That takes probably four or five minutes, but the longest step is the procurement of the box, because once the customer decides to add a box, they tell us, and now it takes some time for us to ship the box and get the box there. I would say that in the worst case it is probably a two-week window, and that is what we try to get ahead of with that capacity planning feature.

Gotcha, so the components are all kind of standard, you're using OpenStack, you've got standard components. Talk a little bit about your core IP. What do you have that isn't easily replicable, given where the team came from?

Sure, so there are two pieces of core IP that we have built. One piece is a distributed control plane that runs across all the nodes on the platform itself, and that control plane is like a mini operating system that monitors all the services that are running; it knows how to bring up everything, it knows how to migrate them, it knows how to monitor them and restart if there is a problem. So that is the piece that we have built, and for us OpenStack is essentially a set of services that we run. The second big piece of IP is the model with the cloud and the SaaS layer itself, where we are collecting the data and we are providing the economics in terms of the server-to-operator ratio being very high, where we can actually monitor and manage a lot of environments using the SaaS layer. And then the consumption layer being SaaS lets us add features very quickly. So for example, we have customers who asked us saying, oh, we want to add this new feature in the VM workflow, and we would add it and two weeks later the feature would show up. I think that's another big differentiation as compared to software that is being packaged and shipped, because that would come out once every six months or once every year, but customers are not going to wait for a new feature for one year.
In our case, we can add that very quickly, and things like adding the Z App Store, things like helping them with application mobility, a lot of these things we can add on the SaaS side itself without touching the platform.

All right, so Ajay, I want to give you the last word for the interview. If you look forward, what are the things to expect going forward? Do containers fit into the discussion for what you're doing? Anything else you want to highlight as a final takeaway?

So I would say going forward, we are moving up from just building the cloud to deploying apps on the cloud, and that's where the Z App Store fits in, that's where application mobility, or porting applications from different platforms, fits in. Containers obviously are something that we have been experimenting with, and that is something we are planning to release later this year on top of the platform. So the way I see it, ultimately for IT, it's essentially about you want to deploy an app, and you should be able to choose whether the app belongs in a private cloud or a public cloud, and you would be able to choose that from our SaaS layer. The second decision should be does the app belong in a VM or in a container, and we'll make that choice also available to customers. And then on the same unified platform, you would be able to run things either in a VM or a container, whatever you choose to do, because these are just packaging mechanisms for the app. And I think that is ultimately the nirvana that IT needs going forward in the cloud era.

All right, well, Ajay Gulati of ZeroStack, thank you for joining us. We spend a lot of time at these events talking about the builders, and of course, any way we can simplify that environment, making it easy for the operators to focus on the business, is all good. So we'll be right back with lots more coverage here from OpenStack Summit 2016 after this quick break. You're watching theCUBE.