Live from Austin, Texas, it's theCUBE, covering KubeCon and CloudNativeCon 2017. Brought to you by Red Hat, the Linux Foundation, and theCUBE's ecosystem partners. Okay, welcome back everyone, live here in Austin, Texas, theCUBE's exclusive coverage of KubeCon and CloudNativeCon, its third year, not even third year, I think it's second year, not even three years old as a community, growing like crazy, over 4,500 people here, combined with all the other shows, it's double what it was before. I'm John Furrier, co-founder of SiliconANGLE, with Stu Miniman, analyst here. Next guest, Gabe Monroy, who's lead PM, product manager for containers at Microsoft Azure. Gabe, welcome to theCUBE. Thanks, glad to be here, big fan of the show. Great to have you on. I mean, obviously container madness, we've got past that, now it's Kubernetes madness, which really means that the evolution of the industry is really starting to get some clear lines of sight, it's the straight and narrow, if you will, people are starting to see a path towards scale, developer acceleration, more developers coming in than ever before, there's the cloud native world, Microsoft's doing pretty well with the cloud right now, numbers are great, hiring a bunch of people. Give us a quick update, big news, what's going on? Yeah, so a lot of things going on, I'm just excited to be here. I think, for me, I'm new to Microsoft, right? I came here about seven months ago by way of the Deis acquisition, and I like to think of myself as kind of representing part of this new Microsoft trend, right? My career was built on open source, right? I started a company called Deis, and we were focused on really Kubernetes-based solutions, and here at Microsoft, I'm really doing a lot of the same thing, but with Microsoft's cloud as sort of the vehicle that we're trying to attract developers to. What news do you guys have here on services?
Yeah, so we've got a bunch of things we're talking about. So the first is something I'm especially excited about, and this is the virtual kubelet. Now, I'll tell a little bit of a story here, because I think it's actually kind of fascinating. So back in July, we launched a thing called Azure Container Instances, and what ACI was, first-of-its-kind service, containers in the cloud, right? Just run a container, it runs in the cloud, it's micro-billed, and it is invisible infrastructure, so part of the definition of serverless there. As part of that, we wanted to make it clear that if you were going to do complex things with these containers, you really need an orchestrator, so we released this thing called the ACI connector for Kubernetes along with it, and we were excited to see people were just so drawn into this idea of serverless Kubernetes, right? Kubernetes didn't have the VMs associated with it, and folks at hyper.sh, who have a similar serverless container offering, took our code base and forked it and did a version of theirs, and Brent and I were talking and we were like, oh man, there's something here, we should explore this. And so we got some engineers together, we put a lot of work in, and we announced now, in conjunction with Hyper and others, this virtual kubelet that bridges the world of Kubernetes with the world of these new serverless container runtimes like ACI. Okay, can you explain that a little bit? Sure. People have been coming in saying, wait, does serverless replace it? How does it work? Is Kubernetes underneath, though? Yeah, so I think the best place to start is the definition of serverless, and I think serverless is really the conflation of three things. It's invisible infrastructure, it is micro billing, and it is an event-based programming model. That's sort of the classical definition, right?
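To make the virtual kubelet bridge concrete, here is a hedged sketch of what a pod targeting a virtual kubelet node can look like. The node labels and the `virtual-kubelet.io/provider` toleration follow the virtual kubelet project's general conventions, but treat the exact names and values as illustrative assumptions rather than a definitive spec.

```yaml
# Hedged sketch: a pod scheduled onto a virtual kubelet node, which in
# turn runs the container in a serverless backend such as ACI.
apiVersion: v1
kind: Pod
metadata:
  name: aci-demo
spec:
  containers:
  - name: web
    image: nginx
  # Steer the pod to the virtual node (label names are illustrative).
  nodeSelector:
    type: virtual-kubelet
  # The virtual node is typically tainted so ordinary pods avoid it;
  # this toleration opts the pod in.
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
```

The design point is that Kubernetes keeps doing the orchestration (scheduling, desired state), while the kubelet it talks to is virtual and has no VM behind it.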
Now, what we did with ACI and serverless containers is we took that last one, the event-based programming model, and we said, look, you don't need to do that. If you want to write a container, anything that runs in a container can work, not just functions, and so that is, I think, a really important distinction. I believe it's really the best of serverless: the micro billing and invisible infrastructure. Well, that's built in, isn't it? Correct, yeah. What are the biggest challenges of serverless? Because first of all, it's nirvana in the mind of a developer who doesn't want to deal with plumbing. Yes. Meaning networking plumbing, storage, and a lot of the details around configuring, just program away, be creative, spend their time building. Yes. What are the big differences between those? What are the issues and challenges that serverless has for people adopting it, or is it frictionless at this point? Well, it depends on what you're talking about, right? So I think for functions, it's very simple to get a function service, add your functions and deploy functions and start chaining those together. And people are seeing rapid adoption, and that's progressing nicely. But there's also a contingent of folks, who are represented here at the show, who are really interested in containers as the primitive and not functions, right? Containers are inclusive of lots of things, functions being one of them. And betting on containers as, like, the compute artifact is actually a lot more flexible and solves a lot more use cases. So we're making sure that we can streamline ease of use for that, while also bringing the benefits of serverless. Really, the way I think of this is marrying AKS, our managed Kubernetes service, with ACI, our serverless containers. So you can get to a place where you can have a Kubernetes environment that has no VMs associated with it.
Like literally zero VMs, you scale the thing down to zero, and when you want to run a pod or container, you just pay for a few seconds of time, and then you kill it and you stop paying for it, right? All right, so talk about customers. What's the customer experience that you guys are going after? Did you have any beta customers? Who's adopting your approach? Can you highlight some examples of some really cool ones, and you can name names or you can't? Anecdotal data will be good. Yeah, well, I think on the announcement blog post page, we have a really great video of Siemens Healthineers, I believe is the name, but basically a healthcare company that is using Kubernetes on Azure, AKS specifically, to disrupt the healthcare market and to benefit real people. And to me, I think it's important that we remember that we're deep in this technology, right? But at the end of the day, this is about helping developers who are in turn helping real-world people. I think that video is a good example. And what was their impact, speed? Speed of development? Yeah, I mean, I think it's really, the main thing is agility, right? People want to move faster, right? And so that's the main benefit that we hear. I think cost is obviously a concern for folks, but I think in practice, the people cost of operating some of these systems tends to be a lot higher than the infrastructure costs, right, when you stack them up. So people are willing to pay a little bit of a premium to make it easier on people. And we see that over and over again. Okay, why don't you speak to kind of the speed of a company the size of Microsoft? So Deis, of course, was already focused on Kubernetes before the acquisition by Microsoft. I mean, big cloud companies are moving really fast on Kubernetes. I've heard complaints from customers, like, I can't get a good roadmap because it's moving so fast.
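The scale-to-zero, pay-per-second model described above can be illustrated with simple arithmetic. This is a hedged sketch: the rate constants and the `container_run_cost` helper are hypothetical placeholders, not real Azure Container Instances pricing or any actual Azure API.

```python
# Hedged sketch of per-second "micro billing" for serverless containers.
# The rates below are hypothetical, for illustration only.

VCPU_RATE_PER_SEC = 0.0000125   # hypothetical $ per vCPU-second
MEM_RATE_PER_SEC = 0.0000014    # hypothetical $ per GB-second

def container_run_cost(vcpus: float, memory_gb: float, seconds: float) -> float:
    """Cost of running one container for a given number of seconds."""
    return seconds * (vcpus * VCPU_RATE_PER_SEC + memory_gb * MEM_RATE_PER_SEC)

# A pod that runs for 30 seconds and is then killed costs only those
# 30 seconds -- with zero VMs, a scaled-to-zero cluster costs nothing.
burst = container_run_cost(vcpus=1, memory_gb=1.5, seconds=30)
idle = container_run_cost(vcpus=1, memory_gb=1.5, seconds=0)
print(f"30s burst: ${burst:.6f}, scaled to zero: ${idle:.6f}")
```

The contrast with a VM-backed node pool is that the idle term there would be nonzero around the clock, whether or not any pod is running.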
You know, I would say that was one of the big surprises for me joining Microsoft, just how fast things move inside of Azure in particular, right? And I think it's terrific. You know, I think that there's a really good focus on making sure that we're meeting customers where they are and building solutions that meet the market, but also just executing and delivering, and doing that with speed. You know, one of the things that is most interesting to me is, like, the geographic spread. Microsoft is in, you know, so many different regions, you know, more than any other cloud, compliance, you know, certifications. We take all that stuff really seriously, and being able to do all those things, be the enterprise-friendly cloud while also moving at this breakneck pace in terms of innovation, it's really spectacular to watch from the inside. A lot of people don't know that when they think about Azure, they think, oh, they're copying Amazon, but Microsoft has tons of data centers. They've had browsers, they're all over the world, so it's not like they're foreign to region areas. I mean, they're everywhere. Microsoft is everywhere, and not only is it not foreign, I mean, you've got to remember, Microsoft is an enterprise software company at its core, right? We know developers, that is what we do. And going into cloud in this way is just, it's extremely natural for us. And, you know, I think that the same can't really be said for everyone who's trying to move into cloud. I mean, we've got a history of working with developers, building platforms, you know, we've got an entire division devoted to developer tooling, right? I want to ask about two things that came up, that come up a lot. One is very trendy, one is kind of not so trendy, but super important. One is AI. Yes. AI, with software going to impact storage, and with virtual kubelets, this is going to change the storage game, it's going to enhance the machine learning and AI capability.
And the other one is data warehousing, or data analytics. Two very important trends. One is certainly a driver for growth and has a lot of sex appeal, that's the AI and machine learning. But all the analytics being done in cloud, whether it's on an IoT device, this is like a nice use case for containers and orchestration. Your comment and reaction to those two trends? Yeah, and you know, I think that AI, and deep learning generally, is something that we see driving a ton of demand for container orchestration. I've worked with lots of customers, including folks like OpenAI, on their Kubernetes infrastructure. You know, running on Azure today, something that Elon Musk actually proudly mentioned. That was a good moment for Azure containers. Get a free Tesla on that, that new one that goes from zero to 100 in 4.5 seconds. Right, yeah. So you've got a good customer in OpenAI. What was the impact for them? What was the big win? Well, you know, this is ultimately about empowering people. In this case, they happen to be data scientists. You know, to get their job done, and in a way where, I mean, I look at it as, you know, we're doing our jobs in the infrastructure space if the infrastructure disappears, right? Like, the more conceptual overhead we're bringing to developers, you know, that means we're not doing our job. All right, so the question then specifically is, deep learning and AI are enhanced by containers and Kubernetes? Absolutely. In what order of magnitude? I don't know, but an order-of-magnitude enhancement, I would argue, right? So just underlying that, you know, the really important piece is we're talking about data here. Yes. And one of the things we've been, you know, kind of trying to tackle for the last couple of years in containers is, you know, storage. And that's carried over to Kubernetes. How's Microsoft involved? You know, what's your, you know, prognosis as to where we go with kind of cloud-native storage? Yeah, that's a fascinating question.
And I actually, so back in the early days when I was still contributing to Docker, earlier in my career I was one of the largest external contributors to the Docker project, I actually wrote some of the storage stuff. And so I've been going around since Docker's inception in 2013 saying, don't run databases in containers. It's not because you can't, right? You can. But just because you can doesn't mean you should, right? And I think that, you know, as someone who's worked in my career, you know, on the operations side, you know, things like an SLA mean a lot. And so this leads you to another one of our announcements at the show, which is the Open Service Broker for Azure. Now, what we've done, you know, thanks to the Cloud Foundry Foundation, who basically took the service broker concept and spun it out, we now are able to take the world of Kubernetes and bridge it to the world of Azure services, data services being some of the most interesting. Now, the demo I like to show for this is WordPress, which, by the way, it sounds silly, but WordPress still powers tons of the web today. WordPress is a PHP application and a MySQL database. Well, if you're going to run WordPress at scale, are you going to want to run that MySQL in a container? Probably not. You're probably going to want to use something like Azure Database for MySQL, which comes with an SLA, backup, restore, DR, an ops team by Microsoft to manage the whole thing, right? So, but then the question is, well, I want to use Kubernetes, right? So how do I do that, right? Well, with the Open Service Broker for Azure, we actually shipped a Helm chart. We can Helm install Azure WordPress, and it will install in Kubernetes the same way you would a container-based system, and behind the scenes it uses the broker to go spin up a Postgres, sorry, a MySQL, and dynamically attach it. Now, the coolest thing to me about this, yeah, is the agility. But I think that one of the underrated features is the security.
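The Helm flow described here might look roughly like the following command sketch. The repository URL and chart name are assumptions based on the Open Service Broker for Azure project and may differ from what actually shipped; treat this as pseudocode for the flow, not exact commands.

```
# Hedged sketch of the OSBA WordPress demo flow (names may differ):
helm repo add azure <OSBA chart repository URL>
helm install azure/wordpress --name my-blog

# Behind the scenes, the Open Service Broker for Azure provisions a
# managed Azure Database for MySQL instance (with its SLA, backups, DR)
# and binds it to the WordPress pods running in Kubernetes.
```

The point of the design is that the chart looks and installs like any other Kubernetes application, while the stateful piece lands on a managed service instead of in a container.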
The developer who does that doesn't ever touch credentials. The passwords are automatically generated and automatically injected into the application. So you get to do things like rotations without ever touching the app. So we're publishing on WordPress. We'd love you to help us with scale if we did Azure. Absolutely, after this is over, we'll go set it up. I mean, I love WordPress, but it breaks down. Well, this is where auto-scaling shows a little bit of its capabilities: PHP says, we'd like to have more instances. That would be a use case. Okay, Redshift in Amazon wasn't talked about much at re:Invent last week. We don't hear a lot of talk around the data warehouse, which is a super important way to think about collecting data in cloud. And is that going to be an enhanced feature? Because people want to do analytics. There's a huge analytics audience out there that is moving off of Teradata, and, you just have a lot of analytics at Microsoft, they might have moved from Hadoop or Hive or somewhere else. So there's a lot of analytics workloads that would be prime, or at least potentially prime, for Kubernetes. Yeah, you know, I think that it's on the radar. No, no, I think it's interesting. I mean, for us, I personally think using something like the Open Service Broker API to bridge to something like a data lake, or some of these other Azure hosted services, is probably the better way of doing that. Because if you're going to run these massive data warehouses on containers, yes, you can do it, but the operational burden is high, it's extremely high. You were making the same point about the database earlier. Yeah, it's the same general point there. Now, can you do it? Do we see people doing it? Absolutely, right? People do things they shouldn't be doing. That's IT.
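The credential flow described at the top of this exchange, broker-generated passwords reaching the app without a developer ever seeing them, typically surfaces in Kubernetes as a Secret referenced from the pod spec. A hedged sketch follows; the secret and key names are illustrative assumptions, not the exact names OSBA emits.

```yaml
# Hedged sketch: the service-broker binding writes a Secret, and the
# application consumes it via an environment variable, so no human
# ever handles the password and rotation never touches the app.
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
  - name: wordpress
    image: wordpress
    env:
    - name: WORDPRESS_DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: wordpress-mysql-binding   # created by the broker binding
          key: password
```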
Yeah, of course, and then back to the deep learning example, those are typically big, large training models that have similar characteristics. All right, as a newbie inside Azure, not new to the industry and community, share some color. What's it like in there? Obviously, a number two to Amazon, you guys have great geographic presence. You're adding more and more services every day at Azure. What's the vibe? What's the mojo like over there? Share some inside baseball. Yeah, I've got to say, it's a really exciting place to work. Things are moving so fast. We're growing so fast. Customers really want what we're building. And honestly, day to day, I'm not spending a lot of time looking out. I'm spending a lot of time dealing with enterprises who want to use our cloud products. And what are the top things that you have on your PM list, the top stack-ranked features people want? I think a lot of this comes down to, in general, I think this whole space is approaching a level of enterprise friendliness and enterprise hardening, where we want to start adding governance and adding security and adding role-based access controls across the board, and really making this palatable to a high-trust environment. So I think that's a lot of our focus. Stability, ease of use? Stability and ease of use are always there. I think the enterprise hardening, and things like VNet support for all of our services, VNet service endpoints, those are some things that are high on the list. Gabe Monroy, lead product manager for containers at Microsoft Azure. Great to have you on. I'd love to talk more about geographies and moving apps around the network and multi-cloud, but another time. Thanks for coming on. It's theCUBE live coverage. I'm John Furrier, co-founder of SiliconANGLE, here with Stu Miniman. We'll be back with more live coverage after this short break.