Live from Barcelona, Spain, it's theCUBE, covering KubeCon CloudNativeCon Europe 2019, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners.

Welcome back. This is theCUBE's coverage of KubeCon CloudNativeCon 2019. I'm Stu Miniman with Corey Quinn as my co-host, even though he says KubeCon. And joining us on this segment, we're not going to debate how we pronounce certain things, but I will try to make sure that I get Bridget Kromhout correct. She's a principal cloud advocate at Microsoft. Thank you for coming back to theCUBE.

Thank you for having me again. This was fun.

First of all, I do have to say, the bedazzled shirt is quite impressive. We always love the sartorial view that we get at a show like this, because there are some really interesting shirts, and there's one guy in a three-piece suit.

There is. It's the high style; got to have that.

Oh, absolutely. Bringing some class to the joint. Wearing a suit is my primary skill.

I will tell you that, yes, they sell this shirt on the Microsoft company store, and yes, it's only available in unisex fitted, which is to say, much like Alice Goldfuss likes to put it, ladies is gender neutral. So to all of the gentlemen who say, but I have too much dad bod to wear that shirt, I say, well, you know, get your bedazzlers out.

I say it's not dad bod, it's a father figure, but I digress.

Exactly.

All right. So Bridget, you're doing some speaking at the conference. You've been at the show, you know, a few times. Give us a little bit of an overview of what you're doing here and your role at Microsoft these days.

Sure, absolutely. So my talk is tomorrow, and I'm going to go with: it's a vote of confidence that they put your talk on the last day at 2 p.m., instead of the, oh gosh, are they trying to bury it? But no, I have scheduled enough conferences myself that I know you have to put some stuff on the last day that people want to go to, or they're just not going to come.
And my talk, which I'm co-presenting with my colleague Jessica Deen, is about Helm 3. I think a lot of times with these open source shows, people say, oh, why do you have to have a lot of information about the third major release of your project? Why? I mean, it's just an iterative release. It is, and yet there are enough significant differences that it's valuable to talk about, at least for the end user experience.

Yeah, so it actually got an applause in the keynote. You know, there are certain shows where people are hooting and hollering for every different compute instance that gets released, and you look at it a little bit funny. But in the keynote, there was a singular moment, which was the removal of Tiller. Corey and I have been trying to get feedback from the community as to what this all means. From my perspective, it seemed like a very strange thing. It's: we added this, yay; we added this other thing, yay; we're taking this thing, ripping it out, throwing it right into the garbage, and the crowd goes nuts.

And my two thoughts are, first, that probably doesn't feel great if that was the thing you spent a lot of time working on. But secondly, I'm not as steeped in the ecosystem as perhaps I should be, and I don't really know what it does. So what does it do, and why is everyone super happy to consign it to the rubbish bin of history?

Right, exactly. So first of all, I think it's 100% impossible to be an expert on every single vertical in this ecosystem. Look around: KubeCon has 7,000-plus people and about a zillion vendor booths, all doing something that sounds slightly overlapping, and it's very confusing. So in the Helm case, if people want to look, or we can say there's a link in the show notes, people can go read on helm.sh/blog. We have a seven-part, I think, blog series about exactly what the history and the current release are about.
But the TL;DR, too long, didn't follow the link, is that Helm 1 was pretty limited in scope. Helm 2 was certainly more ambitious, and it was born out of a collaboration between Google, actually, and a few other project contributors, and Microsoft. And Tiller came in with the Google folks, and it really served a need at that specific time. It was a server-side component, and this was an era when role-based access control in Kubernetes was well-nigh non-existent, so there were a lot of security components that you kind of had to bolt on after the fact. And once we got to, I think it was Kubernetes 1.7, maybe 1.8, the security model had matured enough that instead of it being great to have this extra component, it became burdensome to try to work around the extra component.

And I think that's actually a really good example of, like you were saying, people get excited about adding things. People sometimes don't get excited about removing things, but I think people are excited about the work that went into removing this particular component, because it ends up reducing the complexity of the configuration for anyone who's using the system.

It felt very spiritually aligned, in some ways, with the announcement of OpenTelemetry, where you're taking two projects and combining them into one. Where it's, oh, thank goodness, one less thing that I have to think about or deal with; instead of A or B, I just mix them together, and hopefully it's a chocolate-and-peanut-butter moment.

Delicious.

One of the topics that's been pretty hot in this ecosystem for the last, I'd say, two years now has been service mesh, and talk about some complexity. You know, I'll ask someone, which one of these are you using? Oh, I'm using all three of them, and this is how I use them in my environment. So there was an announcement spearheaded by Microsoft, the Service Mesh Interface. Give us the high level of what this is.
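To make the Tiller discussion concrete: with Tiller gone, the Helm 3 client talks to the Kubernetes API directly using the caller's own kubeconfig credentials, so the cluster's RBAC rules apply per user. A minimal sketch, assuming a working cluster; the release and chart names here are invented:

```shell
# Helm 2 (historical): a server-side Tiller pod, usually running with
# broad cluster permissions, had to be installed before anything else.
#   helm init            # Helm 2 only; this command is gone in Helm 3

# Helm 3: no server-side component. The client uses your kubeconfig,
# so Kubernetes RBAC decides what each user may install or change.
helm install my-app ./my-chart --namespace team-a

# Release state now lives in Secrets in the release's namespace,
# rather than in a store managed by a shared Tiller.
helm list --namespace team-a
```

Because release state is namespaced, two teams can use Helm in the same cluster without sharing a single privileged in-cluster component, which is the complexity reduction being applauded.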
So first of all, the SMI acronym is hilarious to me, because I've got to tell you, as a nerdy teenager I went to math camp in the summertime, as one did, and it was named SMI, the Summer Mathematics Institute. And I'm like, awesome, now we have a work project named that. Happy memories of lots of nerdy math, and of the first Unix system that I played with.

But what's great about that particular project, and you're right, it's very much aligned with: you're an enterprise. You would very much like to do enterprisey things, like being a bank or being an airline or being an insurance company. And you super don't want to look at the very confusing CNCF project map and go, oh, I think we need something in that quadrant, and then set your ships in that direction and hope you'll get to what you need.

And especially, as you mentioned, it basically standardizes things such that whichever project you want to use, whichever of the N, and we used to joke about the JavaScript framework of the week, but I'm pretty sure the service mesh project of the week has outstripped it in terms of the speed of new projects being released all the time. A lot of end user companies would very much like to start doing something and have it work. And if the adorable startup that had all the stars on GitHub and the two contributors ends up, and I'm not even naming a specific one, I'm just saying there are many projects out there that are great technically, and maybe they don't actually plan on supporting your LTS. And that's fine. But if we end up with this interface such that whatever service mesh, that's a hard word, whatever service mesh technology you choose to use, you can be confident that you can move forward and not have a horrible disaster later.

Right. And I think that's something a lot of developers run into when left to our own devices.
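The portability argument is easiest to see in the SMI spec itself: it defines a small set of Kubernetes custom resources (traffic splitting, traffic access control, traffic metrics) that any conforming mesh can implement. A sketch of a TrafficSplit, assuming a conforming mesh is installed; the service names and weights are invented, and field shapes varied across early revisions of the spec:

```shell
# Declare a canary split in mesh-neutral terms: 90% of traffic to v1,
# 10% to v2. Which mesh enforces it is an implementation detail.
kubectl apply -f - <<'EOF'
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: checkout-rollout
spec:
  service: checkout        # the root service clients address
  backends:
  - service: checkout-v1
    weight: 90
  - service: checkout-v2
    weight: 10
EOF
```

If the mesh underneath changes later, the resource above can, in principle, stay the same; that is the "not have a horrible disaster later" promise.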
And in my case, the devices are pretty crappy. It becomes: I want to get this thing built and up and running and working, and then when it finally works, I do a happy dance, and no one wants to see that, I promise. It becomes a very different story when, okay, now how do you maintain this? How do you responsibly keep this running? And it's, well, I just got it working. What do you mean, maintain it? I'm done. My job is done. I'm going home now. It turns out that when your business isn't being the most clever person in the room, you sort of need to have a longer-term plan around that. And it's nice to see that level of maturation being absorbed into the ecosystem.

I think the ecosystem may finally be ready for it. And it's easy for us to look at examples of the past; people kind of shake their heads at OpenStack as a cautionary tale of sprawl and whatnot. But this is a thriving, which means growing, which means changing, which means very busy ecosystem. But like you're pointing out, if your enterprises are going to adopt some of this technology, they look at it, and everyone here was eating cupcakes or whatever for Kubernetes' fifth birthday. To an enterprise, just because this got launched in 2014, okay, June 2014, that sounds kind of new. Like, we're still running that mainframe that is still producing business value. And actually, that's fine.

I mean, I think this maybe is one of the great things about a company like Microsoft: we are our customers. We also respect the fact that if something works, you don't just YOLO a new thing out into production to replace it. For what reason? What is the business value of replacing it? And I think that's why this kind of Unix philosophy of very modular pieces fits this ecosystem. We were talking about Helm a little earlier, but there's also Draft, Brigade, et cetera.
Like Porter, the CNAB spec implementation stuff, and the Cloud Native Application Bundles, which, that's a whole mouthful.

Well, no disrespect to your sparkly shirt, but chasing the shiny thing because it's new and exciting is not necessarily a great thing. I've heard some of the shiny squad that were on the show floor earlier complaining a little bit about the keynotes, that, well, there haven't been a whole lot of new service and feature announcements. And my opinion on that is: feature, not bug. It turns out most of us have jobs that aren't keeping up with every new commit to an open source project.

This is, I think, what you were talking about before, this idea of: I'm the developer. I YOLOed this out into production. It is definitely production grade, as long as everything stays on the happy path and nothing unexpected happens, and I probably have error handling. And yay, we had the launch party. We're drinking and eating and we're happy, and we don't really care that somebody is getting paged and it's probably burning down and a lot of human misery is being poured into keeping it working.

I like to think that, considering we're paying attention to our enterprise customers and their needs, they're pretty interested in things that don't just work on day one, but work on day two, and hopefully day 200, and maybe day 2,000. And that doesn't mean you ship something once and go, okay, we don't have to change it for three years. No, you ship something, then you keep iterating on it. You keep fixing things. Sure, you want features, but stability is a feature. And customer value is a feature.

I'm glad you brought that up. Last thing I want to ask you, because Microsoft's a great example: if you're an Azure customer, I don't ask you what version of Azure you're running, or whether you've done the latest security patch, because Microsoft takes care of you.
Now, your customers are pulled between their two worlds: oh wait, I might have gotten rid of Patch Tuesdays, but I still have to worry about and maintain that environment. How are they dealing with that new world while certain things are going to stay the old way they have been since the '90s or longer?

I mean, obviously it's a very broad question, and I can really only speak to the Kubernetes space, but I will say that the customers really appreciate, and this goes for all the cloud providers, when there is something like the dramatic CVE that we had in December [CVE-2018-1002105], for example. It's like, oh, every Kubernetes cluster everywhere is horribly insecure. That's awesome. I guess your API gateway is also an API welcome mat for everyone who wants to do terrible things to your clusters. All of the vendors, Microsoft included, had their managed services patched very quickly. It's just like your Heartbleeds of the world: if you rolled your own, you are responsible for patching, maintaining, and securing your own.

And this is really that tension, that continuum we always see our customers on. They probably have a data center full of, you know, vSphere and sadness, and they would very much like to have managed happiness. That doesn't mean they can easily pick up everything in the data center they have a lease on and move it instantly, but we can work with them to make sure that, hey, say you want to run some Kubernetes stuff in your data center and you also want to have some AKS. Hey, there's this open source project that we instantiated, and worked on with other organizations, called Virtual Kubelet. There was actually a talk about it happening, I think in the last hour, so people can watch the video of that. And we now have virtual node, our product version of it, in GA.
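For readers curious about the virtual node feature mentioned here: in AKS it is delivered as a cluster add-on that schedules pods onto Azure Container Instances via Virtual Kubelet. A hedged sketch using the Azure CLI; the resource group, cluster, and subnet names are placeholders, and exact flags may differ by CLI version:

```shell
# Enable the virtual node add-on on an existing AKS cluster
# (requires advanced networking and a dedicated subnet).
az aks enable-addons \
  --resource-group my-rg \
  --name my-aks-cluster \
  --addons virtual-node \
  --subnet-name my-virtual-node-subnet

# The virtual node then appears alongside the regular VM-backed nodes;
# pods scheduled onto it run on Azure Container Instances.
kubectl get nodes
```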
And I think this is kind of that continuum: yes, of course your early adopters want the open source to play with. Your enterprises want it to be open source so that they can make sure their security team is happy having reviewed it. But like you're saying, they would very much like to consume a service so that they can get to business value. They don't necessarily want to take, you know, Kelsey's wonderful Kubernetes the Hard Way tutorial and put that in production. It's not that they can't; these are smart people, they absolutely could do that. But then they've spent their innovation tokens, as Dan McKinley's "Choose Boring Technology" blog post puts it. It's not wrong. It's not that boring is the goal; it's that you want the exciting to be in the area that is producing value for your organization. That's where you want most of your effort to go.

So if you can use well-vetted open source that is a cross-industry standard, stuff like SMI that is going to help you use everything you chose, wisely or not so wisely, and integrate it, then hopefully you don't spend a lot of time redeveloping things. If you redevelop the same applications you already had, I don't think anybody at the end of the quarter is getting leveled up to VP for wasting time. So I think that is one of the things Microsoft is so excited about with this kind of open source stuff: our customers can get to value faster, and everyone we collaborate with in the other clouds, and all of these vendor partnerships you have on the show floor, can keep the ecosystem moving forward. Because I don't know about you, but I feel like for a while we were all building different things, instead of, for example, managed services for something like Kubernetes. I mean, a few jobs ago I was at a startup where we built our own custom container platform, as one did in 2014. And we assembled it out of all the Legos.
We built it out of, I think, Docker and Packer and Chef and AWS at the time, and a bunch of janky bash, because if someone tells you there's no janky bash underneath your homegrown platform, they are hiding it. It's always a lie. There's definitely bash in there. They may or may not be checking exit codes. But we all were doing that for a while; we were all building container orchestration systems because we didn't have a great industry standard. Awesome, we're here at KubeCon; obviously Kubernetes is a great industry standard. But everybody who wants to chase the shiny is like, but service meshes! When I reviewed talks for KubeCon Copenhagen, it was like 50 or 60 almost identical service mesh talk proposals. And that was last year; now everyone is like, serverless! And it's like, you know, you still have servers. You just don't have to see them, which is great. You still have them. I think that hype train is going to keep happening, and what we need to do is make sure we keep it usable for what the customers are trying to accomplish. Does that make sense?

It does, and unfortunately we're going to have to leave it there. Thank you so much for sharing everything with our audience here. For Corey, I'm Stu. We'll be back with more coverage. Thanks for watching theCUBE.