So I know what I'm talking about. Well, we'll see after we've finished here. So I wasn't the Windows user with the Windows phone; it wasn't me. I don't know who it was, but I work in the Azure Linux team. My job is to grow Linux faster than we grow Windows. There's a whole load of people in Microsoft who, funnily enough, want to grow Windows faster than Linux. We're winning. So that's an interesting thing. I joined Microsoft about five years ago. My background is very much in open source; I'm very active in the Apache Software Foundation and a number of other projects. And the service I worked on was, as far as I'm aware, the first, though certainly not now the only, service that we deployed using 100% open source software.

So what I want to talk about is the question: are we there yet? And in order to answer that question, you have to know where "there" is. I have young kids. I hear "are we there yet?" all the time from the back of the car, and they've got no idea where we're going. So let's have a look at where we're going. The panel talked about this a little earlier on. This is a Gartner hype curve. I'm sure many of you have seen it before. It's a consultancy view that simplifies the world in ways that are not realistic, but nevertheless it is a useful model for structuring a conversation. Over here on the left-hand side, somebody comes up with a smart idea, and they create an early-stage, innovative product. It's not complete. It doesn't do everything we think it's going to do. There's a lot of customization needed, a lot of hand-holding, to make it work for us. But it has a huge amount of promise, and consequently people get excited about it. They come along and try to use this technology. They do the configuration, and we get really, really excited, more excited all the time. Expectations, which is the vertical axis, continue to rise.
The media, the blogosphere, all of that starts to pick up, and everybody gets more and more excited. Until eventually there are enough people who realize that it's not quite ready that the negative press starts. And once that starts, and people start complaining, ah, it doesn't work, it's rubbish, we drop down into the trough of disillusionment. But a few people stick around. A few people say, no, the promise is real. This is a really innovative, really exciting technology, and we should keep working at it. And so a second generation of products emerges. That generation of products is a little bit better, but we don't get quite so excited, because many of us have already been stung, and we just let people get on with it in their focus areas. And over time, we learn. We learn methodologies, we learn best practices. We learn by applying, we learn by working together. And since we're all open source people, we know the most about that working-together piece. Eventually, third-generation products emerge, and it's the third-generation products that really deliver. They're often specializing, narrowing the field. They're not gonna solve all of the world's problems; they're gonna solve the world's problems in this area by using the early innovations that started on the left-hand side of the graph. And that is what Gartner calls the plateau of productivity.

So are we there? Well, there's loads of survey data out there, and we can ask the surveys: are we there? Is there a winner, specifically in container orchestration? And guess what? There are loads of surveys out there that will tell us there is very definitely a winner. No question. There is a runaway winner. Interestingly enough, if you ask a community, or you ask a business and their customers, who the winner is, they will tell you it's the thing they're working on. It's the thing they know.
It's the thing where they understand the hands-on configuration, the tweaking, the love and the care that goes into making an early-stage innovative product work. But that doesn't mean there's a winner. So we have to look at other data. In Azure Container Service, we said early on, hey, there's no obvious choice as to where we should go here. There are loads of different types of solutions, so let's do everything, okay? There were three leaders in the orchestration space, going back two years to when we started this, and we said, let's do all of them. Let's work with partners. Let's learn together. And so we did. We've been doing that for a number of years, and we have a lot of customers using Azure Container Service. So we have data that is not biased in any way. Yes, they choose Microsoft, but they're not choosing Microsoft products. They're choosing open-source solutions that we and our partners are helping them use on Azure. So the data doesn't have a selection bias.

So what does this data tell us? It tells us that the winner is... we don't have a drum roll, so you can imagine one for me. The winner is, thank you, I have one down here. I can't tell you. It's business-sensitive data and my lawyers won't allow me to tell you. However, even if they would allow me to, I couldn't tell you, because there is no winner, not today. There is no clear winner. If I look at the hard data of people paying money, there is no winner. It depends on the workload. If we look at developers who are kicking the tires or doing an early-stage experiment, there is one clear winner. If we look at data workloads, there is one clear winner. If we look at small startups, there are a couple of potential winners. If we look at large enterprises, there are a couple of potential winners. There just is no winner at this point. So we're nowhere near there, okay? So how do we get from here to there?
I'm gonna do a demo in a moment, and I'm gonna show you some very, very early-stage experimental code. It's all coming from GitHub repositories. There's no release for a lot of the stuff that I'm gonna demo. So please don't go away saying Microsoft are doing this amazing stuff; we're not necessarily gonna turn it into a product. We're over here on the left-hand side. We're working in the communities, collaborating with our partners, and figuring out what those methodologies and best practices are, along with people like yourselves. And in doing that, our goal is to make the ride across that trough of disillusionment smoother, so that we can all succeed together. We like to say in the Azure team that we're selling electricity, essentially. Our job is just to keep the lights on. So we don't care what you're running. We just care that you can run it. And if you think back to the panel at the beginning, one of the questions was: what is the future of orchestration? What's the thing that we need? All four panelists gave the same answer, and it's exactly the answer I would have given: I just wanna run my workload. I don't care where it's running. I want it efficient, I want it cheap, and I want it to run. Okay, so that's what we need to do, but we're a long way from being there. We have to learn those methodologies. We have to learn those best practices. We have to take the best of all the things that we have available to us and bring it together into a set of products that are gonna run the workloads. That means we need to work together on Mesos. That means we need to work together on Kubernetes. Mesosphere announced Kubernetes integration into DC/OS, a very important step that allows us to collaborate and work together. So I'm gonna switch to a... oh no, I'm not gonna switch to a demo. I'm gonna do this slide first.
So there's a talk later on today, and this is a really good example of how we are taking experiences from our customers and bringing them to open source. It's all Windows-focused in this case; even though I try to grow Linux faster (we're winning), we have to give the Windows team a boost as well. We had customers very, very early on who said, this container thing is awesome, we want Windows Server containers as well. It just so happened that way back when we first started working on this, we had an internal project doing containerization in Windows. It was about 15 years old, an MSR project; that's the research division. And we asked: do we use that project? Do we commercialize it and turn it into an equivalent of Windows Server containers? Fortunately, we made what I believe to be the right decision. We said, no, we need to leverage, work with, and contribute to the Docker ecosystem that was emerging. This would be three to three and a half years ago. And so Windows Server containers is Docker. The number one contributor to the Docker engine outside of Docker Inc. is Microsoft, for this reason. That then meant that when we had customers coming to us saying they wanted to run Windows processes on top of Mesos, we were able to deliver on that, because we had started to standardize on the same technologies that exist in Linux. So it was not a simple job to bring Windows support to Mesos, but it was simpler. So we did that, and the team that did that work are here today, and I'll tell you where you can find out exactly what they did. As part of that work, we didn't just enable Windows in Mesos. We also spent a lot of time helping improve the build system, the CI/CD, et cetera, because we now had to make it cross-platform and so on; that meant refactoring a lot of it, and that meant taking the opportunity to improve it. So this improves Linux workloads as well, because ultimately all you want to do is run your workload.
It doesn't matter which OS it's on; you just want to run your workload. Next up, people are seeing that they can run Windows processes in Mesos. So now we've got people saying, give us Windows Server containers in Mesos. Okay, well, that's fairly easy, because we're working with the community on Docker, and the Mesos community are doing things like incorporating Docker containers into Mesos. So this is relatively easy to do now. Easy for me to say, but it was done, or is in the process of being completed, I should say. And then finally, we started having people say, this is great, we do use Mesos, but we also use DC/OS. So give me Windows in DC/OS. And this is something that we've just started doing. There's a demo of it out on the Microsoft stand. We're working hand in hand with Mesosphere to deliver this. I personally am very surprised by the number of customers who want to run Windows workloads in DC/OS. Very pleasantly surprised. So that's the kind of thing that we're doing. It's not just on the Windows side, though; this is a different team from mine. I do all Linux. Shout out for the session: it's at 2 p.m. in Diamond Salon 7. So if you want to know more about that work, then go along there.

So next I want to do a demo, and now it's all going to be Linux. And interestingly, I'm going to start with Kubernetes. So I'm going to stand on stage at MesosCon and talk about Kubernetes. Why am I going to talk about Kubernetes? Because there is a lot of interest and there's a lot of innovation happening in the Kubernetes ecosystem. There are a lot of things that we can learn from. We should be able to take the things that are really advantageous in there and bring them to bear on this community, and vice versa. And that's one of the roles that we see Microsoft playing in the community. We don't care what workload you're running. We just want to make sure you can run your workload.
So it's in our interest to make sure that all of the really nice stuff in the Kubernetes ecosystem comes to the Mesos ecosystem, and vice versa. So what I'm actually going to do in this demo is deploy an application to Kubernetes. I'm going to use a thing called Helm. If you know DC/OS, think dcos package install; the equivalent in the Kubernetes world is helm install, and you'll see that working. That's going to deploy an application to Kubernetes, and it's the kind of application that we see a lot from Kubernetes customers. It's basically a microservice-based application with APIs, a web frontend, that kind of thing. But we need a data backend on this thing. In this case it's going to be Cassandra, but it could be anything that you find in the data workloads. And we see Mesos as the hands-down winner in that space. So what a customer needs is to be able to take advantage of both of those things at the same time. And so what I'm going to do is say, well, okay, let's assume you're a Kubernetes developer. You're working on Kubernetes. You don't need to learn about DC/OS in order to leverage the strengths of DC/OS. Now, this can happen the other way around. I'm deliberately demonstrating it from Kubernetes to DC/OS just to prove the point, but there's absolutely no reason why it couldn't work the other way around. So instead of doing a helm install that deploys onto a DC/OS cluster, you could do a dcos package install that installs onto a Kubernetes cluster. And of course, you saw the announcement last week, and the demos yesterday, of exactly how you can do that inside of DC/OS. So we're working with Mesosphere to make this happen in both directions. So what does it look like? If we could switch to the demo machine, please. Excellent. Okay, so we have this thing called the Service Broker. And when I say we, I mean the community. This is in the Cloud Native Computing Foundation, so you can find it over there.
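For readers who haven't used Helm, the parallel drawn above can be sketched like this. The chart name, path, and commands are illustrative only, not the ones used in the demo (and this talk predates Helm 3, so the exact CLI syntax may differ from current releases):

```yaml
# DC/OS world:       dcos package install cassandra
# Kubernetes world:  helm install ./my-app        (chart path is hypothetical)
#
# A Helm chart is essentially a packaged bundle of Kubernetes YAML
# templates plus a small metadata file. A minimal Chart.yaml might be:
name: my-app                   # hypothetical chart name
version: 0.1.0                 # version of the chart packaging itself
description: Microservice demo app with a web frontend and APIs
```

In both worlds the idea is the same: one install command expands a package into all the resources the orchestrator needs to run the application.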
It originally came from Cloud Foundry, and the API, and an implementation of it, have moved into the Cloud Native Computing Foundation. And so if I query this using kubectl, I can see I have a number of services available to me. Now, this could be any set of services, and I don't know where they run. These are enabled by my ops team. Okay, so my ops team has blessed four services for me: a MySQL DB, a PostgreSQL DB, a Redis cache, and the top one, which is the one we're gonna use, DC/OS Cassandra. I have no idea how that is gonna be deployed. I just know I have it available to me. So I build my application and I create my Kubernetes YAML file. So let's have a look at that. What I'm gonna do here is show the YAML file for this application. If you're not familiar with Kubernetes, the YAML file is like a... oops, I pressed the wrong key. Here we go, look at that. I need to do something now, because I hate the idea of doing a demo that isn't real. So this is a real demo. What I'm gonna do now is just prove that it's real and type something live, so you can see that this is actually real. It's just that I always type terribly when people are watching me. So, back to the actual demo. Let me just scroll back here. There's something... I pressed the wrong key again. See, this is supposed to be perfect. This is supposed to make me look completely infallible. I don't know what I'm doing, I just need to scroll. Oh, I know what it is. I normally do this in tmux, and because the network's a little flaky, I'm not doing it in tmux; I'm doing it straight in a shell. So my Ctrl-B to scroll back is not working. Let me just let this complete and then I can scroll back. Unfortunately, I can't see it on this screen either. Okay, let me scroll back. That's all run, but we'll go through it backwards if I can just see the thing. There we go. All right, so I'll scroll back and work through and tell you what's going on, so we can recover from this.
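As a hedged sketch of what that query returns, using the early service-catalog v1alpha1 API in use at the time of this talk; every name below is illustrative, not the demo's actual output, and the resource schema has since changed in later API versions:

```yaml
# The query is roughly:  kubectl get serviceclasses
# For the four blessed services it might list something like:
#
#   NAME
#   mysqldb
#   postgresqldb
#   rediscache
#   azure-dcos-cassandra
#
# Each entry is a ServiceClass object registered by a service broker.
# A sketch of one such object:
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceClass
metadata:
  name: azure-dcos-cassandra     # hypothetical class name
brokerName: azure-dcos-broker    # hypothetical broker registration
plans:
- name: three-private-nodes      # the plan the ops team has blessed
```

The developer only ever sees the class and plan names; where and how the broker actually provisions the service is the ops team's concern.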
So this is where I was at. So this is a YAML file. If you know Marathon, this is kind of like the Marathon JSON file. It describes the application that I wanna deploy, and in this case these are the services that are going to Kubernetes. So this is my web API, and the bit that I want to highlight to you, if I can just click on the right thing, is at the very bottom, where it says the Cassandra address. What's happening here is that it's pulling the address for the Cassandra cluster from a secret inside of Kubernetes. Okay, so I'm not hard-coding anything here. I'm not calling up my ops people and saying, where is my Cassandra cluster? How do I connect to it? What are my secrets? I'm just writing a YAML file that says, you'll get it from this secret store. So how does it get in there? I need to scroll down a little bit so you can see the following pieces. That's just the definition of the rest of the application that's going there; there are a few components that are going to Kubernetes. The piece in the middle here is the definition of the service that I want to deploy. And you can see it has the service name, Azure DC/OS Cassandra, and it has a plan name, which in this case is three private nodes. So that is what my ops team has said: if you need a Cassandra cluster, this is how to get one. Okay, so what happens when we do the helm install here? That was the helm install command, and you can see it deployed some stuff to Kubernetes. But, sorry about this, the important piece in all of this is right here at the end... where's the binding gone? There we go. At the top, where it says v1alpha1 Binding: that is the binding that's being injected into Kubernetes to tell it where that Cassandra cluster is. So what is the Cassandra cluster? It's not running on Kubernetes. That's the important thing. If I do a kubectl get instances, it tells me about the Cassandra cluster that's being stood up.
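The three pieces just described can be sketched roughly as below, again against the service-catalog v1alpha1 API of the time. Every name, image, key, and field value here is illustrative rather than the demo's actual YAML:

```yaml
# 1. The application pulls the Cassandra address from a secret
#    instead of hard-coding it:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: web-api                        # hypothetical component name
spec:
  template:
    spec:
      containers:
      - name: web-api
        image: example/web-api:0.1     # hypothetical image
        env:
        - name: CASSANDRA_ADDRESS
          valueFrom:
            secretKeyRef:
              name: cassandra-binding  # secret written by the binding below
              key: address
---
# 2. The service instance requests a Cassandra cluster by class and plan,
#    exactly as blessed by the ops team:
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  name: my-cassandra
spec:
  serviceClassName: azure-dcos-cassandra
  planName: three-private-nodes
---
# 3. The binding asks the broker to inject the connection details into
#    the secret that the Deployment reads from:
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: my-cassandra-binding
spec:
  instanceRef:
    name: my-cassandra
  secretName: cassandra-binding
```

The point of the indirection is that the developer's YAML never mentions DC/OS, Azure, or any connection string; the broker fills in the secret after it provisions the cluster, wherever that happens to be.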
And you can see at the bottom here, it says that it is de-provisioning. That's not true; it's provisioning. So it's early-stage, straight out of GitHub, et cetera, et cetera. It's provisioning. And I can show you it provisioning by switching over to the Azure portal, clicking okay because it's timed out while we were downstairs waiting for the other sessions. This is just the Azure portal, and this is the previous cluster that I used, so I can show it to you running. And you can see this is a DC/OS cluster. But if I go back to my resources, I have a new one, which I believe is this one, and this should be provisioning. Yeah, there we go, deploying. Okay, that's the one that was just set off by creating that Helm deployment. And so what I'm doing is creating a DC/OS cluster. This uses Azure Container Service. Azure Container Service is a partnership between Mesosphere and ourselves to deliver this solution to you. So we're tying these things together. You don't have to think about which orchestrator, how to deploy Cassandra, et cetera. And I stress it can work the other way around as well: once we've completed that work, you'll have a DC/OS package that can deploy this. So, one last thing, if we could just go back to the slide deck. The point here is that we're trying to bring everybody back together, bring all of the innovations together into solutions. Can we switch to the slide deck, please? There we go, thank you. So just very quickly, go to this slide. The key thing here is, I've stressed how important it is for us to learn from one another, okay? We are not there yet. We're a long way from being there. And we need to learn, and we need to learn by working with you on your workloads. So this is an announcement that we're making today. You can tell I didn't build this slide; this one comes from our marketing people. I'll highlight the limited-time offer: for MesosCon North America attendees only.
So be quick; like all marketing messages, if you don't act now, you'll miss the boat. So follow through on that URL there. And if you have workloads that you want to bring onto DC/OS, we'll do it in Azure, but it's on DC/OS, and Mesosphere will help you run that everywhere. We'll just run it better on Azure, so everything's good. We will help you move that workload onto DC/OS. We'll bring Mesosphere into the project as well, so you have all the expertise of Mesosphere. We'll bring HCL Technologies in, so you will have their expertise too. And we'll all learn together, and we'll build out that future that the panel wanted, where your workloads will just work. Thank you very much.