Good morning everyone. How are we doing? Good? I've only got 10 minutes with you, so I'll be quick. I want to talk about why what we're doing here is Mesos plus DCOS, not Mesos versus DCOS. To do that, I want to walk you through a story we're seeing a lot.

The story starts last decade, when we had software challenges and ops challenges. One of the biggest was that, as our software evolved, developers and operators had a conflicting relationship. Developers would update the software and say, hey operators, now figure out how to run this thing and get it running. A dev would say, hey, I've got this new library I started using, can you get it deployed on all the machines that run my stuff? And the ops folks would say, oh, that's a pain in the butt. Or a dev would say, hey, I've got this new networking service, can you open up a port for me? So forth and so on.

Eventually, over time, we realized we needed to introduce this new idea of DevOps, this idea of infrastructure as code, to make our lives easier. So we built great pieces of technology like Puppet and Chef. Now, when we wanted to make these kinds of changes, operators wouldn't need to manually go to all these machines and figure this stuff out; the software could do it for us. DevOps became a real thing.

Then a lot of people took the next step. They said, well, it's still a bit of a pain to package up and install my applications. Could I do even better? Docker is really one of the things that came out and made that tremendously easier. Now, instead of having Puppet download and install all these things on individual machines, Puppet could just pull something like a Docker image. So things like Puppet and Docker were a fantastic combination for solving those past challenges.
A lot of organizations transitioned to doing exactly this: running Puppet with Docker, Chef with Docker, whatever it is. So that was last decade.

Now, fast forward to last year. Having solved that problem, our lives are so much better that we get to take on new problems. That's the wonderful thing about technology: over time, as we fix some problems, we're able to take on a new class of problems. So now, as devs, we say, hey, the machines that were running my Docker container, or my app, whatever it is, have failed. Can you figure that out and run it on different machines? And Puppet wasn't great at this, because Puppet was focused on just the machines you'd installed it on. Or we say, hey, I've actually got a bunch more users right now. Can you run my container on a whole bunch more machines? Can you scale it up at the click of a button?

These two new problems, failures and elasticity, are what drove us to things like Mesos and Marathon. That's how we solve this new class of problems: okay, great, I've figured out how to package up my app; here's a better way of actually running it so I can deal with things like failures and elasticity more easily.

Now we've got tomorrow's challenges. As we go through this evolution as an industry, slowly solving challenges and making life easier in some respects, there are new challenges we actually need to take on and solve. This is the checklist, and I love checklists, for any organization I chat with that wants to take on a container ecosystem: you've got to figure out service discovery, load balancing, networking, storage volumes, security, secrets, health, metrics, logs, debugging, so forth and so on. Most organizations start here.
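To make the elasticity point above concrete: Marathon exposes a REST API where scaling an app is just a PUT to `/v2/apps/{app-id}` with a new instance count. Here's a minimal sketch in Python; the endpoint URL and app name are placeholders, not anything from a real cluster.

```python
import json
from urllib import request

# Hypothetical Marathon endpoint -- adjust for your own cluster.
MARATHON_URL = "http://marathon.example.com:8080"

def build_scale_request(app_id: str, instances: int) -> request.Request:
    """Build the PUT /v2/apps/{app_id} request that changes an app's
    instance count; Marathon then starts or stops containers across
    the cluster to converge on that count."""
    body = json.dumps({"instances": instances}).encode("utf-8")
    return request.Request(
        f"{MARATHON_URL}/v2/apps/{app_id}",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# To actually send it:
#   request.urlopen(build_scale_request("my-service", 5))
```

That's the "click of a button" in API form: you state the desired number of instances, and the scheduler reconciles reality against it, including replacing instances lost to machine failures.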
They start with Mesos and Marathon, and then what they find is that, over time, they need to solve these other components, which are not core aspects of Mesos and Marathon themselves, for their own businesses. So they introduce networking, and they check off that box: good, we've got networking, we've got load balancing. They introduce monitoring and check off that box: we've got metrics. They introduce security, full security throughout the system, and check off that box, so forth and so on, until they've built out all of these components and figured out the best way to run them across all their different machines.

The collection of all of this is what we made the DCOS. We didn't say we're replacing Mesos. We said: there are a lot of things we see organizations building, and we want to capture all of that for all organizations to take advantage of. So now yesterday's challenges are all checked off, because we've got the DCOS.

The analogy here, and where DCOS came from as a name, is this: Mesos, as the core component in the system, really acts like the kernel of the data center operating system, which includes all the other bits. In the same way that Linux is the kernel of, say, CentOS or Ubuntu or Debian, which are the operating systems we actually think of, we're trying to capture the same thing with DCOS. There's an init system like systemd, there are file systems and other subsystems; these are all components that make up the entire operating system. And of course, operating systems have GUIs. GUIs are great, and you saw a demo of the GUI earlier; in fact, you saw services actually being run via the GUI.

One of the other components we built into the DCOS was a concept we call the universe.
The universe for us is a collection of the kinds of distributed systems that we knew a lot of organizations would want to run on top of DCOS. And this is how a lot of organizations use the DCOS today: to run things like Spark, Jenkins, HDFS, Cassandra, and Kafka. There's a whole set of new challenges when it comes to running these kinds of things. Just as there are challenges in the container ecosystem, there are challenges in running data services and analytics: how am I going to deal with bare metal storage? How am I going to drive down my job latency and drive up utilization? How can I run multiple versions simultaneously? How can I upgrade these things? And a lot of these challenges can actually be solved by software.

In the Mesos world, what we've done is build what are called frameworks for these things. Frameworks communicate directly with Mesos via the Mesos API to manage all of the operational challenges you get from running difficult systems like Kafka, Cassandra, and Spark. So systems can do that; they can use the Mesos API and actually be successful with it. But it's a lot of work on Mesos. It's a very low-level API, meant to expose all the power you can take advantage of in Mesos, and therefore, when you're building one of these systems on top, there's actually a bunch of work.

Getting back to the analogy again: most people, when they build their applications on top of a Linux kernel, don't use the system call API. They use something like the GNU C library, glibc, and program against that, or an even higher-level abstraction if they're in a particular programming language. We wanted to do the same thing in the DCOS world. We wanted to introduce, effectively, a DCOS SDK for people who are trying to build these more sophisticated distributed systems on top. So I'm actually really excited about this next part.
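To give a feel for why that low-level framework API means "a bunch of work": a Mesos framework implements a scheduler that reacts to resource offers and task status updates. The toy class below only mimics the shape of those callbacks with plain dicts; it is not the real Mesos bindings or wire protocol, and the names are modeled loosely on the actual scheduler callbacks (resourceOffers, statusUpdate).

```python
# Toy sketch of the callback shape a Mesos framework scheduler implements.
# No real Mesos driver involved; offers are simplified to plain dicts.

class TinyScheduler:
    def __init__(self, cpus_needed: float = 1.0, mem_needed: float = 128.0):
        self.cpus_needed = cpus_needed
        self.mem_needed = mem_needed
        self.launched = []  # ids of tasks we believe are running

    def resource_offers(self, offers):
        """Mesos hands the framework resource offers; the framework
        decides which to accept (launching tasks) and which to decline."""
        accepted, declined = [], []
        for offer in offers:
            if (offer["cpus"] >= self.cpus_needed
                    and offer["mem"] >= self.mem_needed):
                self.launched.append(offer["id"])
                accepted.append(offer["id"])
            else:
                declined.append(offer["id"])
        return accepted, declined

    def status_update(self, task_id, state):
        """Mesos reports task state changes; restart/repair logic for a
        system like Kafka or Cassandra would live here."""
        if state == "TASK_FAILED" and task_id in self.launched:
            self.launched.remove(task_id)
```

Even this skeleton hints at the burden: every framework author ends up re-implementing offer evaluation, failure handling, upgrades, and so on, which is exactly the repeated work an SDK can factor out.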
This is one of the first times we've talked about it publicly, and it's here at MesosCon China, which is super exciting. The website will be up, and there's a talk later today that will go into it in more detail. The goal, again, is to introduce this SDK so that people who are trying to build more sophisticated distributed systems on top can do it by taking advantage of the SDK.

So, DCOS is open source. Just like Mesos is open source, the DCOS is open source, and you can use it today in production in your environments. We launched it as open source on April 19th, and since then we've had 31,000 clusters created. We have more than 3,000 people in the Slack community. There are 68 of those packages I mentioned in the universe. We launched at version 1.6, and we've gone through two more versions since: 1.7 and 1.8. We've formed a PMC, and we're actively growing working groups. It's a place where you can get involved.

One of the things that's been really fun, not funny, fun, about being in China is getting to engage with the Mesos community. And I'm really excited about engaging with the extended DCOS community the next time I visit. So here's where you can check it out and get involved with the community. Thank you for being here this morning. Enjoy the last day of MesosCon, and thanks so much for having us.