What we thought we would do is give you a little bit of an overview of the transitions that Mirantis has gone through to accomplish new, better, and more inventive ways of deploying, managing, and maintaining OpenStack. We picked, of course, a comic theme, so instead of Dr. Locke and Bearman we're going to use Superman and Batman, and we're going to go from the Batcave to Superman's cave in the sky. We did set a pretty low bar for ourselves: this session should be better than the movie. Okay.

So we've now had an opportunity to work with many of our customers who are making a transition with us, from how we originally started to deploy to the methodologies we're currently using, which we call MCP. How many of you have had exposure to Fuel? I'm always impressed by that. Okay. Thanks a lot for having taken that route, and hopefully you had a good experience with it. We think it was a pretty decent tool for doing deployments of OpenStack, and many people did start out using it. It was sort of like Batman's utility belt, if you remember the old series.

In those deployments, you may have found situations where the deployment stopped partway through, or one of the nodes didn't get deployed properly and it would fail, and we'd end up having to start over again. So we called those the Jokers and the Riddlers of the Fuel deployment mechanism. Initially we used a bunch of workarounds to accomplish our ends: we would make modifications and everything else post-deployment. But typically we wouldn't want to do that, because we'd then provide an update, MOS 9.1 or 9.2 on top of the 9.0 you got as an ISO, and those workarounds would end up getting overwritten when you installed the updates. So that probably wasn't a very good idea. As a result, we began thinking about how to solve that problem, and we ended up building Mirantis Cloud Platform, which is driven by a completely different methodology and technology than the Fuel base had been.

So first, let's take a history lesson down memory lane as to how Fuel worked. Okay. How many people in the room are actually old enough to remember the Batman series with Adam West? Wow. Okay. You remember the theme song? Let's all sing it together. Batman. Okay, enough of that.

So here are the problems that Fuel as an application service solved. These were problems that in 2010, 2011, 2012 were pretty difficult; there were so many people who had difficulty just getting a deployment done. First, OpenStack deployment to an enterprise standard that was repeatable and consistent, so you could do it more than once and it would end up with exactly the same result. We added the ability to do plug-ins, because not everything we could give you was available on an ISO distribution you'd be able to download, and that made deploying OpenStack a bit more feature rich, but it also made it a bit more complicated and complex. One of the real beauties of it was that testing and basic operational checks and everything else were built into the Fuel framework, and multiple clouds could be stood up from a central Fuel master deployment. So let's take a look at how that actually worked.
Some of the features started with the idea of a master node that contained Nailgun, DHCP services, TFTP services, and the web UI, so you could access it within your company. The master node's operational components were things like Astute: Nailgun talked to Astute, which issued commands to Cobbler, which then used Puppet manifests to make an orchestrated deployment of the OpenStack components, applying DHCP and TFTP services. From there, it would talk to an MCollective daemon on each individual node and apply the roles, features, and capabilities that you specified in the Fuel GUI. So that was how MOS was deployed via Fuel in past generations.

And this was kind of the flow of it. If you take a look here, you'll notice that the first five of these are references to actual Fuel user interface screens. The first one is "let's make a cloud." The second one is which version of Mirantis OpenStack you want to deploy. The third one in that chain is which kind of hypervisor you want to use, and at that point you could have picked QEMU or KVM, which were kind of our standards, but you could have also picked ESXi or System Center.

In the next portion of the GUI, you laid out the five networks that are associated with a Fuel deployment in your environment. You specified them by CIDR and ranges, and ensured that you had a public network, which was really the network internal to your data center; a private network, which was going to carry all of the tenant networks for OpenStack, usually either VLAN or VXLAN; a storage network, so you could isolate your Ceph traffic from the other nodes in your world; and the administrative and management networks handling all the traffic for RabbitMQ and MySQL and all of those things. You'd feed that into that third graphic there.

And then you'd assign the roles to the specific pieces of hardware. The hardware had to be booted on a Fuel-generated bootstrap image over the administrative network so an inventory could be taken, but Fuel didn't keep that inventory in any logical fashion beyond saying these are all the components sitting inside that physical box. Then you had to tell it: I need to make this one a storage node, I need to make this one a compute node, and I want these three over here to be controller nodes. And those were kind of the fanciest of all the roles we were providing. But hey, what if I wanted to shift MySQL out and put it on its own platform? What if I wanted to take Ceilometer and move its database out? Those were things we had to actually program in to accomplish.

And the end result of having filled out all of those lovely forms was an OpenStack distribution based on one of the alphabetic versions of OpenStack. The current one is MOS 9, which represents Mitaka; 8 was Liberty, and so on going backward.
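Pulling that walkthrough together, here is a rough, purely illustrative sketch of the information those five Fuel screens collected. This is not Fuel's actual file format; every name, CIDR, and hostname below is made up, and it is only meant to show the shape of the environment definition being described.

```yaml
# Illustrative only: NOT Fuel's real on-disk format. Names, CIDRs, and node
# hostnames are hypothetical, shown to summarize what the wizard asked for.
environment:
  name: demo-cloud
  release: mos-9.0          # Mitaka-based Mirantis OpenStack
  hypervisor: kvm           # QEMU/KVM were the defaults; ESXi was also selectable

networks:                   # the five networks, laid out by CIDR and range
  public:     { cidr: 10.20.0.0/24, role: "data-center facing, floating IPs" }
  private:    { cidr: 10.20.1.0/24, role: "tenant networks (VLAN or VXLAN)" }
  storage:    { cidr: 10.20.2.0/24, role: "Ceph replication and client traffic" }
  management: { cidr: 10.20.3.0/24, role: "RabbitMQ, MySQL, internal APIs" }
  admin:      { cidr: 10.20.4.0/24, role: "PXE bootstrap and Fuel orchestration" }

roles:                      # role assignment per discovered node
  controller: [node-1, node-2, node-3]
  compute:    [node-4, node-5]
  ceph-osd:   [node-6, node-7]
```

The limitation, as described above, is that this was a one-shot definition: once deployed, moving a service like MySQL or Ceilometer to its own node was not something those forms could express.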
These were the things we ran into in a lot of cases, and so we needed a little service done on the old Batmobile, because it was getting worn out and tired. It was difficult to manage the life cycle of your cloud; in a lot of cases you would end up making adjustments post-deployment so that it would have features and capabilities that Fuel didn't happen to have in the GUI. There were customer-specific additions that needed to be added, so individual customers' versions of things would not be exactly the same as the one that got downloaded. And of course, change control and auditing was something you ended up having to do outside of the Fuel application service. So there were a lot of POWs, BAMs, and swishes around getting things going in OpenStack. And now, what I'd like to do is shift gears and have Ryan, if you don't mind, talk about the Mirantis Cloud Platform version of how things are going to be done.

Sure, thanks Bruce. So Mirantis Cloud Platform isn't actually a replacement for MOS; I want to clarify that up front. Mirantis OpenStack is basically a cloud platform within the umbrella of MCP, meaning that if you want to deploy OpenStack with MCP, that's fine. MCP is basically a replacement for Fuel, not for Mirantis OpenStack. Just want to clarify that real quick.

So what are some of the things that MCP aims to solve? You saw the weaknesses, the pain points of using Fuel, and the lack of LCM. That's pretty much exactly what we aimed MCP at for the last year of development: we needed to solve the update, upgrade, and service-introduction problem that Fuel had. Pretty much from the very beginning, we wanted this to be something that was extremely flexible, extremely adaptable, and very portable, right? What that meant is that we needed to build a tool chain for integration and delivery, and that's what we call DriveTrain within MCP. I'll show you at a component level what that looks like in a second. Some of the things that DriveTrain provides are version control and code review systems. We built a metadata model to represent all of the infrastructure components and services that we're orchestrating; it's hierarchical, and it empowers users to make overrides and adjustments and change pretty much anything they want in the way an OpenStack cluster is laid out, through that metadata model, as well as other services. And most importantly, you can deploy, change, update, and upgrade all of those components and services through a unified methodology built around CI and CD pipelines. We also needed this tool chain to let us test any change or service introduction in another cloud before it was introduced to production, basically the blue-green model, right? We need to be able to test the changes we're going to make and the services we want to introduce before rolling them out to production. There are multiple ways we can do that.

But getting back to the components of MCP, this is a high-level overview of what MCP looks like. You'll notice the apps portion in purple has a dotted line around it; that's basically the application portion. I'm going to talk more about DriveTrain and the components we have for LCM, because that's really the primary tenet behind our development of MCP. We use pretty much industry-standard tools in this tool chain. It's the same kind of CI, CD, and config management tooling that you would see in many DevOps or CloudOps organizations that are managing applications at scale in a cloud context for their workloads. We're applying that to the infrastructure and to the actual services or cloud platforms that we, or our customers, are providing.
On the left-hand side, those of you that used Fuel may have used a plugin we had called StackLight, or LMA, and that's basically OSS tooling for logging, monitoring, and alerting, giving insight into potential issues, capacity planning, troubleshooting, things like that. We're looking at things like building correlation engines for it and more tightly coupling OSS with the actual DriveTrain portion of this, so that we can do things like self-healing of different services.

The last point I want to make on this slide is that DriveTrain is actually applicable to applications as well. You could span not only the low-level compute infrastructure but go all the way up to the applications you're building. Maybe you want to build a machine learning platform, or a serverless computing platform, or something simple, just a three-tier web app. All of that could be orchestrated using the exact same tool chain. And it's a pretty powerful thing to be able to couple the way you orchestrate your applications with the exact same way you orchestrate your infrastructure, in the same tool chain. It's extremely powerful for change management purposes.

So how do we deliver MCP? The first thing we need to deliver is artifacts: packages, OpenStack packages, Linux packages, any of the 50- or 60-plus services that are involved in an OpenStack cloud. So internally we have basically our own CI built around the same things I just described, with one addition, right? We need a place to store artifacts, not just a Git repository. What we're doing with MCP is saying: okay, customer, you can build the exact same pipeline in-house, and we will couple with you on innovation that we feed you on a continuous basis. You can then customize on your own terms and deploy into your environments. And innovation that you may want to expose to the community, you can contribute directly back to the way we're orchestrating services with MCP. So that's a little bit about the artifact portion.

I've mentioned DriveTrain like six times, I know that, but this is really the part that is most important for you to understand about MCP. Every change, every service you introduce, and pretty much anything you want to do in the context of MCP is driven with this model. You make a configuration change in a text editor using human-readable YAML. It's very intuitive the way it's written and structured, and that's what we're essentially asking operators to learn how to deal with, right? We need our operators to know how to navigate this metadata model and basically build key-value pairs for the services they want to introduce. Then you check those config changes into Gerrit for code review. And then you trigger a Jenkins pipeline to deploy those changes out to a staging environment, or roll them out in blue-green fashion to production, or both. And obviously, at some point when you're deploying something or making a configuration change, you may need to pull packages for it, or pull a Docker container, or whatever that is, so you're going to need an artifact repository, obviously, at the top there.

One thing I haven't mentioned yet is that the configuration management engine we chose to use in MCP is SaltStack. But we coupled it with something called Reclass, and what Reclass allows us to do is dynamically generate node definitions.
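To make that a bit more concrete before going on, here is a rough sketch in the spirit of a Reclass class definition, the kind of human-readable YAML being described. The class names, file paths, and parameters are hypothetical, not the actual Mirantis system model; the sketch only shows how commenting one class out of one node type and adding it to another moves a service such as Keystone onto its own role, which is the scenario described next.

```yaml
# Hypothetical Reclass-style classes; names, paths, and parameters are illustrative only.
# classes/cluster/demo/openstack/control.yml  -- the "controller" node type
classes:
  - cluster.demo.openstack.common
  - service.galera.server.cluster
  - service.rabbitmq.server.cluster
  # - service.keystone.server.cluster   # commented out: Keystone now lives on its own role
parameters:
  _param:
    openstack_version: mitaka
    cluster_vip_address: 172.16.10.254
---
# classes/cluster/demo/openstack/identity.yml  -- a new node type carrying only Keystone
classes:
  - cluster.demo.openstack.common
  - service.keystone.server.cluster
parameters:
  _param:
    keystone_service_host: 172.16.10.100
```

Checked into Gerrit and rolled out through a Jenkins pipeline, a change like this is the whole mechanism: one line of class inheritance moves, and the deployment follows the model.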
So there's really only one actual node that you ever have to define in this model, and that means that making changes, introductions, or additions to your cloud infrastructure doesn't require a whole lot of additions to the model. That's one thing Reclass does. Another thing Reclass gives you is a hierarchical way to structure what we call classes, so we can build all kinds of custom roles and customized ways of laying out this infrastructure and these services with complete flexibility. If I want to strip out, for example, Keystone from the rest of the services running in my control tier, my controller nodes, all I really have to do is comment out one line of class inheritance on that node type and then create that class specifically for Keystone on a different node type, right? It literally takes a couple of minutes to make that happen, and you can roll that out to a live running system with no service impact. That's pretty powerful. We will be giving some demos; I'm just going to plug that real quick. We'll run pretty much this exact scenario live tomorrow at the Mirantis booth in the expo hall at about 10:30 am. I'll be doing that tomorrow.

Okay, so in summary, what is MCP? What are some of the key benefits of MCP? From a cloud platform perspective, which admittedly I didn't talk about a whole lot, we're at an OpenStack Summit; I think all of us know to some degree that the cloud platform is going to be OpenStack or Kubernetes or something like that. But with MCP we provide VMs, containers, and bare metal in the same cloud infrastructure, the DriveTrain tool chain used for lifecycle management of all of those components, StackLight OSS for visibility from the infrastructure components all the way up to the application, and extensibility across all three of these.

So I had one question before we move on to the next section. Upgrades have been something that's very difficult to accomplish in a Fuel environment; we ended up having to kind of burn and rebuild whenever we were moving from Liberty to Mitaka to Newton to Ocata and so on. To accomplish that kind of change within the MCP framework, it relies on the same metadata principles we just talked about, right? You would change it in one place: I am now going to use Newton; I am now going to use Ocata.

Yeah, so you'd be changing things like, maybe Mitaka is in a different repo, or maybe it's just a tag. You might change those things in the metadata model to point toward updated packages and run a Jenkins pipeline to update them. There are specific Jenkins pipelines for things like distribution upgrades or OpenStack version upgrades, because things need to be executed in a certain order. That higher-level orchestration piece is handled at the build-tool level, right? It's handled by Jenkins.
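As a hypothetical illustration of that answer, the parameter names and repo URL below are made up rather than the exact Mirantis model, but an OpenStack version upgrade would start as a small override like this in the cluster metadata, committed to Gerrit and then applied by the relevant Jenkins upgrade pipeline:

```yaml
# Hypothetical cluster-level override for an upgrade; names are illustrative only.
parameters:
  _param:
    openstack_version: mitaka            # before the upgrade
    # openstack_version: newton          # after: flip the version pin ...
    linux_system_repo_url: http://mirror.example.com/openstack/mitaka/
    # ... point the repos at the new packages, commit the change for review,
    # and let the OpenStack-upgrade Jenkins pipeline apply the steps in the
    # required order across the cloud.
```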
Cool. So with that, I'd like to introduce Amit Tank. He's a cloud architect at AT&T, and he has lots of experience with various internal customers over there in the service provider context. So, very interesting content.

So I wanted to talk a little bit about some of the pain points and the class of challenges that large adopters as well as large service provider companies run into on a daily basis. Now, through my career I have been fortunate enough to work with some very large financial tech companies as well as service providers and some startups, and to have client companies while working for major technology providers. I had customers in the cable service industry and in government, and some very large deployments.

So one of the problems that stands out, based on what Bruce and Ryan talked about, is the dichotomy of different applications, the workload patterns, as we used to call them. There are companies that have applications that were written in the 1980s and continue to run today, and then there are applications that those same companies have been able to deploy as greenfield. They need both of those to work as business-critical applications for their business to perform. And when they look at cloud platforms, a lot of the time the challenge has to be solved from a workload-pattern point of view: which pattern is going to work best on this cloud, and how do I migrate some of those older applications?

The other challenge that a lot of companies have to solve is really about lifecycle. And nine out of ten times, they don't know that they have to solve this challenge with very significant mind share given to it, because they almost always underestimate the size of the problem. Launching an OpenStack cloud is complex, but it's doable. Running it and operating it is very challenging, and it's complicated if you don't have the right mindset and the right expectations, coupled with the skill sets on the tools and people side.

You deploy an OpenStack cloud and you put a tenant on it. Let's say, for example, a service provider or a large adopter deploys the Folsom release or the Icehouse release or any of the more recent releases. Once you have a tenant up and running, you really have a problem that can best be described, from a computer science point of view, as statefulness: your state is now logged, burned into that database, in those OpenStack services that are running as a somewhat tightly coupled set of clusters. No matter what tool you used, if that tool does not solve the lifecycle management aspect of the problem, then that state is now your shackles. You're not going to be able to move your tenants around unless they have been smart enough to keep their workloads very portable, which until the advent of containers was very difficult to do. So, to be fair, your tenant workloads are always going to be static; you are not going to be able to move your tenants around, and upgrading and downgrading is going to cause a lot of disruption, if you're able to do it at all.

And so the dream of non-disruptive, hitless, in-service upgrades is something that is really worth pursuing, especially if a tool can solve the lifecycle problem: the lifecycle of an OpenStack cloud, the lifecycle of a container cluster, the lifecycle of a bare-metal cluster. So I think this problem is really, really worth pursuing, because as your workloads evolve, 1980s workloads as well as 2020 workloads, some workloads will require a Kubernetes cluster and some will require OpenStack virtualized VMs. And if a tool can give you a uniform experience, so that you invest in your people skills, the skills of a team that has learned to, say, operate a Jenkins pipeline, and
you can reuse all of that skill set and all of the knowledge those people have built to efficiently manage the lifecycle of a Kubernetes cluster as well as an OpenStack cloud, then that's a really good place to be in. So that was essentially my two cents in terms of talking about the problems. Thank you, guys.

But I digress. The whole idea is that service providers are getting dragged along with this as well, and they have to change their methodologies to support this kind of thing. And as Amit was saying, there are all of these obstacles, because they have a legacy environment they've got to maintain, with applications written for the '80s, and now they've moved into the 2000s; life is much better when they do it, but boy, it's awfully painful to get there. Hopefully, with our environment the way it is, the pain is eased quite a bit, and the ease of use once you've gotten there is worth the effort. Did you want to add anything? No, I think let's just jump into the comparison, actually. Okay. I think we'll have some.

So this is pretty straightforward: Batman versus Superman. Batman had some good things going for him and Superman had some good things going for him; they were just a little different in how they went about it. So, from our standpoint, the Mirantis MOS distribution that we used to provide had several different capabilities, like which version of OpenStack you would use, and everything from pre-Mitaka up to the M-series is available in some form of Fuel-deployable cloud. Lifecycle management, for example, was very limited in the Fuel graphical user interface. We tried to extend that with StackLight to some degree, but we could never couple them tightly enough, or if we did, we'd end up having to rewrite StackLight every time we deployed a new version of OpenStack. What other features would you like to point out?

Yeah, I think the things I would call out specifically would be on-prem CI/CD tooling for continuous operations of infrastructure and application services, versus an ISO distribution that deploys something once. It's an installer, right? That's what Fuel was. It was an installer. It was not a lifecycle management tool, and it was not an operations tool, as much as we tried to make it one. It was designed in a way that wasn't conducive to running infrastructure at the scale we want to run it, and want to empower people to run it. So that's probably the number one point I would bring up. The other is actually the ease of deployment and operations. Fuel was very, very easy. The network names may not have been super intuitive, and you could pick at little things with Fuel, but relatively speaking it was probably one of the easiest tools out there to deploy OpenStack with. And that was really beneficial for a lot of reasons, but I'd like to contrast that by saying that MCP is not necessarily very easy, right? Those components I showed in DriveTrain, that's a skill set that not every operator has. But we feel, as Mirantis, that we can make the adoption and pursuit of those skills much more structured and easier by providing a metadata model that basically illustrates all of that infrastructure as code in a human-readable way. So tools like command-line interfaces, YAML, and Groovy scripts, those are the ways we're describing things, right?
The way that you execute things is by running Jenkins pipelines. That's very, very different from an installer. A couple of other points I'll bring up real quick, unless you had an interjection. No, I was just going to say that I was so hoping to say "groovy," because I haven't been able to say groovy since the '70s. But Groovy script is now the tool of choice for Jenkins workflow stitching, if you will. Yeah, definitely.

So another thing I would mention is that where Fuel basically just deployed OpenStack, that's all it was, an OpenStack installer, MCP is a service orchestrator, right? We can orchestrate OpenStack, we can orchestrate Kubernetes, and those are the two reference-implementation cloud platforms that we're dealing with at MCP's 1.0 launch. But that portfolio of cloud services needs to expand, and layering of those is going to occur, right? Being able to deploy Kubernetes on bare metal, on an OpenStack cloud, and potentially on public clouds, using this exact same tool chain, could be extremely powerful for workload portability and for having a unified platform even across multiple clouds. I think those are the highlights I wanted to bring up on this. And of course, groovy is a groovy word. Well, it's mostly because I'm old enough to remember when it was invented. So, all right.

There was one other point that we wanted to give you, and this is more of a public service announcement from Mirantis; I don't want to go too deeply into it. When you have new methodologies and capabilities, you form new alliances to help solve those problems for your customers. One superhero alliance we've made is with the NTT Group, which has 140-some-odd hosting facilities across the world; we've now partnered with them to allow folks to host at their facilities, using their hardware, and we'll manage it for you from there, as opposed to having it on your prem. We've also just announced, as of this event, an alliance with Fujitsu. They're similar in context: they have global facilities, they'll allow you to host there, and we will manage the resulting OpenStack environment for you.

And with that, we just wanted to thank everyone for showing up, and I regret, I really do regret, having made references to Superman, Batman, Robin, and all those guys, because, as the Simpsons points out, this is the worst day in comic books I've ever seen. Awesome. So, do we have time for some Q&A, Bruce? Yeah. Cool. Before we do that, did you have any other follow-up comments you'd like to bring up? Okay, cool. So let's open this up for Q&A: anything, everything, whatever is on your mind.

Okay. Thanks. So one of the main contrasts you called out was the LCM component between Fuel and MCP. Fuel 9, I think, introduced LCM originally. Can you talk a bit more about what that was? Is DriveTrain the next generation of that LCM, or was it a complete start-over? How did the chronology go from there to here?

Yeah. So where we were about a year ago, when we were developing MOS 9 and trying to introduce lifecycle management capabilities, the underlying config management engine for Fuel was Puppet. But we used Puppet in some very specific ways, and with other components, and that made it very challenging for us to go to a model of basically continuous deployment, right? It made that very difficult to do.
It also made it lack the flexibility we wanted in terms of being able to move the slider of roles however you want, to split services wherever you want them, and to introduce new services. The plug-in framework for Fuel was much more complex than we wanted. So we came to a point where we realized we needed to build something new; that's really the answer. Where we were with Fuel 9.0, introducing some LCM capabilities and potentially even the option to plug a Puppet master in behind Fuel, that was a progression toward this. But this is really kind of a reset for us. This is the new way forward for operating at scale and operating with a continuous deployment methodology.

And one of the other aspects of it is the granularity we needed, the decomposition of what OpenStack is. In reality, the Mitaka that gets deployed is exactly the same Mitaka code base that we use in MOS and in MCP. It's just how it gets distributed and deployed, and that allows us so much more flexibility in how we can update and change and fix all of those things.

Yeah, just a quick anecdote, I guess. I remember the days of Fuel where you would reach scaling limitations with those consolidated controllers running all those services, and then we would need to do something like split the RabbitMQ message bus out of those nodes and put it on different nodes: placement of the RabbitMQ service, right? Making a change like that to an already deployed cloud with Fuel was excruciatingly difficult. One of the big eye-openers for me, working with MCP and getting hands-on with MCP and DriveTrain specifically, was that I was able to do that in under 30 minutes with no downtime. I don't think putting a Puppet master behind Fuel was going to solve that challenge. Yeah, and special thanks for showing up to a 4:40 session.