Good morning everyone. How's everybody doing this morning? My name is Frank Hanoi. I'm an infrastructure architect for Adobe, and I plan on talking a little bit about the multi-cloud projects we've been doing at Adobe to get users running in multiple clouds. I'd like to start by saying thank you to the Linux Foundation for hosting this event. It's great to be here, and great to be able to share this with you. Second, thanks to Adobe for actually letting me come — get my head out of the screen, travel to this gorgeous country, see the sights, and most importantly, present to you. Thank you. Also, I want to say thank you to you guys for being here, for being interested in the technology, and for wanting to learn more about it. I'm excited to present this, and I hope you get some value out of it.

Quickly, I just want to go through the agenda. First off, I'm going to talk a little bit about the problem we were trying to solve. Often as technologists we go out, we see a cool technology, and we run and start implementing it because it's cool, but we fail to actually understand the problem we wanted to solve. For me as an architect, it's pretty critical that I understand that problem, because not only do I have to understand the technology, I also have to relay to the managers and everybody outside why we're doing this, right? And if I tell them, "oh, we're using Mesos because it's cool," that just doesn't go very far. We need a reason why we're doing it, so we're going to talk about that. After that, we're going to talk about how DC/OS actually helps us solve this problem — and yes, not just Mesos, but DC/OS, and some of the pieces we use out of it. Third, we're going to dive into the actual multi-cloud design: how we designed this and how we're running it across multiple clouds. We'll go piece by piece and work through everything in there. And then we'll cover "what else should I know?" — which is really just giving you the resources you need to go read about this yourself. Everything we've done here, there's nothing proprietary, nothing secret; it's all out there, open and available to you. So I want to share this with you, and you can do a much better job than we ever did.

So first of all, let's talk about the problem we wanted to solve. Adobe is a software company, as I'm sure some of you know, and we have a lot of engineers who create software. Part of that is that creating software as a large-scale distributed system is actually quite challenging, and trying to do it right across environments has been a challenge. So when we hire an engineer, usually fresh from school or university, this is how they look at the world, right? They have some idea; they're going to enhance a service, build a service, do something cool that's going to be great for the company. They go to their laptop, they write the code, they pull in components — open source, most probably — and then we start making money, right? In their mind, that's really what happens: I have my idea, I write it on my laptop, I deploy it, and we start making money. (Closer — is that better? Sorry, you guys.) However, an engineer soon finds out that reality looks like this. He has his idea, right?
He's going to implement it, and the process happens. There are just a lot of things he has to go through to get that code out into production. He has to understand what the QE system looks like. He has to understand what the infrastructure looks like. On top of that, add something else — multi-cloud — and now he has to understand the different services in the different clouds, right? AWS and Azure each come with different services, and the data center functions very differently again. There are a lot of nuances. An engineer who never wanted to understand any of this now has to, just to get that code out there. And by the time he finally gets the code out, it's gone the way of the typewriter, right? The code is just not relevant anymore. He's done all this work, he's gotten excited about it, but he had to go through so much process that by the time he got it out, it was no longer relevant. So this is the fundamental problem we wanted to solve. One of the core technologies we use for it is Mesos, and we did it across multiple clouds.

So how does DC/OS help our case here? How does DC/OS — or Mesos, which, as Aaron was talking about earlier, is a sub-component of DC/OS — how does this all help us? Okay, so we created what we call the Adobe multi-cloud stack. This is what our team has been working on: we figured out this is the problem we want to solve, and this is how we're going to solve it. Once again, we have our developer, and he's writing code. But instead of being concerned with all the nuances of the clouds and everything else, he should only be concerned with the tools he knows best: Git to source-control his code, Jenkins to build his code, and a spec file. These are all standard things, and that's all he needs to worry about to get his code all the way out into production. So now you're sitting there saying: hey, okay, I know Git and I know Jenkins, but what's this spec file? That's not standard; I don't know anything about that. Well, the engineer just has to tell us: hey, I'm writing some software, I want this much in resources, and I want to run it over here. And instead of submitting a Jira ticket, or calling somebody, or having multiple meetings to arrange that, he can now just put it in a file that gets wrapped into the build system.

Once that happens, the amazing thing we can now do with the build system is wrap that code in a container. The cool thing here is that with container technology we're able to take code written on an engineer's own laptop or workstation, deploy it, and move it across different systems. In the past, that used to be very difficult. The engineer writes it on his laptop, it moves to QE, and it doesn't work in QE for some reason — libraries out of sync, different versions. It moves from QE to production, and it doesn't work in production for the same reasons, right? You had to maintain all of these systems as static, very standardized environments. Very hard to do. Container technology changed that for us. We're now able to run a container and, no matter what system it runs on, it always runs the same. So the excuse of "hey, it worked on my machine" is no longer relevant. So there we have the platform, right?
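To make the spec file mentioned above concrete, here's a hedged, hypothetical sketch — the talk doesn't show Adobe's actual format, so these field names are purely illustrative:

```yaml
# Hypothetical spec file (illustrative only, not the real Adobe schema).
# The engineer declares what the service needs; the build system wraps
# this around the container, and the platform decides where it lands.
service: checkout-api
resources:
  cpus: 2
  memory_mb: 4096
  instances: 3
placement:
  latency_sensitive: true   # run close to the users it serves
  data_governance: eu       # e.g. data must stay in Europe
```

The point is that this file lives next to the code in Git and rides through the Jenkins build, so declaring resources in a file replaces the ticket-and-meetings loop.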
So we now have a system: code was built, it was wrapped in a container, a spec file was put around it, and it was deployed onto a platform. That platform is running DC/OS, and in all of the systems you can see there — a Dev cluster, a QE cluster, and a production cluster — all of them run that same container. The container just gets promoted through the system, okay? It hits Dev, checks out there, gets promoted to QE, checks out there, and goes to production. Once that container is running, you register it with some kind of service discovery component. In DC/OS there's actually a good component you can use called Marathon-LB; for our system we use NGINX, but what really matters is having some kind of load-balancing component whose configuration is updated automatically, so a service can say: hey, I have a container ready, it's ready to be accessed, and I'm ready for the consumer to come to me.

This is great — if you look at this, it's actually really awesome. It's very simple, it's very standardized, and it enables a developer to build code on his own machine, deploy that code to development, QE, and production, and then have consumers access it without anyone else being involved. There are no tickets, no ServiceNow requests, no one coming in saying "I have to plug this in before you can have your machine," and so forth — it works that way the whole way through.
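As a concrete footnote on that registration step: on DC/OS, an app like this ships as a Marathon app definition, and Marathon-LB discovers it through labels such as HAPROXY_GROUP. A minimal, hedged sketch — the image, registry, and service names are illustrative:

```json
{
  "id": "/checkout-api",
  "cpus": 0.5,
  "mem": 1024,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/checkout-api:1.0.0",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  },
  "labels": { "HAPROXY_GROUP": "external" },
  "healthChecks": [{ "protocol": "HTTP", "path": "/health" }]
}
```

Because the definition is just a file, it fits the promotion model described above: the same definition can move from Dev to QE to production.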
So we went up and presented this and said: okay, this is what we propose building. Everybody was excited, and management said: yes, go build this, this is exactly what we need. Cool, do it. Docker is awesome, Mesos is awesome, DC/OS is awesome, go do this. So we architects went to Ops, our operations component, and said, let's build this — and Ops said: that's great. What about the infrastructure? And we're like, hey, what about the infrastructure? We went to management and they're like, no, no, this is the cloud; you're going to run this in the cloud; there is no infrastructure, it just runs, right? It's just the cloud. Nope. The cloud is just somebody else's machine, right? There's still infrastructure, and there's still infrastructure to be concerned about.

So we had to think about how we were going to run these clusters abstracted, but across multiple clouds. As you can see, we're running on AWS, we're running on Azure, and we're running in private cloud — and I'll switch between saying "private cloud" and "data center"; they're the same thing for us here. One main component: the operating system is the same everywhere. We're running CoreOS.

So you see we're running this across all these clouds, and you might think: why are you doing that? Why run this thing on multiple clouds? Why not just run it in the data center? Or why not just run it in AWS or Azure? Make up your mind, run it there, and be done with it. Why introduce the complexity of multiple clouds? This is actually a question a lot of people have been debating, including us internally, and operations and engineering each have an interesting way of answering it. If you ask an engineer why he's doing it in AWS, he's going to tell you: it's agile, I don't have to create tickets, and I have infrastructure as code — I can stand up the infrastructure programmatically. That's how I want to do it. If you ask Ops why the data center, they're going to say: it's secure, it's cheaper, and we have more control.

So what ended up happening is very much like a fencing match, right? Ops scores a touch and says: hey, if you do this in the data center, it's going to be more secure and cheaper. Engineering comes along and says: hey, if you do it in AWS, it's going to be faster — or in Azure, or whatever public cloud is the flavor of the day. Sometimes Ops wins and the service goes out to the private data center; sometimes engineering wins and the service goes out to the public cloud. But there was never a consistent way of deploying infrastructure to each cloud; you just don't have that story, right? You can see up there — that's the past. Because thanks to Mesos, thanks to DC/OS, we can make that story a lot easier now. If we run clusters that are abstracted away from the infrastructure like I showed, nobody really cares about the infrastructure unless they're an operator. Suddenly all you care about is: where do I need to run my stuff to run most effectively? And the two main things that come up are latency and data governance. So if you're thinking about where your container needs to run as an engineer, in the spec file you can say: it's very latency-sensitive, I need to be in a location over in Europe. Operations can now take that requirement and run it in the appropriate cloud. There's no more of this fencing, no more of this debate; these are very standardized questions. Operations and engineering don't battle, and engineering doesn't care, because they know their container is going to run wherever it runs best.

So that's why multiple clouds make sense. We don't have data centers in every location, so we need public cloud to get the latency and data governance required. What I mean by data governance is that certain countries want the data to stay in that country. We also don't always have those requirements on an application — the application can run anywhere — and then the data center is actually a good choice. The data center is cheaper; we can do a lot more in the data center, run things there, and scale out to public cloud if we need to. So at this point it's no longer Ops's way or infrastructure's way or engineering's way — it's all about where latency and data governance sit, and we can establish those pieces very fast.

Now, to dig in a little bit. We've talked about the problem we're trying to solve, and we've talked about this multi-cloud approach and why we've taken it. So now let's talk about some of the stuff we actually do with DC/OS across multiple clouds. As we started our journey building this platform, we looked toward DC/OS immediately. We were already running numerous Mesos clusters and were very familiar with it, but we were only running in one very specific location, and we were looking at providing more of a service to the internal company. We looked at DC/OS simply because it's very easy to install, as Aaron mentioned before, and it's pretty standardized. And the nice thing about DC/OS is that when you download it, it comes with cloud installers out of the box: a CloudFormation template for AWS, so you can just go run the template in AWS.
It will build out either a single-node cluster for a developer or a multi-node cluster for production. It also comes with an ARM template, so you can go spin it up in Azure the same way. However, as we started digging in, one of the key tenets we had was being standardized and being simple. DC/OS is still a very young project — there are still things that need to be worked on, and we're trying to contribute to that as well — and for what we wanted to do, there were just a couple of things we couldn't do with the templates it provided. For example, on AWS it spins up clusters using CoreOS; on Azure it spins up clusters using Ubuntu. That difference gets away from our standardization rule for infrastructure, so now we'd have to go make a change and move the Azure one to CoreOS. Another thing we needed, for production, was to inject our own security rules and security policy — our own IDM solution and authentication pieces — and that's very hard to do here, because what you're getting is JSON files. There are a couple of JSON files in the back that actually get run, and I don't know about you guys, but I'm not too fond of writing or modifying JSON files that are two or three thousand lines long, mucking around with them for days trying to figure out why they break because I missed a space somewhere.

So we decided to build out what we call the multi-cloud design: how we would approach the multiple clouds and keep those tenets in place — simplicity, to be able to do it very simply, and standardization, to be able to do it in a very standard way. This is what the logical design looks like. First off, we have something we call input data. Input data is simply the data we need to hand to the system that's going to build the cluster in the cloud. This data can be as simple as a file — it can just be a text file if you don't have much — or as complex as a CMDB. In our case we're actually using a CMDB.
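As a hedged illustration of that input data — these field names are hypothetical, not our actual CMDB schema — the whole thing can be as small as:

```yaml
# Hypothetical input data (illustrative field names).
# Everything the tooling needs to know to build one cluster.
cluster_name: dcos-prod-eu
cloud: aws               # aws | azure | datacenter
region: eu-west-1
instance_type: m4.xlarge
master_count: 3
agent_count: 20
```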
From the CMDB we're able to get dynamic data, generate the file automatically, and then submit it to be built out. But at its core, this is just the standard data that needs to go in so the cluster knows how to set itself up: data like instance size — what kind of system am I going to use — and data like where I need to run my infrastructure, things like that.

The second part is the infrastructure data. What we mean here is: what do we need to build out that's specific to the infrastructure? This is where it gets really interesting, because I've been hammering on standardization, and here's the one place we actually didn't keep to a standard between clouds. That's an important point: there's "between clouds" and there's "within a cloud." The requirement we set was that between clouds, the tools do not have to be the same; however, within a single cloud, the tool has to be the same and the method of deployment has to be the same. For example, the tool we use to deploy to AWS has to be the same for any AWS region or any AWS account — but it doesn't have to be the same tool we use for Azure or for the data center. And why is this? Why didn't we stay standard here? Well, it's actually a very difficult problem to solve. There are a lot of companies trying to solve it today, saying: get this one tool, install it, run it, and it will deploy to all the clouds. It works great to start with, but it's really hard to update and really hard to keep up with cloud specifics — when cloud providers change things, the tool stops working — and so you get all of these problems with that one tool. So instead we made that rule, because it just worked better and it was a lot simpler.

So at this point we have our input data, we run some kind of tool, and we get IaaS — infrastructure as a service. This is great, except it's not quite what we want. We have infrastructure now, which means we have instances running with CoreOS on them — or whatever your operating system is; for us it was CoreOS — but they don't do anything. It's just IaaS; it's nothing more than instances. So there's a third, very important part: the cluster data. Once we have input data, we know how to generate the infrastructure; we generate the infrastructure; and then somehow we have to lay data on top so that a cluster can actually come up installed. DC/OS has done a great job with how they install their clusters and have broken that out — the Mesos masters, ZooKeeper, all of it can actually be installed using packages from DC/OS. We've incorporated that, and how they did it is actually within the user-data section; we'll dig into that in a second. But those are the three important parts we broke out. This is how we build across multiple clouds: with input data, with infrastructure data, and with cluster data.

So here's a little more digging into the tools. For AWS, how can you build stuff? You can go clickety-click, deploy some instances, install stuff, get it done — that doesn't work for us; it doesn't meet any of our requirements. You can also use CloudFormation — I talked a little bit about that; it's actually what DC/OS provides, a CloudFormation template — but that is JSON. Then there's Troposphere. This is actually a great tool: you write up your infrastructure as code and compile it into a JSON file — it compiles into a CloudFormation template. So instead of dealing with CloudFormation directly, you use Troposphere to write the code and then run it to generate the CloudFormation template.
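As a hedged sketch of what that looks like in practice — the VPC, subnet, AMI, and instance type below are placeholders, not our actual template:

```python
# Minimal Troposphere sketch: Python in, CloudFormation JSON out.
from troposphere import Ref, Tags, Template, ec2

t = Template()

vpc = t.add_resource(ec2.VPC("ClusterVPC", CidrBlock="10.0.0.0/16"))
subnet = t.add_resource(ec2.Subnet(
    "AgentSubnet", VpcId=Ref(vpc), CidrBlock="10.0.1.0/24"))
t.add_resource(ec2.Instance(
    "Agent1",
    ImageId="ami-12345678",      # a CoreOS AMI for the region (placeholder)
    InstanceType="m4.xlarge",
    SubnetId=Ref(subnet),
    Tags=Tags(Name="mesos-agent"),
))

# The generated CloudFormation JSON is the artifact that gets stored in Git.
print(t.to_json())
```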
And lastly, there's Terraform. I don't know how many of you are familiar with it, but this is an awesome tool by HashiCorp. It's able to build out cloud infrastructure and keep state on the infrastructure it builds, so when you run Terraform it actually knows what your state in the cloud is, and if anything is missing from that state it will replace it, and so forth.

What about Azure? We have the portal — you can sign into the portal and go click, click, click; again, not for us. We have ARM templates, which are pretty much the same as CloudFormation templates: they're JSON. And then there's something that came out only recently — Microsoft open-sourced this only two weeks ago; I didn't even have it when I originally submitted this talk, but now it's available and it's pretty valuable. It's called the Azure Container Service Engine. What Azure did here is open-source and provide to the community the tool they use to build their own clusters and systems within Azure. So if you go grab the Azure Container Service Engine, what you get is an engine that generates the ARM templates they use to spin up clusters in the cloud. This is great, because now you're able to use a tool to build infrastructure that generates a file. And finally, Terraform again — the great thing about Terraform is that it is actually a multi-cloud tool; you can use it across clouds.

And then there's the data center. This one's a little bit harder, right? You can't just go run an ARM template in the data center. I don't know how many of you are familiar with Salt, but we use it pretty heavily — and you can use any configuration management system here: Puppet, Chef, the list keeps going. The idea is to use configuration management to actually build up the infrastructure in the data center. There's another project called RackHD. This is an open-source project by EMC, and I like to call it a lightweight provisioner. Because now we're back in the data center, we have to do all these physical things again — what RackHD does for your physical infrastructure is the firmware management; it lays down the base OS and hands the machine over. Perfect for what we want in this case. And then lastly there's OpenStack; probably most of you are familiar with that project. I'm going to quote a very good friend of mine — he's pretty well known in the industry, and he tells me this whenever we start talking about OpenStack — and change the quote a little bit: OpenStack is not a thing unless you're a service provider or you have network segmentation needs. OpenStack is complex: it's not easy to stand up, it's not easy to operate, and its day-2 operations are very heavy. To build OpenStack in order to build a PaaS doesn't make sense — you'd be building an IaaS only to build a PaaS on top of it; why not just build the PaaS in the first place? But it is a tool you can use.

We also had another requirement that we set in place, what we call the output requirement: we wanted each tool to generate something that actually has a file-type format.
Why is this? Why do we care about generating a file format for the clouds? Well, the idea here isn't just that infrastructure should be spun up by a massive team of SREs, but that engineers should be able to go spin up that same infrastructure. It should be simple and standard. If we can do it with just a file and Git, then we've won a big battle, because now anyone can go spin up that infrastructure.

So here's our solution of tools. Once again, the output is a file that gets stored in Git. For AWS we're using Troposphere, because it generates a CloudFormation template we can store in Git; anyone can go grab it and use it. They don't have to know Troposphere — they can just use the output if they want to, and if they want to modify it, they can go into Troposphere and do that. For Azure, the Azure Container Service Engine I mentioned earlier generates an ARM template for us. And for the data center we're using RackHD — same idea. By storing this stuff in Git we're able to put a DevOps spin around it, where operations — or anyone — can go get the files and spin up the infrastructure.

Okay, so think about where we're at right now. We started by giving input data; the input data went to the infrastructure data; and the infrastructure is now stood up using some kind of file that we wrote, in which we gave a prescriptive design saying: this is how many load balancers I want, this is how many instances I want, this is the VPC, and so forth. What we end up with is a full set of infrastructure spun up, ready to go, CoreOS running on these systems — but nothing else yet. There's no intelligence in this system; it's not a cluster, it's just a bunch of hosts running CoreOS. Now we have to get the cluster data on there. What you're seeing right there is an example of the cloud-config component in the user data that DC/OS uses to stand up its part. Cloud-config user data is for the clouds: on Azure and AWS, you can use the user data to run things on first boot, and you can run cloud-config — or, if you're using Ubuntu, cloud-init. What these do is pull down the packages; it just runs a set of commands. We could dig into this much deeper, because it's actually a very interesting topic — you can go see exactly how they build out their cluster using cloud-init or cloud-config. We could also use SaltStack, but that defeats the purpose: if we use SaltStack, we have to have Salt masters, we have to have complexity, and then a developer can't go do it. No developer is going to stand up a Salt master and then run Salt states to install a cluster and take hours to get it done; he just wants to run it and have it work. So we needed to meet that bar, and cloud-init and cloud-config actually work really well for it. We really liked what DC/OS did here, and we pulled a lot of that out and started working with it. Anyone who has worked with cloud-config on CoreOS knows that it has limitations — there are some issues with it, and it's missing some functionality — so CoreOS has a new tool called Ignition that meets a lot of those requirements, and we're actually moving all our cloud-config user data into Ignition right now.
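To ground that, CoreOS cloud-config user data has roughly this shape — a hedged illustration; the unit name and bootstrap URL are placeholders, not the actual DC/OS bootstrap definition:

```yaml
#cloud-config
# Illustrative only: a first-boot unit that pulls down and runs an
# install script, the way cluster data can be laid onto bare IaaS.
coreos:
  units:
    - name: node-setup.service
      command: start
      content: |
        [Unit]
        Description=Install cluster packages on first boot
        After=network-online.target

        [Service]
        Type=oneshot
        ExecStart=/usr/bin/bash -c 'curl -fsSL https://bootstrap.example.com/install.sh | bash -s -- agent'
```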
So what's the end result? What do we get when this is all done? Run through that story once again: input went into infrastructure, infrastructure stood up a cluster, and now we have this. What you're looking at there is DC/OS, and it looks the same for Azure, for AWS, or the data center — and it comes up in pretty much the same time. So we're able to provision those clusters in multiple clouds in a standard way and get the same output and result, which is a cluster — an endpoint, a platform we can now deploy code to.

At this point you're thinking: well, that's cute, you can stand up clusters really fast, that's awesome. But if there are any operators in here — and our own operators, whom I respect very much, come to me and say: you guys in architecture always worry about standing stuff up; you go ahead and build this out and then just forget about day-2 operations. Operators worry about day-2 operations. Once this cluster is up, how does it look on day 2? How does it look on day 100? How do I operate this thing? How do I scale it? How do I maintain the infrastructure? How do I troubleshoot it? Well, the good thing here is that all of this has been defined and abstracted into code, so whatever we need to do, we can do in software. When we need to add instances — whether it's a Mesos agent we're adding or a whole cluster — or when we need to upgrade a cluster, we drain the workload off the cluster, stand up a new cluster using software, direct that workload to the new cluster, and kill the old cluster. If there's a physical machine in the data center giving us a lot of issues, there's no more SSHing in and looking at driver issues and all of that other junk — just kill the machine, shoot it in the head, and bring in a new one. We put a system around cattle, meaning day-2 operations become very much a software-driven affair.

Okay, so with that we now have the ability to spin up multiple clusters in multiple clouds. And you're thinking: that's fine, but you're saying you can kill a cluster and stand up another one — how does that actually work across multiple clouds when the networks aren't connected? One of the main projects our networking team has been working on for the last few years is called MCT, multi-cloud transport. It's been a huge effort, with some awesome work done by that team, to unify all these clouds. There's actually an underlay we introduced across all the clouds — and when I say all the clouds, I mean Azure, AWS, and the data center — so they can talk seamlessly to each other from a networking perspective. If you have an instance or a container running in AWS, and that container needs access to the data center, the traffic actually goes over a private network, over private IP space, and they talk seamlessly. This is a big step for us, because with this component we can now spin up a cluster or tear one down anywhere. If we have to upgrade a cluster and there's just not enough hardware in the data center, we can go spin up another cluster in AWS or Azure for the time being, upgrade the hardware stack in the data center, and then move the traffic back — because we have seamless network transport.

So that pretty much tells you the story of what our multi-cloud is, and it's very effective. One thing I want to do now is dive a little deeper into the private cloud, the data center. I kind of skipped over it earlier by just saying: oh, we use RackHD, it finds devices, and from those devices the cloud gets built, and it's all wonderful and everything works.
However, the data center is more difficult. Anyone who has worked in a physical data center — worked with the teams, worked at scale — knows it's complex and it's hard. Infrastructure as code is really hard to achieve there; like I mentioned earlier, it's not easy to just go run OpenStack. However, one of the things about the data center is that, because it has so much complexity, it also provides a lot of flexibility. We can make a lot more choices in the data center than we can in AWS or Azure. Yes, we can choose CPUs tuned for this workload; we can create a network that works a lot better for the workload than in a public cloud. Also, because of what the software provides from a resiliency and availability standpoint — those concerns have now been abstracted into Mesos and the layers above — we don't have to worry about individual machines as much as we did in the past. Network, storage, and compute change drastically in the data center because of this new world.

Another important lesson we learned: power and cooling are pretty important in the initial design. If there's anything you should think about when standing up data centers, it's those two things, more than anything else, because if there's anything you want to future-proof, it's those. When we were first building out the data center, we only had 7.5 kW per rack — and then someone says: whoa, but that was the old VMware environment, and that's not what we're doing anymore. I'm rolling in a rack with 41U in use and I need 17.5 kW per rack. And the data center just looks at you and says: no, that's not going to happen. So make sure you're future-proofing power and cooling; these are the hardest pieces to change once a data center is designed.

And then there are new designs for new data centers. Mesos has introduced a piece that simplifies hardware and abstracts all of those pieces out. What we can now do with the underlying hardware is fundamentally different, and because of that we need to think about moving away from old designs and into new ones. Some of my favorite reading is the work the Facebook guys are doing in their data centers; they've done an amazing job of making that transition to a new data center architecture. So part of our multi-cloud strategy is redesigning how we run in the data center. We call this project Greenfield, and what it is, is we re-evaluated compute, re-evaluated storage and network, and asked: based on containerized workloads, how do we run in the data center?

Compute fundamentally changes. We can now run white-box machines; we don't need expensive machines with two power supplies, a RAID controller, and a bunch of sensors that take 10 or 15 minutes to boot because they're checking to make sure the machine is 100% right. None of that is the priority anymore, because that machine is now cattle, not a pet. It makes things simple — we get much simpler infrastructure. Storage: storage is now abstracted into software. We can run HDFS, we can run something like ScaleIO, and we're able to aggregate and move the storage into software that isn't concerned with the hardware. And because of that, the hardware itself can just be those same machines. This is great, because what it enables us to do is collapse our models, collapse our infrastructure.
Instead of running a storage array over here, with SAN fabric over there, the network here, and servers all tied together with blade chassis switches — dealing with all of that complexity — we throw all of that away and say: that one node is a storage node and a compute node; it does all of the things, storage and compute. And the network — one of the pictures you see right here is the Clos architecture from Facebook. The network can go to a Clos design; it needs a good Clos architecture. East-west becomes very important: being able to traverse traffic between machines is critical if your storage is now spread across all of these machines. Some of the other stuff: pulling layer-3 BGP down to the host instead of running layer 2 in the network, and using projects like Calico to provide security and compliance at the container level.

And this is one of my favorite topics to debate: running Linux on the switch. Because we can now — but should we, right? And the answer is yes, because Linux is an awesome operating system, especially on a switch. We get years and years of maturity in monitoring and provisioning that we can pull into the data center — that's the most lacking component there. Now the switches become cattle and not pets. No longer does a network engineer need to log into a switch and do work on that switch; in fact, that should be a sin. A network engineer should be submitting code up to the repo, and the repo then updates the switches. Repos update the switches, not engineers.

So that tells you a little bit about the whole story there. I threw this next one in — it's not something we're doing today, but I wanted to call it out because I thought it was a really awesome project as I was digging into data center design. One of the problems we came up with is that if we move to this one-compute-model layer, so everything runs one type of compute, that's awesome — except for the Mesos masters. We have a bit of a problem: Mesos masters just don't require much to run, so we'd be giving quite an expensive machine — relatively; much cheaper than your normal machines, but still expensive — to the Mesos masters, for no reason other than to be standardized. In this project there are services running on the switch itself: they're running etcd, they're running DNS, they're running SnapRoute, which is a layer-3/layer-4 routing stack, so the top-of-rack switch is actually a router as well. But most importantly, you can see they're running the Mesos masters. This is looking at the future of the data center. This is why the data center is still relevant, still important: we can modify and customize the data center a lot more than we can any of the other clouds.

So, time for a demo. Let me go over the demo quickly and explain what you're seeing. If I actually showed you what we do in the data center, or what we do against the clouds, all you would see is one command line and then, at the end, the cluster — just the DC/OS screen. That makes for an interesting system, but not a very interesting demo. So what I did is take the JSON file generated from Troposphere — with the input data coming from our CMDB — and load it into CloudFormation in AWS. The file I'm going to load has been generated using Troposphere and CMDB data, and now it's going to stand up a cluster; I'll walk you through it as I go. Okay, you can see — I know the resolution is terrible here — what's going on is I'm going out and creating a stack, and I'm going to grab that file.
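For reference, the CLI equivalent of those console clicks is a single command — hedged: the stack name, file name, and parameter keys here are illustrative, not the exact demo values:

```sh
# Launch the Troposphere-generated template from the command line.
aws cloudformation create-stack \
  --stack-name demo-dcos \
  --template-body file://dcos-cluster.json \
  --parameters ParameterKey=MasterInstanceCount,ParameterValue=3 \
               ParameterKey=SlaveInstanceCount,ParameterValue=5 \
  --capabilities CAPABILITY_IAM
```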
Now, I could have been a developer, or I could have been a production SRE. I grabbed that file, and this is actually the input component, right? Usually this is generated and grabbed from the CMDB automatically, but I'm trying to show you how those input components look. It's things like: what is the stack's name — this one is "hds" — what is the number of slaves, and what is the number of masters you want in here. All of that input data is the data coming in that tells the cluster how to build. These are just some specifics for CloudFormation, and we're going to kick it off. CloudFormation starts and says: go ahead and build my cluster. Usually, building a cluster like this in the data center would take a good three months. And... we're done; the cluster is completed, the cluster is up. I'm going to go ahead and grab the output data now, which is just a URL to DC/OS — that's the DC/OS GUI coming up. I'm going to go ahead and authenticate, and there you go. That could have been Azure, that could have been the data center — it doesn't matter; it looks the same everywhere, it functions the same everywhere. What I'm showing you here: nodes are still coming up, cloud-config data is loading onto the nodes and configuring them. Some of the agent nodes haven't been configured yet, so they're in an unhealthy state, and the cluster itself is still in an unhealthy state. And — they're done, they've finished up too. So these nodes are now fully provisioned; they're available, they're being used. You can see all services are running; the cluster is healthy and good to go.

The presentation before talked a lot about the Universe. Something I absolutely love about DC/OS is the Universe; I think it's a great component, with all those packages in there. You can also go create your own private Universe. Here I pulled it up and I'm installing Jenkins — grabbing a package and installing Jenkins from the Universe onto the cluster — and you can see it launching in DC/OS. It's going to spike the resources, and then Jenkins is going to start running. It looks like this might be where it's at. Anyway.

Okay — what else should I know? This is just a bunch of resources. If there's anything interesting you found today — if you want to know more about Troposphere, if you want to know more about the ACS Engine — there's the GitHub for SnapRoute, for RackHD. All of these are open-source projects; a lot of people have been contributing to them, and awesome work has been done by everybody on providing these tools so we could actually build multiple clouds for Adobe. I think we have some time here for Q&A, and I understand it's lunch and you guys are probably pretty hungry — we've been sitting in here for about two hours now. So thank you, I appreciate it. If there's Q&A, please go ahead; otherwise, thank you guys — I appreciate the opportunity to be here, and thanks for having me.