Lights, OK. OK, thanks for coming. My name's DeWayne Filppi. I work for GigaSpaces Technologies, and we have a product called Cloudify, which is an open source orchestrator for cloud platforms and others. I'm going to describe some of the challenges and solutions involved in orchestrating multiple platforms simultaneously with the same orchestrator, and how this can be done, with a specific example: a real-world effort to orchestrate Kubernetes, along with microservices on Kubernetes, while having those services interact with external components outside of Kubernetes, all in a single description. I think this is a good way of explaining exactly the value of orchestration and the kinds of non-trivial problems it can solve. So we'll be automating not just Kubernetes; we'll also be throwing MongoDB in there, deploying a microservice, wiring it all up, and then scaling it.

OK, so here's our target architecture. For those who aren't familiar with Kubernetes, it's a container orchestration system. So in this case, Cloudify will be acting, to a certain extent, as an orchestrator of orchestrators with respect to Kubernetes. Again, the goal of the overall project is hybrid orchestration, where I'm orchestrating multiple platforms. In this case, we see the master and the minions: basically, the worker and the master are here, MongoDB is outside, and then there's the microservice. We can scale Kubernetes itself easily with a workflow, and we can also scale the microservice externally with the orchestrator.

Now, looking at that diagram, it's kind of hard to understand the complexity of the process and the need for an orchestrator. But this is a partial list of the steps required to do the orchestration. A lot of it involves coordination between the different parts. It also involves setting up the network properly and actually modeling the services inside and outside of Kubernetes. In this case, we're not trying to usurp Kubernetes' job of orchestrating containers; we're merely commanding Kubernetes from the outside. So among other things, we'll be dynamically creating templates of Kubernetes deployment descriptors and pushing those into Kubernetes, properly parameterized so that they can recognize the services that are outside Kubernetes. In addition to that, Cloudify provides the ability to auto-scale based on arbitrary metrics that flow into the system, so part of the orchestration is an auto-scaling workflow.

Automating all that is a lot of work. Anybody who's set up Kubernetes has discovered that it's a lot of work on its own. Having it all in an easily reproducible blueprint is highly valuable for making repeatable, deployable environments that are bulletproof. In addition, making them cloud-neutral is difficult. In this example, we're running everything on OpenStack: Kubernetes on OpenStack, MongoDB on OpenStack, so we're only using the OpenStack APIs. But the way the modeling language works in Cloudify, the clouds are pluggable, so the core orchestration doesn't have to change.

In order to understand Cloudify a little better, you have to have a basic understanding of TOSCA. TOSCA is an OASIS specification. It models deployments as graphs: essentially nodes and relationships. And a node can be anything you need to orchestrate, which includes network components, software components, virtual components, hardware components, and anything else.
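To make the graph idea concrete, here's a minimal sketch of a two-node model in Cloudify's TOSCA-based DSL. It uses stock Cloudify 3.x types, but the names and versions are illustrative, not the blueprint from this talk:

```yaml
tosca_definitions_version: cloudify_dsl_1_2

imports:
  - http://www.getcloudify.org/spec/cloudify/3.3/types.yaml

node_templates:
  app_host:                          # a node: here, a VM to be orchestrated
    type: cloudify.nodes.Compute
  app:                               # another node: software hosted on the VM
    type: cloudify.nodes.ApplicationModule
    relationships:
      # the relationship is the edge in the graph; it tells the orchestrator
      # that app can only be configured and started after app_host is up
      - type: cloudify.relationships.contained_in
        target: app_host
```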
It's essentially code at the root level. Nodes are implemented via a type system to avoid boilerplate. So you can create, for example, what our orchestration uses: an OpenStack node that's based on a lower-level node that has some basic interfaces and operations that are common across all target platforms. And then everything is connected by relationships. Among other things, the relationships let the orchestrator know the ordering and criticality of the different parts: what order they need to be started in and how they need to be connected. So, for example, in this standard sort of TOSCA view, all of these would be considered nodes in the OpenStack world: a floating IP, a network, a subnet, an application, the VM itself, and so forth. The orchestrator can interpret this model and deploy it in the most efficient way possible.

Once you have the model, you can actually run workflows on it. Pretty much nothing happens without workflows; that's the execution model. The models ultimately point to actual code that can be run in distributed fashion across the cluster or on the server itself. For example, a server node for OpenStack would define the operations needed for Nova, Neutron, and so forth.

So what we needed to do for Kubernetes in this project was define types for Kubernetes. There are some obvious types in Kubernetes: a master node, which is basically the controller, the minion node, and so forth. And there's going to be code associated with each of those that actually activates the APIs on either one. So in general, you would define operations on these to install and configure Kubernetes itself. Done properly, we would probably delegate this step to a CM tool like Salt, Puppet, Ansible, Chef, et cetera. In this case, not so; it's just Python. I'll show a rough sketch of these types in a minute. There are also custom types for MongoDB: mongod, the Mongo config server, and mongos. Actually, in the example I have, I only have a mongod running, but that's not important for the example. We define operations on these to install and configure, and again, that can be completely delegated to a CM tool.

So we model it out here. Essentially we want to have a microservice that runs under the master. The master is contained in a master VM, and the nodes, in the Kubernetes sense, run in the node VMs. And then when we scale, we can scale the node VM and the Kubernetes code together, so we can scale Kubernetes itself dynamically. When we want to scale the microservice, we can scale that instead. And that's what it looks like on a larger scale. For simplicity's sake, this model assumes a flat network; we don't have anything tricky with routing.

So a standard workflow essentially walks the model, as I described earlier, for the installation. It recognizes well-known endpoints that describe operations related to installation, like configure, install, start, and so forth, and those ultimately trigger either custom code that you write, or Chef cookbooks, or Puppet manifests, or whatever you have. Note that the VMs are independent, so when the model is traversed, they will all be instantiated in parallel. When the orchestrator gets done and these all finish, it's ready for the next level of dependency, and it marches down to the master node. That's installing the actual Kubernetes code on each of those. And then that's it.

Now here's what the actual descriptor looks like. In this case, for OpenStack, you'll note that the master host has a type, which is an OpenStack server. It has some probably familiar properties, such as flavor and image.
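As promised, here's a rough sketch of how those Kubernetes types might be declared. The type, plugin, and task names are assumptions for illustration (and the plugin declaration is omitted), not the project's actual code:

```yaml
node_types:
  # plain Python tasks implement the lifecycle here; a CM tool (Salt,
  # Puppet, Ansible, Chef) could be swapped in instead
  kubernetes.Master:
    derived_from: cloudify.nodes.SoftwareComponent
    interfaces:
      cloudify.interfaces.lifecycle:        # standard lifecycle endpoints
        create: kube_plugin.tasks.install_master
        configure: kube_plugin.tasks.configure_master
        start: kube_plugin.tasks.start_master
  kubernetes.Minion:
    derived_from: cloudify.nodes.SoftwareComponent
    interfaces:
      cloudify.interfaces.lifecycle:
        create: kube_plugin.tasks.install_minion
        start: kube_plugin.tasks.start_minion
```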
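And here's roughly the shape of the master host descriptor just described, using the standard Cloudify OpenStack plugin types. The values are illustrative:

```yaml
node_templates:
  master_host:
    type: cloudify.openstack.nodes.Server        # from the OpenStack plugin
    properties:
      flavor: m1.medium                          # familiar Nova properties
      image: ubuntu-14.04-x86_64
    relationships:
      # executable relationship: firing it during install triggers the
      # OpenStack APIs that attach the server to the security group
      - type: cloudify.openstack.server_connected_to_security_group
        target: kubernetes_security_group

  kubernetes_security_group:
    type: cloudify.openstack.nodes.SecurityGroup
    properties:
      security_group:
        name: kubernetes_sg
      rules:
        - port: 8080                             # e.g. the Kubernetes API
          remote_ip_prefix: 0.0.0.0/0
```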
It also has some relationships defined. Ultimately, these relationships are actually executable code: when they fire during the install process, they trigger OpenStack APIs to make the proper connections, for example, connecting the security group. This is just an example security group definition; nothing particularly tricky there. It looks pretty much like what you'd expect.

Now here's a special Kubernetes microservice type and an example of how it would be implemented. It's very simple. It's got a base type up here, but the main thing is that it's got some basic properties here that need to be filled in: the image to run, the port for the service, the target port to map to. Essentially, all of these values are going to get substituted into the actual Kubernetes descriptor for the replication controller before it gets deployed. And then here we see the interfaces for the behavior. This is just an example of how arbitrary code is tied to lifecycle events. For example, up here, these are actually pointers to Python packages; this one is actually running and exposing the Kubernetes service, and another lifecycle stop event might trigger a delete. These are all standard interfaces for Cloudify. OK, I already went over that. Let's go; we're running out of time.

Here's a little bit more about the implementation of the microservice type. Essentially, one of the ways to configure the custom microservice type is to specify an external service definition file and then just define overrides, and that's what's going on here. This is the way that information from outside of Kubernetes can be injected into the service launch. The microservice, in this case, is the old nodecellar application, and this is how we actually feed it the MongoDB port, IP, and other information it needs. In addition to this, the Kubernetes pod that's created will have metrics gathering, which will eventually find its way back to the Cloudify server. These are the native Kubernetes descriptors; this is essentially what we're overriding at runtime. Like I said, I'll just skip over this so we have time. Anyway, as part of the pod, we insert Diamond collectors. Diamond injects metrics into RabbitMQ on the Cloudify server, from which various workflows can be triggered, including auto-scaling, auto-healing, and so forth.
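Pulling those pieces together, here's a hypothetical sketch of what the microservice node might look like in a blueprint. The type name, property names, and the way the Mongo endpoint is injected are all assumptions for illustration; the real plugin's syntax differs:

```yaml
node_templates:
  nodecellar_service:
    type: cloudify.kubernetes.Microservice     # assumed custom type name
    properties:
      name: nodecellar
      image: mydockeruser/nodecellar           # container image to run
      port: 30001                              # service port
      target_port: 8080                        # container port it maps to
      env_overrides:
        # substituted into the generated Kubernetes replication controller
        # and service descriptors before they're pushed to the master;
        # in the talk the Mongo endpoint comes from a separate MongoDB
        # deployment living outside of Kubernetes
        MONGO_HOST: { get_attribute: [mongod_host, ip] }
        MONGO_PORT: 27017
```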
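And because those metrics end up in Riemann, auto-scaling can be declared as a policy on a group. Here's a sketch using the Cloudify 3.x built-in policy types; the exact property and workflow parameter names vary by version:

```yaml
groups:
  kubernetes_minions:
    members: [minion_host]                     # the node VMs from the model
    policies:
      high_cpu:
        type: cloudify.policies.types.threshold
        properties:
          service:
            - cpu.total.user                   # metric stream fed by Diamond
          threshold: 80                        # fire when the metric exceeds 80
        triggers:
          scale_out:
            type: cloudify.policies.triggers.execute_workflow
            parameters:
              workflow: scale                  # built-in scale workflow
              workflow_parameters:
                node_id: minion_host           # newer versions call this
                delta: 1                       # 'scalable_entity_name'
```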
So, takeaways. TOSCA makes complex orchestrations more understandable. It hides the cloud APIs completely behind type definitions. It's an orchestrator that can render a TOSCA blueprint on any infrastructure, so it's not limited to clouds: any kind of virtual infrastructure, cloud infrastructure, or physical infrastructure, it doesn't really matter. And it's an orchestrator that can coordinate other orchestrators. Essentially, it can orchestrate pretty much anything.

This is what the Cloudify server looks like. Ignoring this piece here, which is basically related to the REST API, we have Elasticsearch for the database; RabbitMQ at the heart, which is where all the events, really all of the executions, are being fed into; the Celery task broker, which is doing all of the remote executions for us asynchronously; InfluxDB, which stores a time series database of collected metrics; and Riemann, which does the real-time event processing to trigger automatic scaling.

Having said that, I actually started it up before this, so let's take a look at it. All right. Oh, that's good. This is the Cloudify UI. The way these deployments were architected was to separate them. So the meaning of this K here, the blueprint: this is Kubernetes. Here's the graphical representation of that relationship, the master being inside the master host. The Kubernetes grouping is just more of a graphical separation; it doesn't have any real meaning. There's the minion host, and there's a representation of the security groups and the networks. If we want to look at the source of that... it's too slow. Error browsing, great. So let's look at... OK, MongoDB is very simple: this is the host with Mongo, and this is the hybrid service. OK, and here you get the idea. The hybrid service is essentially reaching out to the other deployments. They all have completely distinct lifecycles; they can come and go as they please. This connects nodecellar to the Kubernetes proxy.

So if we were to look at how this actually... well, let's make sure it's actually running here. OK, so this is the actual application that's launched. This is running as a microservice in Kubernetes, and it's hitting the Mongo database described in there. Let's look at the Kubernetes side. This is a Kubernetes master node. On here, we can actually see the pod running, with the nodecellar app in it and the Diamond collector in there, and the events flowing back to the server. That's weird. Very long. OK, well, never mind that; we won't look at that. You'll just have to take my word for it: the events are flowing back into the real-time event processing. If we want to run workflows, we can actually come here. That's interesting. I lost it. I lost that. OK, never mind.

So now, one of the things that this kind of high-level orchestration lets you do is embed this capability for advanced orchestration inside of another front end that abstracts the cloud platforms. This is a product by a partner of ours, Mist.io, that has a sort of unified management view across multiple clouds. What they do is allow the deployment of templates. For example, here's Kubernetes on CoreOS and so forth. It lets you target any cloud independently, so you can offer that up as a service that's easily consumable. This product is built on top of Cloudify, which takes care of all the automation in the back end, and they take care of all the sweetener on the front end. And I see we have zero minutes. Any questions, though? No? Thank you.