My name's Dwayne Philpey. I'm from GigaSpaces, Cloudify in particular. Today's talk is focused on hybrid orchestration, and model-driven orchestration in particular. We at GigaSpaces are big fans of OpenStack and have been involved since the very beginning; in fact, our first cloud orchestration was on OpenStack. But the premise behind this talk is that there are circumstances where a hybrid approach is needed: a mixture of different technologies, possibly containers, possibly cloud, possibly bare metal, all managed in a common orchestration framework. That's what we're really focused on here, and model-driven is key. There's a fair amount of unpacking I need to do pretty fast to get us to the point where the little demo at the end has any meaning, so let me go through some of these concepts to give you the background.

First, an introduction to model-driven orchestration. The basic concept is the difference between imperative and declarative. Since we're all OpenStackers here, we're familiar with Heat and HOT, which are an example of a declarative model for infrastructure. The alternative is something more script-oriented, something very tailor-made rather than a model. The basic idea behind the model-driven, declarative approach is that you're separating the nouns from the verbs in an orchestration: you're describing what needs to be done without necessarily saying exactly how it's going to be done. The model, to some extent, represents a goal state for the system.

A model by itself doesn't do anything; it just lies there passively. You need something to operate on it, and that's an orchestrator. In the case of a Heat template, you need Heat, of course, and Heat is going to take that static representation and ultimately turn it into API calls on the back end.
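To make the declarative idea concrete, here is a minimal sketch of the kind of YAML model being described: a hypothetical HOT fragment. The resource type and property names follow standard Heat usage; the specific values are invented for illustration.

```yaml
# Hypothetical HOT fragment: a declarative model of a server.
# The template names the goal state; Heat turns it into Nova API calls.
heat_template_version: 2015-10-15

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      key_name: demo-keypair   # illustrative value
      image: centos-7          # illustrative value
      flavor: m1.small         # illustrative value
```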
There's just a little code snippet down there of a standard YAML-based model, describing a key and the usage of a flavor; the same thing would apply to network components. Now consider a slightly more complex example with relationships. We have the concepts of containment and connected-to relationships here: for example, I might have a container inside an OpenStack server, an SDN controller inside that, maybe pointing at a vRouter in another instance, with Neutron underlying all of it in the case of OpenStack. We're going to expand on this and generalize it a little more. The model becomes more complex when we configure relationships, and we want the orchestrator to be smart enough to take those relationships and use them in the orchestration, mainly for the ordering of operations. You're essentially building a dependency graph in memory, and the orchestrator builds a set of tasks that fulfill the relationships and the target nodes in the graph.

So a tool like Heat, and ultimately a tool like our Cloudify, has to implicitly understand the relationships in the model, and it has to understand what workflow is being run: if I have a model, install and uninstall are just two possibilities of what I could do with it; I could run any kind of operation on top of it, software upgrades, anything you can imagine. It has to understand the types in the model, the nouns: obviously Heat understands what a server is, what a network is, what a port is, what a security group is, and all the rest of it, but these are all implicitly baked into the orchestrator. It has to know which API calls are associated with which types (these are the verbs), and it has to execute those API calls to produce the goal that the workflow is associated with.
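Before moving on, the dependency-graph idea above can be sketched in a few lines of Python. This is not Heat's or Cloudify's actual implementation, just an illustration of how an orchestrator can turn relationships into an ordering of tasks; the node names are invented.

```python
from collections import defaultdict, deque

def install_order(relationships):
    """Topologically sort nodes so each node comes after its dependencies.

    relationships: list of (node, depends_on) pairs, e.g. derived from
    contained-in / connected-to relationships in a model.
    """
    dependents = defaultdict(list)   # dependency -> nodes waiting on it
    indegree = defaultdict(int)      # node -> number of unmet dependencies
    nodes = set()
    for node, dep in relationships:
        dependents[dep].append(node)
        indegree[node] += 1
        nodes.update((node, dep))

    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("cycle in relationship graph")
    return order

# Invented example: a container contained in a server, an SDN controller
# contained in the container and connected to a vRouter on another server.
rels = [
    ("container", "server"),
    ("sdn_controller", "container"),
    ("sdn_controller", "vrouter"),
    ("vrouter", "server2"),
]
order = install_order(rels)
```

With these example relationships, `install_order` guarantees that `server` is processed before `container`, and that both `container` and `vrouter` come before `sdn_controller`, which is exactly the ordering an orchestrator needs.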
Now, if we pull back a little and consider something more general purpose, we come to TOSCA. TOSCA is an OASIS specification, and that's what the name stands for: Topology and Orchestration Specification for Cloud Applications. Despite saying it's for cloud applications, it's actually far more general purpose than that; if you just eliminate the "cloud", it's any application. It's really a general-purpose modeling tool that can apply to virtual infrastructure, no infrastructure, containers, clouds, or some combination of the above. It breaks things down into three parts: the topology, which is the model I was discussing; the workflows, which are the operations you might perform on the model; and then policies, which I'm not really going to get into much today. The policies are interpreted as a sort of post-deployment dynamism: policy in the sense of self-healing, auto-healing, auto-scaling, that type of thing. There's a way the orchestrator can associate a set of actions with the model to say, okay, if these three nodes are experiencing very high load, then incrementally scale that particular tier, and so forth, up and down.

The TOSCA meta-model is extremely unopinionated. The difference from Heat, I think, is that TOSCA is almost like a programming language: you can define your own types. For example, we have type definitions for Heat itself in TOSCA, but also for the individual nouns (routers, subnets, ports, VMs and so forth) in OpenStack or any other cloud. You wind up with a menu of possible types, either provided by plugins from third parties or created yourself to address specific needs. TOSCA also has the concept of requirements and capabilities, which permits a certain kind of auto-wiring inside a topology: if I don't want to say specifically that I'm tied to a certain virtual machine, I can be more general about it and say I need a CentOS Linux box, at least version 7, with so much RAM and so forth.
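A hedged sketch of what such a requirement might look like in TOSCA Simple Profile YAML. The general node_filter shape follows the spec; the template name, values, and exact filter layout here are illustrative, not a definitive blueprint.

```yaml
# Hypothetical node template: instead of naming a specific host, the
# requirement describes the capabilities any matching host must have.
node_templates:
  my_component:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host:
          node_filter:
            capabilities:
              - host:
                  properties:
                    - mem_size: { greater_or_equal: 4 GB }
              - os:
                  properties:
                    - type: { equal: linux }
                    - distribution: { equal: centos }
                    - version: { greater_or_equal: "7" }
```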
Operations are the verbs. The model doesn't presume any operations either; it provides hooks, basically, for connecting logic, and this is where the plugin architecture comes in. The model is independent from what realizes it: in TOSCA you have an OpenStack plugin, and the nodes that are related to OpenStack will activate the OpenStack APIs. You can define your own workflows. There are certain built-in ones (install and uninstall, obviously, plus scale, heal and so forth), but you can write any kind of workflow you can imagine; the basic idea is that a workflow is an arbitrary bit of code that the model is handed to, and it's really up to the workflow what it does with it. This is where you might fit in something like a very tailored upgrade scenario, or some kind of blue-green testing.

The model isn't just static, either. In the hands of the orchestrator, the model lives in memory, and dynamic information about the orchestration is plugged back into it. Just as in an object-oriented programming language, you can have run-time state for things that are not specified in advance. For example, if you're getting IP addresses from DHCP, you can stick those in the model, and other workflows can pull them out later: if I'm instantiating a database and allocating a floating IP address to it, that floating IP can be available to other elements in the orchestration later on. And of course there are user-definable policies; I'm not going to get into those, but you can do effectively real-time event processing based on metrics you're receiving from the system, to trigger workflows.

I've got to go faster than this. Here's a good example of TOSCA in a nutshell: a node type has properties, and it has interfaces; the interfaces represent the actual endpoints where functionality is connected to the node.
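As a hedged example of those hooks, a node type might wire its lifecycle interface to scripts or plugin operations like this. The Standard lifecycle operation names come from the TOSCA spec; the type name and script paths are invented.

```yaml
# Hypothetical node type: the interface entries are the hooks where
# pluggable logic (scripts, plugin calls) is attached to the model.
node_types:
  example.nodes.WebServer:
    derived_from: tosca.nodes.SoftwareComponent
    properties:
      port:
        type: integer
        default: 8080
    interfaces:
      Standard:
        create: scripts/install_webserver.sh
        configure: scripts/configure_webserver.sh
        start: scripts/start_webserver.sh
```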
For example, if you had a node type that was an OpenStack server of some kind, the interface would be where the hook to the actual OpenStack API goes. The key thing to note is that those hooks are pluggable, so they're not fixed to the orchestration, and then there are whatever requirements go with it; there may be built-in requirements that the node type needs. The node also has properties, very much in the vein of HOT itself: you have properties for the nodes, and those are fixed. Not shown there are the run-time properties, which are dynamic.

Then this is Cloudify's view of it. Cloudify is a TOSCA-inspired orchestrator, and basically the orchestrator sits at the center: the application blueprint comes in the front, and there are many plugins out there for all of these platforms. A given application blueprint is not tied to any of them, and not only is it not tied to a single platform, but within a single orchestration you're not tied to one platform either: every node in a TOSCA orchestration could be pointing at a different IaaS provider, or network SDN controller, or anything, so it's extremely wide open in that sense. Then there's the concept of workflows: the built-in ones are install, uninstall, scale and heal, and they're declarative, based on the application topology. This is turning into a lightning talk now, so I'll skip over policies.

Okay, now we get to at least some background for the hybrid orchestration example. The concept for this talk is a hybrid environment where we have Kubernetes running on bare metal, actually, and a database running in OpenStack, and the question is how we orchestrate that. We're running our old favorite Node.js demo called nodecellar, and this is just a peek into the actual configuration; there's no point in digging into it too much, but we do have the idea of the Kubernetes master, and this is a snippet from the Kubernetes orchestration.
You can see how the Kubernetes master has a relationship here: it's contained in the Kubernetes master host. The master host looks a lot like the definition of a virtual machine in a Heat template. The one thing to note, though, is that I don't actually have bare metal here; I don't have a rack I'm going to bring in. All I did for the demo was create a couple of virtual machines in advance, and we have a plugin called the host-pool plugin, which basically manages a set of IP addresses, lets you attach characteristics to them, and then lets you refer to them in an orchestration as though they were being spawned by a cloud infrastructure-as-a-service. In any case, we have the master here, and you see all the settings for Kubernetes: we have flannel, etcd and all that good stuff. The way this was actually implemented is using the Docker multi-node setup that Google supplies. Just for simplicity, we also have another orchestration that goes through and tears down every component as well.

Now, on the other side, we have a separate orchestration called MongoDB. This is just starting a MongoDB server, and it's the same pattern: we have a mongod, the actual Mongo database, and it's got a replica set. I'm not going to go into the scaling of that; I have another demo where I actually scale it based on the activity in Kubernetes, but there isn't going to be time. The mongod relies on the MongoDB host, and the orchestrator will order this by noting that dependency: based on the relationship, it starts the OpenStack instance and then installs MongoDB on it.

All right, then there's one more service. I broke this into three different orchestrations because I thought that was more realistic; all of these could live in one orchestration, but then they would all share a common life cycle, which I don't think is terribly realistic.
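Coming back to the MongoDB ordering for a moment, here is a hedged sketch of that pattern. The contained-in relationship type is Cloudify's; the database node type name and the server properties are simplified and invented for illustration.

```yaml
# Hypothetical blueprint fragment: the relationship tells the orchestrator
# to create the OpenStack host first, then install mongod on it.
node_templates:
  mongod:
    type: example.nodes.MongoDatabase        # invented type name
    relationships:
      - type: cloudify.relationships.contained_in
        target: mongodb_host
  mongodb_host:
    type: cloudify.openstack.nodes.Server    # from the OpenStack plugin
    properties:
      image: centos-7                        # illustrative
      flavor: m1.medium                      # illustrative
```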
This is the actual hybrid nodecellar service, and in this case we're taking the approach of a custom type, called the Kubernetes microservice, up here. What it does is try not to disturb the native Kubernetes configuration. It takes files as input; you'll note there's a file up there called pod.yaml and a file here called service.yaml. The service doesn't really change for this, but what we can do, through a custom plugin, is override information like the Mongo IP and port, and certain environment variables that are needed for monitoring and so forth. These get plugged into the actual Kubernetes container descriptor, and when the service is rendered, the Kubernetes service can actually contact the mongod, which is running externally.

I think we're getting close to the end; that's the end of the slides, so let me get to Cloudify. This is just a high-level view of the Cloudify UI (it's not the latest Cloudify UI, but it's good enough). Here we see nodecellar, which is the app, and it's actually connected to two external orchestrations. These are references, and this is another example of the power of plugins: the Kubernetes proxy and Mongo proxy here are actually separate orchestrations that are running. This allows us to do a sort of composition of orchestrations, each of which can have its own scope of responsibility, but which can still exchange information with each other.

Now if I go in here and say "execute workflow", I can pick from a number of workflows, like scale, install and so forth; in this case I want install, and basically the gods of demos are cooperating with me today. What's happening now is that the plugin is pulling the information from MongoDB, plugging it into the Kubernetes configuration, and shipping that configuration to the Kubernetes master node, where it's actually instantiating the service and bringing it up. All the green check marks are there, and I can show that the site can be reached now; there it is. Okay, now obviously this was all done in OpenStack, because that's the cloud I'm working on.
But there's no particular reason why the Kubernetes instance I'm running had to be located on any particular set of IP infrastructure: I just gave it a list of IP addresses, it installed there, pointed the service at both endpoints, and tied everything together. The power of that, too, is that the service can respond to scaling events from Kubernetes. It's a little bit far-fetched, but say you're monitoring the CPU workload on the Node.js nodes and they reach a certain capacity. Obviously Kubernetes itself can handle scaling Node.js, so we don't need to handle that in our orchestrator; but if it gets to a certain threshold and I want to scale MongoDB, and that is a clustered MongoDB blueprint, then the orchestrator can actually scale MongoDB as well, on a completely separate platform. So the connection can be very intimate there. And that's it.