That's a good start. Anyway, we have a lot of ground to cover here, so we're going to move very, very fast, and we won't take questions during the session. That's not one of my habits, but we'll take questions toward the end. The topic of this presentation is really to cover different kinds of orchestration and try to compare between them. We're not going to take a deep dive into each one of them, but it will give you the tool set to actually choose the right tool for the job, and hopefully a way to understand how they compare. With me are Uri and Dan. Again, because we have a lot of ground to cover, I needed their help and they jumped in. Some of them may want to curse me now, but it was an interesting exercise anyway.

The way we structured the presentation is that we'll first try to briefly define orchestration for those who are less familiar with it, but the bulk of the presentation is really to try to define how we compare tools that don't really share the same standard or the same API, and to give some sense to that. That was the biggest exercise. The bulk of the presentation is to go through each tool, give a quick introduction to it, and show what it means to actually manage an application, to orchestrate an application, with it. I think the end result of that exercise is that you will have a better understanding of the classification of each one of them and which problem it aims to solve. You will not get the technical deep dive of understanding how to work with each one, but you'll get a good sense of what it means. And if we achieve that, then we've done a good job.

In terms of understanding what orchestration is all about, I think the simplest way to think about it is as the robots of IT. In my view, as we move to DevOps, it takes a role very similar to the one robots took in car manufacturing: it's basically to take a manual process and automate it. That, in a nutshell, is what orchestration really means. At the end of the day we have different kinds of orchestration, and I'll get to why, but they all seem to have common ground, which is basically: take a specification, a plan of what we want to automate, and then the engine itself processes and executes it and gives me a result equivalent to the process I would otherwise do manually. That, in a nutshell, is what orchestration aims to solve and provide.

There are different approaches to orchestration. In many cases they differ because of different assumptions about the architecture, for example. If I'm aiming to build microservices, then I'll build the orchestration in a certain way that caters to that. If I'm trying to be more agnostic, I'll build the orchestration differently, and that impacts the design quite significantly. Similarly for the scope of the problem: are we dealing mostly with installation and standing up the environment, or are we trying to automate things all the way to production? That's another set of differences. So in this exercise we wanted to classify the tools into basically three main categories. We call them the infrastructure-centric, the container-centric and the pure play. On the infrastructure-centric side we have something like Heat. On the container-centric side we'll see Kubernetes and Docker, and on the pure-play side we'll have TOSCA and Cloudify on one hand and Terraform on the other hand.
So how do we compare tools that don't comply with the same standards and APIs? The method we chose is to take an application, Node.js and MongoDB, which all of you are probably aware of even if you haven't worked with them, and basically run through the process of automating their installation, configuring them, scaling them out and getting them into production, and seeing how far each tool gets me there and what the actual process looks like in each of the tools. Obviously we cannot cover all the tools in the market, and we actually spend only about five minutes on each tool, so we chose a selected list of tools in each category and that's what we focused on. We haven't selected PaaS or other things in that regard. Now let's look at the actual test application, and for that I'll hand over to Uri. My colleague here will explain what the orchestration really looks like.

Okay, thanks Nati. So basically we took a kind of simple Node.js and Mongo application, and to make it more production-like we chose to use a full-fledged Mongo cluster. You can see here replica sets at the bottom and Mongo config servers. There's also the infrastructure side of things, where we basically provision a network and a subnet, hook the VMs into that network, then create an LBaaS pool through OpenStack and register the Node.js instances in it. Here you can also see security groups. So basically all the infrastructure resources.

Now let's just go through the setup process. The first thing we need to do when we deploy such an application is create the resources, the infrastructure. So again: VMs, security groups, networks, load balancer pools and all this stuff. Already here you need to do things in a specific order — for example, create the networks and subnets before you create the VMs. Same goes for the security groups. There's already some sort of orchestration here. Then we need to install the software binaries, Node.js and Mongo. If we're using Docker, that's just downloading the images from some repository. If we're using non-dockerized workloads, we just do yum or apt-get or unzip something.

Then we need to start the processes. We can't do it all at once, because some processes depend on one another and some need to be configured after other processes have started. So we start the mongod processes in the replica sets. Then we start the Mongo config processes. Then we start the mongos processes, but we need them to be aware of where the Mongo config processes are located, so there is some information passing here that the orchestration tool needs to do. Then we need to configure the replica sets. Here we need to pick only one instance, designate it as a master, and add the slaves to it. Then we need to add the shards one by one in Mongo. This is actually done as a command on a single mongos instance. It's pretty weird, but that's actually a good example of something that's not very symmetric and pretty hard to orchestrate. Then we basically initialize the data if we want to, start the Node.js processes and register them with the load balancer.

This is only setup. It's a very complex process for quite a simple application, and we're not even talking about the in-production concerns, like monitoring and log collection, how you wire all the agents, how you do healing and scaling, how you detect errors.
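Before moving on to those day-2 concerns, here's a rough sketch of what those manual setup steps might look like from a shell; every host name here (db1, cfg1, mongos1 and so on) is hypothetical.

```sh
# A rough sketch of the manual ordering described above.
# 1. Start the shard servers (replica-set members) on each data VM:
mongod --shardsvr --replSet rs0 --port 27018 --fork --logpath /var/log/mongod.log
# 2. Start the config servers:
mongod --configsvr --port 27019 --fork --logpath /var/log/mongocfg.log
# 3. Start mongos, telling it where the config servers live:
mongos --configdb cfg1:27019,cfg2:27019,cfg3:27019
# 4. Initiate the replica set on exactly ONE member, then add the others:
mongo --host db1 --port 27018 --eval 'rs.initiate()'
mongo --host db1 --port 27018 --eval 'rs.add("db2:27018"); rs.add("db3:27018")'
# 5. Register the shard -- a command issued against a single mongos instance:
mongo --host mongos1 --eval 'sh.addShard("rs0/db1:27018")'
```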
For example, if some piece of the stack falls over, you want to fix only that piece, not all the other pieces, and make sure that the dependent components are ready and know about the fix. Then we get into post-deployment aspects like continuous deployment. If I want to push new code, for example, I may need to do it, let's say, two instances at a time. If that fails, I need to roll back; if it doesn't fail, I need to push the change to the rest of the instances. If I want to apply infrastructure patches, like an OS patch or something like that, I need to take instances out of the load balancer one by one, apply the patch, and start them again. These are all pretty complicated processes, and in the same way that I modeled the initial setup process, I need to model these processes too.

That brings us to a few common requirements that we want from an orchestration tool. We want it to do dependency management: we need to be able to tell the orchestration tool, create this before that, and tell this component where that one is located. We need plans to be reproducible: we need to be able to run the same plan over and over again and achieve the same end result. We need them to be clonable: we want to run the same plan side by side multiple times, so we can use the same plan for QA, for pre-prod, for testing, for production. And we need it to be recoverable. Recoverable means that if my execution fails for some reason — I don't know, I exceeded my quotas or something breaks in the middle — I need to be able, even as a manual step, to basically clean up the environment and start over without failing. With that, I'll pass to Dan, who's going to talk about the Docker aspect of things.

Hi. Okay. So we'll be speaking about Docker Swarm and Kubernetes. We'll start with Docker Swarm. It's a native clustering system, written by Docker. Basically, it's a pool of hosts that all have a Docker daemon installed on them, and from the client's perspective you use it as if you were using a single Docker daemon, but it has features to schedule containers according to requirements. The architecture is pretty simple: you have the Swarm manager and you have the daemons. As I said, the client speaks to the manager, and as we'll see, you can specify certain aspects of the scheduling.

If I compare it to all the points we just mentioned, it's pretty low-level if you use a tool like Docker Swarm. Basically you have this huge pool of instances, and you just deploy containers one after the other in the correct order. To highlight a few points: for instance, when we create the replica sets — say we have N replica sets — then for each member of each replica set, we simply deploy a container that has MongoDB installed on it and specify the command. The things to note here are the affinity and constraints, which is where the real power comes from. We say: do not place this container near any replica set instance or near any configuration instance, because we want high availability; or, do place it on the daemon labeled mongodb, because the image is already there, so it'll get deployed faster. After we start these processes, we want to configure the replica set. So for each replica set, we still have to somehow manually run the initialization code, either by SSH or by doing a docker exec on one of the containers.
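As a minimal sketch of what deploying one replica-set member with such placement hints might look like: standalone Swarm of that era passed scheduling hints as environment variables; the manager endpoint, container names and engine labels here are assumptions.

```sh
# affinity: keep this member away from other replica-set containers (for HA);
# constraint: prefer engines that were started with --label mongodb=true.
docker -H tcp://swarm-manager:4000 run -d --name rs0_member1 \
  -e "affinity:container!=rs0_member*" \
  -e "constraint:mongodb==true" \
  mongo:3.0 mongod --replSet rs0 --shardsvr
```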
So we'd use a command such as docker inspect to extract the IPs of the containers. Another highlight: let's look at the Node.js application container. It's pretty similar. One note here is that we need to inject, for instance, a set of IPs. Docker does have linking, but it's still not well supported across multiple hosts, so we need to inject them ourselves. That's another manual process, so you'd need a layer on top of it to manage that for you somehow. Nothing very new here either: say we have an HAProxy container and we want to dynamically add and remove members. One way you can accomplish this is by having a script on that container and just executing the commands. If we're looking at scaling, nothing very special about it; it's very similar. We didn't show the whole deployment process, but it's pretty much: say we have three replica sets and I want a fourth one, I just execute the same commands with a new index, basically.

This was pretty fast, because it's not really feasible to cover the entire process, so I'll just share my thoughts about it. The really nice thing, first, is that it's pretty easy to use. If you know how to use Docker, you just start using it. If you know how to deploy the application with scripts, you can model the same process with containers and just do whatever it takes in the correct order; there's nothing really new to learn. A strong feature is the placement affinity. As we saw, you can easily achieve high availability of different components simply by saying: don't put this where there are similar containers. That's a strong feature. Okay, we said nothing about setting up the cluster itself, so we did not handle infrastructure at all. You can use Docker Machine, for instance, or — we just saw Magnum, which can set up a Swarm cluster for us — so that should be pretty easy. What else? The process involved many, many steps. With the replica sets, if you want to add or remove members or add shards, you have to manage all these indexes through the naming of the containers; you'd need to write a layer to manage that yourself. There's no workflow or anything; you just run the scripts. And, as you see, it requires other tools: all these other aspects, such as monitoring and healing and autoscaling and whatnot, are handled by other tools, not by Docker Swarm. I hope there's added value here.

So we'll move to Kubernetes. It's another container cluster manager, this one by Google. There are some key components to talk about. A pod, basically, is a set of containers that run together. They share the same network namespace, so they can talk to each other using localhost, and they share volumes. For instance, you'd have a mongos and a Node.js application running in the same pod so they can speak with one another through localhost; that would be a typical use case. The replication controller is a way for you to achieve healing: you specify how many replicas you want and Kubernetes takes care of the rest. It's a common pattern to see a replication controller that wraps a pod with one instance; that's the way to keep it alive.
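To make the pod idea concrete, here's a minimal sketch of such a co-located pod; the image names, entry commands and config-server addresses are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nodejs
    image: node:0.12
    command: ["node", "server.js"]    # reaches mongos via localhost:27017
  - name: mongos
    image: mongo:3.0
    command: ["mongos", "--configdb", "cfg1:27019,cfg2:27019,cfg3:27019"]
```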
If a node fails, Kubernetes will take care of moving that pod to another one. Networking-wise, each pod gets its own IP. As we said, within the pod the containers speak through localhost, but outside, each pod gets its own IP address regardless of what host it's running on; that gets abstracted away. And the service is the last key thing in Kubernetes. With a service you wrap a set of pods, selected by a label selector, with a sort of load-balanced endpoint. Say we have the Node.js pod with many instances; we'll wrap it with a service, and the service also gets an IP, which is static, so we can address it throughout the lifecycle of the application. Architecture-wise, I won't dive into it, but it's pretty similar: you have the manager node and you have the minions, which are hosts with agents running on them that do the bidding of the manager.

This is what a replication controller looks like. For instance, we specify replicas as five: I want five instances of this pod at all times. In this example the pod is a MongoDB replica set instance, with some administration container running alongside, maybe to dynamically reconfigure the cluster as members get added and removed. A service, similarly: you'd say, I want a service for all pods that have the label type nodejs; when a request comes in on port 80, forward it to one of the instances on port 8080.

Okay, so we won't look into the files themselves, but as we said, we need to start config servers. So we do a quick kubectl create for the controller, and we'll create a service, because we want a static IP for these config servers — the router needs to address them at all times with the same address. We'll do a similar process for the router: we'll start a service, and the service in this case will be used by the Node.js application. So it's somewhat easy to model this step; nothing very new to say about it. Then we create the replica set itself. In this case, we can model each replica set as a whole in a controller — let's say we want five replicas. After we create the controller, we wait for all the instances to start, and then we can extract the IPs. Again, we need to manually, in one place, initiate the replica set and so forth.

Regarding healing, that's something provided by Kubernetes itself: once we've wrapped something in a replication controller, if it breaks down, it simply gets rescheduled on another minion. Not much to say about continuous deployment besides that it's pretty simple: if you want to switch between versions, you can start a new controller with zero instances and move pretty easily from one to the other. And if something breaks down in the middle, you can just say, okay, forget the new controller, I want to roll back to the old one with the number of instances it had.

So let's tie things up with Kubernetes. Regarding auto-healing, you saw it's a pretty simple process; you just wrap things in a replication controller. For load balancing, I actually did nothing: it was just a matter of telling Kubernetes to load balance a certain service, and it just works. Scaling, you saw, is a simple matter if your application can handle it: you just increase or decrease the number of instances and it just works. We didn't say anything about placement, in comparison to Swarm; that's because it's just not implemented yet. I saw that they're working on it, so it will probably get there at some point. These are pretty new tools.
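To ground those two descriptors, here's a minimal sketch of the controller and the service just described, written against the v1 API for readability; all names, labels and images are hypothetical.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mongo-rs0
spec:
  replicas: 5                # keep five instances of this pod alive at all times
  selector:
    type: mongo-rs0
  template:
    metadata:
      labels:
        type: mongo-rs0
    spec:
      containers:
      - name: mongod
        image: mongo:3.0
        command: ["mongod", "--replSet", "rs0"]
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs
spec:
  selector:
    type: nodejs             # load-balance across all pods labeled type=nodejs
  ports:
  - port: 80                 # requests to the stable service IP on 80...
    targetPort: 8080         # ...are forwarded to 8080 on one of the pods
```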
And with regards to orchestrating stateful services, as we just saw, there are many steps. It's hard to automate and orchestrate such a process, so you need to do things on your own; Kubernetes itself does very little to help you in that regard. And with that, I will pass the stage back to you.

Thanks very much, Dan. All right. So this part is about pure-play orchestration. Terraform — anyone using it here? There was someone here. Yeah, there is one. Cool. Two. Okay, great. So Terraform is a relatively new tool by HashiCorp. It's a simple tool, and a good one. Basically, it's a command line tool that you feed your plan, your configuration, into, and that command line tool executes the configuration for you from the laptop you're running it on. There are no servers or things like that.

It has a few basic terms. There's the notion of a resource; if you've worked with Vagrant, it's pretty similar. A resource is anything you can create on the IaaS layer, like a VM, a network, a subnet or things like that. There's pretty extensive support for OpenStack resources, which is fairly nice. There are providers. Providers are implementations, really the code that implements resource creation; there's a provider for OpenStack, covering Nova and Networking, for example. And there are provisioners — again, very similar to Vagrant or to Packer. A provisioner is something that runs after a resource has been created. That can be a script, a file copy, a Chef run or things like that. Terraform also has very nice support for modules. Modules are basically a way to compose configurations — like nested stacks in Heat, for example, or substitutions in TOSCA — so you can model individual pieces of your stack and share them between configurations. And there are variables and outputs, which are analogous to Heat's inputs and outputs, or to TOSCA's inputs and outputs, and basically let you interact with runtime-calculated values.

This is a sample configuration. You can see here, for example, this is the type of the resource, and this is the name of the resource within your configuration, plus a bunch of configuration around that. It's intentionally not really JSON — they want it to be human readable — but you can also feed pure JSON into Terraform and it will know how to digest it. There are a few examples here, all kind of similar: the resource type, the resource name, and the parameters. And then there's a nice thing here that allows you to express dependencies. It's a kind of expression language, an interpolation language, with which I can express, for example, that this subnet's network is the resource of type networking network v2 with the resource name that we created over here — and this is the ID, a runtime-calculated value. That's how I create a dependency between elements in my configuration.

The way we modeled the Mongo application is with a single top-level configuration file. There's a link here — we're going to share the presentation, and you can access it online; it's on GitHub. It creates all the infrastructure for the application that I mentioned, like subnets and routers and networks and so on. We did actually have a module here: we modeled a Mongo shard as a separate module within Terraform. One thing to note is that there's still no easy way in Terraform to say, I want x instances of that entire module. So you basically have to copy-paste it.
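As a rough sketch of that interpolation syntax and of instantiating the shard module: the resource types follow the OpenStack provider of the time, but the shard module and its variables are hypothetical.

```hcl
resource "openstack_networking_network_v2" "mongo_net" {
  name = "mongo-net"
}

resource "openstack_networking_subnet_v2" "mongo_subnet" {
  # Interpolating the network's runtime-computed id both wires the value in
  # and tells Terraform to create the network before the subnet.
  network_id = "${openstack_networking_network_v2.mongo_net.id}"
  cidr       = "10.0.0.0/24"
  ip_version = 4
}

# One shard instance; each additional shard needs a copy-pasted block like this.
module "shard_1" {
  source    = "./mongo-shard"
  subnet_id = "${openstack_networking_subnet_v2.mongo_subnet.id}"
}
```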
That means that for every shard you want to create, you need another copy of that section in your configuration file. That's something, I guess, they're working on. And we're still left with the hard issues of master assignment and shard registration, which I mentioned before. There's no kind of cluster-wide operation to invoke provisioners in Terraform, so you can resort to a number of approaches. What the HashiCorp guys seem to be encouraging — this also comes up on the mailing list — is to use Consul. Consul is another tool by HashiCorp, similar to etcd or ZooKeeper, which you can use to coordinate multiple instances through locking and acquire-and-wait semantics. You can do that with scripts, using provisioners. You can also use static allocation, separating the master from the slaves within the configuration, so you have a separate entry for the master and a separate one for the slaves. That can work, but it's not really good, because these things are dynamic and change over time, so the configuration becomes stale as your cluster evolves. The third way is to use the local-exec mechanism, but this also requires you to use certain tricks on the local machine, like file locking, to orchestrate things properly.

So if I have to summarize Terraform: the pros are that it's infrastructure- and framework-neutral. You can do pretty much anything with it; if something doesn't exist, you can write a provisioner or a resource provider yourself, which is pretty easy if you know Go. It has very solid support for OpenStack, which is, I guess, why everyone's here. It's very simple to install, very elegant. One of the things I really liked about Terraform is that it has a way of planning things. You don't just create a plan and apply it; you can do a preliminary planning step, which gives you a list of all the things that are going to happen, so you can see exactly what is going to be applied before you do it. In the same way, if you change a configuration, Terraform knows how to calculate what needs to be changed in your infrastructure and presents that as a plan for you, in a very elegant way. On the other hand, configurations are not portable across providers. If you use Kubernetes or TOSCA, you can take them to any environment, because those tools work anywhere; they're not specific to OpenStack, for example. With Terraform, you need to create a separate configuration for OpenStack and a separate configuration for Amazon, and if that's a concern, you need to take it into consideration. It's also kind of hard to model non-infrastructure components — as we saw before, things that are not symmetric by nature. That takes a little effort and makes you go outside the tool. And all the operations are done in the context of a single resource instance: if you want to run something, you can only run it in the context of a single VM, a single network and so on. So that's Terraform. I'm not going to cover Heat. Thanks.

Thanks, Uri and Dan. So TOSCA is really a neutral specification language for defining blueprints. As opposed to the other tools, it is a language, which means that I can create types. It's fairly extensible: I can inherit, I can create interfaces and so forth, and it has a lot of language characteristics that come with that. The other important thing is that it tends to look at the stack in a holistic way.
So it deals with infrastructure in a similar way to the way it deals with the software aspects of the stack. Cloudify is an implementation of TOSCA, and I think Cloudify's unique position comes from two main characteristics versus the other tools. The first is that it takes an integration approach to orchestration. An integration approach means that it doesn't assume it owns the tools that it manages; it gives you a large set of plugins that enable you to plug in configuration management tools as well as infrastructure tools, whether it's OpenStack, VMware and so forth. The other thing that is pretty unique is that, as opposed to the other tools, it focuses a lot on the production side, the post-deployment side. So it has built-in monitoring, built-in logging, alerting, and also policy management and workflows, which I'll talk about in a second. Those are the two main things to take away.

Another thing to note about containers and TOSCA — this is still evolving; this is actually a suggested spec for containers — is that because TOSCA is abstract, it also gives you a way to abstract the actual software from the individual containers. And we all know that there will be more than one container technology in the world, so you could actually use TOSCA to be portable across containers as well, not just across cloud infrastructure. That's another interesting benefit.

Now, looking at our exercise of setting up MongoDB and Node.js, the way we modeled it in the TOSCA blueprint is as follows. I had one blueprint to manage MongoDB, and in that blueprint I have three kinds of nodes. One node sets up the replica set; it's marked scalable, which means that for that replica set node I can define multiple instances. Then I have a node that depends on it, which is the initialization node. The initialization node is interesting: it's a logical node. A node in TOSCA doesn't have to be a physical node — it doesn't have to be a machine or a network. It could be just a piece of software, or a process that runs and ends, and it can still be defined as a node. So in this case I can define a logical node that depends on the replica set, which means that it will start only when the replica set members are ready, and that's how I can run the initialization without external tool intervention, unlike in the other tools. That's an interesting concept. Similarly, I can do the same thing for mongos: I can have an initialization node that triggers the initialization process after the mongos cluster is up.

The main takeaway from this kind of modeling is that I can automate the process almost from start to end without too many breaks in the process. So if I measure it from an automation-experience perspective: because of that strong relationship model, it's much easier for me to model the application so that I can create one blueprint and then say, give me the entire thing, without doing any steps in the middle, like calling scripts or external tools.

Let's see a few snippets of the blueprint itself. This, for example, is the way I can define an infrastructure element. I won't go too much into the details, but in the same way that I define an infrastructure element like a security group, I can define a whole cluster of databases. In this case, it's a MongoDB shard.
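A rough sketch of what such a shard node might look like, blending TOSCA and Cloudify DSL conventions; the type name and property values here are hypothetical, and the grammar is simplified.

```yaml
node_templates:
  mongo_shard:
    type: mongo.nodes.Shard
    directives: [substitutable]    # internals are bound dynamically per deployment
    capabilities:
      scalable:
        properties:
          default_instances: 3     # three shards, each bringing its own replica set
```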
Note that I'm using something called substitutable. Substitutable is the way I can create nested types — nested or dynamic binding, as it's called — which means that at runtime I can decide that a certain component, a certain node among the elements, has sub-elements that are dynamic. For each application deployment, I can substitute it with another sub-element — say, a different MongoDB server configuration or a different replica set configuration. In this specific case, we have a substitutable element for the replica set, which means that every time I create a MongoDB instance — sorry, a MongoDB shard — it will automatically create a replica set with it. So by defining multiple shards, I'm also creating a replica set per shard.

The other thing I wanted to jump into is the relationship concept, which in TOSCA takes the very strong form of requirements. For example, in the HAProxy node, the way I express the dependency on the Node.js component is by expressing requirements and specifying the members, the Node.js instances in the nodecellar application. Note that the relationship is implicit, and it will trigger a call to a certain script or flow that will, at the end of the day, inject the IP addresses of all the elements in the cluster into the HAProxy. In my view, that's more elegant, even though it's implicit.

The last concept to discuss is the workflow. A workflow is really a way to interact with the cluster after it's been deployed, and the reason I think it's powerful is that a lot of the work actually happens after the cluster is deployed. A workflow gives me a way to execute scripts on that cluster with the context of the graph, which means I can execute different types of operations on any element in the graph; workflows are basically scripted in Python. The built-in install workflow, for example, basically reads the graph, figures out the dependencies in it, and then calls the lifecycle events on each node in the graph. That's how I can run an installation with the dependencies in it, and it can be fairly implicit, meaning generic to any workload, to any type of graph. Similarly, we have uninstall, heal, scale and so forth. Discovery is done through navigation in the graph: once I have the context of the graph, I can navigate through it, and that's how I can discover elements, including external elements from other blueprints — I won't discuss that right now — and it can be remote or local.

So to sum up the TOSCA part: the main benefit is that it's portable, and it has a more holistic view of how it deals with the application, with the orchestration task, as I mentioned. In the robot analogy, the same robot can build more of the stack, rather than doing one small step, which would mean you'd need more robots to fill in the gaps. The main benefit that Cloudify brings as an implementation — and that's more implementation-specific — is all the workflow and monitoring, the post-deployment side, beyond the configuration aspect. On the con side, I think the thing that is apparent to everyone is that the spec itself is still evolving, and Cloudify itself is still not 100% compliant with that spec even in its current state. So we're evolving ourselves, and it's a moving target, like the entire industry. Obviously, the set of tooling, given the level of maturity, is also limited.
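To illustrate the requirements concept from a moment ago, here's a rough sketch; the type and template names are hypothetical and the grammar is simplified.

```yaml
haproxy:
  type: nodecellar.nodes.HAProxy
  requirements:
    # implicit relationship: establishing it triggers a script that injects
    # each member's IP address into the HAProxy configuration
    - member: nodejs_app
```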
Then I wanted to move over to Heat, and that's going to be shorter, because most people are probably familiar with Heat, and the concepts are very similar to TOSCA — only that it's specific to OpenStack. That's really the main difference. Even if there are gaps between TOSCA and Heat right now, I believe those gaps will be filled over time, so if your world is OpenStack and only OpenStack, Heat will provide fairly comparable features.

Just to give those who are less familiar with it a quick introduction: it's basically a way to model all the nodes and elements in OpenStack in a scripted language, called HOT, with concepts that are very similar to TOSCA's. From an architecture perspective, it's basically a REST gateway that can parse those blueprints and interact with all the underlying infrastructure — that's a simple way to describe it — whether it's Nova, Neutron and so forth.

From a solution perspective, the way we modeled the blueprint in this specific case — because Heat right now has a less strong relationship model; again, these are gaps that are being filled, and it's probably also partly my lack of knowledge, not being an expert in Heat — is as a few different blueprints that I need to execute separately, or stacks as they are called in Heat. The first stack is the Mongo replica set, and once I've set up the replica set, the output gives me the list of IP addresses of that replica set. Then I need to call a script that picks up that output, picks one of the IP addresses, SSHes to that machine using an SSH key, and executes the initialization of the replica set. It's similar on the mongos side, only that because Node.js is dependent on mongos, the dependency between mongos and Node.js is more complex: it's not just waiting for mongos to start. It's waiting for mongos to start, then someone needs to initialize it, and only then can you start Node.js. So the idea here is that a relationship can be a fairly complex thing, and not just "start a machine and run something on it."

Note that the YAML file resembles the TOSCA syntax a lot, though the types are slightly different. In this case, I basically defined a security group, which is an infrastructure element, a server, and a resource group, which lets me create more than one instance of the same definition — in this case, the replica set instances. I think that's the main element I wanted to spend time on.

So if I summarize it: for OpenStack users, Heat will always come fully updated with all the types and mappings of the OpenStack resources, and therefore it will be very good for setting up infrastructure. It has some software configuration elements, including integration with Chef, which give it some capability to also do software configuration; right now that's not a strong capability. On the con side, the same benefit is also a disadvantage: it's limited to OpenStack, so if my world is not just OpenStack, that's a limit. Also, the post-deployment, production side requires integration with external tools, whether from the OpenStack ecosystem or from the outside world. It's still an integration process that you have to go through, and it's less trivial.
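A minimal sketch of what the replica-set stack just described might look like in HOT; the image, flavor and port values are hypothetical.

```yaml
heat_template_version: 2014-10-16

resources:
  mongo_security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 27017
          port_range_max: 27019

  replica_set:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3                       # one server per replica-set member
      resource_def:
        type: OS::Nova::Server
        properties:
          image: ubuntu-14.04
          flavor: m1.small

outputs:
  replica_set_ips:
    description: member IPs consumed by the external initialization script
    value: { get_attr: [replica_set, first_address] }
```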
With that, I wanted to look at the entire comparison in a different way. So far we've looked at the tools as alternatives to one another, and basically what we tried to do is show how each of the tools deals with the problem of taking an application and getting it into production. But as we've seen, even at this summit, there are also synergies between the tools. Even though there is overlap between the tools, there are synergies, and I think the announcement at this summit around Magnum is a good indication of that: it makes sense, for example, to set up the infrastructure for Kubernetes through Heat, then run Kubernetes for the container-only workload and Heat for the rest of the workload. So there are synergies here. Similarly, by the way, Cloudify and Kubernetes could live very nicely together; Cloudify and Docker are already integrated — that's part of the announcement that we made — and there is also some level of synergy between Cloudify and Heat. The main takeaway from this slide is that the tools are not necessarily just alternatives to one another, because they have different sweet spots. Depending on the sweet spot, they can complement one another: one will do a good job in a certain area and the other will do a good job in another area, and that's where the synergy comes in.

So this is the bottom line, right? How do I choose the right tool for the job? Based on what you've seen, there is more than one option to deal with the problem, and very different approaches. If my world is microservices and containers only, Kubernetes was built for that. It will do microservices; it comes with a fairly opinionated way of structuring your application for that specific task, and therefore it will do a very good job there. If your world is more heterogeneous — where you don't necessarily know that it's going to be just OpenStack or just microservices, and your world is a combination of legacy, microservices and new other stuff that doesn't fit — then you need more of a pure-play approach. If your world is only OpenStack and you have a lot of infrastructure setup, then obviously things like Heat come into play. Those are the main questions you need to ask yourself to arrive at an answer about which tool is good for your specific workload. We can go even deeper into those questions — is the workload more stateful or less stateful? — and that will also lead you to different selections, but the bulk of it is really: am I container-centric? Do I believe in heterogeneity? If my environment is heterogeneous, that will give me a different set of tools.

With that, I wanted to conclude with the following words. If you follow the OpenStack Summit every year — and I did it for the 11th time this time; I'm going to get to bar mitzvah soon, working on it — the main thing you'll notice is that every year there was a new next thing. Two years ago it was Puppet and Chef, then it was Ansible and SaltStack. This year it was Docker, and I can assure you of one thing: next year it will probably be something else. That's the only constant I've seen, and that's the reality we live in.
And again, if I take Darwin's approach to it: it's not the strongest species that survives, it's the one that is most adaptable to change. And speaking of change, change is on its way; it's coming, and it will continue to come. Just look at the announcement of Kubernetes with CoreOS. There's going to be some dynamics there between what Docker is trying to do, going up the stack, and Google and other companies feeling threatened by that — and obviously they would never let one company control their destiny. So there's going to be some movement there; there's not going to be one container, or one company, if you like, driving that, even though right now it seems to be the case. It's inevitable that this will change. Similarly, we've seen movement in the OpenStack community towards TOSCA, and much more openness within OpenStack towards collaboration and integration with Google and other clouds. So there's a lot of change coming down the road.

And with that, I wanted to thank my colleague here, Dan, who put a lot of hard work into this — a lot of nights were actually spent on it. It was an interesting exercise for us as well. Uri obviously jumped into the challenge gladly. And now we'll let you ask questions. Any questions? No questions? So everything is clear. Everything is clear.

So, you talked a lot about TOSCA, but can you compare and contrast that with using an IETF standard, like a YANG model type system? Okay, that's a very good question. The question is to compare TOSCA to YANG, for those who are not familiar with YANG — it's spelled Y-A-N-G. It's a specification that is mostly popular with carriers for the configuration of network services. We actually find that there is a lot of synergy, in terms of integration, between TOSCA and YANG, and that's something we even announced at this summit: YANG specifically can do a very good job on network configuration, but less of a good job on the software stack and the application topology, and therefore you can actually integrate the two. You could use TOSCA to create the infrastructure, to create the network elements, and then call YANG to configure those elements based on the YANG templates that most people already have. So that's a different sweet spot: even if there is overlap, there are different sweet spots, and therefore they can be more synergetic with one another than alternatives to one another. So next year we'll see a little more YANG in there? What's that? We'll see some YANG in your presentation next year. Well, actually, yes, that's a good point. Point taken.