Hello, my talk today will be about application portability and effective orchestration across various platforms. I know it's kind of hard to pick a talk right before lunch, so I will try to be as quick as possible to make it easier for you. But I won't let you leave before the talk is over, so let's make it through together, yes?

A little bit about me. I am Aleš Komárek, I am a cloud solution architect and engineer at tcp cloud, I am a technology enthusiast, and I am also the OpenStack-Salt project PTL. You have already heard about the OpenStack-Salt project from Jacob and Loki's presentation. It's a project aimed at installing the OpenStack platform with SaltStack. We recently added support for containers in our solution, so now we are able to orchestrate both virtual machine services and container-based microservices.

So, what is this presentation about? We as an IT industry are getting into serious problems lately, because there are conflicting demands on the industry. There is a rising share of cloud services and container microservices taking on the production workload, which can hardly be managed by traditional orchestration tools. Also, complex application stacks make use of multiple clouds, containers and advanced network function virtualization like virtual firewalls, virtual load balancers, virtual routers, which is now becoming an integral part of application stacks. So you cannot split apart the network functions and the software services. And when a service goes down, the impact on the business is getting more serious, because a growing number of enterprises now rely entirely on outsourced services for their computing. So they need to have strategies and tooling that will help them in case of a regional outage, when the connectivity goes down, or when there is, let's say, a cyber attack that brings down an entire city or makes a site inaccessible.
And also, one of the greatest challenges we are facing is that there is much more push for change from the market. The customers are very demanding; they want applications that fit their needs, and they want them now. So you cannot rely on a deployment cycle that takes four months. You have to have a deployment pipeline that takes days to bring a new feature from development to production. But if you do things wrong and your delivery pipeline is a little shaky, you can still push your stuff into production, but you don't want to know how it happens. If you do not have the proper tooling, you can end up getting the job done, but not in a preferable way.

So, to help us with these problems, we have orchestration tools. These tools help us in several areas within the lifecycle of our deployments. It doesn't matter whether the deployed application is an entire OpenStack or some application running within that OpenStack; the principles are the same. So we are able to reuse the same principles we use within the cloud even for the installation of the cloud itself. We use this in our OpenStack-Salt project. We focus not just on the installation of the application, but also on its operation and maintenance. You need to set up your monitoring and your log collection, so you know what's happening in your infrastructure and can react to it. You want your systems to be as autonomous as possible, so that human interaction from operators is not necessary in most cases.

So, if you want to add an orchestration tool into your application ecosystem, you need to realize two things. First, you need to align it with your existing configuration or orchestration tools. If you are already using something and you want to orchestrate it entirely, you need to check that the two solutions are compatible.
Then, depending on the workload you have, you can choose the proper orchestration. If you are running Windows VMs, you will choose a different form of orchestration than when you are running Linux machines. You also need to realize that traditional orchestration focuses mainly on bare metal and virtual machines, but the shift is towards using platforms and containers, as applications get decomposed and you have more services to run, which are mostly independent and can run without knowledge of the others.

So now I will talk briefly about the several options we have for orchestration. There are several large groups of orchestrators. The first is infrastructure-as-a-service orchestration. In this case, you are orchestrating the virtual resources you would normally find in a physical data center: virtual networks, network function virtualization, storage, servers, all these kinds of hardware resources. The example is OpenStack Heat, which originates from Amazon CloudFormation. These tools orchestrate the resources in a declarative way; I will talk about this a little later.

The next large family of orchestrators is platform orchestration. These tools focus on a single platform for some concrete programming language like Ruby or PHP, and they let the developers not care about the infrastructure or database schemas, as these are provided as a service in these environments. So you just leave the platform to the developers, and they can work with all the services provided for them.

Then you have container orchestration, for example Kubernetes or Mesos. These orchestrate multiple servers on which the containers run. One of the major advantages is that if one server fails, its containers can always be rescheduled to another host. Also, containers are prebuilt artifacts, so deployment time is minimal.
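To make the declarative, infrastructure-as-a-service style concrete, this is a rough sketch of what a Heat orchestration template looks like; the image and flavor names are placeholders I made up, not values from the talk:

```yaml
heat_template_version: 2015-10-15

description: Minimal declarative topology, one server on a private network

parameters:
  image:
    type: string
    default: ubuntu-16.04     # placeholder image name
  flavor:
    type: string
    default: m1.small         # placeholder flavor name

resources:
  app_net:
    type: OS::Neutron::Net

  app_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: app_net }
      cidr: 192.168.1.0/24

  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_resource: app_net }
```

Note there is no ordering logic here: you declare the network, the subnet and the server, and the engine works out the dependency graph and the steps itself.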
As you have seen, you can run an entire OpenStack from containers within a few minutes, compared to running it from virtual machines, which take considerably longer to configure and run. The scalability is also different: in a normal system you would have hundreds of VMs running, and compared to this you can have thousands of containers within your infrastructure.

The next family of orchestrators is software orchestration, more commonly known as configuration management. From this family, SaltStack, Puppet, Chef or Ansible are the most well-known examples. This orchestration provisions software resources, in the sense of software applications: it makes sure they get installed, configured and run in the correct order, so you have all the software services running on your infrastructure. This helps tame the tribal knowledge, as Jacob said in his presentation, and put it on paper, into some discrete process that can be repeated again and again, and improved in an auditable way. So we don't need the domain experts that much; their knowledge is put into the orchestration tool, which can repeat the steps over and over again.

That was a brief summary of the orchestration options we have for specific platforms. Now we'll talk a little bit more about how orchestrators do their job. There are two main approaches to orchestration: what to do, and how to do it. To make it a little clearer, the declarative approach says what to do. I have prepared a sample, and I hope the goulash is not too sensitive a topic right before lunch: we have a goulash dinner that is set on the table, and it has three to five dumplings according to the hunger. Yes, as you can see, this is a nice example of an auto-scaling topology which can react to your user demand and scale properly according to the need.
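To give the software-orchestration family a concrete shape before we go on: a SaltStack state is itself a declarative description; you declare the package installed and the service running, and Salt works out the steps. This is only a sketch, and the nginx example is mine, not from the talk:

```yaml
# /srv/salt/nginx/init.sls - declare the desired state, Salt enforces it
nginx:
  pkg.installed: []          # make sure the package is present
  service.running:           # ...and the service is up
    - enable: True           # start on boot as well
    - require:
      - pkg: nginx           # only start once the package is installed
```

Running `salt '*' state.apply nginx` repeatedly is safe: if the state already holds, Salt changes nothing.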
So the declarative approach is a representation of your infrastructure model which models each piece: I know I have some dumplings, I know I have some gravy, and I know I have it on a plate on my table. This approach provides some configuration management database functionality: it has items, resources with well-defined properties, and it can serve as a CMDB in some way.

On the other hand, you have the imperative approach, which says how to do it. So instead of saying there is goulash on the table, you give more direct orders: cut the dumplings into slices, cook the goulash, and serve it on the table with three dumplings, and I will take more if I am hungry. This is just a sequential flow of actions. You don't define your final state, but you define the steps that will lead you to the final state. And this is how you describe the various auto-* processes such as auto-healing and auto-scaling; all of these are imperative, just sets of steps which need to be followed.

Okay, so we are back to square one, because neither of these approaches alone works for the entire lifecycle of an application. For long-running application stacks, the declarative approach is appropriate: you define your desired topology, and from this model you then derive the monitoring, the log collection, the documentation, tailored to fit the exact model you have for your application. And on the other hand, you have the processes which are required to deliver change: the updates, the healing processes, things that are not long-running, but individual processes that are required to set up the infrastructure in the right order.

So the big question is: how do I choose the proper orchestration tool? Well, it pretty much depends on the application workload you have. If your infrastructure is container-only, you can use tools like Kubernetes or Mesos.
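The goulash example can be sketched in code. This toy reconciler is purely an illustration of the two styles, not any real orchestrator's API: the declarative part is the desired-state dictionary, and the imperative part is the list of steps computed from the difference:

```python
# Declarative: describe WHAT the final state should be.
desired = {"goulash": 1, "dumplings": 3}


def reconcile(current, desired):
    """Imperative: compute HOW to get from `current` to `desired`,
    as an ordered list of concrete steps."""
    steps = []
    for item, want in desired.items():
        have = current.get(item, 0)
        if have < want:
            steps.append(f"add {want - have} {item}")
        elif have > want:
            steps.append(f"remove {have - want} {item}")
    return steps


# The table currently has one goulash but only one dumpling.
current = {"goulash": 1, "dumplings": 1}
print(reconcile(current, desired))  # ['add 2 dumplings']
```

This is exactly the division of labour described above: the model states the topology, and the workflow engine derives and executes the steps.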
If you use virtual machines in your infrastructure, then you can go to OpenStack or AWS. But if your application gets distributed across various regions, countries, continents, or even multiple projects within your cloud, you need to go further and select a tool that does not rely on a single service provider. Sometimes even licensing may be an issue, so you need to keep some of your workload on bare metal. So you need to make sure that your orchestration tool covers all the pieces you need to define your application infrastructure. You need to choose your tool wisely, because if you don't, things can get a little unpredictable. Sometimes magic happens, and you end up with things you were not entirely expecting.

So for this reason, there is another large family of orchestrators: platform-agnostic orchestration, which does not care what resources it orchestrates, and actually reuses the other existing orchestration tools to enforce the states and the processes of the orchestrated resources. We can call this family the orchestration of orchestration. You reuse the various cloud platforms or container platforms together with the configuration management tools to solve all the lifecycle problems of the application stacks. To give examples, platform-agnostic orchestrators are Terraform from HashiCorp and Cloudify, and even Heat can in some way be seen as platform-agnostic, as it has support for multiple backends, not just OpenStack; with the proper plugins it can now orchestrate pretty much everything.

And so, is there any standard way to describe our topology and the processes? Well, what would be the point of this slide if there wasn't? Yes, of course there is. It's called TOSCA. TOSCA is an abbreviation.
It stands for Topology and Orchestration Specification for Cloud Applications, a rather long name, but it's a standard in which you can define both aspects of your infrastructure: the topology as well as the processes. It's a domain-specific language; originally it was XML, but now it's just plain YAML. It pretty much looks like Heat orchestration templates, but it has more power to express your software resources as well as your virtual infrastructure. You can use it to describe container microservices, virtual machines, the services around the virtual machines, all the infrastructure elements. And for the description of your stacks, you can even add build artifacts alongside the textual description: binaries, images of VMs or of containers. These are all tied to the model, to the topology and the processes that drive it.

Okay, so this is the architecture of such an orchestration tool. You need all these services to make sure that your infrastructure is operational in the way you expect. At one place you have your model data, which is the topology you want to create. Then you have a workflow engine; the most basic workflow is to install the service. So you basically start with the model, then you run the first workflow, which will turn the topology into reality. When your application gets installed, it begins to send feedback data, so your orchestrator knows at all times what the state of your orchestrated service is. If we take our example of the goulash and the cooking, the orchestrator here is the cook, who makes the goulash, and the orchestrated service is the plate with the goulash. You are the consumer of the service, so the cook can always see whether there is enough on your plate, and when you run out, you get asked whether you want more or not, depending on your demand. And if you want more, the cook makes sure that you get your refill.
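For a feel of the YAML form, a minimal TOSCA topology might look roughly like this; the node names and property values are illustrative only, not taken from the talk:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

description: One web server hosted on one compute node

topology_template:
  node_templates:
    web_app:
      type: tosca.nodes.WebServer
      requirements:
        - host: app_host        # ties the software to the compute node below

    app_host:
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

The point of the standard is that nothing here names a concrete cloud: the same template can be handed to any TOSCA-capable orchestrator, which maps the abstract node types onto its own backend.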
So how does the orchestrator perform the long-term maintenance of all these systems? It always observes the orchestrated infrastructure and reacts if some policies are breached. My policy is: if I'm still hungry, I want more dumplings, and so it will happen.

So now let's talk about some of the major use cases for orchestration engines. One of the biggest problems they solve is creating complete CI/CD pipelines. In this case you need to create multiple environments for your applications: multiple development environments, then several staging environments where you test your new features, and then of course the production environment, one or more depending on your architecture. And you need to be able to create, destroy and work with these environments in a rapid fashion. So if a new developer joins your company, it should take at most a few hours to get up and running, not a few weeks or months. Also, you need to make sure your system is ready to respond to an outage or, let's say, some unexpected load peaks. So you need to feed it with the proper data from the system itself, so it knows when to react. We are using this approach to implement a cloud service brokerage platform, which does nothing more than put a UI in front of our orchestration engine, and it does the job.

Yes, so this is one of the last slides. It shows an example of the services a complete orchestrator has. You cannot utilize just one service; it's always a collection of multiple services, as in the Linux world: each service does its job and does it well, so you need more services, each doing its job well, to have the complete system up and running. For OpenStack-Salt, the orchestrator is SaltStack. We use reclass to store the data, Heka for the monitoring part, the gathering of the data, and Graphite and Elasticsearch for storing the data.
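The observe-and-react behaviour described above can be sketched as a simple control loop. The policy thresholds and the metric are made up for illustration; real engines express this as policy definitions, not hand-written code:

```python
def decide(replicas, load_per_replica, max_load=80, min_load=20,
           min_replicas=1, max_replicas=10):
    """Return the replica count the scaling policy asks for,
    given the current count and the observed load per replica."""
    if load_per_replica > max_load and replicas < max_replicas:
        return replicas + 1      # policy breached: scale out
    if load_per_replica < min_load and replicas > min_replicas:
        return replicas - 1      # mostly idle: scale in
    return replicas              # policy satisfied: do nothing


# The orchestrator runs this on every round of feedback data.
print(decide(replicas=2, load_per_replica=95))  # 3, scale out
print(decide(replicas=2, load_per_replica=50))  # 2, steady state
```

The key ingredient is the feedback data from the slide before: without the monitoring pipeline feeding the loop, the orchestrator has nothing to decide on.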
And compared to this, there is the ARIA TOSCA project, a recent one, which uses Cloudify as the orchestrator and uses the TOSCA language to describe the topology, plus some other services similar to ours that make sure the system is running in the right order.

Okay, now finally we have the demo. Okay, so I will try to... where do I click here? Right, let me go. So we have our demo. Wow. Well, I might have accidentally deleted it instead of deploying it. Shit. So I think the live demo is kind of dead today. Let's skip this part and head right to the questions, I think. I will skip the boring part. Well, so much for the preparations. So let's skip to the next slide, which is questions. So, do you have a question? No questions? Can you pass the mic, please?

The question is: for this kind of orchestration, I'm also hearing about YANG as a data description language. So can you highlight the benefits of TOSCA over YANG for this modelling? What is YANG? YANG is there to define the network functionality, while TOSCA is more for defining service-oriented architectures. These two can work together: you can model your stuff in TOSCA, and then there are tools which will translate or transform the definitions from TOSCA to YANG. Yes, so these can coexist together within one solution. Okay, thank you. So, is it lunch time?

Aleš, can you please compare TOSCA with, for example, the compose files for Docker or for Kubernetes, or even with the Heat stuff? What are the features? What is promising for the future? TOSCA is just a way to model things. It doesn't care whether it's a VM or a microservice; it's completely technology agnostic, just a basic set of rules to describe virtually anything. You have some guidelines on how to use it, but the implementation is up to you, so you can use it to define your Docker Compose files or your Heat templates. It provides the same functionality, which can then be used by other orchestration tools.
So, TOSCA is a standard being led by a committee. It seems to be not necessarily as fast-moving as many of the technologies in this space. Do you think it's going to be able to keep up? Well, TOSCA is just a specification, not an orchestrator; it's just the way you define your application stacks, both from the topology angle and the process view. So you can reuse this data from any software you want, and I think it's a standard which is evolving. The new version is coming this summer, which solves some metadata and policy issues, things that were not covered well in the current standard. So I think it has a good place on the market, because it's a standard that the major players agreed upon, and it can be implemented by anyone. You just have your application described in this language, and there is a good chance that you will be able to take these definitions and move them across orchestration providers in the future, as support for this specification, for reading and parsing these definitions, is getting wider. So you have more and more tools that support reading TOSCA topologies, and I think this will keep growing, as the standard is alive and actively evolving. So thank you, and have a good lunch.