Now, let's start. Let me first introduce the talk I'll be giving. It's all about an orchestration framework called aiorchestra, which is based on Python 3.5 and native OpenStack bindings. This talk is about the challenges we've faced building an asynchronous orchestration for OpenStack and for other types of clouds.

Let me introduce myself. My name is Denys Makogon. I'm a software enthusiast. I mostly do open source, and a bit of corporate open source as well.

So, how many frameworks do you use in your work? As of this moment, I know of 35 frameworks that do orchestration, even just for OpenStack. That's 35 different tools across different languages, and six of them are designed specifically for Python. Today we're going to talk a bit about TOSCA, the Topology and Orchestration Specification for Cloud Applications, and about aiorchestra and how it implements the TOSCA spec.

A bit of orchestration history. Starting from the point when Amazon released CloudFormation, templates became the mainstream way of building an orchestration platform. As you know, Heat was inspired by Amazon CloudFormation, and its authors invented their own spec, called HOT. A while after that, Terraform appeared, made by HashiCorp. And now we have Ansible, which is capable of doing not only software orchestration but also cloud orchestration, starting, I guess, with the latest release.

TOSCA itself is a more-than-generic spec, so we can do a lot of things with it. We can orchestrate clouds in a uniform manner: we can do hybrid cloud orchestration and multi-cloud orchestration within a single template. We can also do software orchestration, whether it's Ansible, Docker Compose, shell scripts, Puppet, Chef, or whatever tool you use, as long as it has a Python binding.

So what is TOSCA, and why have you heard so much about it? TOSCA itself represents a template. The original spec was written for XML, and the latest version, the Simple Profile, allows you to write a template using YAML. The core components of a TOSCA template are these. First, imports, which define which plugins you want to use; basically, you specify which types you inherit. You can use types for Amazon, types for OpenStack, or whatever. Second, types: an abstraction that lets you define the types that will subsequently be implemented within the template. And third, the entities that will actually be executed: the topology template itself.

TOSCA provides capabilities, which define how your resource will be executed and treated by the engine. We also have artifacts. TOSCA treats images, scripts, files, everything that has a file representation, as artifacts. In the TOSCA spec there is an example where you bind an image to a virtual machine node by specifying a specific artifact for that virtual machine. Each node also has requirements: a requirement is a definition of which node should be provisioned before the current node is provisioned. And the core component of TOSCA types is the node. A node represents a specific entity of an orchestration, whether that's a virtual machine, an image, a Cinder volume, a network port, or whatever else you define as a node.
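To make that concrete, here is a minimal sketch of what a TOSCA Simple Profile template with an import, an artifact, and a requirement might look like, loaded from Python; the type names and file names are illustrative placeholders, not taken from a real aiorchestra blueprint.

```python
import yaml  # pip install PyYAML

# An illustrative TOSCA Simple Profile template: imports pull in plugin
# types, the artifact binds an image file to the server node, and the
# requirement says my_network must be provisioned before my_server.
template = yaml.safe_load("""
tosca_definitions_version: tosca_simple_yaml_1_0

imports:
  - openstack_types.yaml           # which plugin types we inherit

topology_template:
  node_templates:
    my_network:
      type: tosca.nodes.network.Network

    my_server:
      type: tosca.nodes.Compute
      artifacts:
        my_image:                  # file-backed entities are artifacts
          file: ubuntu-16.04.qcow2
      requirements:
        - network: my_network      # dependency, not an order of execution
""")

print(sorted(template['topology_template']['node_templates']))
```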
In order to define the connection between two nodes, TOSCA provides relationships. Basically, TOSCA allows you to bind two nodes with a specific relationship: one node depends on another, or is related to it in some way. This type of relationship tells the orchestration engine how to treat those nodes: can they be provisioned separately, or must they be provisioned one after the other?

So each node carries a description of its dependencies. You have capabilities, which allow the engine to decide how a node can be treated. For example, a virtual machine has a capability called linkable; it means the virtual machine node can have a link to a port. So basically you're telling the engine how to connect a port and a virtual machine. A node also has requirements, which tell the engine explicitly which nodes should be provisioned before this node is.

Each node has different kinds of parameters and attributes. Properties are defined within the template; you can use different functions there, which I'll be talking about later. Each node has attributes: immutable parameters of the node that become available once it is provisioned. And each node also has runtime attributes, which are dynamic: they are evaluated during provisioning. For example, in OpenStack you don't have a token before you get actual authorization, so the token is a runtime property.

The topology template itself consists of three groups: inputs, node templates, and outputs. Inputs let you parameterize the template from outside. Node templates are implementations of the specific types you bring in via imports. And outputs are similar to Heat stack outputs.

Similar to other orchestration platforms, you can configure and parameterize your template as much as you want. For example, we have an OpenStack authorization node. It accepts different parameters, and there are some interesting functions here; actually, there are a few of them. One function just gets inputs: it binds your template inputs to a node definition. Then we have functions that work with properties and attributes. As I already said, outputs are very configurable, and in the end you get back a dictionary, a mapping, that you define inside your template.

So, as I mentioned, we have four types of functions: a property retriever, an input retriever, an attribute retriever, and concatenation. These are the four functions that TOSCA's Simple Profile defines in the spec.

When we started working on aiorchestra, we tried to figure out whether there were any existing tools designed to work with TOSCA. There is tosca-parser, built by the Tacker team, which eventually ended up under the Big Tent. There is ARIA, started by GigaSpaces as an extensible framework for TOSCA parsing. There is the parser inside Cloudify, but it's too specific to Cloudify itself. And there is a parser written in Go, called toscalib. Basically, they do what they do: they parse. But none of them actually gives you something executable, and what's the value of having a template if you can't run it?

So what is aiorchestra? aiorchestra is a Python framework built with Python 3.5. It has three parts: you have an engine, you have plugins, and you have persistency.
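Before going further, here is a toy illustration of those four intrinsic functions at work. The function names (get_input, get_property, get_attribute, concat) come from the TOSCA Simple Profile spec; the resolver itself is just a sketch, not aiorchestra's actual implementation.

```python
# A toy resolver for the four TOSCA Simple Profile intrinsic functions.
def resolve(value, inputs, nodes):
    """Recursively evaluate TOSCA intrinsic functions in a template value."""
    if isinstance(value, dict) and len(value) == 1:
        (fn, args), = value.items()
        if fn == 'get_input':
            return inputs[args]                      # template input lookup
        if fn == 'get_property':
            node, prop = args
            return nodes[node]['properties'][prop]   # static template property
        if fn == 'get_attribute':
            node, attr = args
            return nodes[node]['attributes'][attr]   # runtime attribute
        if fn == 'concat':
            return ''.join(str(resolve(a, inputs, nodes)) for a in args)
    return value                                     # plain value, no function

nodes = {'server': {'properties': {'flavor': 'm1.small'},
                    'attributes': {'ip': '10.0.0.4'}}}
url = {'concat': ['http://', {'get_attribute': ['server', 'ip']}, ':8080']}
print(resolve(url, {}, nodes))   # -> http://10.0.0.4:8080
```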
Sorry. So, how does aiorchestra address TOSCA? When you have a template, you parse it into a graph, and aiorchestra turns that unordered graph into an ordered one, defining a sequence of execution for each node. This is how a TOSCA template actually looks: you have no order, only links and dependencies.

As Confucius said, the man who moves a mountain begins by carrying away small stones. What does that actually mean here? From the unordered graph, aiorchestra builds a sequential graph in which the root nodes have no dependencies, and every subsequent node depends only on nodes that have already been handled. That doesn't mean the graph has only one node without dependencies; it's quite possible you'll have a bunch of them. So you get a set of nodes with no dependencies, and they are executed at the start; once those nodes are provisioned, the next round of provisioning takes place.

In aiorchestra, each node is just a definition of a set of coroutines that define its lifecycle events. The TOSCA spec defines several lifecycle events, create, configure, start, stop, and delete, but aiorchestra doesn't force you to implement all of them; you only need two: create and delete. For some cases you don't need configure, and you don't have to have stop. For example, if you have an authorization node, how would you stop an authorization? It just doesn't make sense for certain types of nodes.

Coming from the template node definitions to actual code, we have dynamic class construction, which builds a node class out of predefined methods that are coroutines. As you can see, there are two events defined here: create and delete. In the implementation section you have a function path that aiorchestra uses to retrieve the function definition. Basically, you know, in Python everything is an object, so the engine can just import it and execute it with the specific parameters given in the inputs section.

The same thing works for relationships. A relationship is defined with two events, two interfaces, I mean, sorry: link and unlink. And each implementation follows the same pattern as regular nodes. So the whole lifecycle of a node consists of quite a few events: four for provisioning and lifecycle management, and at least two for managing dependencies. For example, if node A has a link to node B, node B will be provisioned first, then linked to node A, and then node A will be provisioned. And everything that happens in the system is code.

So why are coroutines so important in our template management? Since TOSCA does not provide an ordered graph, the main task for aiorchestra is to build a sequential graph and convert that sequential graph of nodes into a specific list of coroutines to execute. This is the one benefit that allows you to run provisioning step by step: you await on each coroutine. You get a set of nodes for one step that will be awaited; then, if you want to check some policies, you can run your policy checks; then you await again and get the next nodes provisioned. You don't have to do anything manually.
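Here is a minimal sketch of that wave-style scheduling, assuming each node exposes create() and delete() coroutines and the graph maps each node name to the set of names it depends on; the policy hook and the rollback-on-failure are illustrative, not aiorchestra's real API.

```python
import asyncio

def ready(graph, done):
    """Nodes whose dependencies have all been provisioned already."""
    return [n for n, deps in graph.items() if n not in done and deps <= done]

async def provision(graph, nodes, policy_check=None):
    done, order = set(), []
    try:
        while len(done) < len(graph):
            wave = ready(graph, done)
            if not wave:
                raise RuntimeError('dependency cycle in template')
            # dependency-free nodes of a wave are provisioned concurrently
            await asyncio.gather(*(nodes[n].create() for n in wave))
            done.update(wave)
            order.extend(wave)
            if policy_check:          # optional check between awaited waves
                policy_check(done)
    except Exception:
        # roll back in reverse provisioning order, as discussed below
        for n in reversed(order):
            await nodes[n].delete()
        raise
```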
You can also see why the current design of aiorchestra is the best option for us. When we tried to figure out what kind of graph to use, the first idea was a graph of nested trees: you have a node with no dependencies, then you define node B as depending on node A, which has no dependencies at all, and so on. You end up with a lot of unordered trees, and that is something really hard to process and calculate. For example, if you have a template with at least 100 nodes to provision, you'd spend a lot of time just building the dictionary, building the graph, basically. So the first problem is calculation time. The second problem is that it's hard to roll back a deployment with that kind of graph, because building a reverse tree is not easy, and now imagine how many trees you'd have to rewind. The benefit of the current graph processing is that you have a sequence of tasks, and each task can be rolled back just by setting a specific flag.

So why coroutines? Why did we decide to make something asynchronous? Since we work with APIs, we talk to sockets, and since Python 3.5 we have the ability to use cooperative multitasking: basically, we get a context switch on each I/O-bound operation. Since we use the OpenStack bindings and their APIs, we decided to make it asynchronous, because, as I already said, the graph can contain a lot of nodes with no dependencies, so we can instantiate them without awaiting on other nodes.

The main problem right now is that aiorchestra is stateless, but deployments are stateful. If you just want to provision, you hit provision and you get a stack of resources. But if you want to manage the full lifecycle, you have to define your own persistency and take care of the state of a deployment yourself.

The mission of aiorchestra is to automate deployment for you. It's not a product; it's only an open source framework. It's not driven by any company; it's driven by enthusiasts. The main reason we made aiorchestra is that we didn't want yet another service in our clouds; we are full up with the infrastructure-as-a-service paradigm. That's why we decided not to follow Heat, not to follow Ansible, not to follow Cloudify. We decided to have our own tool for asynchronous orchestration, and to have it as a library, not a service.

So what are the problems with aiorchestra right now? As you know, only this fall OpenStack completely switched to Python 3.4. But there are two types of Python 3 compatibility: parser compatibility and feature compatibility. For example, right now OpenStack can run on Python 3.4, but it is still compatible with Python 2.7, which means you cannot use features from 3.5. The main feature I'm referring to is event loops based on the C implementation of the epoll mechanism. As for when that will change: we had a discussion with the teams working on the OpenStack SDK, and on the clients specifically, about enabling 3.5 asynchronous I/O-bound tasks, but it won't happen easily or quickly; it will take time, probably more than one release. Because we have Django, and Horizon is based on Django, which doesn't really support Python 3.5 and its asynchronous APIs well.
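To make that feature gap concrete: the async/await syntax aiorchestra is built on only parses on Python 3.5 and later, so a module using it cannot even be imported on 2.7 or 3.4. A toy illustration, with the endpoint URL as a placeholder:

```python
import asyncio

# 'async def' and 'await' are a SyntaxError on Python 2.7 and 3.4, so a
# module containing them cannot be imported by today's 2.7-compatible
# OpenStack services; the URL below is only a placeholder.
async def fetch_token(auth_url):
    await asyncio.sleep(0)  # stands in for an awaited HTTP call
    return 'fake-token from %s' % auth_url

loop = asyncio.get_event_loop()
print(loop.run_until_complete(fetch_token('http://keystone:5000/v3')))
```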
And yet, it's not 2020 yet, so why should we care about Python 3.5? That's the main problem for OpenStack right now.

As I already said, we have a benefit: you can roll back your deployment at any state. For example, if you've started a deployment that has some misconfiguration and you know it will fail, you can just await the full graph execution; you'll receive the deployment in a failed state, and you just say: roll it back. And that's all; all your resources get cleaned up.

For now, we have two plugins. One is for OpenStack, because we mostly work with OpenStack itself. And we have a chain plugin. For example, if you have a deployment with a lot of nodes in its implementation, it's not that easy to maintain. So we actually turned aiorchestra into its own plugin: aiorchestra can use its own API to create a node that represents a deployment. Basically, we do some sort of proxying, where each node creates a deployment from a template. Why did we do that? Because we do a lot of networking orchestration, and when we needed to build the infrastructure for, say, a virtual router working on top of OpenStack networking, we had to create a bunch of servers and a bunch of networks, and we needed to simplify that somehow. That's why we decided to have the chain plugin. It also lets us distribute work within the team: one team develops one blueprint, another team develops another, and finally we have a master blueprint that just inherits all of them. This is the main feature of aiorchestra and why it simplifies the development of new plugins.

We don't have a plugin API as such. The main thing we want from developers, and this requirement is actually checked by aiorchestra, is that each node lifecycle event must be a coroutine, because in Python you cannot await on a plain function.

What are the other requirements for plugins? They're not mandatory; they're just good development style. For example, in the OpenStack plugin we decided to separate the tasks containing the lifecycle events for nodes from the core API automation, which lives in another package. We did that because we want to let developers build plugins that have dependencies on other plugins. For example, if you want to build something more complex, say a multi-cloud plugin, you can just declare a dependency on the aiorchestra OpenStack plugin and on the aiorchestra AWS plugin, and then build your nodes from the actual API automation. That way you keep your code simple, and that's what really matters for development.

We also have two contrib libraries. One is persistency, made only as a reference project to show how you can handle the state of your deployment. For now, aiorchestra has two types of persistency: context persistency, which holds a serialized template and serialized inputs, and node persistency. So you have two different models that, after deserialization, can be combined into one working solution: you get a new context that can be executed. And we have an AsyncSSH plugin, based on the Python AsyncSSH library, which requires Python 3.5 or 3.6. It was made only for software configuration using bash scripts, nothing else. We've seen that there isn't much usage of it, so we decided not to keep supporting it.
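For flavor, here is roughly what such software configuration over AsyncSSH looks like; this is a sketch in the spirit of that contrib plugin rather than its actual code, and the host, user, and script path are placeholders.

```python
import asyncio
import asyncssh  # pip install asyncssh

# Run a local bash script on a provisioned node over SSH; placeholders
# throughout, illustrating the AsyncSSH pattern rather than the plugin.
async def configure(host, username, script_path):
    with open(script_path) as f:
        script = f.read()
    async with asyncssh.connect(host, username=username) as conn:
        result = await conn.run('bash -s', input=script, check=True)
        return result.stdout

loop = asyncio.get_event_loop()
print(loop.run_until_complete(configure('10.0.0.4', 'ubuntu', './setup.sh')))
```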
There is a stable version of it, but we haven't developed it further; we've been focused on developing the core part.

So, time for some action, as Action Man says. Here's what we wanted to have. As you can see, and as I already said, we're going to build a distributed router: we have an external network, we have a router, we have a management network shown in light blue, and we have a virtual machine that is connected to all three networks except the external one, to which it's connected indirectly through the router. The idea is that traffic from the virtual machine in the bottom-left corner gets routed through the node in the right corner.

So let me find the video. What we're going to see is just a script for the demo, which will be available right after I finish this talk. We have a function that runs the deployment, with a breakpoint before undeploy, in order to show you how your graph actually gets turned into a real environment, and then we run the uninstall. Let me scroll forward. It will take a couple of minutes to create the whole stack. Basically, we have three virtual machines, three networks, three subnets, three ports, a floating IP, and a router, and all of these resources are combined into a single deployment. You don't really need to care about all of these logs; they just show how your resources are being connected. Wherever you see a link entry, the link action is happening, and where you see start and stop operations, you can see the node names.

It will take some time, so let's just scroll forward a bit. We go to Horizon and open the network topology. Let's wait a bit. Yep, we're almost there. Yep. As you can see, we have what we wanted. This demo was made on the Mitaka release, but it still works with the latest OpenStack releases, since we don't pin our dependency on the OpenStack bindings to particular OpenStack releases, and OpenStack tries to maintain compatibility between releases, except the load balancer, for which we made a stub that works with both versions. So we have the router, we have the networks, we have the ports and the external ports. Yes, yes. Too long for me.

And since we have a breakpoint, we have, as you can see, a couple of attributes: a deployment context, rollback enabled, serialization enabled, and the template, which I'll show you in a minute. We're just going to hit the breakpoint here and then go forward. It takes almost the same amount of time to destroy the whole deployment, and you'll see there are concurrent tasks running when different computes are being stopped, started, and so on. Destroying compute instances always takes some time, but it will finish eventually. Yeah, it's deleting the last machine, I suppose. Yes, it should be the last one. No, this one's the last. Yep, and we're done.

So as you can see, aiorchestra is some sort of competitor to existing solutions. But if you're considering a framework instead of a service, it's a good starting point. Let's go back to the presentation. As I already said, our benefits are that we're fairly modern, we use Python 3.5, we're not yet another service with a REST API, and our plugin system doesn't require a special API; we just do coroutines. As for our limitations: we require Python 3.5.
And I know that a lot of the software built for OpenStack actually runs on Python 2.7, so that would be a problem, because we're using the new syntactic sugar for coroutines. The other main problem, for the OpenStack plugin specifically, is that the API we use from the OpenStack clients is synchronous. So each time you hit an API endpoint, the call blocks, and you have to wait until the response arrives before the context can switch.

So if you're someone who works with OpenStack, Amazon, or whatever, just consider at least taking a look at aiorchestra. If you're doing software orchestration, whether with Ansible, Chef, Puppet, Docker, or bash, you can write plugins for them to run; for bash you have AsyncSSH, or Paramiko, or Fabric, or whatever you want to use.

Questions, if you have any? OK, good. Thank you for attending this last talk; I know it's been kind of a long day. If you have a question, just email me. aiorchestra is completely open source: there's an organization on GitHub, and there's documentation on Read the Docs. Take a look at them; they're pretty nice, I would say. Thank you, guys, and have a safe flight home. Thank you.