 We're live? All right, great. Well, welcome, everybody, to the Big Data Barbecue. My name is Michael McEwen, and I'm a developer with Red Hat. And I'm joined today by Nikita Konovalov, who's a developer with Mirantis. And we're going to talk about building cloud applications. And just a warning, there will be no barbecue today. But hopefully, you'll be able to build some tasty applications when this is done. So a couple caveats. We're going to show some technology today. And we're not trying to make a blessing of one form over another. We're just going to show you what we've been working with and what we've been using to make these applications. Also, some of the code samples you'll see may be using experimental features or master branch upstream code. And just be careful when implementing this in production. Like, the examples we're going to show may not necessarily be ready for production. So with that said, what's the problem we're trying to address here today? You'd like to create big data applications that live on top of OpenStack. And these applications will most likely need to use cloud resources that exist within your stack, storage, messaging, those type of things. And you're going to need to iterate on these applications quite frequently. So given that they'll need to be deployed and redeployed, you don't want to have to bother the infra team to continually upgrade your infrastructure. So how do we solve this? So I like to look at this as building cloud applications. And you hear a lot of talk about this. And one of the big differences in my mind between building a regular desktop application and building a cloud application is that a cloud application is not a singular entity. It may be a collection of applications that all have to interact with each other. And so to that extent, you want to create a resilient infrastructure between your applications so that if you need to upgrade one part over another, you won't bring down the entire application structure in doing so. And like I mentioned, you're going to want to be iterating on these applications quite rapidly. So as an OpenStack operator, you're going to want to make sure to enable your developers to have enough power and privilege to continually upgrade their own applications. And so when we look at cloud applications, this little diagram here kind of points to, this is an example of what I think about when I put together an application that lives on the cloud. It's really a collection of applications. And there may be mixed types here. You might have console applications that you would run to inject commands into your pipeline. You may have server applications that are performing very highly specific tasks about moving data. And you might have graphical applications that show the results of what's happening with your big data stack. And so if you look at this here, you've got like a message queue and a database and maybe an HTTP server and a Spark cluster and some Python applications. And so you're going to have these resource controllers that need to manage what's happening on your stack at any given time. So when I talk about controller applications, in my mind, this is how I've been working with them, is I usually have an application that maybe lives on my desktop or it lives in a cloud node that I can use to perform these operations. And one of the big operations you're going to want to perform is provisioning resources that your data applications can then consume. 
So in this case, you see my controller might spawn a message queue. It might create a database. It will spawn the Spark cluster. And then in the next step, the controller will configure and deploy the applications that run against those. So what this little example is showing that I might have a Python application and I might have some sort of web application. And those will consume the results of my provisioning phase to specifically configure them to run what they need to run. And we'll get into looking at how this works. And so if you're thinking about cloud applications and you start to think about, you want to create a big data application. And what does that mean? You want to make some sort of data processing application. And as you start to plan this out, what you're going to want to think about is the data that you're going to process and how are you going to process that data? What are the transformations that you'll need to do to that data? And ultimately, where will the results of that go? You may have several analytics phases in the middle of your processing. And you want to make sure that you have a clear plan going in ahead of time of what you're going to want to do with these results and where you'll ultimately store them. And so you see, we have a database here. And in this respect, that might be one place where they'll go. And the message queue might be something to communicate these between applications. So how do you plan one of these applications? Where do you start with? And you can see we've got this guy here, and he wants to eat some soup. And he's got this whole complicated thing going because he wants to keep his mustache clean. So you've got a lot of moving pieces. Some of these applications can get quite complicated as the number of pieces you install grows. And again, going back to the previous slide, you're going to want to think about the data pipeline. You're probably going to be ingesting data from somewhere. In my case, I was doing a lot of streaming work. So I was looking at data coming in off message queues directly. But this could also apply for reading from a database or ingesting from HDFS or something like that. And then you're going to want to think about the storage of the processed results of your data. So in this respect, I'm not talking about the storage of where the data is coming from. But once you process your results, if you want to share them with other people, you're going to need a common way to communicate that. And so you're going to need to think about where will you store this? And then ultimately, what I've found is that as you build these applications, you're going to get into specific situations where you'll need to build custom applications that will help massage the data or move it from one place to another. And we'll get into talking about that when I show you some of the code examples. So I want to talk a little bit about resiliency. And this was a big part of my learning process in creating these applications was building pieces, building functional units within the cloud applications. So these may look like individual applications. But building them in a composable way where if one drops out, it doesn't destroy the entire application. And kind of intrinsic to this is being aware of the network functionality between the different pieces that need to communicate with each other. So if everything's listening to a message queue, this is probably less of an issue because you're just reading from the queue. 
And if some application drops off, it's not going to negatively impact the entire application. And then building stateless applications. What I mean by this is that when you build your applications, you're going to need to be redeploying them and upgrading them continually. And so you don't want to build a lot of state into the application. You want them to be able to process an incoming data stream and send it out somewhere. And maybe they'll have a little configuration to set them up. But you don't want to have a large, complicated state that's involved with your application, because that really complicates how you'll deploy it. And empowering the developers. This is something that you're going to have to think about, especially if you're an operator or an administrator. How do you give your users the appropriate levels of control within OpenStack so that they can create databases or create Spark clusters or create message queues? And this is just going to be different for everyone who's deploying a cloud. You're going to have to think about your policies, think about the way you've got your projects and tenants segregated, and make sure that when you give a developer access, you're not giving them too much access. Because security is a big concern. And if you have multiple projects working together, you want to make sure that you keep them separated and the developers aren't able to use their credentials to access resources that they don't own. And then finally, you want to think about production versus development. So when you're going into production, you're going to have much more stringent testing and controls before you let it go out there. And so make sure to have a clear pipeline between what is the development process and what is the production process. And there are some tools you'll use to help you do this. And so as we get into the tools of the trade, this is what we use to help synchronize between developers and ensure that we can have that kind of clear pipeline. You want to be using version control for all of your source code. You want to make sure that your configurations are managed in a way that makes sense and you're keeping secure information out of the configurations. You're going to have to analyze these on a case-by-case basis. But this is going to become important to how you lay out these applications. And then testing and test gating is another big part of this, especially when you talk about moving from a development environment into a production environment. So I'm sure that every developer in this room is using good testing, your unit testing, your functional testing. But this is something you can learn from OpenStack as a community: we have very strong test gates that ensure we're not pushing bugs out into the master branches and tags. And so you'll probably want to implement something similar as you go from a development environment into a production environment. And then I just want to give a shout-out to the Python REPL here, the read-evaluate-print loop that comes with Python. This can be an invaluable tool for investigating how the OpenStack clients work and really introspecting your stack in a programmatic manner. So I turn to this almost daily when I have questions about how a particular client works. If I'm working with the Sahara client and I'm trying to provision data clusters, I might just play with the Python REPL a little bit to see how these objects work.
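To make that concrete, here is roughly the kind of throwaway REPL session being described. This is a sketch, assuming the keystoneauth1 and python-saharaclient packages; the credentials, and the things you would poke at, are illustrative rather than the actual code from the slides.

```python
>>> from keystoneauth1.identity import v3
>>> from keystoneauth1 import session
>>> from saharaclient import client as sahara_client
>>> auth = v3.Password(auth_url='http://controller:5000/v3',
...                    username='demo', password='secret', project_name='demo',
...                    user_domain_id='default', project_domain_id='default')
>>> sahara = sahara_client.Client('1.1', session=session.Session(auth=auth))
>>> [name for name in dir(sahara) if not name.startswith('_')]  # which managers do I get?
>>> sahara.plugins.list()                   # which data processing plugins are enabled?
>>> help(sahara.cluster_templates.create)   # what does a cluster template actually need?
```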
All right, so this all sounds great. How do we do it? You've got this giant box of LEGO bricks here. How do we put them together? And so in this next part, I'm going to go through these bullet points here. And I'm going to show you some code examples of just places where I find interesting interactions, and maybe show some usage patterns that I think are intrinsic to almost all clients in OpenStack, and some of the foundational pieces that you'll want to get comfortable with as you write Python applications against OpenStack. So to begin with the OpenStack tooling, you're going to be interacting with the Python clients. There are currently the OpenStack SDK and OpenStack client projects that are trying to bring all these individual clients together. But more than likely, if you're working with some of the as-a-service pieces like Sahara or Trove, these may not necessarily be in the OpenStack client yet. So you're going to want to become familiar with how the clients work and some of the common design patterns that you use with them. Also, the Oslo projects, there are just some great projects in there to discover. Oslo Config and Oslo Log specifically, I use them all the time. The config project makes it really easy to create command line options or file-based options that you can feed to your applications. And likewise, the logging is wrapping some of the inherent Python logging functionality, and it just makes it really easy to produce clean, consistent logs from your applications. And then if you're not deploying directly onto a cloud and you're using a development environment to do this, some of the tools you're going to want to look at are DevStack, which is probably the most prominent tool that most developers use for creating their applications. And then there are some newer projects like Kolla and OpenStack-Ansible that give you some different options for how you might deploy just a test instance to play with, write your applications, and then migrate them into your cloud. So, deployment. As you create your applications, you're going to have to think about: how am I going to get these into the cloud? Where will they live? What will the structure that we're going to create look like for them? So most frequently in OpenStack, you're creating virtual machines. But nowadays, there are lots of options for containers. And so you're going to have to think about how you will create the images that ultimately get deployed into these machines, or nodes, or containers. And so you have a couple of choices. You can either create custom images that include all of your code in the image, and then that gets created as a virtual machine or as a container. Or you could modify a running instance. So maybe you'll use a base Fedora image to create your operating system, and then you would use a tool to deploy into that. And then as you think about iterating on these applications, you're going to need to think about: will I redeploy my entire infrastructure, or will I run live updates as my applications are running? And so I want to look at a couple pieces of code here. So this is looking at a piece of code from diskimage-builder, which is a tool used for creating the images that deploy virtual machines. And so this is a very simple example. But what I'm showing here is, this is a bash script. And as I'm creating my image, I could have it clone a repository for me and then maybe install the requirements from that repository.
And so now what I've got is an image that has my project in it and is configured for what my project is going to do. And this is a very simple example, but it's just kind of showing you what you can do. Now, moving from image to update, this is another way that we could look at doing this. I might use a generic distro image, like I love to use Fedora. So I might create a Fedora image. And then I could create a Python script using a tool like Fabric, which is what I'm showing here, to do something very similar to what we just did in the previous example. I've got a host, this 10.0.0.3. I've given it a Cloud key. And I've told it what user I want it to use. And then I'm going to do the same operation. I'm going to clone a repository. And I'm going to run the requirements, install the requirements to make sure that we can run it. So how do you configure these applications? Because no doubt, you're going to want to pass information into them to instruct them on what to do and where to look for resources. And so you have a couple different options. And Oslo Config makes this really easy. You can create applications that accept command line parameters. And then when you kick them off through SSH or through some other mechanism, you would just pass the command line parameters in. Or you might create more file-based options, which is what most of the OpenStack services use. They'll have a file that has all the pertinent configuration information for their application. And they'll load that in at runtime then. And always, you want to make sure to protect the sensitive information, especially if you're dealing with passing credentials to your application. So if you have an application that's running in the tenant space, meaning it's not in the control plane, you're going to need credentials in that application so that it can read back and use the Cloud resources. So for example, if your application needs to communicate with Swift to store or retrieve objects, then you're going to need to have credentials in that application to do that. And always, you want to make sure that if you're using something like version control, just make sure you're not checking your passwords in, because people do troll GitHub for that kind of stuff. So this is looking at what Oslo Config can do. And what I'm doing here is just creating a couple simple options that I might need. So with Sahara, when we create data processing clusters, we usually need to know the networks that we're going to be attaching the nodes to. And so for an application that I'm making that might interact with Sahara, I want to create a couple options that will be able to allow me to put those in ahead of time or maybe allow another application to fill in before it spawns this application. And at the bottom, I'm just showing how I'm going to register those options. And I'm going to also register the password options that go along with Keystone, so my credentialing options. And really, this is just meant to show how simple it is to create these interactions and how rich and expressive these data objects are. So we can see this is what I made. And then I could use those as command line options, but I can also get Oslo Config to create configuration files for me. So we can see here we've got a nice example of how it's created a description, it's added these in, and I've got a nice clean INI file to work with that can be updated and filled in with the right values. 
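Here is a rough sketch of that pattern, assuming oslo.config and keystoneauth1. The option names are made up for illustration and are not the exact ones from the slides, but the shape is the same: a couple of network options plus the registered Keystone credential options, and a generated file you can fill in.

```python
import sys

from keystoneauth1 import loading as ks_loading
from oslo_config import cfg

OPTS = [
    cfg.StrOpt('management_network', default='private',
               help='Neutron network to attach cluster nodes to'),
    cfg.StrOpt('floating_ip_pool', default='public',
               help='Floating IP pool for cluster nodes'),
]

CONF = cfg.CONF
CONF.register_cli_opts(OPTS)
# pull in the standard Keystone credential options (auth_url, username, ...)
ks_loading.register_auth_conf_options(CONF, 'keystone_auth')

CONF(sys.argv[1:], project='dataapp')

# The generated INI file ends up looking roughly like this:
#   [DEFAULT]
#   management_network = private
#   floating_ip_pool = public
#
#   [keystone_auth]
#   auth_type = password
#   auth_url = http://controller:5000/v3
#   username = demo
#   password = ...
#   project_name = demo
```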
And so I'm not doing anything with the top two currently because the defaults are fine for what I'm doing. But at the bottom I've put in my specific Keystone credentials and where my authentication server is. And in this respect, I can really tailor how each application interacts with the rest of the cloud. So authentication, this is a big topic. This is going to be common to almost all OpenStack components. And I have a little star here because there will be some situations where you might have deployed a service that doesn't use Keystone for a back end, or you might have a service that just allows public interfaces; for example, you may have a Swift object that can be accessed through a public URL. So there are some corner cases. But this is going to be code that you're going to see all over the place, and you're going to see it again, and again, and again. And so as you build your applications, you're going to want to look at how the Keystone authentication project does its management of these things. So there's a concept of reusable sessions, and you can really reduce the amount of repeated code in your project by taking a look at some of these interactions and figuring out how to use these mechanisms. And so what I'm going to show here is just an example of creating a few clients. So at the top here, I'm just highlighting that, like before, I put the password options into Oslo Config. Now I'm just going to load them back out, and that's going to create an authentication object for me. And with that authentication object, I can make a single session object that I can then use to create my Keystone client and create my Sahara client. Now in this specific case, the Trove client does not accept a session object. So I have to actually manually give it my username and password and project name. But even in this case, since I've pulled those options out of my configuration file, it's really easy to pass them in. So using Sahara, we want to create data processing clusters that our applications can connect to and use. And when I'm creating applications, one of the things that I like to think about is creating templates that will allow me to do the operations that I need to do over and over again. So if I need to create cluster templates, or if I need to spawn clusters or create node groups, I might put these in my code and create templatized versions so that I can quickly substitute values in at runtime for things like which networks I want to use. And then I want to think about, if I need to provision my application or provision my cluster, I may also need to scale it depending on what's happening in the cloud. So if you write a very smart application that's able to detect its needs, you might actually have it dynamically scale the cluster as needed as more data comes online. And then I probably also want to monitor the progress and the health of my cluster, so that if something goes wrong, I can warn the end users that the cluster is in a degraded state. And so this is just an example of a template I might put together. In this case, I'm creating a Spark template for a cluster that I'd like to deploy. And this isn't the full example, but this is just showing you where I might use some of these variables in the template. So if I want to create many clusters, or I have an application that might be responsible for creating many data processing clusters, I'm probably going to want to give them different names. They might need to be on different management networks. They might need to use different floating IP pools.
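Pulling the reusable session idea and the templating idea together, a minimal sketch might look like the following. It assumes keystoneauth1, python-saharaclient, and the config options registered earlier; the plugin version, flavors, and node processes are illustrative, and the exact client keyword arguments can differ between releases.

```python
import json

from keystoneauth1 import loading as ks_loading
from keystoneauth1 import session as ks_session
from saharaclient import client as sahara_client

# CONF is the oslo.config object registered in the earlier sketch;
# one auth object, one session, reused by every client that supports it
auth = ks_loading.load_auth_from_conf_options(CONF, 'keystone_auth')
sess = ks_session.Session(auth=auth)
sahara = sahara_client.Client('1.1', session=sess)

# a templatized Spark cluster template; the {placeholders} get filled at runtime
SPARK_TEMPLATE = """
{{
  "name": "{name}-spark",
  "plugin_name": "spark",
  "hadoop_version": "1.6.0",
  "node_groups": [
    {{"name": "master", "flavor_id": "{flavor}", "count": 1,
      "node_processes": ["namenode", "master"],
      "floating_ip_pool": "{floating_ip_pool}"}},
    {{"name": "worker", "flavor_id": "{flavor}", "count": 3,
      "node_processes": ["datanode", "slave"],
      "floating_ip_pool": "{floating_ip_pool}"}}
  ]
}}
"""

body = json.loads(SPARK_TEMPLATE.format(
    name='alice', flavor='2', floating_ip_pool=CONF.floating_ip_pool))
template = sahara.cluster_templates.create(
    name=body['name'], plugin_name=body['plugin_name'],
    hadoop_version=body['hadoop_version'], node_groups=body['node_groups'],
    net_id=CONF.management_network)   # a real deployment wants the network UUID here
```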
And so by templatizing this and putting these variables in there, it gives me a nice programmatic way to reuse these options again and again while putting in new values each time. And then this is an example of how I might use it. So on the first line here, I actually create the template for a cluster. And then in the next line, what I'm going to do is try to find the image that I'm using there. So I want to look into Sahara to find out what image I'm going to deploy into my cluster as I spawn it. So at this point, I've created a template that will tell Sahara how to create a cluster. I'm going to find the image I want to use with it. I'm going to find the key pair that I'd like to sync up with the nodes on that image, so if I need to log into a node to look at it, I can do that. And then finally, I'm going to create the cluster. And my point here is not to show you some sort of novel usage of Sahara, but to introduce you to the idea that each of these clients has these keywords. And you can see the Sahara client has a cluster_templates object. It has an images object. The Nova client has a keypairs object. And the Sahara client also has a clusters object. And a lot of these, in many cases, map very closely to the way the REST APIs are configured. So as you start to work with these clients and you get more comfortable with them, reading the API documentation that's up on developer.openstack.org can really help to inform you about how some of these clients work. And this goes back to the idea of using the Python REPL to explore these things. Some of these you can find good documentation for, because some of the clients are documented very well, but some of them do not document all these options. And so you might have to explore a little to figure out: how do I create these transactions? How do I do what I want to do with the cloud? And then here's an example. Once I've spawned that cluster, I might want to create just a simple loop to see, okay, is it active yet? I'm going to sleep for five seconds. I'll get the cluster again from Sahara to see its updated status. And if it blows up, then I'll do something. And if not, I'll loop back and I'll keep doing this until it becomes active. And then finally, and this is something that comes into how you feed information between applications: once I've created my Spark cluster in this case, I probably want to know the IP address of the master node, because when I create a Spark application, it's going to want to talk to the master node to send its requests there. So after I've created my cluster, I want to pull out that master IP and save it for later. And we're going to show you how you use that later on.
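A sketch of that create-then-wait step, continuing from the session and template above. The status values and the node_groups layout reflect what the Sahara and Nova clients of that era expose, but treat the specific attribute names as illustrative and double-check them in the REPL.

```python
import time

from novaclient import client as nova_client

# sess, sahara, template, and CONF come from the previous sketch
nova = nova_client.Client('2', session=sess)

# find the image registered with Sahara and the keypair registered with Nova
image = next(img for img in sahara.images.list() if 'spark' in img.name)
keypair = nova.keypairs.find(name='alice-key')

cluster = sahara.clusters.create(
    name='alice-spark', plugin_name='spark', hadoop_version='1.6.0',
    cluster_template_id=template.id, default_image_id=image.id,
    user_keypair_id=keypair.name, net_id=CONF.management_network)

# the naive wait loop: poll until the cluster goes Active (or blows up)
while True:
    cluster = sahara.clusters.get(cluster.id)
    if cluster.status == 'Active':
        break
    if cluster.status == 'Error':
        raise RuntimeError('cluster %s failed to provision' % cluster.name)
    time.sleep(5)

# pull the master node's address out of the node groups and save it for later
master_ip = None
for group in cluster.node_groups:
    if 'master' in group['name']:
        master_ip = group['instances'][0]['management_ip']
```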
So another big part of these data applications is using message queues, especially for the streaming applications that I've been working with. Using a message queue is a very convenient way to get data from one place to another or to ingest it as it's happening live. And the Zaqar project is the OpenStack messaging service, and it creates a really nice baseline and a really easy way to communicate between applications that have the proper credentials to use it. So you can create named queues; these are like topics you could have. Of course, you can write messages to the queues with several different types of options. You can read off the queue on a manual-read basis. And there's also a subscription methodology where you can set your application up to be notified by Zaqar when new messages come in. And so here's another example. I'm creating a Zaqar queue on the top, and I'm going to call it my data channel, and I'll create a little message and I'll give it some information, I'll give it a time to live, and then I'll send it off. And again, what I'm showing here is that to use these clients, the keywords that get used are often very similar to the way the REST API looks, and they're very easy to discover. So it may look complicated when you begin, and especially if you go to some of these projects' documentation, you may not see all these interactions, but looking at the REST APIs and looking at the clients, you can start to deduce how these will work. And again, I highly recommend playing with the Python REPL in reference to these things. So the previous example was sending a message. Here's a very naive approach to receiving messages. I create a queue and then I enter just a long loop that tries to pop messages off the queue, and if it doesn't have any, then it goes to sleep for a second, and if it does have some, then it does whatever this process-message function does. You know, hopefully it's doing something useful, I don't know. So, data stores. As we talked about, as you're creating these data processing applications, you're going to want a way to store the information that is produced from them, because presumably you'll want to share it or analyze it later or whatever works for what you're creating. And so you need to think about: what type of store are you going to create? How are you going to maintain that data store? How will you back it up? What will you do in the event of some sort of failure? And then also, how will you provide access to that data to other people who want to use it? And so in this respect, you can use the Trove project, and what we're showing here is that I want to create a simple Mongo database, and so this is another example of how I'm templating out some of the options that go into that. So I want to make a database name. I maybe want to create a dynamic user and password that can access that database, and maybe I want a network to put that database on. And then finally, I just use the Trove client and a simple command to create the instance and create the databases. And if I use this in the framework of a larger application, I'll take these options, like the username and the password, and I could pass them on to another application that might read from this. So if I have a web client that wants to look at the data that's in my database, I might pull out these credentials that I created dynamically when I made the database and then pass those on to the web application to look into that. And in this way, you don't have to have long-lived users and passwords lying around. Some of this data can be a little ephemeral. And then in a similar manner, you see here with the Trove example, I'm going into this simple loop again where I'm going to try and wait until the instance is active before I do anything. So in this case, I'm doing the same loop I did before. I'm just looking to see, has it gone active yet? And if I have an error, then, you know, let's get rid of it. Otherwise, let's keep looping. And I realize I think I have an error in my code there too, so I don't know. But finally, after I do that, what I want to do is pull out the IP addresses for the databases that I've deployed. And the reason I'm doing this is because at the bottom, as a convenience to the other applications that will consume this information, what I'm going to do is create the Mongo connection string ahead of time. So, you know, I've got my user and my password that I created earlier. I've got an IP address that was given to me when the nodes were created. And then I've got a database that I dynamically created. And this connection string would then be passed on to another application that could use it.
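A sketch of that flow with python-troveclient and the same kind of wait loop. The flavor, datastore version, and the attribute holding the instance addresses are all illustrative; check your own deployment for the real values and names.

```python
import time
import uuid

from troveclient.v1 import client as trove_client

# the Trove client (at the time) wants explicit credentials rather than a session;
# username, password, project_name, and auth_url come out of the same config file as before
trove = trove_client.Client(username, password,
                            project_id=project_name, auth_url=auth_url)

db_name = 'alice_results'
db_user, db_password = 'alice_app', uuid.uuid4().hex   # throwaway credentials

instance = trove.instances.create(
    name='alice-mongo', flavor_id='10', volume={'size': 5},
    datastore='mongodb', datastore_version='2.6',
    databases=[{'name': db_name}],
    users=[{'name': db_user, 'password': db_password,
            'databases': [{'name': db_name}]}],
    nics=[{'net-id': CONF.management_network}])

# the same naive wait loop as the cluster example
while trove.instances.get(instance.id).status not in ('ACTIVE', 'ERROR'):
    time.sleep(5)

# grab an address and build the connection string to hand to whoever reads the data;
# the attribute that carries the addresses (here instance.ip) varies by release
instance = trove.instances.get(instance.id)
mongo_url = 'mongodb://%s:%s@%s:27017/%s' % (
    db_user, db_password, instance.ip[0], db_name)
```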
So this is where we depart a little bit from the OpenStack side of things and get into Spark streaming, because the applications that I've been creating have all been based around Spark and using it to stream information. And I've been using the PySpark interface, which is really easy to use in conjunction with OpenStack. And so one of the things that I think about when putting these applications together is, as I create my Spark cluster, what is it going to need to have access to during its operation? So if my cluster needs to, for instance, store information into a database, I need to make sure that my cluster and the database are on networks that are accessible to each other. And now we're getting more into the streaming aspect of it, but think about how you'll slice the data as it comes in. So you're going to have data coming in, and you're going to need to think about: how will I break this down? How will I process each chunk? And this is something you're going to be concerned about, especially with respect to streaming applications. And then finally, how will my data processing cluster communicate with the rest of my applications? So if I have several applications, or even several processing applications that work in a queue, I'm going to need to send information between them to provide updates, or to tell the user that something has happened, or even just to make sure that the processing goes from one place to the next. And in this respect, I need to think about: am I going to use message queues to do this, which create more of a bus that we can send the information on? Or will I create direct connections between my cluster and the applications it needs to speak to? And the concerns you're going to want to look at are: how much data are you processing? How big is the cluster? Are you going to be overloading communications if you communicate directly? Will that introduce brittleness into the entire cloud application? And maybe is it safer to use a message queue, because more instances can read from the queue and you're not directly communicating with specific endpoints. And so here's just a little example of what the Spark code looks like, and again, this is a bit of a departure from where we were at. But what I wanted to show here is that this first highlighted line is where I'm setting up the master instance for my application. So at this point, I've deployed my Spark cluster, and I've got an application that's going to run maybe on another node. And like we saw before, when I created the Spark cluster, I pulled off the master IP from that cluster. And so my controller application that created the cluster takes that IP and then would pass it into my data processing application, and it would use that so that I could dynamically set the cluster. And in this way, I can have my application running against several clusters. If I want to, I can spawn several instances of it.
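That dynamic-master setup is only a few lines in PySpark. In this sketch, spark_master_ip is an assumed config option standing in for whatever mechanism the controller uses to hand the address over.

```python
from pyspark import SparkConf, SparkContext

# the controller that built the cluster hands the master's address to this
# application, e.g. as an option called spark_master_ip (an illustrative name)
conf = SparkConf()
conf.setMaster('spark://%s:7077' % CONF.spark_master_ip)
conf.setAppName('alice-stream-processor')
sc = SparkContext(conf=conf)
```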
And then, just to get into the time slicing, and this is more about Spark, I'm creating a streaming context here, and I'm showing that I'm going to tell it to update on one-second intervals. So again, specifically when you're looking at streaming applications, you're going to have to be very aware of how you slice the data and how you're going to process it in those chunks. And then the other lines I'm just going to show: I create a text stream, and in my specific case, I didn't have access to something in Spark that would read directly from Zaqar. So this goes back to the idea of custom applications. We needed to create a custom application that would pull information from the Zaqar queue and then forward it to a socket that the Spark application was listening on. And so what I want to highlight here is that as you think about your applications, these are the custom bits that you're not going to capture to begin with, but that you're going to have to think about as you look for the unevenness in how you move the data from place to place. And then just as an example of Spark, what you can see here on the last highlighted line is that for each time slice that I'm processing, I've got some generic function that I'm going to run that's going to process that data. And it might write to a database; it might do a few other things. So this is what the generic processor looks like in my case. I'm bringing information in from the stream, I map it depending on the keys it comes in with, and then I want to store the packets to the Mongo database. And what you can see at the end here is that I have the Mongo URL, which is my connection string that I created earlier when I spawned the database. And so in my Spark application now, I've passed this in as a configuration parameter, either through the file or through the command line, and the command line might be more appropriate if I'm doing this dynamically. And then after that, I'm going to signal some REST server, some application that I've also got living on the cluster. And again, in the same respect, I might have created this instance through Nova, taken its IP address, and then passed it in as an option to this application, so that in that way I could create a dynamic application that could be deployed several times with different clusters or even in different projects. And so, bringing this all together, what I like to think about is: how do you compose these applications? And like I said, you're going to want to use controllers to create resources. You're going to want to take that information and then pass it into the other applications you make. And so we've got the little mad-lib example here. So, you know, Alice is building a data-intensive something-or-other, she's going to pass the information in, she needs an IP address, and she's going to do something else on the message queue, and it's going to be sent to this message queue. So in this way, you compose your applications by allowing the controller to create the resources, pulling the information dynamically from what you've created, and then piping it to the other applications in your data pipeline. And so again, you're going to be using these resources that you create dynamically. You might have several different components that all fit together. And you might have these custom components, these shims, to help you even out the process.
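To put a face on that generic processor, and on the way the dynamically created values get piped in, here is a sketch. It assumes pymongo and requests, plus a handful of illustrative config options (mongo_url, web_app_host, shim_host) standing in for the values the controller hands over.

```python
import json

import pymongo
import requests
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

def process_packets(rdd):
    """Runs once per batch on the driver: parse the lines and store them."""
    packets = rdd.map(json.loads).collect()
    if not packets:
        return
    # mongo_url is the connection string the controller built when it made the database
    db = pymongo.MongoClient(CONF.mongo_url).get_default_database()
    db['packets'].insert_many(packets)
    # nudge the little REST app whose address the controller also handed over
    requests.post('http://%s/v1/updates' % CONF.web_app_host,
                  json={'new_packets': len(packets)})

sc = SparkContext(appName='alice-stream-processor')   # master set as in the earlier sketch
ssc = StreamingContext(sc, 1)                          # one-second time slices
stream = ssc.socketTextStream(CONF.shim_host, 9999)    # fed by the Zaqar-to-socket shim
stream.foreachRDD(process_packets)
ssc.start()
ssc.awaitTermination()
```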
So with all that being said, I'm going to hand it over to Nikita, and he's going to talk a little bit about Heat and Murano and an alternative approach. Thank you, Michael. So what are the Heat and Murano components offering to this kind of application design? They actually provide a higher-level API to control your OpenStack interactions. And what they really do is allow you to provision all your cloud resources through a single endpoint, like the Heat API or a Murano application. What they're also providing is a generic and more declarative syntax. So you provision your Nova VMs or Sahara clusters not through calls to a Python client, but rather by defining a YAML template and then feeding that into the Heat API or a Murano app to provision it for you. And if you look at a Murano application or a Heat template, you will see that the level of the API is higher than what the basic OpenStack clients provide. So you will be able to get monitoring capabilities or scaling capabilities or even auto-healing capabilities exposed through Murano or through Heat auto-scaling groups. So again, how does this compare to the typical cloud client workflow? Once you build the application and you want to provision some cloud resources, you will need to authenticate and authorize your OpenStack clients somehow, then you will need to configure them to reach the OpenStack service controllers, and each client requires a different set of parameters. And then you will need to handle and poll these clients for the resource response and the resource status, and get the output afterwards. So basically what you will have, finally, in your application is the business logic that you want to run, and it will communicate independently with each of the client code paths. You will probably have one for Trove, one for Manila, one for Sahara, and each of them exposes a different set of parameters and different authorization mechanisms. Some of them support re-authentication with a session, some of them don't. But the idea here is that you will have a lot of small pieces of code interacting with each component independently. Of course, this approach has a lot of advantages: you get full control, and you get access to all of the possible options that these clients are exposing. Also, you may run all these operations sequentially and independently of each other, since these clients usually do not interact with each other in any way. But the drawbacks are also obvious: each client has access to only a specific service and is not aware of the others. While you are polling the Sahara cluster for status and Neutron has failed to create some network for you, well, the Sahara client will not know about it until the cluster fails. So you, as the developer, are there to handle everything: asynchronous waits, retries, and handling the information from each client. When you move to the Murano or Heat approach, or a combination of those, what they offer you is that your application talks only to the Murano or Heat API. And then they take care of communicating with Trove, Manila, Sahara, Nova, Neutron, and everything like that. So now you have only a single point of access, and you can just declare all your resources once, put them into the Heat template or Murano app, and then just wait while these orchestrators get everything done for you. Again, this approach gives you one API endpoint.
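To make the single-endpoint idea concrete, here is a minimal sketch of feeding a template to Heat from Python, using python-heatclient and PyYAML. The template is deliberately tiny and the parameter values are illustrative; a real stack for this kind of application would also declare the queue, database, and cluster resources.

```python
import yaml
from heatclient import client as heat_client

TEMPLATE = """
heat_template_version: "2015-10-15"
parameters:
  image:
    type: string
  flavor:
    type: string
    default: m1.small
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: {get_param: image}
      flavor: {get_param: flavor}
outputs:
  server_ip:
    value: {get_attr: [app_server, first_address]}
"""

# one client, one endpoint; Heat talks to Nova, Neutron, Trove, ... on our behalf
heat = heat_client.Client('1', session=sess)   # sess as in the earlier sketches
heat.stacks.create(stack_name='alice-app',
                   template=yaml.safe_load(TEMPLATE),
                   parameters={'image': 'fedora-23'})
# then poll heat.stacks.get(...) for CREATE_COMPLETE instead of polling each service
```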
Also, what you're getting with moving to Heat and Murano is the community application catalog, which has a lot of stuff already built in. If you want to provision a simple HTTP server, there's no need for you to invent it once again. There is an application for it already, and you can just use it, or slightly modify it and then use it. You're also getting built-in UI capabilities. So your templates and your applications will be exposed in Horizon, and all the parameters can be passed through the UI. You do not need to generate a lot of config files for that. And these orchestrators take care of all the async waits, error handling, retries, and self-healing properties for you. But the drawback you get is that your calls are now limited to what Murano and Heat expose. So now let's just quickly go through some quick examples of how you build both Heat stacks and Murano packages, just to give a simple insight into how you configure your application in the end. So building a Heat stack is pretty straightforward. You declare what parameters you want to have for your template, and then you declare what resources are going to be provisioned, of course, using those parameters. So the Heat template usually is a YAML file; it supports JSON as well. And what you are allowed to define is the set of parameters, with different type constraints and descriptions that will be useful to display on the UI. You can also set up some default values, and declare things like a flavor constraint, or the number of nodes, which can be not less than one or not more than 10. You have a set of basic types that you can use here, like strings, integers, and all the other stuff. And then the actual section that declares resources is just a list of what you expect your application to look like when it's deployed. So you can say you want to have a group of virtual machines which will get some configuration from outside, like the parameters that you have declared for the server config and the number of instances. And for the server group, you may want to set up some auto-scaling policy, which will rely on parameters like which instances are going to be scaled, which image they are going to use for scaling, and which user keys are going to be provisioned there. And that's all that you actually need to build a Heat stack. So the template is pretty straightforward, and you feed it into Heat and wait for it to complete. When you go to building a Murano application, well, there are a few more steps you need to go through, but if we look at each of them, they become pretty clear. So you declare a manifest, which is a high-level description of your app, then you set the plan for how it's going to deploy these resources and describe the deployment process for each of them. The UI components are kept separately in a separate definition, and all of this goes into a simple zip archive, or another type of package, that you hand to the Murano API. So the application manifest is very simple. You're just declaring the format, the type, and the code name for this app that you are going to use to refer to it. You can also declare a set of tags, so if you have multiple of them in your environment, you can filter by them. So the next part is the execution plan, which actually allows you to declare parameters similar to Heat, but now you're also going to declare the shell scripts you may run on the guest VMs, and the policies about whether you want to capture or just skip the inputs and outputs that they generate.
And if you elect to capture those inputs and outputs, then the outputs will be available for later parameters to use, and you will be able to reuse them when provisioning other components. The next step is where you define the resources you want to create, at the initial phase of your application provisioning, during the deployment phase, after deployment, and so on and so forth. You can also define the constraints you put on the instances, and this is actually the place where you set up things like auto-scaling or auto-healing properties as well. Finally, you get to the UI definition, which is another YAML file, but what it says is how many and which types of fields you want to see in the UI, what the default values are, and what the descriptions are. These parameters will also be available in your execution plan when you deploy the application. So then, when all of these components are in place, you just put them in a simple directory tree and package that as a zip file. Once it's uploaded to the Murano application catalog, you'll be able to provision it from the UI with a few clicks, just setting up an environment and filling in the parameters. So that's the provisioning part we've done with Heat and Murano, and here I want to pass the mic back to Michael. Thanks, Nikita. All right, I realize we're a couple of minutes over at this point. So just quickly, going back over everything we talked about: you want to think about the application lifecycle, how you're going to deploy these applications, how your teams will iterate on them, and how you will help empower the collaboration between these parts. And it all goes into resiliency and everything we just talked about. A couple of quick links: you want to watch the OpenStack SDK project, Murano of course, and I wouldn't be here without giving a shout-out to Spark, so definitely check that out. And then there are some example code pieces in my repository, the big data barbecue on GitHub, and I will put a PDF version of this there as well. So if you want to review these links at that point, please check that out. A little more reading, just general: check the API site, look at the app catalog, look at these Murano examples. So I guess, do we have any questions in the negative few minutes remaining? Yeah, sure. Go back one more. Yeah, the link here is this one at the bottom. I will put the PDF up there later on today, and there are some more code examples there that show you kind of what you can do. So all right, no questions. Get out there and build something.