Hello, and welcome to Build and Operate Your First OpenStack Application. My name is James Dempsey; I work with a company called Catalyst in New Zealand, doing private and public clouds. With me we have Christian Berendt from B1 Systems, and from Mirantis we have Sean Collins and Nick Chase. We also have Tom, but Tom is not here. I'm sure that Tom is... oh, Tom Fifield, yes, from the Foundation. He did a lot of this work.

So why are we here? The short answer is: Christian wrote an application, and then we wrote a guide about installing this application on OpenStack clouds.

Why are you here? There are a few reasons you could be here. You're a developer and you want to develop OpenStack applications; that's a good reason to be here. You're an operator or an advocate, kind of like myself, and you see a need for this documentation. Or you're lost: there were two very, very good-looking Ceph talks happening right now, and also one Neutron talk, so you may be lost. I won't take offense if you need to get up and find the correct talk; that's okay. Or some other reason: if you're here for a reason that is not on this list, come and talk to me. I'm curious about everybody's motivations. Just tap me on the shoulder at some point.

Why am I here? I build clouds. I like to see my developers win. I like to see them build awesome things, and so I view this as a way for me to foster the growth of my cloud. I like my developers. I don't like my developers standing at my desk. There's a slight problem.
They're all lovely people, but I do not scale. When people come to me and say, "Hey James, how do we do this thing in the cloud?", I take five minutes to explain it, and everybody's happy and everybody smiles. Then ten minutes later somebody else comes and asks the same question. I can't handle that, so hopefully this application and this guide will help everyone to learn, and I don't have to scale.

I'm also selfish and kind of greedy. When I have developers who are enabled and informed and build awesome things in the cloud, people use those awesome things, and when people use those things, they need to scale out. When that happens, directors come to me and say, "James, we are going to buy you bigger, shinier toys." So I view this as also a way to get a bigger, better cloud: more switches, more routers, all of this.

So my goal is essentially to plant seeds in your brain. This is an introduction to building applications on OpenStack. Some of it will go into detail; some of it is broad ideas and concepts. Really, I just want you to think about what the possibilities are, and if you walk away with ideas, that's excellent and I've done my job.

Next I'll hand off to Christian, who will talk to you about the application.

So, yes: before we could start to write the documentation itself, we needed something to document, so we needed to build an application. We decided to write our own application because we did not want to pick, for example, WordPress or some other existing application. So we ended up with this application. This is its web interface, and it's simply a fractal image generator. You can generate a lot of fractal images with this application, really large fractals and a lot of them, on compute nodes, and this way we have a scalable application. You do not have to care about how it works.
It's a black-box application. You just have an installer, which we use throughout the document, so you can run the installer on your compute nodes and so on. Then you have a simple CLI to generate the fractals, and you have the web interface to see the fractal images; you can share them with your friends and reuse them for some need. That's all. Around this application we wrote the documentation itself, and in this documentation we demonstrate how to use this application on a cloud platform like OpenStack.

So, the user guide: this is what the top of its front page looks like, and a lot of it is very similar to this. The reason this exists is that we saw a documentation gap around getting new users of the cloud bootstrapped so that they can become effective. This guide aims to do at least two things.

First, lower the barrier of entry for new developers. We want a step-by-step guide where you can copy and paste things, or download the script and make changes, so that you can get going without having to figure out: what code do I need to write next? What am I missing? What do I not know that I'm missing?

The other reason is that we want to get you thinking about how applications should be architected differently when deployed to the cloud. It's easy to take infrastructure and copy it into the cloud as-is, and then you have problems, because there's a different modality that needs to be imbued in your applications. You have to, for instance, plan for failure. Plan for failure: hypervisors crash. Plan for outages.
I don't know if you've heard about the VENOM vulnerability, but everybody had their instances surprise-rebooted. You need to be prepared for that. You need to be prepared for performance issues: you'll have noisy neighbors in the cloud that you wouldn't necessarily have had in your bespoke infrastructure. In addition to that, there are a number of service offerings, and these can be exceedingly helpful to use, but it's not always intuitive how they should be used. Things like object storage: what does that mean for application development in the cloud? Hopefully we will explain some of these things and give advice. Here I will pass it over to Nick, and he will chat about some of the sections; then we'll start going through the sections one by one, hitting the high points.

Okay, so what I wanted to do here is talk a little bit about the structure of the document, just to give you the lay of the land and the types of things you need to think about. As James was saying, it's a whole different mindset when you're talking about a cloud application. You have to think about different aspects of what you're putting together.

Now, Christian was talking about the application, but what we're really talking about here is two applications. We have the application that users are going to interact with, and then we have the application that deploys and manages it among the OpenStack instances you're going to deploy. The part your users interact with: that's the black box. You don't need to know what it does; you don't need to know how it works. All you need to know is that it gives us a good example of something you would have a reason to scale up or scale out. It's the management part that we're going to show you how to build in this session. So basically, we're going to start out by showing you the very basics: how do you create a VM? How do you destroy a VM?
And so on and so forth. Then we're going to talk about: okay, now that I've got the basics, what's the point of putting it in the cloud? Well, I need to be able to scale out and give it more resources when somebody's asking for a thousand fractals instead of two.

All right, now we've got all that taken care of, but what good is making all these fractals if nobody can actually see them? So next we're going to make it durable: we're going to store files. It's not just about storing these fractals; storing files is something you do all the time in your applications, and that's the whole point of this. We tried to build something that you could generalize to your own situation. We talk about block storage the same way, and about moving to Database as a Service, and so on.

Orchestration, of course, is almost the whole point of this whole thing. Because if all you were going to do was, "oh look, we've got capacity up at 90%, let me go quickly start up another VM," you don't really need cloud for that. What you need is a situation where, oh look, it got up to 90 percent, or 60 percent, or 80 percent, or whatever you set it to, and it automatically scaled up. I don't have to touch it. So we'll talk about how to do that.

And of course it wouldn't be an OpenStack talk if we weren't talking about networking, so thank goodness we have Sean here. And we've got some general advice: look, if you're going to do this, think about these things, because they will help you. So that's the general lay of the land for what we're going to do.

Getting started, then.
The first thing we're going to do is decide what SDK we're going to use. In this particular example that you're going to see, we used libcloud. libcloud is just a cloud SDK. It doesn't matter which one you use; the concepts are all the same. We just had to pick one for this example, and this one was the most complete when we were writing this book.

So you'll choose an SDK. In this case, we're going to look at connecting to the APIs. Remember, in OpenStack everything is built as an API; that's the whole point of cloud. I don't have to go in there and modify the Nova database. No, I just call the API. In this case we are talking about the general OpenStack API, not the individual projects' REST APIs. And then we're going to actually deploy the application to an instance that we create. So we're going to start there.

All right, so this is the example. As you can see here, if you look at auth_username and auth_password, those should all look familiar, because those are the typical authentication parameters that you have in any OpenStack application you're writing. So we're going to grab all of that, grab your region name, and you're going to need to create a connection. Obviously, without a connection you can't do anything. So we start out by getting a provider; in this case it's an OpenStack provider. Remember, I said this is libcloud, so it doesn't have to be OpenStack, but in this case it is. We get the provider from the OpenStack driver, and then we create the connection.

Now, when you're launching a VM, what are you launching it from? You're getting an image.
You're deciding what flavor you want. We're doing the same thing here: I'm going to use this particular image, and this is the flavor I need, based on the size. And when you are creating a VM, you want to give it a key pair. We'll set aside the argument of "no, I don't really need a key pair." Yes, you need a key pair. You're also going to create a security group. All of these are the same things you normally do.

Then, when we create this instance, we can tell it to go ahead and do something. As you can see here, when we create the instance, we pass it all the information you would normally pass when you create an instance, say, booting using Nova. But one thing we're doing here is this user data. In this case, that is a series of commands that the VM is going to execute when it starts up. So we're creating this VM, and the user data is just the installer for our fractal application. It doesn't matter what this command is; we're just using that command because it's convenient for installing this particular application. It could be anything.

Once we've set up what that instance is, we just wait until we get word that it's running. Fairly straightforward. Anybody have any glaring "oh my god, it's not making any sense" questions? Good. Okay, so now we've got it.
Next we need to go ahead and make it accessible, so we're going to grab a floating IP. These are all the things you'd normally do when you're creating an application, when you're working with OpenStack. We're grabbing an IP, we're attaching it to the VM, and in this case we're printing out a statement that says where it is, so that people can get to it.

The important thing about all this is that it's all programmatic. You're doing this in a program, and there is no reason you can't add logic to this program to manage your overall cloud environment. That's the key, and it takes a lot of people a minute once they get to that point, because they think, "I want to run an application on the cloud." Well, as James was saying, taking your legacy workload and dumping it into the cloud does not make it a cloud application. It makes it an application in the cloud, but it is not a cloud application. A cloud application is one that reacts to the environment, and all of this code I've been showing you is how you react to the environment. If something goes down, you could restart it, whatever it is.

And this is just a quick example of what we were just looking at. As you can see here, we run getting_started.py; it's just the actual code we were looking at, and you can see what it's doing. It's generating all this information: listing the available images, the available flavors, all of that. You can see here we've created a key pair and so on, so you can see the output checking the floating IP. And there you go, we've got it deployed. We've got a couple more seconds on here. Okay, so there you go.
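In isolation, the make-it-accessible step might look like this. The helper that picks a free floating IP is pure logic; `conn` and `instance` are assumed to come from the earlier connection and boot steps, and the `ex_*` method names are libcloud's OpenStack driver ones. A sketch, not a drop-in.

```python
def pick_unused_floating_ip(floating_ips):
    """Return a floating IP that is not attached to any node, or None."""
    for ip in floating_ips:
        if ip.node_id is None:
            return ip
    return None

def make_reachable(conn, instance):
    """Attach a floating IP to the instance and report where it lives."""
    # Reuse a free floating IP if one exists; otherwise allocate a new
    # one from the first pool the cloud advertises.
    ip = pick_unused_floating_ip(conn.ex_list_floating_ips())
    if ip is None:
        pool = conn.ex_list_floating_ip_pools()[0]
        ip = pool.create_floating_ip()
    conn.ex_attach_floating_ip_to_node(instance, ip)
    print('The fractals app will be deployed to http://%s' % ip.ip_address)
    return ip
```

This is exactly the place where "add logic to manage your environment" starts: the reuse-or-allocate decision above is a tiny example of a program reacting to the state of the cloud rather than a human doing it.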
So you can see that it actually worked. A few minutes are cut out, because it does some apt-get installs in this process, so there are five minutes of missing time that I am not subjecting you to. And then you can see there's the image. So there you go.

All right, I think we need to advance here. Okay, there we go. Oh, I was going to talk about this piece. You know what? I'm going to let you take this one.

So this is just a quick view of the architecture of the fractals app. You don't have to take care of this architecture yourself, but simply put: we normally have an API to access our cloud application, so that we do not have to care about the endpoint on the back side. We have a database to store everything, so that we have a stateless application and can simply move the API node and so on. We have the web interface in front of the API so that we can see something, and we have, of course, the workers to compute all the fractal images. The workers need to be scalable, so that we can have a lot of workers: not only one instance of the worker, but ten or a thousand of them to generate the fractals.

We can run all of the services multiple times on multiple nodes, so that we can be fault tolerant. It doesn't matter if an API service dies, because we will almost always have several nodes. This way we have a scalable, fault-tolerant, stateless cloud application that can be automated, for example with Heat for deployment, with the installer. And in front of all of this we have an API to control it via the CLI or the web interface. This is just the normal architecture of a scalable cloud application. That's it.
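The shape of that architecture, an API putting work on a queue and any number of stateless workers pulling from it, can be sketched with nothing but the standard library. A real deployment puts a message broker between separate instances; here a thread-safe in-process queue stands in for it, and the "work" is the Mandelbrot escape-time calculation that fractal generators are built on.

```python
import queue
import threading

def mandelbrot_iterations(c, limit=100):
    """Escape-time count for a point c: the core fractal calculation."""
    z = 0
    for n in range(limit):
        if abs(z) > 2:
            return n
        z = z * z + c
    return limit

def worker(tasks, results):
    # Stateless: a worker only reads a task and writes a result, so any
    # number of copies can run in parallel, and losing one loses at most
    # the single task it was holding.
    while True:
        try:
            c = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((c, mandelbrot_iterations(c)))

def run(points, n_workers=4):
    """Fan the points out over n_workers and collect the results."""
    tasks, results = queue.Queue(), queue.Queue()
    for p in points:
        tasks.put(p)
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    out = {}
    while not results.empty():
        c, n = results.get()
        out[c] = n
    return out
```

Because the workers share nothing except the queues, `n_workers=4` could just as well be 1 or 1000, which is the property that makes this tier scale out.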
So now we get to scaling out. This is one of the sections in the user guide: we take the original application, which was deployed in an all-in-one fashion, and figure out how we can make it more scalable. We move the database and message queue services to their own instance, and then we look for the other components which are stateless and easily replicated without having to do magic between them. We find that we can have multiple API instances, because they're just taking requests and putting them onto the work queue, and we can have multiple worker instances. (I have "API" in the diagram twice; imagine one of those says "worker".) With multiple worker instances we scale out: they can do fractal generation in parallel, which is much more performant and is fault tolerant. Kill a worker and maybe you lose the one fractal that was in progress, but you don't lose the ability to create new fractals, and that's the important thing.

An interesting thing to point out here: when you look at the database and the message queue and you say, "I will put them on an instance," that's saying something important. We have implicitly identified them as components that we don't know how to scale. So once you have taken the stateless things and made them really big, you need to go back and ask: okay, what mess do I have remaining? Well, I've got a database and an MQ, and that's dangerous. At some point I need to figure out how to make this particular choke point more robust. That's not in the guide; you can treat it as an exercise.

After scaling out, we talk about durability. Durability, in this context, means object storage, and object storage is essentially a place for you to cram objects.
It's a REST API: you have containers, you put objects into containers, and then you have a namespace that can be accessed over HTTP.

One of the selling points of object storage is durability. Oftentimes cloud providers will replicate objects multiple times, and only tell you "yes, I have received your object" once it is safely on however many nodes they care about. So you can get as many nines as you want, to make sure you feel safe that your data is going to be in existence tomorrow. Ease of administration is another: you don't have file systems that fill up, and you don't have that new sysadmin who learns in the middle of the night that there's a fixed number of inodes on some file systems. Service resilience is another thing: if you have object storage that is replicating across multiple regions or multiple clouds, and one region goes down, you can point your application at another region and your application will continue going. You also get performance, because you're leveraging whatever fast storage these object storage providers are using.

But there are caveats. You should understand that object storage is different depending on how it's implemented, and you can't assume "I have multiply-replicated objects" just because you've put them in object storage. You really do need to go and read your SLA, read what your performance is likely to be, and make sure you know the sharp points you're likely to find.

In the context of the fractals app, we use object storage to back up fractals. It's a very simple use case, and there's a lot that could be expanded there, but we use it for backups. I won't show a demo of that; it's not really exciting. But just think about it.
Like I said, I just want to plant seeds in your brain.

Block storage. Oftentimes, when you ask for an instance, you get an ephemeral disk, and if that instance goes away, the disk just goes away too. If that's a problem for you, you need to look at finding some persistent storage, and block storage is often where you would do that. It gives you fault tolerance against your hypervisors crashing and taking your data with them. It gives you a certain amount of durability: typically it's pretty durable, but maybe not as durable as object storage. Again, read your SLAs. Oftentimes you can get provisioned IOPS: if you have a database that really needs to go fast, you can say, "please can I have a fast disk, not just a regular disk," and you'll get that.

One important thing to recognize: when you ask for a faster disk, you're scaling up, not out, and that should raise red flags in your head. You should say: okay, that has given me a little breathing room, because now I have a fast disk, but how long will it be until I can't ask for a disk that's fast enough for my workload? These are the kinds of things you need to consider when building cloud applications. Cloud applications, yes, not applications in the cloud.

In the context of the fractals app, we move the database from the ephemeral operating-system disk on the services instance to an actual block device that has much better durability guarantees. Another thing to consider: maybe I don't need block devices at all, because Trove is implemented and I can use Database as a Service. Just a thing to think about.

What do we have next? Ah, orchestration: possibly the most interesting part of this, I think. What we viewed earlier was a programmatic definition of infrastructure and application. We ran a program; it did things; it was automated; that was excellent. Another modality is declarative orchestration.
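Going back to the block-storage step for a moment: with libcloud, moving the database onto a volume starts with something like this. The size, name, and device path are placeholders, and the guest still has to put a filesystem on the device and mount it before the database can use it.

```python
def attach_database_volume(conn, instance, size_gb=1):
    """Create a persistent volume and attach it to the given instance."""
    volume = conn.create_volume(size=size_gb, name='database')
    # Many clouds present the first attached volume as /dev/vdb, but
    # check your own: the device name is a request, not a guarantee.
    conn.attach_volume(instance, volume, device='/dev/vdb')
    return volume
```

Since the volume outlives the instance, the instance can now be treated as disposable while the data is not, which is the whole point of the exercise.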
With declarative orchestration, I don't actually care about the steps involved. I don't want to have to know that after I create my instance, I need to tell my script to stand still and wait for the instance to finish building before I attach a floating IP, because I know the port won't exist yet and so I have to wait. These are things we don't really care about, and we shouldn't have to. Declarative methods, using templates in this case, let you just say: this is my infrastructure, you make it happen, this is your problem.

In OpenStack, Heat does the orchestration. It uses templates, and you have environments: you take these environments and they act as data inputs into your templates. We'll have an example in a moment. This gives you automatic dependency resolution. Again, I don't care what needs to happen first; I want the orchestration service to take care of that. Orchestration will also take care of auto-scaling: it has integration with Ceilometer alarms, so you can say "when CPU load gets high, scale this group out," and life is wonderful.

It's also revision-controlled infrastructure, because if you define your networks in your orchestration templates and those templates are in git, you now have a full revision history of your infrastructure. Not only that, but since this is tightly coupled with how you deploy applications, you have that history for the applications too. Even more interestingly, you know exactly what your infrastructure looked like for a particular version. So if some weird interaction of your infrastructure caused a problem, and there's also a software bug, and you need to go back and evaluate the entire ecosystem, knowing "this is what my infrastructure looked like at the exact time I had this application bug" can be very valuable.

Here we'll do an orchestration demo, I think.
We have an environment file This will be the input the data that we push into our templates So I'll have one environment file per cloud region and then I'll just have a different environment file for the separate cloud region Inside the template we have inputs That's defining what is it going to be in our environment. We can have default values And then we define resources so here we have a router We have Networks I think we define three networks here and then we define some instances and the instances Have ports and floating IP addresses and we even have custom configuration options that we pass to them And that is the rest of the file. So we have three instances here that we're creating. We have an app server database server and a worker so here we use the heat stack create command and Give it our environments file and also our templates file and we'll give it a lovely name and then it's off So now it will do all of the things that Our programmatic script would have done But it will do it in whatever order it thinks is important. So we've created networks. I should say This example has some lies. I didn't create Security groups. I didn't make sure that the SH key was there so this part of the User guide is in progress You can also do some interesting things around template composition So instead of defining three instances and sort of repeating yourself a lot of times I think in Juneau they have some features that make composition of templates much better so you can have Concise templates that refer to other templates and so you'll have network templates and Application templates and they'll just all be together one big happy family And if you saw in the script earlier that there was a password in there I'll just let you know that that is no longer the password. So don't don't bother Unless I'm just telling you that because I'm freaking out who knows No, no, no, I actually did change it I actually did change it. 
I just like awkward situations. So here we go: we have our three instances. We're not going to demo the application, because that part doesn't work yet. So I'll pass it to Sean for networking.

So, my job was comparatively simple, compared to the great work that everybody else has done. Obviously, of all of the tracks, the networking track has been fairly popular, and my view was that in this first-app guide, networking is the foundation on which everything is realized. All of the workers communicate with each other using the networking pieces provided by the networking service, otherwise known as neutron (with a lowercase "n", per the neutron documentation standards).

What I wanted to do was showcase what the networking API could actually provide for you. We started by working with the command-line interface: since the libcloud API we were using still has some work to be done on the neutron provider, we just fell back to using the neutron command-line client in most cases.

One of the concepts we had throughout writing the first-app guide is segmentation, meaning you would have the API services, you would have the database services: distinct pieces of functionality, each completely contained within an instance or a template or something like that. On the networking portion, we wanted segmentation of the actual networking infrastructure, where the database nodes have their own private network that's isolated from the rest of the infrastructure, the API services have their own network segment, and the web piece has the actual external connectivity, since that's what your users are going to be interacting with. The idea is that in some cases you're bringing something like a three-tier application over into your cloud environment on the networking side.
The idea is that you use your security group rules, and the addressing scheme between these instances, to separate things out so that they can be managed separately, and one change in one area doesn't affect the rest of the infrastructure. That's what the tenant networking does. We walk through creating the networks through the CLI, and we do an actual creation of a router, which provides the external connectivity and the connectivity between all of the nodes in the network.

A funny anecdote: originally I had three routers. I went a little overboard with it, and then I realized, oh, well, the three routers would have to have links between each other, and it grew into this big ball. So I went back and simplified it a little, to one router that connects everything.

And then what is probably the most pertinent information for an application developer is what the networking service can do on the load balancing side of the equation. You can use the networking load balancer functionality to load balance between your virtual machines, and swap pieces out if they have an error or go into a failure state, without having to update DNS or make the actual user aware that something has happened behind the load balancer. The same goes for some configuration changes.

I did notice that the networking guide, which Nick and I and a few others were working on, demonstrated the operator side of what neutron networking looks like when you're deploying, and I felt that the first-app guide was very complementary, because it showed the other side: for the consumers of the service, this is what you would have to do to use the load balancing API, and so on and so forth. That needs to be documented fully and understandably for our users.
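In CLI terms, the segmentation and load-balancer pieces described here come down to commands like these. The names, the CIDR, and the subnet ID are made up for illustration, and the `lb-*` commands assume the LBaaS v1 extension is enabled on your cloud.

```console
$ neutron net-create worker_network
$ neutron subnet-create --name worker_subnet worker_network 10.0.2.0/24
$ neutron router-create tenant_router
$ neutron router-gateway-set tenant_router public
$ neutron router-interface-add tenant_router worker_subnet
$ neutron lb-pool-create --name fractal-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id WORKER_SUBNET_ID
$ neutron lb-member-create --address 10.0.2.11 --protocol-port 80 fractal-pool
$ neutron lb-vip-create --name fractal-vip --protocol-port 80 \
    --protocol HTTP --subnet-id WORKER_SUBNET_ID fractal-pool
```

Each tier (database, API, web) would get its own net-create/subnet-create pair hung off the same router, and members can be added to or removed from the pool without the client-facing VIP ever changing.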
Selfishly, as James said: I want people to use the networking API, want more functionality out of it, and drive higher adoption, because that keeps me presenting every summit. So, with that, are there any other slides?

Okay, so I wanted to give a bit of advice very quickly. I come from a systems administration background, so when I have a platform to talk to developers, I want to give good advice. Sysadmin Appreciation Day is July 31st; a lot of developers don't realize how much is given to them by their systems team. So I think, if you are a developer who is new to doing operations, it's worth establishing communications and chatting with them, and saying: hey, what is it that you were doing for me that keeps me safe, and how can I make sure that I'm doing the same thing in the cloud?

Backups: always do backups. Security: you've got to apply those patches. Configuration management and deployment: this actually comes with orchestration, so if you're using the orchestration service, it forces you to have good habits. Phoenix servers: maybe consider designing your servers such that if they die, you don't even try to troubleshoot them; you just build a new one that rises from the ashes. Fail fast: use this as an opportunity to try out new things, and if it doesn't work, just blow it away. If your entire application is orchestrated, there's essentially no cost to trying something, so fail fast. And my favorite piece of advice: if you liked it, then you should have put some monitoring on it.

Resources: this document is available in draft form in the api-site guide on GitHub. It's part of the openstack/api-site repo, and it's called firstapp, so if you Google for "firstapp" you should find it.

Thank you.