Welcome back, everyone. Who's excited for Ansible lightning talks? All right, so each speaker has about ten to twelve minutes, and if anyone goes over, the other two get to beat him up. Someone will wave at you when it's time to wrap, and then you can get back to the things you need to be doing in other parts of Summit. Mr. Doug Hellmann, everybody. Take it away.

Thank you, Robyn. I'm going to talk about a project that's very, very new, just imported into the OpenStack infrastructure in the past week. It is partial, but I hope to get some people excited to help me finish it. Downpour is a new tool for taking workloads from one OpenStack cloud and putting them into another OpenStack cloud. It's probably not something that people who are already building cloud-native applications are going to be really excited about using, but if you have some legacy apps, or some "pets" or whatever we're calling them these days, it may be useful for you.

I started building this because we were talking about several different use cases within Red Hat. We have people who need to move between public and private clouds, between two different cloud providers, and also between private clouds during some sort of upgrade process. When we talked about what was involved in moving a workload, the requirements we came up with were: obviously, you need to move all of the data, because otherwise you're not actually doing the move, so we have to download it from one place and upload it to another. In the process of doing that, we want to keep all the relationships between the data; if you've got instances with volumes and things like that, you want to keep all of that attached. We also wanted to support multiple OpenStack versions.
I mentioned upgrades as a case, so we wanted to be careful about which parts of the API we were using and how we were using them. And we wanted to allow for public-to-private moves, so we had to deal with differences in the APIs of different clouds, in terms of what was active, what was supported, and that sort of thing.

There is a range of factors that make these moves easier or harder. One is the level of permission you have in the cloud. If you're operating the cloud, you have database access and can do basically whatever you want to that data. If you're an admin user, you might have more rights than a regular user. The hardest case is if you're just a random tenant: you just have a public cloud account and you want to get your stuff out of it. If you have two clouds with a shared storage backend, that makes some things easier; you can move volumes between the two clouds without actually having to export all of the data, for example. You might have two clouds that can talk to each other over some sort of fast interconnect, maybe because they're in the same data center, or because you have a temporary connection set up between them that's really fast. And then there's the hard case, which is basically no interconnect between them: you might have to go offline in order to move the data from one to the other. The other criterion I looked at was how the application itself is configured.
How complicated is it? If you're doing really good, clean deployments, where you have a separate tenant for each application, then it becomes really easy to isolate the data you need to move in one go in order to get the whole application. If you have one tenant but you're using naming conventions or some other scheme to isolate applications from each other, that's not quite as easy, but it's still fairly easy. And if you're like me and you just sort of randomly create a bunch of virtual machines that do things, that makes it harder, because you have to figure out which virtual machines you need to move, what networks they use, and things like that.

I decided to start by looking at the hardest possible case, because that meets some of the use cases and it exposes some of the challenges we were going to see. So let's assume you only have tenant permission: you don't have admin, and you have no access to the database. Nothing between the two clouds is shared; you might even have to move data physically on a hard drive from one place to another, because there's no network access between them. And, you know, my tenant is the tenant, so it's full of all kinds of random virtual machines, and I need to be careful about selecting them.

What are the trade-offs of making that hard case the first case I try to solve? Obviously it's not a live migration tool, because you don't have any interconnect. You're not going to prevent disruption; there's going to be some kind of hard switchover between the two instances at some point. You have to export all of the data, so you have to have a place to store all of the volumes and databases and everything else you're going to export. And something that's not immediately obvious, but that becomes apparent when you try to manage the relationships between these things, is that the UUIDs between the two clouds are not going to be compatible; you can't necessarily set the UUID of a new thing that you create in your second cloud.

As I started approaching the problem, I came up with a multi-phase approach. There are four phases to deal with that rat's nest of instances and networks. First, we need some way to select which resources we're going to export; that's kind of obvious. Then you have to do the actual export, and then the import; those are pretty obvious too. And then there's the cleanup of the old resources: after you do the move, you want to delete the data so that you don't have duplicates, or confusion, or somebody reboots an instance and suddenly you've got mismatched data in a database somewhere. Between each of those steps, because no migration is ever fully automated or fully clean, especially when you're dealing with different clouds with different characteristics and supported features, I wanted the opportunity to stop and modify whatever had happened at the previous step before moving on to the next one. So there are a lot of text files involved in the process; the tool outputs a bunch of these files, so you can stop, modify a file, and then move on to the next step, and it will do exactly what you want at that stage.

The first step, figuring out what you want to export, is the query command. It does things like find all of the servers that have certain name characteristics or that are a certain flavor, and it could potentially do things like give you all of the instances attached to a particular network. It builds a YAML file with the instructions that will be used in the next step for doing the export. In this example it found just one instance, because I had a little demo environment set up with just one instance, but it could also add images (you could query for images with certain name characteristics), SSH key pairs, and volumes that are not currently attached to an instance.

In the export step, you run the export command against that file, and it goes through and processes the entries. It captures all of the SSH keys, downloads all of the volumes, snapshots the instance and downloads the image created from that process, stores all of that information in the directory you specify, and then writes a playbook to recreate all of those resources in your new cloud, referencing the files it created when it did the downloads. The import command just runs the playbook. There's a whole bunch of stuff I didn't have to write, by virtue of using Ansible as part of the process: at this point it creates public keys, volumes, networks, subnets, all of that kind of thing, everything Ansible supports. As part of the process it also creates a CSV file that maps the old UUIDs to the new UUIDs, so you have a data file you can import into a tool or run scripts against. If you had a network with a particular UUID and you were referencing it by that ID somewhere in your old cloud, you can update that reference in your new cloud. And this is just an example of the output, where it's creating the private network that was associated with the instance; it automatically detects those resources.
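To make this concrete, the instructions file produced by the query step might look something like the sketch below. This is illustrative only: the field names and resource names here are invented for the example, not copied from downpour's actual schema.

```yaml
# Hypothetical sketch of the export-instructions file the query step
# emits. Field names and resource names are illustrative, not
# necessarily downpour's real schema.
servers:
  - name: demo-instance-1        # the one instance found in the demo
images:
  - name: custom-app-image       # images matched by name query
keypairs:
  - name: demo-keypair           # SSH key pairs to carry over
volumes:
  - name: detached-data-volume   # volumes not attached to any server
```

Because it's a plain text file, this is also the natural place to pause and hand-edit the selection before running the export step.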
This is the current set of relationships that it understands. If you tell it to extract a server, it will automatically get the associated things. It doesn't do attached volumes right now, but it does everything else on this list. You'll notice, though, that the relationship graph is a tree, and an Ansible playbook is a linear, sequential set of steps, so I had to figure out how to serialize that tree and output all of the resources in the right order, so that you create your security group before you try to add rules to it, and so on. The way I did that (this is a little bit of the implementation) is that it's a Python 3-only tool, and it uses a generator function and the Python call stack to do the serialization. Instead of walking the graph myself, I'm basically letting the Python interpreter do all of that for me. It builds a list of tasks: the export command generates all of the tasks in the order they come out of the resolver code, and then dumps that list to YAML, and that's your playbook.

I mentioned a few of the resource types it supports. SSH keys: if you created a server using a key, it will automatically grab the public key for it. Also security groups, networks, and subnets. It doesn't do attached volumes right now, as I mentioned, in part because that's extremely disruptive: you have to actually shut down the instance and detach the volume in order to do the snapshot. If you already have the instance shut down and the volume detached, you can tell it to export that data, but it doesn't do it automatically. And I just haven't gotten to floating IPs and routers. I'm sure there's a bunch of other stuff that people would be interested in exporting and moving that it doesn't do yet.

Okay, so I would like help with it, if you're interested. This is where you can find the code; it's in the OpenStack infrastructure, but I do also accept pull requests on my GitHub account, the one that gets synced from the OpenStack infrastructure. Pull requests there will be closed automatically by the robot, but if you open them against my account, I'll do the merging for you if you're not set up to work with the OpenStack infrastructure. Grab me around the conference or on Twitter if you have questions. Thanks.

All right. This is SpamapS; he does not spam you with apps. This is Clint Byrum, and he's going to talk about Ansible turtles.

All right, here we go. This is not "Ansible turtles all the way down," by the way, though that would be great, right? I was ready for some ASCII art of turtles, but that would have been even simpler; we'll work on more clouds instead. So hi, I'm Clint Byrum. I work for IBM as a cloud architect on a project called BonnyCI, which is an attempt to bring Zuul to the masses, somewhat. Today I'm going to talk about how we use Ansible, because I've got to tell you, our team is a little bit Ansible crazy. If you like Ansible, you probably don't like it as much as we do.

First, let me explain a little more what BonnyCI is. If you're wondering who that is on the slide, that's Anne Bonny. She's a pirate; we love pirates. We started a group in IBM called Pirate Cloud, building clouds using OpenStack and a lot of pirate terminology. When we finished that project, everybody said they really liked working with the OpenStack development infrastructure: we had a lot of people working not only in the upstream one, but we also built our own downstream version with Gerrit and Zuul. So we took a look at the problem.
We thought: how do we bring this to an even wider audience? What we did, basically, was take OpenStack's excellent Zuul scheduler (if you're not familiar with what Zuul is, I'll talk more about that tomorrow), take some community GitHub patches, and then help finish Zuul v3, which made Jim very happy, and we're very happy to work on it too. One of the coolest features: if you're used to the way Zuul works in OpenStack, you have to go land patches in the project-config repo, but Zuul v3 actually has in-repo configuration. So if you're used to the way Travis or some others work, you can just edit your own jobs and change parameters, which is really cool. There are many other features that we needed as well. We also added pirates, because that's what we do. Beyond that, I'm going to leave the suspense hanging for tomorrow at 3:55 in room 206, or just come ask me after we're done talking.

So what do we actually use Ansible for? You might think that's an easy answer, but actually it's kind of ridiculous. First, when we decide we're going to deploy some more BonnyCI, we provision our nodes using Ansible. We use the most excellent OS modules, which Doug just referred to, which are pretty amazing and which talk to the OpenStack APIs. Once we're done with that, we deploy all of our software on those nodes using Ansible, and all of our configuration as well. One of the pieces of software deployed there is Ansible. We also deploy Zuul, which actually runs Ansible to test code. And then we gate all of our code using Zuul, which runs Ansible to test our Ansible, which deploys a Zuul, which... I'm going to stop, because you see the hands; that's how I'm feeling right now. Or, for a more elementary example: just 20 GOTO 10. In the hallway, Jim and I honestly weren't sure when we stopped running Ansible, but we use it profusely. The key thing we found was that Ansible was sort of a gateway drug to more Ansible. As we adopted it (a lot of us have experience with Puppet or Chef or tools like that), we found that Ansible was just an accelerant for all of our automation.

After doing that for a while, I thought I could share some wisdom with you. If I had candy, I would throw it to anyone who can name the individual who made that quote. Anybody? Shout it out. All right, imaginary candy for you.

So what have we learned? One: the OS modules are fantastic. Here's an example snippet (it goes down further, and you can see a link to it there) of how we provision servers. I think it's easy to read, it makes sense, and we don't have any trouble with this code. We actually have an IBM Bluemix Private Cloud (if you remember Blue Box, they rebranded as that), and this creates all of our resources in there. There are actually a lot more OS calls, for users, security groups, everything we need, but we definitely find that they're solid. They work; just install shade. I think there was a presentation earlier where Paul showed it and it worked. So we highly recommend that.

The other thing we've learned, and this seems obvious, is that code review, plus fully testing Ansible deployments using an excellent testing tool (hint hint), is an excellent idea. We've had a lot of success. You might see our beautiful friends over at Travis also mentioned there. Part of that is that we want to know how Travis works, because we feel like Bonny is a higher-scale, differently focused version, but we want to understand how the world uses it. We've found it fairly useful; it's a little redundant, but it doesn't usually cost us any extra time. And we've found that using GitHub works well. Even though we love Gerrit (who else loves Gerrit? You should all be raising your hands; someday, hopefully, Gerrit will win), GitHub is fine, and we use it for everything, including its checks.
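A provisioning task along the lines Clint describes, using the `os_server` module on top of shade, might look roughly like this. The host names, image, flavor, and network here are invented for illustration and are not taken from BonnyCI's actual playbooks.

```yaml
# Illustrative sketch only: names, image, flavor, and network values
# are placeholders, not BonnyCI's real configuration.
- name: Provision a CI worker node
  hosts: localhost
  tasks:
    - name: Boot the server via the OpenStack API (requires shade)
      os_server:
        cloud: mycloud            # named entry in clouds.yaml
        state: present
        name: ci-worker-01
        image: ubuntu-16.04
        flavor: m1.medium
        key_name: ci-deploy-key
        network: ci-net
        security_groups:
          - ssh-only
      register: worker            # returned facts include the server's IPs
```

The registered result can then feed later plays, which is part of why a single module call covers so much of the provisioning story.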
If you look at the checks on GitHub, there's a little Bonny; she actually merges all of our code. The other thing we've learned recently is that we keep hearing about containers and Kubernetes, and they're cool, and I bet Lars could teach me something about this, but we're having a hard time fitting them into our sort of torrid love affair with Ansible. It just feels like putting another truck on top. We want to do it; we think it's the right way to go, and there are a lot of benefits to using containers and things like Kubernetes, but we haven't actually done it. We would love people to help us do that, actually, because we're out of resources. And the cool thing is, you actually can help us, because all of our deployment is entirely open source. You can see all the code for our Ansible; the only thing you won't see are the actual secrets that make everything go. Sort of in the model of OpenStack's infra team, we run an open infrastructure. So if you're interested in running Zuul, if you're interested in building another CI system, if you want to run one behind your firewall and contribute patches to make that easier: we're here, and we would love to have your contributions. That's actually all I have, so I want to say thank you for listening, and let's open it up. We have two minutes for questions.

Hello, hello. So, Clint, my understanding is that in Bonny you already have the GitHub integration going, or am I mistaken?

So another company, GoodData, wrote some patches and submitted them to the Zuul master branch. We merged those in, and we're using them to test, just to play with Zuul before v3. And Jesse Keating is on our team, and he's actually porting those, together with Jim, to v3; those should land in v3 hopefully someday soon.

When can we expect to have that merged? I mean, we're super interested in using the GitHub integration with Ansible.

There's plenty of work to be done in the review queue for the feature/v3 branch. Okay, message taken. But yeah, I'd say that's imminent. Any other questions? All right, well, thanks so much for listening to my little Ansible love affair, and I'll give the rest of the time to Lars.

What does your shirt say? "Ask me about CI," smiley face. And last but certainly not least is Lars. I'm sorry, I've been butchering names today; people say my name as "Robin Bergeson," and there's not even an actual S in my name. Sorry, Lars. Is that close? Great.

Thank you so much. My name is Lars. I work at Red Hat, in the OpenStack engineering group, and as a kind of side project I'm also the cloud-init package maintainer. This is also a CI story, although not quite as exciting and automated as the previous one, but still hopefully interesting. The goal I had as the cloud-init package maintainer is to test the package in its actual role when adding patches and producing new releases. cloud-init is a tool that performs initial system configuration of a server when it first boots up, typically in a cloud environment, although in some situations also on bare metal. So I wanted to be able to take my new package, stick it on a server, boot it up, and then run some tests against that server to make sure everything works as I expect. I have this all wired up through Jenkins. It kicks off a Jenkins job that clones three repositories to start with: the package sources, an Ansible project containing the playbooks that will set up the test environment, and the project that will actually run the tests on that server once it's created. Now, the first step is creating packages and images; that's not exciting.
We're just going to move on. Before we can get the image into OpenStack, we have to authenticate to OpenStack, and you've probably heard this at least twice today, but I'm going to go through it anyway, because it's here. All of the OpenStack modules support the os-client-config library, which is a Python library for managing OpenStack credentials, so there are at least three ways of providing authentication credentials to your Ansible playbooks. There are the environment variables that we all know and love and have been working with for years; those are great. You can put literal values into your Ansible playbooks, which seems crazy from a manageability perspective once you get beyond a single playbook. Or you can use a clouds.yaml file, which is really exciting: that lets you store credentials for many cloud environments in a single file, and the library will look for that file in several standard locations. It will look in a user-level directory, so you can have a common set of credentials for all of the work you're doing, both with Ansible and with things like the OpenStack unified client, which uses the same library. You can have credentials in your current directory, which is neat if you have different projects targeting different cloud environments and you want changing directories to change the cloud you're targeting; that's also cool. You can also have the file in a system location, which would expose the same credentials to everyone on the system.
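A minimal clouds.yaml along these lines might look like the following sketch; the cloud name and all credential values are placeholders, not a real environment.

```yaml
# ~/.config/openstack/clouds.yaml  (the user-level location)
# "mycloud" and every credential value below are placeholders.
clouds:
  mycloud:
    auth:
      auth_url: https://cloud.example.com:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

The OpenStack modules in Ansible can then select this entry with `cloud: mycloud`, and the openstack command-line client can use the same entry via `--os-cloud mycloud`, which is exactly the shared-credentials benefit described above.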
Putting credentials in the system location is not a use case I'm currently taking advantage of, but I guess it's there if you want it. So you've got your credentials, you've got your image, and now we need to send it out to the cloud. I was going to lose this slide image, but it was interesting: I was looking for a disc to represent a disk image, and then I realized I haven't seen a disc myself in about 20 years, so I used the kiwi, because that seemed just as representative these days. So we get the image into OpenStack with the os_image module. I'm not going to go through this in detail, but it's pretty typical, with some sane defaults. This is in a role that is part of this larger project, and the sane defaults mean you're not always having to specify the container format and the image format, but this pushes it up. Now, I am targeting a couple of different image formats. Just about everybody out there, RHEL, CentOS, Fedora, Ubuntu, gives you a standalone cloud image that you can download and boot in your OpenStack environment, and it just works. Then there is the TripleO upstream CI environment, which I wanted to test against as well, and their tools produce an image that requires an external kernel and RAM disk. So with a whole bunch of conditionals in my Ansible playbooks and a couple of calls to the os_image module, we can upload the kernel and the RAM disk first and record those in variables, and then make one last call to os_image where we provide the properties that map that file system image to the appropriate kernel and RAM disk. Once this is complete, regardless of what we started with, we have an image in OpenStack that we can boot and test, and it will be awesome.

So we need to boot an image. Well, we need a network first. We need the subnet that goes with the network. We need a router, because we're going to need inbound access to that environment. We need a security group to make sure that the SSH port is open.
We need a floating IP address. We need to boot the server, associate the floating IP address, and wait for it to finish booting. And when we're all done with everything, we need to tear it down again cleanly, so that we don't litter our testing environment with the detritus from our tests.

I started doing this with just the basic Ansible OS modules (os_server, os_network, os_subnet, and so on), and it quickly got a little bit hairy. There was a lot of repetitive work just to get the environment up and running, and I wasn't happy with it; the teardown was getting kind of ugly too. So I realized I could leverage OpenStack Heat, OpenStack's own orchestration tool, to make this process a little cleaner. Heat orchestrates collections of OpenStack resources that are related somehow. It uses a declarative YAML syntax, which sounds awfully familiar, and it can instantiate resources in parallel while being aware of the dependencies between resources, so it can be reasonably time-efficient while also making sure that things happen in the right order. Ansible has a module called os_stack for interacting with the Heat API. You can provide it with a template, the YAML file that describes your Heat stack, and you can provide it with parameters, which are inputs into that template.

Now, I'm not going to go into Heat templates in detail, but I want to take a quick look at one, because there are two things I want to highlight here. There are two parts: this beginning section is the input parameters to the Heat stack, values that are referenced elsewhere in the stack, kind of like variables in Ansible. Then we get down to the resources section, and there are resources for all of those things I was just listing.

There are two resources here that are really interesting. The first one is called a wait condition. A wait condition is a Heat resource that is not considered complete until it receives a REST notification. We are also providing a chunk of user data that we deliver to cloud-init, right there, and this user data includes a shell script that calls curl using a generated URL provided by Heat when we deploy the stack; that call provides the notification to the wait condition. What this means is that when we deploy our stack using the os_stack module, it starts up the server and passes the data to cloud-init. The server boots, cloud-init reads the shell script and runs curl, that notifies the wait condition, and the Heat stack completes. At that point, control has returned to our Ansible playbook, and we know that the server booted correctly, that cloud-init ran correctly, and that it has correctly configured the credentials we'll need to log in and run tests against the server. So we are good to go.

One more step there: I record the results. Most of the OpenStack modules in Ansible give you useful information back, and you can record it in a variable and extract bits of it. I take all of it and dump it to a JSON file, because that means I can run subsequent playbooks against the same stack without having to do introspection, call APIs, or recreate things. But that's neither here nor there. Once we have the stack created, we have our server, and we can add that server to Ansible's inventory using the add_host module. We take some of the values we got back from os_stack, the name and IP address of the server we just deployed, and assign it to a host group, and now it's in our Ansible inventory. And now that it's in the inventory, we can run tests against it using another Ansible playbook. I'm not going to look at that in detail; it's a lot of assert statements making sure that file systems were resized and files were created where we expected them to be.

Once we're done, and this is the cool part, cleanup is now basically a one-step operation. I just say, "Hey, Heat, I'm done with all those resources now; you can go ahead and delete them." In this example I'm actually calling the openstack command line to query for a list of stacks that match a prefix I'm getting from Jenkins, with the job ID, because that way I can do cleanup at the beginning of the task and leave stuff floating around after the Jenkins job is complete, so I can inspect it manually later. But we can tell Heat to delete all of those resources, it cleans everything up, and we are done. I thought that was kind of a neat way of utilizing both the OpenStack modules in Ansible and OpenStack's own orchestration tools to get a cleaner setup and teardown of a full test stack, without having to rely on statically configured resources beforehand. And that is the end of my story. Thank you very much. I'm just curious how I did for time. That's okay. Are there any questions out there? I don't think we have any questions. Well, thank you all very much.