So my name's Ryan. I'm an associate software engineer at Red Hat, on a team called Contra. We're working to unify and streamline product delivery across the company. More specifically, I work on a tool called LinchPin, which is a provisioner, and that's what I'm going to be talking to you about today.

So first of all, can I see a show of hands: who has any experience with Ansible? Awesome, that's a good place to start. Some of you may have worked on a project, or may currently be working on one, that has some sort of Ansible playbook like this that's made to deploy whatever you work on. It's 900 lines of code, and it's really difficult to tell what's going on if you need to modify or update things. Hopefully, if you're deploying slightly different versions of it, you use extra vars to parameterize it. But if you're doing that a lot, and there are a lot of different parameters you could change, you end up with these really long commands that are difficult to remember whenever you want to modify something.

So we developed LinchPin as a solution to that. LinchPin is more or less a declarative wrapper around Ansible: instead of telling Ansible how you want things to be done, you tell LinchPin what you want, and LinchPin figures out how to do it. It's highly extensible through the use of something called hooks, and it has a pretty simple API and CLI, which you can see below here.

This is just a quick overview of the architecture of LinchPin before we get into the weeds. I know there's a lot going on. Everything in yellow here is user input: the topology, the layout, and hooks, and we'll get into what each of those does. Those are fed into LinchPin, which converts them into something Ansible can understand. Ansible goes and provisions whatever cloud providers it needs to, and then LinchPin takes the data it gets back from those clouds and sends
them to a few different outputs. We won't talk too much about those, but all you need to know for now is that the RunDB is what lets LinchPin store data from every single run. It's what allows you to, say, run linchpin up twice in a row and tear down only the resources you provisioned in the first run. And the inventory is the output that the user can customize, based on the layout.

A LinchPin workspace, at its core, is built around something called a PinFile, which describes all the inputs and outputs you need for provisioning. A PinFile is a bunch of targets; a target is just something you're going to provision as a single unit. You're never going to break a target down into pieces you might want to provision one at a time. Each target can contain a topology, a layout, and hooks.

The topology is technically all you need to provision something; it describes your desired state. Here we've got a topology that provisions some OpenStack resources. Resource groups in LinchPin describe a set of resources you want to provision to a single provider. Here we're just provisioning OpenStack resources, but if we wanted to provision something to AWS as well, we could do that in a different resource group. Within a resource group you have resource definitions, which are everything you want to provision: we have a key pair here, and four OpenStack server instances. If we wanted to provision another OpenStack server with slightly different parameters, we would do that as another resource definition. An OpenStack network, for example, would be another resource definition.

Before we get to the layout, there was a question: are there dependencies between them? Yes — the resource definitions are provisioned in order by default, and in this example I actually provision a key pair first and then use it as the key pair for those servers. You can provision
asynchronously, in which case you of course lose those ordering guarantees, but by default it's done synchronously.

Moving on to the layout: the layout is how you define user outputs. Layouts are used to generate an inventory file, which looks something like what you see on the right. After provisioning, LinchPin gets a list of all of the hosts that were provisioned. This doesn't include networks or load balancers or anything like that; it's OpenStack compute instances, EC2 instances, etc. It goes through this list of hosts and assigns each one to the relevant host groups. For example, here we'll get a list of four hosts, which we saw in the topology earlier. It'll pop the first one off and say, that's the master, so I need to assign it to the masters host group; pop the next two off and assign them to the workers host group; and finally pop the last one off and assign it to the database host group.

In addition to listing each host in its group, variables defined in the vars section can be associated with each host. Here we just have hostname set to the __IP__ variable. __IP__ is a built-in; it's just the default hostname for whatever host you're using. But any data that LinchPin gets back from a cloud provider can go in your vars: that can be networks, that can be zones in OpenStack. Instead of a public IPv4 address, for example, you could use a private IPv4 address. By default this data is in a human-readable format like this, but you can also format it as JSON if you want it to be parsed by a script or something like that.

And finally we get to hooks. Hooks are what allow LinchPin to be extensible. They let you define custom actions that go beyond basic provisioning. You can write them in a whole bunch of languages, and one thing that makes them so powerful is that they can communicate with one another, as well as with LinchPin itself and with the data that LinchPin gets back from these cloud
providers.

Just as an example, here there are four hooks. Our first hook, which runs before you provision anything — it's a pre-up hook — creates a floating IP pool in OpenStack. Then, after the provisioning is done, these two post-up hooks run: the first one installs dependencies on the hosts, and the second one sets up a firewall. Finally, if you run linchpin destroy, after destroying everything, this post-destroy hook runs to destroy the floating IP pool.

Now, let's say in your first script, create_fip_pool, you only want to create the pool if it doesn't already exist, and then you only want to destroy the pool at the end if you had to create it in the first place. This is where hook communication comes into play: that first hook can save data to the RunDB that says, yes, I actually had to create this floating IP pool, or no, I didn't, and the post-destroy hook can take action based on that.

The last thing I want to point out is this context variable. The context determines whether a hook runs on the machine on which LinchPin is running or on the provisioned resources. In this pre-up hook, context is false, so it runs locally; in the post-up hook where you're installing the dependencies, context is true, so it runs remotely.

So now that we've seen how all these components work, let's put them together. I've got a couple of demos for you. The first one is a little easier, a little less flashy; the second one I actually can't run because we're not on the VPN, unfortunately, so I'll go through and explain how it would have worked, so you can see a fairly complex deployment via LinchPin.

So let's say I just want to deploy a basic libvirt instance with LinchPin. I can do that.
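A hooks section like the one described a moment ago might be sketched like this in a PinFile target. The hook names, playbook filenames, and exact field names here are my best guess at LinchPin's general hook format, not verbatim from the talk:

```yaml
---
# Illustrative sketch of the four hooks described above.
# Hook names, action files, and exact field names are approximate.
hooks:
  preup:
    - name: create_fip_pool      # runs before provisioning
      type: ansible
      context: False             # run locally, on the machine driving LinchPin
      actions:
        - create_fip_pool.yml
  postup:
    - name: install_deps         # runs after provisioning succeeds
      type: ansible
      context: True              # run remotely, on the provisioned hosts
      actions:
        - install_deps.yml
    - name: setup_firewall
      type: ansible
      context: True
      actions:
        - setup_firewall.yml
  postdestroy:
    - name: destroy_fip_pool     # runs after `linchpin destroy`
      type: ansible
      context: False
      actions:
        - destroy_fip_pool.yml
```

The context flag is the switch mentioned above: false runs the hook on the machine running LinchPin, true runs it on the provisioned resources.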
I can run linchpin init and say libvirt. This is going to pull down a directory we have in the LinchPin repository with a number of resources used for provisioning with LinchPin. We have all sorts of examples here, but I don't actually need any of that, so I'm going to delete it, and I'm going to create a target called devconf-demo. I only need a topology, really.

In resource_groups I'm just going to have one, because I'm only provisioning libvirt instances, so I'm going to name it libvirt-demo, and the type, like I said, will be libvirt. Then I've got my resource definitions. Let's name it demo-node, and the role will be libvirt_node. Now, there are some fields in here that are required and some that aren't — I'm sure you saw all the fields in that really big PinFile, and not all of those are required. But let's see what is required. I believe the URI is required. I know it's kind of hard to do "ten lines or less" when you have seven lines of bootstrapping code, so we'll pretend that doesn't count. Yeah, I was a little ambitious with my title; I should have gone with seven lines.

For the role, I of course need an image — I didn't bother memorizing this image name, to be honest, I'm just going to copy it — or an image source. And I know I need memory, so I'll set that to 2048 megabytes. I want to make sure this is valid before I run it — that I got all my syntax correct — so I'll run linchpin validate and try to validate that target, and let's see what happened. Oh, it failed: the topology name is required. That's a pretty basic one. Oh, and I know what else I forgot. Yeah, but I wasn't going to start off with a complete topology. Exactly. And it'll tell you if you misspelled something, for example. The only place where it's a little bit finicky is the role: if you leave out role, sometimes you'll get an error. But yeah, it's pretty reliable.
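At this point, the PinFile being typed in the demo might look roughly like the following sketch. The URI, image path, and exact field names are illustrative, not the literal values from the demo:

```yaml
---
# Rough sketch of the demo PinFile; values are illustrative.
devconf-demo:
  topology:
    topology_name: devconf-demo
    resource_groups:
      - resource_group_name: libvirt-demo
        resource_group_type: libvirt
        resource_definitions:
          - name: demo-node
            role: libvirt_node
            uri: qemu:///system                           # local libvirt; a remote URI works too
            image_src: http://example.com/demo.qcow2      # hypothetical image location
            memory: 2048                                  # megabytes
```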
The other thing I forgot is the vcpus, I think. But things like count — you could provision two of these — default to one, so not all the fields are required. And by default, I think it uses the default network.

So I got a success here; the topology is valid, so I can run linchpin up -vv — it uses the same verbosity flags as Ansible, so that'll look familiar to you. And if I had multiple targets in this PinFile and I just wanted to run one, I could provide that at the end of the command line. It'll gather any facts we need — oh yeah, I forgot I'm not part of the libvirt group, so it'll run any bootstrapping stuff we need — it'll download the image for us, it'll provision our node and start our VM, and if we define networks, it'll make sure they're there. I don't know why it's waiting for it to shut down, but you get the idea. It takes a minute or so. If you're familiar with Ansible, this is all pretty standard. The big advantage is the abstraction: with just that short chunk of code you get this provisioned. And if I do sudo virsh list --all, you'll see the demo node is there, running.

Yes — yeah, I think if you change the URI you can use a remote libvirt instance.

And then of course I won't make you sit through watching this, but teardown is pretty easy as well. So let's move on to something cooler. This was supposed to be my flashy demo: I was going to provision a compute node on OpenStack, deploy OpenShift to that node, then deploy an application to OpenShift and show that it's running. But like I said, I can't access the Red Hat VPN right now, so I can't deploy it. But I can step you through all the components to see how it would work. What was that?
Yeah — okay, I'll keep that in mind for the future. Someone else suggested I set up tethering on my phone, but I couldn't get it working. I would have liked to record the demo, but I only just got it working last night, if we're being honest. It is working, though; you'll have to trust me on that.

Part of this is actually in our examples: if you go to our GitHub repository, there's an OpenShift-on-OpenStack example and an OpenShift-on-Beaker example you can take a look at. So this PinFile actually looks a lot simpler, because we separated out the topology and the layout. LinchPin will look for the topology and layout in topologies and layouts directories alongside the PinFile.

So I've got a topology, openshift-on-openstack. We actually split things into two resource groups here, because my linchpin.conf has asynchronous provisioning set to true, and I wanted to make sure the os_server instance is provisioned last. The first resource group provisions a network and a security group — I'm not going to go too much into the details, but these are all the rules OpenShift needs in order to run correctly — and the second one creates a pretty large compute instance for OpenShift to run on top of. Of course I pass it the cloud config, so I can SSH in and LinchPin can run those hooks.

And this credentials section at the bottom, which you didn't see with libvirt, allows you to pass a credentials file. In the case of OpenStack, it can also read environment variables; that's typically what I do there. But something like AWS doesn't allow you to do that.
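As a sketch, a credentials section on an OpenStack resource group might look something like this; the filename and profile values are made up for illustration:

```yaml
# Illustrative credentials section for an OpenStack resource group.
# "filename" names a credentials file in the workspace; "profile"
# selects a section within it. Both values here are hypothetical.
credentials:
  filename: clouds.yml
  profile: default
```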
So you can pass a credentials file there.

Moving on to the layouts — we've seen something like this before. The only thing that's really different here is this vars section in the host groups. This sets variables on the hosts themselves, and OpenShift needs some of these: things like openshift_public_hostname allow OpenShift to set up routing, ansible_ssh_user is so that Ansible knows how to log in, and openshift_release is so that OpenShift knows which release to set up.

If you notice, there's a Jinja template here on line 26 — you probably can't read it when I do that, but there's a Jinja template here. So LinchPin does support templating. In fact, when you're pulling variables into the layout's vars section from the data that AWS and OpenStack send back, you can access that via Jinja templates. And you can also — can I get to that later? I want to stay on track here.

Now I've lost my train of thought. Oh yeah — you can also send template data to LinchPin. If I go to my topology here — I'm kind of jumping around, sorry — instead of doing m1.large, I could put a flavor variable here, and when I ran linchpin up I could pass in template data setting flavor to m1.large, and that would have the same effect. Oh — sorry, it should be linchpin --template-data up. This won't run correctly because of the VPN, but you get the idea.

But lastly, I wanted to focus on the hooks here. We have four hooks here, once again mixing things like the context and all that. The first hook installs an RPM repository.
That's just the OpenShift Origin RPMs, through CentOS. For the second playbook, we actually clone the openshift-ansible repository from GitHub and use those Ansible playbooks as hooks. So if you have existing infrastructure that you already use after provisioning to do bootstrapping, you can plug that right into LinchPin. Instead of using a hook to pull it down every single time we run it, we had a Makefile we used to pull it, and I did that before the demo. Within the openshift-ansible repository, we just set up some prerequisites and deploy the cluster — let the experts at deploying OpenShift do that for us. And the last thing we do, on the local machine — context is false — is log in to OpenShift so that you can run oc commands; that's a Python script I wrote myself.

And then finally we had a topology for Rocket.Chat, for deploying that on OpenShift. If you go to the Rocket.Chat topology, the resource definitions, as you can see, are exactly what you'd see in something like an s2i image. It's just basic YAML — a lot of it, there's like 600 lines here — but it's all pretty standard, and it's the equivalent of running oc apply and oc new-app on these resources. I'm sorry you couldn't see it running, but I hope you get the idea; you can see the power that LinchPin provides.

Moving back, there's one more thing I want to talk about, and that's the API. LinchPin can be used programmatically. The API you see here is our older API, which we're currently phasing out. As you can see, there's a lot of bootstrapping you do yourself: you set up your workspace, you validate, and you can run linchpin up with provisioner.do_action.
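From that description, using the older API might look roughly like the following pseudocode; apart from do_action, which the slide mentions, the names here are hypothetical:

```python
# Pseudocode sketch of the older programmatic flow described above.
# Everything except do_action() is a hypothetical name.
provisioner = Provisioner(workspace="/path/to/workspace")  # set up the workspace yourself
provisioner.validate()                                     # validate the PinFile/topology
results = provisioner.do_action(action="up")               # the equivalent of `linchpin up`
```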
You can also run linchpin destroy with that, and then, if you'd like, you can get the run data from the RunDB after provisioning and generate a layout from it.

The planned API, which we currently have a beta version of, is quite a bit simpler. By default you can use the workspace in the same way, where you provide a file path, but here you just run workspace.up and it'll provision all of that for you. You can also generate PinFiles as a dict if you'd like. We actually work with a tool called Carbon, an integration testing tool that one of the QE teams is working on, and they generate PinFiles programmatically for customer scenarios. They can do that as a dict instead of having to create a file on disk every single time they want to generate a new PinFile.

So now, do you have any questions? Actually — sorry, we want to get you on the mic.

So, having had to do this for years and years, it's nice to see how you ended up here, and I really like the elegance of what you did. I'm just trying to figure out how I would use this on a day-to-day basis. When you ran the linchpin command, it seemed to know the context. Is that just based on files inside the present working directory, kind of like a Git command looking for a .git subdirectory or something? Or how does it know which cluster you're working with? You want to be able to work with multiple resources — how does it know which one?
So I'm actually going to go into a slightly different command for a second, called linchpin fetch. linchpin fetch allows you to basically do a git pull of a repository — actually, of a subdirectory within a repository — that contains a LinchPin workspace, to make it easy to share provisioning tools between different people. linchpin init basically wraps linchpin fetch: if you run linchpin init, by default it creates a folder and pulls the dummy workspace examples we have within LinchPin — we have an examples directory in our repository, and it'll pull that. Dummy is just a provider we made up that mocks provisioning, so we can test the other features we have within LinchPin. But if you provide another provider name, or another example name that we have, it'll pull that workspace instead of the dummy one.

Does it do git locally when I do this? Is linchpin init creating a local git repo?

It's not creating a git repo, no.

Okay, so if I want to do GitOps and all that, I just do that externally? It's just a two-step process?

Yes — because you might want to have a LinchPin workspace within an existing repository.

Okay. And you showed libvirt and OpenStack, and you mentioned AWS. What does LinchPin support as far as virtualization technologies?

For virtualization, I believe we support libvirt, oVirt, and VMware right now.

VMware meaning vSphere, or ESX directly?

I will have to get back to you. Okay? Thanks.

So, with the hooks — maybe I missed it, but where you have the hooks and you have the playbooks specified, how does it know where to look for those YAML files? Are those local in the workspace, or are they pulled down, like the openshift-ansible one you mentioned?

Yes, so they're local in the workspace. I did have the tree here, but I didn't mention it. So, within the workspace:
There's a hooks directory, and LinchPin will search for the hooks there at a path of hooks/&lt;type&gt;/&lt;name&gt;. In this case we have Ansible hooks, so hooks/ansible/create_fip_pool is here; in the other example, the oc login hook would be searched for at hooks/python/oc_login.

Kind of a big-picture question, but what exactly is provisioning, and why would you use LinchPin with software like OpenShift or Ansible?

So, provisioning — how do I define provisioning? Provisioning is, I guess, creating resources to use for any sort of computation or networking. In the case of virtualization, you're kind of reserving resources, but you're also defining resources so that they can be managed by something like OpenStack — or, actually, I'll use OpenShift, I'm more familiar with it. OpenShift can, for example, keep track of pods: if you provision pods to OpenShift, OpenShift can make sure they're running, and if they go down it can reprovision them.

So, to the advantage: traditionally, way back when, provisioning was largely done by hand. If you wanted a VM, you had to go run a command line or run your own script to create that VM. Then tools like Terraform and Ansible started to be built to automate that, but they're largely procedural: you tell them the exact steps you want to take, and it's hard to parameterize them, it's hard to modify them, and it's hard to create them programmatically — it's hard to get a computer to figure out how to get from point A to point B. So the advantage of something like LinchPin is that it's very focused on one thing. Ansible can do a lot more than just provisioning; LinchPin is very focused on one thing.
So it makes it very streamlined to provision things, and it makes it easy to do programmatically — because a piece of software only has to figure out what it wants — and it's easier to read and modify.

Hi — so, this looks awesome, first of all, so thank you. Would I be wrong to say that it feels and sounds very similar to what Terraform is doing? And is this intentionally being created as an alternative to that, so that we have an Ansible-native version?

You're right, it does seem very similar to what Terraform is doing. And you're right — I think Terraform is also declarative. I don't know what the trade-offs are between Terraform and Ansible, or between Terraform and LinchPin; I'm just not that familiar with it. So I'm sorry, I can't really answer that, but I can get back to you afterwards if you give me your contact info.

Yeah, I've got a ton more questions, so I'll follow up with you.

Yeah, absolutely.

Hey, thanks Ryan, that was a great talk. One of the things that I'm really interested in here is the ability — I'm going to butcher the LinchPin jargon — to pass variables from one stage to the next, or one task to the next; you were talking about that. And I'm guessing this can wrap the Ansible k8s module pretty easily to deploy things into Kubernetes or whatever. Has anybody given thought to doing a LinchPin operator or something like that, in the same vein as the Ansible operator?

Yeah, so we've talked about it.
I mean, I think the reason we haven't done it yet is just that there's more demand from customers for other things, but yeah, it's definitely something that's come up. If you have a LinchPin operator — I believe operators are pods, so they're running all the time — that would sort of require a LinchPin process that stays running, whereas right now LinchPin is kind of a one-off deployment. So we'd have to look into whether we'd be building a pod that just triggers LinchPin when it's run, or whether there'd be a LinchPin daemon, excuse me, that actively stores data and manages these resources.

All right, any other questions?

So you said that LinchPin is aligned with Ansible, as a wrapper. How is the versioning? When you put out new versions of LinchPin, how do they align with the versions of Ansible?

It's not — yeah, so right now we don't really work with the Ansible team much, so we don't align the versions. LinchPin is compatible with multiple different versions of Ansible. We try to be backwards compatible with the current version of Ansible as well as anything older; I think we're technically backwards compatible back to about 2.4 right now.

Is that everything? Any other questions?