Welcome to Salting things up in the DevOps world. I'm here with Juan Manuel Santos, a salty guy. Give him some claps.

Right. Thank you all for coming. For those of you who were here last year, I gave a similar talk; this one is going to be a little more in-depth, so let's get started. Okay, let's get the boring stuff out of the way. My name is Juan Manuel Santos. I work as a team leader and engineer at Red Hat. I'm also one of the organizers of sysarmy, which is the Argentinian system administrators' community, and of Nerdearla, a local co-working slash tech conference event that we've been doing every year in Buenos Aires since 2014. I've been using Salt for a couple of years now, mainly with no regrets, or with all regrets, whichever you want to choose.

Let me get this disclaimer out of the way, too. I am only a humble user of Salt. I have tinkered a bit with the code, I have submitted an ugly patch, but not much more. I only had three days to prepare this, so who doesn't like pressure, right? My thanks go to the EuroPython team for managing to squeeze this talk in.

So, why Salt? As you may or may not know, Salt is a configuration management system. In case you don't know what that is: think Puppet, Chef, Ansible, but only better. And why do I say better? Because it's written in Python, and it leverages YAML and Jinja. Now, I know some people in the room might not like YAML, but you can also use JSON if you want. It is relatively easy to understand, and I said relatively because it has some complex parts; but what it lacks in simplicity of reading and understanding, it makes up for in being extremely powerful and giving you a huge amount of control over what you can do with it. Some of this we'll be seeing in the next few minutes. One more detail that frequently gets lost in translation: Salt can work without an agent, in case you don't have root access or you're not allowed to run the agent on your machines.
It does this via SSH, much like Ansible does.

So, previously on EuroPython: as I said, last year I gave a talk. That was mainly an introduction covering the basic mechanics, terms and concepts behind Salt. As a quick recap: Salt has a master-minion architecture, where the master is the one that gives out the orders, and the minions are ordered to do minion stuff. It does so by defining states and highstates. A state represents the state a system should be in, and the whole collection of states that should be applied to a system is called a highstate. Another cool concept is that of matching, which means targeting your minions to determine which states apply to which minions. And finally, there are the concepts of grains and pillar: grains being information sent from the minions to the master, and pillar being information sent from the master to the minions.

Sadly, and I have to say this, there's still no Python 3 support; Salt is still on Python 2. It's getting there, though. There's a big issue open, and hopefully we'll get there. As usual, it's not because of Salt itself, it's because of Salt's dependencies. But anyway, moving on.

Two more concepts that didn't make it into last year's presentation are the mine and the syndic. The mine essentially gets data from the minions sent to the master on a regular interval. Now, even though this is done on a regular interval, it is not useful for metrics, because only the most recent data collected is kept. Another thing that might confuse you is that all the data is made available to all the minions, so when you query it, you might get the same data from all the minions at the same time, which can be quite confusing. In fact, you might be wondering: isn't this what grains are supposed to do? Get data from the minions into the master? Kind of. The thing is, mine data is updated more often, whereas grains are mainly static.
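As a sketch of that, a mine function plus its refresh interval can be set in pillar and in the minion config; `network.ip_addrs` and `mine_interval` are real Salt names, while the targeting and card name here are illustrative:

```yaml
# Pillar assigned to the 'webserver' minions: which data the mine collects
mine_functions:
  network.ip_addrs:      # execution module function; returns the IP addresses
    - eth0               # restrict it to the first network card

# Minion config: refresh the mine data every 5 minutes
mine_interval: 5
```

On the consuming side, a Jinja template can then iterate over `salt['mine.get']('webserver*', 'network.ip_addrs')` to populate, say, a proxy's server list.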
Grains are only updated if you purposefully update them, which is not something you would usually do. Also, if minions need data from other, slower minions, the mine acts as a kind of cache. So there's that, too.

Okay. There are two ways of defining which mine functions you want to collect from the minions. In the normal operation of salt, you would do so either in the pillar or in the minion's configuration file. In the special case of not using the agent, as I mentioned before, you have three ways: since you don't have the minion's configuration file, you can use the roster, the pillar, or the master configuration file.

And so, a quick example of what the salt mine does, so that you don't get too confused; though I promise you, you'll get confused. It looks like this. Let's say we first target all the minions in our web servers group. We are going to be applying a mine function to gather the IP addresses of the first network card every five minutes. This we can later use, for example, in an HAProxy configuration to populate the server list. Now, I know you might be getting baffled by all the Jinja here; try not to think about it. The important thing to understand is that should we add a new host to the web server group, within five minutes we can have its IP address up in the HAProxy configuration file. This is all thanks to the mine, whose update interval we can configure.

Now, before we continue: since we already mentioned that salt has a master-minion architecture, there's an inherent topology to it, so let's talk a bit about that. The most common one would be one-to-many, meaning one master, many minions. But of course, this is boring. This might not scale. This also kills a cat during a lunar eclipse. So what are the alternatives? How much can we toy around with this? Could we have, say, more masters? Could we have a multi-master topology?
And I don't know if there are any information security people in the room, but if there are, you're going to love this question: could we implement segregation? Meaning, could we segregate parts of the infrastructure, split it so they don't communicate with each other, but still have a functioning salt infrastructure? And, coincidentally wearing the right hat apparel now, let's answer those questions with another question: what if we try more power?

So what's all this? This is something called the syndic node. The syndic node is an intermediate node type which acts as a pass-through. Its aim is to control a given set of lower-level minions, which means that on the syndic node we're going to be running two daemons: the syndic and the master. Optionally, you can run a minion too. The way it works is something like this. The main master, which we're now going to call the master of masters (you're going to see why, even though it's already a funny name), sends an order to its minions and to the syndic node. The syndic node relays those orders to the local master running on the same machine, and that master then relays them to the lower minions. So now our syndic node is actually called the master of minions. This, of course, works the other way around: when some of the lower-level minions reply to any orders, the replies go first to the lower-level master, then to the syndic, and then up to the main master.

So if we have the master, now our master of masters, it can have as many minions as we like connected to it; then we can have a syndic node, a syndic master-of-minions node, which can also have any number of minions connected to it. But the good thing is that we can even nest levels of syndics, one under the other, and have as many minions as we like. So the topology here is kind of up to you.
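Concretely, a syndic setup might be wired like this; the directives are the real ones, the hostnames are made up:

```yaml
# /etc/salt/master on the syndic node (runs both salt-master and salt-syndic)
syndic_master: master-of-masters.example.com   # points at the main master
id: syndic01                                   # the syndic takes its ID from here

# /etc/salt/master on the main master
order_masters: True                            # it is now in control of syndics

# /etc/salt/minion on the lower-level minions
master: syndic01.example.com                   # minions point at the syndic node
```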
The only places where you're going to have to ensure connectivity are where the lines are. So how do we actually do this? The configuration is quite simple. On the syndic node, we set the syndic_master directive; this should point to our main master. We also have to define an ID here, because the syndic node takes its ID from here. Then on the master node, of course, we have to tell it that it is now ordering other masters around, that it is now in control of syndic nodes. As for the minions, the lower-level minions should have the IP address of the syndic node in their configuration file. Just a few more steps: we run the syndic daemon, of course, and on the main master we have to accept the key, because essentially there's a new key that gets generated.

So by now you might be getting the idea behind this talk: to make you think of the possibilities. You could have different syndics per environment: development, QA, production. You could also have different syndics to comply with whatever security standards you have, or want to come up with. Just to mention it: we can even do multi-master with this. We can have syndics and many main masters. We will not cover it here, but just know that it is possible. So that's it for the mine and the syndic node.

Now we're on to heavier metal stuff. Our first stop is going to be the event system. So what do you think an event system does? Of course, it keeps track of events. But that's not the only thing it does. The important thing is that events can be acted upon, and this system is also the base of the rest of the systems that we're going to see in this talk. In essence, this is mainly a ZeroMQ pub/sub interface. The important things to understand are that every event has a tag, which allows for quickly filtering and identifying an event, and that it also carries an amount of arbitrary data which tells us information about the event.
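As a sketch, watching the bus and firing a custom event might look like this; the commands are real Salt CLI, the tag and data are made up:

```shell
# On the master: watch everything that goes by on the event bus
salt-run state.event pretty=True

# From a minion (or from the master via salt-call): fire a custom event.
# First argument is the tag, second is the data
salt-call event.send 'myco/mytag' '{"foo": "bar"}'
```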
So with just a simple command, run on the master, we can already start watching for events, watching what's going on. We can also use another command to send a random event that we're just making up. You can see that this would be the tag, and that would be the data of the event. The data is mainly a JSON string; in Python it would be a dictionary because, in fact, you can also send events from pure Python code. And if we did things right, after sending the event it should show up if we were watching the event bus attentively: we can see that there's our tag and there's our data.

Okay. Now, another interesting bit that I didn't get to make a distinction about last year. There are two kinds of modules: execution modules and runner modules. Execution modules are the main kind you see in salt; they are run on the minions. Runner modules, on the other hand, are run on the master, and they can be either synchronous or asynchronous. They are added via the runner_dirs configuration in the master file. And here's the best part: what do we put inside that directory? Pure Python code. Runner modules are essentially Python code. And an addendum, since we just talked about events: any print statements we put inside our runner modules will be converted to events. So if we do this inside a runner module, we will get something like this; see that the tag is not quite nice, but there's the data. And even though you can write runner modules, and you're certainly welcome to do so, it is tempting, but there's often no need: there's already a full list of runner modules available in the salt documentation, so feel free to check those out.

Now, wouldn't it be nice to live in a place like that? Sadly, we're not talking about those kinds of beacons. But kind of.
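Before we get to beacons, here's a minimal sketch of what such a runner module might look like: plain Python dropped into one of the runner_dirs directories. The file name and function are made up for this example:

```python
# /srv/runners/hello.py - a runner module is nothing more than plain Python.
# Every public function here becomes callable as `salt-run hello.<function>`.

def ping(name="world"):
    """Return (and print) a greeting.

    When executed on the master via salt-run, the print output below
    is converted into an event on the bus, as described above.
    """
    message = "hello, {0}".format(name)
    print(message)  # under salt, this line ends up as event data
    return message
```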
Salt beacons are like those concrete towers with the light bulb on top; they're also a kind of signal. Or something like that. They use the event system to monitor things that are happening outside of salt, and when something happens to those things, a notification is sent, which is actually an event. Beacons are configured via the minion's configuration file, because at this point we're actually interested in the minions.

Any system administrators in the room? Anyone? Okay. Does any of this ring a bell? Something about notifications? inotify, maybe? Yeah, inotify, the file system monitoring API for tracking changes on files and directories. It kind of looks like this. In fact, there is an inotify beacon, which you can use to monitor changes to a certain file at a given interval, and there you have it: any time the resolv.conf file changes, we now get an event. There are also other types of beacons, for example for processes: we can monitor whether or not a process with a name we specify is running. If it's not running and it starts, we get an event; if it's running and it stops, we get an event. Kind of nice, right? There are actually several beacon types: memory, disk, system load, network settings, and more. There are really a lot, and they're growing. You can also write your own, of course. I'm just going to leave you the documentation here so you can check it out later.

Now, this is where things are going to get a little more interesting. Yeah, like that. It would be nice if the reactor were actually like this; believe me, it's actually close. So, what is the salt reactor? As its name implies, the main job of the salt reactor is to react. Not react in a JavaScript way, thankfully; react in a salt way, or a salty way.
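Backing up to those two beacons for a second: they might be configured roughly like this in the minion config. The exact schema varies between salt versions, and the watched file and process name here are just illustrative:

```yaml
# /etc/salt/minion.d/beacons.conf
beacons:
  inotify:                 # fire an event whenever this file changes
    - files:
        /etc/resolv.conf:
          mask:
            - modify
  ps:                      # fire events when a watched process starts or stops
    - processes:
        sshd: running
```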
In other words, the reactor is the component responsible for triggering actions in response to events. So now you see why we looked at the event bus earlier; of course, we need the event system first. But what is an action? Since we're in salt, an action is essentially a state that we define. What actually happens goes something like this: something happens out there and an event is fired, maybe because a beacon was monitoring something, or something else; the event is picked up by the reactor, and the reactor translates that event into an action, or actually a state.

Reactors are defined in the master's configuration file; this is a component of the salt master daemon. As we said, the reactor makes these associations. And if you remember what an event was, you'll remember it had a tag: the association is made via the tag. So we put a tag in the configuration file, and we define which states are going to cover that action. The syntax here is quite clear. Do note that there's an asterisk there: we can use wildcards, because some events are fired by more than just one minion and have the minion ID in the tag. For example, this first one here is the event of a minion starting up; so if you want to match all the minions starting up, you just put the wildcard in the right place.

Now, this whole slide is actually the main reason I'm here. It's the one thing I spent the most time on while working with salt, so I ask you to please bear with me. There are a few caveats. The states that live inside the reactor use a state system that is actually rather limited, and you can easily miss this while reading the documentation and trying out your reactor states. Things that would normally work in the rest of salt, in the rest of your states, might not work here.
You will find that things are missing. For starters, forget about grains and pillar: they are not available in the reactor, and if you try to use them, you'll get unexpected results. Also, reactor states are processed sequentially; they're first rendered, and the data is then sent to a worker pool. But since they're processed sequentially first, you're going to want to make your reactor states as simple, as small, and as fast as possible.

So, after long hours of fighting with the reactor and tearing out the little hair I have left, this is the short version: do not handle logic in your reactor states. This might sound confusing, because what's the point then? Let me explain in a bit more detail. You should use your reactor states for matching: deciding which states apply to which minions based on an event, and then just calling the normal salt states you have lying around. Do not try to add logic here; you're going to spend a very long time on it, and you won't be happy about it. Now, I don't know if this is actually true, it's just what it looks like from the outside, but there appears to be a disconnect, because we're talking about two different engines, even if they're under the same daemon. I like to think it's because of Python namespaces, but I could be wrong. So, too long; didn't read: do not handle logic there.

As we said, with the reactor we are associating events to states. So if we have our custom event and our custom reactor state file, the idea is to keep it as simple as this. And if you really have to do complex things and ensure that many, many things are done when a given event is fired, just put those inside a long-running and complex normal state. Once the reactor parses the reactor state and sends it to the worker pool, that state will run in the main salt namespace, so to speak.

So what can we use a reactor for? One good example is auto-accepting the keys of all the minions in our environment.
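To make the keep-it-thin advice concrete first, a reactor mapping and a deliberately dumb reactor state might look like this; the tag, paths and state name are made up, while the `reactor` directive and `local.state.apply` call are the real mechanism:

```yaml
# /etc/salt/master - associate an event tag with reactor state files
reactor:
  - 'myco/deploy/done':
      - /srv/reactor/deploy_done.sls

# /srv/reactor/deploy_done.sls - no logic here, just hand off:
# apply a normal, pre-existing state to the minion that fired the event
run_smoke_tests:
  local.state.apply:
    - tgt: {{ data['id'] }}
    - arg:
      - smoke_tests
```

Inside the reactor template, the event's payload is available as `data`; everything complicated lives in the regular `smoke_tests` state.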
It's quite a hassle: every time you start a minion, you have to go to the master to accept its key, and so on and so forth. As you might have guessed, whenever a minion tries to authenticate, an event is fired; and whenever a minion finishes starting up, there's another event. For the purposes of this example, we are going to assume that all minions whose names start with 'nice' are going to have their keys auto-accepted. First of all, in the state that deals with authentication, we'll want to remove the keys coming from minions that have failed to authenticate. The next step is to trigger a minion restart; and I know this is ugly, this is just for the purposes of the example. Every time I read SSH in the middle of another configuration management system I creep out a bit, but it's just an example. What we want is to have the minion re-authenticate and generate a new key, so to speak. Reaching the end of our big state: if we are in pending authentication status and the name starts with 'nice', we accept the key. And as for the last state, when the minion finishes starting up, there's a good practice you can implement: whenever a minion finishes starting up, we apply a highstate to that minion. This is a nice way to ensure that all your minions are consistent, at least when starting up. Note that we've been hard-coding the 'nice' prefix and maybe some other things around it; that's because, as we said before, we don't have grains, we don't have pillar, we don't have a safe way to store information and make it available to the reactor. So keep that in mind when you use the reactor.

And our last component today is going to be the API. Of course, salt has a REST API. The main idea behind it is to send commands to our running master. The API supports both encryption and authentication.
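Going back to the key auto-accept example for a moment, the accept and startup parts might be sketched as reactor states like this, assuming the reactor maps the salt/auth and salt/minion/*/start tags to these files; the 'nice' prefix is from the example above:

```yaml
# /srv/reactor/auth-pending.sls - rendered with the event payload in `data`.
# If a minion whose ID starts with 'nice' is pending, accept its key.
{% if data.get('act') == 'pending' and data['id'].startswith('nice') %}
accept_nice_minion:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}

# /srv/reactor/start-highstate.sls - when a minion finishes starting up,
# apply a highstate to it so it comes up consistent
highstate_on_start:
  local.state.apply:
    - tgt: {{ data['id'] }}
```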
Authentication is something you might not see mentioned very often: salt has an external authentication system that allows authenticating against LDAP or against PAM, and it also has access control built in. It's really outside the scope of this talk, it's a very big thing to talk about, but it's worth mentioning that it exists. Everything managed by the API is controlled by another daemon, the salt-api daemon.

Since it's a REST API, we can of course use anything that can make HTTP requests to get information from it or send information to it. In this very short example, we are making a request to a certain URI for minions, and if we pass the correct minion ID, we start getting data about that minion. In this case, for the sake of simplicity, we're not using authentication.

There are several API endpoints already bundled with salt-api; they're pretty much self-explanatory, but let me draw your attention to one in particular: /hook. This is a special endpoint, a generic webhook entry point, and its whole reason for existing is that any POST request made to it will generate an event on the master side, on the event bus; the POST data we send becomes the data of the generated event. Another important thing: because this is a special endpoint, it's the only one where salt allows you to explicitly disable authentication. And disabling authentication does not mean you can do whatever you like; you're expected to implement some kind of security. Why would you disable authentication? Well, I like to think of apps that can barely perform an HTTP request, that can barely understand a URL, so they can only do a request with a special hard-coded token that you specify. That's why we have that there.
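As a sketch, assuming salt-api is listening locally on port 8000 with the webhook path left open, poking the /hook endpoint might look like this (the path suffix, payload and token are made up):

```shell
# POST to the generic webhook endpoint; the path suffix becomes part of the
# event tag (salt/netapi/hook/myapp/deploy) and the body becomes its data
curl -sSk https://localhost:8000/hook/myapp/deploy \
     -H 'Content-Type: application/json' \
     -d '{"app": "myapp", "version": "1.2.3", "token": "s3cr3t"}'
```

A reactor can then match on that tag; the token here is exactly the kind of hard-coded shared secret mentioned above, and checking it is up to you, for example inside the reactor state.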
Now, how about, after all the rush we've just been in, we put it all together and be nice and friendly? I know you might be a bit beat by now; you've seen a lot of information, and you might be a little confused. But I assure you, we can do pretty interesting stuff with everything we saw: the events, the beacons, the reactor and the API.

For a more graphical understanding of how this all connects: we first have the beacons and the API. The interesting point about these two is that they relate to elements outside salt; the beacons monitor things outside salt, and the API is an API, so anybody can make a request to it. Both of them generate events in our event system, on our event bus. Those events can later be picked up by the reactor, given what we define inside the reactor, and then translated into salt states.

Now, with the great responsibility of having to manage your entire DevOps workflow and infrastructure comes great power. The reordering of the phrase here is deliberate, because if you configure salt properly, you're going to have full control of everything in your infrastructure, in your workflow; everything, and from within salt. As such, you're expected to know what you're doing, and you should always rely on a sensible way of doing things. For example, beware of the security risks: you might be tempted to hand way too much power to salt, and that's a good thing, but beware of somebody trying to do an ugly thing with it.

So, to finish this off, let's take a minute to talk about what you can do with all of this. I'll just name a couple of examples from the top of my head and leave you to think of the rest, because that's what salt is: salt takes kind of a batteries-included approach that gives you the space to create your own solutions, much like Python does, which is why I love salt. So, just to name an example, let's talk about self-healing.
Does anybody know what self-healing is, what it consists of? Anybody heard the term? Okay. In more human words: self-healing is the ability we give our applications or systems to repair themselves on their own whenever something bad has happened, whenever they encounter an adverse situation. That's why it's called self-healing. Now, all this might be just a REST API call away: if, in your application, you can identify that the bad thing that has happened can be corrected by something that can be automated, you can do it with an API call, because salt can take control of that.

Or another example, and I think many of you have encountered this: let's say half your team refuses to use Jenkins, or whatever CI tool you're using. Well, you can leave them be, on whatever they are using, and integrate the rest of the push, build, test, deploy, endless CI cycle with salt; you can manage that with salt too.

Another example: if we're talking about scaling, up, down or sideways, growing or shrinking, you can prepare for it with salt, and you can also trust salt to do some provisioning. We haven't covered it here, but salt also has a salt-cloud daemon to provision cloud instances.

And last but not least, with a good beacon setup you can make sure that your environments are consistent. If you have things that aren't supposed to change, and you suspect that somebody tends to do nasty things, with beacons you can react immediately upon any changes that you deem unwanted.

These are mainly all the examples I could think of in the short time I was given. As I said before, I really do hope you can leverage what you saw here to come up with your own solutions, because I'm sure your problems might be worse than what I just presented here. As for the docs, as with last year, everything is in the official SaltStack documentation; I really encourage you to take a read. If you have any particular questions, there's also the
possibility of bothering the people in the #salt channel on Freenode IRC; I do that a lot. And we're reaching the end, so now we have time for some questions. Feel free to shoot away.

Can you compare salt with Ansible?

I'm going to be honest with you: I haven't used Ansible. I know that maybe it has a more basic approach. What I've been told by people who have tried both is that Ansible lacks some components that salt has, like the reactor, for example. So it goes along those lines.

I was wondering how one could use salt as a deployment tool. Is it feasible to deploy a complete web application with it, or is it just well fit to set up the system, after which you need to revert to a proper deployment tool to deploy your application?

For application deployments, right?

Yeah, web applications: set up a database, put some basic data in it, deploy your Django application, web server, things like that.

Maybe that was covered in the previous talk, which was more basic, but no, that's a very good question, and yes, you can do it. You might have to take a slightly more manual approach in order to tailor it to your environment, but you can certainly do it. And if you're thinking of doing some bare-metal provisioning, you can also do that; not exactly with salt on its own, but salt has Foreman integration. Foreman is provisioning software that was mainly written for Puppet but now has salt integration, so you can do the whole cycle from it.

Hello. I understood that the communication channel is SSH; is that correct?

Salt has a way of working over SSH, but it's not the main way it works.

And what I want to ask is: how do you handle minions running Windows?

That's a very good question, actually. It is possible, but I'm sorry, I've never had to do it.

I have been playing around with that a little, and found a way to insert an SSH daemon into Windows using Cygwin, which has an SSH server; it seems to be working. I'm just curious if there are other options.

As long as you are aware of any
limitations that you might have, it should work; the rest of the system is shared and it's the same.

Thanks for the great talk. I've been using salt for three months, and it's really cool. There's also this other thing, engines, for getting some additional events. Currently I'm looking for a way to make pillars dynamic: to get information from Consul or from etcd during deployment, to get some information from an external key-value storage. Is it possible?

I would have to look that up, I'm not entirely sure. But everything appears to be extensible in salt, so I don't see why not. Maybe it is.

Hi. How do you upgrade salt without SSH? Is there any good approach to do that?

Upgrading salt without SSH; you mean the master? I don't think I understood where you're going with the question. You need to have access to the system, you need access to install the new version, and you also have to restart the agent. That's one thing that is still not handled very well in salt: restarting minions whenever there's an upgrade, because you can't do it from inside the master without losing communication for a bit. So yeah, it's still kind of a tricky spot.

Regarding the question about dynamic pillar: I think it's possible. Salt has a mechanism to get a pillar from external systems; you can implement a Python module for that. There is a plugin called reclass which uses that to make the pillar more usable, in fact.

And my question is: how do you test your states? We have had several breakages in production due to human error.

I know. I don't have such a system; I use salt for personal use, so I don't have the luxury of working with a full salt environment. But I know where you're going with it. It would be nice to have a development or QA environment to try things out, because yeah, once you've made a change to a state and salt doesn't like it, it will blow up. It's kind of tricky: you have to keep looking at the logs, be careful about what you changed; it's bound to be the last change you made that causes a problem. If you
see a problem, that's how it works, I'm afraid.

How do you handle provisioning new servers, and how do you handle your inventory of servers?

Well, let's answer the first question in two parts. If we're talking about bare-metal provisioning, you have to use something like Foreman that allows you to boot a system and then apply salt states to it. Salt is like Puppet in that way, and not like Ansible: it doesn't have the ability to provision a system from bare metal, from the ground up. Once the system is installed and has a minion running, you can do whatever you want with it. As for the inventory: from the perspective of the master, all the master sees are minions, so it is up to you to group them, using node groups or grains or whatever you deem necessary. You would basically be setting your categories on your own, building node groups, setting grains on certain minions to identify them from the perspective of the master; but essentially there is no built-in distinction. In fact, when we talked about the syndic node: the main master, the master of masters, will see all minions connected to it, even those behind lower-level syndic nodes.

This is in response to the question about testing salt states. What we do is use Vagrant on our local machines with a masterless minion setup, spin up a number of VMs, and actually test the states, at least to some minimal degree, so we can catch human error. We have the same problem: we deploy across hundreds of machines simultaneously, and one error can really mess up your day. I've tried it with Vagrant locally and it works pretty well, because you can spin up different kinds of VMs; we use FreeBSD or Ubuntu or CentOS, and you can simulate a lot of those environments easily.

Interesting, thank you.

Hi, thanks for the talk.

Thank you.

I'm seeing that most of the questions we're asking are about things we think salt can do that can be done perfectly well using Ansible, like initial system configuration, or maybe
someone asked about service management. I know you already said you haven't used Ansible, but have you heard about someone using Ansible and salt together?

No. Pretty much, when somebody chooses a configuration management system, they like to stick to it; it has to do with the learning curve and all that. So it would be harder. From the very few things I've seen of Ansible, it's quite different from salt; even though both are written in Python, even though both use YAML, they're quite different. So every organization tends to choose one technology.

That's interesting, because Ansible could be used for the bare-metal provisioning parts, and salt maybe for the rest, or salt for the reactors. You could certainly mix those two: use one to bring up a fresh machine, install the salt master on it and configure it, and then apply a highstate.

Are we talking about the Fabric Python module?

Yes, right, it builds on Paramiko; it uses SSH.

Yes, right: it uses SSH to do some basic provisioning, to start a fresh machine, put a salt master on it, and then use the salt master and the full power of salt.

Yeah, you can also do that. Do we have time for one more question? Or not. Thank you, guys.