Thank you, everybody, for coming, and thank you so much to HasGeek and to Sequoia and to everybody who has sponsored my trip here, for the kind invitation, and for the wonderful generosity that everybody has shown me since I've been here.

My name is Mike Place. I am on the core engineering team at SaltStack. I am, obviously, not from here; I am from Salt Lake City, Utah. My Twitter handle is cachedout; I'm easy to find. What I want to give you today is, hopefully, a very gentle introduction to SaltStack. I don't know how many people in here have any experience with SaltStack. If you've ever heard of SaltStack, could you raise your hand? That's fantastic. Who has used SaltStack? Oh, that's really wonderful, I'm glad to hear it. Who loves SaltStack? Hopefully that's the same number as the last one.

So there are a couple of things that we want to talk about today: obviously introductions, where SaltStack came from, why you need it, a little bit of remote execution 101, and then some basics. To be completely honest with you, when I proposed to my bosses that I come out here, they said, okay, fine, and they turned me over to the marketing department, who set me up with a set of slides. They had all of these very complex relationships on them, and they just looked like business slides, and I threw them all away, because I don't like them. I'm an engineer.
I'm not here to sell you stuff; I'm here to talk to you about actual engineering problems. So the point being: after we run out of these few slides, we're going to go straight to the terminal, and we're going to do live demos and have some fun. Hopefully that'll be a good time.

SaltStack right now, as I've mentioned, is an open-source project. It is also backed by a commercial company. We now have about 45 people in our office and a number of people around the world. We are hiring, so I would be very interested to talk to people who are interested in that sort of thing.

I always start my talks with this slide. Does anybody know who this is? Dennis Ritchie. Yeah, that's right. Does anybody know who the other one is? That one's actually a little harder. It's Ken Thompson. I couldn't tell if somebody said that; if you did, well done. This is Dennis and Ken at AT&T Bell Labs, in what's called the Unix room, and this is where much early work on Unix was done. The reason that I bring this slide up is because it's important in the DevOps movement, I think, to remember where we came from. Right now we're in a very interesting space where operations and development are coming together, and DevOps is well known for having a very broad set of tooling. But the reason that I point these guys out is that there was a point in time in the history of computing when we didn't necessarily have the idea of tooling at all. In fact, we had a single machine that sat in the room, and there was no difference between development and operations. If you were on the team, you wrote punch cards, and then you walked across the room and you fed them into the computer. Of course, right now things have become considerably more complex. As we've moved away from single systems and away from physical computing into virtualization, and now recently into containerization, we've seen problems that Dennis and
Ken potentially never anticipated: the idea that even very small teams could be administering not just one big machine in a room, but potentially thousands, tens of thousands, or hundreds of thousands. So very recently we've been introduced to a whole set of problems, and SaltStack is about solving those problems, hopefully in new and interesting ways.

So I want to talk a little bit about where Salt came from before we get into the nitty-gritty. Thomas Hatch, who is my friend and my boss, started SaltStack in February of 2011. He got his first contributor in March 2011, and then the holy terror set in, which is that third spot there: Tom got an email asking for a little help, and it had a very interesting domain on the end. It was linkedin.com. And Tom said, that's odd. He went back and forth, and it turned out that Tom, writing SaltStack in his basement, had written something that was at that point deployed in and powering large parts of LinkedIn's infrastructure, which terrified him. He didn't want to come out of his basement at all. It still terrifies him a little bit. But this was well before we had a company. Then in 2012, of course, we started to hire a couple of developers, we won some awards, which is all fine and good, and from there the project really began to explode.

To give you an idea of the level to which SaltStack is a very active, healthy open-source project: in any given week, SaltStack merges somewhere between one hundred and fifty and three hundred pull requests, we churn anywhere between fifty and two hundred and fifty thousand lines of code, and we have about a thousand unique contributors that we deal with, which is really wonderful. It just goes to show that if you make a project that solves real problems, people will show up. So why do I need it?
This is one of those slides delivered to me by the marketing department that I don't actually really know what it means, to be completely honest. I just want to get to the code, right? SaltStack provides predictive infrastructure orchestration and configuration management. The thing that I want to say about that is that most people know SaltStack as a configuration management platform, which is all fine and good. SaltStack does have a configuration management component, but I think it's important to go out and spread the word that that's not really how we view ourselves. Configuration management is an old solution to an even older problem. Mark Burgess started work on CFEngine in the early nineties, 1993 to be exact. So it's inaccurate to say that configuration management is something that has suddenly come to us as a result of DevOps or virtualization or any of this; that's just not true. Nor, certainly, is the need for automation a new or sudden need. SaltStack instead takes a much broader approach to configuration management, and I want to talk a little bit about what our thinking is before we get into some demonstrations of it.

As you begin to scale out dramatically, you begin to have a very interesting set of problems. One of those problems is that as application stack silos begin to emerge, you need some sort of messaging bus that can communicate between those silos and allow us, as DevOps practitioners, or as I just call them, systems administrators, because that's what they are and that's what we should call them, dammit, to connect these disparate systems together. We need some sort of universal messaging bus that we can use to build complex systems that have a stateful understanding of all of their components.
So SaltStack is built, first and foremost, on top of a high-speed messaging bus. Initially we used ZeroMQ; probably many people are familiar with it. ZeroMQ is a standalone messaging library that provides pub-sub, push-pull, and all of these typical messaging patterns. For people who use SaltStack, you may be pleased to know that we're now moving in other directions as well: in addition to ZeroMQ, we now have in development a pure TCP transport, which eliminates the need for ZeroMQ.

Anyway, on top of this high-speed messaging bus, if you have a system that you can use to connect all of your disparate components together, you can begin to solve problems like remote execution or configuration management. What we call, or what we hope is, the future of this idea is something that we call event-driven infrastructure. When you hear us talk about event-driven infrastructure, what we're talking about is this idea that if you have a messaging bus that connects all of your systems together, that your entire application stack can speak to, and services that can ride upon it, such as configuration management, then suddenly you can build truly adaptive, reactive, and reflexive systems, because that stateful understanding can work up and down the stack. Configuration management then becomes something that does the heavy lifting that allows your infrastructure to react and adapt to its own state, or to its own failure. I know that sounds kind of abstract; we'll do some demos that bring it back down to earth.

Salt is extremely flexible. It becomes very challenging, in fact, to talk about, because we see Salt as being much more of a toolkit, or even a distributed operating system, than simply a configuration management platform. Salt at its core is highly pluggable, as we'll see during some of our demonstrations.
We're shipping with Salt right now about 23 different pluggable systems, from remote execution to stateful management to log handling to whatever you like. That pluggability and modularity mean that in addition to just being a software package you can download from GitHub, we have tried to design it as something that is very easy to use as a development tool inside your own workflow, to add additional modules or what have you.

As I've mentioned now, hopefully several times, Salt is extremely scalable. Salt is in many deployments now that are in the tens of thousands, and in fact there are many deployments in which Salt controls many tens of thousands of minions, which is what we call the machines being controlled, from a single master, which is the controlling machine. So our scalability in that direction is very good, and there are several deployments which are now reaching many tens of thousands and probably soon the hundreds of thousands. Security: all of this is backed by a well-tested encryption framework.

And of course, remote execution is where things started, because that's the problem we were trying to solve initially, right? Back in 2011, we had Puppet and we had Chef, and we wanted to be able to do very basic things, for example, run a single command on a thousand or ten thousand machines. Doing stuff like that in Puppet and in Chef, being the declarative configuration management languages that they are, is not as easy as it could have been. Of course, many other people were solving that with other tools at the time; Fabric was, and still is, very popular. But the fact is that most people were still solving this with bash scripts, right?
And that more or less worked fine. But of course, if you're using bash scripts to do remote execution, to run singular commands against many hundreds or thousands of hosts, it becomes slow, and you have to deal with drift, right? What if I have different login credentials, different operating systems? What if I need to do something more complex with the output than just use tee or write to a file or whatever it is? And what if I need to do this many times? So you think to yourself, oh, okay, no problem, I'll build a script. And you do that, and you think, okay, well, this is fine, this is working for the time being. But again, it's really hard to scale that sort of approach, because you need to do error handling and logging and authentication and all of this stuff. I don't have to explain this problem to you; everybody here has dealt with it, right? And then, of course, you deal with even more problems, like how do you properly detect conditionals between disparate environments, for example different OSes? Or very basic orchestration problems, which are hard to handle in a bash environment: I want to do X on a group of servers Y before Z happens, unless A, B, or C. That can be very hard to manage in a strictly bash-and-SSH sort of environment. And of course, keeping external data secure is challenging.

So we've built the SaltStack infrastructure. This is one of those graphs that essentially means nothing; we'll look at all this stuff anyway, so forget about it, all right? So Salt, as I mentioned before, is based on a master-minion model. I don't think I have a... no, I don't, unfortunately... okay, great, thank you. Salt is based on a hub-and-spoke model: the idea that we have a central controlling machine, or machines, called masters,
And then we have multiple minions attached to them, okay? One of the ways in which there is a battle right now in the configuration management space is between the idea of agent-based and agentless systems. You've probably heard this debate. What's unfortunately not well known is that Salt is both. Salt has an agent-based approach, in which you run daemons on particular machines, your minions, which of course gives you a very rapid response time. The memory footprint is very low, and it allows you to have something always running in case you need to do remote execution very quickly, or orchestration, or what have you. But it also has an agentless mode, which we call Salt SSH, which means you don't need any type of daemon running on these minions. You can in fact connect and use all the power of Salt simply over SSH. We'll look at that here in a couple of minutes.

Salt has an open API for third-party cloud and software integration. We actually have our own cloud deployment platform. It's called Salt Cloud, and what it does is allow you to connect to and provision virtual machines on many different commercial cloud providers, or on your own private cloud, be it LXC or OpenStack or whatever it is. It allows you to provision those machines and then turn things over to Salt for command and control.

No more slides, finally. Okay, I hate slides. Let's do some demos, okay? Now, I'm going to run through a couple of things that I just generally think are interesting. If people have questions along the way, or something isn't clear, or you want to see something different, please don't wait until the end. Please feel free to raise your hand, and we can go back and forth.
So, to give you a walkthrough of what we're looking at: this window down here in the lower left is going to be our Salt master, okay? It's very easy to start. Salt, in case I haven't mentioned it before, is written in Python. It's very easy to contribute to, and it runs on a very wide variety of machines, everything from RHEL 5 to AIX to Solaris to Raspberry Pis. Pretty much whatever you like, we will support. I'm going to bring up a single minion, and I'm going to connect it to this machine. Over here is a regular terminal, okay? And what we can do is illustrate that command and control. Actually, let's do this. This is the Salt CLI; it's how we do remote execution. And... there's a demo fail right there. There we go. As you can see, Salt is quite fast.

Now, you may be saying to yourself, well, okay, Mike, this is not the most elegant demo, because all you're doing is basically sending a ping to one machine, which is local to this one, and asking for it to reply. And to that I say, fair point. But what if we did something like this? Actually, not that. Actually, let's do this: dash, dash, master. So what we're going to do here, let's be very brave: let's bring up, oops, 100 minions. Okay, we'll do that. Now, I should point out that right now I'm running the Salt master in open mode, which means that it blindly accepts connection requests. Of course, in a production environment, authentication is initially done with public keys, which are used to communicate a shared AES key, and so normally you authenticate things that way. But for the purposes of the demo, we're not going to do that.

So let's ask ourselves: all right, we were able to communicate with a single minion in 0.19 seconds. Let's look at the response time. 400 minions. Not bad, all right? 0.26, 0.27. So you can see that Salt scales pretty well in terms of adding additional hosts; the load on the master is quite light. Let's now actually look at some code.
One of the things that I like most about Salt is that it takes a very open approach. We don't like magic; we like easy-to-understand interfaces that appeal to people who can actually go and look at the code. So, in terms of remote execution, I mentioned this modular framework. Here are the remote execution modules which Salt ships with out of the box. As you can see, there are quite a lot. There are some .pyc files in there, of course; those are duplicates, ignore those. But just in terms of what you can do with a very basic Salt installation... let's clean out these .pyc files so that we can actually see. So let's see what might be interesting to look at today. Any requests? I couldn't quite hear that. I'm going to need a microphone. Oh, the font size, yeah. Is that better? Oops. One more? Or is that good? A couple more. Okay. This is going to be hard for me to see, but I'll give it my best.

All right, let's just look at a very simple one here. This one is for manipulating a hosts file, okay? Obviously, this is Python. What we do is map remote execution directly to the function signatures in these files. So what that means is, if we want to use the add_host function, we see the documentation there, and it's quite straightforward: add a host to an existing entry; if the entry is not in place, then create it with the given host. So, okay, that's easy enough. Let's do: salt, then we give it a target, then host.add_host. Actually, let's do 192.168.0.100, fubar. So you can see there we've got the function signature, ip and alias, and that maps directly into the command line: the two arguments that we need. I keep forgetting to do that. Oh, sorry. So the way that this works is that it's module name, dot, function name, right?
So it should be really straightforward. Obviously, over here, this file is called host.py, and the function that we're using is add_host. Yep. Okay. There we go. Done. So, whether it's one machine or a thousand machines, remote execution command and control becomes very easy.

Now, the other thing that I want to point out is that, if you know any Python at all, it should be very easy to write your own execution module. Literally, all you have to do... let's write one called rootconf.py. Okay: def say_hello... and actually, let's just do this. Okay. We may need to restart this guy. Okay. And we call this: rootconf.say_hello. Very simple. So, as a part of your continuous deployment, or as a part of your day-to-day system administration needs, instead of encapsulating all of this in bash scripts, or even worse in a wiki, or just in your head somewhere, you can very easily encapsulate your commonly used system administration commands directly into Python functions. Simple, easy, no problem. Any questions about that?

So, yeah. I think what you're asking about is targeting. Okay. As you can see here, the syntax is: salt, then a target, then a function to run, and then any arguments to pass into that function. Because I have all of these other minions running on the same laptop, it doesn't make any sense to have all of the minions write to the same hosts file, which ultimately is the same file, right? Obviously, if you were in a real infrastructure and we had a thousand minions, then, yeah, it would be something that looks like this, okay? Oops, like that. Okay. So that's really nice, and I do want to talk about targeting for a second, because I'm actually running out of time a little bit.
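To make the rootconf.py idea concrete, a minimal custom execution module might look like the sketch below. This is not the exact code typed in the demo; the function body and the `name` parameter are assumptions for illustration. In a typical deployment such a file would live in the master's `_modules` directory and be synced out to minions.

```python
# rootconf.py -- a minimal custom Salt execution module (sketch).
# Salt maps the CLI call `salt '*' rootconf.say_hello` onto the
# say_hello() function in this file; extra CLI arguments are passed
# through as function arguments.
import socket


def say_hello(name="world"):
    """
    Return a greeting from this minion.

    CLI Example (hypothetical):
        salt '*' rootconf.say_hello
        salt '*' rootconf.say_hello name=rootconf
    """
    return "hello, {0}! from {1}".format(name, socket.gethostname())
```

Dropped into `_modules` and synced (for example with `saltutil.sync_modules`), the module becomes callable by its file name and function name, exactly like the built-in host.add_host shown a moment ago.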
Salt supports very complex targeting, and the way that it does that is through a system called grains. Grains are similar to Puppet facts, if you've ever used those. Let's look at what grains might look like. Just like that. So you can see there, here's a bunch of information. Of course, you can write your own custom grains. Interestingly, you can either declare custom grains in a configuration file, i.e., to say this host is part of cluster X or part of Y, or you can write a Python module that will generate and produce a grain: do something like go out and query a database and figure out which cluster I should be in, and so on. You can see here that this particular minion has some roles, so then what I can do is something along the lines of this. Actually... font? Oh, thank you. Is that better? Okay. Let's target everything running Arch Linux. Oops, yeah. So: rootconf.say_hello, okay? And we can do compound targeting: we can combine grains and lots of things, all sorts of things. We unfortunately don't have time right now to go through all of it.

Before I run out of time, though, I do want to look at Salt's state system really quickly and give you a run-through of what that looks like, since configuration management, of course, is something that many people end up using Salt for. So let's go here. Salt has this concept of SLS files; they're state files, and we're using the same paradigm here: the idea that we can write states in Python, with a particular function signature, and then simply write data structures which map to those. I'll give you an example, since I'm not going to do live coding right now, as I'm running out of time, but I'll give you an example of how that might look, right?
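Returning for a moment to the custom grains mentioned above: a grains module is just Python that returns a dictionary of grain names to values. Here is a hedged sketch; the grain names and the hostname-prefix logic are invented for illustration, not taken from the demo:

```python
# cluster.py -- a custom grains module (sketch).
# Placed in the master's _grains directory and synced to minions,
# each top-level function is called at minion startup and must return
# a dict; its keys become grains you can target on.
import socket


def cluster_info():
    """Derive cluster membership from the hostname prefix.

    In a real deployment this might query a database or CMDB
    instead, as mentioned in the talk.
    """
    hostname = socket.gethostname()
    prefix = hostname.split("-")[0]
    return {
        "cluster": prefix,   # e.g. "web" for a host named "web-01"
        "roles": ["demo"],   # a static role declared in code
    }
```

With a grain like that in place, targeting works the same way as the OS grain used in the demo, e.g. `salt -G 'cluster:web' test.ping` (hypothetical names).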
So what we have here are two states. At the very beginning, the first line is an ID declaration; that's completely arbitrary. Then, secondly, we have a state module and a function. This is extremely similar, all right, I got the font this time, to what we saw in execution modules. If we look at this... and I should say that it's not required that you go in and look at the code. Of course, this is all heavily documented on the web, so please don't assume that you have to pop open the code every time you want to do something. But as you can see, this maps quite cleanly onto the function arguments here: the name, sorry, I'll make this bigger, user, minute, hour, and so on.

Of course, when we talk about stateful configuration management, the idea behind it is that it's idempotent. That is, unlike plain remote execution or something from the shell, every time you run the same command, it will check whether the thing that you want to be true is enforced. If it is not enforced, it will enforce it. If you run it again and it's already enforced, obviously it's not going to go out and rewrite a file for you every time. So let me show you what that looks like. As you can see, we've gone through just that first block there.

So these state configurations are quite easy to write. If you don't like YAML, which some people don't, and obviously this is YAML, Salt is completely data agnostic. If you want to use Mako, or if you want to pass the stuff in with pyDSL, so long as you can pass Salt a data structure of your choosing, we're completely fine with it, which I think is really nice. They're not strictly a DSL; they're just a data structure representation, right?
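That "just a data structure" point can be made concrete. The exact state file from the demo wasn't shown in full, so the names below (cron_demo, the job details) are assumptions; the shape, though, is the standard SLS pattern of an arbitrary ID, a `module.function` key, and a list of arguments:

```python
# A small cron state in YAML (SLS) form might look like:
#
#   cron_demo:                      # arbitrary ID declaration
#     cron.present:                 # state module "cron", function "present"
#       - name: /usr/bin/true
#       - user: root
#       - minute: '*/5'
#
# Because Salt is data agnostic, what it actually consumes is just the
# equivalent data structure -- mappings from the colons, lists from the
# dashes:
rendered = {
    "cron_demo": {
        "cron.present": [
            {"name": "/usr/bin/true"},
            {"user": "root"},
            {"minute": "*/5"},
        ]
    }
}

# Any renderer (YAML, Mako, pyDSL, ...) that produces this structure works.
print(rendered["cron_demo"]["cron.present"][0]["name"])  # -> /usr/bin/true
```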
So, to give you an idea of how this would render: the question was whether this is really a DSL, or what we're really looking at here. This is really just a data structure representation. To give you a crash course on YAML: if you see a colon at the end, you're looking at a key-value pair; if you see a dash, you're looking at a list item, and so on. So this breaks down quite quickly. If you go to an online YAML parser, you can see it parse into whatever language you choose.

Okay, so, all right. In this case, what we're doing is calling state.sls. state is an execution module which just happens to be the entry point for the state system, the configuration management system. We're calling the sls function, and we're giving it the name of this state file here, which as you can see is cron_demo, right? Oops, and I did exactly the same thing again. And this failed for some reason. Let me comment this out here. Very pleasant. As you can see, this is an example of a configuration management system being idempotent, because if we look at this crontab, we can see that this entry does in fact already exist. If we edit it such that it does not, and run this again, obviously it's added it. If we run it again, it says that it is already present. So this is how we go out and enforce states on a system.

Of course, Salt, like many other configuration management engines, does have a complex requisite grammar. That's just a fancy way of saying that we allow you, in state files, to create a dependency tree, such that you can say: only enforce state X if state Y happens to be true. Or, for example, watch what's happening in state X: watch my Apache vhost configuration, and if it changes, restart Apache, and so on. Any questions about how that operates? Oh, okay. I wanted to look and see what the last thing I wanted to talk about here was.
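The idempotent check-then-enforce behavior described above can be sketched in a few lines of plain Python. This is a toy model of the pattern, not Salt's actual implementation: it "enforces" a line in an in-memory crontab, changing nothing when the line is already present.

```python
def enforce_cron_line(crontab_lines, wanted):
    """Idempotently ensure `wanted` is present in the crontab.

    Returns (new_lines, changed): on the first run the line is added;
    on every later run nothing changes -- the essence of a stateful,
    idempotent configuration run.
    """
    if wanted in crontab_lines:
        return crontab_lines, False          # already enforced: no-op
    return crontab_lines + [wanted], True    # not enforced: enforce it


crontab = []
crontab, changed = enforce_cron_line(crontab, "*/5 * * * * /usr/bin/true")
print(changed)   # True  -- first run applies the state
crontab, changed = enforce_cron_line(crontab, "*/5 * * * * /usr/bin/true")
print(changed)   # False -- already present, nothing to do
```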
What I want to do at the end here is a couple of things. We talked about this messaging bus; in Salt, we call it the event bus. The nice thing about the event bus is that it's effectively universal. You can use it as part of your continuous deployment process. You can use it for monitoring. For example, you could tell your application to drop signals onto the event bus every so often, have Salt watch those signals, and when one of them hits, invoke the configuration management engine to make a change to your infrastructure, which I think is a very powerful paradigm, and it's what I would like to close with today.

Oh, before I do that, I do actually want to show you Salt SSH, just in terms of... well, actually, no, we really don't have time. But Salt SSH, of course, does everything that we just did, whether with the state system or with remote execution, except that it doesn't require a daemon running on the other side. In that case, it actually opened up an SSH connection to the localhost; no daemon required. It stood up a Salt system on the fly, which is how these agentless approaches actually work, which I think is very interesting. Because in my view, memory isn't necessarily saved if you have to use it every time you want to deploy something; to me, that doesn't strike me as a particularly compelling argument for agentless architecture. It is very valuable in things like deployment or provisioning, but it may not be as appropriate for the long life cycles of machines.

So, to jump back real quick, I want to conclude with this demonstration of the reactor system. What I want to show you here is that we have this idea of runners.
Runners are master-side pieces of Python that can go out, execute arbitrary commands, and do things in an ordered way, but they run entirely master-side, whereas obviously the remote execution stuff and the state configuration management stuff run minion-side. Okay. Oh, that's not good. So what I want to do here is show you the reactor system. What the reactor can do is watch this event bus on the master for Salt events, or events coming from your application or what have you, and then map the detection of those events into your state system, for configuration management or whatever you need.

All right, I prepared this already. All we need to do here is enable what we call the reactor, give it an event tag to look for, that's this, and a state file to run, which is that. Okay. For the purposes of demonstration, you'll see that the state file, cleverly, looks like this. This will be the end of the presentation, because when this minion starts up, it will fire an event saying that it has started. The Salt master, via the reactor, will detect that event, and then it will run this, which will, in theory, shut the laptop down and end the presentation. And if it doesn't, then I'll just have to stand up here all day. So hopefully it works.

Okay, so we'll restart the Salt master here to make sure that we have that configuration. And here, in this guy down here, you'll see the events flying past. All right. And here is what should be the minion starting up. All right. And thank you very much. That's it.

I know that we only covered bits and pieces of Salt; there's so much. Obviously, we're going to be giving a training class on Sunday. Please come by and see the many, many things that we didn't get a chance to cover, and talk about your individual deployments and how Salt can fit into your infrastructure. It looks like I have just a couple of minutes, so I'm happy to take some questions. Oh, I have 15 minutes. Oh, okay.
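The tag-to-state mapping the reactor performs can be sketched as a small matching loop. This is a toy model of the idea, not Salt's reactor implementation; the event tags and file names below are invented for illustration:

```python
import fnmatch

# Reactor configuration (a sketch): event-tag globs mapped to the
# state files to run when a matching event appears on the bus.
REACTIONS = {
    "salt/minion/*/start": "accept_and_configure.sls",
    "myapp/cache/changed": "update_configs.sls",
}


def react(event_tag):
    """Return the state files to run for an incoming event tag."""
    return [
        sls
        for pattern, sls in REACTIONS.items()
        if fnmatch.fnmatch(event_tag, pattern)
    ]


print(react("salt/minion/web-01/start"))  # -> ['accept_and_configure.sls']
```

The shutdown demo above is exactly this pattern: the minion-start event tag matched a reactor entry, and the mapped state file was what powered the laptop off.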
Well, I can take some questions.

Q: Hi. I have a question on Salt. We have built a scheduler on top of Chef that has pre-built cookbooks and recipes. We found some performance issues, and we're trying to migrate to Salt. One of the things we found in Salt was... you showed a minion actually running on your laptop, but when you go to an infrastructure, your minion needs to run inside a virtual machine. In Chef, we have the ability to bootstrap that virtual machine so that the agent connects to the master. To get around this, what would you suggest?

A: The typical workflow for that in Salt is to use Salt SSH to connect to the machine. Salt has its own bootstrap engine that can download directly from GitHub or wherever you like, and you can use that to bootstrap the machine and have it connect to a master. You could also potentially use Salt Cloud itself to deploy the machine; that also has the ability to bootstrap it, actually to bring the entire virtual machine up, whether in a public cloud or a private cloud, deploy it with Salt, and then connect it to a master. Does that answer your question?

Q: Yeah. We tried Salt Cloud as well. Salt Cloud needs to run inside the virtual machine, right?

A: Salt Cloud just needs to run on a Linux machine. It doesn't need a master running at all.

Q: But it needs a minion running, right?

A: Sorry?

Q: It needs a minion running, right?

A: No. It can run standalone. It doesn't need a minion running.

Q: Okay. Thank you.

Q: Hi. So basically, we are using Salt in our infrastructure. What we wanted to know is how we can dynamically add hosts to the master without any manual intervention.

A: Right. So the question was: in a highly dynamic infrastructure, how can you add minions? This reactor demo is actually a very good illustration of how that might be done. The classical way people do it is they bring their minions up, right?
Watch for that connection, and then perhaps invoke key acceptance, or whatever your security policy is going to be. From there, once the key is accepted, you should be able to go about your business with command and control. One more question: basically, when we are bringing up a container, is it dependent on the MAC ID somehow? Because what we are doing is maintaining the same IP address and the same minion ID, but every time I destroy the container and bring it back up, the key doesn't match, even though I have the same fingerprint. Right. Yeah. So salt uses public key authentication for the key exchange. If you bring up a new minion with the same ID, salt is just going to generate a new set of keys. So that's what's happening there: you're presenting this new set of keys, and the master is like, aha, you're trying to trick me. You're not who you were the last time I saw you. And so it's rejecting the session. What you want to do is either preserve the keys or manage that in some other way, being aware of that condition. Hi. Hello. Thank you. So there may be a few odd or corner use cases where the runtime state of the application may necessitate a few changes to your infra. So is there a non-command-line interface as well, where from my application I could actually invoke salt and make it do things to my infrastructure? Are you looking for a graphical interface? No, no, no. Just to change the state of my infra at runtime. Maybe something in my caching layer has changed, and now I need to update the configs of a few machines to reflect that change. Right. Are you looking to do that automatically? Automatically, from within the running app. Right, right. So yeah, that's part of the event-driven infrastructure view that we have. In our sense, it should be very, very easy to write a small bit of Python or call out to a CLI, right?
From whatever service is making the change, announce onto the salt event bus that the change is being made. The salt master and the reactor, just as in that last demo, watch for that change, and then configuration management happens in response. And that configuration management can be very simple, like the one we just pointed out, or it can be very complex orchestration with many dependencies and what have you. It just depends what you need. Hi. Thanks for the nice introduction to salt. Could you touch upon the distributed capabilities of salt? For example, can I have a cluster of minions sharing distributed state across them? Can I have high availability in my minions and my master and things of that nature? Right. So right now, in terms of distributed state, it's brokered by the master. You can create clusters, but messaging is still going to be brokered in that way. Now, there are some limited decision-engine capabilities on the master to decide who's going to get what information, and you can do some filtering. We're doing some development work in that area to try to make things better. In terms of true distributed peer-to-peer communication, we have the ability to publish, from one minion, messages or remote execution destined for other minions. But it's not a distributed mesh in the way that a classical distributed system might be. Does that answer your question? Yeah, do you have plans to extend that? We're trying to go in that direction, yeah, very much so. Right now, we're engaged in a transport-layer addition, adding TCP transports and a couple of other custom transports. So that's a direction we'll be going. Thanks. I have a question about orchestration. You did mention it. Hi. Thank you. So I did see orchestration somewhere in the slides.
Could you just elaborate more on how exactly orchestration is done? Sure. Orchestration is one of those words that's like "cloud" was in 2006 or 2007: everybody's saying orchestration, and nobody really knows what they mean. So let me tell you what I mean when I talk about orchestration. Orchestration is the idea that through targeted sets of, we call them minions, you can have a deployment process which has dependencies between those sets. So let me give you an example of how that might look. A typical orchestration workflow might say something like: okay, I'm going to deploy my application; therefore, make sure the git checkout happens. If the git checkout is successful, then go over here to my load balancer cluster and take down 10%, and then start to do that deploy on the web tier, and so on and so forth, with the ability to create those dependency chains and, of course, fail hard and potentially roll back at any point where that stuff fails. Great. So of course, it's a workflow that you set up based on rules, and it goes ahead and executes. Can it go and talk to multiple different systems in the data center? Can you give some examples of that? That's what I was most interested in. By multiple different systems, do you mean servers as well as network gear? Yes, servers, networks, yes. I believe, yes, that should be the case. What I mean to say is: let's say I've got SAP, and I've got, say, Active Directory, and I've got an Apache server. All of them are part of my infrastructure, because it's heterogeneous. So how exactly can I do orchestration here? Right. Yeah, very much in the way that I've already talked about, because you can put minions on all of these machines, and you can control networking gear like Juniper gear, right?
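The deploy workflow just described maps naturally onto salt's orchestrate runner. A rough sketch, where the target patterns and state file names (web*, lb*, app.checkout, and so on) are placeholders, not names from the talk:

```yaml
# /srv/salt/orch/deploy.sls (hypothetical), kicked off master side with:
#   salt-run state.orchestrate orch.deploy
checkout_code:
  salt.state:
    - tgt: 'web*'
    - sls: app.checkout      # make sure the git checkout happens

drain_balancers:
  salt.state:
    - tgt: 'lb*'
    - sls: lb.drain          # take a slice of the web tier out of rotation
    - require:               # only runs if the checkout succeeded
      - salt: checkout_code

deploy_web_tier:
  salt.state:
    - tgt: 'web*'
    - sls: app.deploy
    - require:
      - salt: drain_balancers
```

A failed step stops the chain, which gives you the fail-hard behavior described above; a rollback would be another orchestrate step keyed off that failure.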
You use salt state files to declare how that orchestration is going to operate, that is, what the workflow for your deployment process, or whatever it is, is going to be. And then, of course, you just kick off that orchestration, either in an automated fashion or manually. Later we can probably talk about the particulars of your infrastructure and get into how that might work for you. I saw another question, I thought. Yeah, there it is. How do you test or unit test those plugins? Yeah. How do we unit test the plugins? We use a lot of MagicMock, actually, so the unit testing is heavily mocked. A lot of people ask about how to test state files, infrastructure testing, which is a different question. I didn't demonstrate it, but salt has a test=True mode, which will go out, examine the state of the systems, and report whether or not it would make changes, without actually making them. Did that answer the question you had? Yeah. Okay. Oh, the plugin side. Yeah. So we have our own test suite. Unfortunately, it doesn't take two minutes to run; I was very jealous of that. But yeah, there's unit testing and, of course, integration testing for the plugin stuff as well. The test suite we have is very well done, very elaborate, and quite easy to run on your own machine. We maintain a separate repository for our testing dependencies, so you'll just grab that and then you'll be off to the races. Hi. So how is it different from Ansible? I see a lot of advancement over Puppet in SaltStack, but how different is it from Ansible, and what are the advantages of using SaltStack over Ansible? Okay. Yeah. Ansible is a very popular product. This can be a hard question for me to answer because I don't want to speak for the Ansible people, but I'll just speak in generalities.
I think Ansible tends to be focused more on the provisioning and deployment side. Our view is that machines, even in immutable infrastructure, have life cycles, and you need to manage a machine beyond the point where you simply install the packages you might need. So conceptually, our approach is more focused on the complete life cycle of the system versus the initial deployment step. Of course, the big and most notable difference is that Ansible talks a lot about being agentless. Salt, of course, has both an agent-based and an agentless mode, which I believe makes that point moot. They focus a lot on simplicity, which certainly has value, and that's not to dismiss simplicity as a principle or a virtue. However, complex infrastructures have complex needs, and we would much rather solve hard problems even if it makes the software slightly more complex. Does that answer your question? Hi. Yeah. Two questions. One is the selectors: how do you actually select the infra that you are going to operate on, and how can users define that? And the other one is: given you were talking about salt in a cloud-based environment, what happens as far as entropy of the infra over time? People are arbitrarily running salt directives, or even basic things are getting triggered, and you see disparity between machines that are supposed to be the same, but they aren't. Usually in a declarative framework, the machines themselves come to a state that has been well defined. In an imperative framework, there is a possibility of entropy. So how do you deal with that? Right. I'll address the second part of the question first, and I think that question was mostly about imperative versus declarative differences, right?
So to fill people in a little bit on this: there has been a very long debate in the configuration management community about whether a declarative approach or an imperative approach is the correct one. To give you an idea, a declarative approach declares the intended state of a system, in a very general sense, whereas an imperative approach says: do x, do y. Sorry, I missed the last part. Yes, where things are elastic, right? Yeah, salt has a configuration management engine, obviously, and so when things come and go, there certainly comes a point where your application has to be aware of what's happening, of machines coming and going. To answer the first part of your question: salt can target either via minion ID; via grains information, i.e., facts about the machine; or via private data, which we call pillars, which that machine might contain. And again, I always put my emphasis on salt being a pluggable system. So if you need another targeting system, if you need, for example, to be able to pull from a database, you can plug it in. Somebody needed SECO range once, I don't know if anybody's ever used that, as a targeting system, and they plugged it in. So it's really whatever targeting system you need, but those are the ones we ship with salt right now. Hello, hi. Yeah, what about a particular scenario: say I want to manage a lot of minions, and these minions are installed on systems which are mobile in nature, and these systems are connected to the master through a very inconsistent internet connection, very flaky, very slow. In such a scenario, is salt a good choice? It depends on the extent to which you're going to want to command and control these machines.
Salt's default ZeroMQ transport is TCP based, which kind of should be enough said: TCP is not necessarily designed for extremely long-running sessions or for really quick session reestablishment. What I would do in the scenario you're describing is use one of our alternative transports. We have written our own network transport called RAET, the Reliable Asynchronous Event Transport; it's UDP based and much more resilient in the sorts of situations you describe. And is that production grade, ready to be used for something like that? Yes, it's being used in production in infrastructures of tens of thousands of nodes right now. Okay, cool. Hey, this is more of a question about what the Windows support for salt is like, because that's kind of a pain point when you compare the different tools in this domain. Yeah, so salt does have Windows support. We actually just finished master support for Windows, which not a lot of people know. But yeah, Windows is a first-class citizen for salt. About three months ago, we hired a Windows development team, so we now have full-time Windows developers. Obviously, there has been a bit of a backlog of Windows bugs, but those are rapidly going away. The improvement that you'll see on the Windows side in this most recent release is probably miles ahead of what you've seen in previous releases. Thank you, Mike. Please take the rest of your questions offline. We're out of time. Mike's workshop is only going to have 12 participants, so please register quickly at the registration desk to get into the workshop. Thank you, Mike, and thank you, Sequoia, for getting him here.