This is actually a talk that I gave internally at Adobe last fall, and it was really successful, so I wanted to present it to a wider audience, get more feedback, and see how we might improve this tool. Like she said, my name's Christopher Edwards. Saltmaster is actually my title at Adobe; I do all the Salt things for Adobe, which is really fun. This talk is about incorporating security and compliance into SaltStack for our security initiatives and compliance standards. First of all, I want to talk about why we did this. Why did we create this tool? Because I like pain. Primarily, though, I should give credit to my wife: she inspired me to do this. She works in InfoSec, and I work in operations. Not at Adobe; we don't work at the same company, but after work we'll talk about our frustrations. She's trying to come up with security guidelines to give to the operations people, and they always say (you probably know how it is): you're not my boss, why are you telling me what to run on my box, how to set up my servers, how to do all this stuff? And I'm on the ops side, and I have the same frustration from the other end. So we were talking one day and she said: you really should take some time, go talk with the security people, see what their initiatives are, tell them about yours, and see how you can collaborate and make each other's lives easier. So I had lunch with some of the security people, and I got a good idea of what they were trying to accomplish and how it aligned with what I was trying to accomplish, or in some cases how it didn't, and we started having meetings to discuss how we could make each other's lives easier. It went something like this: they had bought some tool, and it was supposed to just solve all our problems. You know how that is, right?
Here, just run this tool, everything will be great. But from the operations side, I didn't have any insight into the tool: I didn't know what it was doing, I didn't know where it was sending information, and I had no insight into the results. What we had was: they would run something, it would send data back to them, they would analyze it and make a report, and then we had the whole ticket-as-a-service thing, right? They'd file me a JIRA ticket, and it was just a lot of going around and around. It was really inefficient; I was frustrated with it, and I think they were a little frustrated with it too. So I started asking them: what are your requirements, what do you really need to do with this, and what are you getting from your current tool? And one day I thought, I could probably achieve the same thing by leveraging SaltStack as the execution framework, essentially sending these compliance audits to my systems and having them report back. From my perspective, I wanted to do this so I could gain greater insight into the compliance level of my systems. The security guys do important stuff, but they're my machines, they're my babies, I'm in charge, and my neck's on the line if they break, right? I want to know if they're secure; whether they know is not terribly important to me. So I wanted to move the insight from them to me. I wanted more control over my compliance level and better visibility into it than just having them tell me: oh, go patch this, here's another ticket, here's another ticket. So we went from "just run this tool, and then, you know, security!" to something that wasn't such a black box, that worked a little better for me, and wasn't just magic. So I decided: all right, how am I going to do this?
How should I approach this? I wanted to leverage our existing tools and frameworks, not bolt new ones on top. This was also about the time when, Adobe being a big company with lots of different departments, I was getting instructions from a bunch of them: run this agent on your systems, and run this agent, and this one. Security wanted something, monitoring wanted something, some other group wanted something, and it felt like we were just going to pile more and more agents onto my machines to watch this and watch that. No, no, no, that's way too much, because then I have to monitor this agent and that agent, and the NOC has to call me to restart this one, and then that one. Of course they're not calling InfoSec to restart the security agent; they're calling me, right? And when it's a pain point for me, I want to do something about it. So I thought: why don't we try to consolidate these? Why don't we take these multiple agents and tie them all into the SaltStack agent? Salt has an agent that runs on the machine; it has a scheduler, it has custom modules. So we can just say: all right, Salt, run this, Salt, audit this, Salt, monitor that. Let's consolidate them down into one agent and leverage that, instead of loading more and more monitors onto our machines. And based on my talks with InfoSec, I wanted to let operations do the driving but let security navigate: have them tell us what the requirements are, and then let us implement them, instead of keeping it all on their side and just creating tickets for us.
Again, like I said, I wanted better insight into the compliance level of my machines, based on their navigation, their requirements. That was one thing I really wanted to leverage. And I wanted insight for both teams, not just one; I wanted to get rid of the black box and the magic. The next requirement, of course: comply with all the things. We had this meme pasted up on the walls. Last fall we had a big push for some compliance standards; we wanted to achieve this standard and that standard and so on, so everybody was heads down on security, security, security for a number of months. And of course I have to have some memes in here. So I came up with a project. I put it together in my head. I don't know if you have the same thing, but I have a La-Z-Boy that I call my thinking chair. I kick back in it, and I think, and then I usually fall asleep. But it's still percolating in there while I'm sleeping, right? Then I wake up and I'm like: aha, I've got it. So I came up with some ideas and thought, we should do this. And I had to come up with a name for it. I've been accused in the past of coming up with boring project names, so I came up with Project Argus, if you remember the old mythology. Maybe that's another dumb project name, but I like it, so deal with it. Yes? Never mind. You're the worst, Corey. So this project is broken up into three pieces. Based on the requirements from the security team, we needed a baseline set of security requirements. We also needed to do file integrity monitoring: watch for bad checksums of files, watch for changes in binaries, that kind of thing.
And then we also wanted additional insight, so we could look for things like: what open ports are on a system, bound to what binary? Is somebody running some rogue thing that's sending stuff to Russia or whatever? I don't know, right? So we broke it up into three pieces. The first piece is the CIS benchmarks. Is anybody familiar with CIS, the Center for Internet Security? Is anybody affiliated with CIS, before I bad-mouth them a little? You have to comply with it, but you didn't write it? Okay, that's okay. What I like about CIS is that they define a list of checks and benchmarks: you should comply with these things. What I don't like is that I think they should hire a technical editor and have even a little standardization between their documents. In the CentOS 6 benchmark, rule 2.1 is one check, but in the CentOS 7 benchmark, rule 2.1 is a different check, and in the Debian benchmark it's a different check number again. For a standards body, they don't understand standards. It drives me crazy. Maybe it's a little OCD, but yes. So if anybody knows someone at CIS, tell them to hire Larry or some technical editor and fix it. Anyhow, that's the first piece, and I'll talk more about it in a minute. The second piece is file integrity monitoring. Like I said, watching for changes in binaries: we want to gather checksums, SHA-256 or MD5 or whatever, of all the binaries on our systems. When a CVE is released, they usually publish known-bad checksums of binaries. We wanted to be able to look for those, to query our entire infrastructure for known-bad checksums. And for the third piece, we decided to go with osquery. Is anybody familiar with that? It comes from Facebook, right?
It allows you to query your server like it's a SQL database. You can run select statements, like selecting open ports joined with processes where the port equals whatever, and put together query statements that give you insight into running agents, SUID binaries, open ports, all kinds of stuff. It's a really, really cool tool. And there is a Salt module for osquery, so with Salt we can query all of our systems at once and bring the data back, as if we had one large distributed database, and then do comparisons. At Adobe we send most of our data into Splunk, and then we can create reports and all kinds of stuff. But again, that's the third piece. Now, osquery can run as a daemon, but we decided not to run it that way, because, like I mentioned, I don't want more and more daemons on my systems; I want to consolidate them. I have a Salt agent, and the Salt agent does these queries for me. You can send a query straight into the osquery binary and get the results back without running it as a daemon, so we're essentially using the Salt agent as the daemon: it passes the query over and gets the results back. Now, to go back to the first component, the CIS execution module: when I first gave this talk back in August I hadn't quite open sourced it yet, but it is available on my GitHub now. Be warned, it currently only supports CentOS 6 and CentOS 7. But I have a new roadmap where I want to make it a lot more modular, and I'll talk more about this at the end: support Debian and pretty much anything else, any kind of test suite that people want, and even make it flexible enough that it doesn't require SaltStack. I think SaltStack gives us a lot of benefits, but I want to make it supportable by Puppet or Chef or whatever. I've got some ideas for that, and I'll talk about it in a bit.
So let me do a quick demo of the CIS module that I wrote, and hopefully the demo gods will smile on me and it will actually work. If I can get my console to show... okay. Is that big enough? Can you guys see that? All right, two more from the front row. So the CIS benchmark is a huge list of security checks: you should have these mount points, you should have noexec on this one, you should have these security settings, and so on and so on. It's around 130 different checks. I wrote them all into a Salt execution module, and I made it so I could run an individual check, a group of checks, or the entire suite of checks with one command. So I'll do `salt`, target a machine here, and say `cis.audit_1_1`. Audit 1.1 corresponds with chapter 1, section 1 of the CIS document. I don't know if you can even see the results there, but I passed a couple and failed some. I can also run it with details. Actually, let me run the whole suite here, and let me change the color; it was on black and I don't know if that was very visible. Okay. Running the entire suite of tests only took about that long: ten seconds. It's not that bad. I was really concerned about performance; I didn't want to bog down these machines. On this test VM, based on the CIS benchmarks, I've failed 57 of the tests and passed 55 of them. Not great, but this is just a default CentOS install; it doesn't have some of these things applied to it. Now, these results are useful, but I need more insight, so I can also run it with `details=True`, and it will tell me which tests I passed and which I failed, the names of those tests, and which chapters and sections they correspond to. It's wrapping here because of the resolution, but it says I failed "Create separate partition for /tmp." That's a scored item in the benchmark, and I failed it.
Some of these partition settings just aren't set by a default install. "Remove the telnet client": that's one of the benchmarks, and I've argued back and forth with the security guys about it. The telnet client is not a security problem; I use it to test port connections. "But it's unencrypted." Well, I understand removing the telnet server, but not the telnet client. Anyway, that's another thing. Right, and they'd just use netcat instead, which can also be unencrypted. So anyhow, those I failed and these I passed. Some of these, honestly, I think are dumb, like "remove the talk server." What the hell is the talk server? What's chargen-dgram? I don't even know, but it's in the benchmark. Anyway, those are easy wins: disable all of those. So you see the test name and the test number. If I wanted to rerun test 2.1.15, I could run just that test with `audit_2_1_15`. So I can run individual tests, I can run groupings of tests, and I can run the whole suite, and like I said, in about ten seconds I get insight into the compliance level of my box. Now, I just ran this on one machine, but because of the way Salt is designed, it offloads everything to the minion, so I'm not bogging down a central box: I'm executing the audit on each minion and just reporting back. I could run this on 10 systems, 100 systems, 1,000 systems, and it would work fine. What I really like about this is that it gives me, the operations guy, insight into what the requirements are and where I stand on them. And then, additionally, everybody's familiar with Salt states? Like Puppet manifests or Chef recipes, where you set the configuration down. We created a corresponding set of states that match up to each of these tests.
So if I failed 9.2.1, I can apply the Salt state for 9.2.1, and it changes that setting, and now I'm compliant. I have a suite of states that go along with the compliance checks, so I can easily just fix them, rerun the test, and, all right, now I pass, good, great. I can do these one at a time if I want, or apply a group of them, and easily and quickly patch my systems, then send the report back to InfoSec and say: look, now I'm only failing three of these, and I will argue why that's okay. Yes? Yeah, a little bit. Each of our product teams has their own Salt installation, so we tried to work with them to update their existing states to inject these fixes, so we don't end up with a highstate that does 90% of it and another state that does the rest, with the two walking all over each other. So yeah, there's a little difficulty there. Oh, you're doing it wrong? Okay. Right. That is something I wanted to add in the next iteration; this was strictly a proof of concept. I don't want to say: here's the list for everybody, this is the list you have to use. You should be able to define your own lists, your own custom modules, your own custom checks. That's what I want to do in the next builds, because I don't want to fail every audit on "you're running Apache" on my web servers, because, duh, right? Which is another thing that really bugs me about the CIS benchmark standard: there's a whole chapter, which I call the dumbness chapter, that says don't run Apache, don't run Squid, don't run... well, it's a web server, of course I'm going to run Apache, and if it's not a web server, I'm not running Apache. My security team insists that people are dumb and will run Apache on something that isn't an Apache web server.
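A remediation state paired with one of those checks might look like this. This is a hypothetical sketch (the file layout and state ID are assumptions; the real states are internal), using the telnet-client check mentioned above as the example:

```yaml
# cis/audit_2_1_2.sls -- hypothetical remediation state paired with the
# "Remove telnet client" benchmark check; one state file per test.
remove_telnet_client:
  pkg.removed:
    - name: telnet
```

The pairing is what makes the fail/fix/rerun loop fast: the check number tells you exactly which state to apply, and applying it makes the corresponding audit pass on the next run.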
Maybe it's just me, but I don't want to fail my web server audit every time because I'm running Apache. There's a whole section on turning off basically every service. And I think it only checks for Apache; it doesn't check for nginx or lighttpd or whatever. So yeah, custom checks, custom things, that's in the next iteration. I think the wins we get with these CIS benchmark checks are these. It's really fast: this was on a VM and it took about ten seconds, so on a bare metal box it's probably faster. It's very flexible: I can add and remove checks, and in the next iterations I want to make it more flexible still. And it's collaborative: the checks are just a Python module for Salt, so InfoSec can add checks, or I can add checks. It's all in a Git repo, and people can collaborate, add and remove checks, customize it, and so on. And they do. We also created some really pretty pictures in Splunk, where we can return this data. Is anybody familiar with the concept of SaltStack returners? Instead of just sending the data back to the console, you can say: send this to MySQL, or send this to whatever. We've written a Splunk returner, so we can send the data directly to Splunk; they can parse it, create pretty dashboards with graphs, and then the auditors are happy. I don't know if Scott has open sourced it yet, but it totally will be; there's no reason we wouldn't. Yeah. You want that too? Okay. All right. So those, I think, are some of the wins. There have been a couple of questions about potential flaws here and there, and how we address them. Again, in the next iteration I want to improve and address all of those, because I want to open source this, and I want feedback and contributors.
If you have ideas on how to address the difficulty with competing states, or how to create prettier output, let's work together on it. Now, the next piece is the file integrity monitoring execution module, also open sourced on my personal GitHub. It's actually broken up into four steps: there's an execution module, and then there's a Salt orchestration runner that runs all four steps and brings the data together. First, it gathers the data: the checksums of all the binaries. The execution module is flexible. You can tell it what type of checksum to gather, whether that's SHA-512, SHA-256, MD5, or whatever. You can give it a list of targets: if a target is a directory, it'll walk the directory; if it's just a binary, it'll do that binary. And you can configure where it stores the results. One thing we found early on is that gathering this data for all the binaries across all the systems was a lot of data. We're already running two and a half terabytes a day through our Splunk; we didn't want to make it worse. So what we did was gather the checksums every day and save them on the system: that creates our day-one baseline. The next day it runs again and creates a second file, then it diffs the files and only reports the changes to Splunk. So if this changed, or this was removed, or whatever, that's the only thing going to Splunk. Instead of thousands of lines, it's a dozen lines, maybe, if we applied a patch that day, right? So in this iteration it gathers all the checksums, saves them on the machine, dates them, diffs them, and so on.
Then I put together the orchestration runner that goes out and says: all right, everybody, do your checksums and save them. The next step is the master goes out and says: all right, everybody, give me your diffs, and they're all collected back centrally on the master. This was before we wrote the Splunk returner, so we were running a Splunk forwarder on the master, and it would read in that log; that's how it got the daily diffs. But now we just use the returner directly from the minion: it does the diff and sends it straight to Splunk, and we've moved the master out of it. If you can see this, these are the four steps it does; this is the Salt orchestration runner definition. It says: all right, everybody, do your checksums; then use the cp.push module and send them back to the master; then run the fim.diff execution module; then send it. It's orchestration as code: do this, then this, then this, on these targets, and gather the data. It's actually pretty simple: there are just the functions inside the FIM module, and the runner executes those functions in a certain order to gather the data and send it back. Yes? We run this through the Salt scheduler, on a daily basis. Yeah. And because they're all coming back to the master (again, something we want to improve by sending them directly to Splunk), we had to use splay, to say: don't everybody do it all at once. We use the Salt scheduler to send it out. I'm going to try to demo this one, but as is normal with presenting, I was fiddling with it earlier, so it might not work. I'll give it a try. For this I can just do `salt-run state.orchestrate`. That's not my bug; that's an upstream Salt deprecation warning. Dave, you can fix that. Dave is a Salt engineer, if you guys have Salt questions. Okay, that one's Red Hat's bug.
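The four-step runner described above, expressed as orchestration-as-code, might look roughly like this. The module and file names here are assumptions for illustration, not the published SLS:

```yaml
# fim_orchestrate.sls -- hypothetical sketch; run with:
#   salt-run state.orchestrate fim_orchestrate
gather_checksums:
  salt.function:
    - name: fim.checksum        # step 1: every minion hashes its targets
    - tgt: '*'

push_results:
  salt.function:
    - name: cp.push             # step 2: copy result files to the master
    - tgt: '*'
    - arg:
      - /var/cache/fim/today.yaml
    - require:
      - salt: gather_checksums

diff_results:
  salt.function:
    - name: fim.diff            # step 3: diff today against yesterday
    - tgt: '*'
    - require:
      - salt: push_results
# step 4 in the original flow: a Splunk forwarder (later, a returner on
# the minion) ships the diff output to Splunk.
```

Each `salt.function` block is one ordered step, and the `require` requisites are what turn a bag of execution-module functions into the "do this, then this, then this" sequence.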
I don't mean to smash on that. So, I broke this one, but I promise it worked yesterday; that's my fault. Anyhow, you kind of saw what it was doing, if you can see this: it would target everybody, say gather all your checksums and save them, then do a diff, collect it back, and send it. Yes, Corey again. It really depends on what targets you define. If you wanted to scan everything on your box, that would take a long time; we limited it initially to just the directories that are in your path, so most of the binaries. And actually, this is a demo I can run. I can't run the whole orchestrate piece, but I can run `fim.checksum` on this list of targets. I can define the targets, and it doesn't really take too long. Now, that's not a huge list of targets, but it's going to walk through all those directories and scan every binary in them. But, you know, there you go. By default it's SHA-256, but it's configurable; you can define whatever checksum you want. And it gathers all of this data: atime, checksum, ctime, gid, group, hostname, inode, mode, mtime, size, target, type, uid, user. Our InfoSec team wanted everything: if the ownership changed, we want to know about it; if the mtime changed, we want to know about it. Yeah. It's actually a combination of two existing Salt modules. There's `file.get_hash`, and I forget the other one, but it's just: hey, Salt, go run a checksum on this file; then it parses that and puts it in this little YAML output. So that didn't take... is that too long for you, Corey? That's acceptable. So there's that piece. Here's some of the stuff we put together in Splunk, which I don't know if you can see. This is rare targets: binaries that we're only seeing a few times across the environment.
So we found, say, two instances of `ab`, or two instances of `aria_chk`; I don't know what that is. But based on the information reported back to Splunk, the security team can run these reports and say: why do we have this one binary out on our systems that seems to be nonstandard? And because we're gathering the hostname it's on and all that other detail, they can track it down: all right, this host has this one binary, let's talk to that team, see why it's on there, and remove it if we can. We also gather rare permissions. Some of these are 0655. Why is it 0655? Or 0510, or whatever. These are rare permissions we find after gathering all this information, parsing it, and generating the reports, so we can look for weird permissions on files and easily track them and see them. And then the last piece, the osquery execution module. This was newly added in Salt 2015.8. I believe the original developer of this module for Salt is Gareth, one of the conference organizers, so props to him. I don't think he's in here, but we are loving this at Adobe, and I need to thank him for building this module. I went to build it myself, and I've found this a number of times (Corey's probably seen it too): you go to build a module for Salt, you check really quick, and... oh, already done. Already done. What's the quote? Salt reads my mind from the future, right? From your own blog. Yeah. So let me quickly demo the osquery module. Again, osquery allows you to send queries to your system, kind of like a select statement. I've got some of these in my history here; I'll just go back up. This one is going to query for name, address, port, path, and command line for processes: essentially anything that's bound to a network address that's not localhost, because we want to look for anything with external-facing connections. We can feed these queries into Salt and run them across all of our boxes, and boom.
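The kind of query just described can be simulated with sqlite3, since osquery exposes system state as SQL tables. The table names, columns, and rows below are made-up miniatures for illustration; real osquery schemas differ, but the join-and-filter shape of the query is the point.

```python
import sqlite3

# Miniature stand-ins for osquery-style tables of listening sockets
# and processes (schema and data invented for this example).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE listening_ports (pid INTEGER, port INTEGER, address TEXT);
CREATE TABLE processes (pid INTEGER, name TEXT, path TEXT, cmdline TEXT);
INSERT INTO listening_ports VALUES
    (812, 22, '0.0.0.0'), (901, 4505, '0.0.0.0'), (77, 631, '127.0.0.1');
INSERT INTO processes VALUES
    (812, 'sshd', '/usr/sbin/sshd', '/usr/sbin/sshd -D'),
    (901, 'salt-master', '/usr/bin/python', 'salt-master'),
    (77, 'cupsd', '/usr/sbin/cupsd', 'cupsd');
""")

# "Anything bound to a network address that's not localhost."
QUERY = """
SELECT p.name, lp.address, lp.port, p.path, p.cmdline
FROM listening_ports lp JOIN processes p USING (pid)
WHERE lp.address NOT IN ('127.0.0.1', '::1')
ORDER BY lp.port;
"""
rows = conn.execute(QUERY).fetchall()
for name, address, port, path, cmdline in rows:
    print(f"{name:12} {address}:{port}  {path}  ({cmdline})")
```

Here the local-only CUPS listener drops out and only the externally bound daemons survive the filter, which is exactly the "show me external-facing connections" report fed into Splunk.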
That was really pretty quick too. Let's see what we have here. A number of these: sshd, that's safe enough; the salt-master; xinetd. It looks like it's grabbing both IPv4 and IPv6, and all the information about the path and the port and the PID and everything. We've put together a fairly long list of these queries. We store them in Salt's pillar system, so we can update them in real time without having to restart daemons and such. And we have an osquery runner, which pulls all the queries it needs out of pillar and then runs them. We put that into the Salt scheduler, it feeds into Splunk, and we can ask: why does this one box have an outbound connection on some random port where it really shouldn't? We've got a couple of example queries here. That's the one I just did. This one is processes. I didn't write some of these queries; I'm not a DBA, and I don't have to be a DBA to run this. This one found the Monit daemon, the MD5 of it, the SHA-256, how long it's been running, all that kind of stuff. The Salt minion, the Salt Syndic, that one. So yeah. The osquery module is open source; it's in upstream Salt. It's not something we made, but it's something we use. As for the queries, I don't know that we've published the list we're using. I don't know if that's of interest to people, or whether it should be part of the project, but it could be. So what's coming? What are the next steps? Like I said, I demoed this at Adobe last fall. It got the attention of the security managers, then the security director, then the VP, and it kept going higher and higher until I had to demo it to the CISO, which was a little nerve-wracking. And his first issue was: well, we already have a tool that we're using. Why do we need another tool? Why recreate the wheel? From my perspective: well, we don't have to spend a million dollars on that other tool.
We can write our own; we can just hire somebody to do this, right? And honestly, we get the results faster and we have better insight. The tool we have does essentially the CIS benchmarks and the file integrity monitoring, but it can take up to a week for us to get the report, which is not useful at all. We get these back in ten seconds, right? And all the functionality of osquery is just icing on the cake; none of that functionality is in the tool we're replacing. So they liked that, and they've given us the go-ahead to keep going with it. The one concern, though, is that Adobe has two major business units: digital marketing and digital media. Digital marketing will be using this; digital media is still using the old solution. So one of my challenges is to make this flexible enough that it doesn't require SaltStack. SaltStack, again, gives us some wins, but it should only require Python: if you can run Python, you can run these audits and get these reports. I also want to make it more flexible. Right now the CIS execution module is about 3,000 lines long, with 130 checks in one Python file. It's too big, it's unwieldy, it's difficult to maintain. I want to break it apart into individual checks so you can piece together just the collection of checks you want; it shouldn't be all-or-nothing. I want to add support for Debian and Ubuntu and Arch, and if contributors want to support openSUSE or Slackware or whatever, make it flexible enough to support multiple distributions. Right now, again, it only supports CentOS 6 and CentOS 7. Pardon? Salt Syndic? We're using that. We have a central master, then multi-tiered masters underneath, and this currently works through the Syndics and sends the data back up to the top. So that's already in there.
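The "break it apart" roadmap suggests a registry pattern: each check registers itself with tags, and a profile is just a selection of tags. This is a speculative sketch of that direction, not the project's actual design; all names, tags, and check bodies here are hypothetical.

```python
# Hypothetical plugin-style registry: each check self-registers with tags
# (distro, category), and an audit profile selects by tag instead of
# living inside one 3,000-line module.
REGISTRY = {}

def cis_check(number, tags=()):
    """Decorator that registers a check function under its CIS number."""
    def register(fn):
        REGISTRY[number] = {"fn": fn, "tags": set(tags)}
        return fn
    return register

@cis_check("1.1.1", tags={"centos6", "centos7", "filesystem"})
def separate_tmp_partition():
    return True   # a real check would inspect /etc/fstab

@cis_check("2.1.2", tags={"centos6", "debian", "services"})
def telnet_client_removed():
    return False  # a real check would query the package manager

def audit(profile):
    """Run only the checks tagged for this distro or category."""
    return {num: meta["fn"]() for num, meta in REGISTRY.items()
            if profile in meta["tags"]}
```

Adding Debian or Ubuntu support then means tagging existing checks and dropping in new check files, rather than editing one monolithic module, and a team can exclude a check (the Apache objection above) just by leaving its tag out of their profile.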
I'm trying to think of other things, but essentially we want to make it more standard and more flexible; I want more people to be able to use it. Although I'm definitely a Salt fanboy, I don't want it to be specifically tied to Salt, though I may add extra bells and whistles in the Salt version. So if you're interested in this project, hit me up on GitHub, or email me or something. I'd like more collaborators, more insight, and more feedback on how we can improve this and make it a shareable open source project that lots of people can leverage. In that regard, we are hiring somebody to help me build this. Adobe is looking for a Python developer, preferably with some SaltStack experience, to help build this out, mature it, and make it available, and we're open sourcing it, which is huge. I really like that they've said okay to that at work. You would probably have to move to Utah, so a bunch of you probably just checked out, but we do have offices in San Jose and San Francisco, and you could probably work out of one of those if you're nearby. Anyway, if you're interested, you can email me. Beyond that, here's my contact info, and here's my GitHub. I've got some old crappy projects on my GitHub; don't pay attention to those, just pay attention to the Salt ones. But who doesn't have really old legacy crappy code on their GitHub, right? I was just trying something out, and it's on there, but I should probably delete it because it's embarrassing. Anyway, with that, are there any questions? Yes. So that everyone can hear: when you were speaking about the file integrity monitoring, it sounded like you were storing the files locally on the machines. Is that actually the case, and doesn't that have some substantial security implications? So maybe I should clarify, because that came up.
We gather checksums of the binaries on the local system, and then the master gathers those results and brings them back to the Salt master, so they're removed from the local box and nobody can tamper with them. If somebody has access to our Salt master, we've got bigger problems, right? Yeah. We immediately grab them, bring them back to the master, do the diffs, process the data, and send it to Splunk. So yes, that was a concern, and we addressed it. Cool, I have one other. Yes? This is very interesting. How much of it is highly specific to Salt, and how much could be brought into another configuration management system? Right now it is fairly Salt-specific, but like I said, in the next iterations I want to make it more generic and more flexible. So if anybody wants to help with that, patches are welcome, right? That's what we say. All right, thank you. Yes, we've got one over here. I have one and a half questions. First, are you allowed to mention the name of the product the security department was using? I can, but how about I tell you after? Okay. And the other is: when you put this on GitHub, all of a sudden the world knows some of the checks you're running on your servers. Did security have a problem with that? We have additional checks that we use internally. The stuff we've published is based on public standards that anyone has access to. So I suppose it gives some insight into what some of our checks are, but if the systems are patched, we should be safe from them. That has come up as well, but I think we're comfortable with it. So that was the argument you gave the security people when it came up: these checks are publicly known standards anyway? Yeah. Okay, yeah. Thanks. We have another one over here, I think.
Have you looked into Salt masterless and daemonless setups, for places where you don't actually want to run a Salt daemon, where you can just run states locally with salt-call? We're not really using any of that at Adobe, so I haven't really looked into how it would work in that situation, but I can see that would be a valid use case we could explore. Right. We've looked a little bit into things like that: if people don't actually want to be connected to the Salt master, we can give them all the states and they can run them locally and do their audits locally as well. Yeah, that's something we could look into in the next iterations. Anybody else? No? Okay. Well, thank you for your time.