Let's get rolling here. All right, so we're going to talk about continuous security with Docker. And our company has been using Docker for several years now. So this is kind of a culmination of stories and experiences that I've had through the lens of trying to get our company to take Docker to production and bring security to that realm for us as well. So starting off here, I want to do an audience poll: how many people here use an automated build system for Docker? A lot, awesome. I've asked this before, and there's been one hand in the room raised, and I'm like, ooh, that was a couple years ago, right? So how many people have Docker containers in production? Like, you have a customer-facing system running on Docker, and it is out in production. All right, cool. Still a good number of people in here. That's good. And next, how many people have implemented some sort of automated security measure for Docker? Like, you're scanning your APIs, or you're looking at the layers to see if they're vulnerable. Still a couple hands. A few less hands. But yes, I see one in the back there as well, perfect. And then this next one, I can really tell the people who have gotten really deep with the Docker engine and worked with it and such. How many people have found a breaking API change in a patch release of Docker? Yes. You're banging your head against the wall, and you're like, it's not supposed to break. Yeah, it was a couple years ago I was at DockerCon, and one of the engineers from Uber was giving a talk about pushing Docker to have more performant Golang code. And right before the talk started, maybe a couple hours before, he upgraded his Docker engine with a patch release, and suddenly his demo just didn't work at all, like it fell flat on the floor. He looks in the front row, and he's like, what happened, Docker? What did you guys do? And then he's just, like, red in the face, smiling. It's really funny. But that's what happens when you use Docker a lot.
Eventually you're going to run into something like that, right? So, Docker at Workiva. You start out with a lot of smaller deployments at Workiva. We were starting on version 0.7 of Docker, right? So we were building the Dockerfiles on our laptops, pushing them to a registry, things like that. So it's pretty small. You might grab an internal tool for your company and say, hey, let's test out this cool new tech by dockerizing this internal tool that builds code, or pushes code up for us. All right, that's how we started, right? So you get a couple of containerized tools that way. And our first bump with security there was when we were taking one of our systems and dockerizing it. We talked to a security auditing firm, like a really nice small boutique shop, really super good at web app penetration testing. We came in and said, all right, we have this really small internal web app we use to track some statistics. Why don't you guys just do a test of it, see what you find, let us know. And part of the onboarding meeting for that was: it's a completely dockerized application. And they'd never heard that. They're like, dockerized what? They hadn't heard of Docker. But it didn't really matter. They still popped our app, grabbed some data out of our database, right, showed us what was going wrong. So we're like, ah, god, right? And one of the engineers on our team was like, but they didn't get root on the server. Your Ubuntu box was fine. Like, well, it doesn't matter, right? They still grabbed all your data out of the database. Who cares if they got root or not, right? So that was the first little bump showing you that just because you have a little bit of obscurity inside Docker, you're not necessarily free from any vulnerabilities or security problems you might have, right? So then it gets a little bit bigger. Docker started getting more mature as a platform. There were more companies offering tools and methodologies for that.
And so we started taking an even more serious look at, like, can we get some of our more production applications on this platform? Are there tools to help automate pushing this out, clustering it, getting it running? And so you start getting several containers for a system, right? You start taking a look at microservice architectures, rapid deployment, redundant failovers, things like that, right? And then eventually, if you really like it, and you're similar to Workiva even, you're completely bought in, you're like, all right. Our Docker deployments are getting super complex. Moving forward, everything in production for our application is going to be dockerized. We're going to have hundreds of thousands of containers, multiple data centers, failovers in different regions, things like that, right? And so suddenly, your story gets really complex. It's kind of what happened to us. Workiva exploded. If you know me, I love Iron Man. I always got to try to fit a gif of this in. But not in a good way, right? So we were completely bought all into the container story, started having our entire applications, like I said, built on this. And when you're going that direction, you kind of got to ask yourself, how do companies, Workiva or not, deal with security threats inside this, right? You can almost think of Docker as an abstraction layer. How do you deal with security inside your container when your entire platform is now built inside here? You're shipping it differently. You're building it slightly differently. What really changes? So let's dive into this. Why is security a little bit different inside Docker? So for some of you that were like, I found bugs in Docker, we run Docker in production, we run vulnerability scans in production, you've probably seen some of this before. But for some of the folks in the room a little bit newer to Docker, I don't want to just skim over this part. You have a complete shift in the application architecture.
So previously, how you had it is that you have VMs you're installing for every one of your applications, right? So you have this Debian box, you install a bunch of C libraries or other libraries you need, and you put your application on top of that, right? You just have a bunch of instances of this. And then when you move to putting your application inside Docker, right, you're sharing kernel space, and your application libraries a lot of times can be shipping with your Docker containers. And you just have one host OS, all your libraries packaged inside your Docker container. And it looks a little bit more like this, right? You might have your host OS being CoreOS or Debian, Ubuntu, Windows, things like that. And you just have hundreds of applications sitting on the same server now. They're completely shipping with the entire set of dependencies they need to run. So that's kind of how the game changed overnight in terms of what you need to focus on for security now. So what I want to run through today is kind of the maturity of Docker security that we went through as a company, where we started out and where we are today, and some of the tools and methodologies we've used to get there. So starting out here: running containers in production, or looking at the viability of running containers in production. If you're a financial services company like us, or any type of regulated industry, you might have to solve some of these security challenges before you can get to that step of being in production. So you might not be able to just push to production and figure some of this out later. But getting containers running in production or in a development environment, right? Getting that situation figured out for your company. And next, creating this automated, immutable infrastructure. So that's like one of the most beautiful things about Docker, right?
You can ship your container anywhere and it's identical across the cloud, across your data centers, et cetera. It's great. It's a good tool you can leverage. It can be really bad if you ship bad code: it's going to be identical everywhere and vulnerable everywhere. We can also use that as a tool to know exactly what's running. We'll talk more about that here in a minute. And then passive data collection. We found, actually, with Docker it's a lot easier for us to figure out what libraries and tools and code are running inside our containers, way quicker and more efficiently than when we deployed applications the more classical way, with tools like just Chef or just SaltStack and putting it on a raw Ubuntu host, for example. And we're going to talk about active scanning. You have your containers out in production now. You have some automated way of building and pushing them out there. Now we need to do this continuous scanning to see what's going on inside our environments. And we'll discuss a couple of methodologies for that. And then the automated rebuild and redeploy. And I kind of see that as the pinnacle of what you can get to with Docker security. You see what's going wrong in your platform, you identify that with a bunch of automated tools, you do a rebuild, and you redeploy and push out either bug-free or vulnerability-free code. So, starting out with immutable infrastructure. When we started using Docker, we just built on our laptops. Like, Steve's like, all right, I'm ready to push my code. I'm just going to pull master from GitHub, do a docker build, and a docker push to our registry. And that is unsafe for a number of reasons. But one of the biggest ones is he might add anything he wants to that Docker container thinking it might make the application work, thinking he's just going to fix a bug really quick on the way out the door. There's just a lot of processes that can go wrong the more manual things are.
So the first step, which a lot of you that are using Docker are like, I use a build system, right? Awesome. That's the biggest key first step. You'd be surprised; I've seen a lot of people that are still building locally or building on their laptop, especially when they're just getting into the space and introduced to using a Dockerized platform. So there's a ton of options out there, everything from Jenkins to CircleCI and even Docker Hub: you can push your code and Dockerfiles up to GitHub, and Docker Hub will build them for you automatically, right? That's awesome. But a key thing to be aware of once you build an image: once vulnerable, always vulnerable. So in an immutable infrastructure where your code should not change, once that container is built, it is that way forever. Like when we started running Docker, I'll admit, we occasionally were like, let's just jump into that container, do an apt-get update and upgrade, we're good to go. We're only running one container for this application. It's all set, right? So that's not really a good route to go, right? Because something might go wrong and it's not caught in your unit tests, it's not caught in smoke testing as easily, right? So we don't do that anymore. If it's not built by CI, it's not running for us. But that also means if you build a bad version of your code, or you have a CVE, like Heartbleed, and you build it into that container, it's always there, right? So it gives you the idea that your containers have a shelf life. When you move everything into containers, you get a microservice architecture, you know, that idea that your code does one small thing very well. So when you succeed at that and it does it very well, there's not a lot of updates to that microservice, right? So that container might sit there for six, eight months because you didn't have to make any changes to it.
Well, in that time, God knows how many system libraries are now vulnerable and have CVEs published. So we'll have that thing sitting out there for a while, and we realized, like, that's not great. You know, you've got to really be aware of how long these are sitting out there and make sure you're continuously rebuilding and redeploying these. Even if you don't have an automated vulnerability scanner, you know, every couple of weeks, just make a new container, deploy it out there, patch all the bugs that might be in system libraries for yourself. So for example, for us, once we were doing this: you know, how does your company respond to glibc? You know, for us, what really happened was security came to us and asked, like, hey guys, the glibc vulnerability is a thing, Shellshock's out there, you know, Heartbleed, et cetera. What's impacted? And at this particular point in time, when our company asks you that, you're like, I don't know, some of it, maybe, we'll see. And so if you've had, you know, the DevOps culture type of thing in your company, you've probably seen this meme before, right? Either you or security has had this; I know Jeff Smith has shown this in talks before. But you know, sometimes you're like, we're just pushing stuff out really quick and forgetting about it and, you know, creating all these security problems. Well, in this case, it was actually the exact opposite way around. Security's like, you don't know what's running out there? Welp, you're rebuilding everything. Let's go, guys, every Docker container's getting turned over. And so we're sitting in this conference room. You know, you set up your war room, you get your operations team or DevOps engineers, your site reliability engineers, you get everybody in the conference room and they're like, all right, we've got to re-roll and smoke test our entire infrastructure, go. And so we're sitting in the conference room for like 72 hours or so, right?
You know, like taking shifts, rebuilding, redeploying everything. And you know, in comes our head of security with just a huge grin on his face, and he has a cake, and he's like, as soon as you fix glibc, we'll set you free, guys. So, you know, you have your postmortem, your sprint review, you try to figure out: what did we do wrong here? Why did we take so much time to fix this? What could we do better? And like Bill Weiss was talking about in his talk yesterday, anytime you have security incidents like this and you have to dedicate engineers for two, three, four days to solve the problem, you just lost four days' worth of development work that could have been pushing out features to your platform, you know, more automated tooling for deploying, or fixing bugs, et cetera. So out of this situation, we came to the idea that, you know, let's do some passive data collection on these Docker builds. Next time security comes knocking on our door and says, we've got a vulnerability in our platform, what's affected, we'll be able to answer that more easily for them. So, in our situation, we can even avoid this idea of initially scanning prod, right? You don't have to, you know, set up Nessus or other tools and just try to point them at a host and scan everything and hope it picks up a problem in your API, or wait for them to come out with a, you know, feature plugin to do Docker. It's like, well, a great first step is to just know what's inside your container. So we wrote a plugin for our build system that literally spits out the list of apt packages and Python packages that are sitting inside your container. It sounds pretty simple, but it's super powerful. So, as you can see on the screen here, this is just output from an actual build system. It's literally doing a dpkg --list, spitting out a giant list of apt packages, and storing them off in a SQL database.
And then same with the Python packages: you know, pip list, pip freeze, grab the list of Python libraries and versions you're using, throw it in a database. So security comes to us and is like, what version of XYZ library are you running? Like, just hit our API endpoint, you know? Check it there. So you have this entire record of packages for every container, and we map that back to a tag. You know, so what we did in our instance is we let the build process go all the way through and finish and build this container, right? So as soon as the container is finished being built, you just write this small service that does a docker run on that container, runs those two commands, stores the output and puts it in this database. And then you can, you know, use anything from a Flask app to, you know, API Gateway and Lambda to be able to query this database, and you just tie your list of packages to a list of Docker images and tags. So you can tie these two together. So on top of that, to be even more powerful, we have an API endpoint in our production system so they can look at what containers are running in prod, right? So we're running in the Amazon cloud, we're using ECS to run our containers, so they can go out there and look at what version and tag of an application is running in production, cross-reference that against the list of packages in the database, and say, like, all right, this one needs to change, this one does not. Instead of three days in a war room and a cake from the head of your security team, you have, like, Steve hit a couple buttons, you rebuild a few containers, and you're good to go, right? You didn't lose an entire sprint to a security bug. So literally those two commands being put into a database saved a world of time for us. All right, so the next part of this: I wanna talk about semi-active scanning, and we have a couple different flavors of this.
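As a quick aside, the package-collection step described a moment ago, doing a docker run of the two listing commands and storing the output in a SQL database, might look roughly like this sketch. The table schema, function names, and image reference are illustrative stand-ins, not our actual service:

```python
import sqlite3
import subprocess

def docker_run(ref, *cmd):
    """Run a one-off command inside the freshly built image, capture stdout."""
    return subprocess.run(["docker", "run", "--rm", ref, *cmd],
                          capture_output=True, text=True, check=True).stdout

def parse_dpkg(output):
    """Installed rows in `dpkg --list` output start with the state 'ii'."""
    rows = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "ii":
            rows.append(("apt", parts[1], parts[2]))  # (kind, name, version)
    return rows

def parse_pip(output):
    """`pip freeze` emits 'name==version' lines."""
    return [("pip",) + tuple(line.split("==", 1))
            for line in output.splitlines() if "==" in line]

def record_inventory(db_path, image, tag):
    """Collect both package lists from the image and store them keyed by tag."""
    ref = f"{image}:{tag}"
    rows = parse_dpkg(docker_run(ref, "dpkg", "--list"))
    rows += parse_pip(docker_run(ref, "pip", "freeze"))
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS packages "
                 "(image TEXT, tag TEXT, kind TEXT, name TEXT, version TEXT)")
    conn.executemany("INSERT INTO packages VALUES (?, ?, ?, ?, ?)",
                     [(image, tag) + r for r in rows])
    conn.commit()
    conn.close()
```

A small API in front of that table then answers "what version of XYZ library are you running?" with a single query.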
You know, we have your traditional style of active scanning using things like Nessus or Rational AppScan or a lot of those tools in the market. But then also with Docker, we have this idea of a completely immutable infrastructure. Does it make sense to scan every identical copy of something? Maybe, maybe not. Well, we'll give it a shot and find out. So this is actually really important for us, especially as a financial services company, when you're operating in the regulated space. A lot of these are required by compliance standards, right? So if you wanna go for your PCI audits, you gotta do PCI DSS scans on a regular basis, save the results, be able to supply those records to auditors, et cetera. For us right now, we just recently announced that our company is gonna be going for FedRAMP compliance, right, so being able to sell our products in the federal marketplace. So what's interesting about that is the same kind of thing. You have an entire Dockerized platform, and you still have to do a lot of these automated scans in production. And a couple of our security members went out there and started trying to talk to auditors and vendors for FedRAMP compliance, saying, hey, we're looking to do this thing as a company. We're building a roadmap and working on this, right. What tips can you offer us? What tools do you have? What things can you give us to make this process easier? Because they're reading the controls in this particular audit and saying, we need to scan all the platforms that our applications are running in. And so they pretty much posed the question: we're not running SSH inside our Docker containers. Nobody's running SSH inside there. These security tools can't log in and do an authenticated scan. And I think that was the term that was used in this particular control, like the authenticated scan, right? So in the past, when we were running on just, like, a raw Ubuntu host, you SSH in, scan the box, everything's good.
Ship that report off. And now it's like, well, you SSH in and scan the box, but our app's not running there, technically, right? It's all packaged inside Docker containers that aren't getting scanned by that. So it's not the same exact thing. So I think it was like five or six auditors we talked to, and five of them were like, weird, you should just put Nessus on the Ubuntu box and scan that, it's, like, close enough, right? And then our security team, being who they are, they're really passionate about what they do and they're not gonna settle for that answer, right? They're saying, well, it's technically gonna check a box, but in the spirit of this control, we're not doing what we're supposed to do. We're not actually providing security and value to our application if you're just saying Ubuntu is secure, right? So, trying to get an idea of how they can look and peek inside what's actually in your container to pass these audit standards, we came up with this idea of semi-active scanning, and I'm gonna jump into some of the tools we've used for this in a minute here. But first, another one of the reasons: when I worked jobs, you know, doing pen testing in the past, or web application vulnerability assessments, things like that, if you try to, like, scan or run vulnerability audits against production continuously, people in operations get pretty nervous a lot of times, right? So active scanning can be really bad. So I stole this picture from another presenter that was at DockerCon several years ago. He was using it in the context of debugging containers, but I thought it worked perfectly for, you know, assessing the security of containers in production as well. So this was the container ship the MOL Comfort. It was sailing through the ocean, and out of nowhere its hull just snapped in half, in relatively calm seas. And then next it just caught on fire.
The crew jumped out, got on life rafts, went to another boat, and then the thing just sank, like, 13,000 feet. Like, well, everybody made it out. We're not really gonna spend a lot of time figuring out what happened, that fire just happened. It's okay, cool. So that's happened to me before when I'm, like, assessing web applications. Like, the first thing I do is, you know, you open up Burp Suite, you might open up DirBuster, you just start fuzzing this application, trying to, like, traverse directory trees, see what all is there in the application. And you're like, oh my God, it stopped responding. I think I knocked over this company's production, right? Like, dear God. So when you're looking at Docker, you're like, well, I don't have to, like, scan and potentially knock over production, right? Like, when you're the operations team or the security team, you don't always have the luxury of making sure these are the most robust, non-fragile applications, right? So for us, with semi-active scanning, we were able to take a Docker image, push it to the registry, use the Docker Content Trust feature of the registry and their Notary tool, and you get that cryptographic signature of what is pushed up by the Docker client that pushes to that registry. So your build system finishes an image, can cryptographically sign it and push it up to your registry. And then when you have a security system pulling down this image to scan it, you can say, what's running in production? Let me hit an API endpoint. It's gonna give me the name of a Docker container with a tag. I can see what the signature of that container is, compare it to something that I just pulled from the registry, and make sure that signature is identical, and we'll just scan that way. And then we can say, yeah, see, I can cryptographically prove this is the same thing running in production that I just pulled from the registry and scanned on a box that, you know, the security team owns, right?
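To give a feel for that "same bits" check, here's a minimal sketch. Note it's a simplification: Docker Content Trust verifies Notary signatures on tags, while this stand-in just compares the registry content digests the Docker engine records for an image, which gives a similar guarantee that the scanned copy is byte-identical to what production pulled. The function names are illustrative, not our actual tooling:

```python
import subprocess

def digest_of(repo_digest):
    """Pull the sha256 content digest out of a RepoDigests entry,
    e.g. 'registry.example.com/app@sha256:abcd...' -> 'sha256:abcd...'."""
    return repo_digest.rsplit("@", 1)[-1]

def image_digest(ref):
    """Ask the local Docker engine for the registry digest it recorded
    for this image (the image must have been pushed or pulled)."""
    out = subprocess.run(
        ["docker", "inspect", "--format", "{{index .RepoDigests 0}}", ref],
        capture_output=True, text=True, check=True).stdout.strip()
    return digest_of(out)

def verify_same_image(prod_ref, scan_ref):
    """Refuse to scan unless the pulled copy matches what production reports."""
    if image_digest(prod_ref) != image_digest(scan_ref):
        raise RuntimeError(f"digest mismatch: {prod_ref} vs {scan_ref}")
```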
So for us right now, we're using a tool called Clair, made by CoreOS. So this actually provides you an endpoint where you can just give it a container name and a tag. It'll pull it down and do all the scanning for you. It's backed against the MITRE CVE database, right? So that gives you an opportunity to have all the CVEs that are published out there immediately available to be checked against. So this is a great tool to use. And I've actually had people that I've talked to about this before say, well, if you have this active scanner that's looking through all your Docker containers that are in production, why are you guys still, like, pulling all this metadata about your containers, right? Like, if Clair can just say this has a CVE or it doesn't, why are you still storing the name of every package built into every container, right? Like, that's a very fair and valid question. In reality, sometimes when these vulnerabilities hit, you know, you see a Hacker News post or proof-of-concept code dropping, saying, hey, I just, like, rocked the OpenSSL library and here's my proof-of-concept code of how you do it. And people are, you know, scrambling around trying to find patches for it, or trying to, like, make the patch for it, right, and release that. And sometimes there can be a little bit of lag time between when proof-of-concept code has dropped and when the patch becomes available for you to put in your environment. And in those situations, you know, it gives everybody a much more comforting feeling when you can say where and how you're affected before any patches are available. So before, you know, a CVE is officially published and recognized by a scanning tool that can scan your environment, you know, you might be able to see a news post or something from one of the researchers of that bug saying these are the versions of the libraries that are affected. You can still look in your environment and say, all right, this is super critical.
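As an aside, the scan loop just described, walking the list of images running in production and handing each one to the scanner, might look roughly like this. Both URL paths and the report shape are hypothetical stand-ins for our internal wrapper (Clair's real API works layer by layer; a wrapper like this hides that):

```python
import json
import urllib.request

def get_json(url, payload=None):
    """GET (or POST, if a payload is given) a JSON endpoint."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def severe_findings(report, threshold=("High", "Critical")):
    """Keep only the findings worth paging someone about."""
    return [v for v in report.get("vulnerabilities", [])
            if v.get("severity") in threshold]

def scan_everything(prod_url, scanner_url):
    """For each image:tag running in production, ask the scan wrapper
    (which hands the image off to Clair) for a vulnerability report."""
    results = {}
    for ref in get_json(f"{prod_url}/containers"):   # e.g. ["app:1.4.2", ...]
        report = get_json(f"{scanner_url}/scan", {"image": ref})
        results[ref] = severe_findings(report)
    return results
```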
We might need to just, like, shut this application down until they have a patch available, because the vulnerability is that bad and game-ending. Or you can say, like, well, it's just some internal tools, like, you know, whatever, they're firewalled off. Nobody outside the company can get to them anyway. We're just gonna keep running it. It's cool. We'll wait for the patch to become available in a couple of days, and we'll go about our business, right? So in our environment, you know, it gives us the ability to go either route. You know, you're continuously scanning with this, or when the news story hits, you know what's in your environment anyway. So other than this tool, there's a lot of vulnerability scanners out there. When we first started using this, Clair was one of the most mature and robust tools. But now, like, you know, you do a quick survey and there's just a ton out there. You know, these are just four of the ones that I played around with and tested myself. You know, I was relatively happy with every one of them. Like, yeah, that does a good job. It'll pull down your container, look through the image layers for everything, tell you what CVEs match up in there, if any, and, you know, generate a nice, pretty report for you. So other than that, a quick Google search for Docker vulnerability scanners turns up, like, a hundred more than there were last year. It's really cool. You know, it really shows you a lot of people are taking the security of containerization a lot more seriously, which in turn makes it a lot more easily doable in large enterprise environments, getting your containers to production. Like on the first day, when they asked, who here works for a 10,000-plus-person company, and a ton of people raised their hands? Like, this is great for those types of companies, right? Cause sometimes, you know, they're really a lot more mature than a startup.
They have a lot more requirements and concerns on security than a younger company would. So this is gonna help them out a lot as well. So, looking at this in the development life cycle, and I'm literally just taking this straight from our handbook of how we started looking at pushing our containers and how we do it now. So when we first started, we had our traditional development life cycle: develop, build, you know, a CI build for your code, you're building your binary, you're running your unit tests, et cetera. And you do your Docker build to throw your binary or your code into a container, and then ship it out and deploy it to production, right? And when you're adding your security measures in there, for our environment today, it looks more like this, where we've added a couple more steps: right after the Docker build, that image audit. And then after the image audit, at the very end, you have that semi-active scanning, where we're taking a look at all the containers running in production and scanning a copy of them as well. So that's how actually throwing this into your, you know, standard operating procedures of the SDLC is really gonna help paint the picture of how important security is in your environment as well. So these are kind of some of the concepts that, you know, we just went through here talking about today. You know, I found that a lot of times after I go to a meetup group or something like that and talk about this, at a lot of the smaller conferences and places like DevOpsDays, there's a lot of great conversation that comes up on this topic afterwards, you know, over a drink or a beer. A lot of people have had some really good ideas, and some of these scanning tools I actually found out about at other conferences and meetups and started testing them out at our company as well. So after going over this, the question kind of is, like, what's the way forward, right?
We talked about a couple of different tools we can use here. You know, there's a bunch of different things out there, and honestly, there are a ton of vendors that are gonna try to sell you security if you're a big enterprise company. They're just gonna throw these buzzwords at you, like cloud-encrypted Docker with SIEM scanning and DLP, right? You're gonna, like, solve all your security needs by buying something off the shelf. A month or two ago, one of the people on our security team got a phone call, you know, one of those fun cold calls from a vendor that everybody loves to get, saying, hey, we've got this great storage solution for containers and disks and disk space in general. And the best part about it is we write our own encryption in-house that's better than AES, so you know it's good, right? And he thought he was gonna be a little cheeky, and he's like, so you guys work with, like, Bruce Schneier? That's great. And the sales guy was like, Bruce who? Like, I don't know this, like, god of cryptography, now. So, you know, you immediately know they're kind of full of it. They're just throwing buzzwords out there. So the signal-to-noise ratio out there is pretty low. You know, vendors are gonna promise you the ability to solve the security and compliance need, and a lot of times your BS meter is just gonna be flying there, like, is this real, or are you just a really good salesman trying to sell me some crap? But in reality, there are some really good vendors out there, and they're gonna solve one piece of that puzzle, right? We talked about a couple of them today. Nessus is awesome; it's a vulnerability scanner that's been around forever. Clair is pretty great. You know, a lot of CI tools for creating that immutable infrastructure are really good pieces of that puzzle. But they're one piece, right? When you start talking about security, you realize that, you know, it's the security-in-layers type of idea, right?
You know, each is one piece of the puzzle in a holistic security idea. So in terms of your Docker environment, I like to think of these products as kind of like their own little islands, right? Each thing is an island providing a certain feature set for you. You know, you have your Docker build. You have your scan. You have your metadata. You know, you have your logging system. You have your monitoring system. And as DevOps engineers and site reliability engineers, it's kind of our job to figure out what tools are actually gonna add to this holistic picture of security for us. And we're tasked with, like, building the bridges between these islands, right? Take your build system. Take your security scanner. Figure out how to get the data you need from production and feed it through the pipeline so you can get the kind of information that allows your security team to have, you know, a good incident response to any situations that pop up, right? So that is all I have in terms of Docker security. So I'll open it up for questions now. And I've got my slides there at the bottom on my website if you wanna go and look through some of that later. Otherwise, I'll be out in the lobby hanging out here as well, and feel free to come up and talk to me afterwards.

You mentioned Clair to help see inside of a Docker container that was running in production. Obviously you're not accessing the production instance, you have, you know, another copy down. Is Clair a tool to see into a container that we normally don't have that visibility into in production? Like, did you install an SSH client? I just, I've not heard of it before and I'm curious how it's implemented.

Sure, so the way we use that tool at our company is, you know, our security team wrote a small tool that will grab the list of containers running in production, right? And then it can pass that container name to the actual Clair system. So Clair has an API endpoint.
You hit it with a container name, basically, and say, you know, scan this container, essentially, and then Clair will go out and actually download that container from your company's Docker registry, and then it will start inspecting each one of those layers. So it's just a Golang tool that can inspect Docker image layers. So it doesn't actually need to do, like, a docker run and all that to spin it up. It looks at the file system layers for it.

All right, I've got two questions here, and then we'll have to call it at time. We haven't moved to containers yet, but we have issues with automating some of our, just automating the audit points. Like, I don't know what it is for other companies, but they want screenshots sometimes. Versus, like, we could run a batch audit and pull the data; they don't trust that data. Like, is that an issue for you, or have you guys ever experienced that, or is that just a one-off for me?

Sure, I'll talk to that the best I can, because I am not an auditor and I'm not on our audit team. Being kind of more on the site reliability and operations side of it, right now I'm the person that gets the call that says, can you pull this screenshot for us, right? But they do need screenshots for a lot of this, and to my knowledge, a lot of times we are still doing a lot of the classic audit steps, showing them the screenshots of the updates happening and the version of XYZ application running, et cetera. So yeah, to my knowledge, I don't think a lot of that type of audit requirement has really changed for us. A lot of these are just tools that really help us respond quickly to incidents. It doesn't really shift the audit responsibility for us at all.

A lot of the ideas you presented here in your slides seem to be at least somewhat oriented around the idea of Docker images that are built on top of a full base OS.
How do you feel about other approaches, like using a more minimal OS like Alpine, or even a scratch image and static compilation, compared to using the full OS base?

Sure. So last time I looked, actually, the CIS benchmark for building Docker images was based on Alpine. So we, for our base image, actually use, like, debootstrap and start from scratch and put this super minimal Linux image in there, whether it's, like, Ubuntu or Debian, et cetera. And that works best for our environment right now. But what we actually wanna move towards is exactly that: just throw an Alpine base in there, throw your Golang binary in there, ship it. Because the smaller it is, the less there is to audit. So if you don't have a ton of libraries in there and everything's compiled into your binary, like, that's awesome. So, like I was saying, I'm kind of giving this talk through the lens of my experience. So that's definitely the direction we are heading and wanna be in, and this is kind of where we are at right now.

Great, well, thanks again. Let's hear it one more time for Matthew.