Okay, so about me, my name is George Clifford Williams. You can email me at GCW at eightions.com. The G in my name stands for George. I loathe the name George, so you can call me that if you like, but I probably won't respond. I go by Cliff. I live in Chicago, married, no kids, two dogs, love spoiling my nieces and nephews. Highly opinionated, practically agnostic: that means when it's time to discuss the merits of something, I will gladly say what I think is the best solution, but when it comes to actually getting work done, it's not time for deciding anymore, it's just time to get work done. In my day job, I spend a lot of time consulting with clients to put things in the cloud and to help them develop better CI/CD pipelines. I do all the things that get labeled as DevOps, and I try to automate as much as possible. So what I talk about here is all from the perspective of someone who digs deeply into delivering packages to the cloud. So, understanding the problem. If you are a developer, or if you support developers by pushing out applications, typically someone develops code, finds an operating system to deploy it on, everything works out perfectly, and there are never any problems. In Dreamland. This view of your application is not really tied to the real world. A slightly more realistic view is that you have a kernel that your application will need services from. On top of that, you have a library of some sort that interfaces with system calls. You have your userland utilities, then packages that get installed, and then finally your application code gets deployed. This, by the way, applies to POSIXy-type systems: Linux, Solaris, FreeBSD, that sort. If you're on something that's not POSIXy, then I'm sorry, but this doesn't really apply to you. So this is the way it really looks, as compared to the kind of loose idea of how your application deployment might work.
So with that view in mind, you write your code, you deploy it on your operating system, and then what happens when you need to upgrade? So if you upgrade your operating system, let's say, can somebody name an operating system or distribution that they use, just to call one out? FreeBSD, okay. So let's say you deploy your application on FreeBSD and then you need to do an upgrade. You could upgrade packages, you could upgrade your base operating system. Hopefully everything goes well, but if it doesn't, your application may break in interesting ways. And when that happens, you might have to rework your code or you might have to back out the upgrade. Now, why would it break? There could be conflicts with versions of libraries, there could be a security fix that forces a change in some tool, or things could just get deprecated, which is pretty common. So what if your application isn't affected by an upgrade but is affected by needing something different? Maybe a tool that conflicts with another tool, or a newer version of something that's already on your system. One option is to use a private repository. On FreeBSD systems, it's pretty easy to tell the packaging system to go fetch a set of packages from another place. Similarly, on Red Hat or Ubuntu, you can point to other repositories, add them into the mix of repositories that you use to deploy your application, and hopefully everything will be fine. If it isn't, you could download and compile something yourself. In the enterprise, Java is frequently something people need to download, because specific versions come with most enterprise operating systems or distributions, and your application may require a different one. Downloading and compiling yourself actually used to be the common way that people deployed applications; now it's rare.
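As a sketch, pointing a system at a private repository usually comes down to one small configuration file per platform; the repository names and URLs below are hypothetical:

```
# Debian/Ubuntu: /etc/apt/sources.list.d/private.list (hypothetical URL)
deb https://repo.example.com/debian stable main

# FreeBSD: /usr/local/etc/pkg/repos/private.conf (hypothetical URL)
private: {
  url: "https://repo.example.com/freebsd/${ABI}",
  enabled: yes
}
```

After the file is in place, an apt update (or pkg update on FreeBSD) pulls the new repository's catalog in alongside the stock ones.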
Alternatively, if your operating system has a maintainer for a particular package, you can reach out to them and say, hey, I need a new version of this, and hopefully they'll get back to you. Sometimes they don't. If you need something that conflicts, a good example of this is SSL. There are many SSL libraries out there. Everybody is starting to be more security conscious in the way they develop applications, so TLS and SSL are pretty common. Your application may require a different version of a TLS library than what comes with your operating system, and that can cause problems. So one way to deal with that is with a chroot jail or containers: Linux containers, jails, Solaris Zones, that kind of thing. Or environment management, something like virtualenv for Python; if you're running a Java app, setting JAVA_HOME; Lua apps can use LUA_PATH; the kind of thing that lets you set a narrow focus for your application to find the tools that it needs. Alternatively, you can just wait and hope that whatever tool you need becomes the prevalent tool for your distribution or operating system. A consideration in all of this is how you're actually deploying your code. It's not uncommon for people to write their code, tar it up, ship it over the wire with SSH or something, or put it on something like a git server, go out to the machine, pull it down, and consider that a deployment. Also, some people package their apps in the native format for their operating system: on FreeBSD it would be a port, on Red Hat an RPM, and on Debian and Ubuntu systems a deb. You could package it in the runtime for the language you're building in: if it's a JavaScript app, maybe it's an npm package, or a RubyGem, a LuaRock, a Python egg, et cetera. You could also use a makefile.
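The environment-management idea above can be sketched in a few lines of shell; the JDK path here is hypothetical, and the virtualenv commands are shown only as comments:

```shell
# Give one application its own Java runtime without touching the
# system-wide default (the path is hypothetical).
export JAVA_HOME="/opt/app1/jdk-17"
export PATH="$JAVA_HOME/bin:$PATH"

# The Python equivalent is a virtualenv, which gives the app its own
# interpreter and site-packages:
#   python3 -m venv /opt/app1/venv
#   . /opt/app1/venv/bin/activate

# Whatever 'java' resolves to now comes from the app's own directory first.
echo "${PATH%%:*}"
```

Because the lookup path is per-process, two applications on the same machine can point at completely different runtimes without ever conflicting.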
Maven and Ant are really just fancy make systems, so you can specify where files should go, ship that along with your tarball, and hope for the best. Then there are smarter tools for doing all of the above: Puppet, Chef, Ansible, Salt, Tivoli, BladeLogic. They all do configuration management, basically take a set of files from here and put them over there, and it's better than doing things by hand, but when it comes to deploying applications, it's not the best solution. So the problem statement, what I'm here to talk about and try to solve, is that when you build your applications on top of the facilities provided by your operating system, you could be locking yourself into an ecosystem that does not meet the needs of your application and/or customers. And the solution is to build your applications to be independent of the underlying operating system and its packages. What that looks like in practice is you have your kernel, again, your library that interfaces with the system calls, your userland utilities, and alongside your system packages, you have a set of packages that meet all the dependencies for your application. And then your application goes on top of those dependencies. And this can be duplicated, so that you can have multiple instances of dependencies for your applications and deploy your applications on specific stacks just for those particular applications. Some of the ways this helps with application delivery, release management, et cetera: you get fully autonomous applications, meaning that you can upgrade your operating system and packages without worrying about breaking the dependencies for your applications. This means that if you're on Ubuntu and you have to do an upgrade because of some security patch, you don't have to worry about going back and reworking code to work with the new dependencies or with the new stack.
You can create multiple application silos that contain conflicting libraries and tools. So if one application requires a particular version of, let's say, LibreSSL, and something else requires OpenSSL, you can install those right alongside each other and they will not conflict, because of the isolation that you get from autonomous application setups. Your deployments can be standardized across multiple operating systems. A lot of the clients I've worked with have a mandate not to have any one distribution or operating system be more than 50% of their deployment. That means frequently working with SLES and Red Hat and FreeBSD and Ubuntu and HP-UX and Solaris, in some constellation, all at once. Taking the approach of building your dependencies separately from your operating system means that you can deploy your application on any operating system with one set of configuration, and not have to worry about special cases and conflicting file paths. You can isolate your exposure to security flaws in underlying libraries. That goes back, again, to some update that needs to happen; it can happen either in your application or in the operating system. So if your application uses a particular version of a library and you discover that there's a vulnerability, you can fix that in your application and not worry about it causing a problem with your operating system. The features of your application can develop at your pace, not the pace of your operating system's package maintainer. So if you need new features, you can go out and get those new features implemented in your code and not have to wait for the next version of your OS. And you still have access to all of your system packages. So what this approach really does is let you decide where the delineation is between a platform on your network and the application and its dependencies.
And you can swap out the application, you can swap out the platform; it really doesn't matter, because they're completely orthogonal to each other. One doesn't depend on the other. So, I think that sounds great. Let's presume that all of you are on board and you want to get started with that. There are several frameworks that come out of the box with the ability to let you do this on a POSIX system. By POSIX, again, I mean Linux, HP-UX, Solaris, something like that. There's pkgsrc (Package Source), which is part of the NetBSD project. It's very similar to FreeBSD ports or any of the other BSDs' ports systems. There's OpenPKG, which was started by a former FreeBSD contributor named Ralf S. Engelschall. He used to be a security officer, and he was also a founder of OpenSSL. There's the Nix package manager; it's currently mostly Linux specific and driven by the NixOS operating system. I happen to prefer pkgsrc. It has more than 17,000 packages available and includes all of the big things that people need in most deployments: nginx, Varnish, databases like Postgres, just about everything that you would need. It offers you the choice of binary or source builds. On systems like Gentoo, FreeBSD, OpenBSD, by default, if you want to install a package, frequently you navigate to some file path and then type make, wait for a while, you get your package, you type make install, and you're off and running. If you don't have time for that, you can do binary builds of your packages, pre-stage them, and use something simple to install, analogous to a yum install or an apt install. It's really easy to set up. The multiple prefixes are what allow you to set up the application silos. A quick example: I had a client who had several Ruby on Rails applications that they needed to install, and many of them required different versions of Ruby to run.
They were all deployed on one machine using pkgsrc, with a different prefix defined for every application that needed to run. A prefix is basically a path for the dependencies, and you can have as many as you need. It's a simple, straightforward process to package your own applications. So when I was talking before about how you might deploy your applications using a tarball or a makefile: if you package your applications in the pkgsrc format, you can have something like Salt, Ansible, Chef, Puppet, et cetera, go out to a box and install your application as a package, which also lets you do rollbacks and upgrades seamlessly. It's easy to fork the repository and add dependencies that you need. So when you check out pkgsrc, it comes with 17,000-plus packages, but if there's something you want that's not in there, you can add it very easily. And it has unprivileged operation, meaning that you can deploy pkgsrc for a given user and let that user manage its own dependencies for an application stack. And you can do that for multiple users and they'll never conflict. pkgsrc is also portable; it gets its own slide just for portability. I don't know, anybody in here, show of hands: has anyone run more than two of these? Okay, three people, great. So daily, I tend to work with at least three of these, and having pkgsrc set up with autonomous application stacks means there's much less gnashing of teeth when it's time to do deployments. Okay, so what does it look like? This is what it takes to get going with a very basic installation of pkgsrc. By that I mean you pull the source code down, that's what the git clone does. You then change into the directory that's created and go into bootstrap. You run the bootstrap script, it takes about 15 minutes, and then you're all set to go.
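Roughly, the steps just described look like this; the unprivileged home-directory prefix is one common choice, not the only one:

```
$ git clone https://github.com/NetBSD/pkgsrc.git
$ cd pkgsrc/bootstrap
$ ./bootstrap --unprivileged --prefix=$HOME/pkg
```

The --unprivileged flag is what enables the per-user operation mentioned above: everything is owned by you and lands under the prefix, so no root access is needed.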
Then you can cd into a package path, pkgsrc/devel/memcached in this instance, and type make install clean. What that will do is build the package, install the package, and then clean up all the temp files that were created as a result. And you can do that for every single part of your application stack. So the idea, again, is to decide what you need for a system to be on your network, then decide what you need for your application to be installed, and treat the two as completely separate. And here, with pkgsrc/devel/memcached, when you do the install, it will install to the prefix that's defined in the environment. So if I needed memcached installed five different times, I could rerun this command five times with a different prefix and have separate installations that I could then run in parallel as peers. So, can someone name a runtime or language that they deploy applications with? Something like Python, JavaScript, anything? Sorry? So you use Capistrano for actually doing the deployments? Okay, are you deploying Ruby apps? Okay, so do you ever have problems with gem versions? So with a setup like this, you would do the bootstrap, go in, build Ruby, and then build Capistrano to do the actual deployments. For those who don't know, Capistrano is simply a tool that lets you use Ruby code to specify a set of things to happen on a remote server. Similar in some respects to tools like Ansible, except that it's imperative, meaning that you actually have to spell out every step, very much like a shell script. So you'd build Ruby as your application dependency, and all of the gems that come along with it would also be part of that dependency bundle.
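Because a prefix is fixed at bootstrap time, the "five different times" idea above means one bootstrap per stack, then building into each stack with that stack's own bmake. A sketch, with hypothetical paths:

```
# one bootstrap per application silo (run from pkgsrc/bootstrap)
$ ./bootstrap --unprivileged --prefix=$HOME/stacks/app1
$ ./bootstrap --unprivileged --prefix=$HOME/stacks/app2

# build memcached into each stack with that stack's bmake
$ cd ../devel/memcached
$ $HOME/stacks/app1/bin/bmake install clean
$ $HOME/stacks/app2/bin/bmake install clean
```

Each stack ends up with its own copy of memcached, its own libraries, and its own package database, so the two can be upgraded or rolled back independently and run in parallel as peers.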
And you would have a definition in Capistrano saying this is what's required for my application, including Ruby and every single gem, and then deploy that. When you needed to upgrade, you would simply redefine the versions of the packages required, rerun your Capistrano script, and then you'd have an upgrade that doesn't care what version of Ruby is required by utilities in your system. I'm trying to think of a good Ruby utility that's commonly installed, and I can't. But in a situation like this, your application is completely safe from any changes that happen with the operating system packages. So I ended up burning through my slides pretty quickly. I will now take questions if anybody has any. No question? I just wanted to ask, how does this stack up against me just pushing my application into a container? Like, why would I not go with a Docker container, put all my dependencies in it, and ship it everywhere, instead of using one of the tools that you just mentioned? Great question, and thank you for asking it. So, how does this compare to containers, basically? One: containers, well, if you're talking about Docker containers or LXC-type containers, exist on only one platform, Linux. There are different distributions, so you can have Red Hat or Ubuntu or whatever, but it's still one platform. The clients that I work with can't have just one platform, so they need a solution that works on multiple things: Solaris, FreeBSD, et cetera. This is a solution that works on those. And okay, because I'm known as a strong advocate for BSD systems, sometimes people think that I'm maligning other platforms, and I'm not. The fact of the matter is that containers frequently hang in ominous ways. You don't have to worry about weird, unrecoverable hangs with a system like this. It's very lightweight, straightforward.
If you need network-level isolation, then it gets a little bit tricky, so containers may work for you there. Or if you're in an environment where you have access to more robust container systems, like Zones on Solaris or Jails on FreeBSD, then there's kind of an administrative-overhead trade-off: would you rather manage containers, or manage just the application dependencies? So that's basically the difference in trade-off there. Thanks. Sure thing. Hey, hi. Hello. So my question is regarding, let's say, for example, when I upgrade my OS, and let's suppose the TLS package has been upgraded, but then my code and my web server and everything is all of a sudden breaking. Now, if I use pkgsrc and install the previous version, then what's the point of upgrading or having the newest security update in the first place? Is this more of a temporary fix that we do so that things stay in production, and then we try to make things compatible with the newer TLS libraries, or something like that? I'm sorry, I lost you at the end there. You said, is this more... Like, for example, if the OS has come out with a new TLS library, right? And if we use pkgsrc to have the previous TLS library, which is more compatible with my web server for now, then I can still have everything in production and nothing will break, right? Yes. Now, in that case, what's the point of upgrading the OS in the first place? Is pkgsrc only a temporary measure, or what do you recommend? Okay, so if I understand your question correctly, you're saying that if I can handle all of my application dependencies without touching the operating system, what's the point in ever upgrading the operating system? Well, no, I want to upgrade the operating system so that I can have the latest TLS package, right? But if I'm going to go backward compatible and install the previous TLS package using pkgsrc, then what's the point?
Then is this a temporary measure? Like, we use pkgsrc as a temporary measure so that everything is working fine, and once we resolve all of the issues, then we just knock this off completely. Gotcha, okay. So, pkgsrc in this case would not be a temporary measure. It's the way you build your application stack, right? So, from whatever layer forward, and that's something that needs to be decided by a dev or engineering team, saying that, okay, we have Java, right? Let's say it's OpenJDK 1.8. Everything from that point forward is part of our application stack. You may need to do rollbacks, but that's related to your application, not to your operating system. And so you would need to start thinking about this division between the platform and the app. What needs to happen in one doesn't need to happen in the other. And it's not that you are using one as a short-term fix; it's just the way that things are from that point forward. Does that answer your question? Yeah, it sort of does, but again, what happens is that each application has, like, hundreds of dependencies. Let's say, for example, I have a Node application and it has bcrypt, and bcrypt relies on these other packages which come along, right? And then if I upgrade those, now everything starts to break. So I don't have to just fix my code; I'll probably have to dig into my dependency, that is, the bcrypt library, and modify it so it picks the right version of the package that it needs. So yeah, of course, it answers the question, but it's just a lot of extra work as well. Oh, I see. So you're talking about the administrative overhead of having pkgsrc packages. Yeah, going forward, yes. Yes, so that's a concern, just as it is with containers, right?
So if you have a containerized system, and by the way, I'm sorry, I should have mentioned this before: with containers, there's one additional consideration, in that many people get prepackaged containers, and there were recent audits of many prepackaged containers, and something like 70-some percent of them had really egregious security flaws in them. And these were not things like an app having a buffer overflow, but bad configuration that a systems engineer would never do. But yes, you then have at least two sets of packages to maintain, right? Your operating system packages and the packages related to your application. But again, it's really easy to manage pkgsrc repositories. You have things coming from upstream from the NetBSD project, and there are many contributors to that. So Joyent, which happens to run a huge cloud platform, they're active contributors to the pkgsrc project. So it gets a lot of updates fairly quickly. And if you need to go out and do something on your own, it's really easy to do. But yes, you're right, there is administrative overhead there. I think it's worth the trade-off. It may not be if you're always deploying to one platform. Yes? I'm sorry, I lost you at the end. Oh, resource isolation. Yeah, pkgsrc does not do that. It's simply a way for you to install dependencies for your application. Anything that you need beyond that, you have to have either a userland facility to do it, like something that comes with daemontools or runit, or use an operating system facility like Capsicum or Jails or Zones or some other thing like that. Anybody else? [Inaudible audience question about building multiple packages and handling conflicts with the operating system's libraries.] Great question.
So the question, restated, is: how do you deal with building multiple packages, and in particular the conflicts that can happen with tools like SSL or others, where your application may require one version and the operating system requires another, and frequently they don't play together nicely, and can pkgsrc handle that? And the short answer is yes, absolutely. Very easily, in 99% of the cases. There are some edge cases where simply having filesystem and path isolation for your libraries or your applications isn't enough, and in those cases, you might need to go to a container solution like Solaris Zones or FreeBSD Jails or Linux containers. But in most cases, with something like SSL, the version that's installed by your operating system is usually not an issue for anything that's installed separately from the packages for your operating system. So with pkgsrc, if you install LibreSSL or OpenSSL or BoringSSL or whatever for your application, and you install your application in the same prefix, it will use those libraries and not conflict at all with the system SSL libraries. The same goes for 99% of the other libraries out there. There was a problem a while ago with Postgres, but that's been resolved, and I'm not aware of any others that exist. So the question was, does pkgsrc support reproducible builds? And yes, for most of the packages. I'm not sure that they've gotten through all of them, but reproducible builds have been a goal for pkgsrc for a while, and they've made really good progress on it. I know that on NetBSD, illumos, and I think FreeBSD, more than half of the packages can compile with reproducible builds. Which brings me to building multiple packages in general: if you're going to build a repository for your packages so that you can do a very simple install, similar to an apt install or a yum install, there is a distributed package builder that comes along with pkgsrc. Actually, there are three of them.
There's one that's official, though, that will build every package you specify in an isolated environment, and you can tweak the parameters for that. So if you need reproducible builds, that's where you would do it. Tell it the list of packages that you want built, and it will kick off a distributed build, even if distributed means local to one host using multiple cores. And then it will produce an index file that your package manager can use. The package manager is called pkgin, and you can install from there without needing to compile every single time. Any other questions? Okay. Thank you very much.
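For reference, the binary-install workflow just described looks roughly like this once a repository has been built; the repository URL is hypothetical:

```
$ export PKG_REPOS=https://repo.example.com/packages/All
$ pkgin update
$ pkgin -y install memcached
```

pkgin reads the pkg_summary index produced by the bulk build, so an install is just a download plus dependency resolution, with no compiling on the target host.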