Hello everyone. This is a little bit of a different talk than I normally do, but what I'm trying to do here, my overall goal, is to communicate an idea. And in some sense, I'm also trying to develop a better way to talk about this idea. So whether you agree or disagree, or even if I'm unclear, I really want to turn this into more of a conversation afterwards and ongoing, because this is a core concept that I'm not quite clear on the best way to talk about. So if there's any way to improve this, that's what I would ask of you. So what is this core issue? The core issue is that I want to look at things very much from the developer perspective. How do we make sure that developers can continue to have the high speed of iteration that they want, and how do we focus on some of their other needs as well? But let's go into a little bit of where I come from. I spent a lot of time doing all sorts of weird things. I started off at Google, then went into the military, started building all sorts of interesting capabilities, got into cybersecurity, training our cyber teams, then went into digital service, where we did a lot more review of existing projects, trying to improve software quality, again mainly in government and defense. I became a consultant, became a contractor. And all throughout this process, I encountered a very common set of problems. I usually trim them down like this: hey, we have networking problems. This is about connectivity. This is about distributed systems. This is about how we make sure that all the things are talking in the right way. The next big category of problems has tended to be about users. I want to be able to manage users. I want to be able to authenticate them. I want to authorize them. I want to be able to control this and set policies for these things. So, very condensed: this is about users, and how do we manage them? How do we control them?
How do we organize them? Sometimes these users are humans. Sometimes they happen to be machines or services, but it's still a very similar set of problems. But then there's another category. And those first two are real problems to be solved, really problems to be addressed. But this last category is what I want to focus on today, the category I call packaging. It's the idea of: how do we make sure that we have all of the things we need for our software to run? Or hell, maybe it's our hardware. But we need all these things for it to run, for it to be effective, and we want to control it over time. This is a lot of what the talks here today have been about. And along the way, I encountered a thing that gives me a set of superpowers. It's this tool called Nix. And I want to take this kind of tool, which is either relatively unknown or not as adoptable as it could be, and make it adoptable. Make those superpowers available to everyone. And I'm part of a company called Flox to make that possible. But I don't want this to be just a pitch about this one particular tool set. Overall, though, that is something I'm trying to develop. All right. So we're talking about software management. And there's a lot of focus on what this looks like on the deployment side. Well, too much focus there tends to create a situation where there's a difference between what this looks like on the right and what it looks like on the left. So I want to create far more parity between what developers are using on a day-to-day basis, in an iterative way, in terms of how they debug things and inspect things, and what things look like in production, as much parity as possible. We want these things to also compose well. I want to have grab-bag concepts where I can take a thing and merge it with another thing and actually have something usable at the end of that. I want to be able to combine features.
I want to be able to combine software. I want to be able to combine libraries and modules. So composition is important. The next piece is reuse. I want to be able to reuse things. If I built something, I want to reuse not just the exact binary at that time; I want to reuse these libraries. I want to have the benefits of this. And all throughout this, we can't just forget: hey, we also need to go into deployment. So let's not forget those needs either. So, solutions. I call these almost-solutions, because they're pretty good. They usually get you most of the way. But I've often found a lot of them lacking. Containers seem to solve a lot of the problems that we have in the packaging world, or in the software world, except when they don't. Often, containers provide too much isolation. So what do we do? We put something into a container and then we break into it. We try to expose all the things. We bind-mount everything in or bind-mount everything out, or we start creating network compositions in order to actually be able to interact with this thing. Too much isolation. A perfectly isolated text editor can't edit anything. We want our software to interact with other software. That's why we build it. And that's even more so when the needs of a developer come into play, because I want to be able to debug things. I want to be able to look at things outside of the normal operation of that software. So isolation, or too much of it, can be its own problem. Hey, static binaries. This is a great tool if you have it available to you. But non-trivial software very often expands quickly beyond the realm where that's a possible or feasible way to run things. It might work for one particular service, but hey, now I have several services that all have to interact. Again, I can't just bundle this into one single little artifact that is easy for me to understand and track and manage over time. Serverless.
Very similar sort of issue: how do I use this locally? Serverless often has vendor lock-in. There's sometimes a lack of standardization. I'm also not 100% sold on it. Self-hosting all my services, great. What if I make my local environment, from a developer perspective, the same thing as prod? Well, now I sometimes run into issues of scale, issues of access, issues of networking. So again, we start to run into problems here. And in some sense, these are all approximations to this problem of being able to do packaging, and yet they're ways that we avoid the need for packaging itself. I'm using that as the term for the creation of these things that are reusable. So let's go through a little bit of a journey with a sample project and see how this works. Let's start off. Step one: I've got nothing. I just want to start doing development. What does that mean? Hey, I want to develop on some effort. I want simple onboarding. That's a very big need. We want to ease the development flow. I want to be able to run my local build. I want to run my local tests. I'm very much focused on that fast, iterative feedback cycle, and we want to move at something other than a snail's pace. I'm not going to plan out every single possible thing. I want people to be able to work. Cool. So quick iteration. That's the primary goal here. All right. Well, at some point, I actually want more than this. I want a little bit of CI. I want to say, hey, I'm no longer the solo person on this. I have a team. Or I have a group of people. Or maybe we don't even talk to each other; we're just a distributed set of open-source contributors. Now I need to make it so that, cool, it works for me, but how do I make it work for everyone else? How do I test this sort of thing? How do I get a little more automation? Because me just pushing things manually is not going to scale forever.
So here we start needing a little bit more CI. What does this mean? Well, hey, maybe I can develop, but how do I make it so others can develop? So we start adding build tooling. We start adding debuggers, scanners, linters, test suites. This is the next phase we usually end up seeing. And now, often, what it takes for CI to do its job starts to be of the utmost importance. Cool. So that's nice. What do I do next? Well, let's get into the next section. Hey, this is a service. It runs. It's great. It does something amazing. Who knows what it does? But I want to get it to prod, because I want to tell the world. Cool. Let's take those build artifacts that our CI produced and actually put them together. Let's stand up the appropriate services that they need. They need access to databases. They need access to other systems that need to be up. I need to configure these things. Again, let's look at those needs. Usually, here, the primary focus tends to be the runtime I've chosen. Whether I'm going to run this on Fly.io, run this in serverless, run this on a standalone instance, or run this in Kubernetes, at this point that runtime orchestrator and its configuration, its needs, tend to dominate the conversation. And then, once this thing is in production, I need to operate it. How do I look at it? How do I inspect it? How do I fix problems? And when there inevitably are problems, with any non-trivial piece of software, I want to be able to go figure out what went wrong. So here, that ability to inspect, that ability to look at that runtime, is going to be the main thing we care about. So this is pretty good. There's a whole bunch of things we did to get into production, except now what? It seems that along the way, the needs of each of these other steps began to dominate, became the focus. And well, what if I want to actually still iterate on my software, right?
We almost forgot the needs of that original developer: that developer environment, that fast iteration, that ability to get the job done and improve our software and make fixes. And this is a problem I'm seeing more and more of. So how do we ensure that we don't lose this ability? Here are some examples. I've seen a bunch of cases where we say, hey, how about we make that developer experience look just like prod? So what do we do? We do tricks like this, where we have our runtime system stand up some sort of developer environment, and it looks something like this: it's running, and then we give that local developer access to it. That's a way to sneak in our tooling, our IDEs, our experience, to look like prod, because it's really just another instantiation of prod. So that's one example. We have examples where my local machine is basically just a thin veneer over access to something underneath the hood. That might be some service that's spun up remotely and starts to be managed. But again, this is a lot of machinery, a lot of machinery that looks very different from the story we started with, with that developer just being able to iterate fast and do their work. Now they have to use all sorts of tooling. They have to sneak their way into standing up all these services or getting access to these services. And we've lost some of that initial feel. Here's another example. This is more like a VS Code style. But again, we're making a deployed, in some sense production, environment dedicated for the purpose of that developer, and we're sneaking our way in so that all our development tasks can be done there. Again, it's a perfectly viable solution. But to me, it seems like we're missing something fundamental. So why are we doing this? All this sneaking in, what are we doing here?
We're breaking isolation, first off, especially as we're trying to use our development tools, our debuggers, our iterative tooling. We're sneaking into these containers. We're making changes. We no longer want that isolation at this point. We want to be able to integrate our tool sets and our environments. This is the trick we see often: hey, let's bind-mount everything in and bind-mount everything out. And then what was the point of containerizing to begin with, is what I ask. Orchestrating all of our services, at some point, becomes difficult. Either I need a whole local setup, or I need to find a way to deploy everything into a hosted environment, because it's past the scale of what I can do locally. So why are these realms different? That's the question I want to ask: why is it that we have a completely different set of tooling, a different set of interfaces, a different set of deployment strategies, depending on whether you're on the development side or the deployment side? It seems like this is not an ideal situation. For the developer, it's hard to bring my tooling in and do things in an ad hoc manner that allows me to inspect. It's not ideal there. It's not ideal for operations either, because now that infrastructure is often the only way for them to actually get their work done. So you start getting over-reliance upon that infrastructure. For production, again, it's kind of weird, because we want things that are able to scale, but roughly, what did we do? We just shipped our machine. It happens to be a machine that was built for us by our automated co-worker, the CI tooling, not by the developer. And that's probably a good thing, but now there's a bit of a disconnect. This is not ideal for ops. You can't go inspect things in the way you normally would.
Now we have to add a whole bunch of other things: monitoring tools, sidecars, logging frameworks, basically to give us the information that normally you would want to have in a much more native way. So it seems like there's a problem. So why? Well, let's figure this out. Our build environment depends on a container registry, right? That's something that's usually external to git itself. If we build an artifact in CI, that's nice. But the fact that we did so, what that artifact actually is, and where it's located, tends to fall out of being tracked in the git model. If we have pipelines, the configuration for those pipelines often leaks out of being captured in git and now lives in those particular systems. That could be an issue. Our developer environments, right? Either we don't specify them, or they're in a README, or they're over-specified: thou must run your developer environment using this VDI solution, or this VM, or inside of this container. And that becomes over-restrictive. So either we've under-specified them, or we've over-restricted the developers. Either way, in all these situations, our iteration cycle has slowed down. All right. Let's go back to the original thought. Hey, git is this great thing that allows us to track stuff. Well, let's track stuff. So we keep track of build recipes. That should be a good thing, and generally this is already done. Cool. Let's also record the build outcomes. Let's keep track of what our infrastructure is. Here, we're talking about things like your Terraform and your Helm charts. These specify the intent: what I want my infrastructure to look like right now. And then when we run these things in CI, well, what actually happened?
There are often additional details that you get from the real world. The real world tells you: OK, yeah, this is what you wanted; here's what you got. And hopefully, those are the same. But sometimes those details matter. So those things need to be recorded. Let's say our services are now running live. How are those things changing over time? There are outages. The real world actually has a say. Just because something's in git doesn't mean that's actually current reality. So let's start understanding that those external events also take place, and let's keep track of those things. Usually, we call all of this state. So we need to be really careful about where we have state in our system, and let's actually try to minimize it. Let's try to figure out: can I put as much of that state as possible into something that's tracked? Put it in databases. Put it in git. Put it somewhere where it's not going to just disappear, leaving me to reconstruct it or discover it anew. That's hard to do. And we want to leverage our automation; the previous speaker was talking a lot about this. So let's keep track of the artifacts we're talking about. Some are human-edited. These are things where a human says: I want this to exist, either this resource or this service. Or: here's how I want to build a piece of software. Then we have stuff that's more like lock files. This is stuff where we say: hey, humans shouldn't keep track of these things, because humans can't. We are horrible at these details. Instead, something else should lock this down. And we've started to understand that putting these into our repositories, into git tracking, is helpful, because humans are not as specific as our machines need to be. And therefore, let's lock these things down.
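In the Nix world, to pick one concrete instance of this split between human-edited intent and machine-locked detail, it might look roughly like this (a sketch; the input name and branch are just illustrative):

```nix
# flake.nix (human-edited): declares intent, not exact versions.
# The human only says "I depend on nixpkgs, roughly this branch".
{
  description = "example service";

  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
  };

  outputs = { self, nixpkgs }: {
    # package and devShell definitions would go here
  };
}
```

Running `nix flake lock` then generates a `flake.lock` next to this file, recording the exact commit and content hash that "nixos-unstable" resolved to. The human never edits the lock file, but it gets committed to git, so the machine-level detail is tracked right alongside the intent.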
State files are a very similar concept: again, these are details that humans probably don't need to track themselves, or are not good at tracking. Again, let's track these things. Let's put them into databases and repositories so that we know what they are. All right. But fundamentally, I want to bring up this problem of needing composition. It seems like we've often fallen away from this. So we want to do sharing. We need to figure out: what are we trying to reuse? What's the simplicity we're aiming towards? I've called out three categories of stuff that we want to be able to reuse and share. First, recipes. We've been talking about software; this is the build recipe for something. If this is infrastructure, this is your infrastructure declaration: what's in your Terraform files, or your CloudFormation scripts, or your Puppet scripts. These are definitions of what I want my infrastructure to look like right now. But in some sense, these are recipes. These are things that anyone can rerun. That's the idea. I can give this to you, and you can go build my software, or I can give this to my CI, and it can go deploy my infrastructure. I can share it. I can reuse it, because other people in other contexts can make use of it. Next we get to binaries. These are the blobs. This is, let's say, a Docker image that you push up, and now it's hosted in some registry. This is a piece of software. It will be bundled up in various different ways, but it's an actual binary that people are going to be running. You want this to be reusable as well, but it's a different kind of reuse. You're not referring to a recipe that they're going to rerun and rebuild. No, they're just going to use that thing as is. They're going to put it into some other context, into some runtime, and run it.
These are usually declared in a very immutable sense. And then there's another type of resource. I'm calling these snapshots for the moment, but this is a reference to something that can change over time. As much as you might not want this, this is the reality. We have a reference to a service, a reference to a database. That database is changing all the time; that's as intended. But it is a reference to a thing, and that thing can change over time. We can snapshot it. We can try to recover some aspects of reuse and sharing. But in some sense, with a database in production, you can reuse it as it is in its current state, but you're not going to be reusing all sorts of different snapshots over time, at least not without difficulty. So again, we need a better way to communicate these concepts. I'm still trying to figure out what the right ways are, but I want us to be able to specify, in our language and in our tooling, what we are reusing. Am I reusing the recipe? Am I reusing the binary exactly as is? Or am I reusing some current snapshot, the current state of something that changes over time? Because how we talk about them will be different. And lastly, we want things that are reusable, and I bundle this under the term packaging. I want to bring back this notion that we've sidestepped a lot, especially with the overuse of containers as the escape hatch: I can now package anything with this. Which is true, but it means I've lost a little bit of that reuse. I can't reuse these things in the ways you normally would want to. If you go look at the standard way we build containers, very often the beginnings are very package-centric. I start with the distro, and I bring in a bunch of dependencies. Or I'm going to build something, and, hey, this is defined with some sort of a package set, because I'm in, I don't know, the Go ecosystem. Here are all the libraries I need. Here are all the packages I need.
Again, packages tend to have this composition and modularity, whereas things like these binary blobs, these containers, tend to be a lot more locked in. They're less mutable, less composable. How do we do this, though? This is a discipline, and you have to impose it much farther to the left. The onus is very often on the developer to get the packaging side of things correct. This is their domain of expertise; this is their specialty. I want to make it so that when they package something, it's reusable elsewhere. Let's remove sources of uncertainty. Let's remove the issues of the network and my personal settings. This is the stuff that makes it work on my machine and break on someone else's. We don't like those things. Remove them as much as possible. And we end up, hopefully, with something that is reusable. I want to be able to combine the artifacts that I have, or the recipes that I have, and use them everywhere. Again, the concept here is that of being able to create something that can be reused. We call this a package; sometimes people call these modules. And that's what I want to keep reiterating. At the end, hey, once you have this, cool, you can compose all these things and have them enter into container-based workflows. We do want to end up there, because that's where a lot of our orchestration tools are; that's where a lot of our automation and our runtimes want to be. So we still want to be able to end up there. So make sure that if you do have a solution that lets you do the packaging, it ends up in a place where you can actually escape into your runtime and make your operations team happy. The superpower I found for doing a lot of these things has been a tool called Nix. The nice thing about Nix is that it allows us to package something in a way that's distribution-agnostic. It makes it so that I'm isolated from my personal configurations.
I'm isolated from a lot of these problems, but I still have the compositionality I've been looking for. I don't want to go too much into this; I'd love to talk with anyone about it at length. But the point is that there are tools that can help us address these issues. And I'm with a company called Flox. We're trying to bring Nix to the world; Nix is the open-source side of things. I want to make it better known, make it more popular. And a lot of it really comes down to this notion: make things that can be reusable. That's the thought here: make it packaging, right? Use that same ecosystem to develop, to package, to inspect my runtime, and to manage my runtime. I want a consistent set of tools for this. So let's see if we can make that a possibility. A quick demonstration of what I mean might be in order, so let's see how much of this is helpful or useful. So, let's have a piece of software. In this case, it's going to be really simple: a wrapper around hello world. Great. What does that mean? Well, to make this reusable, I have to make it so that anyone else can run it. Usually that also means other people need to be able to develop on it. Cool. So a person shows up and goes, hey, oh, great, I don't even have make on my machine. That's really bad. OK. Well, let's actually get started and do some development. Cool. I'm going to get started. Now I'm ready to go. I have all the tools that I need. Let's say my operations department told me I need all this stuff, everything in the right versions. I'm ready to go. Cool. Now I want to build my software. Everything's been built. Let's see if it works. Great. That's exactly what my application needs to do. Now what? Well, I want all of this to also happen in CI. Well, why can't I just have that same sequence of operations run in CI?
Because CI is just like a co-worker. It just happens to be a little bit faster and needs less coffee. But we actually want people to be able to run through builds and make artifacts that are immutable in this same exact way. Cool. This is now up and running. What do I want to do next? Let's see. I actually forgot my own stuff. Let's see what I had next. Let's make a container. Cool. Let's make a container out of this thing. All right. Now let's load that container. Now let's run that container. Cool. What just happened here? I just said: hey, that software I just made, that package, let's bundle it up and compose it. In this case on its own, but you can imagine combining more and more things together. Let's put this into a container, load it into my current system, and run it. Cool. Now the thing that I ran locally is exactly the same thing I'm running in my runtime. So this ability to translate and transform something that was developed on locally into something that I can now deploy anywhere is interesting. Because that gives me the composition that I want, but also the similarity between my local development and my runtime. That is usually a good thing. So that's a piece of that. Let's see. I don't know what else I want to show here. I want to run tests. Cool. I could run this sort of test in CI. I could have my co-worker run it. Again, it shouldn't matter. What else am I doing? Oh, what's actually inside this thing might be of interest. Not a lot of stuff. In fact, it's a little bit scary with all these things, but hey, these are right now all the different dependencies I need to run my little example. Cool. These are my runtime dependencies. This is essentially an SBOM. It tells you all the stuff that's in there, all the pieces of software that it requires, and it runs, and I'm pretty happy about it. All right, what else do we need to do?
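Since the live demo is hard to follow on paper, here is roughly what that sequence could look like when expressed with plain Nix flakes. This is a sketch under assumptions: the project is a make-based hello-world wrapper, and all names here are illustrative, not the exact commands or files from the demo.

```nix
# flake.nix -- one file defining the dev environment, the build,
# and the container image, so every stage reuses the same package.
{
  description = "hello-world wrapper from the demo";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = nixpkgs.legacyPackages.${system};
    in {
      # The build recipe: anyone (a teammate, CI) can rerun this.
      packages.${system} = rec {
        default = pkgs.stdenv.mkDerivation {
          pname = "hello-wrapper";
          version = "0.1";
          src = ./.;
          installPhase = "make install PREFIX=$out";
        };

        # The same package, composed into a container for the runtime side.
        container = pkgs.dockerTools.buildLayeredImage {
          name = "hello-wrapper";
          contents = [ default ];
          config.Cmd = [ "${default}/bin/hello-wrapper" ];
        };
      };

      # The developer environment: make and a compiler, pinned versions.
      devShells.${system}.default = pkgs.mkShell {
        packages = [ pkgs.gnumake pkgs.gcc ];
      };
    };
}
```

With something like this in place, `nix develop` drops the developer (or the CI co-worker) into the tool environment, `nix build` produces the binary, `nix build .#container` produces an image you can `docker load`, and `nix path-info -r ./result` lists the runtime closure, which is the SBOM-like dependency listing shown at the end of the demo.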
We could probably use those tags, all those hashes, to keep track of software over time, the same way we would like to keep track of user events over time. So that's a quick example of a piece of this, just to give you a taste. So here are my questions for the audience. I want to know how different companies solve this problem. When you have multiple interrelated dependencies, how do you manage them? There are different ways I've seen this done. There's the centralized model, where there's one place where all my software gets composed; that's where my lock files are, and they declare that at this moment in time, my company's view of my entire software stack is fully defined, fully locked down. There's also the model where I have a bunch of independent projects, all operating on the trunk branch and all moving along; CI is going to deploy them, and they're going to run. That's an approach. We've seen leaf projects, where things only come in as leaf dependencies, so they can depend on things from different moments in time, and everything marches along. That starts to decentralize things. There's also binary-driven, where the manifest of what's in production right now is defined not by the recipes in my git repositories, but by what's in my registries, the containers I'm currently running. That becomes binary-driven. And so my question is actually: how do you want to do it? How do you want to manage the situation when you have lots of different pieces of software that have to be composed together, lots of different services? It's not that there's only one right answer here. What do teams normally need? What do they normally want? What are they trying to solve? And then, similar to this: what do you use as your source of truth? How do you know what is currently in production?
How do you know what is currently in development? What are my developers using? What are my operators using? What are my customers using? And through all of it, how do we make sure that we have good reuse of this stuff? So those are some questions. I want to get ideas from people about that. Lastly, a little bit of a plug here: Flox is trying to bring the superpowers of Nix to the world. These are the kinds of problems and issues I'm thinking about. And we're also hiring. Thank you for your attention. Any questions? [Audience question, not captured] I'll just repeat: they have checks that block people from committing things that are not well formatted or that fail linting, and then a developer local cluster with an observability stack on it, which people can send their telemetry data to over OTLP for validation. [Audience] I'm trying to wrap my head around all this, but you're essentially saying that there can be one sort of holistic way to centralize a lot of these tasks that you see on either the far right or the far left, one uniform way that can be improved upon or distributed as an organizational best practice. Is that sort of the idea? Yeah. The way in which I define that developer environment should be extremely similar to what it looks like in production. It's not going to be identical, because the needs of a developer are different. But we should make it as similar as possible, and be able to reuse as much as possible from that developer context in production, and vice versa. I should be able to take something in production and then start developing on it. [Audience] OK. So, as a platform engineer or something, I have a Backstage template that essentially encompasses a Flox manifest or whatever. A developer hits self-service, they start a new project, commit into git, and then use this entire workflow to get into production. OK. Yeah.
The key is I want a consistent software development lifecycle, where I can use a similar set of tooling and a similar set of vocabulary the whole way through, rather than there being a big phase shift of: oh, now I'm no longer doing local dev, now I need to do prod, right? Those needs should smoothly meld into one another rather than being a complete break where now we need a whole new team and a whole new set of stuff. [Audience] Do you see a conflict for, like, UAT testing, where you have integrations from third-party testers or consumers that don't necessarily exist on the local dev plane but are later consumers of an environment, like a UAT environment? How does Flox fit into that sort of testing? Well, if this is a situation where, by design, it's not something that's going to work on my machine and your machine and everyone's machine, whether that's because of licensing or because of costs or scale, then at the very least it should in principle be possible to rerun that thing. Hey, maybe it makes an API call and I don't have the keys for it. But I want to be able to define it so that when it does get into that other context, let's say a scanning framework, then conceivably, if I had the keys, or if I was able to run it, or if I did have remote access, I could run it, and run it upon the same artifact. I just probably can't in practice, because, well, I'm not that vendor, let's say. So yeah, in some cases you're obviously not going to have that "I can run everything locally" property, but at least in theory you should, and you should have the tools to be able to express that. OK? That's all I got for the moment.
Part of this is, like I said, that I'm still trying to figure out the best way to communicate this idea, and I'm not 100% sure what the right vocabulary is. So please, if you have any thoughts or any ideas, or if you completely disagree, or if I just confused you, either way, please say hello. My name's Tom. Have a good day. Thank you.