All right, so I'll go ahead and get started. My name's Andy, I work at Foundries.io, and one of my big focuses has been our company's OTA update solution. I want to talk about the project we're using, hopefully get some of you excited enough to engage with the same community, and maybe we can make this a standard way of doing updates in the embedded development community.

If you're here, I'm guessing you've probably spent 30% of your life or so walking to your closet, grabbing an SD card off your dev board, taking it back to your desktop, and trying to dd it. Then the USB bus acts up, something's wrong with your laptop's SD card reader, so you do an angry reboot. You finally get it back up, you dd the image, and it takes forever. You go back to the closet, put the SD card back in your board, and then you stare at the serial console hoping that's the last time you have to do it for the day. I've worked on some more advanced boards that have a jumper setting to put the board into fastboot mode, but then you're still running some slow copy over and over. It's just a bad way to live. I think there's a better way, and I'm going to talk about how we're doing things at our company right now that are making our lives better.

So first, the obligatory credentials slide so you know I know what I'm talking about. And I know what I'm talking about because I've been failing at this for a really long time. For the last several years most of my focus has actually been on CI automation, and in CI we need to update things all the time. What I'm starting to find is that a good update system can also drive your CI system, and by doing that, your CI system is testing your update system every day, so you keep everything in a nice working state.

Another little tangent I find interesting for CI and embedded automation: one of the big fears we've always had on all those projects I've listed is the assumption that every incoming image has the potential to brick your board and put you out of business. What I'm starting to see now is that that's an edge case for a lot of people, and designing around it means you wind up building something really complex, really slow, and, because of all that, really error prone. So my original reason for using an update system to drive our CI testing wasn't a fear of bricking devices — maybe I'm just lazy — but I'm also coming around to a newer way of thinking: an OTA system has to handle rollbacks anyway, so if we do get a bad build, we should be able to roll back and recover from it. In a way, CI is then also exercising something people could hit in the field.

So here's how I think about update systems. You usually see an A/B partitioning scheme, at least physically. I'm not a huge fan of that — it doubles your storage requirements. I'm pretty bullish now on a thing called OSTree, which, if you're not familiar with it, is almost exactly Git for file systems. The commands feel the same, and you can manage everything with a single partition.
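If you've used Git, OSTree on a device feels very familiar. Just as a rough sketch — the remote name, URL, and ref here are made up for illustration, not pulled from our setup:

    # see which commits (deployments) the device currently has staged or booted
    ostree admin status

    # add a remote and pull a branch, very much like git remote add / git fetch
    ostree remote add --no-gpg-verify example https://ota.example.com/ostree
    ostree pull example lmp-demo

    # stage the new commit for the next boot, roughly like checking out a branch
    ostree admin deploy example:lmp-demo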
The next thing you have to start thinking about with update systems is how you securely deliver which targets your devices should be running. Usually that's something kind of GPG-ish. But there's also a newer thing called The Update Framework, or TUF, and in my opinion, if you look around, it's got to be the most secure update framework out there. It's also an industry standard — it's not just used by embedded projects like this. Docker uses it, PyPI uses it; there are a bunch of people using it now. And then in the world of updates there's a third thing called Uptane, which extends TUF, and I'll talk more about that in a bit.

At this point I'm guessing you can tell from my accent that I'm from Texas, and maybe I don't sound that smart to you. That's perfectly fine, because we're standing on the shoulders of giants on this project. TUF came out of academia. We have aktualizr, which is used in the automotive industry, so there are security experts involved, and we're building on things like OpenEmbedded. There are a lot of smart people doing this kind of work. What we found, looking around for the ideal update system, is that aktualizr plus the OTA Connect project, which I'll talk about more, is the one open source place where they're doing all the right things. And I don't want anyone to forget this thing is designed for cars, so you've got serious people who know what they're doing.

One thing people tend to question when I pitch the idea of using aktualizr, or aktualizr-lite, is: why not just use OSTree and GPG? That's okay, but TUF adds a lot of things GPG doesn't address. The biggest one is key rotation — if you ever lose your signing key, how do you rotate it out and recover? But there are all these other attack vectors they've thought through, like downgrade attacks, or a server feeding you stale metadata that's no longer good. TUF handles all of that for you, and it's pretty easy to use if things are deployed the right way. So I think the complexity TUF adds is worth what you get over GPG.
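To give you a feel for what that buys you: the TUF metadata a device verifies is just a small set of signed JSON files with separate roles, and key rotation lives in the root file, which lists the trusted keys for every role. This is heavily abbreviated and only illustrative — don't read it as the exact schema OTA Connect serves:

    repo/
      root.json        # the trust root: which keys may sign which roles
      targets.json     # the list of installable targets
      snapshot.json    # pins the current versions of the other metadata
      timestamp.json   # short-lived; defeats freeze / stale-metadata attacks

    # root.json, abbreviated
    {
      "signed": {
        "_type": "root",
        "version": 3,
        "expires": "2020-06-01T00:00:00Z",
        "keys": { "abc123": { "keytype": "ed25519", "keyval": { "public": "..." } } },
        "roles": {
          "root":      { "keyids": ["abc123"], "threshold": 1 },
          "targets":   { "keyids": ["def456"], "threshold": 1 },
          "snapshot":  { "keyids": ["789abc"], "threshold": 1 },
          "timestamp": { "keyids": ["012def"], "threshold": 1 }
        }
      },
      "signatures": [ { "keyid": "abc123", "sig": "..." } ]
    }

If you ever need to rotate the root key, you publish a new root.json signed by both the old and the new keys, and clients can walk forward from the version they already trust.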
Now, the next thing I've been pitching lately: aktualizr is a really good project, and as I started working with those folks I had this idea for aktualizr-lite. The rationale, in short, is that the way you can think about TUF is that it gives you a list of what they call installable targets — TUF is saying, here's what's available for you to use. Uptane builds on the same ideas, but it's about telling each individual device, you need to be this, and you need to be that. When you think about doing internet-scale stuff, we talk about pets versus cattle, and Uptane, to me, feels like dealing with pets: you're talking to each device, telling it what it needs to be. I just want to work with the fleet. The second thing about Uptane is that with everybody using the OTA Connect UI, the first thing they do is click the "always up to date" button. So what we're finding is that if your goal is always up to date, you can simplify things quite a bit and get rid of the complexity Uptane brings in.

Once I say I don't want to use Uptane, people get really worried about rolling out updates. In OTA Connect, that rolling mechanism is called a campaign. A campaign basically goes through all the devices it knows about and keeps telling Uptane: tell this next device it needs to go to this level, tell that device to go to that level. It rolls things out that way. What we're doing in our project, instead of that one-by-one approach, is that each device says, I'm going to take updates with a certain tag. As we push out builds, we'll tag them: this is a pre-merge build, so you're only crazy if you want to run it; this one's been merged but not fully QA'd; and so on. We have all these tags, so our CI devices will take anything, but if we just want devices running promoted builds, they only take that tag. That's one way to do it — there are others; internet folks are pretty familiar with doing this through load balancers and A/B rollouts. It's a different way to solve the same problem, but for us it's a lot more scalable, because you're not telling every device what to do. I find that really hard to scale; instead, just let devices ask what they should do. Moving that decision out to the device lets us avoid several big back-end services from the OTA Connect project.

Another thing I tend to see is people who have already built their own, I'll call it, "secure update system." And what I'm here to tell you is: you haven't. If you think you have, it's just because Matthew Garrett hasn't played with your project yet. I'm not saying we're perfect. I'm saying — I think I put two hours on the slide — we can maybe keep him out for three.

The next part is the back-end systems that OTA Connect provides. The great thing is that the back end is completely open source, and it's actively maintained by HERE (here.com). They push to their GitHub repos, they do their issue tracking there — that part's not great, but the community around it is getting better — and they put all their Docker builds up on Docker Hub. So it's a project you can track pretty easily. They also include a Kubernetes deployment tool. That tool is a little hard to use; I've written a four-part blog about it, linked in the notes at the end of the slide deck, that walks you through how to do it. I also created another tool, because standing up Kubernetes clusters and tearing them down is a pain, and when you're trying to do quick development it wastes all my time. So I created a small project called OTA Compose, which is just me taking the Kubernetes logic and turning it into a Docker Compose file. A fresh cluster in the cloud takes me 20 or 30 minutes; with OTA Compose it comes up in two. I think OTA Compose is pretty handy for development and testing, and maybe if you had a small set of devices that weren't mission critical, you could run it that way too.
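Running it is about what you'd expect from any Compose project. The directory name here is a placeholder, not the project's actual layout:

    # with the OTA Compose project checked out locally
    cd ota-compose
    docker-compose up -d      # bring the OTA Connect back-end services up
    docker-compose ps         # check that everything is running
    docker-compose logs -f    # tail the services while you poke at them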
The part that isn't really included with OTA Connect is the security of it. It's in there, but they don't really tell you how; you have to build some of that yourself. Again, that's something I've blogged about — it's linked at the end — and we've come up with our own way to add security on top. If you read into their code, they have some other pieces, like OpenID Connect plug-ins and some OAuth2 stuff, so that's there as well; it just takes a little more work.

And then there are what I'll call the glue pieces, and those are missing. HERE has a commercial version of this running, and it's a really great service with its own tooling. At Foundries we created our own tooling and glue pieces that do the things we like, the way we like them — tags, for instance. I think the tooling is one of those value-add areas, and the nice thing is there's still no technical lock-in in that kind of stuff, so it makes us feel good about this decision.

Now, I keep talking about TUF and targets, so the nice thing to do now is just show you what TUF looks like. This is a TUF targets file — I'm going to step down so I can see it as I'm talking to you. You wind up with an array of what they call targets, and you can see there's a lot of good information in each one. The most important piece is the hash of the OSTree commit you want your device to pull down. The targets file itself is signed in TUF, so you know it's a valid file and the timestamps on it are good. Then you can tell the device, hey, I want you to be build 148, and it knows, okay, I need this OSTree hash. You can see these tags here — for this one I'm saying it's just a QA-level build, so if you were on the promoted tag you'd skip it and not use it. Hardware IDs are pretty self-explanatory. And then there's a little section up there called Docker apps, which I'll get to in just a minute, but as you can see, a Docker app winds up being a pointer: there's this shellhttpd app I've labeled, and it points over to another file, which is TUF as well. So it's signed targets from top to bottom.
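Stripped way down, one entry in that targets file looks something like the following. The field names are approximated from memory rather than copied out of the real metadata, so treat it as the shape, not the schema:

    "raspberrypi3-64-lmp-148": {
      "hashes": { "sha256": "0b1ef3...9dc2" },
      "custom": {
        "version": "148",
        "hardwareIds": [ "raspberrypi3-64" ],
        "tags": [ "qa" ],
        "targetFormat": "OSTREE",
        "docker_apps": {
          "shellhttpd": { "filename": "shellhttpd.dockerapp-148" }
        }
      }
    }

The sha256 is the OSTree commit the device should pull, and the Docker app entry is just a pointer to another signed file.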
One second — are you guys going to make fun of me for using my tiling window manager? So, as I was showing you in that targets file, there are these things we call Docker apps. aktualizr has a new feature we call the Docker app package manager. Technically speaking, aktualizr-lite and Docker apps are two independent features in aktualizr; a lot of people call the whole thing aktualizr-lite because I upstreamed it all at once, but it's two things, and Docker apps are worth talking about on their own.

Are you familiar with Docker Compose? That's probably the more familiar thing here. Docker apps are a friendlier way to publish Docker Compose files. A Docker app actually winds up being just a bigger YAML file with a top section of metadata, another section that's raw Docker Compose, and a third section for the variables you want people to be able to set. When we were doing this, it felt a little early to be on Docker apps — back in January, when we were looking at it, Docker apps was still a new thing people weren't really familiar with. But if you look at what's going on in the Docker community right now and the commits they're making — I'm reading the tea leaves a bit — Docker apps looks like the future there, so we've gone ahead with it. Interestingly enough, the way you use a Docker app on a board right now is that you pull the Docker app file down through TUF, so you know it's good, and then you literally render that file into a Docker Compose file and fire it up with Docker Compose. It's kind of a weird flow right now — they're making it better — but it's all familiar pieces, is the short version.

And how is it working for us? We have a nice system now: we've integrated our CI, our code, and our tests, and for our own workflow we've set up these things we call factories. In a way, none of us are staying sharp at OpenEmbedded anymore, because we just make a change, git push, and see what happens. It puts a build in our personal stream, we do an update on a device at home, and if it looks good we merge the change to master, which goes into our pre-merge and QA streams, and eventually it gets promoted. It's made a nice workflow that feels more like cloud development.

If you want to try this out right now, you can integrate it into your own OpenEmbedded builds, which is the hard way, but ultimately how you'd do this for real. There's an easier way right now too. I apologize, we don't have an official release with aktualizr-lite enabled yet — the next one will have it — but we do have a build 607, and you'll see in a second my device is on build 606, which has all of this. You can just run it, it'll talk to our OTA server, and you get an updatable stream for your Raspberry Pi.

So at this point, let me flip over to my Raspberry Pi and show you what this looks like, to give you a better feel. That's embarrassing — I'll have to delete that key soon. Yeah, actually, I'm going to fire up Docker Compose here. I'm running the server with the OTA Compose project I was talking about, so I'm going to bring it up right now. Let's make this a little bigger — can you guys see that okay on the bottom?

Everything here is stored under this /var/sota directory; this is standard aktualizr stuff. We have a config file called sota.toml, and it's actually a pretty small file. It's got what kind of hardware platform I'm on; some of this is a polling interval, which doesn't really matter for this; this is the repo server, which is where it goes to get the TUF information; and we have our OSTree server. You can see here that my package manager type is OSTree plus Docker apps, and then down here is just which Docker apps I'm going to run. I've created one for the presentation called TIG — it's running Telegraf, InfluxDB, and Grafana — plus a couple of other small parameters.
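For reference, that sota.toml is shaped roughly like this. I've trimmed it down and some of the key names are approximated, so don't copy it verbatim:

    [tls]
    server = "https://ota.example.com"            # made-up server URL, just for illustration

    [provision]
    primary_ecu_hardware_id = "raspberrypi3-64"   # what hardware platform this is

    [uptane]
    repo_server = "https://ota.example.com/repo"  # where the TUF metadata comes from
    polling_sec = 3600                            # polling interval; doesn't matter for this demo

    [pacman]
    type = "ostree+docker-app"                    # package manager: OSTree plus Docker apps
    ostree_server = "https://ota.example.com/treehub"
    docker_apps = "tig"                           # which Docker apps this device should run
    tags = "postmerge"                            # only take targets carrying this tag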
So here's an aktualizr-lite command: status. This tells you what I'm running right now — my active image is this build 606, that's the OSTree hash shown there, and these are the Docker apps that are enabled. I can also do a listing, which is going to show... uh-oh, my network isn't cooperating. Let me do a quick debug here. All right, well, I may have to skip the cool live update since I can't reach my server. That stinks. I'll show you some other stuff instead.

So there's this docker-apps directory, and in here you can start to see how it fits together. This file is the target app — it's the custom data you saw from the targets above, telling the device, okay, this is the Docker app I need you running, and this is how you find it. All of that comes from the TUF target, so you know it's a cryptographically sound thing to pull down. And I can go down into this one app here and actually show you the Docker app file — it's kind of an interesting thing we have.

As I was saying, at the top there's just a little bit of metadata and a description. When you get down into the next section of the YAML, it's just the Docker Compose we all know and love. And at this point, I don't know how a lot of people are doing Docker apps, but this is a dirty little hack I've found that makes it really easy to customize a container without having to build your own container with the files baked in: I'm echoing that telegraf.conf file down to disk and then starting the daemon. And then in here is the section with the variables you can configure for the Docker app — I'm actually setting that telegraf.conf file as basically one giant environment variable. What's nice about this is that with plain Docker Compose it's often hard to publish something where people just need to customize the configuration; the answer tends to be, well, build your own version of the container. And that's a pain, because now you've got a whole build and CI loop to manage. This hacks around that in a pretty easy way.

Then that file turns into a Docker Compose file — it's called a render command — and we call docker-compose up on that to get the containers running. What's nice with all of this is that it's TUF all the way from top to bottom. You had a TUF target for your operating system; within that target, the custom data says, okay, here are the Docker apps I need you to run; those are pointers into your targets, which I was showing you; and those are also TUF-ified, for lack of a better word. So you know that everything you're running is the exact right thing on the device.
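Stripped down, that TIG Docker app file is shaped roughly like the following: metadata up top, plain Docker Compose in the middle, and the settable variables at the bottom. The services, image tags, and telegraf.conf contents here are abbreviated and made up, so it's a sketch of the structure rather than the real file:

    version: 0.1.0
    name: tig
    description: Telegraf + InfluxDB + Grafana demo app

    ---
    # this middle section is just the Docker Compose we all know and love
    version: "3.2"
    services:
      influxdb:
        image: influxdb:1.7
      grafana:
        image: grafana/grafana:6.4.0
        ports:
          - "3000:3000"
      telegraf:
        image: telegraf:1.12
        environment:
          # the whole config file rides along as one big environment variable,
          # filled in from the variables section below at render time
          TELEGRAF_CONF: ${TELEGRAF_CONF}
        # the dirty little hack: write the config out at startup instead of
        # building and maintaining a custom image that bakes the file in
        command: sh -c 'echo "$$TELEGRAF_CONF" > /etc/telegraf/telegraf.conf && exec telegraf'

    ---
    # the variables a user of this app is allowed to override
    TELEGRAF_CONF: "[[inputs.cpu]]\n[[outputs.influxdb]]\n  urls = [\"http://influxdb:8086\"]"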
So that really concludes all the demo and slides I had. If you guys have some questions, I'd be happy to answer them. I know it's a lot of stuff — go ahead, Brandon.

You can do it either way; it's really whatever works well for you. You could put that in your OSTree image, and like you say, the bind mounts are going to be fine — it's actually a pretty clever way to do it. If not all your devices are going to want that Docker config, you might be pushing some extra data, but we're still talking bytes, so it's probably a smart way to do it. One thing we haven't fully thought through yet is managing device-specific configuration. I have a rough concept of it built in — I don't have a file here to show — but you can include device-specific configuration in a Docker app .toml file in that top directory, and when the Docker app runs inside aktualizr, it applies those custom environment variables to your Docker app. The problem is that you have to come up with your own secure way to distribute those credentials to each device. If they're the same everywhere, it's one thing; if they vary per device, it's more work. So that's an area we're still working through, and it's a tricky one — we have to figure out an upstream-friendly approach, because everybody wants to manage that differently, and I don't have a great answer for that part yet.

Yeah, it's funny — it's actually pinging it. So I'll show you guys a really handy thing since we're sitting here with aktualizr: their logging is pretty good, and you can set the log level to 0. It's going to be a lot of output, but you get a little more info from it. Anyways, sorry about that. What it would have looked like is: I'd run list, and it would show you there's a 606 and a 607 — basically that targets file I was showing you earlier, but queried from the device. Then there's an update command, and there are two ways to run it. You can run a raw update, which finds the latest build and applies it, or you can say, I want to update to this specific target. So it gives you the freedom to do it either way. For our devices in the field, we just say put me on the latest; but for CI devices, where you're testing different builds at different levels, we have the CI system say, do an aktualizr-lite update to build 15, reboot, and test. Yeah, go ahead.

Great question. So /var/sota is not managed by OSTree, and we keep all the Docker stuff in there; /var/lib/docker is also not managed by OSTree. Now, some people are interested in this, and there are ways to produce OSTree images that come with the Docker side pre-populated. It's something we're looking at, but it's not our current approach.

As for the Docker images themselves: Docker has something called Notary, which is actually TUF. The key thing with your Docker app — and this is up to you to do properly — is to pin your app to a specific version of a tag; then Docker Notary makes sure you're pulling the right thing down. So TUF has told you this is the thing, it's already pinned to a tag, and you're getting what you expect.

I'm probably the wrong guy to ask, since I've been pushing aktualizr-lite, but Uptane is the right solution for a lot of people — if I were putting this in a car, I would probably use Uptane. Uptane suits a more conservative approach where you don't want all your devices up to date all the time; we're working in a place where we think keeping devices up to date all the time is the right way to be, so this fits our model well. But Uptane covers that case. Another thing Uptane gives you — and we're working through some of this with tooling — is that the device phones home with information like, hey, these are all the things on my system, this is my IP. You get what I'll call active data about the target. With what we have now, running in an anonymous mode, we don't have that active data; we get what I'm calling passive information instead. For instance, the HTTP client talking to the server includes headers like x-ats-ostreehash. So we're getting that in a different, and I think lighter-weight, way. The other thing compared to Uptane: as you saw in my config, we have a daemon mode. I haven't upstreamed that yet; it's in our company's fork of aktualizr. The daemon just runs, and the polling interval is once an hour by default, so if you roll out a change you could wait 59 minutes before the update reaches a device, whereas with Uptane you can just say do it now. That's annoying for CI, but we do our CI a little differently, and in the field that seems safe.
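Concretely, the passive data is just extra headers on requests the device was already making. The header spelling here is reconstructed from memory, so double-check it before relying on it:

    GET /repo/targets.json HTTP/1.1
    Host: ota.example.com
    x-ats-ostreehash: 0b1ef3...9dc2    # the OSTree commit the device is currently running

So the server can passively build a picture of the fleet just from who's polling for metadata.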
They use the same approach — Uptane is actually doing TUF underneath — but it has online keys, whereas with TUF the first thing you do is basically that rotation, so you keep an offline copy of the keys. With Uptane, since it's talking to each device, when a device pairs initially they create an agreed-upon set of keys, and that stays live. I won't go too far into it, because I don't understand that part of the Uptane spec that well, but they do have safety around it, if that makes sense.

I don't think you'd want to do this for your board bring-up stuff. Mike actually does our kernel work on the LMP at our company, and for that you're still going to be dd-ing cards and whatever. We haven't gone that far with our CI yet, just from time constraints. One thing we've always wanted is that in the future, when we're testing a promotion candidate — and we actually run our CI agents directly on the hardware — the device should say, okay, if I'm not on the latest promoted build, I need to downgrade myself to it, reboot, then upgrade to the build we want to test, and go through that. So you can go up and down like that. You can ride around between different builds and get your overlay messed up doing it, but I think you can also be more conservative and keep things read-only except for one specific place.

Yeah, well, any system that allows configuration of some form has to be writable somewhere. Even with A/B, if there's anywhere you can customize, you can end up with that poisoned config file. The good thing is that, hopefully, rollback can help you. We've actually talked about this before: if you ever had a device and you wanted to somehow switch it to a different server, how would you do that? You'd almost have to make a special update for it. If you knew you had a device that was malfunctioning but could still update itself, you could make a specific update that overrides that file, gets you out of the bad state, and then update from there. Device management can be scary. Any other questions? All right, thanks, guys.