Hello, I'm Steve Siegel, here with Matt Farnero. We're from the embedded systems team at Cruise. For those of you who don't know, Cruise is an autonomous vehicle developer, and today we're going to talk about how we use Linux in what is really a large distributed computing system. Quick side note: when the slides were imported into the conference system there may have been a few spacing issues and visual artifacts, so please bear with us.

First I'm going to talk about us and about the vehicle, then about the challenges we face in building Linux images and getting them onto devices like these. Matt will talk a little bit about Buildroot, we'll cover how we boot, and then we'll finish with device management.

First, about Cruise. We are a majority-owned subsidiary of General Motors, also backed by Honda and other investors such as SoftBank and T. Rowe Price. We're based in San Francisco, and we test our vehicles on the streets of San Francisco; this is a picture of one of our vehicles in the Mission District. Our goal is to design and operate autonomous vehicles in a ridesharing service.

And this is what we're ultimately building: the Cruise Origin. It's an all-new, fully electric vehicle, designed from the ground up for autonomous ridesharing. It's built for long life, with modular components, so we don't have to replace the entire vehicle as technology improves or when we want to swap out sensors. It's intended to deliver autonomous ridesharing at scale.

So, embedded systems. We're the lowest level of software in the autonomous driving system: we provide the interface from the autonomy stack to the rest of the vehicle. We bring up all our custom hardware, we provide the Linux OS images, and we develop the application software for the edge devices. More succinctly, we figure out how to get a supercomputer into a car. That's a gross oversimplification, because we also have a really great hardware team and lots of great people who work on the non-software aspects of this, but when I first started at Cruise I needed a way to explain to my mother what I did for a living, and this is what I came up with.

So, the vehicle. This is our current, third-generation vehicle. It's originally based on the Chevy Bolt, but we made a lot of changes to it; this slide shows everything we changed on the Bolt to enable the autonomous driving use case. Much of this hardware is custom-built specifically for our vehicles: sensors like cameras and lidars, lots of other sensors, the in-car networking infrastructure, telematics, and a core compute that runs the actual autonomy stack. That makes our vehicle more like a roaming data center than a typical passenger car. Many of these components don't exist commercially; they're developed by our hardware team, which sources parts from various places. Traditional automotive tier-one suppliers have a lot of experience building traditional automotive parts, and in situations where we just need minor variants of those, they can deliver very efficiently. But, like I said before, we're basically a data center on wheels.
And this requires us to design IT parts that have never been put into a car before. So sometimes we work with non-automotive IT suppliers, and we do a lot of our own internal designs, because sometimes it's easier to teach an IT supplier how to make automotive parts than it is to teach an automotive supplier to make IT equipment. There are also a ton of parts in the car that are not related to the autonomy stack; those are typically sourced by our OEM partners, especially General Motors. But the software for all the autonomy components we develop largely in-house. A lot of these tier-one suppliers are used to delivering an entirely functional component: you hand them a spec and they come back with a functioning board, software and all. In our case, we work on the software internally so that we can iterate quickly and ensure consistency across all the different boards that go into our vehicle.

So, some challenges. You can imagine this brings lots of them; here's a brief overview. Like I said before, tier-one automotive suppliers have very little experience with Linux. They're normally used to working with AUTOSAR, or with other kinds of RTOSes like QNX or FreeRTOS in certain cases. They're not used to developing hardware for a customer-developed Linux operating system; a lot of the parts we use don't even come with Linux driver support. Automotive systems-on-chip are typically designed for infotainment systems, so if we go to a silicon vendor and say we want an automotive-grade part running Linux, they'll usually push us toward their infotainment devices. And for some of the peripherals we need, say a part for an automotive-style bus, there might be only one or two vendors, and they don't provide Linux drivers at all because their previous customers never asked for them.

Autonomous vehicles represent a globally unsolved problem. We need to build devices that nobody has ever conceived of before, using automotive-qualified parts that don't actually exist. This involves some creativity: we often have to use components like systems-on-chip for purposes different from what they were originally intended for. And because we have so many new things to invent, it's very important for us to reduce development time and risk by using proven solutions when possible.

Our devices range from large high-performance computers to small constrained devices. The components where performance is the primary consideration, like the core compute, use x86 parts; most of our other components are ARM-based. Many of the ARM systems-on-chip designed for automotive applications have an additional microcontroller core for functional safety or other real-time operations. And again, the traditional electronic control units that you'd find in a typical vehicle are developed by our OEM partners. So ultimately, we want to support a large number of diverse devices.
We have a huge number of devices that can all be very different from one another: a camera, the in-car networking infrastructure, the core compute. These are all very different, and yet they all have to work together. And we don't always know what new devices we'll have to build; this is, again, a globally unsolved problem, which means we have to be able to iterate quickly and bring up new devices or new components for our vehicle as we determine that we need them. We also need to be able to guarantee reliability and security for our devices.

So how do we do this? Well, we enforce consistency where we can: we make all our devices look as similar as possible and make sure that divergences are deliberate, not accidental. We use existing solutions when we can, but we also want to maximize flexibility, because the range of devices we may be asked to develop in the future is very large. So now I'm going to turn it over to Matt, who's going to talk a little bit about how we use Buildroot.

Hey folks. So the first step in enforcing this consistency and building up this common development environment was deciding what our build system should be, and we landed on Buildroot, for a handful of reasons. First of all, our goal is to build a firmware image, not to build our own custom Linux distribution. We were looking for a tool that was relatively simple, so that we could easily onboard new developers as our team grows, and it has grown quite a bit. We were also looking for speed: we value a high-performance, rapid loop around clean builds, building everything from scratch but doing so in a quick and repeatable way. We also value sustainability: being able to do things like run internal package mirrors and build up a monorepo for our edge components wrapped around the build system. And of course, any build system we choose must be extensible: we need to be able to customize it for our use case and let it scale as our scope and our team scale. For us, Buildroot was the build system that ticked all of those boxes.

So why do we need to extend Buildroot? For those of you who are familiar with it, Buildroot is a Kconfig-based system, with really wonderful support for building a defconfig and quickly regenerating your target. We need to go a little beyond that. If we look across all of the work we're doing, as Steve identified, there are many things that are very different, but there's a core set that we by design want to be very similar. To enable this in Buildroot we've taken a config-layering approach, which we'll go into a bit more: the configuration that is constant is defined exactly once, and the configuration that is board-specific is isolated and captured well (a sketch follows below). This has some other really nice properties. It's the foundational tenet that allows our developers to rapidly switch between boards: maybe today you're working on that networking component and tomorrow you're working on a camera, and we want that developer experience to be very similar between the two.
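Matt describes the mechanics in a moment: partial defconfig fragments are run through the C preprocessor, the same trick the kernel build uses for device-tree sources. Cruise's actual tree isn't public, so the paths, macro names, and the camera package in this minimal sketch are invented:

```
/* boards/camera/defconfig.in -- board layer (all paths and names invented) */
#define CRUISE_SOC_ARM64
#include "common/base.defconfig.in"   /* the constants, defined exactly once */

/* board-specific configuration stays isolated below the include */
BR2_PACKAGE_CAMERA_DAEMON=y           /* hypothetical in-house package */

/* common/base.defconfig.in -- the layer every board includes */
#ifdef CRUISE_SOC_ARM64
BR2_aarch64=y
#endif
BR2_INIT_SYSTEMD=y
BR2_PACKAGE_SWUPDATE=y
```

Something like `cpp -undef -x assembler-with-cpp -P -I . boards/camera/defconfig.in > configs/camera_defconfig` would flatten the fragments into an ordinary defconfig that `make camera_defconfig` can consume; the preprocessor strips the C comments along the way.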
We also want to extend Buildroot further to support some development best practices: automated test execution as part of our continuous integration environment, both traditional unit tests and, extending out, integration tests in a hardware-in-the-loop environment.

This is a quick snapshot of our repository itself. At the core of it we have a buildroot folder, which is Buildroot as a submodule, and we've built up additional content that wraps around it: the configuration layering; support for our locally developed applications, for things that don't warrant a standalone repository of their own; makefiles that manage easier-to-use top-level targets and extended targets, both for our developers and for our build system; and some nice-to-have features. Those include a common place for out-of-tree kernel modules, so that we can write them once and share them across different kernel trees, whether mainline or the SoC-vendor forks that we can't seem to quite get away from, and a common place for output files, so that a developer can have multiple different boards built on a local system without them conflicting with each other.

If we dive into the BR2_EXTERNAL folder: Buildroot has a built-in piece of functionality called BR2_EXTERNAL that allows you to extend the tool, and we try to follow it as closely as we possibly can. It allows us to define our own boards and our own packages, as well as a few dirty little hacks to get the build system to conform to our use case. We try to isolate those and minimize them as much as possible, but we generally see them as the lesser of two evils versus doing deeper surgery in Buildroot and having to maintain that out of tree for things that just don't make sense to upstream.

We talked about this earlier, but here is the configuration-layering approach. As I mentioned before, Buildroot is a Kconfig-based system, so every board you may want to define has a defconfig; but Kconfig and defconfigs don't lend themselves well to layering. So we've taken a trick out of the kernel build process: what it does for device trees, we do for our defconfig files. We essentially run our partial defconfigs through the C preprocessor, which allows us to use the #ifdef and #include language to build a modular, layered system.

As I mentioned before, the other major area we extend is testing. We've extended the concept of Buildroot packages so that we can define per-package steps that run tests: just as a package might define a set of build commands or a set of install commands, it can now have a per-package set of test commands (sketched below). This lets us do some pretty nice things. By defining test commands for packages, we can guarantee that if we are building the package for the target, we are also building the tests, and we can have our continuous integration system ensure that, for each board, we actually execute each of the tests defined for any of its packages.
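Stock Buildroot packages define their steps in a .mk file; the test hook described here is Cruise's in-house extension and isn't public, so `FOO_TEST_CMDS` below is an invented name sketching how such a hook could sit alongside the standard generic-package steps:

```
# package/foo/foo.mk -- sketch only; the test hook is a Cruise-style
# extension, not stock Buildroot.
FOO_VERSION = 1.0
FOO_SITE = $(TOPDIR)/../apps/foo
FOO_SITE_METHOD = local

define FOO_BUILD_CMDS
	$(TARGET_MAKE_ENV) $(MAKE) -C $(@D) all tests
endef

define FOO_INSTALL_TARGET_CMDS
	$(INSTALL) -D -m 0755 $(@D)/foo $(TARGET_DIR)/usr/bin/foo
endef

# Hypothetical per-package test step: because it lives next to the
# build/install steps, building the package for a target implies
# building (and, in CI, running) its tests.
define FOO_TEST_CMDS
	$(@D)/foo-unit-tests --xml $(BUILD_DIR)/test-results/foo.xml
endef

$(eval $(generic-package))
```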
Any configuration options that the package or the board has selected are reflected in that environment, so we get really consistent, automated testing. Today we're using this for unit tests that run on our build machines, but we're looking to extend it further, to run tests on an emulated version of a target in QEMU, and to tie into our existing hardware-in-the-loop system so that we can automate that process as well.

And this is a quick snapshot of our continuous integration pipeline. One of the first things we did when building up our wrappers around Buildroot was to automate the builds. When we're building for dozens of different board and configuration targets, we want to be able to make changes to common areas and actually test them, and we certainly don't want a human to have to go through and run two dozen builds. So one of the first things we built was this continuous integration pipeline: as a pre-merge gate for every change in our repository, we compile every board and every configuration, we run all of the defined unit tests, and where applicable we run integration tests in a hardware-in-the-loop system, actually taking that new firmware, downloading it to the target, and running tests against that device. So I'll hand it back over to Steve from here.

So I'm going to talk a little bit about how we boot and what our bootloaders look like. One of our goals is to maintain consistency among platforms, and bootloaders are inherently board-specific, so consistency is definitely a challenge here. Still, there are ways to make this happen. We have lots of very different boards to manage; as Matt said, the same team manages all these boards, and we need them to be conceptually similar in order to have a consistent developer experience. So first we need to decide what we can reasonably make common. At Cruise this includes secure boot, for which hardware support is a hard requirement; redundant OS images, which Matt's going to talk about in a little bit; and the initial flashing procedure, meaning how we enable these boards to be flashed in the factory, at manufacturing time, in a similar way.

U-Boot is the de facto standard bootloader for embedded ARM systems. It's well known and well supported, and it's extremely configurable, but SoC vendors almost always fork and customize it. Sometimes vendors will continue development on their private forks for years; we could go to a vendor and they'll say, "oh, you need a bootloader? Just use this modified U-Boot from 2016." That fork might support all of U-Boot's features, or only a subset, based on what the vendor prioritizes, which is often whatever most of their customers ask for. But we're often very different from a lot of their previous customers, so the features we need may not be what they prioritized. In any case, this makes sharing code between different devices very difficult, because they may have radically different U-Boot trees.

So yes, reusing code in U-Boot can be a bit of a challenge, but we want to enforce a few things. We want to keep the files we add to U-Boot to a minimum, so that we can easily carry those additional files and patches through vendor version updates, and ideally apply them to multiple forks if necessary.
We want consistency: common standards across all our boards, so that they all have roughly similar feature sets and operate in similar ways. And yet we also need flexibility, because we want to be able to reuse our code as much as possible, even when we can't yet conceive of the boards we're going to have to develop. We can't require all vendors to work from the same tree, but we can enforce requirements as part of procurement. Things like image signing and network access on at least one defined interface can be made hard requirements in our procurement process, ensuring that vendors provide at least this minimum level of support. For peripherals, some may be required and some merely nice to have; we have some flexibility there, depending on what the part needs to do and what needs to be accessible at boot, but we still have the ability to ensure, as part of part selection, that we get the support we need.

The best way to enforce commonality in U-Boot is to get as much as possible out of the source code. This means device trees, which put the device description into a portable format. This is standard U-Boot practice now, although we still sometimes encounter vendors who are using an old U-Boot or have never converted their devices to be device-tree-based, so we make this part of our vendor requirements when we go to market. Additionally, a common U-Boot script helps get a lot of the business logic out of the U-Boot source code. Scripts are U-Boot best practice, and they can be signed for security: you can store your script somewhere other than embedded in your U-Boot image, and yet have U-Boot check a signature on it to ensure that it's secure. And you can generate scripts from a template, so that common and board-specific functionality can live in the same script.

This slide shows a short excerpt of such a script, for a situation where we attempt to netboot as a kind of emergency recovery mechanism (a representative sketch follows below). Basically, we just try to netboot three times; if that fails, we move on. It's a fairly simple script, but by putting it in a common place, all our boards immediately have this functionality and can all recover in the same way, so that we can depend on it existing in all of our edge devices. And for this particular case, we can work with our vendors to ensure that we have the necessary network access in U-Boot, meaning that they provide the drivers and support for this kind of network access on their device.

Scripts can't handle everything; sometimes you still have to go in and write C code. But we try to keep this as minimal as possible. U-Boot has normal customization methods for this, and we try to stay within them and not patch extraneous parts of U-Boot, simply because those patches become very unmaintainable. Especially for board-specific things that don't really make sense to upstream, every change becomes a patch you have to carry, and we want to keep those patches as small as possible.
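The slide excerpt itself isn't reproduced in this transcript; the following is a reconstruction of the described behavior (try to netboot three times, then move on) in U-Boot's hush script syntax, with invented variable and file names:

```
# Sketch of the netboot-recovery logic (not Cruise's actual script).
setenv tries 0
while test ${tries} -lt 3; do
    # 'dhcp' fetches the named image from the network into RAM
    if dhcp ${loadaddr} recovery.itb; then
        # bootm verifies the FIT signature and, on success, never returns
        bootm ${loadaddr}
    fi
    setexpr tries ${tries} + 1
done
# Netboot failed three times; fall through to the normal local boot path.
run local_boot
```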
Okay, I'm going to turn it back over to Matt for device management.

Yeah, so building on what Steve just presented: now that we have this common bootloader layer, with our core expected functionality captured in a script, we want to build some business features on top of it. If we look at the autonomous vehicle as a whole, and specifically at the autonomous components of it, we can really visualize it as a large distributed computer system. That means that if we want to apply an update, if we have a release of our software, we want every node in that system to be in a well-defined state; we are potentially applying a software update to every node in the system when we update to a release. And in order to enable our small team to tackle this sort of problem, consistency is once again fundamental. If we can come up with a similar update mechanism that we can share across all of the components in the system, or at least the ones that we develop in-house, we make the update process much simpler and far more reliable.

To do that, we've chosen the SWUpdate tool to manage our system updates, and we're able to use some broadly common configurations across the board. (There was actually a really great talk from Jan Kiszka this morning that goes into much more detail on a system that is rather profoundly similar to ours, so I encourage folks to go take a look at that.) Around SWUpdate, we've built some outward-facing REST APIs. We've decided that this is an easy, least-common-denominator interface to expose outside of our system: something that we can make use of in development and in debugging, and something that we can secure for production. So we've taken the underlying SWUpdate tool and wrapped it in REST APIs that automate things such as querying versions, that first-time flash and provisioning step Steve talked about, and updates in the field (a sketch of the pattern follows below). And given this REST API, we can then build easy-to-use client tools for all of our different use cases.

When we look at actually deploying a release to our system, we have a fairly nice one-way graph. The core compute, the brain of our vehicle, itself runs the same kind of operating system that we've described here, and can of course update itself via that same tool. It can then act as the orchestrating element for talking to all of our edge components: it queries each component's version, determines whether it matches what's in the release manifest, and if it doesn't match, goes and applies that update. As Steve identified, we've also built into our scripts this nice fallback mechanism of a TFTP boot, so the core compute can act as that TFTP server of last resort in that case as well. So we get very consistent behavior for both happy-path updates and failure recovery.
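Cruise's device-control framework isn't public, but as Matt explains in the Q&A later, the REST endpoints simply shell out to the swupdate command-line tool. A minimal sketch of that pattern, assuming Flask and with invented endpoint names and paths:

```python
# Illustrative sketch: a tiny REST facade over the swupdate CLI, in the
# spirit of the wrapper described in the talk. Endpoint names, the
# version-file path, and the copy-selection scheme are all invented.
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/version")
def version():
    # Report the currently running firmware version (hypothetical file).
    with open("/etc/fw-version") as f:
        return jsonify(version=f.read().strip())

@app.route("/update", methods=["POST"])
def update():
    # Stash the uploaded .swu, then let swupdate install it into the
    # inactive copy of a double-copy layout ("-e software,mode").
    image = "/tmp/update.swu"
    with open(image, "wb") as f:
        f.write(request.get_data())
    copy = request.args.get("copy", "copy1")
    result = subprocess.run(["swupdate", "-i", image, "-e", f"stable,{copy}"],
                            capture_output=True, text=True)
    ok = result.returncode == 0
    return jsonify(ok=ok, log=result.stderr), (200 if ok else 500)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```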
And if we're updating all of these components, possibly frequently (in development, very frequently), we need this to be extremely reliable. As Steve identified, many of the components in our system might be buried somewhere deep in our vehicle, where it would take a tremendous amount of effort to expose anything more than an Ethernet interface if we make a mistake, lose power, or something else goes wrong while applying the update. So to solve this, we try to use redundant copies of everything that we possibly can.

This is an example memory or storage layout. It's illustrative, not the actual storage layout on our devices, but essentially we try to have these blue and green copies of almost everything. If we can do it for the bootloader, we do so; that ends up being very SoC-vendor-specific, depending on what sort of capabilities are in their boot ROM. But where possible, we do this, and certainly all of the layers beyond the bootloader we protect with redundant updates. That means two images are stored on the device, and while we are running on one, we can update the other. This plays really nicely with secure boot: we can still sign and verify all of these different image stages and maintain our root of trust. And it gives us nice fallback mechanisms, where fallbacks could be triggered for a variety of reasons: maybe storage is corrupt, maybe we released a bug, maybe there was just a power failure. In all of these cases, we are nicely resilient, and we can recover the system and reattempt the update.
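The double-copy scheme maps naturally onto SWUpdate's standard sw-description file; SWUpdate's own documentation describes a layout along these lines, and the partition paths and version string here are placeholders:

```
software =
{
    version = "1.0.0";

    stable:
    {
        copy1:
        {
            images: (
            {
                filename = "rootfs.ext4";
                device = "/dev/mmcblk0p3";   /* placeholder partitions */
            });
        };
        copy2:
        {
            images: (
            {
                filename = "rootfs.ext4";
                device = "/dev/mmcblk0p4";
            });
        };
    };
}
```

A system running copy1 would install into the other side with `swupdate -e stable,copy2`, and vice versa, so the running image is never the one being written.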
So with that, I'll hand back to Steve to wrap up.

So in summary: ultimately, we need to be able to rapidly innovate on completely new hardware. To do that, as we described, we want to use common, well-known, flexible tools, and we want to define common components and configurations so that we can drive reuse across our system. And we don't want to reinvent the wheel; we have plenty of other parts of the car to reinvent, and we want to minimize our risk by reusing solutions that are already effective.

For more information, I've included links to some blog posts where we've talked about related topics; you can download the slides off of Sched and click on these at your leisure. One of them talks about how we source and design hardware. Another covers vehicle security: I didn't really cover security in this talk because we have a separate team at Cruise that focuses specifically on vehicle security, and that post is a great overview of how they think about it. We also wanted to give a shout-out to a talk by Jan Moore (and I may be totally mispronouncing his name) at ELC Europe in 2017, mainly because it was a large inspiration for how we built our system, and we felt we'd be remiss if we didn't mention it. And with that, we can start looking at some of the questions. Thank you for participating.

So I think there's a nice little cluster of questions around using mainline U-Boot. Sorry, can you folks hear me? Okay. So on some boards, we are able to do so, and when that is feasible, we absolutely want to. The patches that we actually need to author for U-Boot are usually fairly well contained to the truly board-specific initialization content, so not something that is of tremendous benefit to the broader community, but where possible we do try to use mainline. We have a handful of vendors with more bespoke content in their U-Boot trees that have diverged rather heavily from mainline, and for those cases we find it to be lower effort for our team to simply use the vendor fork, rather than essentially do the SoC vendor's job of maintaining their patch set on top of mainline. The nice thing about the script-based solution is that I'd say 95% of the logic we care about is actually contained in that U-Boot script, and that's a thing that is highly portable from board to board.

One thing I'll add to that... go ahead, Steve. (Apparently, Matt, there's a noticeable delay between you and me, which is really awkward.) What I was going to say is that I've definitely had the experience in my career, working with a vendor U-Boot or a mainline U-Boot, where some processor errata gets fixed on the vendor's branch and nobody is told, and then you spend weeks trying to diagnose an issue that turns out to have already been fixed. That is not an experience I really like to repeat. But I think the solution for these things is to have the vendors aggressively merge these fixes as they find them, and to upstream them as they do. At least that's my experience.

And just to quickly address this (I tried to put it in the chat, but I have no idea whether that's working): the talk I referenced from earlier this morning, which goes into much more detail on an embedded-device update system, is on secure boot and over-the-air updates, from Jan Kiszka. So definitely take a look at that; I think the PDF will be posted if it's not already up there.

Just scrolling down the list of questions, we have a question: are you using any RTOS, or how do you meet real-time constraints? Great question. What we've focused on for this talk was our use of Linux and some standard open-source bootloaders and other tools, but our system is heterogeneous. We do have parts of the system that have much harder real-time constraints, and other parts that have far looser constraints. Our general approach is to look at the requirements for the system, and when they can be well served by a nicely tuned embedded Linux system, we will pursue that, because it gives us some major benefits in terms of our speed of development. Certainly for other parts of the system, we have to use different approaches.

So we have another question about the partitioning scheme: this is a strange layout, what is the reason for it? So the FIT image is not stored in the rootfs. The FIT image is actually the kernel: we load the kernel from one location, and it will then go and mount the rootfs from a different one. That is the reason for that format. And again, this slide is illustrative; this is not an actual flash layout. We're simply showing the logical elements that exist as linked-together copies. Essentially, we have a FIT image, which includes our kernel and device tree, plus a rootfs, and those two are locked together as a configuration.
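For reference, a FIT image like the one just described is built from an image-tree source (.its) file that bundles the kernel and device tree as one signed, versioned unit. This minimal sketch follows the examples in U-Boot's documentation, with placeholder load addresses and key name:

```
/dts-v1/;
/ {
    description = "Kernel + DTB as one verifiable unit (illustrative)";
    images {
        kernel {
            data = /incbin/("Image");
            type = "kernel";
            arch = "arm64";
            os = "linux";
            compression = "none";
            load = <0x80080000>;      /* placeholder addresses */
            entry = <0x80080000>;
            hash { algo = "sha256"; };
        };
        fdt {
            data = /incbin/("board.dtb");
            type = "flat_dt";
            arch = "arm64";
            compression = "none";
            hash { algo = "sha256"; };
        };
    };
    configurations {
        default = "conf";
        conf {
            kernel = "kernel";
            fdt = "fdt";
            /* signing the configuration lets U-Boot verify the pair */
            signature {
                algo = "sha256,rsa2048";
                key-name-hint = "dev";    /* placeholder key */
                sign-images = "kernel", "fdt";
            };
        };
    };
};
```

`mkimage -f kernel.its kernel.itb` assembles the image; the rootfs it pairs with lives in a separate partition, as described above.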
So there was a question on how long the CI loop takes to run against individual changes. (Sorry folks, we're dealing with a little latency on the audio here.) For the CI loop, it really depends on the complexity of the target. For some of our less complex targets, we can do a clean build in CI in about 30 minutes or so; our more complex ones can take on the order of an hour. And that includes the build time, the test time for our unit tests, and some of the preliminary work to test against emulated targets, doing some of our integration tests against a QEMU-based target. We find that's a pretty nice balance of developer efficiency combined with local builds, which are much faster for iterative changes. Buildroot has some nice tools that allow you to rebuild only the changes you've made, reconstruct the image, and test it on the target or on emulated hardware locally, rather than being exclusively reliant on a clean-slate cloud build.

I think there's a question about which Linux kernel we use. In general we try to stay on LTS releases; I think most of our boards are now using 4.19, I believe that's true, and the idea is that we would move forward over time, eventually to a 5.4 release and so on. For the kernel, it's certainly easier to leverage a mainline tree, and we do that for many of our boards. For other boards that have larger amounts of content in a vendor fork, where, again, we don't want to sign our team up for doing the silicon vendor's work of mainlining it, we will in some cases use the vendor fork. We try to make sure that the development we are doing, like any of our novel drivers, is done in such a way that it could be mainlined when there is common content that makes that useful, or at least is easily portable and works against the mainline kernel, so that we can freely move between versions.

So we have a couple of questions related to SWUpdate. One is: does the core compute use SWUpdate to send updates to other devices? And a related one: are your REST API extensions for SWUpdate available to look at? I can address both of those. The REST API is part of a more generic device configuration and control framework that we've written, and under the hood it simply automates the SWUpdate command-line tool. SWUpdate has some built-in tools for running a server or for talking over a more programmatic API, but for our use case, we found it easier to simply automate the command-line tool. So the REST API is not really an extension to SWUpdate; it's more of a generic device-control framework that we've written for our devices, exposing a few endpoints related to performing updates. Under the hood, we're simply calling the swupdate command-line executable with switches that are populated from the REST API. And we don't directly use SWUpdate to push out updates; instead, the core compute calls into that REST API on the edge devices, which causes them to run the update process locally.

So we have a question: is there a reason why you didn't go down the AGL route? Yeah, and the answer ties back to some of Steve's earlier content. Many of the devices that we're building on were not envisioned as automotive-grade components, so if we go look at what support we can get from a silicon vendor, in many, many cases they have not standardized around AGL. It's certainly something that we're tracking and something that we want to be able to leverage if possible, but it would be too large an effort for our team to try to standardize on it across all of our different SoCs and silicon platforms at this stage.

So there's a question about firmware upgrades: is there anything that prevents the security risk of upgrades happening while driving?
So yeah, the situation is that we just don't do upgrades while driving. Upgrades pretty much only happen when the car is in a quiescent state. Cruise operates these vehicles, so we can control when they get upgraded. You will not be riding in a car and have it suddenly decide to upgrade; all the upgrades happen when the car is quiescent. Yeah, and just to address the security aspect of that: the nice thing about this platform is that it is a fully enumerated system. We know exactly all of the components that are in the system, and all the network elements of the system, when we launch the overall software release. So we can take some extra steps to ensure that the only traffic flowing through our system, and the only things talking to those REST APIs, are coming from known and trusted sources: a combination of some simple policy and some security techniques to ensure that there won't be updates occurring while we're driving.

So the next question: how many different systems-on-chip and vendors do you use? I don't know the exact number, but it's several; it's enough that it becomes a management challenge. And there are a lot of reasons for that. We don't want to single-source all the parts in our system; we want vendor diversity, and yes, that does introduce some of the challenges you've seen. Also, because we're making something that hasn't been built before, there isn't really one vendor that's going to provide everything that we need. We have to constantly go out and find the best tool for the job, and the best tool for the job might be taking a tool that's designed for something else and using it for what we want to do. So yes, we use a large variety, but I don't know offhand what the exact number is.

So we have a question here related to our storage devices: what about the disk systems, eMMC, NAND, NOR, reliability on failures like power failures, and what kind of file systems are you using to protect against corruption? Great question. So essentially, we use a variety of flash technologies, quite a bit of eMMC and NOR flash, and we use those in different places depending on the density of storage that we require. Secure boot actually provides us with some really nice guarantees around the integrity of our system: by verifying each of the stages cryptographically, we can ensure that each step in that chain is both from a trusted source and has not been corrupted. And we can use a similar tool (again, if you want more details on this, Jan Kiszka talks quite a bit about it in his talk), dm-verity, to verify the integrity of our rootfs, which is a read-only system, so we can cryptographically verify that as well. For our configuration data, this other data area here, we have really intentionally designed our system to be resilient to any failures there. We use file systems that are resilient to corruption from power failures, we take steps to quiesce all users of that file system before going through a planned reboot or power cycle, and we've also taken steps to ensure that if that partition or file system becomes corrupt, we can seamlessly recover it, with no harm and no indeterminate state for our system to be in.
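The dm-verity flow Matt mentions can be sketched with the stock veritysetup tool. The device names below are placeholders, and in a secure-boot chain the root hash would travel inside a signed artifact rather than being typed by hand:

```
# Illustrative dm-verity flow for a read-only rootfs (placeholder devices).

# At image-build time: generate the hash tree; veritysetup prints a
# root hash, which gets stored/signed alongside the boot artifacts.
veritysetup format /dev/mmcblk0p3 /dev/mmcblk0p5

# At boot: create a verified mapping; any block that fails verification
# against the root hash is reported as an I/O error.
veritysetup open /dev/mmcblk0p3 rootfs_verified /dev/mmcblk0p5 <root-hash>
mount -o ro /dev/mapper/rootfs_verified /mnt
```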
We have about one more minute here, so maybe I can take another related question: are both images identical, with one as a backup, or is it some sort of previous version and current version? The scheme we're using right now is previous-and-current, and this is what we use across almost all of our systems. We have some that take a similar approach with a golden image and an active image, and other systems where, if we simply don't have enough storage, maybe due to mechanical constraints, we rely on that TFTP-based recovery mechanism to ensure that we can get back to a known state. But nominally, we're using current-and-previous: toggling from A to B, or from blue to green, and then back.

I think that puts us out of time, but you're always welcome to find us on Slack. There are a number of questions we didn't get to, so if you want to reach out to us on Slack, we'll be happy to try to answer them.