Got it. Makes sense. Sweet. This should be fun. So let's talk about Edge Compute a little bit more: our definition of it, so we get some common ground, and all the things we did wrong that hopefully you will not do wrong, which is very important.

The first thing I like to do is a little bit of a level set. Edge Compute, what is it to me? So you have an understanding, and maybe it will make you go, hey, wait, that is entirely different from what we're talking about. Basically, we're looking at compute that takes place at or near the physical location of the producer or consumer of data. Apologies for the creaky floor here. Beautiful wording, super exciting. Edge locations may be properly built-up data centers. More often than not, they're not. They're shoddy hardware held together by stuff you could find last minute at Home Depot or whatever your local equivalent is. And, well, you sold it to the customer and they signed for it, and that's great.

Way before we had Edge Compute, at the beginning of the '90s, any point of presence for a CDN, a content delivery network, was pretty much Edge Compute. It wasn't as smart as it is today: not a lot of functionality running on the edge, usually serving some HTML. Computationally still very important, still very impressive, but certainly not as beautiful as the mobile wireless deployments we see these days, where you can run very high-end compute very far out, in the middle of nowhere. Good stuff, though. Way more fun than some of the things we're running on.

Point is, Edge Compute isn't new. And yet, what we see with a lot of projects is that a lot of folks and engineering teams, good engineering teams, are making the same mistakes at the edge that we made in our data centers and in the cloud, irrespective of whether those are mistakes you still need to be making. So for today, I want to look at three cases, three engineering projects, and share some of the things we learned from them. I want to talk about over-engineering a thermostat, I want to talk about making a really fancy counting script, and I want to dive deep into feeding fish without the frenzy. My goal is really for you to leave this session with enough information that you can make new and better mistakes than the ones we did.

Before I joined HashiCorp, I worked on industrial IoT at the Amsterdam Airport. Super exciting. Airports are fun places to work at. Quick shout-out to my favorite airport: 52 million passengers a year. That is a lot; it puts us in third or fourth place in the world. That's post-pandemic, obviously. Amsterdam Airport Schiphol. Yes, sir. Thank you. I'll take all the credit for that, except when the security lines are long. That's definitely somebody else.

If you've been at airports, and most people come to conferences via airports, you know that the primary function of an airport is to get people from A to B. Load and unload humans; that's basically how it works. Departing passengers take a route that starts at the curb or the train station (obviously, in the Netherlands you show up by bike), head through baggage drop-off, go through security, past shops (hopefully spending enough to offset the very high retail and real estate fees), on to the gate, and across the jet bridge into the plane, with the goal of having butts in seats before the plane is supposed to depart. Plane departs late? Somebody is paying for that.
And next year your ticket prices are going to go up. Not very great. Arriving passengers are pretty much the reverse of this process: leave the plane, head on out to the curb. They usually don't go past the shops, much to the chagrin of all those shop owners. And of course, there are the passengers with a connecting flight; we do about 30 million of those in Amsterdam every year. Not 30 million flights, 30 million passengers, I should point that out. Not that bad for the environment. Their journey ends and starts at jet bridges or in planes.

Computationally, every passenger is a whole series of API requests. When you show up, you scan your ticket. Even earlier, when you book your ticket, that's the first time a new data entry gets created. When you check in, when you drop off your bag, everything gets calculated. The airline has to know where you're at; the airport has to know where you're at. We're at a venue that has a fire code. An airport has a fire code too, slightly stricter, because panic there can be much more troublesome. If you've ever been at an airport when there's a fire alarm, you'll find it's the smoothest fire-alarm experience you'll ever have. Because people know their stuff, and because there's data that tells them which areas to focus on first.

To make those protocols work, there's a lot of data going around. Literally every step you take in the airport, and every status change (dropping off a bag, checking in, going through security) has an API request attached to it, at least for us. There's also an acronym attached to it, because aviation is very much like the military in that sense: anything that can be a three-letter acronym will be a three-letter acronym. And as engineers, of course, we know all these API requests get stored somewhere.

So imagine yourself at a busy airport. It seems like a good thing. I mean, if it's busy, that's good for you and good for the airline. In reality, it's not. Other than the fire and safety hazards, keeping those aside for a second, a busy airport where people spend too much time not deplaning and leaving the area means a heavier burden on the API. The Twitter API, that is, because people end up complaining.

So airport design is all about risk mitigation, which means PAX ops, passenger operations, is very much about understanding what the pulse of an area is. And one way to ensure that is to use some sensing equipment. The mission statement is always fun: we would like to know how our airport is doing. Imagine stepping into a control room, beautiful high-res screens, and all you want to know is: how's my baby doing? The problem is that how your baby is doing is not a binary question. It's a question of about 150 different data points that somehow need to end up in a database that you can then correlate the right insights from, and basically deduce the answer to that question.

So if we're talking about sensing equipment, well, it's easy: an airport is a big area, so let's deploy a wireless IoT network. Amsterdam Airport uses LoRa, one of the coolest technologies if you ever get to work with it. On paper, a 16-kilometer, 10-mile range. Math is very hard live on stage, which is why we usually use spreadsheets for this. To put that into perspective: with home Wi-Fi, even fancy outdoor Wi-Fi, the best you're going to get is something like 500 feet. So we're talking range.
Except, if you've ever worked with radio and spectrums like that, you know it's not the range you're actually going to get. Luckily, our software can account for that: if a beacon doesn't get an acknowledgment that its data was actually received, it will happily retransmit. Of course, you're going to pay some battery for that. We'll look into that in a little bit.

The cool thing about LoRa is that it uses a chirp spread spectrum protocol. The spectrum part is too difficult to explain here, because I lack a way of simplifying it. But imagine a beacon whose battery can last from five to ten years. My phone lasts about 18 hours, so five to ten years sounds pretty good to me. Of course, you're not going to get the same fidelity; your phone does much more and is high-powered. But a beacon at the edge that can measure humidity, temperature, and noise level, assign those values to an area, and then help you figure out, hey, is this area safe or not? Very important, because that means you can make operational choices.

So if we simplify the statement a little bit: let's deploy 1,000-plus sensors that can track occupancy. Is the check-in desk open? Very important, because it means that little prompt in your airline's app that says go to this counter actually sends you to a counter that is open at that point. Temperature and humidity: do we need to kick in the AC? Do we need to make sure we have a temperature that's healthy for people? Very important. Amsterdam does get three days of summer, and we like to be cold at that point. Not chilly, actually cold. Because in the Netherlands, complaining about the weather is very important. It's one of the first skills people learn. And of course, safety. We all carry around phones, the airport has a wireless network, and we have whatever the PC term for sniffers is, to figure out how many people are actually in an area and make security assessments. If there are too many people, you need more security personnel, because there's a ratio you need to hit. Lots and lots of different data points.

And remember, those sensors can last five to ten years from an operational perspective. Well, let me take a step back here. Who here stayed at a company for more than three years? That went surprisingly differently than I thought it would. How about five years? All right, there we go. See? Already a smaller group. Most software engineers stay two years. You're never going to worry about the battery having to be replaced. It's not a problem you're going to face early on; it's very much set-it-and-forget-it territory. You deploy it, and by the time the battery needs replacement, you're probably on the third generation of that product.

Operationally, though, that means you have a math problem. The PAX ops team would like to know how long this stuff is going to run before they have to write a new check. And luckily, there's a simple formula for it. We'll start by defining the capacity of a single battery. This is going to be fun, I think. We'll define the current that's used for various active and passive operations. We'll, of course, define the number of measurements we're taking, because every time you measure, you use battery, just like every time you switch on your phone, you use battery. And then we define how long each operation takes. A measurement might take half a second; a transmission of the payload might actually take five seconds. There's a cost attached to that.
And finally, we have to account for self-discharge, because even when not in use, batteries bleed capacity. With all of those defined, we've got a pretty simple formula. It is so simple that you could run it as a shell script, as a bash script. You probably shouldn't. We didn't. We found a vendor that commercialized a very nice open-source big-data solution and allowed us to store this. The reason is really simple: if you deploy 10,000 of these devices and you need to do these calculations a lot, it is nice to have an integrated solution.

Ultimately, if we go through this: take the amount of time the device is in measurement mode, times (looking at my preview here) the current the device draws in measurement mode, divide by the number of milliseconds per hour... it's a lot. I don't like it. And then just some more math. Once we get through that, we've figured out every part here. And we were really smart: 365.25. You know why the .25 is in there. If the battery is going to last ten years, we're going to hit at least two leap days in those ten years. I think that's great. We were so smart. We considered ourselves a powerful engineering culture.

And we got our number. We were kind of disappointed. The package said eight to nine years; we'll take 7.6. I mean, it's still a beautiful number. None of your AAA batteries get there. But our plan was to exchange the batteries after six years, for one simple reason: operational overhead. We wanted to make sure stuff doesn't break when we don't need it to break.

So, going back to that mission control room, we graphed this for every sensor. And we got a nice little line. The line looked great. We started in 2018; by 2023 we were going to be at just above 50% of battery left. Great. At that point, I was long going to be rotated out of that project; it was going to be somebody else's problem. At least it wasn't an immediate problem.

So the next thing we did was start tracking the actual battery: real-world usage versus what our spec sheet said. And you might say we were confused. You might think the formula is wrong, but the formula is simple enough that if we spent 15 minutes together, you could completely understand it. It was not complex after all. So we kept measuring. We went from confused to, I think, puzzled. Because something was happening. What was happening? Nobody knew. Summer was coming, so most of us just decided to take a break. We came back. It was great. This was not the 7.6 years we were promised. You're no longer puzzled at this point; you're just deeply frustrated. And it gets better: after we hit 4%, we hit 0% pretty quickly thereafter.

Battery capacity that bleeds like this, when you now have to deploy an army of people to replace batteries in small, hidden devices, is very costly. The cost of the batteries pales in comparison to the people and the hours they're going to spend fixing this problem, which was a software engineering problem from the get-go. What we didn't account for: when you buy 10k batteries, buying them in batches of 1k is great, because you should never buy lots of hardware from the same vendor in the same batch. But it also tempts you to test just 5% of them and extrapolate, and that will not tell you whether the whole lot is good to go.
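To make that battery math concrete, here is a minimal sketch of the estimate in Python. The capacity, currents, and durations are illustrative placeholders, not our real spec-sheet values; the shape of the calculation is the point.

```python
# Back-of-the-envelope battery-life estimate, per the formula above.
# All numbers here are illustrative assumptions, not real spec values.

CAPACITY_MAH = 19_000            # e.g. a large lithium D-cell

SLEEP_CURRENT_MA = 0.005         # passive draw while sleeping
MEASURE_CURRENT_MA = 5.0         # draw while taking a measurement
TRANSMIT_CURRENT_MA = 40.0       # draw while transmitting a payload

MEASUREMENTS_PER_HOUR = 4
MEASURE_SECONDS = 0.5            # a measurement takes ~half a second
TRANSMIT_SECONDS = 5.0           # a transmission takes ~five seconds

SELF_DISCHARGE_PER_YEAR = 0.01   # batteries bleed capacity even unused

# Charge drawn per hour: each operation's current times its duration
# (converted to hours), plus the always-on sleep current.
active_mah = MEASUREMENTS_PER_HOUR * (
    MEASURE_CURRENT_MA * MEASURE_SECONDS / 3600
    + TRANSMIT_CURRENT_MA * TRANSMIT_SECONDS / 3600
)
hourly_mah = active_mah + SLEEP_CURRENT_MA

# 365.25: a ten-year battery will live through at least two leap days.
yearly_mah = hourly_mah * 24 * 365.25
yearly_mah += CAPACITY_MAH * SELF_DISCHARGE_PER_YEAR

print(f"Estimated battery life: {CAPACITY_MAH / yearly_mah:.1f} years")
# With these placeholder numbers this lands around 8.6 years, the same
# ballpark as the eight to nine years printed on our packaging.
```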
The other thing that is very important: software engineers bring a certain confidence, or hubris, depending on which side of this conversation you're on, to engineering projects. As a software engineer, I know how to make code go from left to right. I can also align it on PowerPoint slides. I'm less well-versed in thermal engineering, and in figuring out how a sensor attached to a ceiling might be affected by the ambient temperature, the surrounding humidity, and the fact that an airport is built like a bunker in terms of radio transmission. Nothing goes out through the windows. That's great, because we don't want to mess with the planes. It also means that all your devices actually transmit at about 50 times... sorry, not 50 times the power, 50 times more often than they were supposed to. For one simple reason: the device is told to send that payload, that measurement, to our central server. And it will do so until the battery is empty. Great, great experience.

We were confused, puzzled, frustrated, all of the above. It didn't look like we expected it to look. And one reason for that is that we had our very own NASA moment. NASA at one point kinetically landed a probe by mixing up metric and imperial units in the same project. And when I say kinetically landed, I mean they smashed it right into the surface. I think the term for that is unscheduled disassembly. In our case, LoRa has this fun way of reporting battery. Most sensors give you battery level on a scale of 0 to 254. The engineer that worked on this project and the reviewer both did not read that part of the spec sheet. So the 83 in your calculation is not 83%. It's quite a bit less than that; it's about two-thirds empty. Makes projecting things a lot harder.

So, first learning for today: when you do engineering projects, edge computing projects, understand the whole flow. Understand what you can be monitoring. I think that is literally the most critical thing. You are usually going to be deploying a network, because most edge deployments are like dogs: you can't have just one. Have monitoring in place that can account for the stuff we don't traditionally account for. If you're running stuff in a cloud, things are easy. I don't have to worry about the temperature of my data center. I don't usually have to worry about it getting flooded, unless I have something deployed in Paris. Different story, of course. And understand how the math works, which is really the hardest part here. In our case, LoRa does not give you enough compute on an edge node to run the monitoring you want to run. This is good, because in theory you get many years of battery. In reality, it also means the place where you can monitor what's going on is very far from the edge. In our case, you have to go to the packet forwarders. And then we used our asset management system to figure out what was going to need replacement next.

So the first learning is really: go beyond the software. For this project, I feel a software engineer would almost do well to take a brief course on electrical engineering and materials science. When you walk around airports, you generally don't see much of the IoT most airports deploy. That's because all that stuff is hidden. It's painted the exact same way as the ceiling; it's made to not stand out. That paint that looks like the ceiling of your airport might have some effect on the radio transmission. Not enough to be noticeable in a lab test; very noticeable in a real-world deployment, though. No plan survives contact with the enemy. This was pretty much that.
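For reference, the conversion that both the engineer and the reviewer skipped is tiny. A sketch, using the LoRaWAN convention where 0 means external power, 1 to 254 is the usable scale, and 255 means the device couldn't measure:

```python
from typing import Optional

def battery_percent(raw: int) -> Optional[float]:
    """Convert a raw LoRaWAN battery byte to a percentage, or None."""
    if raw in (0, 255):          # externally powered / not measurable
        return None
    return raw / 254 * 100

print(battery_percent(83))       # ~32.7: about two-thirds empty, not 83%
print(battery_percent(254))      # 100.0: actually full
```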
For us, being able to put these parts together, not just by understanding the spec sheets but by having real-world data, could have made a lot of difference. But thankfully, we were also able to give job security to many maintenance engineers, who were kind enough to replace batteries.

So airports are cool, but let's talk about grocers. And in this case: building a fancy counting script and DDoSing ourselves from within. Imagine a company, an organization, that has 1,200 retail locations. 150 of these, about 12%, are in the hypermarket category: basically a department store and a grocer in one physical location. 80% are on-the-go kind of shops, the stuff you see at airports and train stations. There are a lot of them. And in between the two sits, of course, the traditional supermarket: just a retail location, nothing else attached to it. Altogether, a sizable amount. This is the beautiful edge deployment. Each of those has between two and 75 checkout registers, all running outdated versions of Windows. Not even Linux. An outdated version of Linux would be fine. An outdated version of Windows is just, well. Good times.

We all know how retail stores work. You pay for a product, and somewhere in the back storage area somebody gets a notification that, hey, that was the last mango we could sell today; we need to restock and reorder. Usually, that notification is pretty seamless. You might also swipe your loyalty card and get points for the transaction you carried out. Of course, that mapping is never one-to-one, and it's usually not in your favor. But hey, we still chase those points, because those points eventually mean we get a free drink after spending about $561. And we love it, right? Free stuff is good. The thing with that is, there's a lot of engineering that happens for the store, or anyone who runs a loyalty scheme, to cheat you. Because all that screwing around with numbers has to happen somewhere. In our case, it happened on the point of sale. POS, not the other POS.

And let me put it this way: the "About this operating system" screen had a number that started with a 1. That was not because the operating system was version 1. That was the copyright year. So you can figure out how modern our tooling was, and what kind of access we had to anything remotely modern. So, one way to work around this. The checkout register is your absolute edge, and it's moderately locked down. Sure, the USB ports are open, and as an attacker there are about 15 different talks you could get out of that for any supply chain security conference, or any security conference really. But we couldn't install anything. The good team was not allowed to install software on there. We just left that invitation open to the hackers.

Well, all of the stores have their own processing: slightly more powerful devices, usually unattended. I wanted to put in the picture that we had: unattended Dell desktops that collected all the data. And I see you shaking your head. Sorry, nodding your head. It is not as far-fetched as it sounds. When you see that in movies, they didn't pick those computers because that was the only thing available in the prop department. It is because that is the real world. And it's painful. Usually with the Windows license key sticker scratched off, too.

So the data that gets sent around is not super special for each transaction, in our case. We built loyalty card software, and we wanted to make sure people got their loyalty.
There are about four different things we want from you. We've got a transaction ID, very important, because loyalty points are technically equivalent to money, so PCI DSS and everything else applies. We of course need to know when the transaction took place, which card was swiped, or scanned, or whatever, and the amount before taxes, because obviously you don't get points on the taxes. That would be bad for everyone.

These transactions are 120 to 200 bytes. Nothing big. Even in this case, and this was a chain in Asia, so you're using UTF-16, it's still not a lot. One megabyte is good for about 4.1K transactions. And of course you account for envelopes, so in our case we put a factor of 25% on top, because we want to be generous and make sure we have enough storage. Per store, on average, 2.5K transactions a day. The hypermarkets much more, the small on-the-go shops much less. Averages are always a lie, but 2.5K is actually what we worked with, and it worked for us. Apologies for that. Per store, transactions times size: really not a lot. If I download an app update from the app store right now, it's going to be about 100 times that.

Here's the problem. 723 megs is not a lot of data. What is a lot of data is 3 million API calls (1,200 stores times roughly 2,500 transactions), especially because all of these stores are in a single time zone, and they all close at the same time, or within 15 minutes of each other. You cannot build an API that withstands that many inputs without having any brokering in between. We didn't. So we DDoSed ourselves, which was great. All these devices came from a trusted network, so of course the web application firewall appliance, which was not cheap, did not account for any of that. Because if the traffic comes from within, who's going to do that at scale? Who's going to attack us from the inside? It was great.

Our network security team had the brilliant idea of: why don't we just use the web application firewall as a throttling mechanism? That, on its own, is a fundamentally stupid idea, because your WAF will happily block all your traffic. So now the stores are closed, the stores are going to open again in about seven hours, and we are blocking them from actually sending the data. All those POS devices did not have a lot of storage, because they were not designed to have a lot of storage. The edge location was not meant to be the point where you store the data; it was purely a pass-through. So if we block devices and don't get our transactions actually transacted, there are problems. Problems like: did you get your points assigned? Did you get enough points for that special deal, given that this transaction happened at this exact time? There's a whole lot of crap happening.

So in our case, the easiest solution was to build a local cache. Lots of back and forth, lots of very silly ideas. I know, we started every engineering meeting with "there's no silly idea, there's no stupid suggestion you can make." There were so many. But we found one, and I was just reading DevOps Weekly, I think, or one of the DevOps newsletters, and I was like, man, this is like the stuff we did five years ago. We built a local cache in the coolest way possible, and the reason I think it's cool is because it worked. And code that works and solutions that work always beat solutions that don't work but have more features. We stored all transactions, per store, in a single SQLite database. That was our local cache.
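A minimal sketch of what that cache amounts to. The table and column names here are illustrative, not our production schema, but the idea really was this small:

```python
import sqlite3

# One SQLite file per store. At ~2,500 transactions a day of ~200 bytes
# each (plus the 25% envelope), this stays well under a megabyte a day.

def open_cache(path: str = "store_cache.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS transactions (
               transaction_id TEXT PRIMARY KEY,  -- points are money: PCI DSS
               occurred_at    TEXT NOT NULL,     -- when it took place
               card_id        TEXT NOT NULL,     -- which card was swiped
               amount_pre_tax INTEGER NOT NULL   -- cents, before taxes
           )"""
    )
    return conn

def record(conn: sqlite3.Connection, tx_id: str, ts: str,
           card: str, cents: int) -> None:
    # INSERT OR IGNORE keyed on the transaction ID makes retries after a
    # failed upload harmless: replaying the same transaction is a no-op.
    conn.execute(
        "INSERT OR IGNORE INTO transactions VALUES (?, ?, ?, ?)",
        (tx_id, ts, card, cents),
    )
    conn.commit()
```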
They were eventually going to end up in a Postgres database. But by having all of them, per store, on that one Dell machine, in a single database, we had a solution. Inserting all those transactions in there? Not a problem. Transmitting a single file, rsync, or in our case scp? Super easy. We can account for that, and we can then process all of that stuff, not at the edge. And you might think: doesn't that just give you a different problem? Isn't that just shifting it? It's not. I want to present you this, from the SQLite documentation: the entire database ends up in a single cross-platform-compatible file. I can give you one of those files now, after not having touched it for the past five years, and you can run it. That is power. That is power because it's compatibility. And if there's one thing we have figured out in IT, it's how to get files from A to B. Did we use the Dropbox client to do this? I cannot comment on that. But if one were to do that, it would certainly work. And I think that's what's important. Turns out that grabbing a thousand files, one per location, and processing them is super easy. And yes, we still ran them through an API, just because it's cleaner than inserting them directly.

So the learning here: design simpler flows. That is the absolute key. Remove everything that you don't need at the edge. Don't build it with your machine in mind; build it with the edge in mind. Deploy early to the edge, so you know the stuff you're going to run into. If I found myself in that same situation today, we'd probably not use Dropbox, but we would definitely build the same thing, and just encrypt it to be on the safe side. The simple reason is that these are basically financial transactions, and what you want there is to make sure you have an audit log.

Constraints are your friend, not your enemy. Going from an API-driven workflow that we all love, very microservices and everything, to basically letting a file sync was not easy. But it allowed us to keep the security model intact, it allowed us to not open huge problems for everyone, including the ordering department, and people got their loyalty rewards sooner. I feel like that's a win for everyone.

And while we're at it: build for offline first. This is the hardest part. Putting a Faraday cage around your hardware and seeing how it behaves at that point is beautiful. Beautifully hard as well. Think about what you need in terms of storage, power, and application architecture to support that. When you build your edge, you're basically building a mini data center. Think about job orchestration: if you're running these processes at the edge, how do you account for failures? These are unattended devices. The people in the store are generally not the people who know how to fix all of that stuff. They can flip the power, that's about it. They can make sure the cable is plugged in. They can't even make sure the cable is working; they can make sure it's plugged in. These are very different things. Have a strategy that allows you to process enough at the edge and recover from failure.
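On the receiving side, the whole "grab a thousand files and process them" flow fits on one page. A sketch of the scp-based transfer described above, with hypothetical hostnames, paths, and a stubbed-out API call:

```python
import sqlite3
import subprocess
import tempfile

# Hypothetical store hosts; in reality this list came from inventory.
STORES = [f"store-{n:04d}.example.internal" for n in range(1, 1201)]

def process_tx(row: tuple) -> None:
    # Placeholder: in reality each transaction still went through an
    # API, because that's cleaner than inserting into Postgres directly.
    pass

def pull_and_process(host: str) -> None:
    with tempfile.NamedTemporaryFile(suffix=".db") as local:
        # One file per store: the whole day's cache travels as one blob.
        subprocess.run(
            ["scp", f"sync@{host}:/var/cache/store_cache.db", local.name],
            check=True,
        )
        conn = sqlite3.connect(local.name)
        try:
            for row in conn.execute("SELECT * FROM transactions"):
                process_tx(row)
        finally:
            conn.close()

for host in STORES:
    pull_and_process(host)   # a failed store can simply be retried
```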
And so for the last five minutes, I want to look at one of my favorite use cases: agricultural food production. I'm a huge fan of agricultural tech, or agritech, for one simple reason: we have more mouths to feed. If we don't use automation, shit's not gonna work, and people are gonna go hungry. We have lofty goals on every continent, and lofty goals from every organization that wants to get into the news. The reality is, it's a hard problem to solve.

Lots of farms use IoT. John Deere is very famous for having a lot of stuff in their machinery; these are edge compute devices. Edge compute devices you don't own and pay a licensing fee for, although at least nowadays you get to change some of them. In Europe (I don't have the exact stats for all the other places), two-thirds of farms use industrial IoT: robotics, basically farms managed by robots, and industrial IoT deployments. The problem is that the traditional farm is very good at caring for livestock, knows what to plant and which grains are ready for harvest, but farmers are not traditionally software engineers. And that's okay. I'm not a farmer myself; I just enjoy the final product. Think about it from our side, our industry: IT struggles with a massive skills shortage in traditional IT, right? Even just getting backend engineers to understand more frontend, frontend engineers to understand operations, security people to be well-liked by anyone. We struggle with that within our own group. Getting a farmer to become enough of a software engineer to figure this stuff out? Impossible.

And when you're the kind of farmer that runs a fishery, a fish farm, your livestock gets even harder to maintain. You can't just walk up to the cow and look at it. You have a million different things, well, not a million, but a lot of different things that matter. You want to have lots of fish, because lots of fish you can sell for a lot. That's a good thing. But to get to that stage, you need to figure out how to feed them and how to care for them. Not too much, not too little. If one of those kicks in, you get that beautiful animation where your fish no longer have any skin on their bones. And that creates a problem. When you're feeding, is the wind that's blowing over the pond shifting all the feed to one side of the water? Are the fish actually getting any of it? Again: just deploying a workload to the edge doesn't mean it's actually being consumed.

In this case, how do we measure that? Well, we can measure various properties of the water. The salinity is usually a good indication, and the pH values, because they tell you how much of the food is actually hitting the water and changing the water's makeup. The problem with changing the water's makeup is that if too much of the feed lands and sinks, your algae grow. And most fish don't do well in that environment. If you were to account for all of these variables, you'd have a lot of stuff to keep in your head. So of course, in our case, we need agritech to the rescue, and we can figure this out.

That animation was definitely slightly more colorful the last time I opened this file, which was yesterday morning. So close your eyes; we're gonna make this in your head. Imagine a box of hardware fitted with a dispensing mechanism for food, various sensors attached to it, and one cable for power, which could be actual mains power, or power that comes from a solar panel or a battery pack. We're all imagining a beautiful device, a beautiful edge deployment. It works, right?

The first thing we need to deal with here is that we're dealing with a highly unqualified operator for that device. So we need to optimize generously. Every little tweak you can make to your system that makes it stronger, more reliable, and more trustworthy to somebody who doesn't want to spend time figuring out how the tech works is worth its weight in gold. Not only will you have fast operations, you'll have lower CPU usage, all that great stuff. You'll have a better working system. And for that, you need to be very, very strict about what you're building.
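As an aside on keeping the edge logic small: the feeding decision itself can stay tiny and boring. A sketch, where the thresholds and sensor inputs are hypothetical stand-ins, not real aquaculture numbers:

```python
# Illustrative thresholds only; real values depend on species and site.
PH_MIN, PH_MAX = 6.5, 8.5       # acceptable water chemistry window
SALINITY_MAX_DRIFT = 0.5        # drift suggesting uneaten feed (ppt)
WIND_MAX_MS = 8.0               # above this, feed blows to one side

def should_dispense(ph: float, salinity_drift: float,
                    wind_ms: float) -> bool:
    """Dispense only when the water says the last feed was consumed."""
    if not PH_MIN <= ph <= PH_MAX:
        return False            # chemistry already off: hold the feed
    if salinity_drift > SALINITY_MAX_DRIFT:
        return False            # the last batch is still in the water
    if wind_ms > WIND_MAX_MS:
        return False            # it would drift to the pond's edge
    return True

print(should_dispense(ph=7.2, salinity_drift=0.1, wind_ms=3.0))  # True
```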
We see a lot of architecture that talks about being compatible with the edge. Unless you're custom-building stuff, or working with projects that genuinely target the very edge, you're going to have a system that uses more CPU than you want to be expending. For a single device in your lab, nobody cares. For a deployment in the middle of nowhere, 150 miles from the closest repair shop, all of these things start to matter. So the real hardcore learning here: compile CPU-specific binaries, and strip everything that you don't need.

Second learning: use boring hardware. The fancy hardware your vendors will sell you is not the stuff you want. Figure out what you need, and figure out what you can build. This project was built on Raspberry Pi 2s. We're talking about 2022, using 2015 hardware: one gig of RAM, 900 megahertz of CPU. And it will be in production until at least 2026, 11 or 12 years after that hardware was originally introduced. That is power. Boring hardware is good hardware. Readily having access to large amounts of that hardware is even better. When we last chatted about this with somebody, they said: well, boring hardware, I don't want to use that. I want something fancy at the edge. I want lots of LEDs that tell me everything is going well. And that just doesn't really matter. I present to you a setup that had 115 compute nodes, all Raspberry Pis, in a single cluster running HashiCorp Nomad. All together, 115 gigabytes of RAM. My Mac at home has 48 gigs, so that amount is not a lot. But if that amount puts 115 fish farms in a position where they can monitor food production for a hardware cost of less than 50 bucks a node, those are the single coolest 115 gigs you will ever come across. Small things add up, especially when used with the right orchestration.

And with that, I want to finish on a lighter note. I came across this one just this morning. The airport project struggled heavily with battery consumption. The farming project entirely focused on producing food better and more sustainably. And it looks like those two worlds are coming together. Technology on the edge, and everywhere else, is always changing. Sometimes for the better. I don't know how I feel about edible batteries, but I think it's cool.

So I want to leave you with one thought. All of these projects are hard, especially if you're one of only a few engineers working on them. Engineering, security, networking, everything in this area is a team sport. Don't do it all by yourself. Make friends along the way. Learn about the other areas of engineering and build a better final product. Thank you so much. Definitely taking questions if you have any.

It is sadly not a very isolated setup. Microsoft at one point sold Windows licenses for point-of-sale systems very cheap. That was before everyone started switching to iPads, and we will pay for that until 2045.

Did I actually admit to using Dropbox? I just hinted at it. Yes, well, error correction is important, and we solved the problem. And as engineers, that's key to what we do. Thank you.