Hi, I'm James Pavur. I'm a PhD student at Oxford University, where I research satellite cybersecurity. Today I'm really excited to present on what I think is a pretty cool topic, and that is rocket launches, and in particular, what's happening at the top of this stick full of fire. If we peel back the payload fairing at the top of this launch vehicle, we find a complex world of launch diplomacy and ride sharing going on inside. So if we look at this mission here, which is the European Space Agency's Vega mission, it carried something like 50 different satellites on a shared journey towards the stars. And the entities that own these satellites were all quite different from each other. We had everything from Latin American tech startups to a Russian nuclear physics institute to the Air Force of Thailand to Facebook. Each of these organizations has a different relationship with the others and a different understanding of what it means to be secure or what it means to be safe. So when we think about them interacting in this environment where they're all in the same boat, both metaphorically and almost literally here, it's worth thinking about what happens if one of them doesn't have good intentions. What if there is an adversary who's seeking to abuse the ride-sharing dynamics of this launch to cause harm to the overall space mission, whether by injecting their own device with malicious behaviors in its code, or by compromising other people's devices and causing them to behave in a harmful way? And what might those behaviors look like? But before we get too deep into any of that, I think it's a good idea to build a basic understanding of why people are sharing rockets in the first place. So imagine that you wanted to launch a satellite. You go out and purchase a rocket for your satellite. It costs $150 million or something, and you stick your satellite on it.
But you purchased way more rocket than you need. Your satellite is heavy, but it's not so heavy that it uses up the whole rocket. So you call your colleagues in the aerospace industry and you say, hey, I'm launching this rocket, any chance you all want to hitch a ride and split the costs? These secondary payloads can be added to your launch vehicle to use up more of the available space and available thrust, reducing the overall cost of the launch mission and the waste from unused capacity. And we've gotten really good at this. Today we have these small devices called nanosats or CubeSats, and we can cram them onto rockets by the dozens, with the end result that we can really use up all of the available capacity we have on these launch vehicles. And I think what's interesting here about CubeSats versus these larger payloads is what risk means to each of these different tiers of operators. If we think about the really big, traditional, bulky satellites, like the GPS satellite in this image here, these devices cost hundreds of millions of dollars. If something goes wrong during launch, it can be really bad, both in terms of cost, which can be on the scale of billions, but also in terms of strategic and tactical implications for the customers who are building and relying on these devices. You'll see years of delay from a failed launch mission on some of these programs. And so they are very risk averse. They want the rocket launch to be as safe and reliable as possible, and basically any risk of failure is unacceptable. If we go down a scale to the smaller satellites, whether they're launching on their own vehicles or sharing a ride, the risk appetite is quite different. We don't know a ton about how SpaceX's Starlink budget works, but it's estimated that the satellites cost about a million bucks each, which is way cheaper than a $600 million GPS satellite.
And so the total cost of a failed mission of, say, 60 Starlink satellites is on the scale of a couple hundred million dollars, certainly nowhere near the billions you would see from larger missions. So Starlink can quite reasonably take on larger risks of failure or of some sort of anomaly, or even just risks in terms of how they design their devices. On the far end of the spectrum we have these tiny CubeSats. These devices are often the size of a Rubik's Cube, 10 centimeters by 10 centimeters by 10 centimeters, and you can essentially staple them together to get bigger ones, but they're all quite small. In theory, you can get a CubeSat into space for $50,000. In practice, it's going to be closer to one or two million dollars by the time you add everything together. One of the biggest determinants of cost for a CubeSat is whether or not you're able to use unpaid university student labor to build it, or have to hire aerospace engineers. But in general, they're going to be dramatically cheaper, orders of magnitude cheaper, than the small satellites I talked about earlier or the really big traditional missions. So for a CubeSat operator, while it might be a lot of money to them, the total cost and total harm of losing their mission is going to be quite a bit more palatable than for the big space operators. And from a security perspective, you can start to see the problem when there are people with different stakes in the risk of a mission all sharing this launch vehicle. If we think about an attacker who compromises, say, a CubeSat that wasn't designed with security in mind, or injects a malicious CubeSat into the payload fairing of one of these rockets, we can see how it might be acceptable to lose your own CubeSat for the chance of causing harm to the overall space mission. Now, CubeSats themselves are actually quite interesting targets for cyber attackers, for a number of reasons.
The first is how they're built. With these big bespoke traditional satellites, we're talking about hardware that may only exist in one spot in the world, because it's so custom and so fit for its mission purpose by design. Whereas with CubeSats, the devices are often using commercial off-the-shelf components, maybe even standard computing hardware, depending on the specific mission. So as an attacker, you might be able to buy these components, reverse engineer them, and design exploits against them in a way that you couldn't with the larger space missions. In a similar vein, the way that hardware is acquired for a CubeSat mission is often a process of typing in credit card numbers online. Whereas when we're talking about acquiring components for a big aerospace mission, you'll have a list of trusted suppliers, or at the very least you'll be going through multiple rounds of business-to-business contract negotiations, and it's going to be a little bit harder to slip in a backdoored device or to swap out a package that's coming into someone's mailbox. The organizations themselves are quite different as well in terms of cyber risk. The sort of people who are building CubeSats are going to be university projects or space tech startups, and their understanding of what cybersecurity is is going to be very different from defense contractors and big aerospace corporations. In particular, they're not going to be geared up to defend against sophisticated attackers that they don't see in other parts of their business. For example, you might imagine a university where a visiting researcher comes in to work on a CubeSat project for six months and, on behalf of their nation's government, inserts a backdoor into the device. There's basically nothing a university can do to defend against that sort of threat, or at least it's incredibly difficult.
Whereas that same sort of threat is going to be much harder to execute against an aerospace or defense contractor. Finally, and I think perhaps most importantly, CubeSats are cheap enough that a sophisticated attacker like a nation state can probably just build a standard-looking CubeSat themselves and add malicious behavior or malicious hardware to it. The costs are low enough for state actors: a million or two million dollars is reasonable for the benefit of something like delaying a GPS deployment. So the real problem with exploiting CubeSats is that you can't really do all that much with them. This is a quote from a person who builds CubeSats, talking to the press about the security of CubeSats and proposed cybersecurity regulations for them. And they rightly point out that CubeSats are really low-capability devices. They don't necessarily have much power on board; it's not like you can cause them to catch fire or combust with much force. They don't necessarily have sophisticated or high-power radio transmitters, and they don't necessarily have maneuvering capabilities. They're just kind of boxes that float around in space. However, when I see a question like this in the press, as a cybersecurity person, and I'm sure it's the same for all of you at DEF CON, these are the kind of quotes that get me interested, because: what is the worst that could happen? What could you do within this highly constrained space of standard CubeSat hardware to cause harm to an overall space mission? One way to look at this is to consider the problem backwards. There are a lot of rules when you're building a CubeSat, a lot of regulations you have to comply with, from the government, from your launch integrator, and potentially from the rocket developer. And these guidelines list all of the things your CubeSat has to comply with to be safe.
And safety is really important, because the people who are building CubeSats are university students or people who've never built a satellite before. So you really want to be sure that if that CubeSat malfunctions, like a lot of them do, it doesn't cause harm to your overall space mission. However, if we look at these standards from a slightly different angle, if we invert them, they become essentially a list of things we can try to get a CubeSat to do to make it unsafe, things we might design our cyber attacks against these devices to achieve in order to degrade the safety of a launch mission. And that's what we're doing today. We pulled out a bunch of different standards from the CubeSat Design Specification and from the Air Force space manual, and looked at some of the controls related to radio interference, which is a topic I find really interesting, to see how we might create a safety incident using cyber-mediated or digital means. If you look at these controls, they're all fairly intuitive and fairly reasonable. For example, CubeSats, while they're attached to the rocket during launch, need to be turned off. They have little buttons on them, and while the buttons are pressed, the CubeSat's not allowed to turn on. Once the CubeSat pushes away from the rocket, it's allowed to boot up, but it's not allowed to transmit radio signals for 45 minutes. It's supposed to start a timer and just count down, so that it's far enough from the launch vehicle that it's not likely to cause interference. Very reasonable things like that: restrictions on which radio frequencies you can use, or how much power you can have on the device, but more or less standard, intuitive controls. What was interesting about these controls, though, is where the source of truth for their compliance comes from. And what we see is that basically all of these controls boil down to paperwork and checkboxes.
You send electrical diagrams to the launch integrator, they look at them and say, that looks compliant. You fill out paperwork saying you did a day-in-the-life test and this is what you found; the launch integrator looks at it and says, that seems compliant. They don't necessarily tear apart your CubeSat to verify that what you wrote down in the paperwork is true, or that the device actually behaves that way in practice. And they really couldn't: if you did that to most people's CubeSats, you would just destroy them. So we end up in this world where an attacker who's either deceived or willing to deceive can get away with quite a lot by circumventing these safety standards, which exist under the assumption that everyone shares the goal of building a safe device. We published an academic paper a while back that really delves into these standards from two different threat perspectives. One is a malicious outsider: someone who compromises the CubeSat via, say, a hardware backdoor or some other cyber-mediated attack vector. The other is an insider: someone who either controls the CubeSat mission completely, say a startup that's actually being run by a nation-state's intelligence service, or an organization that has been compromised by a sophisticated insider threat and then lies to the CubeSat integrator about the device's safety properties. And we find that very few of these controls are robust against both of these threat models, and many are weak to both of them. So that answers one question: it might be possible to circumvent safety controls. But do these safety controls really matter? Does a CubeSat that breaks these rules really pose a threat, or are the rules more of an abundance-of-caution thing, where there's not really much that can go wrong but we just want to be absolutely sure? So we picked out two controls to test that through simulations.
We wanted to look at what happens if a CubeSat starts transmitting radio signals too early and on an unapproved frequency. So there's some backdoor logic that makes that countdown timer, the one that's supposed to be 45 minutes, actually 45 seconds once you're in space, and you transmit on a different frequency using something like a software-defined radio, which we see pretty commonly on new CubeSat designs. The requirements here are very straightforward. We have a very standard CubeSat: it's using a commercial CubeSat SDR and antenna whose operating frequencies fall within this RF range, although they don't need to be optimal, and a transmitter that fits within the zone you would expect for a typical CubeSat device. One watt is on the lower end and 10 watts is quite a bit on the higher end, but both could conceivably be crammed into a 3U device. While the CubeSat is in the launch vehicle, it doesn't do anything, but once it separates, it breaks that first rule, the timer rule, and immediately begins transmitting radio signals while it's quite close to the rocket that deployed it. Those radio signals are selected to overlap with the L1 band of GPS reception, the goal being to create interference with GPS signals at the launch vehicle. The reason this is an interesting interference attack is that while the CubeSat's antenna is really weak, quite low-powered, and not able to transmit all that much, GPS satellites are really far away, and by the time their signal gets to low orbit it's not very strong. So our attacker has a significant proximity advantage here, because they're quite close to the target device, and they need much less power to cause interference than, say, someone on the ground would in the exact same threat model. The ultimate goal of the attacker in causing this interference isn't just to show that they can; it's to trigger a safety incident during the rocket launch.
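To build intuition for that proximity advantage, here's a rough back-of-the-envelope sketch of my own (not a calculation from the paper; the distances and the isotropic-antenna assumption are purely illustrative), comparing free-space path loss for a CubeSat loitering a kilometer from the rocket against a ground transmitter roughly 400 km below it:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss between isotropic antennas, in dB."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

GPS_L1_HZ = 1575.42e6  # GPS L1 center frequency

loss_cubesat = fspl_db(1_000, GPS_L1_HZ)    # attacker ~1 km from the rocket
loss_ground = fspl_db(400_000, GPS_L1_HZ)   # ground jammer ~400 km below

# Every 20 dB of advantage is a 100x reduction in required power.
print(f"CubeSat path loss:   {loss_cubesat:.1f} dB")
print(f"Ground path loss:    {loss_ground:.1f} dB")
print(f"Proximity advantage: {loss_ground - loss_cubesat:.1f} dB")
```

At these made-up distances the in-orbit attacker enjoys about a 52 dB advantage, meaning it needs more than five orders of magnitude less transmit power than a ground jammer to deliver the same interference power at the launch vehicle.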
Rockets are really dangerous devices. If they crash into something, it tends to be quite bad for the thing they crash into. Perhaps even more importantly, the only real difference between a space launch vehicle and an intercontinental ballistic missile is the direction it's pointing and what's on top of it. So if you don't want to accidentally start World War III, and you don't want to accidentally blow up a school by having your rocket land in the wrong spot, you want to be really sure that it sticks to its flight path. Traditionally, this has been done with a guy on the ground with a big red button. If the rocket strays from its flight path, or if it has sensor anomalies, he presses the big red button, and a flight termination system triggers that essentially tries to combust all of the rocket fuel and burn up the rocket before it can crash into anything. These days we've seen a shift away from the guy with the big red button to what we call autonomous flight termination systems. These are devices that can make that self-destruct decision without a human in the loop. They use GPS signals and accelerometer data to get information on the position of the rocket, and if that position strays outside of defined parameters, or there's some other device anomaly, they can trigger that flight termination and destroy the rocket completely autonomously. Now, nobody tells me what the thresholds for these devices are, or what kind of redundancy there is between the accelerometers and the GPS signals, but we can at least hope, as an attacker, that if we degrade GPS signals we might trigger one of these range safety incidents, or reduce the reliability of these devices in the presence of some other failure, like a coincidental failure in those accelerometers. But there's still a question of whether or not we can have a meaningful effect on GPS signals. And to figure that out, we needed to simulate two different dynamics.
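To make the attacker's hope concrete, here's a toy sketch of what such a decision rule could look like. This is entirely hypothetical: real autonomous flight termination logic, thresholds, and redundancy schemes are not public, and every name and number below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class NavState:
    gps_valid: bool              # receiver reports a trustworthy fix
    cross_track_error_m: float   # GPS-derived deviation from the planned corridor
    imu_anomaly: bool            # accelerometer self-check has failed

def should_terminate(state: NavState, corridor_m: float = 500.0) -> bool:
    """Hypothetical rule: terminate if the vehicle leaves its corridor,
    or if both navigation sources are simultaneously untrustworthy."""
    if state.gps_valid and state.cross_track_error_m > corridor_m:
        return True
    # With GPS jammed, a coincident IMU fault leaves no trusted position source.
    if not state.gps_valid and state.imu_anomaly:
        return True
    return False

print(should_terminate(NavState(True, 120.0, False)))  # nominal flight -> False
print(should_terminate(NavState(False, 50.0, True)))   # jammed GPS + IMU fault -> True
```

Under a rule shaped like this, degrading GPS alone does nothing, but it removes one of the two legs the system stands on, which is exactly the "reduced reliability in the presence of another failure" scenario described above.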
The first is the separation of a CubeSat from the launch vehicle. CubeSat separation is a surprisingly low-tech thing. CubeSats are put in these little tubes with a spring on one end and a door on the other. At some point in the mission, the door opens and the spring just kind of shoves the CubeSat out into space, pretty gently, at about one to two meters per second. So there's this long period when the CubeSat is pretty close to the launch vehicle that deployed it. We modeled this period using some astrodynamics software, and you can see that there's a bit of a swoop here: during the first couple of minutes you tend to loiter around the launch vehicle before the shape of the orbits causes a more dramatic separation. So in that 45-minute window when we're illegally transmitting, we can expect to be relatively close to the launch vehicle for a long time. Then the question is: while we're this close to the launch vehicle, how much power can we get from our antenna to an antenna on the launch vehicle in the GPS band? Now, this is a very complicated question that we simplified by modeling perfectly isotropic antennas, without any pointing models; we just wanted a baseline estimate here. We calculated the received power for various transmitter capabilities, for different wattages on your transmitter. But we didn't really know what these numbers meant, right? I had no idea how much signal you need to jam GPS reception in orbit. It's not something many people study, but it turns out the Department of Transportation did a study that was very useful for us. They looked at LTE cell towers on the ground and estimated how much power they would have to get into an antenna in orbit to create degradation in GPS signal reception. And LTE is a near band to GPS, which is why this is an interesting question for the Department of Transportation.
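As a simplified stand-in for that baseline calculation (my own sketch, not the paper's model: isotropic antennas on both ends, and straight-line separation at the spring's push-out speed instead of a real orbital trajectory), the received power over the inhibit window can be estimated with the Friis equation:

```python
import math

C = 3e8                 # speed of light, m/s
GPS_L1_HZ = 1575.42e6   # GPS L1 center frequency, Hz

def received_power_dbm(tx_watts: float, distance_m: float) -> float:
    """Friis equation with 0 dBi antennas on both ends, result in dBm."""
    wavelength = C / GPS_L1_HZ
    p_rx_watts = tx_watts * (wavelength / (4 * math.pi * distance_m)) ** 2
    return 10 * math.log10(p_rx_watts * 1000)  # watts -> dBm

# Sample the 45-minute RF-inhibit window, assuming ~1.5 m/s separation.
for minutes in (1, 5, 15, 45):
    distance = 1.5 * minutes * 60  # metres from the launch vehicle
    levels = ", ".join(
        f"{tx} W -> {received_power_dbm(tx, distance):.1f} dBm"
        for tx in (1, 10)
    )
    print(f"t = {minutes:2d} min ({distance:6.0f} m): {levels}")
```

For comparison, legitimate GPS L1 signals arrive at a receiver at roughly -130 dBm, so even modest transmit powers at these short ranges deliver far more energy than the signal itself; the harder question, which the DOT study helps answer, is how much of that energy actually degrades reception.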
But we can assume that our attacker, if they're able to get at least that much power into low orbit, can probably cause disruption, because they're in band as opposed to near band: they're going to be at least this good, if not better. And we can see this window here on the graph, about 20 to 40 minutes of disruption at that threshold. We can also look at this from a different angle, by calculating the signal-to-interference-plus-noise ratio we would expect at the launch vehicle's receiver during the attack. The dotted line at the top of this graph shows what's normal, what you would expect for your GPS reception if there were no attacker. And if we map this graph to the performance characteristics of high-end commercial GPS receivers, we can get a rough idea of the disruption we would cause. The green zone means your GPS signal works exactly as you'd expect. The yellow zone means your signal has degraded but still has some precision: there's a dilution of precision, but it's not broken. The red zone is either a complete loss of precision or a pretty dramatic dilution. We can see that our attacker can expect to cause between zero and 40 minutes of disruption, depending on how strong their transmitter is, and with a really strong transmitter they can sustain disruption for basically as long as they hang around the rocket. Now, whether this means anything is really unclear. It's more a question for the people who build these devices, to check their tolerances and their threat models against the possibility of a malicious CubeSat degrading signal quality, as opposed to simply against the possibility of a hardware malfunction causing that, or of someone on the ground trying to cause that disruption, because the model is quite different. However, there is also some future work we can think about in terms of how we would defend against these attacks.
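The zone mapping described above can be sketched as a simple SINR calculation. The cutoffs and power levels below are placeholders made up for illustration, not the receiver characteristics from the actual analysis (real pre-correlation GPS SINR budgets look quite different):

```python
import math

def sinr_db(signal_dbm: float, interference_dbm: float, noise_dbm: float) -> float:
    """Signal-to-interference-plus-noise ratio, S / (I + N), in dB."""
    mw = lambda dbm: 10 ** (dbm / 10)  # convert dBm to milliwatts
    return signal_dbm - 10 * math.log10(mw(interference_dbm) + mw(noise_dbm))

def reception_zone(sinr: float) -> str:
    """Bucket SINR into the three zones described above (illustrative cutoffs)."""
    if sinr >= 0:
        return "green"   # nominal reception
    if sinr >= -10:
        return "yellow"  # degraded, some dilution of precision
    return "red"         # loss or severe dilution of precision

# No attacker: SINR is effectively just signal-to-noise.
quiet = sinr_db(-128, interference_dbm=-300, noise_dbm=-130)
# A nearby in-band CubeSat transmitter raises the interference floor.
jammed = sinr_db(-128, interference_dbm=-120, noise_dbm=-130)
print(f"quiet:  {quiet:5.1f} dB -> {reception_zone(quiet)}")
print(f"jammed: {jammed:5.1f} dB -> {reception_zone(jammed)}")
```

The structure is the point here: the attacker never touches the signal or noise terms, they only add an interference term, and the zone you land in falls out of how that term compares to the rest of the budget.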
So you might think about anti-jamming techniques we could incorporate into flight termination systems, in order to be sure that the GPS signals we're receiving are authentic GPS signals; things like frequency hopping or using military-band GPS signals could potentially improve those characteristics quite a lot. We might also think about how we would pre-approve hardware for integration onto CubeSats, to make them less able to carry out this attack. If we look at the filters and antennas used for the communication stack on CubeSats, those are devices that are actually fairly easy to independently audit. Not perfectly, people can still lie to you, but you might catch more deceptions, or at least provide a faster route to certification that's more secure, by having a pre-approved vendor list. We might think about attribution as a roundabout way to deter these attacks. If we make it clear that when someone pulls off this attack, the evidence is not going to literally go up in smoke with the rocket, or vanish into space if the attack fails because the satellites are all far away and no one's going to look at them, because we have some sort of logging mechanism to monitor the radio environment around our launch vehicles, it might create a political cost to engaging in these sorts of operations, especially during peacetime. We might also think about other policy mitigations in terms of which devices we integrate into which launches, and how we can be sure that everyone shares equally in the risk appetite for a mission: that we don't put, say, university payloads alongside high-value primary payloads, and that we balance out the risk appropriately as launch sharing becomes more of the norm. This is an evolving landscape; things are changing, and there are definitely new threats that might be interesting. In particular, one thing I'm very curious about is that we're seeing a shift in the way those deployers are built.
So historically the deployers are aluminum boxes, and they are functionally Faraday cages: you can't get a radio signal in or out of them. But we're starting to build them without doors on the ends to make them lighter, or out of plastics and carbon fiber, and those don't necessarily have the same radio attenuation properties. Which means you might be able to start jamming from the ground during launch and create even worse disruptions during that critical launch period, something I think is definitely worth considering as the technology evolves. In short, there are a couple of key takeaways from this presentation. The first is the difference between safety and security. CubeSats are designed with a safety mindset. There are rules you have to comply with to prove your CubeSat is safe, but those rules don't necessarily hold up against an intelligent adversary, especially when that adversary takes advantage of emergent capabilities like software-defined radios, or changes in deployer design that don't necessarily alter the safety model of the system but do alter what an attacker can get away with. So when we incorporate trust into our space missions, which is something we have to do, I think we need to be conscious, aware, and explicit about who we're trusting to comply with what, and what that trust actually means. Thank you so much for listening to this presentation, and please feel free to shoot me an email if you have any questions.