He is a professor of security engineering at Cambridge University and the author of the book Security Engineering. He has done a lot of things already: he invented semi-invasive attacks based on inducing photocurrents, he has done API attacks, and much more besides. If you read his bio, it feels like he's been involved in almost everything related to security. So please give a huge round of applause and a warm welcome to Ross Anderson and his talk, The Sustainability of Safety, Security and Privacy. Thanks. Right, it's great to be here. And I'm going to tell a story that starts a few years ago, and it's about the regulation of safety. Just to set the scene, you may recall that in February this year there was this watch, the ENOX Safe-KID-One, which suddenly got recalled. And why? Well, it turned out that it had unencrypted communications with the back-end server, allowing unauthenticated access. Translated into layman's language, that meant that hackers could track and call your kids, change the device ID and do arbitrary bad things. So this was immediately recalled by the European Union, using powers it had under the Radio Equipment Directive. And this was a bit of a wake-up call for industry, because up until then, people active in the so-called Internet of Things had no idea that if they produced an unsafe device, they could suddenly be ordered to take it off the market. Anyway, back in 2015, the European Union's research department asked Éireann Leverett, Richard Clayton and me to examine what the Internet of Things implied for the regulation of safety. Because the European institutions regulate all sorts of things, from toys to railway signals, and from cars through drugs to aircraft. And if you start having software in everything, does that mean that all these dozens of agencies suddenly need software safety experts and software security experts? What does this mean in institutional terms?
We produced a report for them in 2016, which the Commission sat on for a year. An abridged version came out in 2017, and the full report later that year. The gist of our report was that once you get software everywhere, safety and security become entangled. In fact, when you think about it, the two are the same word in pretty well all the languages spoken by EU citizens; it's only English that distinguishes between the two. And with Britain leaving the EU, of course, the languages spoken throughout Brussels and throughout the continent will be ones in which safety and security are the same. But anyway, how are we going to update safety regulation in order to cope? This was the problem that Brussels was trying to get its head around. Now, one of the things that we had been looking at over the past 15 or 20 years is the economics of information security, because often big complex systems fail because the incentives are wrong: if Alice guards the system and Bob pays the cost of failure, you can expect trouble. And many of these ideas carry across to safety as well. It's already well known that markets do safety in some industries, such as aviation, way better than in others, such as medicine. And cars were dreadful for many years. For the first 80 years of the car industry, people didn't bother with things like seatbelts, and it was only when Ralph Nader's book Unsafe at Any Speed led the Americans to set up the National Highway Traffic Safety Administration, and various court cases brought this forcefully to public attention, that car safety started to become a thing. Now in the EU we've got a whole series of broad frameworks, specific directives and detailed rules, and there are over 20 EU agencies plus the UNECE in play here. So how can we navigate this? Well, what we were asked to do was to look at three specific verticals and study them in some detail, so that the lessons from them could then be taken to the other verticals in which the EU operates.
And cars were one of those. Some of you may remember the CarShark paper in 2011, where guys from UC San Diego and the University of Washington figured out how to hack a vehicle and control it remotely. I used to have a lovely little video of this that the researchers gave me, but my Mac got upgraded to Catalina last week and it doesn't play anymore. So much for Schlimmbesserung, an "improvement" that makes things worse. Okay, we'll get it going sooner or later. Anyway, this was largely ignored, because one little video didn't make much of a splash. But in 2015 this suddenly came to the attention of the industry, because Charlie Miller and Chris Valasek, two guys who'd been in the NSA's hacking team, hacked a Jeep Cherokee using Chrysler's Uconnect. This meant that they could go through all the Chrysler vehicles in America, look at them one by one and ask: where are you? And when they found a vehicle that was somewhere interesting, they could go in and do things to it. What they found was that to hack a vehicle, you suddenly just needed the vehicle's IP address. So they got a journalist into a vehicle, got it to slow down with trucks behind it hooting away, and eventually they ran the vehicle off the road. And when the TV footage of this got out, suddenly people cared. It made the front pages of the press in the USA, and Chrysler had to recall 1.4 million vehicles for a software fix, which meant actually re-flashing the firmware of the devices, and it cost them billions and billions of dollars. So all of a sudden this was something to which people paid attention. Some of you may know this chap here, at least by sight. This is Martin Winterkorn, who used to run Volkswagen, and when it turned out that he had hacked millions and millions of Volkswagen vehicles by putting in evil software that defeated emissions controls, that's what happened to Volkswagen's stock price.
Oh, and he lost his job and got prosecuted. So this is an important point about vehicles, and in fact about many things in the Internet of Things, or Internet of Targets, whatever you want to call it: the threat model isn't just external, it is internal as well. There are bad people all the way up and down the supply chain, even at the OEM. So that's the state of play in cars, and we investigated that and wrote a bit about it. Now here's medicine; this was the second thing that we looked at. These are some pictures of the scene in the intensive care unit in Swansea Hospital. So after your car gets hacked and you go off the road, this is where you end up. And just as a car has got about 50 computers in it, you can now see that there are quite a few computers at your bedside. How many CPUs can you see? Quite a few, about comparable to the number of CPUs in your car. Only here the systems integration is done by the nurse, not by the engineers at Volkswagen or Mercedes. And does this cause safety problems? Sure it does. Here are pictures of the user interfaces of infusion pumps, taken from Swansea's intensive care unit. As you can see, they're all different. This is a little bit like if you suddenly had to drive a car from the 1930s, an old Lanchester for example, and you find that the accelerator is between the brake and the clutch. Right? Honestly, there used to be such cars; you can still find them at antique car fairs. Or a Model T Ford, for example, where the accelerator is actually a lever on the dashboard and one of the pedals is a gear change. And yet you're asking nurses to operate a variety of different pieces of equipment. Look, for example, at the Bodyguard 545, the one on the top. To increase the dose (right, this is the morphine that is being dripped into your vein once you've had your car crash) you have to press 2, and to decrease it you have to press 0.
And on the pump on the bottom right, to increase the dose you press 5 and to decrease it you press 0. And this leads to accidents, to fatal accidents, a significant number of them. Okay, so you might say, well, why not have standards? Well, we have standards. We've got standards which say that liters should always be written with a capital L, so it is not confused with a 1. And you see that on the pump on the bottom right, milliliters is written with a capital L, in green. Okay, well done, Mr. Bodyguard. The problem is, if you look up two lines, you see 500 milliliters in small letters. So there's a standards problem, there's an enforcement problem, and there are externalities, because each of these vendors will say, well, everybody else should standardize on my kit. And there are also various other market failures. The expert who's been investigating this is my friend Harold Thimbleby, who's a professor of computer science at Swansea, and his research shows that hospital safety usability failures kill about 2,000 people every year in the UK, which is about the same as road accidents. Safety usability, in other words, gets ignored because the incentives are wrong. In Britain, and indeed in the European institutions, people tend to follow the FDA in America, and that is captured by the large medical device makers over there; they only have two engineers, they're not allowed to play with pumps, et cetera, et cetera. The curious thing here is that as safety and security come together, the safety of medical devices may improve, because as soon as it becomes possible to hack a medical device, people suddenly take care. The first of these was when Kevin Fu and researchers at the University of Michigan showed that they could hack the Hospira Symbiq infusion pump over Wi-Fi, and this led the FDA to immediately panic and blacklist the pump, recalling it from service.
But, said Kevin, what about the 200 other infusion pumps that are unsafe because of the things on the previous slide? And the FDA said: we couldn't possibly recall all those. Then, two years ago, there was an even bigger recall. It turned out that 450,000 pacemakers made by St. Jude could similarly be hacked over Wi-Fi, and so their recall was ordered. And this is quite serious, because if you've got a heart pacemaker, right, it's implanted surgically in the muscle next to your shoulder blade, and to remove it and replace it with a new one, which they do every 10 years to change the battery, is a day-surgery procedure. You have to go in there and get an anesthetic; they have to have a cardiologist ready in case you have a heart attack. It's a big deal, right? It costs maybe 3,000 pounds in the UK. So take 3,000 pounds times 450,000 pacemakers, multiply it by two for American health care costs, and you're talking real money. So what should Europe do about this? Well, thankfully the European institutions have been getting off their butts on this, and the medical device directives have been revised. From next year, medical devices will have to have post-market surveillance, a risk management plan and ergonomic design. And here's perhaps the driver for software engineering, for devices that incorporate software: the software shall be developed in accordance with the state of the art, taking into account the principles of the development life cycle and risk management, including information security, verification and validation. So there at least we have a foothold. And it continues: devices shall be designed and manufactured in such a way as to protect, as far as possible, against unauthorized access that could hamper the device from functioning as intended. Now it's still not perfect; there are various things that the manufacturers can do to wriggle, but it's still a huge improvement.
The third thing that we looked at was energy: electricity substations and electrotechnical equipment in general. There have been one or two talks at this conference on that. Basically the problem is that you've got a 40-year life cycle for these devices; protocols such as Modbus and DNP3 don't support authentication; and the fact that everything has gone to IP networks means that, as with the Chrysler Jeeps, anybody who knows a sensor's IP address can read from it, and with an actuator's IP address you can activate it. So the only practical fix there is to re-perimeterize, and the entrepreneurs who noticed this 10 to 15 years ago and set up companies like Belden have now made lots and lots of money. Companies like BP now have thousands of such firewalls, which isolate their chemical and other plants from the internet. So one way in which you can deal with this is having one component that connects you to the network, and you replace it every five years. That's one way of doing, if you like, sustainable security for your oil refinery. But this is a lot harder for cars, which have got multiple RF interfaces. A modern car has maybe 10 interfaces: there's the built-in phone, there's the short-range radio link for remote keyless entry, there are links to the devices that monitor your tire pressure, and there's all sorts of other things. And every single one of these has been exploited at least once. There are particular difficulties in the auto industry because of the fragmented responsibility in the supply chain, between the OEMs and the tier ones and the specialists who produce all the various bits and pieces that get glued together. Anyway, the broad questions that arise from this include: who will investigate incidents, and to whom will they be reported? How do we embed responsible disclosure? How do we bring safety engineers and security engineers together? This is an enormous project, because security engineers and safety engineers use different languages.
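As an aside, it is worth seeing concretely what "Modbus doesn't support authentication" means. Below is a sketch, in Python, of a complete Modbus/TCP request that switches an output coil. The frame layout follows the public protocol specification; the point is what is missing from it: there is no password, key, signature or session token anywhere, so any host that can route packets to the device's IP address can operate the actuator.

```python
import struct

def modbus_write_coil(transaction_id: int, unit_id: int,
                      coil_addr: int, on: bool) -> bytes:
    """Build a Modbus/TCP 'Write Single Coil' request (function code 0x05).

    Note what is NOT in this frame: no credential of any kind. The
    protocol predates the open internet, so the request is accepted by
    any device that receives it.
    """
    value = 0xFF00 if on else 0x0000
    # MBAP header: transaction id, protocol id (always 0),
    # remaining length (unit id + PDU = 6 bytes), unit id
    header = struct.pack(">HHHB", transaction_id, 0, 6, unit_id)
    # PDU: function code 0x05, coil address, coil value
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)
    return header + pdu

frame = modbus_write_coil(transaction_id=1, unit_id=1,
                          coil_addr=0x0020, on=True)
print(frame.hex())  # 12 bytes, none of which authenticate the sender
```

That is why re-perimeterization, putting a firewall in front of kit you cannot change, is the only practical fix for a 40-year installed base.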
We have different university degree programs; we go to different conferences. And the world of safety is similarly fragmented between the power people, the car people, the naval people, the signals people and so on and so forth. Some companies are beginning to get this together. The first was Bosch, which put together its safety engineering and security engineering professions. But even once you have done that in organizational terms, how do you teach a security engineer to think safety, and vice versa? Then there's the problem that bothered the European Union: are the regulators all going to need security engineers? I mean, many of these organizations in Brussels don't even have an engineer on staff; they are mostly full of, you know, lawyers and policy people. And then, of course, for this audience: how do you prevent abuse of lock-in? You know, in America, if you've got a tractor from John Deere and you don't take it to a John Deere dealer every six months or so, it stops working. And if you try to hack it so you can fix it yourself, then John Deere will try to get you prosecuted. We just don't want that kind of stuff coming over the Atlantic to Europe. So we ended up with a number of recommendations. We thought that we would get vendors to self-certify, for the CE mark, that products could be patched if need be. That turned out to be not viable. We then came up with another idea, that things should be secure by default, for the update of the Radio Equipment Directive, and that didn't get through the European Parliament either; in fact, it was Mozilla that lobbied against it. Eventually we got something through, which I'll discuss in a minute. We also talked about requiring a secure development life cycle with vulnerability management, because we've already got standards for that.
We talked about creating a European Security Engineering Agency, so that there would be people in Brussels to support policymakers, and the reaction to that, a year and a half ago, was to arrange for ENISA to be allowed to open an office in Brussels, so that they can hopefully build a capability there with some technical people who can support policymakers. We recommended extending the Product Liability Directive to services. There is enormous pushback on that: companies like Google and Facebook don't like the idea that they should be as liable for mistakes made by Google Maps as, for example, Garmin is liable for mistakes made by its navigators. And then there's the whole business of how you take the information that European institutions already have on breaches and vulnerabilities and report it not just to ENISA but to safety regulators and users, because somehow you've got to create a learning system. And this is perhaps one of the big pieces of work to be done. Once all cars are sort of semi-intelligent, once everybody's got telemetry and once there are gigabytes of data everywhere, then whenever there's a car crash the data have to go to all sorts of places: to the police, to the insurers, to courts, and then of course up to the car makers and regulators and component suppliers and so on. How do you design the system that will cause the right data to get to the right place, while still respecting people's privacy rights and all the various other legal obligations? This is a huge project, and nobody has really started to think yet about how it's going to be done. At present, if you've got a crash in a car like a Tesla, which has very good telemetry, you basically have to take Tesla to court to get the data, because otherwise they won't hand it over. We need a better regime for this, and that at present is a blank slate.
It's up to us, I suppose, to figure out how such a system should be designed and built, and it will take many years to do it. If you want a safe system, a system that learns, this is what it's going to involve. But there's one thing that struck us after we'd done this work. After we delivered this to the European Commission, and I'd gone to Brussels and given a talk to dozens and dozens of security guys, Richard Clayton and I went to Schloss Dagstuhl for a week-long seminar on some other security topic, and we were just chatting one evening, and we asked: what did we actually learn from this whole exercise on standardization and certification? Well, it's basically this. There are two types of secure thing that we currently know how to make. The first is stuff like your phone or your laptop, which is secure because you patch it every month, but then you have to throw it away after three years, because Larry and Sergey don't have enough money to maintain three versions of Android. And then we've got things like cars and medical devices, where we test them to death before release, we don't connect them to the internet, and we almost never patch them. Unless Charlie Miller and Chris Valasek go at your car, that is. So what's going to happen to support costs now that we're starting to patch cars? And you have to patch cars, because they're online, and once something's online anybody in the world can attack it. So if a vulnerability is discovered it can be exploited at scale, and something that you could previously ignore suddenly becomes something that you have to fix. And if you have to pull all your cars into a garage to patch them, that costs real money; so you need to be able to patch them over the air. So all of a sudden cars become like computers or phones. What's this going to mean? Here's the dilemma. If you get the standard safety life cycle, there's no patching.
You get safety and sustainability, but you can't go online because you'll get hacked. If you get the standard security life cycle, you've got patching, but that breaks the safety certification, so that's a problem. And if you get patching plus redoing safety certification with current methods, then the cost of maintaining your safety rating can be sky high. So here's the big problem: how do you get safety, security and sustainability at the same time? Now, this brings us to another thing that a number of people at this Congress are interested in: the right to repair. This is the Centennial Light. It's been burning since 1901. It's in Livermore, in California. It's kind of dim, but you can go there and see it; it's still there. In 1924, the three firms who dominated the light bulb business, GE, Osram and Philips, agreed to reduce average bulb lifetimes from two and a half thousand hours to one thousand hours. Why? In order to sell more of them. And one of the things that's come along with CPUs and communications and so on, with smart stuff, to use that horrible word, is that firms are now using online mechanisms, software and cryptographic mechanisms, to make it hard or even illegal to fix products. And I believe there's a case against Apple going on in France about this. Now, you might not think it's something that politicians will get upset about, that you have to throw away your phone after three years instead of after five years. But here's something you really should worry about: vehicle life cycle economics. Because the lifetimes of cars in Europe have about doubled in the last 40 years, and the average age of a car scrapped in Britain is now almost 15 years. So what's going to happen once you've got, you know, wonderful self-driving software in all the cars?
Well, a number of big car companies, including in this country, were taking the view two years ago that they wanted people to scrap their cars after six years and buy a new one. Hey, makes business sense, doesn't it? If you're Mr. Mercedes, your business model is: if the customer is rich, you sell him a three-year lease on a new car; and if the customer is not quite so rich, you sell him a three-year lease on a Mercedes-approved used car. And if somebody drives a seven-year-old Mercedes, that's thoughtcrime. You know, they should emigrate to Africa or something. So this was the view of the vehicle makers. But here's the rub: the embedded CO2 cost of a car often exceeds its lifetime fuel burn. The best estimate for the embedded CO2 cost of an E-Class Merc is 35 tons. So go and work out, you know, how many liters per 100 kilometers, and how many kilometers it's going to run in 15 years, and you come to the conclusion that if you go to a six-year lifetime, then maybe you are decreasing the range of the car from 300,000 kilometers to 100,000 kilometers, and so you're approximately doubling the overall CO2 emissions, taking the whole life cycle: not just Scope 1, but Scope 2 and Scope 3, the embedded stuff, as well. And then there are other consequences. What about Africa, where most vehicles are imported secondhand? If you go to Nairobi, all the cars are between 10 and 20 years old. They arrive in the docks in Mombasa when they're already 10 years old, people drive them for 10 years, and then they end up in Uganda or Chad or somewhere like that, and they're repaired for as long as they're repairable. What's going to happen to road transport in Africa if all of a sudden there's a software time bomb that causes cars to self-destruct 10 years after they leave the showroom? And if there isn't, what about safety?
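The back-of-the-envelope arithmetic behind "approximately doubling" can be written out. The 35-ton embedded figure is the talk's; the fuel consumption and per-liter CO2 factor below are my own round-number assumptions, so treat this as a sketch rather than a lifecycle assessment.

```python
EMBEDDED_CO2_KG = 35_000   # talk's figure for an E-Class: embedded (Scope 2/3) CO2
LITERS_PER_100KM = 7.0     # assumed average fuel consumption
CO2_KG_PER_LITER = 2.35    # rough figure for burning one liter of petrol

def co2_per_km(lifetime_km: float) -> float:
    """Whole-life CO2 (embedded plus fuel) averaged over the kilometers driven."""
    fuel_kg = lifetime_km / 100 * LITERS_PER_100KM * CO2_KG_PER_LITER
    return (EMBEDDED_CO2_KG + fuel_kg) / lifetime_km

fifteen_year_car = co2_per_km(300_000)   # roughly a 15-year life
six_year_car = co2_per_km(100_000)       # roughly a 6-year life
print(round(fifteen_year_car, 2), round(six_year_car, 2))   # 0.28 vs 0.51 kg/km
print(round(six_year_car / fifteen_year_car, 2))
```

Under these assumptions a six-year scrappage policy pushes per-kilometer CO2 from about 0.28 kg to about 0.51 kg, not far off double, which is the point being made: the embedded cost is amortized over a third as many kilometers.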
I don't know what the rules are here, but in Britain I have to get my car through a safety examination every year once it's more than three years old. And it's entirely foreseeable that within two or three years the mechanic will want to check that the software is up to date. So once the software update is no longer available, that's basically saying: this car must now be exported or scrapped. I couldn't resist the temptation to put in a cartoon. "My engine's making a weird noise. Can you take a look?" "Sure, just pop the hood." "Oh, the hood latch is also broken." "Okay, just pull up to that big pit, push the car in, and we'll go get a new one." Right, this is if we start treating cars the way we treat consumer electronics. So what's a reasonable design lifetime? Well, with cars, the way it's going is maybe 18 years, say 10 years from the sale of the last product in a model range. Domestic appliances: 10 years because of the spares obligation, plus shelf life, say 15. Medical devices: if a pacemaker lives for 10 years, then maybe you need 20 years. Electricity substations: even more. So from the point of view of engineers, the question is: how can you see to it that your software will be patchable for 20 years? As we put it in the abstract: if you are writing software now for a car that will go on sale in 2023, what sort of languages, what sort of tool chain and what sort of crypto should you use, so that you're sure you'll still be able to patch that software in 2043? And that isn't just about the languages and compilers and linkers and so on; that's about the whole ecosystem. So what did the EU do? Well, I'm pleased to say that at the third attempt the EU managed to get some law through on this. Directive 2019/771, on the sale of goods, says that buyers of goods with digital elements are entitled to necessary updates for two years, or for a longer period of time if this is a reasonable expectation of the customer.
This is what they managed to get through the Parliament, and what we expect is that this will mean at least 10 years for cars, ovens, fridges, air conditioning and so on, because of existing provisions about physical spares. And what's more, the trader has got the burden of proof in the first couple of years if there are disputes. So there's now a legal framework there to create the demand for long-term patching of software. And now it's kind of up to us. If the durable goods we're designing today are still working in 2039, then a whole bunch of things are going to have to change. Computer science has always been about managing complexity, ever since the very first high-level languages, and the history goes on from there through types and objects and tools like Git and Jenkins and Coverity. So here's a question for the computer scientists here: what else is going to be needed for sustainable computing, once we have software in just about everything? Research topics to support 20-year patching include a more stable and powerful tool chain. We know how complex this can be from crypto, looking at the history of the last 20 years of TLS. Cars teach us that it's difficult and expensive to sustain all the different test environments you have for different models of cars. Control systems teach us to ask whether you can make small changes to the architecture which will then limit what you have to patch. Android teaches us to ask how you go about motivating OEMs to patch products that they no longer sell; in this case it's European law, but there are maybe other things you can do too. What does it mean for those of us who teach and research in universities? Well, since 2016 I've been teaching safety and security together, in the same course, to first-year undergraduates, because presenting these ideas together in lockstep will help people to think in more unified terms about how it all holds together.
In research terms, we've been starting to look at what we can do to make the tool chain more sustainable. For example, one of the problems you have if you maintain crypto software is that every so often the compiler writer gets a little bit smarter, and the compiler figures out that the extra instructions you put in to make the loops of your crypto routines run in constant time, and to scrub the contents of round keys once they are no longer in use, are not doing any real work, so it removes them. All of a sudden, from one day to the next, you find that your crypto has sprung a huge timing leak, and then you have to rush to get somebody out of bed to fix the tool chain. So one of the things that we thought is that better ways for programmers to communicate intent might help. There's a paper by Laurent Simon, David Chisnall and me where we looked at zeroizing sensitive variables and doing constant-time loops with a plugin in LLVM; that led to our EuroS&P paper a year and a half ago, "What You Get is What You C", and there's a plugin that you can download and play with. Macro-scale sustainable security is going to require a lot more. Despite the problems in the aerospace industry with the 737 MAX, that industry still has a better feedback loop for learning from incidents and accidents, and we don't have that yet in any of the fields, like cars and so on, where it's going to be needed. What can we use as a guide? Security economics is one set of intellectual tools that can be applied. We've known for almost 20 years now that complex socio-technical systems often fail because of poor incentives: if Alice guards a system and Bob pays the cost of failure, you can expect trouble. And so security economics researchers can explain platform security problems, patching cycles, liability games and so on. The same principles apply to safety, and will become even more important as safety and security become entangled.
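The compiler problem described above is specific to C and LLVM, but the underlying issue, that mainstream languages give the programmer no way to state the intent "this code is for security, don't optimize it away", exists everywhere. As a language-neutral illustration (my example, not the paper's), here is the constant-time comparison case in Python: the naive version short-circuits at the first mismatching byte, leaking timing, while the standard library's hmac.compare_digest exists precisely so the programmer can state the constant-time intent explicitly.

```python
import hmac

def naive_check(supplied: bytes, secret: bytes) -> bool:
    # == returns as soon as a byte differs, so the running time leaks
    # how long a correct prefix the attacker has guessed
    return supplied == secret

def constant_time_check(supplied: bytes, secret: bytes) -> bool:
    # compare_digest examines every byte regardless of mismatches;
    # the stated intent survives interpreter and library changes
    return hmac.compare_digest(supplied, secret)

mac = bytes.fromhex("13371337" * 4)
assert constant_time_check(mac, mac)
assert not constant_time_check(bytes(16), mac)
```

An annotation-based LLVM plugin plays the same role for C: it lets the programmer mark zeroization and constant-time regions so that the optimizer is obliged to respect them.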
Also, we'll get even more data, so we'll be able to do more research and get more insights from the data. So where does this lead? Well, our papers on making security sustainable, and the report that we did for the EU on standardization and certification in the Internet of Things, are on my web page, together with other relevant papers on topics around sustainability, from smart metering to pushing back on wildlife crime. That's the first place to go if you're interested in this stuff, and there's also our blog. And if you're interested in these kinds of issues at the interface between technology and policy, of how incentives work and how they very often fail when it comes to complex socio-technical systems, then the Workshop on the Economics of Information Security, in Brussels next June, is the place where academics interested in these topics tend to meet up. So perhaps we'll see a few of you there in June. And with that, there's a book on security engineering which goes over some of these things, and there's a third edition in the pipeline. Thank you very much, Ross Anderson, for the talk. We will start the Q&A session a little bit differently than you're used to: Ross has a question for you. He told me there will be a third edition of his book, and he's not yet sure about the cover he wants to have, so you are going to choose, in a way that the people on the stream can also hear your choice. I would like you to make a humming noise for the cover which you like more. You will first see both covers, cover one and cover two. So who of you would like to pick the first cover? Come on. And the second choice? Okay, I think we have a clear favorite here from the audience: it would be the second cover. Thanks. And we will look forward to seeing this cover next year then. So if you now have questions yourself, you can line up in front of the microphones. You will find eight distributed in the hall, three in the middle, two on the sides.
The signal angel has the first question, from the internet. The first question is: is there a reason why you didn't include aviation in your research? We were asked to choose three fields, and the three fields I chose were the ones in which we had worked most recently. I did some work in avionics, but that was 40 years ago, so I'm no longer current. All right, a question from microphone number two, please. Hi, thanks for your talk. What I'm wondering most about is: where do you believe the balance will fall in the fight between privacy, the manufacturer's desire to prove that it wasn't their fault, and the right to repair? Well, this is an immensely complex question, and it's one that we'll be fighting about for the next 20 years. All I can suggest is that we study the problems in detail, that we collect the data we need to say coherent things to policymakers, and that we use the intellectual tools that we have, such as the economics of security, to inform these arguments. That's the best way that we can fight these fights, you know: by being clear-headed and by being informed. Thank you. A question from microphone number four, please. Can you switch on microphone number four? Oh, sorry. Hello, thank you for the talk. As a software engineer, arguably I can cause much more damage than a single medical professional, simply because of the multiplication of my work. Why is it that there is still no conversation about software engineers carrying liability insurance and being liable for the work they do? Well, that again is a complex question, and there are some countries, like Canada, where being a professional engineer gives you particular status. I think it's cultural as much as anything else, because our trade has always been freewheeling; it's always been growing very quickly, and throughout my lifetime it's been sucking up a fair proportion of science graduates.
If you were to restrict software engineering to people with degrees in computer science, then we would have an awful lot fewer people. I wouldn't be here, for example, because my first degree was in pure maths.

All right, a question from microphone number one, please.

Hi, thank you for the talk. My question is also about aviation, because as I understand it, a lot of old or retired aircraft and other equipment is dumped into the so-called developing countries, and with modern technology and modern aircraft, the issue of maintaining or patching the software would still be in question. So how do we see that rolling out for the so-called third-world countries? I am a Pakistani journalist, and this worries me a lot, because we get so many devices dumped into Pakistan after they are retired, and people just use them. I mean, it's a country that cannot even afford a licensed operating system, so maybe you could shed some light on that. Thank you.

Well, there are some positive things that can be done. Development IT is something in which we are engaged, and you can find the details on my website, but good things don't necessarily have to involve IT. One of my school friends became an anaesthetist, and after he retired he devoted his energies to developing an infusion pump for use in less developed countries, which is very much cheaper than the ones we saw on the screen there, and it's also safe, rugged, reliable and designed for use in places like Pakistan and Africa and South America. So the appropriate technology doesn't always have to be the whizziest, right? And if you've got very bad roads, as in India and Africa, and relatively cheap labour, then perhaps autonomous cars should not be a priority.

Thank you. All right, we have another question from the internet; the signal angel, please.

Why force updates by law? Wouldn't it be better to prohibit important things from accessing the internet by law?
Well, politics is the art of the possible, and you can only realistically talk about a certain number of things at any one time in any political culture: the so-called Overton window. Now, if as a minister you talked about banning technology, banning cars that connect to the internet, you would be immediately shouted out of office as a Luddite, right? So it's just not possible to go down that path. What is possible is to go down the path of saying: look, if you've got a company that imports lots of dangerous toys that harm kids, or dangerous CCTV cameras that get recruited into a botnet, and you don't meet European regulations, we'll put the containers on the boat back to China. That is something that can be sold politically. And given the weakness of the car industry after the emission standards scandal, it was possible for Brussels to push through something that the car industry really didn't like. And even then, that was the third attempt to do something about it. So again, it's what you can practically achieve in real-world politics.

All right, we have more questions; microphone number four, please.

Hi, I'm an automotive cybersecurity analyst and embedded software engineer, and part of the SAE ISO 21434 automotive cybersecurity standard effort. Are you aware of the standard that's coming out next year, hopefully?

I've not done any significant work with it. Friends in the motor industry have talked about it, but it's not something we've engaged with in any detail.
Okay, I guess my point is not so much a question as a little bit of pushback. A lot of the things you talked about are being worked on and are being considered: over-the-air updating is going to be mandated, and a 30- to 40-year life cycle of the vehicle is being considered by engineers. Nobody I know talks about a six-year life cycle; back in the 80s maybe we talked about planned obsolescence, but that's just not a thing, so I'm not really sure where that language is coming from, to be honest with you.

Well, I've been to closed motor industry conferences where senior executives have been talking about just that in terms of autonomous vehicles, so yeah, it's something that we've disabused them of.

All right, time is unfortunately up, but I think Ross will be available after the talk as well for questions, so you can meet him here on the side. Please give a huge round of applause for Ross Anderson.

Thanks, and thank you for choosing the cover.