I'd like to give a warm welcome to Marco Ostini, who is giving his talk on the interethicality. Please welcome Marco Ostini. Thank you very kindly, thank you for being here. I'd like to start by paying my respects to the local Māori Elders and thanking them for the very warm welcome at the beginning of this conference on this, their land. I have a couple of loves in my life: my wife, my faith, space science as some of you know, free and open source software, and information security. And so I'd like to thank Sherry and Steve and the organisers for giving me a chance to talk about two of them. I work for an organisation called AusCERT, which is, I believe, the second oldest CERT in the world. There's a lot of discussion about that, but it's one of two CERTs in Australia. It's the original and the best. There are two CERTs: one's a government CERT, we're the other one, and we care about pretty much everything. We just care about everything, but we let the federal government guys look after federal government stuff, and we mostly cooperate. And I'd like to thank you for being here; there are really good other talks that you could go to, so thanks for being here. I should probably mention: if you have questions and they're really pressing, interrupt me; otherwise, wait till the end and we'll try to cover them then. Oh, and if information security is something that you care about and you're interested in, go to Jim's talk back there. And you must not miss Peter Gutmann's talk, all right? But do them both. Seriously, Peter Gutmann's talk, if you haven't seen it: hands up here, have you seen it anywhere? Right, go to that talk. It's on Thursday. Just look for Peter Gutmann. He's a rock star. Okay. So FOSS is great. We're the best thing in the world. Yay, free and open source software. It powers basically everything.
90% of the supercomputers in the world, huge amounts of mobile technology, and pretty much every other network device connected to the net. You could easily say free and open source software is ubiquitous and that it's responsible for making the world run. In fact, a lot of the expectations people have of the internet couldn't be met without free and open source software. It's not just a pleasant notion anymore; it's essential for delivering services that people expect. Whether it's at the source, through the transmission, or on the receiving end, not very much happens on the internet that doesn't involve free and open source software. So it influences a lot of stuff. Even Satya Nadella, the new CEO of Microsoft, has come out recently at a conference saying that Microsoft loves Linux now. They love it. And okay, that's just words. But there are 26 separate Microsoft organisations actually posting code to GitHub. Just sort of stop and let that permeate; that has some really big implications. It's no small thing, Microsoft posting code to GitHub. And of course, you're probably aware that they're beginning to open source a couple of their technologies as well. So it's not hard to suggest that FOSS has come a long way from when many of us started in it. And it's important to consider that free and open source software, Linux, and the internet share a symbiotic relationship. Linux, and a lot of free and open source software, couldn't really have existed without the internet; it requires it to develop and to improve. And a lot of the internet, the way we take it for granted today, also couldn't happen without them. They both need each other. So you could say that world domination has already happened. The penguin reigns supreme, the penguin representing not just Linux but all free and open source software.
However, as we know, with great power comes great responsibility. It's great to have such influence and to be doing so much stuff, but what comes with it is that it's not just for fun anymore. It was originally just for fun, but now there are a hell of a lot of implications. For those in the room who aren't familiar with what a CVE is, it stands for Common Vulnerabilities and Exposures. Basically, a CVE is code for a vulnerability: an objective way to describe a particular vulnerability. It's an incredibly important term to use. It allows you to track a vulnerability regardless of the platform it exists on. It's in international use; it's basically a ubiquitous way to track vulnerabilities, and each CVE is unique. There's a CVE database which lists all the different CVEs, what they impact, how serious the impact is, and other attributes. The other thing that's important about a CVE is that, generally speaking, a CVE becomes publicly known. Not necessarily always initially, but a CVE is a way of describing a vulnerability publicly. And let me stress, it's used outside of English-speaking Western nations. Okay, so CVEs are actually really important. The CVE format traditionally is pretty simple: CVE, the year, and an arbitrary four-digit number. It was just four digits up until last year, but this year sees the introduction of five arbitrary digits, and potentially six or seven. Now, that by itself should tell you something. That should set off a warning bell: four digits used to be enough to describe how many vulnerabilities people expected in a year, but it's not enough anymore. And bear in mind, CVEs are really important for tracking vulnerabilities in a public way, but not every vulnerability in the world is getting a CVE, okay?
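To make the format change concrete, here's a rough sketch of a pattern that accepts both the legacy four-digit IDs and the new longer ones. The `is_cve` helper is hypothetical, not any official tool; the pattern just follows the CVE-YYYY-NNNN syntax described above:

```shell
# Match CVE IDs: "CVE-", a four-digit year, then four or more digits.
# The minimum of four digits covers legacy IDs like CVE-2014-0160;
# the open-ended upper bound covers the new five-plus-digit IDs.
is_cve() {
  printf '%s\n' "$1" | grep -Eq '^CVE-[0-9]{4}-[0-9]{4,}$'
}

is_cve "CVE-2014-0160"  && echo "CVE-2014-0160: valid"   # legacy 4-digit ID
is_cve "CVE-2014-10000" && echo "CVE-2014-10000: valid"  # new 5-digit ID
is_cve "CVE-2014-123"   || echo "CVE-2014-123: invalid"  # too few digits
```

Software that hard-coded exactly four digits is exactly the kind of thing the speaker is warning will break under the new format.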
Not every vulnerability gets one; I'd be so happy if they all did, but that's not necessarily the case. This new format has already kicked in, so it's in place now, and you can expect to start seeing five-digit CVEs this year. If you have software that processes them, I'm sure you've heard about it by now and you're all ready for it. So let's take a little walk down the garden path of the vulnerabilities we saw in 2014, some open source vulns. I'll start off by saying there were a lot of them, okay? We'll start in March with GnuTLS. Just when everyone was making lots of fun of the goto fail bug in OS X, out popped GnuTLS, publicly disclosed in March: a false certificate verification vulnerability. Basically it made you susceptible to a man-in-the-middle attack. The vector was remote and unauthenticated, so pretty much anyone in the middle of your communication could exploit the vulnerability. I was looking around for public exploits, and in the limited time I had I didn't find any in particular, although I'm pretty confident I previously saw some. But for the sake of fairness, I put down unknown. So think about why this is important. If you're using GnuTLS and you're saying this particular communication over SSL is meant to be encrypted, firstly, potentially now it's not; it could be rendered back to plain text. But more importantly, SSL isn't just used to encrypt communications. It's also used for attestation, to prove you are who you say you are, okay? That's really important. And when you start mucking about with this, then theoretically you can start injecting packages and other executables into update streams that people may use. So this is a pretty bad vuln, and not a good way to start the year. Then our old mate CVE-2014-0160 showed up. I could pretty much do an entire talk on Heartbleed, but I won't.
And I didn't smatter the slide with the logos and stuff, because it's just one piece of the puzzle here. But the bits I probably should mention: security researchers at Google discovered it and disclosed it to OpenSSL, on the 1st of April, mind you; that seems to be the suggested date. There was a lot of lack of coordination, because the OpenSSL project itself had stagnated a bit, and there was a lot of confusion about what to do next. Eventually Codenomicon became aware of it, and they worked in cooperation to get people ready for the vulnerability. They're the guys who came up with the nice logo and a website, heartbleed.com, to try to tell everyone exactly how bad it was, in a "we're not really kidding" kind of a way. And when it came out, there was still a bit of cognitive dissonance. There were a lot of people going, ah, it's not as bad as you say it is, because it's just access to privileged data; that's not too serious, I don't have to worry too much about that. There's a scoring regime called CVSS which often accompanies security bulletins with CVEs. If it's 10, it's really bad; if it's a low number, it's not so bad. And Heartbleed was only rating, I think, a 5.6 or so; if anyone here remembers, correct me. But basically that was just seriously misrepresenting how bad it was. So how bad was it? Not only could you read somebody's private key directly out of their server from anywhere in the world, without leaving any trace on that host, in any logs or in any way they could find out without using an intermediary layer. But if you kept at it in 64K chunks, you could read anything else in the server process's memory. And the implications of that are really bad, and probably don't need too much explaining. Once you can do stuff like that, you pretty much own the box, and you certainly can, via one method or another, start executing code on it.
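For reference, the affected releases were OpenSSL 1.0.1 through 1.0.1f. A rough first-pass triage an admin could run on a host looked something like the sketch below; note this is only suggestive, since distros backport fixes without bumping the version string:

```shell
# Heartbleed (CVE-2014-0160) affected OpenSSL 1.0.1 through 1.0.1f.
# Version strings alone are not conclusive on distros that backport
# patches, but they are a reasonable first pass.
ver=$(openssl version | awk '{print $2}')
case "$ver" in
  1.0.1|1.0.1[a-f]) echo "$ver: potentially vulnerable, investigate" ;;
  *)                echo "$ver: outside the 1.0.1-1.0.1f range" ;;
esac
```

A host that passes this check can still be vulnerable through statically linked copies of OpenSSL in other software, which is part of why the cleanup took so long.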
Executing code that way is not inconceivable. The really bad thing, though: all these important people around the world with really big public-facing sites, like banks and very large corporations, all of a sudden couldn't trust their public-facing private keys anymore. So an awful lot of private keys were dropped and refreshed, and that by itself caused a bit of consternation. But it took quite a while for it to really sink in. Because the release was so ad hoc, and an awful lot of the information security industry wasn't properly notified, it was all done a bit on the fly. It took a little while for people to get up to speed and to make sure they properly communicated exactly how bad it was, and it was bad. And the other thing that even today isn't properly appreciated: it didn't just impact web servers. A lot of people think it's just a web server problem. It is not. It impacted any service running SSL. That included all your mail stuff as well, and potentially NTP, DNS, VPNs, and your clients. It's not just a web server problem. If you have an Android phone running 4.1.1, it's still vulnerable to Heartbleed, and you'll definitely never get a patch now. So that phone in your hand can have code executed on it, potentially by somebody you've never met, with no way of tracing it. Heartbleed, suffice to say, garnered a bit of attention. Quite a few people looked at it and went, oh my, and that's toning it down a bit. So a lot of research all of a sudden happened on OpenSSL, with people going through this really old code, including a lot of information security researchers, a lot of white hats. And quite quickly afterwards came the seven siblings of Heartbleed, some of which were run of the mill, but one of particular note is CVE-2014-0224, which was not quite as bad as Heartbleed but comparable. You certainly want to patch it.
It could do some serious evil. It was a full buffet of vulnerabilities: a bit of direct remote code execution, a bit of denial of service, some privileged data access similar to Heartbleed itself. So it was really pretty important to patch this bunch. The actual coordinated release of this particular patch was, from the information security perspective, done a lot better. My understanding is that some of the patches on various platforms may not have been optimal, but overall everyone was expecting it, and as soon as it came out the word was very clear about how bad it was and how quickly it needed to be addressed. And even despite that, Apple took three months to patch CVE-2014-0224. Its public disclosure was in June, and there were lots of publicly available exploits for the various bugs. Now, another rock star of vulnerabilities last year was our old pal Shellshock. This was an interesting vuln as well, in that it kind of horrified people when they stopped to think about what it could do. Initially I remember talking to some security researchers and they were going, oh, it's Bash, so what, whatever. And then you had to stop and say, yeah, but CGIs, a lot of scripts, potentially stuff coming through SMTP; basically there are a lot of remote ways you can exploit this vuln. And when you sat and described it, the penny dropped and they went, oh, hell. Eventually people began to realise it was actually worse than Heartbleed. Pretty much, you could execute code remotely. Thankfully, generally, depending on the method being used to execute the code, there were ways to see in your logs that somebody was trying to do that, if you were looking at your logs. It was evil, suffice to say. A patch came out for it eventually, and it was pretty coordinated.
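The test that circulated widely at the time shows the mechanism: Bash imported function definitions from environment variables, and vulnerable versions (CVE-2014-6271) would execute commands smuggled in after the function body. Only run this against your own machines:

```shell
# A function definition in an environment variable, with an extra command
# appended after the closing brace. A vulnerable bash executes the
# trailing `echo vulnerable` merely while starting up, before running
# the command it was actually given.
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
# A patched bash prints only "this is a test"; an unpatched one prints
# "vulnerable" on the line before it.
```

Anything that let an attacker set an environment variable for a bash child process, such as CGI request headers, became a remote code execution vector, which is exactly the penny-drop moment described above.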
But then quite quickly, Tavis Ormandy and a bunch of other security researchers started playing with it and discovered that the patch wasn't really a complete fix, and they had to go back and do it again. That quickly spawned another couple of CVEs, and there were quick follow-ups like, oh, it's not quite finished, put this patch on as well. And it led to a lot of consternation. Suffice to say that by the time Heartbleed and Shellshock were well known, pretty much every security researcher worth their salt was looking through open source code. They were thinking to themselves: forget the "many eyes make bugs shallow" thing, that's a big load of hooey. A lot of security researchers rejected it outright, and they were poring through old code. They were looking specifically for old code to find vulns in, and they were particularly interested in the ubiquitous stuff, which leads to our old pal NTP. Okay, so this vuln may have been missed by some people in this room. It came out fairly late last year, in December, when a lot of people may have already been on leave for Christmas, and it had two nasty sides: a bit of remote code execution and some good old-fashioned denial of service. There are publicly available exploits for it, and it's led to all kinds of security researchers saying various things about ntpd. One of them just came flat out and said: don't patch it, just stop running it, abandon ntpd altogether, it's unfixable. That led to a bit of discussion. The guys at Google's Project Zero were responsible for finding that particular fellow, and they did some good work doing that. So how many vulnerabilities were there last year? I've only mentioned a couple. There was a whole bunch more I was going to mention, but we don't have time, so I scaled it back. There were a lot. Just to give you an idea, let me just say: more than ever before.
Okay. Just based on what AusCERT publishes in our bulletins, which are all publicly available: we published 2,500 bulletins last year, which is many times more than we would normally publish in previous years. And it stands to reason that when there are more vulnerabilities publicly known, there's an increased potential for super vulnerabilities to be among them. By super vulnerability, I'm describing the Heartbleeds and the Shellshocks. The requirements of a super vulnerability would be: something with a significant impact, and a denial of service can be a significant impact; something with a reliable exploit, and generally if there's a reliable exploit, it tends to become public; and a ubiquitous install base, a product that's pretty much everywhere. Which is why I'm not really mentioning the Drupals and the WordPresses, et cetera. There are a lot of those, but they're not as ubiquitous as other stuff, and they just have so many issues it's not funny, sadly. And they do get used for very serious attacks against others; it's not a neutral thing. So a lot of these vulnerabilities are found by white hats, and you could easily imagine project maintainers thinking: why are these white hats trying to break my stuff? There are a couple of reasons, to put it in context. There was a lot of increased research after the Snowden revelations, because the Snowden revelations implied a lot of things, including that vulnerabilities were being specifically collected and used, and that there were a lot of vulnerabilities people weren't aware of that were known to a limited number of people. And obviously they do it for the love of information security. Amazingly, a lot of information security white hat guys really dislike bugs a lot, and that sort of drives them on.
And the other really important thing to bear in mind is that most of them are actually really dependent on open source products; Google's Project Zero is sort of an example of that. But overall it's probably important to understand that white hats generally really want free and open source software to develop and become stronger and better, right? They're not there to break it. Unlike my presentation, which just broke. So once a white hat is aware of a vulnerability, you can take for granted that the bad guys are as well. And when I say bad guys, who do I mean? I'm not talking specifically just about finding vulnerabilities; I'm talking about exploiting them, which is a whole different thing. So when I talk about malicious actors, or colloquially the bad guys, imagine an inverted triangle, or you don't have to imagine, because I've got it on the slide. An inverted triangle. Towards the top, the biggest proportion of people would be activists, who in many cases have a real, genuine cause they're trying to promulgate; they have something to say. Frequently mixed among them are a lot of script kiddies who are discovering for the first time that you can actually exploit stuff on the internet, and that people run stuff on the internet that's not secure, can you imagine it. So there's a lot of learning in there, and some of it's political. And, unfortunately or fortunately depending on who you are, some of them get found out, get arrested, go to jail. But definitely unfortunately, some of them end up merging down into the second layer, sometimes without even knowing it, where they're unknowingly working for organised crime. Now, organised crime is very much a global thing on the net, and it exists for just one goal. It's not complicated: they're there to make money. That's it. They want to make a lot of money.
And so to that end, organised crime exploits a lot of vulnerabilities when they get their hands on them, and they do it to make money, in volume, and quietly. Activists tend to make a big noise about what they've done, because they're trying to promulgate a cause; fair enough. Organised crime do not. So when they're abusing your systems, you generally won't see much sign of it until it's really obvious. Then right at the bottom, the smallest proportion but possibly the most effective, and fairly active, is nation states. Some are better than others, and there's a little bit of sharing among some of the nation states. For example, New Zealand, Australia, the United States, the United Kingdom and Canada are all members of the Five Eyes. So, generally speaking, when one of them has a bit of intelligence, there are normally ways for them to share it with all the others. So if particular legislation comes in in one country, it pretty much has an influence on all the others. But it's really important to understand that this group at the bottom includes all nation states, and which nation state it is heavily influences what they get up to, what their intentions are, what they're trying to do. Some nation states, maybe in a really badly articulated way, kind of have good intentions. But for the longest time I didn't think there was any substance to the attribution of the recent attack against Sony to North Korea; I just thought it was a big load of twaddle. Until more recently, when I had a few conversations with some infosec colleagues. Basically, the US doesn't want to give away what they know, because otherwise it would impede their ability to do what they do. But it looks like this was actually, at least in part or in whole, genuinely attributable to them.
And if that's the case, it sets a whole different precedent, because now you have a nation state that's prepared to break, in a very public way, the stuff of another nation state. Whereas up until now, generally speaking, what they've tried to do is collect things to prevent problems rather than to make them. That's a big generalisation. But when I'm talking about bad guys, this is who I'm talking about, and it's really important. I'm sure pretty much everyone in this room already takes this for granted, but it's really important to understand that you share the internet with these people. When you connect your phone to the net, you're connecting it to these guys as well. There isn't a disconnect between you and them, is what I'm trying to say.

All right, time for a few concerns, one of them being how much time I have. All right: modem routers are generally horrible. I hate them with a passion. They're a big source of concern. This is from February last year: a research project, let's skip the details, found that 80% of brand new modem routers you could buy from Amazon had vulnerabilities, and 34% had known exploits publicly available for them, right? Now, this is before Heartbleed and Shellshock, so it's much worse than that now. At DEF CON last year there was a competition, SOHOpelessly Broken, which tried to evaluate just how bad modem routers were, and in the process of running the competition they found 15 new zero days. And I think there are exploits for most of them. So suffice to say, modem routers are really horrible. So why do you care? Modem routers are bad, so what? Every device you have that connects through your modem router at home is vulnerable. From the black hat's perspective, they don't care about owning your endpoint anymore. Why bother? I mean, if it's easy enough, sure, they'll do it.
But if they can compromise your modem router, then they own your endpoints, okay? When I'm talking about man-in-the-middle, one of the best ways to do a man-in-the-middle is to compromise somebody's modem router. And there has already been lots of malware in modem routers: stuff like Carna, which was arguably benign, and TheMoon. There's a very famous case in Poland where tens of thousands of modem routers were compromised to redirect people doing their banking, so that banking creds, and basically money, could be stolen, all right? It doesn't take too much genius to work out which group of the bad guys I'm talking about there. Okay, abandoned Android concerns me a lot. Recently all support was stopped for Android 4.3 and below, which means about a billion current devices will never get patched, and they have known vulnerabilities on them. One of the better known vulnerabilities is in WebView. So if you have an Android device in your pocket or near you that's running 4.3 or less, you are very, very vulnerable, and Google's not going to do anything to help you. And what's the community going to do to help you? This is a really serious, pressing problem, you know? Our immediate response is a bit of cognitive dissonance: it's too big a problem, I can't handle it. Shrug shoulders and hope for the best. But it's real. Internet of Things devices: there are so many, many cases of very evil badness. Recently I saw a home energy product you could buy that you plugged in to monitor all your power, and somebody started tracing its communication over the net, and it was immediately calling China and sending stuff through a proxy. Quite a few of these devices do that. It may not always be malicious, but once there's a back door like that in place, a lot of malicious stuff can happen with it.
And then there's the firmware that comes with Linux distros itself. If you plug in a USB camera or a USB Wi-Fi adapter, for example, in most cases you need firmware to run it, and that by itself raises some really serious concerns. When people have run binwalk across it, they've found some bad stuff in it, so much so that Shuttleworth recently came out and said that none of that stuff should be trusted; it's flat-out evil. And it comes with every distro you're running on your desktop, including mine.

Some more concerns. A fairly significant CMS has described their process as: a CVE will be requested, and when it's issued to them, they'll add it to their bulletin eventually. That's particularly not good enough. systemd, whatever you think of it, concerns me by itself because it's a big wedge in the community. Hopefully it'll eventually all be smoothed over; it's not the first wedge in the community. But whenever there's such a big division in the community, that's certainly a breeding ground for really serious bugs, and systemd looks like it'll be fairly ubiquitous. And overall, the diminishing confidence in the community: when there's one superstar bug after another, people start thinking, well, maybe this FOSS stuff just isn't good enough. That by itself is a concern for me, because we can do something about it. Ultimately, insufficient eyes are the problem.

So let's talk about handling security vulns, for those here who find them; I'd like to go through this very quickly. A vuln shouldn't be made public the moment you discover it, okay? Why? Because the black hats, all of them, will start using it straight away. Generally, what you should do is contact the maintainer, or if you are the maintainer, handle it yourself until such time as a patch is ready. You should straight away ask for a CVE. There are a lot of people you can ask for CVEs from.
If you aren't able to issue them yourself, you can ask Red Hat, who will happily give you one. You can ask Debian if you don't like Red Hat. If you don't like either of those two, go to JPCERT; they'll happily issue you one, right? So you can generally get a CVE within a day if you ask for it, okay? And when you get the CVE, you should write the accompanying CVE report, because the person who found the vulnerability should be in a pretty good place to describe it. Describe that vulnerability properly. Then, obviously, make sure the fix is available and verify that it's working, and do that in private. Once you're happy with it, you publish the patch and you sing about it from the rooftops. You publicly announce: this patch is available and it fixes this CVE. That removes ambiguity, fixes the problem, and prevents the black hats from exploiting it. A few other things that will help. Package managers help, but they really have to do signing and validation. OpenWrt, even though it has the ability to do signing and validation, currently doesn't. The Open Home Gateway Forum wrote the code and gave it to OpenWrt, and CeroWrt currently does do package signing and verification. So if you're looking for something to run on a modem router, give that a consideration. Justin here in the audience is working on the OpenRouter project, which will be open hardware along with accompanying open software for routers; when that's readily available to everyone, it should help a lot as well. With Internet of Things devices, it's not impossible to provide package signing and verification; it's difficult, but there are methods. Now, Python: I've tried to do as much research as I could on this, and I can't see them doing standard package verification and signing yet.
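On the consumer side, what signing and validation buy you can be sketched in two commands, assuming a project that publishes a checksum file plus a detached signature over it; this is a common convention, but the exact file names vary by project:

```shell
# 1. Verify the checksum file really came from the project's signing key
#    (the key must have been fetched and trusted beforehand).
gpg --verify SHA256SUMS.sig SHA256SUMS

# 2. Verify the downloaded artifact matches the signed checksums.
sha256sum -c SHA256SUMS
```

If either step fails, the artifact may have been tampered with in transit, which is exactly the man-in-the-middle update scenario a compromised modem router makes possible.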
Python's lack of standard package signing is a real problem, because a hell of a lot of stuff runs on Python, and it means any of it is exposed to a man-in-the-middle when you're updating; consider what I mentioned about modem routers earlier. And it's great if you have the technology to do signing and validation, but once it's in place, you should actually use it. Recently Docker had a very serious vulnerability where they had the facility but weren't using it, so you could still do a man-in-the-middle and have stuff run that shouldn't be running. What will also help, of course, is including CVEs when you publish your security bulletins. It's really important because it removes ambiguity and means people can handle things quicker. Bug bounties can help as well, but make sure that when you disclose a bug, it goes back to the actual people running the bounty. If you go through a third party, there's a chance you might be selling it to, well, a third party, which could include anyone, to be honest. Some leading by example within the community helps too: eating our own dog food, showing that we care about security and actually doing it ourselves. A healthy community is pretty essential, because dead communities don't patch bugs, as we saw in the case of OpenSSL. And code reviews can help a lot as well; that can be a bit tricky, because you're asking somebody else to look at what's wrong with your code. So the Core Infrastructure Initiative has been put together by the Linux Foundation out of necessity. There's a bunch of people paying for it, including Microsoft, and it's doing a lot of good work. It's currently funding a lot of research and patch fixes to OpenSSL, OpenSSH and NTP, which are definitely in need. This kind of thing is really worthwhile because it doesn't cut the original developers out; it works with them, but it goes through and reviews the code and tries to find things that need attention. Which leads me to a modest proposal.
I would like to think the Linux Foundation could extend that. I think it would be excellent if the Linux Foundation, with appropriate funding and support from all the people who make a hell of a lot of money out of free and open source software, could set up a permanent body where any FOSS project could request a code review, and potentially that could be done with some frequency; for the more important stuff, it could happen once a year. That would enormously help and enormously improve the overall security posture of Linux. Sorry, of free and open source software. We should try to avoid a self-made zombie apocalypse, even if it's kind of a cute steampunky one. Because if we don't address this issue now, just think about it: a billion ownable Android devices. What could go wrong, you know? We basically should lead by example and really attempt to address some of these problems. They're going to be complicated, but essentially we have to say to people like Google: you know, this isn't good enough. You make a lot of promises, but this doesn't cut it. And alternatives need to be worked on genuinely. And we need to code like the whole world is watching, because good and bad, they are. Questions, if you have any? Yes? Sorry, I'll ask for a microphone to be taken up there so it can be kept on the recording.

[Audience] Yes, so the big problem really is logistics. Even when, for example, Google do push these updates into the baseline for something like 4.3, it then goes through the telcos, and the telcos have to push it out, et cetera, et cetera. So Mozilla, for example, could be working on Firefox 3.0, but 80% of the devices out there are on 1.1. And then when we're looking at the embedded Internet of Things, then yeah, as you said, it's huge.
And not only that, you know, it's hard enough for the average technologist to upgrade the firmware of their new Philips light, let alone the person who doesn't know anything about technology. Sure, sure. So this is something that really concerns me from a logistics point of view going into the future, and I don't see a way to solve it, because it's not just a technology problem, it's a human logistics and distribution problem. Absolutely. What are your... Two answers to that. Firstly, a precedent is always required. And CeroWrt doing package signing and verification in a distro for modem routers is extremely good, because now it's possible to say to all the vendors who make home modem routers: do that. They're doing it, it works. So having precedent is really essential. And look, if Jolla started demonstrating with open source mobile software that they would support, or would hand over community support of, their products after end of life, so that patches could be built and rolled out for essential stuff on their operating system, that sets a really good precedent for saying to Google: you know what, you should do exactly that as well, because what you're doing is not good enough. It's possible, it's not impossible. The other... I agree with you very strongly, though, from a user perspective, and this is part of the reason why I'm speaking to you here today. When we code open source products, of course we have to include security, but we have to make it the default. We have to make it easy for users, so that it's already happening. It's not something they turn on; it's something that's already working and already available, because if it's not there, then obviously they're going to be in big trouble. Now, it's all one step at a time, but those are the two most important steps in my opinion.
Setting precedent shows this is the right way to do it: it's doable, here's the code, it's not going to cost you to do it, just get on with it, because it actually gives them a path they can follow, not just theory. And making security the default for users helps a lot, because the average user is not going to think about it; they're just not. Just one last question. Coming from the embedded world myself, most embedded developers don't understand security. A lot of embedded development is actually going into industrial control systems and industrial monitoring systems, and the businesses they work for don't care about security. But that's obviously a really bad thing, because they're all putting network points on these devices, which are then put on, sometimes, just the local office network. How do you make them understand that this is actually an issue? It changes from one person to the next, because ultimately what you're asking is sort of a personal question. But you could refer to a factory, from an industrial control perspective, recently in Germany. It had a smelter. Its industrial control system was attacked, unknowingly in fact, and just had a bit of malware injected into it, and the entire steel smelter melted down, the whole factory caught fire and burnt to the ground, everybody is currently out of work, and it hit their bottom line pretty badly. Those are the real-world effects of not caring about security in embedded devices and industrial control systems. It's not a theory. There are real, tangible impacts. So maybe it's not nice to use horror stories as a stick to beat people up with, but sometimes it's necessary, just for a reality check. So possibly, I don't know, maybe it's a good idea to start a website. Instead of Cake Wrecks it could be SCADA Wrecks or Embedded Wrecks, and it could just be a big collection of exactly what can go wrong if you don't do it properly. Have a look at that. It's just an idea.
I'd like to thank Marco for his talk. Thank you very much.