So as you guys all know, my name is Ross Schulman. One of the things that I've been working on under the auspices of the cybersecurity initiative for the past little while here has been studying vulnerabilities, software vulnerabilities: how they're found, how they're distributed, how they're sold on the black market oftentimes, how they're reported to companies, how they're patched, and how those patches are disseminated. And because everybody here on stage knows more about that process than I do, I thought I'd bring them all together and do some research for my paper that you all can listen to. It's really great. It's convenient. Good trick. So I wanna do real quick introductions and then dive into it. So from, I don't know, stage right to stage left, going down the line. Christopher Robinson, otherwise known as CRob, is Senior Program Manager for Red Hat Product Security. He's got 18 years of enterprise-class engineering experience and has worked at several Fortune 500 companies across the financial, medical, legal, and manufacturing verticals. And we are happy to have him here today, to present a sort of open source look at things. Thank you. Art Manion is a Senior Member of the Vulnerability Analysis Team at the CERT Coordination Center at CMU. He has studied vulnerabilities and coordinated responsible disclosure efforts since 2001, and gained mild notoriety for saying "don't use Internet Explorer." At the time it was true. No longer true. Stephan Somogyi is a Product Manager with the Security and Privacy team at Google. His portfolio includes Safe Browsing, Google's end-to-end system that protects over a billion users worldwide from malware and phishing, and, relevant to the presentation we just had from Amy, he also manages End-to-End, the strong-encryption Chrome plugin. So thank you for that. 
And then finally, and certainly not least, Katie Moussouris is Chief Policy Officer at HackerOne, which is a platform provider for coordinated vulnerability response and structured bounty programs. She's a noted authority on vulnerability disclosure, and she advises lawmakers, customers, and researchers such as myself, oftentimes, to help legitimize and promote security research and help make the internet safer for everyone. So thank you for that. So without further ado, this is gonna be a slightly less formal panel, more of a discussion amongst everybody. We're gonna discuss two case studies of recent high-profile bugs: vulnerabilities in software, or actually in some cases sort of in hardware as well; how those vulnerabilities were discovered, how they were communicated to the parties responsible for the software in question, and how they were patched and fixed at the end of the day. So the first one we're gonna discuss, a lot of people have probably heard about; it made quite a splash at DEF CON this past year. On August 10th of 2015, Charlie Miller and Chris Valasek published a paper entitled "Remote Exploitation of an Unaltered Passenger Vehicle." And if you have any inkling of what that means, it is a scary, scary title. Basically what Chris and Charlie discovered is that they could take a completely stock, fresh-off-the-factory-floor Jeep Grand Cherokee and, through the use of a laptop and a cell phone and a couple other pieces of gadgetry, take over that Jeep remotely, not in the car but from anywhere, and drive it. It is a terrifying prospect. And they laid this all out on the DEF CON stage last August. Notably, the cover of the paper, and I urge everyone to look at this, has both Chris and Charlie wearing tank tops that say "sun's out, guns out." But the content after the title page is really quite serious. 
And so I wanna dive right in. Obviously there's a terrifying image that comes up when we first get this in our heads, but as a blanket-level question for everyone, to kick off our conversation: how is the Internet of Things revolution, where we're connecting everything to the network that was not connected before, impacting what you guys see in vulnerability research, disclosure, and patching? And I'll let anybody jump in on that. Well, from the perspective of trying to find the right contact at some of these organizations that suddenly find themselves as software vendors, essentially: these were vendors who were designing their products, and there may have been some software to make the product work, but the internet connectivity that they're adding, sort of as a product feature, they're not appropriately securing. And they also, by and large, are not introducing a way to actually handle security vulnerabilities that are reported to them. There's often no way to contact these folks. I think Rapid7 did a great study on internet-connected baby monitors, and of all the different companies that have these apps where you can watch your baby remotely, there was only one, and that was Philips, that even had a published way to contact them about security vulnerabilities and some sort of a process to deal with it. So what we're seeing with the Internet of Things is a proliferation of things on the internet with absolutely no plan to handle security issues when they come up. So to Katie's point, new players in the coordinated vulnerability disclosure game, absolutely. Another angle we're looking at is the cyber-physical safety issues. Baby monitors are one thing; you have a privacy issue there, it doesn't feel good to have someone watching your kid. But to the Jeep example, we've got internet-connected physical devices that could hurt, injure, or kill someone. 
That's much different than "I lost my credit card to a phishing attack" or a vulnerability in a PDF reader or something like that. So, digging in a little bit on Charlie and Chris's, I dare say, epic hack: I think the first thing I wanna dig into is, what did they do right? I know that at the time of the DEF CON talk they had already, for months, been working with Chrysler to make sure that Chrysler knew about it. And so I'd like to talk a little bit about what that means. Obviously that's a good way to do disclosure, but what did they do right in this process? I mean, ultimately, the point of doing responsible vulnerability disclosure is making sure the right people know the right details, in order to enable the largest number of people to be protected. And so it starts from the ability to communicate, and then goes from there, and not being penalized for it. I mean, it's not that long ago, in the dark days, when companies' first response to any form of vulnerability announcement was to rattle attorneys at the reporters. Fortunately, by and large, the enlightened parts of the industry have moved beyond that. It's not exclusively so, but fortunately most people are doing that. And I think that's the best starting point right there: just to talk. So yes, and unfortunately unenlightened parts of the industry still rattle attorneys these days. Some of these new players to the disclosure game are still learning how to respond, and whether lawyer-rattling is appropriate or not. Something to your question: they got attention. Yeah. And that's a two-sided discussion. I feel both ways about that, but getting attention helped raise the priority of something that was very serious, I think. So there's something to be said for that. 
It's a real dichotomy we're in right now, where you have the flashy presentation of hacking a Jeep, or having a name and a logo on a vulnerability, which is very exciting to the public and catches their interest. It helps sometimes raise the priority of the issue. But the negative side is that sometimes mountains are made out of molehills. Something is made out to be more important than it really is, and there are other, more important issues that should be dealt with. I think hackers are motivated just the same as any other human being on earth. It's a combination of factors: compensation, recognition, and pursuit of intellectual happiness. And these folks definitely got a huge amount of recognition, positive and negative, for their method of disclosure. But I just kind of wanna ask the question: everyone who is familiar with car hacking knows it's Charlie and Chris. Has anyone heard of a person named Marc Rogers? He also did a car hacking talk last year, and also worked with the vendor, in that case Tesla, but none of you have heard of him. And he did all of those things except drive a car on the freeway with somebody else behind the wheel not able to control it. So what is interesting about that attention-getting is, yes, it raises the level of awareness for the general public. In the Jeep case there was a recall, so it raised the level of awareness that the customer had to take action to protect themselves. But because Tesla has an over-the-air update model, there was no real reason to cause such a fuss in order to get uptake of that update. And here's another researcher who is essentially doing all the right things, and you've never heard of him. Well, now you have. Hello, Marc. 
Well, and so that leads into my next question, and that is: I think a lot of people look at what Chris and Charlie did and said, oh, that falls anywhere on a range from completely and utterly irresponsible to, well, it's sort of stunt hacking. It's for the glory of the hack, basically. And so a question for you guys: first of all, is "stunt hacking" a useful nomenclature? Is that a thing that we need to talk about or worry about? And if it is, is it a useful thing or a dangerous thing? So, without necessarily using those words: another angle on this is that raising broad public attention is one thing, but there are cases where the software developer or vendor is not responding, and sometimes, unfortunately, it takes public disclosure without prior coordination, or without a fix being in place, or some stunt awareness-raising activity, to motivate a response from the software developer or vendor. That's another reason you would do something like that, potentially. It's a really mixed bag. I mean, I'm gonna hold the other half of my point of view on this one, but there are reasons to do it; raising awareness and driving a vendor response I think are two reasonably valid ones. And if all you accomplish is raising the bar of knowledge on the part of your typical consumer, there's a certain benefit there. I mean, you're trying to find the right balance: freaking the hell out of people is, short term, very beneficial, gets you lots of column inches, but long term, I think there was a study yesterday, or somebody published some data, that said of the people polled, only 25% even remembered the Jeep hack from last year. I think Andy Greenberg did a follow-up to his piece, and there's two sides to that. There's the "wow, 25% actually remembered," and then there's the "well, for 75% it just became part of the background." So there is, I hesitate to say, consumer apathy. 
But it's... And since Heartbleed we've sort of felt like we need to brand every bug. Is that contributing to the sense of "oh, it's just another whatever"? I mean, in some sense people are at least aware that there's a thing called a software vulnerability now, which is probably a net good, but at the same time, every week there's a new one with a flashy HTML5 website that doesn't load on old browsers. It's unfortunate that over the last few years we've had so many high-profile incidents, whether it's a breach like Target or Home Depot, or it's been a branded vulnerability. I absolutely do believe consumers are numb to it, and you sometimes will see researchers competing to have the flashiest thing. We had an incident called DROWN last week where there were actually several problems going on, and several competing research teams. And at the end of the day, one of the teams disclosed early to trump the other folks and tried to capture some of the headlines. Interesting. That was a little against what we all agreed upon. And I think, in terms of raising awareness, the full disclosure versus coordinated disclosure debate is one of those things where reasonable people will disagree on the best way to minimize risk. When there's a vendor that's unresponsive, or doesn't respond for a very long time on an important issue, often releasing public details is the only way to get the vendor to move or get the end user to deal with it. And I think of stunt hacking as an evolution in amping up that volume. So even though they coordinated with the company in question, which was Chrysler, they amped up the volume on the disclosure to raise the awareness and have people take action. Now, I think the more interesting problem is: why does it take a stunt hack to get people to apply a patch? And an even deeper question: why is the consumer responsible for applying that patch to that device, in that case the vehicle? 
So I would love to see more vendors not only take security seriously but actually take the distribution of patches seriously, because as our infrastructure becomes more and more dependent on interconnected devices: consumers, are you really going to patch your fridge? Will you patch your toaster today? See, Rob will. You know. I patched my TVs this week. Well, and you know, if it's burning your toast, yes, you probably will. But otherwise, if it's gonna be used in a distributed denial of service attack to go after a nation state, will you actually think to patch that toaster? Probably not. So I think we need to look at the advent of securing the Internet of Things and all these internet devices as a chance not just to build security in from the ground up and write more secure code, but a chance to actually fix the problems with distributing patches. We don't have a zero-day problem. We have a patch distribution problem, folks. And that's gonna be endemic across the whole Internet of Things space. Many of these companies aren't traditional software distributors; they don't understand all the processes that you put into place, and they're kind of pushing a product out to market. It's very exciting, but they don't have the methodology around having a webpage with contact information: you find a problem, tell us. They don't have any infrastructure for pushing out patches. They have no mechanism to really talk to their consumers, because the scale of their consumers isn't just a handful of enterprises; it's potentially hundreds of thousands. Yeah. Everyone with a fridge. Yeah, exactly. Well, and it's turtles all the way down as well, because there are plenty of nominally automatic patch mechanisms that are themselves insecure. So you wind up having to disclose a vulnerability against the patch management system. And use the patch management system to patch itself. 
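The "verify before you install" step the panel is describing can be sketched in a few lines. This is a toy illustration only, not any vendor's actual mechanism; the key, payload, and function names are all hypothetical, and a real over-the-air update system would use asymmetric signatures (so devices hold only a public key), not a shared HMAC secret baked into every unit.

```python
# Toy sketch of signed-update verification on a hypothetical IoT device.
import hashlib
import hmac

VENDOR_KEY = b"example-shared-secret"  # illustration only; never ship a shared secret

def verify_update(blob: bytes, signature: str) -> bool:
    """Accept an update blob only if its HMAC-SHA256 matches the given signature."""
    expected = hmac.new(VENDOR_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def apply_update(blob: bytes, signature: str) -> str:
    """Refuse to install anything that fails verification."""
    if not verify_update(blob, signature):
        return "rejected: bad signature"
    return "installed"

firmware = b"new firmware image v1.2"
good_sig = hmac.new(VENDOR_KEY, firmware, hashlib.sha256).hexdigest()
print(apply_update(firmware, good_sig))           # installed
print(apply_update(b"tampered image", good_sig))  # rejected: bad signature
```

The point of the sketch is the shape of the pipeline, not the crypto: every patch-distribution channel the panelists wish existed needs some version of this check, or the update mechanism itself becomes the vulnerability they describe.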
Assuming it's not compromised. Yeah, secure over-the-air updates: without that, there's no hope of consumer end-user stuff ever getting fixed. That has to be in place, and we're well behind. Well, that was really uplifting. Let's turn to the other case study that I wanted to talk about today. On February 19th of 2015, Marc Rogers posted a blog post about some adware that he had found on Lenovo laptops, which had come pre-installed with a program called Superfish, from an Israeli company, that basically replaces the ads on the websites you're visiting with its own ads. And Lenovo had pre-installed this on the laptops. Now, the problem here is actually not so much the adware, although that's, I think, a separate issue that could be discussed at length. In order to do its job on secure websites, Superfish came with another piece of software, from a company called Komodia, which basically installs a local root cert on the laptop. And unfortunately, in this case, the local root cert used the same private key on every single laptop that Lenovo sold that year. Meaning that once that private key was broken, and it was broken rapidly, anybody could man-in-the-middle attack any Lenovo laptop that had the software installed on it. So this is a really interesting vulnerability, in large part, and this is the first question I wanna throw at you guys, because this is a case of sort of one man's vulnerability is another man's compliance regime. And I'd love it if you guys could take that and riff on it for a minute. Just go. Let's see what comes up. So the business model, right, was: replace ads and make ad revenue right off of that, or make hits per ad, that sort of thing. In this case, yeah. So yeah, still, it's a feature. I've had a number of phone calls throughout the years where the vendor I'm talking to says, "that's a feature of our software." 
And we say, well, we call that a vulnerability. Usually you have to talk without those words, talk about the behavior, and then you agree on the behavior; call it what you want, anyway. So feature versus vulnerability is an interesting discussion. I do wanna just point out, so the Lenovo came with Superfish. There's an interesting supply chain mess here, right? Komodia makes this piece that's in Superfish that's on Lenovo laptops, pre-installed. The Komodia redirector, or whatever, is in a lot of other stuff. So that problem was not at all limited to just Lenovo laptops. A lot of sort of net-nanny, help-save-my-family, monitor-my-family's-internet-traffic sort of stuff had that same exact problem. So: weird supply chain, bundled software that comes with your device, comes with your laptop, comes with your download. More software is worse in general. You do not want more software; you want less. Well, and look at how to uninstall it. It was very complex to rip that thing out. There were several steps you had to do to eliminate it from your system. So from a consumer perspective it would be very difficult. It's not an easy button, click it and it's gone. It was challenging. So, just to take a step back for some folks who may not know what a man-in-the-middle attack is: if you're trying to set up a secure communication with a far point somewhere on the web, you do a little handshake that says, "hey, are you who you say you are," and the other server says, "yes, I am," and there are ways to validate that. A man-in-the-middle attack, if you can get on the wire somewhere between those two points, lets you basically play each end off against the other, if you know enough about the private key that they're trying to agree on, essentially. So you can listen in on what was supposed to be a secure conversation, and that's basically what this software did. 
It was just "on the wire" in the computer, rather than on the wire somewhere in the network, which makes it even easier. What was pernicious was that it was user-invisible. The user went to https://theirbank.com, and they had the little lock and all the things in the browser that tell the user "you are secure," and the software, unbeknownst to the user, not deliberately installed by the user, was completely undermining that security. That's what the problem was there. So let's talk a little bit about how this was, well, we know how it was discovered: Marc Rogers posted a blog post, and then it literally exploded from there. But let's talk about, if you guys know, how this was mitigated. I know pretty shortly after Marc Rogers' blog post, Windows Defender, which is the anti-malware built into Windows, I think it's 7 and up or something like that, basically started recognizing it, marking it as malware, and taking it out. And when I was doing my research on this, I found a couple of CVEs that seemed related, or somewhat related, where there were other ways in which this progressed. And I think, particularly, I know that public pressure on Lenovo was a big part of that. So we can talk about that as well. Yeah, as I recall, Lenovo provided directions, and I think later some kind of little software utility, so users could easily point and click and remove this. As someone mentioned, it was not designed to be easily removed, so it was hard for a consumer to follow eight or nine steps or something like that. And I think it was Komodia that fixed the software by providing unique root CA certificates for every install, which is still a man in the middle, and it's still bundled, and unbeknownst to the user, but if you pop one certificate, you could only attack that one person. 
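Why one shared private key across a whole fleet is catastrophic can be shown with a toy model of certificate trust. This is not real TLS or X.509; a "certificate" here is just a hashed triple, and every name ("SuperRoot", the hostnames, the key strings) is hypothetical. It only illustrates the logic: extract the key from any one laptop, and you can mint a certificate that every affected laptop will trust, for any site.

```python
# Toy model of the shared-root-certificate failure described above. Not real TLS.
TRUST_STORE = {"SuperRoot"}  # the root pre-installed on every affected laptop

def sign(hostname, issuer, issuer_key):
    # Stand-in for a real digital signature.
    return hash((hostname, issuer, issuer_key))

def issue_cert(hostname, issuer, issuer_key):
    return {"host": hostname, "issuer": issuer, "sig": sign(hostname, issuer, issuer_key)}

def client_accepts(cert, hostname, issuer_keys):
    """The browser-side check: trusted issuer, matching host, valid signature."""
    return (cert["issuer"] in TRUST_STORE
            and cert["host"] == hostname
            and cert["sig"] == sign(cert["host"], cert["issuer"], issuer_keys[cert["issuer"]]))

# The SAME private key shipped on every machine, so extracting it from one
# laptop lets an attacker forge a trusted cert for any site, on all laptops:
leaked_key = "identical-key-on-every-laptop"
forged = issue_cert("bank.example", "SuperRoot", leaked_key)
print(client_accepts(forged, "bank.example", {"SuperRoot": leaked_key}))   # True

# A cert from an issuer outside the trust store is still rejected:
rogue = issue_cert("bank.example", "UnknownCA", "whatever")
print(rogue["issuer"] in TRUST_STORE)                                      # False
```

This also shows why the later per-install-key fix narrows the blast radius: with a unique key on each machine, a leaked key forges certs that only that one laptop trusts.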
Except there was a second vulnerability: if you browse to a website and you get a certificate error, your browser is supposed to tell you, "hey, this isn't the site you think you're going to," right? Their man in the middle, their client in the middle, didn't honor those kinds of errors, and would just pass on a good connection to the user. So there's a second layer of problem, in which, even after the fix, you could still attack people pretty easily. So this is exactly the type of pernicious software that really can only be discovered through security research. OEM manufacturers do try to do some security checks, and they run automated tools and whatnot in order to try and find security bugs, but you can only go so far with automated tooling. And this is exactly the kind of thing where, for example, when we are trying to protect security researchers and enable their ability to reverse engineer software like this, to find what it's actually doing, and to observe and to warn otherwise unaware consumers of the dangers of what's actually installed by the manufacturer, that is one of the underlying issues we need to make sure we preserve: the ability for security research to uncover these pernicious types of software that are violating security and privacy unbeknownst to the user, and without the user's permission or control. So I think, whoops, yeah, it's fine. Just throw it. That's where it belongs. So it seems to me, in doing the research on this, I actually went back and found a blog post from Komodia basically bragging about the abilities of their SSL interception software as far back as 2009. So it's not as if this ability was a secret, and I'm curious what about this caused it to blow up in Lenovo's face. Was it the one key across all the machines? Was it that it was ads, and we have sort of a shaky relationship with ads these days? 
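That second-layer bug, an intercepting proxy that swallows upstream certificate errors, can be sketched as a toy too. Everything here is hypothetical and illustrative; `upstream_cert_valid` stands in for real upstream TLS validation, and the point is only the control flow: the buggy version checks and then ignores the result, so the client always sees a clean, re-signed connection.

```python
# Sketch of an interception proxy that fails to honor upstream certificate errors.
# All names are hypothetical; this is the logic of the bug, not real networking.

def upstream_cert_valid(site: str) -> bool:
    # Stand-in for real upstream TLS certificate validation.
    return site != "attacker-posing-as-bank.example"

def buggy_proxy(site: str) -> str:
    """Validates the upstream cert... and then ignores the result entirely."""
    upstream_cert_valid(site)                   # result discarded: the bug
    return f"<html>content of {site}</html>"    # always re-signed as "good"

def fixed_proxy(site: str) -> str:
    """What it should do: refuse to relay a connection whose cert failed."""
    if not upstream_cert_valid(site):
        raise ConnectionError("upstream certificate invalid")
    return f"<html>content of {site}</html>"

# The buggy proxy happily serves an impostor's page with no warning to the user:
print(buggy_proxy("attacker-posing-as-bank.example"))
```

So even after the shared-key problem was patched, a user behind the buggy version could be handed an attacker's page dressed up as a valid, locked connection.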
Is there something else about this that made it especially horrible and made it blow up? I think it's that users are much more aware of surveillance in general, and this smacked of surveillance to them. Consumer awareness of privacy and surveillance issues can certainly be credited to the revelations of the summer of 2013 and Edward Snowden, but I think at this point, consumers wanting to protect their privacy and protect their security is a significant market driver, which is why you see mega-corporations at a standoff with law enforcement; Apple versus the FBI is the latest example of that. But without commercial viability, without users trusting your products, you've really got no ability to surveil anyway. So it's a mixed bag, but I think it's the user awareness, and that it smacked of surveillance against their will. And I would even, to swing this back to more of an Internet of Things debate: my personal opinion on the behavior of Superfish is that it was bad, it was an awful security practice to have one key across a whole fleet of devices, but it was deceptive, in my opinion. Think about that behavior on a larger scale, more from an open source perspective, where you'll have a vendor compiling a deliverable for somebody, an internet-connected refrigerator or some enterprise software, where you're picking and choosing software from all these different sources and not necessarily understanding how that software was made. That company may not have good software practices. The Superfish folks seem a little fishy. By selecting that vendor to bundle as part of your product, you're ultimately taking on ownership of that software. This is: we endorse this. 
So you'll see a lot of vendors in this Internet of Things space, where you have this drive to quickly get to market, where people aren't necessarily going through vendor management; they're not going through the appropriate due diligence to make sure each piece of software or hardware they're putting into a consumer good has been vetted, and that we understand how to get it fixed when it breaks. So one last question that I wanna pose before we throw it open to the audience, sort of as my prerogative here, is a little more open and blue-sky-ish, and that is: given what we might almost term the dual use of what Komodia is selling here, how does that affect the vulnerability process that you all live on a day-to-day basis? Is it harder to manage a vulnerability along these lines, because you have to look at it and say, this isn't just a bug that is clearly wrong, where someone just screwed up and we need to patch or fix it, but something a little more complex than that? Or isn't it? If it's black and white to you, that's a valid answer too. The end state is not quite the same. In a traditional, if there's such a thing anymore, disclosure process, there'd be a valid bug, and there'd be fixed software, and it would be well deployed to people in a very timely fashion, right? In this case, there wasn't that sort of finality of a fix and everyone's now safe. I'm sure there are a bunch of these old versions of Komodia still out there, and even the new versions still do the interception. So that's an odd twist on this. And just to add: we're talking about ad replacement, which no one's really a big fan of, but there's a lot of enterprise data loss prevention gear that does exactly this. It does man-in-the-middle SSL, because otherwise you can't watch what's leaving your network. So there are legitimate business and enterprise reasons to do this. 
Ostensibly, you work at that enterprise and you've signed something that says my network traffic can be monitored. So there are real reasons to do this. So there's no final answer on this one, because you either have SSL end to end or you don't, and there's no middle ground, fundamentally. TLS, sorry. Hopefully it's TLS, yeah. Yes, sorry. I said the old words. Not SSL; SSL's bad. I'm old enough to, yeah. I mean, I think the broader issue of supply chain vulnerability coordination is actually quite complex, and this case isn't necessarily the best illustrator of that. But if you think about the security of your mobile device: the end security of your mobile device does not depend on one vendor at all. If there is something in the firmware of one of the chips that's used by multiple manufacturers, you've got all of these coordination headaches and hurdles to overcome. If it's in the operating system, you still have to coordinate up and down the supply chain stack. And then finally, if you are to get a patch pushed out to the users, you need to coordinate with the carriers, whose business model is often to charge for data. So how do you actually manage that in that type of supply chain? How do you manage the vulnerability coordination? How do you manage testing of the patches? And how do you manage the distribution of those patches? And we have had smartphones essentially guiding us onto the internet since, what, at least 2007, when the iPhone came out; that was really the big burst of smartphones. We're at 2016 and we haven't fully solved that problem for that complex supply chain in that ecosystem of mobile devices, let alone all of the other devices in the world. So yes, vulnerability coordination across the supply chain: a very, very difficult and unsolved problem. This is an open problem in vulnerability coordination. 
And if it's a problem that you guys think is interesting, I definitely recommend getting engaged with a process that at least three of us on this stage are working with. There's an NTIA multistakeholder process going on right now that is tackling these and other really thorny questions in vulnerability management. So talk to Allan Friedman, who's here today somewhere, about that. There he is in the back, waving; everybody, point. And now he owes me that beer that he promised. Just kidding, because that would be wrong in some ethical way that I'm fairly certain of. Anyway, I wanna throw the floor open to you guys who have questions, and also, I know we've got someone monitoring the Twitter feed, so if there are questions coming in from Twitter, there's some there. And if you wouldn't mind waiting for the mic to come to you, so that the folks on the livestream can hear it too. So I think we had one over here. Yep, the lady over there, and then we can go to the gentleman in front of you. Thank you. You may have touched on this. We'll get that mic up. Is it on? Sounds better. It's on. There we go. Does anybody wanna talk about legal impediments to vulnerability... Can you speak up? Does anyone want to wade into the issue of legal impediments or challenges to vulnerability research? I know this is something that's sort of been talked about, but quietly. Can I get another 40 minutes? Cause that would be... No, so if I didn't do this panel, I was going to do that panel. Sure. While I am not a lawyer, I do advise a lot on regulating cybersecurity research and that kind of thing. I think for as long as we've had, in the United States, the Computer Fraud and Abuse Act and the Digital Millennium Copyright Act, they have been used as a silencing tool by vendors who really don't want to hear from the security research community. 
I think that while those legal tools were originally put in place intending, at the least, to make it a crime to go and do a WarGames-style hacking of the Department of Defense, et cetera, at this point we've moved past, as an industry, needing to use legal tools to discourage uncovering security vulnerabilities. In a lot of ways, these tools are used to silence the digital whistleblowers of our world, who are here to let you know that, in fact, the security of these devices isn't something you can just trust; you must actually verify, and these security researchers need to be allowed to do so. So I think the chilling effect on security research is more dangerous, not just to the American public but to the world, than these laws have really allowed for. So working with others in many different coalitions and spaces to get some of these issues addressed is one of the reasons why I've been falling on this policy grenade for as long as I have, having been a former hacker myself. So I think it is very, very important, and we're not going to be able to move forward without addressing some of these large gaps in legislation that don't really provide enough protection for individual security researchers. Any others? I just said "policy grenade." I did. You did? I heard it. I thought there was a, okay, same comment. We got one from Twitter? Mic, come in. Oh yeah, there we go. Okay, so this question is from Twitter, and it says: to what extent are complex vulnerabilities like you've discussed today the norm, or are they black swans? Important but rare. To what extent, what? I think it was: are they complicated? Common or rare? To what extent are these complex vulnerabilities rare or common? No, none of these are rare. My God, we've run out of CVE numbers. None of these are rare. I think the question was more about really complex ones, like the Jeep hack, or are they sort of more bog-standard, common ones? 
No, this is the new normal. It's common. We have so many things on the internet, and there are only going to be more of these things. You're doing your thing. Yeah, no, this is the new normal. Yeah. Right, getting back to supply chain as an example of this: if you don't know what you put in your box, and I don't know what's in the box that I'm using at home, you can't even have a chance to fix it. So even basic things like inventory, bills of materials, knowing what you're running, knowing OpenSSL is in there, knowing that D-Bus is in there, for the Jeep thing. If you don't know those things, you're never going to be able to patch them in the first place. And even the producers of these things don't know that. Yeah, for most of the new vendors who are now suddenly putting software on their devices and introducing internet connectivity, the smart thing to do is to use somebody else's code. So use open source libraries, ideally written by people who know what they're doing. I always say, if you're going to roll your own crypto, for God's sake, don't smoke it. So not rolling your own cryptographic solutions is a smart thing to do. However, our dependence on libraries like OpenSSL makes the entire world vulnerable when there's a critical vulnerability found. And it's that idea of trying to get that coordination working properly, and trying to make sure that, once you patch the library, you still actually have to recompile the code using that library. So it's a multi-step thing to protect people when a complex vulnerability, or a complex disclosure like a Heartbleed type of disclosure, happens: you have to coordinate not just creating the patch, but getting the patch to the folks who are going to recompile their code with the fixed library, and then raising the awareness publicly for everyone else to go ahead and catch up and do it. 
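The bill-of-materials point above is easy to make concrete: if each product ships with an inventory of its components, answering "who is affected by this library bug?" becomes a simple lookup. The products, component names, and versions below are all made up for illustration; real SBOMs use structured formats like CycloneDX or SPDX, but the query is the same shape.

```python
# Minimal sketch of using software bills of materials to find affected products.
# All product names, components, and versions are hypothetical.

SBOMS = {
    "smart-fridge": {"openssl": "1.0.1f", "busybox": "1.24"},
    "baby-monitor": {"openssl": "1.0.2h", "dbus": "1.10"},
    "router":       {"openssl": "1.0.1f", "dbus": "1.8"},
}

# A hypothetical advisory naming one component and its vulnerable versions:
ADVISORY = {"component": "openssl", "vulnerable": {"1.0.1f"}}

def affected_products(sboms, advisory):
    """Return the products that ship a vulnerable version of the named component."""
    return sorted(
        name for name, parts in sboms.items()
        if parts.get(advisory["component"]) in advisory["vulnerable"]
    )

print(affected_products(SBOMS, ADVISORY))  # ['router', 'smart-fridge']
```

Without that inventory, as the panelist says, the vendor cannot even begin the multi-step work that follows: patching the library, rebuilding the products that embed it, and pushing the result out.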
Yes, this is the new normal. And as people incorporate these libraries, not necessarily knowing what they've incorporated, using older and outdated versions because that's what they were testing on when they developed their thing, all of this is going to be a bigger and bigger problem. Well, and then it rolls into: you get the patch, you apply the patch, you build your new software. Well, if your thing is a couple of years old, then you run the risk of, okay, you push this out; let's say you're even really good and you have an OTA mechanism: what percentage of your population is going to get bricked, and what kind of customer satisfaction issues are you going to have as a result of that? So, yeah. Well, if it's an embedded device, can you even effectively patch it? And just from an open source perspective, we've seen about six to eight of these big problems a year, and I expect that only to grow. Yeah, if you brick a pacemaker, you brick a human, and this is not an acceptable outcome. On that note, I'm really terrible, I think that's a perfect place to just put a pin in it. What do you guys think? Absolutely. But thank you guys very much, and if you could join me in thanking our panelists for taking the time to talk to us today.