Hello, welcome to my talk, "IoT Fails: Learning From Sex Toys How Not to Suck or Blow." So who am I? I am Render Man, a very proud Canadian hacker, researcher, security and penetration tester, and founder and researcher for the Internet of Dongs project, basically where we test the security of internet-connected sex toys. I am Pope of the Church of WiFi, and our team is the two-time champion of Hacker Jeopardy as of this year. I have been to every DEF CON since 1999, 2020 notwithstanding of course, and look forward to continuing to do so in the future. So I do want to be serious: this will be a discussion that involves sex toys and other intimate devices. This means some language may be considered crude by some, possibly offensive to others. If you are likely to be offended, then I suggest you don't watch this video. I do try and keep things PG rated, but some items have a name that could be considered offensive to some. These are legal devices sold worldwide. Any humor that may be injected is aimed at aspects of the devices themselves, in no way making fun of the users or their lifestyles or any aspect thereof. The people who use and buy these products deserve the same safety, security and privacy as any other consumer of IoT devices. So IoT security in general is a dumpster fire. It's almost an oxymoron. Every week some new device has a hard-coded password or leaks personally identifiable information everywhere. We are in a day and age where a coffee maker is a threat to the command and control systems for a chemical plant. Mistakes are being made in IoT that were made 15 years ago, stuff we thought we dealt with, thought that we were past. We know how to solve these things, but they're coming back. Things like default passwords should be something that's long past. There's very little discussion about the roots of the problem, as I see them anyways. Things need to change in this industry before people start getting hurt. 
First, let's build a car. Obviously not a real car, but more as a metaphor, because this is something a lot more people are familiar with. A car should have four wheels, steering wheel, seats, engine, brakes, doors, windshield, mirrors, all the usual things. If we were handing this as a design to someone, we could say, okay, the wheels need to be in the corners, engine in the front or the back depending on the model, seats are inside, not on the roof, windshield in the front, the usual kind of thing that delineates a car. So the designers go off, design and build a prototype of the car, run it around a track a bunch of times, and yep, everything works as it was requested to be built. Okay, so they approve it for sale. And then they find that after 5,000 kilometers, every car explodes, killing all the occupants. Obviously, this is a bit of a design flaw. But you hear arguments like, well, you didn't specify the car shouldn't explode after 5,000 kilometers. If this was IoT, they'd say, well, the user wasn't operating it like we expected them to; we never assumed that they would try to go faster than the speed limit. Or, well, it worked because it was designed to meet the acceptance criteria on the test track, because those were the only provisions we had; we only had very narrow time on the test track, so we only tested it to 100 kilometers, not 5,000. Or things like, it met nearly every non-existent regulation for the industry. Obviously cars are a lot more regulated than IoT; you can see these kinds of arguments wouldn't necessarily pass in the automotive industry. Right now, we seem to be building a lot of Homers. If you're familiar with The Simpsons, there's a wonderful episode in Season 2 where Homer is asked to design the car for the everyman. And he puts every crazy idea he has out on the table, and the designers dutifully say, yep, yes, okay, yep, we'll do that, no matter how crazy it is. No pushback. 
So they take this crazy design and make this car, which is a complete monstrosity that ends up actually destroying the company. You see this with a lot of IoT situations. It's like, well, you didn't specify that it had to be secure in the design documents. Really, why should that be necessary to specify? Or, you said it should be secure, but we have no idea how to actually check that, or what criteria to use, or anything like that. Or, previously we only built devices that didn't need to be secure, or it wasn't a priority. Or, we thought it was secure; we got an SSL certificate, but we don't know how to use it and install it. Or, your design specs didn't say we couldn't just store everything in plain text in an online bucket with no password. Again, why should this be something you have to specify? Or, we always assumed the user would always have connectivity and the servers would never go down. That last one is something that's caused a great many issues: the assumption of always-on internet connectivity. Things like an Amazon outage or the client's local internet connection going down should not be showstoppers or put people at risk. I've spoken to many Internet of Things and Internet of Dongs vendors, and they always say the same thing: the privacy and security of our customers is of the utmost importance. This is the infosec equivalent of "our thoughts and prayers are with the victims." A platitude that does nothing. And "it should be secure" should be the default for any design. Why should it be necessary to specify? Part of the problem is that many companies will see the headlines about IoT hacks. They'll see Amazon buckets that are wide open, leaking all their customers' information, but they don't realize that they're an IoT vendor, that they're now in the same ballpark as a thermostat, a baby monitor, a refrigerator. A refrigerator manufacturer might say, well, we build appliances, we're not an IoT company. It's like, well, yes, you are. 
And because I've had so many of these conversations where I've brought literally their entire user database to them and said, hey, you have a problem, it's amazing to watch the change that happens after just one conversation when you point out: you're a software company now. Because in the Internet of Dongs, with adult intimate devices, most of them went from a hardware device maker to a software and service provider pretty much overnight, and they never realized it. They may have had good people on staff for material engineering, design, some electrical, but never anything networked. They never had to deal with the problems of connectivity. One of my favorite sayings is that on the Internet, you're 100 milliseconds away from every jackass on the planet. Meaning, if you connect a device to the Internet, somebody somewhere is going to be poking and prodding at it. And that changes the threat dynamics dramatically. A previous analog device that just vibrated or buzzed or whatever didn't have the possibility of somebody remotely reaching out and connecting to it and causing it to do things that maybe it wasn't meant to. Many vendors just rolled their own systems. They didn't use available frameworks that had been vetted and trusted. They thought they knew how to implement an API, and for some, they did a good job. Others, not so much. Many companies will go to a third party and say, hey, we want a device that does this, this, this, and this. And the company will dutifully design and build to that spec, but they don't go to DEF CON. They don't hang out with people like us. They don't know what questions to ask. So they make assumptions like, it's secure because, well, shouldn't it be? Or they ask the company, well, is it secure? They don't know to ask: are all the outside connections SSL/TLS encrypted? What versions of TLS are you using? Those sorts of things. They don't know what they don't know. Much of my IoT work has been with the Internet of Dongs project. 
It was started basically because I wanted to learn mobile security and web app and API security; at the time I was being given a bunch of projects for that. And most of them were fairly secure, and I needed something with some, frankly, low-hanging fruit to kind of cut my teeth on. Many of these devices and apps have text, voice, video, and other features. Other devices will sync with video and even allow bi-directional control from the devices themselves. This is rather amazing. I mean, you have these apps that are essentially a new communications platform, like Skype or Google Chat or anything like that. But it's being run by companies that don't realize this, or they've outsourced so much of it that it's being run by people who don't have a vested interest in it. And it's a very fascinating field of study, especially in these socially distant times. Relationships are suffering, and this is a way for people to stay close. These devices have a unique set of challenges because they are so very intimate. The privacy and security expectations from the users are much, much higher, and there are some very unique challenges too. So it's something that a lot of people should be paying more attention to. No one else was seriously looking at this when I started the project in 2016, and I have no embarrassment in pursuing this; I'm actually very proud of the work I've done. Because the things that I have found have been terrible and terrifying. A lot of devices will give data to the vendor, telemetry, stuff like that. How many times your connected fridge is being opened each day may not mean a lot to people. Some people are like, okay, maybe I snack a lot and open the fridge too often. But how many times a day you use your sex toy, that means a lot more to people than their fridge does. Cultural implications, taboos and shame can have severe consequences. 
In the Ashley Madison breach, there were many people who were found to be customers of that site who committed suicide because of the shame they felt in being caught. Most of these devices are a Bluetooth device connected to a mobile app and some sort of web API. There are obviously exceptions, but almost all of them use common off-the-shelf chipsets from Nordic, TI, whoever. So essentially, except for the wrapping, these devices are the same as any other IoT product, be it appliances, children's toys, you name it. So it's a very interesting way of looking at the industry through this frankly under-observed branch. Normally, I follow a policy of coordinated disclosure with the vendors. I've established a good relationship with many. I've helped many of them set up a vulnerability disclosure program. Most of them didn't realize that people would ever try to reprogram their devices or hack them in some way, and they had no idea how to take in vulnerability reports or how to deal with them. It's a very simple step that I'm glad to see most have done. This is also one industry where most vendors I've found are genuinely naive about security. It's not a case of they don't care or they're cheap or something like that; they just did not know. And I've been glad to see that most have stepped up when they realized what the problem was, to educate themselves and make things better. But there are always exceptions. So the first device I want to talk about is the CockCam. This is one of the crudely named devices, but that is its brand name. This is a genital-mounted webcam, make of that what you will. It was founded by a group of guys in the UK who came up with this crazy idea. I would not be surprised if a few beers were involved. I managed to get a hold of an early production version in 2018, and things got very interesting very quickly. 
So this device is a Wi-Fi enabled camera, and by default it creates its own access point that the mobile app then connects to in order to configure the device. One of those settings is to connect to an external network. That's kind of scary. Because it now has an IP address, I ran an Nmap scan on it, and it was identified as a D-Link 932L webcam. You can see the pictures there on the slide, where you can kind of see the similar hardware being used under the surface. It's got microSD card support for storing the videos, but it will also stream to the mobile app for recording. It's essentially an ARM system-on-a-chip running BusyBox Linux. So I can legitimately say I got root on a CockCam, continuing to make my mother proud. I found a lot of common IoT issues carried over to this device, but when you start thinking about it in terms of this being an adult device, and the sort of content that it's being used to generate, the implications get a lot more serious. There was an FTP server enabled on it that allowed anonymous access with no password, as root. This account had full read and write permissions to the entire file system, literally the worst-case scenario for this kind of issue. With this, you could overwrite the password hash, and then also log in via telnet, because you now know the password that you set the hash to. The streaming server component of the firmware had credentials to the network and other things in a plain text file, including passwords to an update server that wasn't necessarily for the CockCam product. This was interesting because it showed that there were a whole bunch of other vendors' components in there that hadn't been sanitized. So essentially, anyone on the network could download any stored videos from the SD card or connect and stream from this device. The camera firmware contained fragments of other companies' software and functionality. And this is essentially, as I said, a D-Link 932L network webcam. 
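That password-hash overwrite is worth spelling out, because it's why anonymous FTP write access as root is the worst case. Here's a minimal sketch, assuming a BusyBox-style /etc/shadow; the hash constant, the file contents, and the in-memory string standing in for the real filesystem are all made up for illustration.

```python
# Hypothetical reconstruction of the CockCam FTP flaw: with anonymous
# read/write as root, an attacker pulls /etc/shadow, swaps in a hash
# they control, pushes it back, then logs in over telnet with the
# password they chose. The "shadow file" below is a stand-in string.

# An attacker-chosen hash in crypt(3)-style MD5 format (illustrative
# value, not a real credential).
ATTACKER_HASH = "$1$abcdefgh$0Z8qPAbE1cZkGY0WBLu0w1"

def overwrite_root_hash(shadow_text: str, new_hash: str) -> str:
    """Replace root's password hash in an /etc/shadow-style file."""
    out = []
    for line in shadow_text.splitlines():
        fields = line.split(":")
        if fields[0] == "root":
            fields[1] = new_hash  # attacker-controlled hash
        out.append(":".join(fields))
    return "\n".join(out)

# Simulated shadow file, as if pulled over the anonymous FTP session.
shadow = ("root:$1$original$lockedhashvalue:18000:0:99999:7:::\n"
          "daemon:*:18000:0:99999:7:::")

patched = overwrite_root_hash(shadow, ATTACKER_HASH)
# Uploading `patched` back over the same anonymous session would let
# the attacker telnet in as root with their chosen password.
print(patched.splitlines()[0])
```

The point isn't the specific format; it's that write access to the filesystem is total compromise, so "anonymous FTP as root" collapses every other control on the device.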
It had the IP camera web interface still in it, if you knew where to look. And that even allowed you to use it to bridge into a Skype session. Not sure I would want to take that phone call, but so be it. And the app was just so full of unnecessary libraries and fragments of other companies' software, including from one previous project, a remote video doorbell. They had left the MP3 files for the doorbell sounds in the app. So there were ding-dong sounds in the app for a CockCam. This is providing me a lifetime of ding-dong jokes that I can legitimately make with this research. Yet another wonderful reason I do this research. So reusing the hardware wasn't necessarily a bad idea. It's sensible, it saves time, saves money, if it's done right. If it's not done right, though, it can actually be a big problem. In this case, the vendor should have had a master repository of all the features that this hardware could do, and built in just the ones they needed, not everything and the kitchen sink. Because the more functionality and complexity you build in, the more things can go wrong. You're going to spend a lot more time debugging it. You may have some performance problems, perhaps. It just makes more sense to minimize the things that can go wrong. In this context, with this device, connecting to an external network was built in, but not appropriate for the device. If it had been just its own access point, you connected to it with the app, and it streamed to the mobile device, then what you do with the videos after that is your problem. But the fact that it connects to an external network, you can see situations where this thing could theoretically be broadcasting out to the internet at large, or to everyone on a public network, if you're using this in a shared environment like that. For the vendor, dealing with their designer and manufacturer was kind of a nightmare. They essentially got railroaded into this design. 
They went to them with their spec sheet of what they wanted, and the manufacturer was like, we'll just take this thing we've already got built, slap a new skin on it, rename the thing, and call it a day. That's not a good way to do things. There's a thing in startup culture right now of the minimum viable product, which means just barely get it to work to where the customer will want to buy it, and then ship it; we'll fix it later. The problem is this also means minimum viable security is often what ends up getting shipped as well. And this is a problem, because if you don't have things like an update path or a way to alert users to update, are you ever going to get these things fixed? It may be cheaper, but it will come back to haunt you, be it bad press or, depending on the type of device, lawsuits. Vendors of IoT and IoD need to spot these designers and manufacturers that push and railroad them into designs that don't take their customers' best interests into mind, and stop using them. We need to talk amongst each other and say, hey, this manufacturer turns out a lot of crappy products, let's stop using them. Let others know, because there are always new people entering the market with an idea, and that's fine, but they need to know that what may be the cheapest vendor has problems. We almost need like a Yelp for these kinds of manufacturers. And manufacturers are often not concerned about things like updates, because there's no money to be made in updates: they sell the hardware and ship the app with it, and then they say, well, why should we add features or fix bugs? People will keep buying this thing and we have no financial interest in doing that. We're going to move on to the next customer. That's scary and dangerous. And because things like sex carry cultural taboos, an insecure adult product could destroy a small company with what it costs to settle a lawsuit. 
We-Vibe, which is not a small company, had an issue with the privacy policy in their app that actually cost them $5 million in a class-action settlement, not a small chunk of change. Failure to understand what you're building can lead to physical harm as well. The cock cage calamity: a good reason to use chicken coops in the talk. So recently, in coordination with the company Pen Test Partners, we had to drop essentially a 0-day on an IoD product. We didn't actually drop any code or anything, because as soon as you looked, you understood the problem, and it was very, very easy to replicate. I never really wanted to do this to any of the vendors, because I've always found them very good, but our hand was kind of forced. Bad things did end up happening from this disclosure, unfortunately. But we had no choice, because even worse things would happen in a vacuum if we didn't alert the public; at least this way the company was aware of the problem we were going to announce and had a heads-up before bad things started to happen. This is a very good opportunity to learn what not to do when someone reports a vulnerability to your company. The device in question is the Cellmate. It's essentially a male chastity device that locks around the genitals and is controlled via Bluetooth. A mobile app connects to a REST API backend. The idea is that the wearer has the app on their phone paired with the device, but they cannot control it. A partner is able to decide when to unlock and release the genitals. It's part of a power-play thing. All the functions for this, to unlock, lock, et cetera, require API access, and that becomes important later. An initial look at the API was very alarming: queries for, you know, member ID information. If you gave it a blank search string, that came up as a wildcard in the database and it gave you everything. Absolutely everything about every user. 
Emails, passwords in clear text, phone numbers, location data in some cases. When I first looked at this device, this scared me, but due to connection issues from my home internet connection, I never followed up right away, and that particular vulnerability was fixed fairly quickly afterwards when I did check it again. Life sometimes gets in the way of research, but it was on my radar; it was a device I was aware of. In May of 2020, I was contacted by Mike, who had found some issues in the app and the API. Now, they weren't a security researcher, they were a developer, and they wanted to do the right thing and report this. They knew enough to say: this isn't right, this is a problem, this is bad coding, I want them to fix this. They knew of the Internet of Dongs project, so they reached out and asked if I would contact QIUI, the vendor that makes the product, and act as a proxy for them to report this vulnerability. This is something I've actually done a couple of times with the Internet of Dongs project. What they had found was that plain text passwords were exposed, there was poor or no authentication on certain API calls, and with enough digging, the API would still expose all user details and enough information for remote takeover of accounts, and thereby control of the associated devices. I took their report, did some light editing, and passed it on to QIUI, where I had established contact with the CEO. I provided them the report and offered to explain things or go into detail if they needed. I didn't ask for anything, didn't need any pay or contracts or anything. And then we waited and waited, reached out, got assurances, yes, yes, we're working on it, and continued to wait. A few small changes had been made by June 2020, but the whole thing was still very broken. August rolls around and we're not encouraged. There were no replies at this point. 
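That blank-search wildcard is a whole class of bug. Here's a minimal sketch, with made-up records and function names, of how an empty query can match every row when an API does substring matching with no authorization check, and what a safer version of the same endpoint looks like:

```python
# Illustrative only: an in-memory "user table" standing in for the
# real database, and two versions of a member-search endpoint.

USERS = [
    {"member_id": "1001", "email": "a@example.com", "password": "hunter2"},
    {"member_id": "1002", "email": "b@example.com", "password": "letmein"},
]

def search_members_vulnerable(query: str):
    # "" is a substring of every string, so a blank search string acts
    # as a wildcard and dumps the entire user table, sensitive fields
    # and all. Nothing checks who is asking.
    return [u for u in USERS if query in u["member_id"]]

def search_members_fixed(query: str):
    # Require an exact, non-empty ID, and return only non-sensitive
    # fields. (A real fix would also authenticate the caller.)
    if not query:
        return []
    return [{"member_id": u["member_id"]}
            for u in USERS if u["member_id"] == query]

leak = search_members_vulnerable("")  # every user, passwords included
safe = search_members_fixed("")       # nothing without a real ID
```

The same pattern shows up whether the backend is SQL (`LIKE '%%'`), a NoSQL filter, or a hand-rolled loop like this one: if an empty filter means "match everything," the API is one blank field away from a full database dump.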
We were inquiring as to what the status was, and we weren't getting any response, no updates, no progress that we could see. In September, Alex Lomas of Pen Test Partners tweeted out his frustration at the state of some teledildonic device security, thinking that it should be up there with healthcare and banking considering the importance to some people, and he was just very frustrated with this particular disclosure he was working on. The frustration level sounded very familiar. So I reached out, and Alex confirmed: yes, it's QIUI. I said, okay, we have been trying to do disclosure since May, we need to talk. So I got on a call with them and confirmed that they had found a lot of the same things. We compared notes and decided, okay, we need to get a joint message to this company. Because now two independent researchers had found the same issues, and later on a third person actually emerged who had also found these things. So it wouldn't be long before someone with negative and bad intentions would find this. And this was over a span of six months that we had been reporting this, encouraging the company and offering help, and they had done practically nothing. We put together a combined communication to the company saying that we all knew, we had all been talking, and they very much needed to deal with these issues quickly. More assurances were made that a new API was coming. Well, that new API was a bit of an improvement, but because of backwards compatibility issues, they'd left the old insecure API still running. And there is a major design flaw in the device and the app, because the device relied on the API to generate a device-specific token to issue the unlock command over Bluetooth. The device itself had a unique key hard-coded into it that the app would take, send to the API, and then receive a response token that would allow for unlock. There was no mechanical or emergency unlock for this device. 
For something that is basically clamped around the genitals, involving a steel ring, getting this thing off would basically require bolt cutters, in a place one typically doesn't want to have bolt cutters. So if there was no API connectivity, or account access was lost, or the person's mobile device broke, or whatever, you couldn't get the unlock token. There was no way to release the device, even in an emergency. This concerned us greatly, because beyond the personal information disclosures, we were very urgent in our warnings because they were about to release a couple of new devices that scared us. One was an anal chastity device modeled after a medieval torture device called the Pear of Anguish. Another was a remote-controlled shock collar, so that you could shock someone over the internet via a Bluetooth-enabled shock collar, basically. One can imagine the sort of situations that could occur if somebody got remote access to their API backend and could, say, issue shocks to everyone all at once. What if they found a way to up the voltage? This is scary stuff, because in their design, partially due to the communities these devices are marketed to, they didn't design for failure or emergencies. Which is interesting, because within the BDSM community that would be a fan of these devices, safety is very much a key component of their activities. A manual release method may be counterproductive to the purpose of the device for some people, but emergencies happen and that needs to be considered. So in September of 2020, with no response, no plan, a very beta-test app that barely worked, many of the same API issues still existing even in the new API, and the old API still not shut down, we basically threw down the gauntlet and said: you have a week to come up with a workable plan to fix these issues. 
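The unlock dependency described above can be sketched in a few lines. This is a hypothetical model, not QIUI's actual protocol: the token scheme, names, and keys are invented. The point it illustrates is that when the only path to an unlock token runs through the vendor's API, any outage, account lockout, or server shutdown leaves the wearer stuck.

```python
# A hedged sketch of a cloud-gated unlock design with no offline
# fallback. All names and the token derivation are illustrative.

import hashlib

class VendorAPI:
    """Stand-in for the cloud API that mints unlock tokens."""
    def __init__(self, available: bool = True):
        self.available = available

    def get_unlock_token(self, device_key: str):
        if not self.available:
            return None  # outage, ransomed account, company gone...
        return hashlib.sha256(("unlock:" + device_key).encode()).hexdigest()

class ChastityDevice:
    def __init__(self, device_key: str):
        self.device_key = device_key
        self.locked = True

    def try_unlock(self, token):
        # The device only accepts the server-derived token; there is
        # no local or mechanical override in this design.
        expected = hashlib.sha256(
            ("unlock:" + self.device_key).encode()).hexdigest()
        if token == expected:
            self.locked = False
        return not self.locked

# Happy path: API up, token issued, device unlocks.
device = ChastityDevice("hardcoded-key-123")
online = VendorAPI(available=True)
device.try_unlock(online.get_unlock_token(device.device_key))

# Failure path: API down, and nothing else can produce a valid token.
stuck = ChastityDevice("hardcoded-key-456")
offline = VendorAPI(available=False)
stuck.try_unlock(offline.get_unlock_token(stuck.device_key))
```

A safer design would keep a local emergency-release path (mechanical or on-device) that works regardless of server state, which is exactly what this product lacked.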
We weren't specifying that they had to have a full timeline, but they had to show that they were actively working on this, and when they would be able to shut down the old API. We were more than willing to give them assistance and more time if they had a plan. They never responded. So on October 6th, Pen Test Partners and I released information about our struggles to report the issues. And the problem is that when you shine a light on something like this, other people can look, and because it was so easy for us to find, it was so easy for them to find. We didn't release any technical details, but it wasn't hard to find. And then all hell broke loose. One fun thing, though, was we made the BBC chyron, the text at the bottom of BBC World. That's sort of a new achievement. But then users on their forums started getting messages that they were locked out of their accounts, demanding Bitcoin in order to restore access. We now live in a day and age of sex toy ransomware. Yeah, I had the same kind of thought. It didn't take long for people to reverse engineer those old apps and, using the old API, discover the same things. They basically just iterated the user IDs, ran a standard set of commands to transfer control of the device to the attacker, and then automatically sent a message demanding Bitcoin. The user forums were freaking out. And that was interesting, because some people were very incensed at being locked out and angry at the company or whoever was doing this. Other people were kind of titillated at the idea of being locked out of their devices permanently. Again, we don't judge, but it was a serious issue when you really get down to it. And this sort of thing had to happen with some device sooner or later. It's just unfortunate that we couldn't get them to fix it before something bad had to happen to people. 
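The attack really did boil down to a loop. Here's a hedged sketch of that pattern, with a plain dictionary standing in for the real API and invented account data; no real endpoints or payloads are shown, just the shape of "iterate guessable IDs, take over, send the note":

```python
# Illustrative reconstruction of the takeover pattern: sequential
# integer user IDs plus a leaky API means exploitation is a for loop.

accounts = {
    1: {"password": "pw1", "owner": "user1", "inbox": []},
    2: {"password": "pw2", "owner": "user2", "inbox": []},
    3: {"password": "pw3", "owner": "user3", "inbox": []},
}

RANSOM_NOTE = "Your device is locked. Pay 0.02 BTC to regain control."

def pwn_all(api: dict, attacker: str):
    for user_id in sorted(api):       # IDs are guessable integers
        acct = api[user_id]
        _ = acct["password"]          # leaked in cleartext by the API
        acct["owner"] = attacker      # transfer control of the device
        acct["inbox"].append(RANSOM_NOTE)

pwn_all(accounts, attacker="mallory")
```

Two design choices make this possible: sequential, enumerable user IDs, and API calls that hand out credentials and control without authenticating the caller. Fixing either one breaks the loop.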
QIUI put out a message with an escape method should people find themselves locked out, which involved prying with a screwdriver. Again, not something that I was particularly comfortable recommending to people. Pretty much every escape methodology required damaging or destroying the device, and these are not cheap devices. They very quickly put out some app updates, but they failed to take down the old API because, again, until everyone had been moved to the new API, they couldn't shut down the old one. They had literally painted themselves into a corner. Eventually they realized where the source of their problem was and basically just issued an unlock to everyone, and made it so that the new app could unlock any of the devices. This gave people the ability to still get out even if the API shut down, which is something that should have been there from day one. As you can see in their message, they still required you to sideload the app, meaning you probably wouldn't be getting automatic updates like you would from the Google Play Store or any of the other app stores. That's a whole other talk in itself, the problems there. The new versions were definitely better in security, but still far, far from where one would hope they would be. This is a company that just left their entire infrastructure open to the world. And one of their solutions was to start doing real-name authentication, where they required, you know, a driver's license or ID card or something, and pictures thereof, to sign up for this app. This made no sense, because if somebody is going to do something bad, they're going to find a way around this, and now you've just created even more of a treasure trove for them. There's an idea in security: collect as little information as you have to about your customers, because then you don't have to protect as much, particularly with things like sex toys. 
If you don't collect it, then you don't have to worry about it being lost. This makes perfect sense. And this past January, VX-Underground, who collect malware samples, put out this rather amusing tweet saying: we'd like to uncomfortably announce we've received the source code to IoT ransomware that targets male chastity devices. Reading through the code was actually rather interesting, because it was literally just: iterate through user IDs, get their password through the API call, log in, change the ownership, send a message. It was ridiculously simple. It's not ransomware per se, because it wasn't actually encrypting the device or anything; it was just changing a password on an account. But this sort of attack is certainly not going to be the last one on any of these kinds of platforms. So it was interesting to see. One last IoT fail I wanted to discuss is not necessarily a sex toy; it was a safety product, which makes the danger even greater. In 2017, I noticed an app in the Google Play Store advertising a Bluetooth-enabled piece of jewelry. This was called the Ivy. It was a brooch or a bracelet that was Bluetooth-enabled, and the large stone you see there was basically a large button. So you could tap it once, twice, three times. It was paired with an app that would send a text or a voice alert if you felt in danger or something. So you could basically be at a bar with friends and have it send a message to your friend saying, hey, come rescue me, this person that's talking to me is really boring, come get me. Kind of a good idea. Bluetooth LE, mobile app, web API backend, pretty standard for IoT stuff. I grabbed the app just out of curiosity and took it apart to see what made it tick. I really wish I hadn't. Because within two minutes of reverse engineering the app, I was looking at personally identifying information: names, addresses, email addresses, phone numbers, the works. There was a hard-coded IP for their API server in it. 
When you visited that IP, every directory was indexed, with no authentication needed. It showed you every file on that server. You could use their PHP web service to iterate through user IDs and pull up all the user account names. You could see their usage history, how many times it had been used, who it had called, and pull down all their profile photos. You could even see all the alert messages that users had recorded to be sent in case of emergency. There were some in there that were not pleasant, shall we say. One of the more interesting discoveries in one of these directories was an iOS developer private key. This is what was used to sign their iOS app before it went to the Apple store. I could sign a new version of their app with this key and it would be accepted by Apple. That's terrifying. There were backup files for pretty much every configuration file on the server, including the root MySQL password, and a .git folder in the web root that had credentials to the Git repo with all their hard-coded secrets and everything. These credentials also appeared to have push access to the repo, so I could make changes. phpMyAdmin was there, barely configured, but we had the password, so I could even give myself access to the database with a nice GUI front end. And there were text files lying around for other projects by the same company. And this was only after 20 minutes of work. It was so incredibly simple it was scary, and this is a safety device, and they leaked everything. They did zero for security. I tried many, many times to get a hold of this company. Never a response, no fixes. Fortunately, as of early 2019, this is no longer on sale. It's still listed on Amazon and such, but it's just sold out, and the website for it is no longer there. This is the sort of thing that easily gets someone killed. You could redirect alerts if someone's being stalked; this could leak all sorts of other information. 
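Open directory listings like the ones on the Ivy's server are trivial to spot, which is part of why this took minutes rather than days. A small illustrative check, looking for the marker strings common web servers put in auto-generated index pages; the sample HTML here is invented, and a real scan would fetch each candidate path over HTTP first.

```python
# Sketch: detect a server-generated directory index from its HTML.
# Markers below match the typical Apache/nginx autoindex output; any
# real tool would also handle other server styles.

INDEX_MARKERS = ("<title>Index of /", "Parent Directory")

def looks_like_open_listing(html: str) -> bool:
    """Heuristic: does this page look like an exposed file listing?"""
    return any(marker in html for marker in INDEX_MARKERS)

# Invented sample responses for illustration.
apache_style = "<html><head><title>Index of /backups</title></head>..."
normal_page = "<html><head><title>Ivy - stay safe</title></head>..."

exposed = looks_like_open_listing(apache_style)   # True: listing leaked
fine = looks_like_open_listing(normal_page)       # False: ordinary page
```

The fix on the server side is equally simple: disable autoindexing (e.g. Apache's `Options -Indexes`) and keep backups, `.git` folders, and keys out of the web root entirely.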
This is probably the worst IoT fail I've found so far, and the sort of thing that keeps me up at night. Oh, and this company also makes medical equipment. Let's hope they're not recycling code for that. I mean, it was things like blood pressure monitors, non-invasive stuff. But still, it's terrifying that the same company making medical devices is doing something like this. It wasn't just a bad idea, it was the worst implementation possible.

But there is help and hope. Automotive regulation, as in our building-a-car example earlier, has a lot more behind it, but we're just starting to see some regulation and advice for IoT. In 2018, the UK put out its Code of Practice for Consumer IoT Security, and it was really simple, basic things. Remember I said that we've been dealing with these kinds of problems for 15 years and we've solved most of them. Things like: no default passwords. Have a vulnerability disclosure policy. Have a method for secure updates. Store credentials securely. Make sure your communications channels are secure. Least privilege, least attack surface; don't give everybody access to everything. Verify updates and software integrity, so you can see if something's been tampered with. Protect personal information. Australia in 2020 released its own code of practice for securing IoT, and not by accident it's basically the same as the UK one, because a good idea is a good idea and they just decided to keep going with it. And amongst a lot of other news going on, something that was missed was that in December of 2020, the US signed the IoT Cybersecurity Improvement Act. This bill requires NIST and OMB to take specified steps to increase cybersecurity for Internet of Things devices. Basically, NIST has been asked to come up with standards, and the US government will require those standards to be met for any device that it buys.
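One item from those codes of practice, verifying update integrity before installing, takes only a few lines. A hedged sketch: a real device would use asymmetric signatures (for example Ed25519) so the signing key never ships on the device; HMAC with a provisioned key just keeps this example dependency-free, and the function names are illustrative, not from any standard:

```python
import hmac
import hashlib

def sign_update(firmware: bytes, key: bytes) -> bytes:
    """Produce an integrity tag for a firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def apply_update(firmware: bytes, tag: bytes, key: bytes) -> bool:
    """Install only if the tag verifies. compare_digest is a
    constant-time comparison, so the check doesn't leak how many
    bytes of the tag matched."""
    if not hmac.compare_digest(sign_update(firmware, key), tag):
        return False  # tampered or corrupted image: refuse to flash
    # ...flash the verified image here...
    return True
```

The point of the code-of-practice item is exactly this shape: a single refuse-by-default check between "update received" and "update installed".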
What those standards are remains to be determined, but I think we can guess they're gonna be based pretty closely on the UK and Australian models. The US government being a sizable buyer, this is gonna create a sizable market for devices that meet the standard. And that's an incentive to build to secure standards, because why limit your market? If you build it secure, you can sell it both to those that don't care about security and to those that actually do. This won't necessarily help the Internet of Dongs market, because I don't think the US government buys a hell of a lot of connected sex toys. I don't know. But at least it's a standard we can point to and say, hey, it isn't a bad idea to meet this level; build secure and you won't have any limits on your market.

So, some closing thoughts. We've created tools and frameworks to build some absolutely amazing things, but we haven't required people to understand the underlying technologies, or the previous experience of what to do and what not to do. So we're putting the tools in the hands of people who don't know any better, a bunch of Homer Simpsons, letting them build a car from scratch without knowing the 80 or almost 100 years of experience of what not to do. Anyone can design and build a car, but it doesn't mean they can do it well. Anybody can come up with an idea for an IoT device; it doesn't mean they can do it well. The barrier to entry has been made so incredibly low that it allows truly terrible designs, whether through ignorance, indifference or just simple greed, to make it to market without any oversight, and consumers generally don't know any better. They go for the cheapest device, and the cheapest is unfortunately often the one with the most insecure implementation. Now, I'm not a huge fan of regulations and standards.
I think they should be minimal, but they are needed to help establish a baseline, or at least a common language that everybody can talk, point at and require. These are things we've been dealing with for 15 or 20 years; this should not be new. We've solved these problems, we just need to clue everybody in. No one wants or intends to build a device that's insecure and has their customers' data compromised as a design feature. Nobody does that; you're probably not gonna stay in business that long. But unfortunately, a lot of companies that do this stay in business far too long, because they never get held to account. Tools and learnings about these devices and their software should be made easier to digest for non-technical people. InfoSec is very bad at error messages; they're confusing and hard to understand. Think of every time you see an SSL error message: how many people's brains just turn off because they don't understand what it's saying? We need to limit the ability to make insecure choices. Why are we allowing HTTP when HTTPS is so simple now? Secure should be the default in all these frameworks, and you should have to put effort in to turn it off. We need to build bridges with industries that previously haven't had to deal with these problems, because a lot of them just don't know; they don't interact with people like us. We need to reach out to industries and companies that are making the leap into IoT and offer some guidance and support. And we in InfoSec, and the public in general, need to realize there are risque industries. The adult industry is a major one, and some companies say, well, I don't want to be associated with that. And it's like, these are people. They can be hurt. These are legal devices, and people are buying them. It's just another IoT device. We need to get over some of these stigmas.
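"Insecure should take effort" can be as simple as a gate that refuses plaintext transport unless the caller explicitly opts out. A hypothetical sketch (the function and parameter names are mine, not from any framework):

```python
from urllib.parse import urlparse

def require_https(url: str, allow_insecure: bool = False) -> str:
    """Pass HTTPS URLs through; reject plain HTTP unless the caller
    explicitly opts out. Secure is the default, insecure costs effort."""
    scheme = urlparse(url).scheme.lower()
    if scheme == "https":
        return url
    if scheme == "http" and allow_insecure:
        return url  # the caller had to say so out loud
    raise ValueError(
        f"refusing insecure scheme {scheme!r}; "
        "pass allow_insecure=True to override"
    )
```

Flipping the default this way means an insecure deployment leaves a visible `allow_insecure=True` in the code for a reviewer to find, instead of silently shipping HTTP.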
Outsourcing may be cost-effective, but you're putting your product, and in the case of some of these IoT devices some rather intimate things, in someone else's control, and they may not have your best interests in mind because they want to keep their costs down. You may want to bring some of that design in-house: have a roughed-out product design that you then take to somebody else, but with the standards already set for security, privacy and attention to things like the privacy of consumer information. The same people making a poorly secured fridge could be designing something more important later on. It may be that a fridge manufacturer outsources to another company that outsources it again to somebody else. We need to identify who the root sources of a lot of these poor designs are, and stop using them. And last but not least, we need to stop buying insecure products. They may be cheap, and they're plentiful out there, but we need to do our research more, as a society, and just say, no, we don't want that. So thank you very much. I have a website for the Internet of Dongs, an email, and thanks to Circus One, Nikita and Straith for their advice on this. I'm happy to take questions. Thank you.