 For those of you who attended the last talk, mine is not going to be that low level, it's going to be like poetry compared to it. I'm here to discuss privacy issues in IoT, to see how privacy issues look like and hopefully to bring some awareness to that and send you out with the knowledge. So let's start. So the left one is me. Another guy who worked with me on the research, his name is David Sopas, very talented man following me on Twitter. We both work at checkmarks, I'm the head of upsec research of checkmarks. This is my details to contact me if you have any questions after the talk, if you want to connect I'll be more than happy to do that. I know some people don't like to ask questions in a crowd, so this is a good way to catch me. So I'm starting with some assumptions. I'm assuming you're familiar with basics of Bluetooth and BLE, I'm assuming you're familiar with the attacks on these kind of technologies, I'm assuming you're going to be cool with oversimplifications that I'm going to do. We don't have time to talk about every single thing, so I'm going to lie a bit. And I'm assuming you want some links to the tools and methodologies that I'm going to show because I'm not going to discuss them in depth, just to show them. So I promise to publish everything that you will need. So let's start with the agenda. I'm going to describe a bit of privacy, what is it? And what is it good for? We'll show some IoT privacy leaks as a result of bad implementations, we'll do the same with malicious intent of devs and vendors, we'll talk about one very nice privacy leak we made with the high end IoT, I'm saying that so you'll stay. And some takeaways. So let's talk about privacy. We always start with the good book, Wikipedia, to describe things. So privacy is the ability of an individual or group to seclude themselves or information about themselves and thereby express themselves selectively. So there are some points in there that are very important to privacy. One of them is the right to be let alone, yes, it is a thing. The option to limit some access to information, to private information, secrecy, control over your information about oneself, obviously about oneself, about where you walk or your employees information as well, et cetera. So what are the reasons that people have, vendors, developers, if they're legitimate or not, to take one's privacy? First is security. If I tell you that one of you is a terrorist and I need to check each and every one of you, this is a good reason to do that, maybe. Maybe to get something physical, if I can break into your house or any other private place you have, I can steal things, I can take things. Get private information. There are a few aspects of private information that I can gain from getting your personal information. It worths a lot of money, as we know these days. Maybe some information about organizations, behavior analysis, all these are worth money. And also sometimes because I lack the interest of making sure that your privacy is kept. I think this is one of the most common issues, actually. What does it take from a person to forfeit his privacy? Also security, at the top place, people believe that if they're letting themselves being groped at the airport, it means that their flight will be okay, they agree to it. Sentences like I have nothing to hide, I'm sure you heard it from people. Try to ask these people some personal questions like maiden number, date of birth, and sexual preferences and you'll see that suddenly they do have stuff to hide. Sometimes laziness. I don't care. I don't read what I'm clicking on. It doesn't really matter. Ignorance from people who doesn't really know what they might lose. And the terrible of all, convenience. If I can let Google know where I'm going and just wait for them to let me know where I should eat or what I should pack in my suitcase, then I'm willing to forfeit my privacy. And then comes IOT and breaks everything because it has all the reasons to take one's privacy, all the reasons to forfeit one's privacy, and obviously lack of interest on the side of the vendors and convenience on the side of the users is a really bad combination. So what we did? We gathered a lot of IOT devices, some are really ridiculous. Here on top you can see a Bluetooth pacifier. I kid you not. People are putting Bluetooth in their baby's mouth now. Every single device had a privacy issue. Again, every single device. We did not manage to find even one device for lacking any problems. Cheap ones, expensive ones, no matter where they were manufactured, made in the US, made in China, all the same. Every single device. This is a scary thing. Let's start with the physical security. A lock, a Bluetooth lock, they come in many shapes and sizes. None of them stays closed, none of them. That's amazing. First of all, probably if I would hack this thing, it would be with a plier, but still people want to use it for small things, I guess. It's the same with houses, though. There are two really effective ways to break a smart lock. HCI snooping is, I think most of the locks actually broke with this method. It's really an easy method, easy as one, two, three. You enable HCI in your phone. You use the application that's supposed to open the lock, you extract relevant data using the P-cap, Wireshark, and then use a script to replay or fuzz. Extremely easy. Again, this is how you use HCI, run the application, and then look at the P-cap files. In this case, you can immediately see, well, maybe you can't, but I can see that the code is a six-digit number, obviously very easily fuzzed. Like in this example, here it is locked, and here it is not. This is the time it takes, not more than that. Almost all locks were checked. For those that this method did not work, we actually had to work a bit harder. We used many of the middle attack. I tried to make a live demo for it, but the atmosphere here is too dirty, so it really didn't work. So we'll just keep it at that. You run a proxy between the device and the lock, extract relevant data, replay, and the lock opens. You remember the right to be let alone, that nice thing, that part of privacy. This is not something you can do if you have this kind of smart band, anonymous smart band. We tried several of those, again, we will not let you alone. Again, faking messages, using the same methods, it's really scary. It's really scary. Just faking a message, that's it, nothing more than that. So why is it so easy? Why is it so easy? Well, two things, two reasons. One is lack of encryption. An IoT device vendor, they either do not implement at all encryption, or they use deprecated methods, or they use them wrong. Encryption is sometimes complicated, but if you don't care, it makes it even more complicated. This really allows easily to sniff either passively or by doing many the middle attack. The second thing is using very weak pairing methods, methods like JustWorks and Pasky. Read about it, it's really deprecated, it's really old. There are way better methods these days that should be used. So why not make it better? Again, it's cheaper, don't need to change anything, I have the same software for four years now. If it's cheaper, I can sell more. Vendors come and go very quickly, some of them don't really have names. Just imaginary names, zero liability, and people keep buying anyway, no matter what. So why bother? So this is when you implement in a negligent way. What about malicious? This was kind of surprising because we didn't really aim at finding something like that. We thought that just bad implementation is what we're going to see all over the place. And then we took a smart scale by AEG. The reason we took a smart scale by AEG is because we tried to get serious and not all this no-name IoT devices that we used before. AEG is German, nothing more serious than that. Also, it was in white or black, we took the black, very serious. So the scale was AEG, but the app that came with it, not so much, wasn't really AEG. The app was by someone called V-Trump, a Chinese company, among their clients are AEG Texas Instruments and Realtek, seems legit, nothing to suspect, and we didn't. So we started by installing the app like any other user who needs to check his weight, maybe, sometimes. And then we got all these permissions, all these needed permissions. It's not that bad. I mean, it's a bit weird that a smart scale would need to be able to mount and unmount file systems, maybe. We started to kind of suspect, but we kind of understood what's going on when checking the traffic. First of all, they didn't really try to hide anything because the host, I don't know if you can see it, it's called gather.lotusseed.com, they actually gather things. And then it was pretty horrible to notice that the app connects to a server in China and sends the following info, IMEI, Wi-Fi, ID, phone operator, phone brand and model, Wi-Fi of your house, and phone, MAC address, latitude, longitude, and obviously your weight. This is just to make it insulting of the old data. So it's not my mistake, this is not negligent, right? They actually made a lot of work to gather all this information. So we tried to get responses from all the previous devices, we didn't manage to get any response because I don't think it's real companies. So I just skipped that part. But here we did have real companies, right, AEG and VTrump. So AEG said we action products with priority whenever we believe it is necessary. I think they didn't believe it is necessary because I could never manage to contact them again or get any response. So this is the response we left with. VTrump on the other hand said that their app functionality does require these permissions. I mean they ignored completely what the second part of all the information that is sent, they only talked about the permissions. And obviously they decided not to change anything, and this is what they told us. But they did make some changes. Just a little before this lecture we checked again, they made changes. First of all they changed the host. So it's not gather something something dot com. It's now just a bunch of characters dot com. The second thing they did is they added encryption. So it's actually mitigating the penetration testers, it's mitigating the researchers because they didn't want us to see what they're sending. They did it really bad. We did check they actually sending the same thing, not that important. And this is an AEG that you would probably think that you should trust. Let's go a bit high level towards not personal but maybe corporate, military, I don't know other uses, exfiltration. So everyone knows what air gap is. Everyone who knows what air gap is please raise your hand. Okay, that's quite an amount. So air gap is being used for highly sensitive data. It's disconnected from outside network completely, it's supposed to be completely disconnected. Things can get in, no problem. It's an assumption that some viruses or malware will go into the system. And that's, let's say it's not okay but it's okay. But nothing gets out. So no matter what, nothing gets out unless you're using a $650 smart light bulb in your building, base, room, whatever. What we did here, on the left side you can see, let's start on the right side. On the right side you see simulation of a computer who has, let's say, malware or virus. This virus or malware found out that there is a smart light bulb somewhere in the vicinity, connects to it, and starts exfiltrating data through the blinking of light bulb. That's it. On the left you can see the app we did for Android that it will start gathering the blinks and make them into letters. It's very sterile here because it's just aimed at the wall, at the white wall. I don't know why it's blue but it's white wall, believe me. We actually tested it from a distance of 100 meters with a telescope during daylight to a window and it also works. You can see it starts blinking and soon it will start interpreting the data. As you can see, just 1s and 0s, nothing fancy. Some of you probably say it's not really effective because the victim actually sits in a room and the light blinks. He should be suspecting something sometimes. We actually did the same thing with only blue, blue wave lights. It cannot be seen in the eye but every camera catches it. Obviously we can do it multilayered with several colors if you want a larger, wider bandwidth. But this is enough for the POC. It was kind of funny to do something through a telescope during daylight. We didn't look crazy at all, by the way, doing it. So the same person who said, yeah, but the guy sees it blinking also is saying, yeah, but this is all just cheap, crappy devices. And he's right. So we tried something else. We went for Alexa. We went for Alexa because it's the least cheapest, crappiest device we could find. Amazon echoes the series. We sold intelligent personal assistant. By the end of 2017, 45 million units were sold. I'm pretty sure some more sold since then. Popularity rose in last years. So is the fear of being recorded or listened to unknowingly. And we wanted to show that the fear is totally justified. So we started checking it. What we wanted is to turn our Echo Dot into a tapping device. We're coming from Upsack application security. So we don't like soldering and we don't like, I don't know, some dirt on our hands. So we decided to do it remotely. We thought it would be easier for us. Because I think in the last couple of years, a couple of groups managed to do some hacks and cracks by actually having contact with the Echo. So we tried to do something remotely. The first challenge was actually the activation challenge. Because Alexa is asleep until you wake her up. She only starts streaming audio to the cloud after the wake up or the Alexa is heard. This is something that it's very hard to do remotely unless you shout really hard. So our solution was to start after the user wakes her up. It usually happens several times a day, I guess, for users. We had several options how to do that. Not to go into it very deep. We decided to use Alexa skills. Alexa skill is like an application that is run by Alexa. It can be either built in or you can download it for a dedicated Alexa store. And we thought it would be really nice to create a malicious one. Something that starts B9 like a calculator that you ask him how much is one plus one. You get the answer and you don't suspect anything but Alexa will then continue to record you. So this was the ground plan. The second challenge came quite quickly because we couldn't keep the session alive after the B9 part. So Alexa gives the answer and then she goes to sleep again. She either shuts down or if you tell her to stay, she prompts the user. So the partial solution was a flag that we found, I mean, it's a normal flag, it's a default flag, should end session. If you mention that you should not end session, then Alexa goes into another cycle, another session, but Alexa will prompt the user that she's again waiting for a response. So this is kind of problematic because the users of Alexa are very smart and they will know that something is going on. So the complete solution came from a class that is called reprompt. We said, what if we try and put an empty string in reprompt? Well, complete silence. So that was pretty cool. We managed to keep the sessions coming without Alexa saying anything to the user. So that was the first challenge that was solved, the second challenge that was solved. The third challenge was to actually get the data to the malicious developer. The actual recording is not accessible to the developers, it's uploaded to the cloud. But the transcription generated by Amazon is accessible. Usually the developer needs to choose a specific group of words which they call the word slot. For example, if you write, if you put cities, Alexa expects to get some sort of city name in your mumblings, names, animals, you get the gist. So our solution was to create a custom word slot that will get everything, whatever is said. Alexa will try to guess what is the closest word to it. And we actually created a custom one, we called it input, and this word slot would capture any single word. But we wanted sentences, not single word, because if the user says a single word and then goes to another cycle, then we probably miss the other ten words in the sentence. So we actually created templates for different sentences with all lengths of sentence we could think of. We could think of 15, I don't know why. Maybe you have a bigger vocabulary and you can make sentences of more than 15 words, we couldn't. So every sentence that is no more than 15 words will be captured by Alexa, transcribed and be sent to the malicious user. Okay, so I'll give a link to the demo later. Sorry about that. Can you hear it? Okay, let's skip it. And voila! Thank you. Thank you. Very great crowd. Okay, so that guy from before, from earlier, who pointed out things, he probably is saying now what about the blue light? The blue light is on all the time and he's right. But it doesn't matter much. And actually we thought it lowered the effectiveness of the attack of the malicious app, but Amazon didn't think so. Amazon thought it's still bad. And they said that the users of intelligent private assistants are not expected to keep an eye content with the device. So the blue light is not expected to be there. Also, and this was actually news for us, there is something called Alexa voice services which allows any vendor with some sort of IoT to embed Alexa capabilities into their probably lightless products. If I make a teapot or a refrigerator with Alexa in it, I don't have to put any light. So this does not lower the risk here. Custom response and they asked that we read it. Customer trust is important to us and we take security and privacy seriously. We have put mitigation in place for detecting this type of skill behavior reported by check marks. And this came from the security team at lab 126 in Amazon. Actually it was amazing working with these guys. They gave us very rapid responses. We collaborated with them the entire way with the fixes. And they decided to make the following mitigations, detect and disallow empty reprompt. There's no use of that feature. Identify is roping skills that are uploaded to the store. They didn't really give us more details but obviously they're on it and detect lower than usual sessions and act appropriately. So this is kind of great. Quick round of takeaways and I'm sorry if I'm getting a bit preachy but it's important. So it's very easy to forget that IoT devices are actually computers who has inputs and outputs, cameras, ears and eyes, whatever. Really users, normal users, everyday users do not really think that there may be privacy breach in stuff they wear all the time. The breaches as we said sometimes it's because the device sucks. Sometimes it's because it's malicious. But always because the users allow it. So privacy issues are a layer eight problem. Only users can solve it by making sure that the device only trust the devices, by making it very clear to the vendors that vulnerable devices will stay on the shelves and by making it very clear to vendors that found vulnerabilities must be fixed quickly. Your job, you're supposed to write it down. Talk to users please, talk to vendors if it's possible, educate and bring awareness. Privacy issues are a really big problem. Break as many IoT devices as possible and publish your findings no matter how small they are. Bring awareness please and that's it. That was me.