I work on the Google security team. I'm a senior security engineer on the Google security assessment team. I predominantly conduct red teaming about 80% of my time. At Google we also have the opportunity to do 20% projects, and I spend my 20% time breaking IoT. Those occasionally intersect, and I'll tell you about one time when they did. But first, I have some personal interests. I like to break IoT stuff for fun. I've been known to buy many, many IoT devices solely to see how they work, and sometimes find bugs in them. I also like to make electronic things, you know, badge life. So I'm going to start by telling you a little bit of a story about how I broke some door access controllers, and what actually led me to the realization something was broken. I wasn't actually looking at the door access controllers on purpose. I kind of stumbled across the realization something was broken and then had to spend the time making an exploit. And then I want to talk about how we fix it, right? I know that stories about how we break things are really, really interesting, but we actually only make impact as security engineers if we get a fix made from it. So first I'm going to tell you how door access controllers work. Many of you probably work in office buildings that use some kind of card based system for access, so you probably interact with these all day long, but you probably don't think very much about how they work. At a high level you probably know: oh, I have an access card, there's a reader, the reader talks to a server, and the server decides whether or not you're let in. But it turns out that the actual implementation is a little bit more complicated. You have the badge reader. You have a door controller that's present at the door. You have the connection to the lock on the door. And then you have a local controller which will usually be in an IDF or a similar facility within the building.
And then the badge server may in fact be sitting in a data center somewhere, or even in a cloud facility or a colo. So there's different protocols in use at different points along this connection. The most common protocol used on the wire from the physical badge reader itself to the door controller is a protocol called Wiegand, which originally came from magnetic encoding on the card and literally represents the flips in a magnetic field. Now many of our cards are in fact using a proximity system and have no mag stripe on them at all, but in order to maintain backwards compatibility they kept using the same wire protocol between the badge reader and the door controllers. And even though we have door controllers now with IP, that didn't exist at the time the Wiegand protocol became popular, and because these installations may last for tens of years, you can end up with a case where your backwards compatibility on each side is maintained for a long period of time. Beyond that, you have IP in this particular installation from the door controller to the local controller, and then you have IP traffic again between the local controller and the badge server. As to actually unlocking the door, usually that's just a straight 12 volt signal to a physical lock or to one of those magnetic locks that will release the door so that you can open it. So let's go through a typical scenario in terms of swiping your badge and getting access to a facility. The first thing that will happen is you'll swipe your badge against the badge reader, and the badge reader will immediately relay the information about that badge to the door controller: hey, badge number 335 has just swiped. The door controller will say, okay, I need an access check for door number one, because the badge reader doesn't know which door it's actually installed on, but the door controller does, so it says I need an access check for door number one.
Badge number 335, I'm passing it on to the local controller. The local controller basically passes it on to the badge server in order to find out this information. The badge server is what will actually make the authoritative determination. It says looks good, this card is allowed access to this door. Okay, the local controller says badge okay, and it tells the door controller to open door number one. And at that point you get your green light on the badge reader and you get your unlock signal to the door lock. Now this in practice only takes a fraction of a second, typically a few tens of milliseconds to a few hundreds of milliseconds. Although I'm sure some of you have swiped and then had that really long pause before it unlocked, and well, that's probably a sign of network congestion or failure along the way. So this is the typical scenario for a door access controller setup. But there are also some other use cases of all of this infrastructure. Let's say that you're at a remote facility and you've forgotten your badge and you want to be let into that facility. You maybe call your company's security operations center and convince them that you're the right person and that you should be allowed access. But how are they going to do that? Are they going to send a security guard to drive out to the remote facility and let you into the facility? No, it actually turns out that there's a way that they can, from a remote client PC, connect to the badge server and just say send an arbitrary command to unlock door number one. And that command goes from the badge server to the local controller to the door controller, where it's converted to an analog signal and the door gets unlocked. So a remote access and remote unlock capability is very common in these systems and almost universally implemented for these particular types of cases.
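The swipe-to-unlock walkthrough and the remote-unlock path above can be sketched as a toy model. To be clear, the function names, message strings, and allow list here are all invented for illustration; the vendor's real protocol is proprietary.

```python
# Toy model of the swipe-to-unlock flow. Everything here is a stand-in
# for the proprietary protocol, not the real message format.

ACCESS_TABLE = {("door-1", 335)}  # (door, badge) pairs the badge server allows

def badge_server_check(door, badge):
    # The badge server makes the authoritative determination
    return (door, badge) in ACCESS_TABLE

def local_controller(door, badge):
    # Relays the access check upstream and passes the verdict back down
    return "unlock " + door if badge_server_check(door, badge) else "deny"

def door_controller_swipe(door, badge):
    # The badge reader only knows the badge number; the door controller
    # adds the door identity before asking upstream.
    if local_controller(door, badge).startswith("unlock"):
        return "green light + 12V unlock signal"
    return "red light"

def remote_unlock(door):
    # The SOC path: no badge swipe at all, just an operator command that
    # travels badge server -> local controller -> door controller.
    return "12V unlock signal to " + door

print(door_controller_swipe("door-1", 335))  # badge on the allow list
print(door_controller_swipe("door-1", 999))  # unknown badge
print(remote_unlock("door-1"))               # no swipe involved
```

Note that only `badge_server_check` consults any policy; the two controllers are pure relays, which is why the remote-unlock path later matters so much.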
It also helps if the badge reader fails: they can send a remote command to put the door in an unlocked state for a certain period of time, or they can do things like changing the configuration for the door on the fly or other settings. So this is a possibility as well, and you'll notice that in this case it's not associated with a badge swipe. It's just associated with a command that comes from the control PC in the security operations center or wherever the security staff may be. So now onto our story. Once upon a time I'm executing a red team, and the focus of our red team was not in fact these door controllers. We just happened to be on a network that happened to have a patch panel in an area accessible to contractors. So we're like, cool, let's plug into this patch panel and see what's on this network. We didn't even know which network segment we were on at that point. But shortly afterwards we started seeing traffic we were unfamiliar with, and we started tracing the cables, and they went to the locks and the door controllers next to each of the doors. And we're like, oh cool, maybe this is some physical access stuff. So we dumped some of the traffic, but we didn't immediately see a way that we could do a replay attack or otherwise influence the door controllers. And since we were doing an engagement where we were physically on site at one of our facilities, and not in fact in our office, we didn't have a whole lot of time for analysis at that point. But I was like, you know what? I like to break IoT devices. I'm coming back for you later. And so I started looking at the traffic, because there was something that was just bothering me about it. I was like, there's something suspicious here and I can't really put my finger on it. So I started looking at the traffic in just one direction at a time. And so this is three messages in a row, three packets, coming from the door controller to the local controller.
And there's something really, really strange about these packets. If you haven't figured out what that is: it turns out the first 36 bytes of each message are the same. Which is pretty weird. The data looks very random. It doesn't look like it's some sort of plain text. But at the same time 36 bytes are the same, and then the remaining bytes are completely different. So at first I was thinking maybe it's some kind of proprietary binary protocol or something like that. But replaying these raw packets wasn't getting me anywhere, so I decided I needed to do some more analysis. So I went and looked at the literature for this particular product. And they make the claim: AES 256 network encryption. If I recall correctly there was also something about this being used to protect top secret government data in the brochure. Well, I'm pretty sure looking at that PCAP that they're doing it wrong, because usually when you see encrypted data you shouldn't have a pattern to it like that. I'm not a cryptographer, but I have seen a lot of encrypted protocols, and the ones that are well-designed don't have 36 repeating bytes in them. So I started thinking, okay, maybe they're doing something weird with this and they're encrypting each message separately or something like that. But I couldn't really figure it out from a black box approach having only the PCAP. So I started looking at the endpoint devices. And they're basically ARM devices running Linux. Some sort of Debian derivative, it turns out. It may actually be pure Debian from several years ago. And the actual firmware applications that are used to do the door access control are supplied as .deb packages. In fact you could download the updates from the vendor site and get the whole .deb that contains the application and all of the libraries that they're using. But these packages were actually several hundred megabytes.
So I had a number of binaries, libraries, and scripts, and I wasn't sure where to begin. What tools do we turn to? Well, there's a bunch of shared objects, and it turns out shared objects have to have some of the symbols left in, right? You can't completely strip a shared object or else you can't do runtime linking against it. So what's the fastest way to actually find the names of the symbols in a shared object? I know there's some way to do it with objdump or nm or those other tools, but it turns out just running strings against them is an amazingly effective way to get the names of symbols out of there. And then I don't have to look at the man pages for nm or objdump. And you can also use ldd to figure out which binaries were linking against those shared objects. Now there's a bit of a wrinkle there if you've never worked on a non-x86 architecture, and that is that you need an ldd that's been compiled for the architecture that you're dealing with. So in this particular case you need the ARM EABI version so that ldd can run against these particular binaries. I know that there are better and much nicer ways of doing this, but it turns out that if something is stupid and it works, then it's really not stupid. So after a while I come across this in a shared object, symbol still named, and it's labeled default AES key. There's also a default AES IV. I've fuzzed out a few of the bytes, I'm sorry, legal insisted. These are labeled defaults, though, and I was like, okay, these are the defaults, maybe these aren't the ones that are actually being used. And I spent a lot of time looking through the binary. I was like, all right, so where do you change the key? Is it loaded from a configuration file? Is it loaded from the interface? Is it loaded from an upstream server? How do you change the key? It says default, so there must be a way to change the key. And I could never find a way to change the key.
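The strings trick above is easy to reproduce. Here's a minimal strings(1)-style extractor in Python, run against a made-up blob standing in for the vendor's shared object; the embedded symbol names are hypothetical, not the actual ones from the product.

```python
import re

def extract_strings(blob, min_len=4):
    # Pull out runs of printable ASCII, the same thing strings(1) does
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, blob)

# Fake .so contents: ELF magic plus two embedded symbol names. This blob
# and the names in it are invented for illustration.
fake_so = b"\x7fELF\x00\x01\x02defaultAESKey\x00\x03defaultAESIV\x00\xff"

for s in extract_strings(fake_so):
    print(s.decode())
```

Exported symbol names survive stripping, which is why this crude approach works so well on shared objects.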
And I was really, really confused, so I was like, okay, what does default really mean? Default: a pre-selected option adopted by a computer program or other mechanism when no alternative is specified by the user or programmer. It is technically correct: the programmer did not let you specify an alternative, so it is the default. When we went to the vendor with our findings I said, hey, it's labeled as the default key. Please tell me, was I just an idiot and couldn't find a way to change it? And they were like, no, we never thought a customer would need to change it, and we were afraid that if they changed it in one place it would break their system and would result in all these support costs because the keys wouldn't be the same on their other devices. And I was like, thanks guys. All right, so I have a key, so what can I do with this? Finding a key is cool, but at that point I can get some plain text. Well, first I wanted to make sure that it was the right key. So I started decrypting, and for anyone who's ever looked a little bit at crypto, it turns out that plain text has lower entropy than cipher text. And so a really good sign was that I had long strings of null bytes once I did the decryption. That told me, hey, this is probably some binary packed protocol, there's lots of zero values, and so I've been able to successfully decrypt it. It also turns out they weren't using any sort of MAC or other authentication on the encryption, so I couldn't be a hundred percent sure. But it would have been a pretty strange coincidence to get a bunch of null bytes and lower entropy if I had the wrong key. Unfortunately the plain text still looked like a bunch of noise, just lower entropy noise than the cipher text. In general, plain text is useless to you unless you can assign some sort of meaning to it and figure out how it's structured. At this point I had the rough equivalent of decrypting it and finding out that I had Sanskrit inside.
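That entropy check takes only a few lines. This sketch compares random bytes (standing in for ciphertext) against a null-heavy buffer (standing in for packed binary plaintext); the buffers are synthetic, not the actual capture.

```python
import math
import os
from collections import Counter

def shannon_entropy(data):
    # Bits per byte: near 8.0 for good ciphertext, noticeably lower when
    # there's structure such as long runs of null bytes.
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

ciphertext_like = os.urandom(4096)                   # stands in for the capture
plaintext_like = b"\x00" * 2048 + os.urandom(2048)   # packed binary: many nulls

print(round(shannon_entropy(ciphertext_like), 2))    # close to 8.0
print(round(shannon_entropy(plaintext_like), 2))     # clearly lower
```

A big drop in entropy after decrypting with a candidate key is exactly the "long strings of null bytes" signal described above.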
I didn't know what it meant, but it was no longer a secret. And it turns out it was some custom binary protocol, so I had to start digging further into their binaries, doing binary analysis to try to get an understanding of how this works. So I started looking at badge reads with correct badge numbers and the responses that were coming back from the local controller, and the door unlock messages coming back from the local controller, and also correlating with the door status messages in order to figure out: oh, this message says that the door is unlocked, this message says that the door is locked. It turns out that their protocol actually has some 40 different message types in it, and so I couldn't figure out all of the messages. Many of them probably weren't relevant. They were messages to change various settings and timeouts and things like that. Right? There's a certain amount of time the door remains unlocked after you swipe your badge, for example, and that can be five seconds or it can be a minute depending upon the circumstances. Many of them are still unknown. I couldn't really justify fully reverse engineering their protocol at that point. But I wanted to get a working exploit out of this. Right? We didn't actually need it for a red team at this point, but there's just something about the satisfaction of breaking a physical device and actually seeing that physical impact in the real world. In this particular case, hearing the click of the lock mechanism as it opened the door. And I was like, all right, I have to keep going at this. I can probably justify another two or three days before my manager says, well, we found enough to report it and move on. I was like, okay. Gonna try to figure this out. So I start looking at it, and there's some sequence numbers in the flow. And this is actually why the replay attack didn't work. It's not that the crypto is preventing the replay attack.
It's just that they maintain sequence numbers within their application layer between the two endpoints. Unfortunately, the door controller connects to the local controller. At first I wanted to just initiate a new TCP connection to the door controller and say, hey, here's an unlock message. Unlock. But you can't do that if the door controller is the one reaching out to the local controller in order to get to things. And it kind of makes sense when you think about it, because your door controller in some facilities may be behind some sort of NAT. The local controller is a little further away. It makes sense for the most remote piece of equipment to be the one reaching out to the progressively closer to the core pieces of equipment. So if I couldn't initiate a new connection, I needed to find another way. Turns out ARP spoofing still works. Thank you. And you just man-in-the-middle the connection. Because if you can man-in-the-middle the connection, you can decrypt and get those sequence numbers that you need in order to talk to both sides. And in order to avoid the local controller saying, hey, I lost a connection to a door, I started replying to both sides. That's what I was using the door status messages for. I just kept telling the local controller: yeah, the door is still locked. Yeah, the door is still closed. You're good to go. But to the door controller I was telling it: hey, I have a remote door unlock from the administrative console. Please open the door for me. And then at the end I just plain dropped the man in the middle. I could have fixed up the TCP sequence numbers and adjusted the windows and gotten it back to the same state. But it turns out that on these networks these devices are so reliable that the error message of communications lost for a brief period of time is completely ignored by anyone looking at it. In fact, in our installations those messages happen hundreds of times per day.
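The man-in-the-middle bookkeeping looks roughly like this. The framing, type codes, and payload layout below are invented; only the shape of the attack reflects what actually happened: track each direction's application-layer sequence number, keep the local controller fed with door-status messages, and inject a remote-unlock toward the door controller.

```python
# Sketch of the MITM logic with an invented message format. The real
# protocol's framing and type codes are proprietary.

import struct

MSG_DOOR_STATUS = 0x01    # hypothetical type codes
MSG_REMOTE_UNLOCK = 0x02

def build_msg(seq, msg_type, payload=b""):
    # [seq: u32 big-endian][type: u8][payload] -- invented framing
    return struct.pack(">IB", seq, msg_type) + payload

class MitmState:
    """Tracks the application-layer sequence numbers for each direction."""

    def __init__(self, seq_to_local, seq_to_door):
        # Starting values are learned by decrypting the captured traffic
        self.seq_to_local = seq_to_local
        self.seq_to_door = seq_to_door

    def fake_door_status(self, door, locked=True):
        # Keep the local controller happy: door still locked, all good
        self.seq_to_local += 1
        return build_msg(self.seq_to_local, MSG_DOOR_STATUS,
                         bytes([door, 1 if locked else 0]))

    def remote_unlock(self, door):
        # Tell the door controller an operator requested a remote unlock
        self.seq_to_door += 1
        return build_msg(self.seq_to_door, MSG_REMOTE_UNLOCK, bytes([door]))

state = MitmState(seq_to_local=1000, seq_to_door=2000)
print(state.fake_door_status(door=1).hex())
print(state.remote_unlock(door=1).hex())
```

Because the protocol has no MAC, anyone holding the shared AES key can forge both kinds of message once they know the current sequence numbers.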
They're not going to send someone to investigate every time a connection is lost. It can happen from a loose cable. It can happen from a quick drop in power at the site. It can happen from any number of factors. And so no one's investigating: oh, communication lost, it came back 30 seconds later when it automatically did a reconnect. So it turns out that you could just man in the middle with your encryption key and send it a message that says unlock the door, and it unlocks the door for you. So this is basically what it looks like. You just need access to the network between the door controller and the local controller, and you can send an unlock door one message and it will unlock door one. Now, one interesting thing is when we first reported this to the vendor, their first response was: well yeah, you shouldn't give attackers access to the network between the door controller and the local controller. And I had to actually stop and think about how to respond to that for a second, because I was like, your product brochure says AES-256. If your premise is they can't get physical access, why do you need the encryption in the first place? And then I said, and do you really think that no installations anywhere give physical access to an attacker? And they were like, well yeah, you know, you always want to put these on an isolated network, and it should be completely internal to your site, and all of these things. And I said, don't you also sell a security camera product that integrates with this? And they're like, yeah, yeah, we have security cameras. I said, don't some customers put the security cameras on the outside of the building? Well, yeah. I was like, and the security cameras talk to the network how? Well, yeah, but that should be on an isolated network that you then just bridge over the connection from the cameras into the controllers' network.
And I said, do you really think your customers are setting it up that way, or do you think they're just plugging everything into a switch, saying hey, it works, and moving on? Right, I mean, I don't know. It could be that everyone is segmenting every device into its own LAN segment and all patch panels are carefully hidden behind well secured doors. It was not marketing, it was in fact engineering that I was talking to. Although, you know, there were also managers in the room, so. So how do you go about fixing something like this? I'm not coming up here to tell you, oh, these things are all broken and it's hopeless and everything like that. How do you actually go about fixing this in a way that makes the situation better? We tried to work with the vendor to get an improvement, and my first suggestion was: stop rolling your own crypto. I mean, I know you're using AES, that's great, but stop rolling your own implementation of that and use TLS. And they were actually already linking to libssl in order to get the AES functions to do the crypto, so it wasn't like they didn't have it sitting around. I was like, okay, use TLS. And they come back: all right, we will make an updated version of the product, and the updated version of the product will use TLS. And I said, great, how are you going to do key distribution? And they said, oh, it's simple, we'll distribute the keys in the firmware images. We're still back at hard-coded keys.
So why is this such a thing? You can easily implement some proper crypto, but key distribution is really the hard part of doing transport security on embedded devices. These devices do not have your typical interface that you would have on your laptop or your desktop computer. In fact, many of them may not even have host names. How do you do TLS certificate verification when I point it at 10.1.1.1? No one's going to give me a certificate for an internal IP address, rightfully so, and at the same time I expect to be able to communicate with that device securely. It also turns out, they told us, they couldn't fix the existing devices that were in place, because they didn't have enough flash to have the entire TLS implementation present, so they said they can only fix it in the next generation of device. And obviously these devices should really be on an isolated network, right? We can't depend on some internet-based solution for key distribution, because I don't really want my door access controllers reaching out to the internet to get their crypto material. Seems like I'm creating more problems. So we started working with them, and we started thinking internally, even: how do you do these sorts of things? Some of the criteria that you need in order to do it correctly: keys must not be common across multiple installations; devices need to only communicate with trusted partners; each individual message should have both confidentiality and integrity, which means having a MAC of some sort or otherwise signing the messages; and finally, they really shouldn't be rolling their own crypto. So we came up with a few hypothetical solutions. We weren't trying to redesign for them, right? We're not a consultancy, just a user of their product. But we also wanted the next version of the product to be something we could feel comfortable using. So hypothetical one is: use TLS, the vendor ships each device with
a certificate, and they trust all other devices signed by that vendor. Strictly speaking, I think maybe that's better than what we had, but not by much. An attacker can just go on eBay, buy their own device, extract a key, and then they can man in the middle using the key they extracted from a device on eBay. So this doesn't really get you into a much better situation than you have with just a hard-coded AES key. So someone points out: why couldn't you just generate a new key at boot time and say, hey, here's my new key? Well, the problem is you still need some way to verify that that key belongs to the device you expect to talk to. A newly generated key is fine, but you need some sort of authentication of the key, which is why we have CAs in the traditional PKI model: to say that that key is a real key. If I just give you a random key every time, then anyone in the middle can do it. So one other proposal: use TLS, configure each device with a customer specific CA certificate, and then the key material and the CA certificate on each device. And the biggest problem with that is it becomes difficult at scale if you have to manually deploy a key and CA certificate on each individual device. A large installation may have thousands of these devices in place. Think about how many access controlled doors your employer has. If you work for a large Fortune 100 company, there's going to be anywhere from a couple of thousand to even tens of thousands of access controlled doors within your offices. So in a third option, you could use TLS, you ship each device with a hardware attestation key that's baked into a secure chip, and the device can sign a certificate request on first boot and then send it upstream to a central CA that's maintained on the same server that is actually making the authentication choices. So you now have a root of trust that's in the same place as your database of card credentials and door access controls. Now this isn't
perfect, because it does require some additional setup, it requires a little bit of additional hardware for the device attestation key, and it does require that your network is trustworthy on first use. But in reality it's a lot better than where you were, and it's actually very similar to the SSH model. The first time you SSH into a new machine you get a prompt to accept the key, and I'm sure we all very diligently go and check the fingerprint of the key on the remote server. No. The reality is, if you're on a network that you don't think is being screwed with, you say yes, and then if it ever changes, maybe you start to worry about what happened to your network. Additionally, since the CA is sitting on the same host where your door access control database is sitting, or on the same servers that are providing that information, a compromise of your CA would have been a compromise of your door access control system anyway. So this puts the root of trust in the same place as your authoritative source of data for it, which results in possibly one of the best case scenarios that you can have while still not having to have user interfaces or individual configuration of the devices. I don't make a claim that this is the best situation, but we're hoping that this is a model that the vendors of devices like these might consider moving towards, since there already is a central point on each network for the communications; it's not fully distributed. So I think it's important to note that software security matters for physical security systems. I think many of you coming to the IoT village don't need to be convinced of that particular case, but it turns out that a lot of physical security vendors have so much expertise in terms of physical security. They'll tell you: oh, this lock is unshimmable, and we will do this, and we have a sensor on the back of the badge reader that will tell you if it's been pulled off the wall, and all of these physical attacks. But they don't look
at the network layer, because they're still in the mindset of the physical attacks that they've been protecting against for years, and IP has just been shoehorned into these systems. The industry could be doing so much more, and by the industry I mean both the physical security vendors and customers. Customers who are aware of these security vulnerabilities and these concerns can apply pressure to the vendors: get testing, get their devices looked at by an independent security consultant, find these bugs before they get to market and before they get installed, and find ways to make them more resistant to network layer tampering. But at the end of the day it's an ecosystem, and it'll all depend upon what customers are asking for. Hopefully customers will be asking for secure systems, since it is a physical security system, but in our discussions with vendors it seems like very few customers care about the network security aspects; they just want something that's going to keep physical concerns at bay. So, any questions? I'm happy to take a few. There's also my Twitter handle, my blog, and a short link for this slide deck if you want to take another look at it. I also have some IoT hacker stickers; if anyone wants to come up after the talk, you're welcome to some of those as well. Yes, question. So the question was: do I expect some attestation from the server that the device is actually one of the vendor's devices? Yes, that's why in this third scenario I said that the devices ship with a hardware attestation key, and the certificate request for the first time use would be signed with that, to at least show that the certificate is being issued to one of their devices. Obviously, if a malicious attacker gets one of the devices and plants it on your network, they might be able to get a key as well, so it's not a perfect solution, but I think it raises the bar.
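The first-boot enrollment from hypothetical three, including the attestation signing just described, can be sketched like this. HMAC stands in for both the hardware attestation signature and the CA's certificate signature; a real design would use asymmetric keys held in a secure element, and every name and format here is illustrative.

```python
# Sketch of trust-on-first-use enrollment. HMAC is a stand-in for the
# asymmetric signatures a real secure-element design would use.

import hashlib
import hmac
import os

VENDOR_ATTESTATION_KEY = os.urandom(32)  # baked into the secure chip at the factory
CA_KEY = os.urandom(32)                  # lives alongside the badge server database

def device_first_boot():
    # Device generates a fresh transport key and attests to the request
    device_key = os.urandom(32)
    csr = b"enroll-request:" + device_key
    attestation = hmac.new(VENDOR_ATTESTATION_KEY, csr, hashlib.sha256).digest()
    return device_key, csr, attestation

def ca_enroll(csr, attestation):
    # The CA only issues a "certificate" when the request carries a
    # valid attestation from a genuine vendor device.
    expected = hmac.new(VENDOR_ATTESTATION_KEY, csr, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, attestation):
        raise ValueError("not a vendor device, refusing to enroll")
    return hmac.new(CA_KEY, csr, hashlib.sha256).digest()

device_key, csr, attestation = device_first_boot()
certificate = ca_enroll(csr, attestation)
print("enrolled, certificate:", certificate.hex()[:16])
```

The point of the sketch is the trust relationship: the CA sits next to the badge database, so enrolling there puts the root of trust in the same place as the authoritative access data.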
Yes, so I'm very vaguely familiar with it. I'm sorry, I should repeat the question: the question is, how will the new OSDP protocol improve things compared to Wiegand? And the reality is, it depends on the attack. There are certainly attacks that you've seen, like BLEKey and things like that, where they're cloning the badge numbers off of the Wiegand wires. I don't know enough about OSDP to say whether or not it makes a dramatic improvement against those attacks. For this particular attack it would have no effect, because it's an IP protocol between those two devices and it's their own proprietary protocol; it's no longer Wiegand at that point, it's just transporting the badge number as a single number. But I do think moving away from a protocol that was designed for encoding magnetic bit flips is probably the right choice in 2018. So the question basically is: is public key infrastructure the right way to go, because public key infrastructure requires immense computing resources to perform the cryptographic calculations, and so IoT devices can't handle that, and for various reasons. And I think, for this particular scenario, for the devices in question and devices of this style, a lot of the concerns that you mentioned, in terms of battery life for example, don't apply: these are not battery powered devices, they're all hardwired, so battery life is not a concern. And the microcontrollers that are present on them, even on the smaller end, are more than capable of doing a PKI handshake each time they set up a TCP connection. These devices don't have high volumes of connections; they're not doing, you know, hundreds of connections at a time. In fact, if it wasn't for the occasional network comms failure, these devices stay connected for days at a time between the two endpoints with a single TCP connection. Additionally, since this isn't part of a public PKI, you could easily do it with elliptic curve cryptography, which is dramatically easier on processors and
much easier to do there. So I'm sure that there are other solutions as well. You said, should we be looking for something new? Yeah, absolutely, but I'm not aware of anything new that would address the problem in the correct fashion. If cryptographers have something up their sleeves that they would like to share with me, I'd love to learn about something else, but this is the best approach that I'm aware of. Right, so the question is: how do you keep someone from spoofing a device, in terms of having the TPM chip? Well, TPMs do have unique keys burned into them at manufacturing time. Extracting those keys from those devices is supposed to be difficult; the manufacturers make it very difficult. It's not impossible, but it dramatically raises the bar on an attacker, and if you can start pulling secrets out of hardware secure elements, there's probably a large number of other attacks that capability opens up. If someone is able to start extracting RSA keys or secret keys from a TPM, another thing they can do, for example, is completely subvert the root of trust for things like secure boot on a device. If they can get the secure keys out of the TPM, well, that's exactly what TPMs are designed to resist: being able to spoof their identity. Any other questions? Oh, sorry, by a key device you mean the badges. So one of the reasons that badge cloning is so possible is the difficulty of doing proper cryptography at that power level. When I was asked earlier about low power devices, right, that is the ultimate low power device. Your prox card style badge has a microcontroller in it, but that microcontroller is only powered by the inductive coupling between the door reader and the badge, and you expect the whole exchange to happen in something like a tenth of a second. And so I've actually had conversations with designers of these chips. I was like, hey, why isn't there just a private key in there, and you do like an ECDSA signature,
then no one could ever clone a badge just by reading it, right? I mean, ECDSA signatures should be strong. And he was like, yeah, we're dealing with processing power that's like one percent of what you would need to do an ECDSA signature. So if you had an active badge that had a battery in it, that would probably be feasible, but then you have people locked out of their office when the battery dies. So there are trade-offs, to be sure. I would love to see a solution where you basically have end-to-end cryptographic assurance all the way from your badge to the server that verifies it and back to the door. Yeah, so the bridge toll devices, they actually use very similar technology to a prox card, but most of what makes them chunkier is not in fact a battery, at least in the ones I've opened up in California; it's much, much larger antennas than what you have in the badge, so that it can do the longer-range read. And of course the readers that are mounted over the highway for the bridge tolls are much, much more powerful than the ones you have on a door. It's also why, if you go through a drive-in garage where you have to badge into the garage, you notice the readers there are typically maybe a foot by a foot in size, whereas the ones on the door molding are two inches by three inches. And all of that size difference is not so that you can see it more easily or something like that; it's entirely so that you have a much bigger antenna and can have much more power coupling between the two devices, giving you a longer read range. I think I've just about run out my time. If anyone wants to chat afterwards, I'll be around for a little bit, and thank you very much.