Yeah, thank you for the short introduction. I'm Siegfried, this is my colleague Stefan, and today we'll talk about our tracking-application investigation. We are both from Germany, from a research institute called Fraunhofer SIT, located in Darmstadt, close to Frankfurt. A few words about ourselves. So, I'm Siegfried. I'm leading a research group at this institute called Secure Software Engineering, and our main focus is static and dynamic code analysis: writing new analyses in order to find vulnerabilities in binaries as well as source code. I'm also a founder of the team called TeamSIK, which I will say a few words about in a second, and a founder of CodeInspect, which is a reverse engineering tool for Android. So Stefan, would you like to say a few words about yourself? Yeah, hello. My name is Stefan. I belong to the Testlab Mobile Security group. I'm also developing static and dynamic analysis tools, and in my spare time I'm digging around a bit with IoT stuff. Together with Siegfried, I'm a co-founder of our hacking team. Good, thank you. So, we mentioned TeamSIK. What we present today is not the result of just the two of us; it's the result of our team, which is called TeamSIK. A few words about it: it's a hacking group, and we meet once a week in our spare time. The team consists of researchers from our institute as well as students from the universities around us. We usually look into different interesting projects and try to find vulnerabilities, and the goal is basically to learn from each other. So the credits definitely go to those brilliant students and researchers mentioned below; two of them are actually here in the audience. Good. Before we start with the talk, a short beer announcement. Since we are, as you already know, from Germany — and the beer is actually from Munich, or close to Munich — we thought it might be yet another cool idea to import some boxes of beer.
And since this is our third DEF CON talk, we imported two boxes of beer this year. So after the talk, feel free to come and grab a cold beer; there are 40 bottles for you guys. Thank you. So, let's get started. A short agenda for today: I will start with a bit of motivation and a little background information, and then we dig into the results of our security findings. The first topic is client-side authorization — I will explain what I mean by that. Then we will talk about client-side vulnerabilities, and then about server-side vulnerabilities. At the end, a few words about responsible disclosure, because this was funny this year, and a summary. Good — motivation. When we started putting the slides together, I asked myself: how can I motivate tracker applications? The first thing you think about is surveillance, so I looked around a bit online and found a cool blog post about a CIA museum. For instance, in the 60s there were already radio receivers hidden inside a pipe — very small stuff, which was very interesting to see. Also from the 60s: a camera inside a pack of cigarettes, to covertly audio- or video-record the environment. And in the 70s, a microphone already fit into a dragonfly in order to spy on people. So I guess you already get it: this was the past. How is it right now? I guess we all have it in our pocket — the smartphone. There are a lot of sensors in it, like GPS and this kind of stuff, so you get a lot of information about people. This is the reason there is already spyware and RATs abusing this, extracting all the information and using it for whatever purpose. But we also asked ourselves: are there any benign reasons, any good reasons, to use such surveillance apps or tracking applications? And we found three different scenarios, which was interesting. First of all, families.
So there are apps out there where parents want to know where their children are — whether they are safe, based on the location information. There are couples, which was interesting: track-my-boyfriend, track-my-girlfriend apps. I don't know why they do this. They mutually agree on installing the application, to check that they are not cheating on each other or whatever. I don't know — but there are many of them out there. And friends as well: you want to know where your buddy is in order to meet up or whatever. So there are benign reasons. The question now is: how do you differentiate between the good and the bad, right? Because from an implementation perspective, both are implemented in the same way. For this, we looked on Google Play, found these kinds of apps, and thought there might be a definition of what a good app and a bad app is. One definition I found is in the Android Security Report, which states that commercial spyware is any application that transmits sensitive information off the device without user consent and does not display a persistent notification that this is happening. This means that if you want to publish a benign tracking application in the Play Store, you need to show the monitored person a notification like "right now I'm accessing your location information and sending it to your mom" — something like that. If this is the case, then it is a legitimate app; if not, it is considered spyware and shouldn't be in the Play Store. This is at least what we found. Good. In this project we focused only on those legitimate apps, and since they collect a lot of data, we asked ourselves: how well is the collected data protected, on the client side and on the server side? For that, we went to the Google Play Store and typed in "tracking application", "track my boyfriend", "track my girlfriend".
And we ended up with 19 different apps. Why this odd number? Well, we just downloaded a few of them — the first hits — and at some point we stopped: we found so many vulnerabilities that at some point we got bored. That is the reason there are 19; there is no special reason for the number itself. We made sure we at least got the ones with the most installations, based on the Google Play Store statistics. Another point: we only looked at free applications. I know there are a lot of commercial spyware applications out there; those were not the target in this project — only the ones you can download and use for free. As a spoiler: we found 37 different vulnerabilities in total, very, very sensitive ones, and in this talk we will show a few of them, or at least a few categories of them. Good. Before we get to the takeaways of this talk — what will you learn today? — I have to tell you that if you expect any sophisticated exploit, I unfortunately have to disappoint you. In this project it was very, very easy to get access to all this highly sensitive data, and even to do mass surveillance in real time. We also usually play this game of: can we get the premium features you normally have to pay for, for free? We will say a few words about that too — and yes, it was possible again this year. Good. Now some background information, just a very small and simple setup so that we are all on the same page. How does this work? Usually you have this application, and you have an observer and a monitored person. Both install the application, and then there is some kind of pairing process so that they know: OK, I belong to this person, or I can monitor this person.
On the monitored side, the app collects all this sensitive information, like the location and so on, and sends it to the back end; on the back-end side, the observer basically pulls the information, saying: right now I want to know where my kid is. This means that on the back end there is information like location data, call history, text messages, WhatsApp messages, whatever. On top of that, a couple of applications also had the feature of a messenger built into the tracking application, which means you can chat with your girlfriend and also send pictures and videos. This is important for the rest of the talk: all this data is stored in the back end. So what are the attack vectors here? Well, as I said, the usual game: can we upgrade premium features for free? We will say a few words about that. Then, obviously, the two communication channels: can we do a man-in-the-middle attack, and how was the protocol implemented? We will say a few words about that too. And the last attack vector is the back end itself. Good. In the following we will talk about all three of those stages, starting with client-side authorization. Before I do that, let me make clear — we all know this, but just to be on the same page — what it means to access the sensitive data. You have an observer who would like to access data from the back end, and there are usually two steps involved: first authentication, including identification, and then authorization — a check on the back end that verifies you are allowed to access this data. We all know this, but I'm just saying. What we saw is that most of the time there was some kind of authentication process — many times broken, many times there was none, but at least it was usually there. And then there was something we call client-side authorization, which I will explain in a second.
I will show you four different examples of what we found, which was not OK. Good. The first one: as I said, the usual game of premium features. These kinds of applications contain features which are disabled by default, and if you pay five dollars or so, you get the super cool premium features. One of them is, for instance, removing advertisements, so that you don't see ads anymore. Very simple. We asked ourselves how this was implemented — how do you get rid of the advertisements? — so we looked into the code. We found the following: a call like sharedPreferences.getBoolean("l_ads", ...), followed by a check whether the flag is set; if it is set to true, they simply disable the ad view on the client side. For those of you who don't know what shared preferences are in Android: a shared preferences file is an XML-based file that comes with the application and stores key-value pairs. In this case, l_ads was set to false; if you set it to true, you get rid of the advertisements. The question, for those who don't know, is: how can we manipulate this file? There are basically two ways. On a rooted device it's very easy to change this value directly. On an unrooted device, if the application allows backups, you back up the application including this file, modify the file, and restore it. This is all well known, also from the past. When we looked further into this shared preferences file, we found some other interesting settings. One of them was sms_full. sms_full means that all the text messages of the monitored person can be accessed by the observer.
The full text messages, because they want to know exactly whether the girlfriend or boyfriend is cheating or whatever. What does this "full" mean? If you did not pay, you only get the first X characters of each text message; if you pay, you get the complete text message as an observer. And if you set this flag from false to true — as we just learned — you get the full text messages. But the question was how this was implemented. It worked like this: the observer says, hey, please give me all text messages from my kid or my girlfriend, and the server says, sure, here is the complete text of every message. Then on the client side there was a check: if you did not pay, only the first 50 characters are shown; otherwise you see the complete text message. This was a little bit funny to see, and you shouldn't do this — I mean, come on. Good. Next, the second class of these bugs. As I said, there are basically two roles: you have a parent, which has the admin role, and you have your kid, which has fewer privileges. If you're an admin, you can create a new admin, and you can monitor your kid. The question was: how do these apps differentiate between an admin — a parent — and a child? Again, there is a shared preferences file, and there is a setting called is_parent. If this flag is set to true, you are a parent, and therefore an admin. This means that if you are the kid and you change this value in your shared preferences file to true, you are an admin and can spy back on your parents if you want. Good.
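The client-side "premium" check just described can be sketched in plain Java. This is a reconstruction, not the app's actual code: the key name sms_full and the 50-character limit come from the talk, everything else (class, method names, the map standing in for SharedPreferences) is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class ClientSideCheck {
    // Stand-in for the app's shared preferences: a key-value store that
    // lives on the device and is fully under the user's control.
    static final Map<String, Boolean> prefs = new HashMap<>();

    // Vulnerable pattern: the server already sent the FULL message;
    // only the client decides how much of it to display.
    static String displayMessage(String fullTextFromServer) {
        boolean paid = prefs.getOrDefault("sms_full", false);
        if (!paid && fullTextFromServer.length() > 50) {
            return fullTextFromServer.substring(0, 50); // free tier: 50 chars
        }
        return fullTextFromServer; // "premium": everything
    }

    public static void main(String[] args) {
        String message = "This secret text message is far longer than fifty characters in total.";
        System.out.println(displayMessage(message).length()); // 50: free user
        // Flip the local flag (rooted device, or backup/modify/restore)...
        prefs.put("sms_full", true);
        // ...and the premium feature unlocks without paying.
        System.out.println(displayMessage(message).length());
    }
}
```

Because the full data already reaches the device, the truncation is purely cosmetic; the only robust design is for the server to withhold what the account has not paid for.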
Next stage. Another example of this kind — I guess you see the pattern by now. There were applications that contained additional security protection mechanisms: once you open the application, it asks you for a PIN, and only if you enter the correct PIN can you access the application and the data in it. That is a good security feature. The question, again, was how it was implemented, and I guess you already know this game now: there was a flag in the shared preferences file, this time called p_flag — so the name doesn't point quite as directly to the lock screen. If you set it to false, you don't see any lock screen at all; even if a PIN was set, you can directly access the data. Yeah. And last but not least, the same obviously also worked for the login. There is an is_login flag, and since the username and password are stored locally from the previous login, setting this flag to true automatically logs you in, even without typing the username and password. Again, shared preferences. I mean, yeah. So the takeaway here is: please do not use shared preferences for authorization checks. For those of you who are bug hunters: please look into shared preferences files — it's always fun and you find a lot of stuff. And for the developers: please don't do this. We talked about shared preferences two years ago and last year, and there are for sure more apps — more developers — that still don't understand this. So please, don't do this again. Good. This was it from my side. I will now hand over to Stefan, who will continue with the remaining slides. OK, thank you. I will explain the rest of our findings and vulnerabilities now: first, the client-side and communication vulnerabilities.
For the people who are not aware of the concept, a few words about man-in-the-middle attacks. The basic idea is that the attacker gets between the communication — between the user and the back end — and tries to eavesdrop on or even manipulate the traffic. If, for instance, the app communicates in plain text, this is very easy for an attacker, because he can read everything, change data, and so on. Another case would be if the app has implementation flaws — it uses broken encryption, or has errors such that the attacker can easily bypass the encryption. The only reliable protection against a man-in-the-middle attacker is to implement secure, correct communication that is confidential, integrity-protected, and authenticated. So, our first man-in-the-middle attack. We had an application where it was required to sign in, and we wanted to know how secure this login process is. As a user, you have to enter your credentials; as an attacker, we observe them. The first thing you can see is an HTTP connection — plain text — so a man-in-the-middle attacker would be able to read the credentials in plain text. But as you can see in the sketched request, our credentials are not in there directly. So we replayed the login a few times to look for a pattern, and as you can see, we get different parameter names and different parameter positions, but always the same two parameter values. This looked interesting, so we dug into the code to find out more about the implementation. The first thing we saw was a hard-coded encryption key. Reverse engineering the algorithm, we saw that the username is XORed with this key and Base64-encoded, and there was a predefined set of random values, one of which was randomly picked and combined with the username. The same procedure is used for the password.
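The scheme just described — XOR with a hard-coded key, then Base64 — is trivially reversible for anyone who extracts the key from the APK, because XOR is its own inverse. A simplified sketch (the key and the email address here are made up, and the random padding values from the talk are left out since they carry no information):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class XorObfuscation {
    // Hard-coded key as it would sit in the decompiled app; value is made up.
    static final byte[] KEY = "s3cr3tkey".getBytes(StandardCharsets.UTF_8);

    // XOR is its own inverse, so the same routine "encrypts" and "decrypts".
    static byte[] xor(byte[] data) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ KEY[i % KEY.length]);
        }
        return out;
    }

    // What the app does before putting the value on the wire.
    static String obfuscate(String plain) {
        return Base64.getEncoder()
                .encodeToString(xor(plain.getBytes(StandardCharsets.UTF_8)));
    }

    // What the attacker does after sniffing the value.
    static String deobfuscate(String value) {
        return new String(xor(Base64.getDecoder().decode(value)),
                StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // What the attacker sees on the wire...
        String sniffed = obfuscate("alice@example.com");
        // ...and what two lines of code turn it back into.
        System.out.println(deobfuscate(sniffed)); // alice@example.com
    }
}
```

Obfuscation like this only raises the bar by the few minutes it takes to decompile the app; it is not a substitute for transport encryption.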
This means that if we, as a man-in-the-middle attacker, can observe the traffic, we simply take the value, Base64-decode it, XOR it, and we get the credentials in plain text. The other parameters we saw were garbage: two additional parameters, also randomly selected from a predefined set, but carrying no value. This is some kind of weird obfuscation — we don't know why the developer did this, but it's the wrong way. So, as I said, if you can eavesdrop on this data, you can decrypt it, get the login data, and authenticate. How do you do it right on Android? Secure communication is not that hard: use an HTTPS connection with TLS 1.2 or, later, 1.3. You need a valid server certificate, which you can get for free, for instance from Let's Encrypt. And on the Android side, doing HTTPS is very easy: define a new URL object, open a connection, and that's basically it. OK. The next thing we saw: problems with authentication. As Siegfried already mentioned, the apps transfer all the tracking and location data to some kind of back end. In most cases the back end hosts a database, and if you want to connect to a database, for instance from Android, you instantiate the database driver and then establish a connection. What we saw in the application is a typical pattern of how you should not do it. They establish the connection directly from the app, so they need the URL, a login name, and a password — and the problem is that the password is stored in the application. This means everybody who has access to the application, or to this code, can simply extract the password, the username, and the URL, and has complete access to the back-end storage where all the data is stored. We saw this in a few apps. Here you see a simplified database schema: this back end stores the email address, a name, and especially also the location information.
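To make the "few lines of HTTPS" point above concrete, a minimal sketch of the secure variant in plain Java; the URL is a placeholder, not any vendor's real API, and the actual request is commented out so the sketch runs without network access:

```java
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class SecureRequest {
    // Small helper so the scheme of an endpoint can be checked.
    static String protocolOf(String spec) throws Exception {
        return new URL(spec).getProtocol();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder endpoint, not any vendor's real API.
        URL url = new URL("https://example.com/api/login");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setConnectTimeout(5000);
        // conn.getInputStream() would now perform the TLS handshake: the
        // platform verifies the certificate chain and the hostname for us,
        // so an active man-in-the-middle triggers a handshake failure
        // instead of silently reading the credentials.
        System.out.println(protocolOf("https://example.com/api/login")); // https
    }
}
```

With a valid certificate on the server (for example from Let's Encrypt), no home-grown XOR scheme is needed — the transport layer already provides confidentiality, integrity, and server authentication.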
In our findings, there were in total 860,000 different tracks, or locations, in this database. And if you make regular queries, you can also observe — track — people in real time, because the app regularly sends updated data to this database. That's not all, of course. When we looked one step further into the code, we thought: OK, SQL — the right way is to use prepared statements. And you can see they already defined a prepared statement, so we would expect some method that sets the values in this statement. But what we saw was that they overwrite the prepared statement with a concatenated statement, and as a user you control, for instance, the email address. What you see in this sample is a classic SQL injection vulnerability. So the app is already broken by design, and we stumbled upon this on top. I don't want to be impolite, but this is really stupid code. OK, now for more exciting things: let's get to the servers hosting all the data. As Siegfried already introduced, we need an authentication process and an authorization process, and we tried to find out whether there are problems, design flaws, or other vulnerabilities that bypass or break these processes. We have different, let's say, stages of vulnerabilities and findings — some five — and I will now explain the different findings on the server side. The first thing is not really a vulnerability; it's more of a feature, or a usability thing. After installation, the application has by default an option which says that all location and tracking data sent to the back end is accessible to everybody — it is freely accessible. This is a kind of design flaw. A better option would be an opt-in, where the default is that not everybody can access it, and the data only becomes public if the user explicitly agrees. What does this data look like, and how can you access it? It's very easy.
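The broken pattern described above — a prepared statement overwritten with a concatenated query — can be illustrated at the string level. This is a reconstruction; the table and column names are made up:

```java
public class InjectionDemo {
    // Vulnerable: user input is pasted straight into the SQL string,
    // which is what the app effectively did after discarding its
    // prepared statement.
    static String vulnerableQuery(String email) {
        return "SELECT * FROM locations WHERE email = '" + email + "'";
    }

    public static void main(String[] args) {
        // A classic injection payload turns the filter into a tautology,
        // so the query matches every row in the table.
        String payload = "' OR '1'='1";
        System.out.println(vulnerableQuery(payload));
        // SELECT * FROM locations WHERE email = '' OR '1'='1'

        // The fix: keep the placeholder and bind the value, e.g. with JDBC:
        //   PreparedStatement ps = con.prepareStatement(
        //       "SELECT * FROM locations WHERE email = ?");
        //   ps.setString(1, email);
        // The driver then treats the payload as data, never as SQL.
    }
}
```

The same one-line payload idea underlies the server-side injections shown later in the talk.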
You just need to know the website and the username of the person you want to track or listen in on. For this we have prepared a short demo video. You go to the website — as I said, you have to know the URL — and then you enter a username. We chose a random username; let's call the user "sexy". When you open it, you already get some tracks on Google Maps. You can open the details and see when the track was recorded. If you click on the track button, you get the exact location, the time the track was recorded, the starting time, some kind of profile information, the altitude and so on — here you see the altitude diagram. And you can also replay the person's track: you can see, OK, he's getting into his car, you see his speed, he's driving around, and thanks to the Google Maps integration you can zoom in and look at the details — you can see the person driving to some school and so on. It's also a bit curious: in this track the person goes to a school at 1 p.m. and moves between the school and the ATM several times — don't ask what they are doing there — and at the end goes back to the home address. OK. As I said, this was just a feature, not really a bug. But that is not all: we also stumbled upon a real bug in this website, in the form of an XSS. So, next step: authentication problems. Sometimes you get the impression: authentication — what's that? We took another application, and if you look at the traffic of the application, or reverse engineer the code, you see an HTTP request in there. As already mentioned: nothing new, plain-text communication. Then we have a user ID — the ID of the observing user, the person who wants to monitor someone. This user ID was protected by a Caesar cipher. I don't know why; it makes no sense. Then we have the child ID.
This is the ID of the person you want to observe — a simple 10-digit number. And we have the current date, the date when the person's last track was recorded. As you can see, this is not a very complex request. If you can brute-force or guess the child ID of the person you want to monitor and send this request, the server responds with the person's whole track. We took this tracking data and plotted it on Google Maps, and here you see the movement profile of one of our students. This is completely accessible without any authentication, login process, or whatever; everybody who knows this URL can track other persons. OK. Siegfried already introduced some additional features: the apps can also forward text messages. The question was: can we get these messages too? If you look into the traffic: to get the messages of the monitored person, there's an API where you make a simple POST request with a number — how many SMS you want from this user — and his user ID. In response you get a timestamp, the phone number, and the messages the monitored person sent. Now, what happens if you leave the user ID empty? You get all stored text messages from the server. Yeah. OK. So, as you see, this is not rocket science; there are no really complex exploits here. You just have to know how to use your browser and send a URL. Now we get into exploiting: we have a few SQL injections, very simple ones. Again, another type of app, and in this app it's also possible to track a person; here you have to know the person's mobile number. The back end provides an API: if you enter the mobile number, you get the longitude, latitude, location, and timestamp of the person you want to monitor. OK. Now, a little spoiler: we are in the SQL injection section. So what do you think happens if you do this?
Yes: you get all the data — phone numbers, location data — from the back end. If you look at the history, the first recording started in the year 2016. That's all; a simple SQL injection. The next one is a bit more, let's say, complex. Siegfried also mentioned additional features like messenger functions, so people — for instance you and your girlfriend — can exchange images. As in a usual messenger, these images are stored on a cloud system, and of course there is one cloud for all images; not every user has his own cloud. In this case the user needs to authenticate to the cloud back end, and there is also a filter: this user has authenticated, these images belong to this user, so he only gets the images of his girlfriend and not the images of strangers. The question now is: can we somehow bypass this authentication, or were we able to compromise the cloud? Spoiler: a little demo. If you take a look at this cloud, it provides a simple web interface. By the way, we had to obfuscate the URL, because this bug is still not fixed by the vendor. So the cloud storage provides some kind of simple web interface, but when you enter the URL in the browser, you get no images, because you are not authenticated. We are in the SQL injection section, though, so let's try a simple SQL injection on the parameter — there should be a bigger image of the SQL injection in the upper corner; you can see it here. And surprise, surprise: we get a preview of the images stored on the cloud system. We can also open these images and download them. As you can imagine, if people have the possibility to exchange images, they will not just exchange burgers or selfies; they will also exchange more private or sensitive material — let's say from the realm of adult entertainment and so on. So we also found very sensitive data on this cloud. And no, I cannot say how much data.
We did not count or download the images, for privacy reasons, and, as I said, the bug is still not fixed. But in this way we would have been able to dump all images. Yeah. Then, the last step: those were just images; now we want to go for the crown jewels. Can we get the credentials? One of our applications had a strange, let's say, installation process: the app was able to recognize whether it had already been installed on the device. When you install the app for the first time, it generates some kind of device ID and stores it on the back end. If you remove the application and reinstall it, it sends a request to the server with the device ID, compares, and detects: OK, I was already installed on this device. And if it realizes it was already installed on the device, the server sends the username, the password, and the email address back to the application. So our first idea was: the device ID — can we spoof it? The problem is that the device ID is a long, complex number; you could perhaps reproduce it, but it's not the best way. Our other trick, leaving the ID empty, also did not work. So let's try an SQL injection. Here you see a little curl command which performs the request with the SQL injection, and what we got back were stored user credentials: the username, the user ID, and the password — in plain text. As you can imagine, we can iterate over all values, and all in all we were able to extract over 1.7 million records: passwords, credentials, everything in plain text. And if you think "what the fuck" — OK, there's more. Surprise, surprise. First, a few words about Firebase, for those who are not aware of it. Firebase is a service from Google supporting web and app developers, providing services for crash analytics, cloud messaging, and storage. In our case, we focus on two of its services.
One is the Realtime Database, and the other is an authentication process, or rather an API for it. If you're not familiar with the Firebase service, just imagine this Realtime Database as a classical database. So, we have another app. They implemented their own authentication process and hosted their own authentication server. As a user — or as an attacker — you first send a login request containing the user's email; as an attacker, we send the victim's email. On this back end there is a classic database with a publicly readable table, and this table stores the user email and the corresponding user ID. When you send your POST request, the server queries the database, and if it finds the corresponding email address, it replies with the user ID. In the next step, the app tries to access the Firebase database by sending this queried user ID to get access to the stored data. So we query with this user ID in this publicly available table, and as a response we get the location data, the address, and the date when the request was sent. That was the first thing: as you can imagine, just by guessing the email address you get access really easily. I'm sorry, I did not find a better facepalm for this slide — but this was only the first one. Because in the next step, you see that the database back end also sends the user credentials back to the app. The question is: why? Well, this is an example of how you should not do it: the developers implemented client-side verification. They expect the credentials from the server, compare them inside the application, and if they match, they grant access. I think you're aware by now that this is not the correct way to do it. And another thing — our old trick: what do you think happens if you remove the user ID?
Of course: we get all the stored data from this database — location, address data, user credentials, security tokens, whatever. Yeah, shit happens; it's sometimes easy to bash people. So what is the problem here? The first thing: they did not set any authorization rules on the Firebase database. This is a common problem; developers are not aware of it, they use some default configuration, and then authorization is effectively disabled. Further, as I explained: if you're doing authentication, you don't do it on the client side — you have to do it on the server side. And especially if you want to work with Firebase, use the SDK; don't build strange code constructs yourself. The SDK supports Google Sign-In, custom email/password sign-in, Facebook, whatever — they have already implemented it correctly in the SDK. It's also possible to use your own authentication back end; there's a good tutorial on the Firebase site, and if you follow it step by step, you're on the right track. But don't do anything with publicly available databases and other weird constructions. OK. A few experiences from our responsible disclosure process. Of course, we informed all vendors and gave them 90 days to fix the issues — but the 90 days are not strict: if a vendor says, OK, I need longer because of development cycles or whatever, we say that's fine, as long as you fix it. This time we got a few strange reactions. The first one is as expected: "We will fix it, thank you" — everybody is happy. The second: sometimes you simply get no reaction. One vendor reacted with: "How much money do you want?" They thought we wanted to blackmail them, but we clarified: no, we just want to give you the report; please fix it. And the last one: "It's not a bug, it's a feature." For the manufacturers who did not react to our emails, we tried to involve Google; Google has this App Security Improvement program and also a security team.
We wrote an email to them and sent the advisories and reports — everything — but we did not get any direct reply or reaction. Last week we checked the store: 12 of the applications were removed, so seven are still vulnerable. Also, the back end from the demo video you saw is still active, and nobody is reacting. So, a short summary of our talk. As always, as you saw: don't use plain-text communication; mobile is radio communication, and in most cases it's very easy to eavesdrop on, sniff, or manipulate your data. Use SQL prepared statements — nothing new; Android in particular provides a huge API for SQL and prepared statements. If you're doing app development, don't just focus on the app: if you use a back end, you also have to consider security on the back-end side. Very important as well: don't store any user secrets — passwords, encryption keys, whatever — on the client side. Everybody who has access to the app (and there are a lot of people reverse engineering Android apps) can extract that information very easily. Also, Siegfried already explained the shared preferences thing: if you have a special premium feature or need license checks, Google provides an API for this. And if you're working with Firebase, read the Firebase tutorials and use the authentication and authorization APIs they provide. Here, at the end, you see again a list of the apps we analyzed. The left column shows the apps with client-side vulnerabilities; on the right side are the apps where the back end is involved, where we were able to access location data or even all stored data. If you look at the table, nearly all apps, especially on the back-end side, were vulnerable to some type of attack. So, this is the end of the talk; thank you for your attention. Two last words: for all our findings, we also wrote advisories for the vendors, and the advisories are accessible on our website, along with the findings.
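To make the Firebase point from the summary concrete: the "no authorization rules" problem is fixed with security rules on the Realtime Database. A minimal sketch — the users path and the rule layout are hypothetical, not the analyzed app's real schema:

```json
{
  "rules": {
    ".read": false,
    ".write": false,
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

With rules like these, an unauthenticated query — or the "just remove the user ID" trick from the demo — is rejected by Firebase itself, regardless of what the client does.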
And the last thing: whoever wants to talk with us, discuss, or ask a question — come to us and grab a cool beer. We also have a bottle opener, so you don't have to stay thirsty. Thank you again.