My name is Rodrigo Chiossi. I work for Samsung in Brazil, and today I'll be talking to you about Android security and the modifications we can make to the platform to improve its overall security. Those of you who were here last year may know me from the AndroidXRef project. Here's a quick update on how the project has been doing this year: it has grown a lot. It now has basically all major versions of Android and the kernel available online for browsing, and it gets an average of 10K page views a day, which is way more than I expected. We got many more developers using it than I thought we would.

Why am I here today talking about Android security? I work at a Samsung lab in Brazil called SIDI. The lab was originally created to perform local modifications to phones for the Latin American market: we got the phones from Europe and the US and adapted them to Latin America. Over the years it became a research lab covering all major areas of mobile phones, and one area that is really strong right now is security research, which is the team I work with. I've been working with Android security since the end of 2010. Over the first year of our group, we focused mostly on Android hardening, implementing features to improve security, such as our custom version of ASLR, a custom firewall and file system encryption, just to name a few. But in the past year, since the end of 2011, we've moved more towards offensive security: we take our devices and test for security holes in all layers of the platform. We test everything, starting with the kernel, looking for bugs introduced by our drivers and things like that, then the file system, the Android platform, and also Android applications, which is what I'm going to talk about today.

So, what apps do we actually analyze? We focus on preloaded apps, the apps that actually ship with our phones; this is our primary target. We test Samsung apps and apps developed by partners. When I say Samsung apps, they're not necessarily implemented by Samsung, but they ship under the Samsung name. We also test apps that are not preloaded but are likely to be loaded onto the phone; those are mostly enterprise-focused apps, again from Samsung and from partners. And there is a third group of popular critical apps. I say critical because these apps actually perform some critical operation on the phone, so we won't be testing things like Angry Birds. They're more like antivirus apps, mobile device management apps, things that have high power over the capabilities of the phone and are likely to be loaded on our phones.

Okay, now let's jump into some numbers. What you see here is the result of one year of app assessments by the Samsung team in Brazil: the 12 most common vulnerabilities we found during our assessments. To build this chart, I took pretty much all the assessments we did over the year. Those are done mainly by hand; we have some automated processes that test for certain vulnerabilities, but when we test an app, we have a developer dedicated to checking everything, decompiling and reading the code. It's a mix of black-box testing,
which happens mostly when we get third-party apps, and white-box testing, when we get Samsung apps for which we have the source. I took all those reports and went through the vulnerabilities trying to find some sort of pattern, some kind of error that was more common than the rest. If you look at the first one, you get open broadcast receivers, which are over 25% of all the errors we found, then improper SSL handling, open services, and so on. I'll be talking about these biggest groups and how they could actually be avoided on the platform side.

Okay, so let's start with open broadcast receivers. I suppose you've heard of broadcast receivers before, but in short, a broadcast receiver is a component in an app that listens for an intent and then performs some action based on that intent. Just as a note: none of these security vulnerabilities are new. They're all known in the community and have been talked about before, but they still happen very often.

An open broadcast receiver is a receiver that has been exported to the platform and is available for everybody to access, so any app can send an intent and trigger that receiver's action. This is not always an error; you can have a broadcast receiver that is intended to be open to everybody, but that's usually not what the developer intended. The most common use case for exported broadcast receivers is to expose functionality to another application from the same developer, within a private context. You want it exported, but you don't want it exported to everybody.

Now let's look at the default behavior of broadcast receivers. If you declare a very basic broadcast receiver in your app, without any intent filters or anything, the default behavior is that it's restricted to your app. The only way to trigger its functionality is by calling it using the class name directly, and it isn't visible to the outside. That's a good thing: the most common case, the simplest case, is protected, so keeping it restricted was a very good design choice. When you think about exported broadcast receivers, though, the most common intended behavior is that the receiver should be exposed only within a private context, and that's not what actually happens: when you export a broadcast receiver, it's first available to everybody, and only then can you restrict it to your applications.

Here's a quick example of how you can protect your broadcast receiver. You declare a custom permission with protection level signature, which means that only apps signed with the same key as the app that declared the permission will be able to hold it. Then, in your receiver, you require that permission. When you add an intent filter, Android is going to export your broadcast receiver, so here you're exporting it and at the same time saying that whoever sends an intent to this receiver must hold this permission. That way it's protected. This is not the only way to protect your receiver: you could also declare that the action you use to trigger the receiver is a protected one. But I prefer it this way, because it makes your intention to protect the receiver explicit; it's clearer.
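Here's roughly what that looks like in the manifest (a minimal sketch; the package, permission and class names are placeholders I made up):

```xml
<!-- A custom permission with protection level "signature": only apps
     signed with the same key as this app can be granted it. -->
<permission
    android:name="com.example.app.permission.PRIVATE_ACTION"
    android:protectionLevel="signature" />

<!-- Require that permission from anyone sending intents to this receiver.
     Note that it's the intent filter that makes Android export it. -->
<receiver
    android:name=".PrivateReceiver"
    android:permission="com.example.app.permission.PRIVATE_ACTION">
    <intent-filter>
        <action android:name="com.example.app.PRIVATE_ACTION" />
    </intent-filter>
</receiver>
```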
As I said, once you have that intent filter, the receiver is going to be exported, but you can also add the android:exported attribute to say "I want this exported no matter what", which is also good, because when you read the code it's clear that this thing is exported.

So let's look at the proper implementation flow for a broadcast receiver: you declare the broadcast receiver, then you export it to everybody in an unprotected state, and only then do you add the mechanism that protects it. What actually happens is this: the developer declares the broadcast receiver, runs the app, and it doesn't work. He looks for a solution, most likely on Stack Overflow, which is the biggest Android resource, everybody knows that. The first result will be someone saying "okay, you have to export your receiver", so the developer exports it, tries again, and it works. The problem is that if you trace the development flow the developer just went through, the point where the program starts working is not the protected state; he stopped one step earlier. From this point on, the developer tries the app, it works, and he just moves on to the next feature, because that's what you do when you have deadline-oriented programming, which is pretty much everybody.

So what could be done to avoid this problem? The current implementation flow is: you declare the broadcast receiver, then you export it unprotected, and only then do you protect it. If the development flow went in the opposite order, you would declare the broadcast receiver, then move to an exported-but-protected state, and only then unprotect it, opening it to everybody. In the most common use case, exporting within a private context, a developer following the same flow as before, searching the internet for a solution and so on, would still get his app working at the second stage, but the second stage would now be the protected state. Nothing changes in what the developer does; we have the same developer with the same experience in Android programming, but he ends up with a secure app. So in the design of how broadcast receivers are implemented, if we change the path a developer takes to reach the unprotected state, we can make the platform accidentally safer, because we don't need to teach the developer one more thing to get safer applications.

This happens with broadcast receivers, but it's not exclusive to them: the same applies to services and content providers. In all three cases, the developer reaches the unprotected state before he reaches the exported-and-protected state. The three have different mechanisms for exporting their functionality; in the case of services, what happens is that you allow everybody to bind to your service. The solution is pretty much the same: you declare that the service requires a permission, so that it can be bound only by trusted apps, and the same goes for content providers.
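On the manifest side it's analogous; here's a minimal sketch, again with made-up names:

```xml
<!-- A service that is exported but can only be bound by apps holding
     our signature-level permission. -->
<service
    android:name=".PrivateService"
    android:exported="true"
    android:permission="com.example.app.permission.BIND_PRIVATE_SERVICE" />

<!-- Content providers can guard read and write access separately. -->
<provider
    android:name=".PrivateProvider"
    android:authorities="com.example.app.provider"
    android:exported="true"
    android:readPermission="com.example.app.permission.READ_PRIVATE"
    android:writePermission="com.example.app.permission.WRITE_PRIVATE" />
```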
One interesting thing: last week, I think on Wednesday, there was a blog post from the Google security team about improvements in Android 4.2, and one of the things they addressed was content providers. The default behavior used to be for content providers to be exported; now they've changed it, so the default is restricted and you have to manually set a provider to exported. The default is now the safe state. This is good because it indicates that Google is aware of this issue. Whether it's going to solve the problem or not, I don't know; it's more a matter of which SDK you compile against. If you compile with an older SDK, the provider is exported; if you compile with the new one, you have to declare it manually. The runtime behavior is still pretty much the same; what differs is how you express it in the code.

[Audience question]

Well, actually, I think it should be the opposite. As I said, you shouldn't have to say that you want it protected; you should have to say that you want it unprotected, because when you specifically state that you want something unprotected, you know what you're doing, or at least you should have an idea of what you're doing. The platform should lead you down the secure path.

If you ask me how this could be implemented: one thing that could be done, thinking about the example from before, is to make the android:permission tag compulsory on exported receivers. And it doesn't need any new tag: if you enforce the permission tag and you want an open broadcast receiver, you just change the permission's protection level from signature to dangerous, so any other app that wants to use it simply declares that permission and it's fine. Other apps still have to specifically say they're going to use that resource, but it isn't restricted. You may say this makes the platform more complicated, but the fact is that keeping it simple the way it is right now, exported with no permission, isn't achieving anything good. What you get out of that simplicity is a bunch of security problems. Even if you think about the simpler broadcast receivers, such as one listening for BOOT_COMPLETED, it may look like an unprotected one, but it's actually not, because that intent can only be sent by the system; in that situation you're in a protected state anyway.

Okay, now moving on to the second group: improper SSL handling. This happens when the developer has a self-signed certificate and wants to validate it in his app. The certificate can be for a website of his own, a web server, whatever; the point is that it was not signed by one of the certificate authorities shipped with the default Android trust store. The problem is that the APIs to validate a self-signed certificate are very, very complicated, and they have very poor documentation. If you look at the documentation on the Android developers website, the hard part, creating your own KeyStore to validate against, is just a "dot dot dot". They only show the easy part: once you have everything ready, you just load it and validate. The hard part is not there, so you have to guess. And if you look for solutions, what people actually do is re-implement the TrustManager so that the checkServerTrusted method doesn't do anything. What this does is require the server to present a certificate, but never actually validate it, and that opens your app to man-in-the-middle attacks.
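To make both sides concrete, here's a sketch in Java (the class and method names are mine): the trust-everything anti-pattern you find upvoted on Stack Overflow, next to the approach the documentation glosses over, building a KeyStore that contains only your own certificate:

```java
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public final class SelfSignedTls {

    // The anti-pattern: a TrustManager whose checks are empty. TLS still
    // makes the server present a certificate, but ANY certificate is
    // accepted, so a man-in-the-middle can substitute his own.
    static final X509TrustManager TRUST_EVERYTHING = new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) { /* no check! */ }
        public void checkServerTrusted(X509Certificate[] chain, String authType) { /* no check! */ }
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    };

    // The proper approach: load the self-signed certificate into an empty
    // KeyStore and derive the TrustManagers from it, so connections succeed
    // only when the server presents exactly that certificate.
    public static SSLContext forSelfSignedCert(InputStream certStream) throws Exception {
        Certificate ca = CertificateFactory.getInstance("X.509")
                                           .generateCertificate(certStream);

        KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
        keyStore.load(null, null);               // start with an empty store
        keyStore.setCertificateEntry("ca", ca);  // trust only our own cert

        TrustManagerFactory tmf = TrustManagerFactory
                .getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(keyStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;  // use ctx.getSocketFactory() on your HttpsURLConnection
    }
}
```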
With the trust-everything version, if I'm in the middle of the connection, I can present my own self-signed attacker certificate, and it's going to be accepted, because it's a well-formed certificate and the app isn't validating its source. And this thing is very, very common; you can see that from how often we found it during our analysis. One of the reasons, again, is that if you search the web for self-signed certificate validation, you're going to be directed to Stack Overflow, and the insecure version is the first answer, the one with the most votes. This problem has been getting media attention lately, so in the same post you can now find the proper way to implement it, but it still has only a few votes, so you have to scroll down to find it. I hope it gets more popular over time and people start implementing this properly. The good thing is that Google is aware of this one too, because Android 4.2 has new APIs to validate self-signed certificates. They're still a bit complicated, but it's a step in the right direction, so let's see what happens. Maybe next year I'll be here with a new chart and we'll see whether it solved the problem or not.

Okay, now for the rest of the chart. If you remove improper SSL handling, open broadcast receivers, open services and open content providers, the rest are pretty much the developer's fault, so there isn't much we can do in the platform, at least from my perspective, that would solve them. I just want to point out that they are very, very common, so please, if you have your own app, check for these things when you get back.

The first one is hard-coded crypto keys. I don't know why developers think it's a good idea to put a crypto key inside the app, especially on Android, where it's very, very easy to decompile and regenerate the code. You get pretty much plain-text code out of an app, and it's very easy to spot a hard-coded key. And this is found in the most critical apps you can imagine. I won't name names, but I'm pretty sure you can find them if you want to.

The other thing is trusting SMS messages to perform critical operations. I found many, many apps that, for instance, trigger a factory reset or a data wipe on the phone from an SMS. But SMS is not a secure channel. Even if you trust restricted fields of an SMS, ones a person wouldn't be able to fill in just by typing on the phone, they can still be forged. So if you trust an unauthenticated SMS to trigger a critical operation, that operation can be triggered by anybody, and this is also very common. If you do use SMS to trigger critical operations, please add some sort of validation or encryption; just don't trust it out of the box.
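As an illustration of the kind of validation I mean, here's a minimal sketch (the names are mine, and the shared key would have to be provisioned securely rather than hard-coded): the sender appends an HMAC over the command, and the app recomputes and compares it before acting.

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class SmsCommandVerifier {

    // Checks that an SMS command such as "WIPE:1362096000" carries a valid
    // HMAC computed with a key shared with the server. The key must NOT be
    // hard-coded in the APK: provision it at enrollment and keep it in
    // private app storage. The timestamp (or a counter) inside the command
    // is what stops an attacker from replaying an old, valid message.
    public static boolean isAuthentic(String command, String hexMac, byte[] sharedKey)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        byte[] expected = mac.doFinal(command.getBytes("UTF-8"));
        // Constant-time comparison, so we don't leak where the MACs diverge.
        return MessageDigest.isEqual(expected, hexToBytes(hexMac));
    }

    private static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++) {
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return out;
    }
}
```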
Okay, now this is another interesting thing, what I call the hidden issue: excessive permissions. I call it hidden because it does not appear in the chart, and it does not appear in the chart because it's extremely complicated to measure with the default tools. So the fact that it's not on the chart doesn't mean the apps didn't have this problem; it just means we couldn't actually measure it. Excessive permissions are not a security issue on their own, but if you do have another security issue in the app, they amplify the problem a lot.

One example I have here is the Pwn2Own case. I don't know if you've heard of it; Pwn2Own is a contest held at CanSecWest. Originally it was focused on browser hacking, so it started with Internet Explorer, Firefox and then the Chrome browser, and later they created the mobile edition, where they try to hack BlackBerries, Android and the iPhone. Last year at this contest, the Android target was the Galaxy S3, and after the contest I was directly involved in the analysis of the problem that was found. It was a very complex attack, but one key point is that they managed to attack a preloaded app, and this preloaded app had the INSTALL_PACKAGES permission. It was not actually using this permission; some developer added it during development for whatever reason and forgot to remove it. This is a signature-or-system protected permission, but since the app was preloaded, it carried the system signature for that phone. Due to this excessive permission, the attacker was able to install a payload on the phone, and from there he could access pretty much everything. If it weren't for the excessive permission, this attack probably wouldn't have been viable, or at least it would have been much harder, because they would have had to find another vector.

So let's look at why this happens. From the developer's point of view: he implements his app and calls some restricted APIs; what I call restricted APIs are APIs that require you to declare some sort of permission in your app. Then he runs the app, and the app crashes. He looks on the internet for a solution, again hitting Stack Overflow, and finds a lot of people telling him he needs to declare some permissions. He copies and pastes all the permissions he found, and the program works. But I don't know anybody who, after reaching this working state, would keep removing the permissions one by one to check which ones were actually needed. Once the app reaches the working state, the developer just moves on, and those permissions stay there for no reason.

But why does the developer have to do this at all? If you think about those restricted APIs, you know beforehand, at compile time, that some permission is required, so why can't we automate this process in the SDK? It won't cover every single case; sometimes you have to declare a permission manually. But for the most common cases you already know at compile time what you're going to need: if you're using some Wi-Fi API, you know you require a Wi-Fi permission; if you're using some phone-state API, you know you need the phone-state permission. The trouble is that we don't have a clear mapping from APIs to permissions. If you look at the documentation, you can find which permission a specific API requires, but it's all spread across the documentation. We don't have a single map, and that map is what we need to be able to automate this process. Again, it may not cover every single case, but just being able to forget about permissions initially, compile the app, have this map generate the declarations for you, and then add your custom changes on top if needed, would reduce this problem a lot. And if you have this mapping, you can also build automated tools to detect the problem. There have been some attempts to do it, but the ones you can find online are either outdated or incomplete.
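Just to show the shape of the thing, here's a toy Java fragment of the API-to-permission map I'm talking about (these few entries are real, but a usable map would need thousands of them):

```java
import java.util.HashMap;
import java.util.Map;

public final class PermissionMap {

    // Maps a restricted API to the permission it requires. With a complete
    // map like this, the build could infer most <uses-permission> entries
    // from the calls it actually sees in the code.
    static final Map<String, String> REQUIRES = new HashMap<String, String>();
    static {
        REQUIRES.put("android.net.wifi.WifiManager#getConnectionInfo",
                     "android.permission.ACCESS_WIFI_STATE");
        REQUIRES.put("android.net.wifi.WifiManager#setWifiEnabled",
                     "android.permission.CHANGE_WIFI_STATE");
        REQUIRES.put("android.telephony.TelephonyManager#getDeviceId",
                     "android.permission.READ_PHONE_STATE");
        REQUIRES.put("java.net.Socket#connect",
                     "android.permission.INTERNET");
    }
}
```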
Well, to sum up, I hope I've shown you that there are changes we can make to the platform, to the workflow, to the way we develop things, that make Android more secure without having to change the developer himself. Of course we want developers to be more security-aware and to write secure code, but if we can also change the platform to make it more secure, why shouldn't we? And on the design side, we should always aim for a development flow that leads the developer through the secure state before he reaches the insecure state. As I showed, if we had something like that, you wouldn't even have those big chunks in the vulnerability chart; just by changing the flow we could reduce them a lot. And that's pretty much it. I was a bit faster than I expected, I don't know if I spoke too fast. If you know of other places where this kind of design change could be applied, or if you have doubts or anything like that, I'd be glad to hear from you. Thank you for being here today.

Any questions? Yep.

[Audience question]

Yes, actually, you can see the poster outside about the Samsung SAFE program that ships with the Galaxy S2... no, the Note 2, sorry. That project is going to use a bit of Security Enhanced Linux, but the SE Android project itself is more like, I would say, sample code for how you could achieve a secure platform. It's not that easy to port everything to a final product, because there's a lot of custom code, things that partners ask us to implement, that conflicts with it. But we do take a lot from Security Enhanced Linux.

[Follow-up question]

Well, for the issues here, if you think about the open broadcast receivers for instance, it wouldn't do much, because having a broadcast receiver exported to everybody is valid behavior. You can legitimately have an app that exports functionality to everybody; it's not wrong. So you can't have a policy running in the framework that says "no, your app's not right, I'm not going to execute what you're asking", because you can't tell just by looking at the app that the developer didn't want to export it to everybody.

[Follow-up question]

Yeah, but I mean, how can you tell a legitimately exported broadcast receiver from an unintentionally exported one? You don't know what the developer's intention was; both are correct code. The point is that one of the developers didn't want to do it, but he didn't write broken code, it's valid code.

[Follow-up question]

Ah, okay. Well, that's actually pretty easy. If you have a broadcast receiver that will, I don't know, place a call or send a file over the internet, and any app can trigger it, then every app effectively has the capability of uploading a file to the web without declaring the INTERNET permission. That's a vulnerability. Think about it: it is a vulnerability whenever you export a functionality that would normally require the caller to hold a permission.

Anything else? All right, thank you.