Sanyu, hi, everybody, can you hear me okay? I hope I can hear myself. Thank you so much for coming out to learn about Android security. We're basically going to be talking today about finding and fixing vulnerabilities in Android applications. Let me use my keys, all right, that's fine.

Okay, so who am I? I'm going to skip through this. I'm an application security engineer, and you've already heard all this, so let's just go on to the disclaimer. What we're going to learn today is stuff that can be used to exploit actual applications that exist in different Android marketplaces. So please be a good human being: use this stuff to educate your developers, for your own auditing within your organization, or for bug bounty programs. Please hack for good. And if you find vulnerabilities, disclose them, because everybody learns from them.

Okay, so what are we going to talk about today? We're going to start very, very briefly with some Android basics, just running through how security works in the Android ecosystem, why it's built the way it is, and giving you a bit of a foundation for what we're going to talk about after. Then we're going to talk about tools, basically setting up your hacker space, so to speak, and the different tools that I like and why they're fun to use. Next, we're going to talk very briefly about why the Android ecosystem tends to be a bit more vulnerable than, say, iOS, how we can mitigate some of those attacks, and some things that came out of the security discussion at Google I/O this past week. Then we're going to cover some common attack surfaces, finding and fixing common vulnerabilities based on those attack surfaces, and then some resources for continued learning, because I really like books and courses and I'm going to share them all with you.

So, Android basics. A lot of people talk about Android as basically Java on Linux. That's a bit of a misnomer, but basically accurate. Android is essentially a stripped-down, modified Linux kernel plus a virtual machine that runs Java-like applications. And you can see here that the virtual machine provides an abstraction layer over the operating system, which keeps things a little more nicely contained. Now, originally, in older versions of Android, the virtual machine that was used was the Dalvik VM. That has since been replaced with ART, which stands for Android Runtime, and that's not really going to matter much today. The only thing you need to know about it is that ART is a little better from a performance standpoint; in terms of security, if you want to be really nitpicky, the only real difference is that because ART is slightly newer, maybe there are more vulnerabilities that haven't been found yet, because it just hasn't been out in the wild as long as Dalvik. So the code is written and compiled in Java, converted to DEX, and then that's converted to OAT files. If you're worried about how to get DEX files back into a Java-readable form when you're decompiling, it's actually pretty handy, because the DEX files are contained within the OAT files even on newer devices running ART. So there are some really fun tools that basically dump OAT files and convert them back to DEX files, and you can still decompile them with the same kinds of applications you would use on older versions of Android.

Security boundaries. Security boundaries basically divide levels of trust.
You're basically dividing kernel space from user space. The code in kernel space is trusted to perform low-level operations that you wouldn't necessarily want users or applications to get their hands on, because malicious apps can do really scary things if they have operating-system-level abilities. The user space is focused a lot more on applications.

All right, so we've got two permissions models. There's the Linux kernel which, as we've mentioned, is very similar to the Linux kernel you would have on your PC: you've got users and groups that enforce security, and that's where your permissions are defined. That's what's referred to as the sandbox, and it limits what can access which resources. And then the Android runtime defines the permissions for the applications.

Okay, so let's talk very briefly about application components. These are words you're going to see used a lot over the course of this presentation, so I'm running through them so that anybody who doesn't typically do Android development or reversing has an idea of what I'm talking about.

Activities. An activity is a single, focused graphical interface; it's basically what you're interacting with as the user.

Intents are used for messaging between components. You have implicit intents and explicit intents. This is going to become very important in our discussion later, because there are a lot of vulnerabilities around using these two different types of intents. Explicit intents are routed to a single application. Implicit intents are routed to any application that shows interest in receiving the intent that's being sent out there, which sounds scary, and I promise you it is.

Next we have broadcast receivers. These are inter-process communication endpoints, and they basically allow an application to register for certain events that it wants to act on. For example, let's say an app needs a notification when the device receives a text message. It would register a broadcast receiver for that event, the SMS broadcast, and the code is only executed when that event happens. Broadcast receivers are the only components in the Android application world that can be created at runtime, which is kind of interesting to think about.

Services are essentially background operations that don't require any interaction with the user through a graphical window or anything like that. You have started and bound services. A started service doesn't require any ability to communicate back to whatever started it; it can just run and do its thing and doesn't have to send anything back. A bound service is exactly what it sounds like: it actually passes some sort of communication back to whatever started it.

Content providers manage the storage of application data. These are a lot of fun, and we're going to see some fun ways to exploit content providers, because they basically act as databases for your application. They're used for sharing data between applications, if they're used correctly, and they're defined by the developer. SQLite is really easy to use for Android application development, so a lot of developers tend to go with SQLite databases.

WebViews basically act like a web browser: they allow you to display web pages within the frame of the application.
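Going back to that explicit-versus-implicit intent distinction for a second, here is a minimal sketch of the difference. The activity, the extra, and the URL are made up for illustration; this is not code from the talk:

    import android.app.Activity;
    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;

    public class IntentExamples {

        // Stub activity, just so the explicit example has a concrete target in this sketch.
        public static class CheckoutActivity extends Activity {}

        // Explicit intent: addressed to one specific component in one specific app.
        static void openCheckout(Context context) {
            Intent explicit = new Intent(context, CheckoutActivity.class);
            explicit.putExtra("order_id", "1234"); // only CheckoutActivity ever sees this extra
            context.startActivity(explicit);
        }

        // Implicit intent: any installed app with a matching intent filter can offer to handle it.
        static void viewLink(Context context) {
            Intent implicit = new Intent(Intent.ACTION_VIEW, Uri.parse("https://example.com"));
            context.startActivity(implicit); // whoever "shows interest" in ACTION_VIEW for URLs can receive it
        }
    }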
Prior to Android 4.4, WebViews used the WebKit engine; now they're backed by Chromium.

Permissions. Very, very important. You're going to hear me say a lot today: give something a permission, add a permission to that, maybe just use permissions. That's because they prevent a lot of the vulnerabilities we're going to see, and they're a really easy way to make sure that scary applications can't do anything bad with your code. The activities an application can perform are restricted to its permissions, and applications are sandboxed within the OS so they can't access other applications' data. Permissions are really important for that, because if you have an application that's handling, say, banking information, you don't want some malware going, "hey, I'd really like to see those credentials."

And then finally, your manifest. The manifest file is essentially an XML document that tells you everything you need to know about the application. It's where you define your permissions, and it's basically responsible for allowing you to use components while you're building your application. With the exception of broadcast receivers, which can be created at runtime, if you don't define a component in your manifest file, it doesn't actually do anything.

Okay, so, this is fine. Let's talk about tools. Setting up an environment can take a little bit of time, but it's totally worth it and makes everything a little easier. I love Linux. I find Linux is the easiest operating system for doing any kind of pen testing with Android. And I know some of you are thinking, "you love Linux while using a Mac," but you can also run Linux in a virtual machine if you're on Windows or a Mac or anything like that. The reason I like it is that a lot of these tools were built for Linux, and although they do work with other operating systems, there tend to be dependency issues that cause you a bit of grief.

The two distros I primarily use are Kali and Santoku. Kali, I'm sure a lot of you have heard of; it's a great pen-testing distro. It comes with so many tools out of the box, although surprisingly not that many mobile tools. So if you're doing primarily mobile testing, Kali can be kind of bloated, and if you don't have a huge hard drive, uninstalling stuff is kind of a pain. Santoku, on the other hand, is really great for mobile pen testing and mobile malware analysis, because it comes with all the tools you will ever need for mobile development or mobile pen testing. And if you hate both of those options and want to do your own thing, there's always Ubuntu, because it's so user friendly.

You can also dual boot, right? If you really don't want to dedicate your entire computer to Linux, you can dual boot on Mac or Windows. I've found that tends to get you stuck in hardware-compatibility hell, so I prefer not to go that route. You can also just order bootable USBs online. To me that sounds terrifying, but if you don't want to make a bootable USB yourself, you can always do that. And if you're interested in using Kali, Offensive Security has a really great Kali book on using the operating system. So those are good resources.

Okay, so what do we need to do a lot of the stuff I'm talking about today? The most important thing is that you have the Android SDK. And all these commands, sorry, they're for Linux.
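Roughly, that setup looks something like this. The archive name and paths are placeholders, not the exact commands from the slide, so adjust them for wherever you unpack the SDK tools:

    # rough sketch; archive name and install location are assumptions
    unzip sdk-tools-linux.zip -d ~/android-sdk
    sudo ln -s ~/android-sdk/tools/android /usr/local/bin/android   # symlink so you can run it from anywhere
    chmod +x ~/android-sdk/tools/android                            # make the android tool executable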
Most of them will work in any Unix environment. You're just going to install the Android SDK, create a symbolic link so that you don't have to run it from the exact folder every single time, and then make the android tool executable. The android tool manages your emulators, and adb is what interacts with your devices, whether that's an emulator or a connected device; it gives you a shell, lets you read logs, and lets you do a bunch of scary stuff. If you're running a 64-bit system, you're actually going to need to install 32-bit headers and 32-bit packages as well, so keep that in mind if you run into issues while setting this up.

Creating an Android emulator is pretty easy. Once you've got that executable android tool, you can just run "android sdk", which opens a nice little GUI that lets you download a bunch of packages. From there you can download whatever versions of the API you want to use and create your emulator. "android avd" is another way to pop up, not the tooling, but the GUI for the emulators you've created, and if you want to run an emulator from the command line you can just use "emulator -avd" followed by the name of your emulator.

If you want an interactive shell on whatever device or emulator you're running, run "adb devices", which shows you all the devices that are connected; take note of the device ID, pass it along with the "adb shell" command (adb -s <device id> shell), and you've got yourself a shell. The fun thing is that emulators run as root by default, but they don't always work for hardware tests. We're not really going to talk about hardware exploits today, but if you're doing stuff like that and wondering why it isn't working in your emulator, it's probably because you're using an emulator, and you will probably have to use a real device.

So where do you find APKs? If you're doing this from a research standpoint, or if you're an appsec engineer and your developer team says "we're not giving you the APK, go get it off the Play Store" (first of all, sad), you can actually download any APKs that are on your device with adb. You just do "adb pull" with the remote path and the local path; it's an easy way to grab them off of your phone if it's rooted. You can also go to third-party download sites. I've used these. They're really sketchy, but they do work, and I haven't found malware yet. So if you want to have some fun with that, and you don't really feel like setting up adb and just want to do some reversing, it's a reasonable way to go. Two of them are APKPure and apkpocket.net. Yeah, just brace yourself.

All right, analysis tools. Throughout this talk we're going to see some Drozer calls. Drozer is a really handy framework for analyzing and auditing apps. It basically allows you to assume the role of an Android application, so you can interact with whatever the app you're testing exposes, and you can find a whole bunch of things, like which content providers are exported and whether there are any SQL injection vulnerabilities in your application. It's kind of dual purpose: it helps you find vulnerabilities, but it also provides exploits and payloads, which is pretty handy. So you've got three main components to Drozer.
You've got the agent, which is a lightweight Android application that runs on whatever device you're testing with; the console, which is basically what lets you run all these commands; and the server, which routes the sessions between the console and the agents. And there you can see how you set it up.

Apktool is another really great resource, and it's great for reverse engineering your applications. It basically takes them back to close to their original form, which is quite handy. It converts the DEX files to smali, which is an intermediate language; it's not great to read, but you can always convert smali to a Java-style representation that's a little more fun to read if you're a Java developer. And you can actually recompile your APKs after you've decompiled them and maybe messed around with things. Again, that's just how you set it up.

And then we've got JADX ("jad-x", "j-dex", I don't actually know how to say it). JADX is one of my favorite tools: a command-line tool and a GUI for decompiling your APKs. It's really easy because you literally just pass it the APK and it does everything for you and gives you a really nice, readable Java format. You can look through the entire application, the resources, the manifest file, everything, and it looks almost as if a developer handed you their project on GitHub and said, "hey, here's my project."

So now we're going to talk about why the Android ecosystem tends to seem so vulnerable in comparison to, say, iOS. One of the main issues is fragmentation. We love the fact that Android is open source, right? It means that anybody can build a device that runs Android; you can have fridges that run Android and all these crazy things. And it's great for mobile device manufacturers, because they can build their own devices with the platform and do whatever they want. But the problem is that you have so many devices, and if anybody here is an Android developer, you know the struggle of having to make sure that all of the devices you could possibly support are supported, and especially if you're doing user interface work, it's hell. There are also so many different hardware configurations: different API levels, different screen sizes, different peripheral availability. It's just kind of crazy.

The good thing about this is that if somebody finds a vulnerability in one particular combination of all these different ways Android can be used, typically they can't use it to exploit devices that don't fit that same profile. So that makes the attack surface a little smaller. But at the same time, if you're a security engineer trying to do a comprehensive audit of your app across all Android devices, it's basically impossible. So it's a bit of a trade-off.

Update frequency. This is a big issue that I'm sure a lot of us are aware of, because update frequency is not great for Android devices. There are a lot of people still running really old versions of Android that have not been patched at all, and new versions are adopted really slowly.
To the point that in 2013, the ACLU actually filed a complaint with the FTC against four major US mobile carriers, basically saying: you aren't making it easy for people to update their devices, and that's a security risk.

Then there's back porting, or the lack of it. The problem is that security fixes for older Android versions don't really exist. The attitude is sort of, "oh, you're on an old version of Android? Just update to the new one, otherwise you're out of luck." That doesn't really happen, and it leaves people vulnerable.

The Android Update Alliance, or lack thereof. In 2011, during Google I/O, they announced this really amazing initiative to encourage all the different OEMs to commit to updating devices and supporting them for at least 18 months after their initial release, and you had Samsung and HTC and Motorola and LG and all the big players on board. And then it was never mentioned again. Google I/O 2011 happened, everyone was like, "yes!", and then the next day everyone was like, "well, that's not a thing." It just ended up not being commercially viable for these OEMs, especially if a device didn't sell well; they really didn't want to put money towards keeping a device that nobody bought updated. But this year, at Google I/O 2018, they actually announced that they're going to force OEMs to maintain updates by writing the policy into their contracts. So fingers crossed that actually works.

And then updating dependencies: it's time consuming, it's cumbersome, it's not fun.

We've also got the issue that open source doesn't always mean secure. Open source means you have a lot of vendors, and vendors can introduce vulnerabilities in whatever it is they're building. That can lead to bugs, and that can be a playground for hackers. This year at Google I/O we also saw them announcing how amazing it is that Android is open source, and how that makes them so much more secure than anybody else because they've got all these manufacturers looking at their devices. And yeah, to an extent that's true; it's kind of why established cryptography algorithms are better than rolling your own, because you've got a lot of people looking for potential vulnerabilities and helping to fix them. But again, you can have a lot of vulnerabilities introduced by manufacturers who maybe don't have the best security teams. You never know.

And then public disclosures. Disclosures are actually super rare. There was a mailing list started in 2008, the Android Security Announce mailing list, that for the longest time had a single post. And tracking issues through changelogs and issue trackers is really time consuming, so a lot of people don't do it. It kind of makes sense: it's not necessarily in Google's best interest to tell you about all the vulnerabilities that have been found in their partners' devices. Not only because, if the OEM doesn't patch in time, you've suddenly got a vulnerability sitting out there for everybody to read about, but also because it's just not a great partnership move for Google. So I kind of get why they do it.

All right, so let's talk about common attack surfaces. This is going to go quickly. You've got permissions, broadcast receivers, IPCs, storage, intents, WebViews, content providers, logging, data validation, obfuscation (or lack thereof), and tampering. There's a lot.
And these are actually only a small subset of the most common ones; I'm not even really talking about hardware stuff. So we've got a lot of different things we can look at. How do we find and fix these vulnerabilities?

We're going to start with application permissions. (I don't know why that slide is so low.) Developers don't always restrict the permissions they grant. A lot of the time it's like that GIF: you get a permission, and you get a permission, everybody gets a permission. And, you know, it's sort of understandable. You're trying to meet a deadline, something's not working, and you think, "well, just open things up." But what that means is that malicious apps can actually make use of those permissions to potentially exploit your application. And most users don't read things, right? You have users who don't really think about security; they get a popup asking for 15 permissions and they just tap OK, because they want that tiny bird app or whatever it was called.

So what's really important for us, as security auditors, developers, or anybody reviewing these applications, is to err on the side of caution all the time and only ask for the permissions you absolutely need. You can trust your app; you can know that you're building something great that will help people. But you don't necessarily know what those people have installed on their device, or whether it's rooted, or whether it's riddled with malware.

So the principle of least privilege is really important. It states that in a particular abstraction layer of a computing environment, every module (a process, a user, a program, whatever, depending on the subject) must be able to access only the information and resources that it absolutely needs, and nothing else. For example, if you only need read permissions, just request read permissions; don't request read and write, because you don't need both.

It's also important to define permissions with signature protection, which basically means that no other application can request or use those permissions unless it's signed with your key. This is difficult if you're interacting with third-party applications or other things on the phone, but if your app is really self-contained, it's a great way to make sure no one can do anything sketchy with the permissions you define. And, yeah, just make sure the permissions you request are really necessary. There's a fun Drozer command, "run app.package.info -a <package name>", that will show you all the permissions held by the application you're auditing.

File permissions: same kind of situation. This is really only relevant to files stored externally. We're going to talk about insecure storage in a little bit, but typically all of an application's data is stored within a protected directory that you can't access unless you have the permissions of the application storing that information. But if you're storing your files externally, then it's kind of a free-for-all, because malware, for example, loves to look at SD cards; a lot of the time the stuff in there isn't encrypted, and we're going to see some fun examples of that.
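Going back to the signature-protection point for a second, a minimal manifest sketch of that idea might look like the following. The permission and provider names are made up for illustration:

    <!-- In the <manifest> element: a permission that only apps signed with the same key can be granted -->
    <permission
        android:name="com.example.myapp.permission.READ_ORDERS"
        android:protectionLevel="signature" />

    <!-- In the <application> element: a provider that is not exported and requires that permission -->
    <provider
        android:name=".OrdersProvider"
        android:authorities="com.example.myapp.orders"
        android:exported="false"
        android:permission="com.example.myapp.permission.READ_ORDERS" />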
You can actually define file permissions in your Android manifest and in the code, so if you're auditing an application, look at both places and make sure someone isn't making it look locked down in one place and then, somewhere else, going "just kidding." It's really important not to use MODE_WORLD_READABLE or MODE_WORLD_WRITEABLE if you can help it, because they allow other applications to access that file, and you typically just don't need that. If you use a content provider, that's the safest way to share files between applications, rather than external storage where, as I said, everyone can access it.

So, IPCs. IPCs, as we've mentioned, are inter-process communications, and the endpoints for these aren't always secured. A lot of the time they're left wide open, with a lack of permissions, and because they're data sinks and sources, there are a lot of great ways you can either inject data or read it. Broadcast messages allow any application to read a broadcasted intent, and since sending broadcast intents allows other applications to receive and process the data, you have to be really careful with them.

The abuse case for these IPCs depends on their purpose. Protection is typically achieved through app permissions, and as I said, I'm basically going to be driving home "add permissions to all your things" today. For example, an application may define an IPC endpoint that should be accessible only by other components of that application, or by applications that request the permission required to interact with it, but that's not always how it's done. Sometimes you have new developers who just don't know, or you have those hybrid app builders; some of them are great, but some of them also do really scary things, like storing private keys in the application. We'll see that in a second.

Content providers, as we've talked about, are for data storage, and they can expose access to data or to directory traversal; they're great for SQL injection, and if they're not protected by permissions, literally any application can invoke them. And in certain cases, as we'll see, even if permissions are in place, a rooted user can still access them. So they're a fun place to look for vulnerabilities.

Activities can be taken over and used in UI-redressing attacks; another name for UI redressing is clickjacking. For example, a visitor on a site thinks they're clicking a button to close a window, and instead of that X button doing what they expect, it downloads some sort of virus or turns on their webcam or something like that.

Broadcast receivers can be hijacked to intercept intents and take the data. You can also send null values to broadcast receivers, and that can DoS your application; a lot of people don't realize that's possible. So input sanitization is really important.

Services can expose application-specific functionality, so they're also important to think about.

Here's a fun real-world example. I don't know if anybody remembers the Samsung Kies app on the Galaxy S3, but it was super privileged. It connected your mobile phone to your PC, and it had a broadcast receiver that restored APKs from an SD card.
The TL;DR is that a researcher was able to make use of an exploit in the clipboard service on the S3, plus a WRITE_EXTERNAL_STORAGE permission issue, to copy their malicious (well, proof-of-concept malicious) application to the SD card. And there was a call chain in the Kies application that would essentially go through every application stored in that directory on the SD card and install the APK with the same permissions that Kies had. That's terrifying.

So how do you guard against this? We've sort of talked about this already, but sharing files using content providers and avoiding external storage like SD cards wherever you can is really important. Android versions before 4.2 actually export content providers by default, which means anything can access them. So if you're supporting those older versions of Android, make sure you explicitly say that you don't want your content providers exported. Drozer has a handy little command that shows you all of the exported content providers for the application you're auditing, and actually, even content providers that aren't exported can still be accessed by privileged users; you can see the non-exported ones with that command too. And finally, exported activities also require no permissions to interact with them, and there's another fun command to show you all the exported activities.

So when you're using content providers, really make sure you're using permissions. I know by the end of this you're going to be telling me to shut up about permissions, but they're really important. We've talked about being able to DoS broadcast receivers by passing null values, so sanitizing inputs is really important for basically everything, but especially for content providers. Because a lot of developers are using SQLite, you can do a fair amount of damage with SQL injection attacks. So use prepared statements; happy developers use prepared statements, and that will make sure injection really isn't possible. Drozer again has a fun little command that shows you all of the possible SQL injection vulnerabilities in your application. That's fun to run.

And then explicit intents. Those are really important, because you're specifying exactly which application you want to receive the intent. If you leave it open and define an implicit intent, you could potentially have something like malware saying, "I have all of the appropriate settings to receive this intent," and maybe doing something scary with the data being passed through it. So use explicit intents if you can.

Custom permissions are useful with services, because they can be checked by the service when an external caller makes a request. And use the LocalBroadcastManager for local intents, because no other application can access that data; it's a really good way to get around the whole application-data-sharing issue. IPC sniffing is something you have to be aware of with sendBroadcast and sendStickyBroadcast. Again, sorry, but: permissions. Send broadcasts protected with a permission so that a random app can't receive that intent instead. And then, yeah, check your data. Always check your data. So: insecure storage.
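One quick illustration before the storage discussion: a small hedged sketch of two of the points above, a parameterized provider query and a permission-protected broadcast. The permission name, URI, and actions are all made up, and the LocalBroadcastManager import is the support-library class that was current at the time:

    import android.content.Context;
    import android.content.Intent;
    import android.database.Cursor;
    import android.net.Uri;
    import android.support.v4.content.LocalBroadcastManager;

    public class IpcHardeningExamples {
        // hypothetical permission and content URI
        static final String RECEIVE_ORDERS = "com.example.myapp.permission.RECEIVE_ORDERS";
        static final Uri ORDERS_URI = Uri.parse("content://com.example.myapp.orders/orders");

        // Parameterized query: user input goes into selectionArgs, never concatenated into the selection string.
        static Cursor findOrder(Context context, String userSuppliedId) {
            return context.getContentResolver().query(
                    ORDERS_URI,
                    null,                          // projection: all columns
                    "order_id = ?",                // selection with a placeholder
                    new String[]{userSuppliedId},  // bound argument, not string-built SQL
                    null);                         // sort order
        }

        // Broadcast that only apps holding our permission can receive.
        static void announceSync(Context context) {
            context.sendBroadcast(new Intent("com.example.myapp.ORDERS_SYNCED"), RECEIVE_ORDERS);
        }

        // Broadcast that never leaves the process at all.
        static void announceLocally(Context context) {
            LocalBroadcastManager.getInstance(context)
                    .sendBroadcast(new Intent("com.example.myapp.ORDERS_SYNCED"));
        }
    }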
For those of you who haven't reversed an Android app before: they're super easy to get into. APKs are basically just zip files, and as we saw with some of those tools, you can get one back to pretty much its original source using free tools from the internet. So don't ever trust the client, and try not to store things locally. Sensitive data shouldn't be stored in the /data/data/<package> directory (that's only accessible to the application, unless the device is rooted, of course), and it definitely shouldn't be on the SD card, which, as I said, is accessible to everyone and should be avoided if at all possible. You can also dump process information to access sensitive data, and that's totally possible on rooted devices. So if someone gets hold of somebody else's phone, it's possible for them to access sensitive information that way.

And don't ever embed encryption keys in your apps. I think as security researchers we're exposed to a lot of this, because we tend to read a lot of terrifying blog posts and reports and news stories, but sometimes developers don't realize that their app is totally exploitable just by having the APK. Someone actually wrote a fun script that iterated through the classes.dex file of a whole bunch of Android applications, looking at every single string contained in that file, and tested every string against the app's encrypted database to see if it was the encryption key. And it totally worked for a lot of applications. So: no encryption keys or other sensitive things in source code or local storage. Put them somewhere else.

WebViews also allow HTML data to be cached locally, which is something to keep in mind. And you might be thinking, "well, if someone got hold of my phone, full-disk encryption is a thing." Yes, that's true, but there have been a decent number of proofs of concept showing that full-disk encryption can be broken. Qualcomm has been one of the few manufacturers to actually publish about this, explaining how it happened, how to prevent it, and posting a fix. But not everybody is as great as Qualcomm about that. Oh, and sometimes people back up SQLite databases to SD cards, which is a really bad idea and basically leaves plaintext databases sitting out in the open.

Okay, so here are a couple of real-world examples. They're a little older, but the same principles still apply. Skype, in 2011, created SQLite databases and XML files with world-readable and world-writable permissions. Everything was unencrypted, and it wasn't just a database full of your own user information; it covered everybody you interacted with through the application, and it also included config data and message logs. So that sucks. That was reported by Justin Case. And then WhatsApp in 2014, same kind of deal: they stored a database backup on the SD card. In newer versions of Android you can't read external storage by default; you have to request the permission. But again, as we've talked about, users tend to be confirm-happy, and if a malicious app says, "oh, I need to read your external storage," they just click OK, and all of a sudden the malicious app has all of your WhatsApp data.

So how do we prevent this? It's important to look for code that stores data locally. As we've talked about, everyone can get at everything that's stored in your APK.
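If you want to do that kind of hunting yourself, a crude version looks something like the following. The apktool "d" (decode) command is standard; the grep pattern is just a starting point I've made up, not an exhaustive check:

    # decompile the APK, then grep the output for likely secrets
    apktool d target.apk -o target-src
    grep -rinE "password|secret|api[_-]?key|BEGIN (RSA )?PRIVATE KEY" target-src/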
So really make sure there's no sensitive data stored there. When you absolutely have to store something on the client, make sure it's encrypted, and make sure you're using strong algorithms. I've had developers ask if they could use MD5 to "encrypt" their credentials, and the answer is no. Make sure something like bcrypt is being used for passwords, and for other data don't use MD5 or SHA-1; use something like AES or RSA for encryption, or SHA-256 if you need a hash.

If you're using WebViews, look at the clearCache method, which does what it says and clears your cache, or you can prevent caching altogether, and that keeps things a little safer. And re-initializing the Application class with dummy values is a really handy way to prevent that data from being read by anybody else, because the Application class doesn't actually get rid of its data once the app closes; it's pretty persistent. So if you re-initialize it with dummy information, other applications can't take advantage of it.

Insecure communications. This is something I think a lot of developers don't necessarily think about: the fact that you can intercept web traffic, even on mobile. And it's a really important place to look for vulnerabilities as a researcher, as an auditor, or wherever you might be looking. Burp Suite is what we use a lot. It's a really great free tool, and it's really easy to set up for intercepting traffic. You can set up a proxy for an emulator as well, so you don't have to worry about having a specific device: you just set the APN proxy to 10.0.2.2 and the port to whatever your Burp listener port is.

Something new that came out of Google I/O 2018 is that starting with Android P, apps will enforce TLS by default. That's really amazing, because one of the issues we see with insecure communications is people sending requests in the clear, plain-text requests over HTTP, which are really easy to intercept and do nasty things with. So this is great. The only issue is that it only applies to new apps targeting Android P, so if you have older apps, there's no TLS by default. And the other thing is that you can opt out of it if you're accessing a legacy domain. So it's definitely great, and hopefully we'll see more application developers moving in that direction, but it's not totally foolproof for us just yet. The TL;DR: never send requests in the clear. Use some sort of encryption, because people can intercept them and do scary things.

WebViews. We're going to see an example of intercepting web traffic and doing something bad with it, and an example with WebViews, but WebViews are a really great place to look for vulnerabilities, primarily because they load web pages within your application, and in a lot of cases the content you're loading has all the same permissions as your application. You can run JavaScript, and through a bridge you can effectively reach back into Java from your WebView, essentially breaking out of the sandbox and breaking that whole same-origin policy.
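Here's a minimal sketch of the kind of bridge being described. The Bridge class, the token, and the URL are made up, but the general point stands: anything reachable from a @JavascriptInterface method is reachable from whatever page ends up loaded in that WebView.

    import android.app.Activity;
    import android.os.Bundle;
    import android.webkit.JavascriptInterface;
    import android.webkit.WebView;

    public class BridgeDemoActivity extends Activity {

        // Hypothetical bridge object; on API 17+ only @JavascriptInterface methods are exposed to page JS.
        static class Bridge {
            @JavascriptInterface
            public String getUserToken() {
                return "secret-token"; // anything returned here is readable by the loaded page's JavaScript
            }
        }

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            WebView webView = new WebView(this);
            webView.getSettings().setJavaScriptEnabled(true);
            webView.addJavascriptInterface(new Bridge(), "Android"); // page JS can call Android.getUserToken()
            webView.loadUrl("http://example.com"); // plain HTTP: a man in the middle can swap in their own JS
            setContentView(webView);
        }
    }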
Developers are really prone to disabling the code that checks for SSL errors in a WebView, because, you know, you've got deadlines and you just want to get rid of all the errors and get things working. But that means there's no check for SSL errors, and a lot of scary things can happen. So it's really important to keep that error checking in place and to ensure that nothing is being sent in the clear, as we've already talked about. You can totally load malicious JavaScript from a potentially malicious web page loaded within your WebView, and while it can't really break out of the application to do terrifying operating-system things like running arbitrary commands, it can still wreck the security of your application and access any data your application can access. That's something to keep in mind, and you see this a lot in apps that have advertisements, you know, with a little flashing thing at the bottom telling you to click on it. If you click on that, you're still within the WebView. So it's important to remember, not only as an auditor or a security researcher or a developer, but even as a user, that unless you really trust your application, be careful what you click on in a WebView.

So how often does this happen? Again, a little dated, but Stanford did a study in 2013. They looked at 40,000 apps that they determined could use this JavaScript bridge, which essentially means being able to run JavaScript and have it call back into the application, and they found that a third of those applications could be reached by untrusted content, which is a decent amount, quite frankly. And the problem with clear-text content going over this JavaScript bridge is that if you're sending something through the bridge in the clear, an attacker can intercept the traffic, modify the responses, and send them back, and all of a sudden you've got a scary payload that might not do the things you want it to do.

So here's a fun example that's actually quite recent, from January of this year. A report was publicly disclosed about remote code execution in the Tinycards for Duolingo app on Android. Tinycards is basically a flash-card app: if you're practicing a language, you can create little flash cards that help you remember words. What essentially happened was that Tinycards was loading a WebView, and in the WebView it loaded a website over HTTP first, which then transitioned to HTTPS. So an attacker could intercept the traffic while it was still HTTP, do a man-in-the-middle attack, and send whatever content they wanted back to the application. Thankfully this was disclosed publicly and fixed, and everything's good. But it's a really great illustration: Tinycards was using HTTPS, but there was still that transition period that could be taken advantage of.

So how do we protect against this? You can restrict users of the WebViews in your application to the application's own domain, which just means they can't break away from whatever you've set and wander off to other scary websites. There's a method called setJavaScriptEnabled, and it does exactly what it sounds like: it allows the WebView to run JavaScript.
But it's a good idea not to enable that until you absolutely need it. For example, Facebook keeps JavaScript disabled when you open a link within Messenger, and similarly, at Shopify we do that when you're in the Shopify app viewing third-party app store pages, because those can have links that take you to scary places. As of API 17, any method exposed to JavaScript requires the @JavascriptInterface annotation, which prevents malicious code from reaching operating-system-level functionality, which is great. You can also create a whitelist of domains so the user can only access those domains; that's a really good way to still let people reach all the websites you want them to, while keeping it contained. And then, of course, send that traffic over SSL.

Logging. Logging is great for debugging, as I'm sure a lot of us know, but it's also great for hackers, because people really like writing stuff to logs. I started as a software engineer and moved into security, and I used to love logs. A lot of applications log very sensitive information. For example, let's say you're passing credentials between different activities and you want to make sure things are being passed properly, or that the information being collected is accurate. A lot of developers will put that in a log, and it's not necessarily encrypted and it's not necessarily removed. So logs are a great place to look for potential data leakage. The READ_LOGS permission was removed for third-party applications in Android 4.1, but on a rooted device you can still access the logs, and earlier versions can still access them too, so that's not exactly perfect. Logcat and "adb logcat" are really handy: you can pass in different tags to display only certain logs, and you can run them against your test device or emulator. You have to run them as root, but you can basically see everything. Even system processes like the Activity Manager log very detailed messages about what they're doing. So logs are a really great place to look, if nothing else, for things that maybe shouldn't be there. Firefox in 2012 logged browsing activity, including plain-text URLs and session IDs of their users, so you could basically use it to hijack someone's session.

And then obfuscation. This is one of the last things we're going to talk about, but as we've said multiple times, it's really easy to reverse engineer Android apps, and you can make it harder for people by making sure your code is obfuscated. There's sort of a trade-off here, because obfuscating code in certain ways can make the application really tricky to maintain later on. If you're doing something like lexical obfuscation, which is what ProGuard does, essentially switching out meaningful names for computer garble, that's not really difficult to maintain. But if you're doing crazy stuff like Java reflection and calling methods through indirections that don't make sense, then your developers can hate you and you can hate yourself later in life. So it is a trade-off: it can help keep your code more private, but it can also make your life really difficult.
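For reference, turning on ProGuard's shrinking and lexical obfuscation is usually just a build-type flag. This is the Gradle syntax from around this era, so double-check it against whatever Android Gradle plugin version you're actually on:

    // app/build.gradle
    android {
        buildTypes {
            release {
                minifyEnabled true   // enables ProGuard shrinking and name obfuscation for release builds
                proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
            }
        }
    }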
People have talked about using native code as well, as a good way to defend against reverse engineers, because native code essentially has to be reversed as though you're looking at C; you end up looking at assembly-level stuff, which is less fun. But it also makes you more susceptible to low-level vulnerabilities like buffer overflows and string-handling issues, so just be wary of that.

Okay, this is my favorite thing to talk about: private keys. Private keys are really important, right? They're used for signing apps, they can be used for HTTPS traffic, and they end up in the zipped APK all the time; we're actually going to see some real numbers on this. People also use the Java keystore for storing their private keys, because it's good for public keys, private keys, certificates, all that fun stuff. But the password protection is actually optional, there's no container-level encryption, and the private keys stored in your Java keystore share the same password as the keystore container by default. And the problem is that people really suck at coming up with passwords. So don't assume that's a foolproof way to prevent people from finding your private keys, and this is why.

In June 2017, an IT security journalist found a private key in a Cisco app, and Will Dormann gave this awesome talk at BSides San Francisco this year about a study he did analyzing applications in the Google Play Store for exposed private keys. He looked at 1.7 million or so APKs and found a whole bunch of private keys stored in them. You've got PKCS#12, which is one of the formats we're going to look at: it's used to store private keys along with their accompanying public key certificates, protected with a password-based symmetric key. He checked those passwords against the rockyou.txt password list, which is essentially a list of the most popular passwords people use, and 41.4% of them were broken, or decrypted, by it. There were strings in the app code that acted as passwords, and manual analysis, just trying things out, worked too. Java keystore password cracking was not as effective, but a lot of people were still using those rockyou passwords. I actually love this talk so much; I definitely recommend you go watch it on YouTube. It's kind of terrifying, but also a lot of fun.

So how do we guard against this? Don't store your private keys in the app. That's the takeaway. Google offers cloud storage for secrets and sensitive information, and that's a good way to go. If you don't trust Google (a "don't trust the man" kind of situation), then keep your private key somewhere else, just not in your application. And then this year, again at Google I/O, they announced StrongBox. It's kind of like a keystore, like the Java keystore but stronger, and it's resistant to shared-resource attacks, side-channel attacks, and physical attacks. But again, it's only actually useful for devices that ship with Android P.

Now, finally, tamper detection. As we've talked about, attackers can download an APK, modify it, reverse it, all that fun stuff. They can also re-sign it. The certificate hash would change, so it would be obvious it wasn't the same developer, unless you include your private key in the app, because then they can make it look like they're you and trick people into downloading their version of your application.
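If you want to see what certificate an APK is actually signed with, the SDK build tools ship a verifier; something like this prints the signing certificate digests:

    apksigner verify --print-certs target.apk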
You can add signature checks to your code, but you have to be really sneaky about it. The Google Play Store actually has a decent number of checks in place to make sure things are signed correctly, but if the check is written into your code, a determined attacker can just take it out, recompile or repackage the app, and upload it again. So that's just not the best.

So what do you do? Avoid purely client-side checks. Google's SafetyNet has some nifty tamper-detection features, but it's been proven to be not totally bulletproof: it can detect whether a device is rooted to some extent, and whether a device has malware to some extent, but there are definitely workarounds now, with people showing rooted devices that have circumvented the SafetyNet checks. Android P will have a keystore attestation API, which is a signed statement from the secure hardware on these new devices saying that the device hasn't been tampered with, but again, that's only useful on Android P. And then you can make system calls to check whether your application is being accessed through ADB, the Android Debug Bridge, or whether it's running on an emulator. That's a great way to make sure nothing scary is going on. And SafetyNet also enables server-side checking for application tampering, which is a lot safer than putting all of the actual checking logic in your application.

So what do I tell developers? Be paranoid, like, all the time, kind of. But actually, be paranoid all the time, because pretty much everything is hackable. Follow basic hygiene rules: never trust the client, and follow the principle of least privilege. There's a security linter in Android Studio, which is very handy; it flags basic security mistakes you shouldn't make and notifies you about them. Never store your private key in the app: if there's one thing you take away from this other than permissions, it's that. Be as explicit as possible about your app's intentions. And yeah, never trust the client, ever.

If you want to keep learning, these are some really great resources. There are the developer.android.com pages, with security overviews and best practices and all that kind of stuff. OWASP has a mobile testing guide that's great to keep on hand for auditing; they have an e-book version of it as well. It essentially used to be the Android security testing cheat sheet, but they've put it into an e-book and it's great. And then books, videos, and courses. These are some of my favorites. If I had to recommend any two books, it would probably be the Android Hacker's Handbook and the Mobile Application Hacker's Handbook. Some of the others deal a fair bit with low-level hardware stuff that's great to understand, but not necessarily as useful from an application developer's standpoint. And there are some great courses and certifications and all that kind of fun stuff if you're looking at learning more about this from an education standpoint. Lynda.com, which I guess is now called LinkedIn Learning, has some great mobile security classes. All these slides are going to be up on my blog, which you can see down there, and feel free to reach out over social media if you have any questions or want any other resources beyond what I've listed here. And that's all, thank you. Thank you.