Thanks everyone. My name is Ryan Johnson, and today I'm going to be giving a talk called Still Vulnerable Out of the Box: Revisiting the Security of Prepaid Android Carrier Devices. I collaborated with Angelos Stavrou and Mohamed Elsabagh on this research. So today, the agenda: we're going to start off talking about Android, then look at some of the preloaded software, then we'll move on to applications and their application components. After that we'll move to application IPC, then we'll get to the core of the talk, which is vulnerabilities in prepaid Android carrier devices. After that we'll move to some ZTE-specific vulnerabilities, and then we'll zoom out a little bit, look at some of the vulnerability root causes, move on to how to secure the software, and wrap it up with some conclusions. This talk is somewhat of a continuation of the talk we gave at DEF CON 26 in 2018, essentially on the same topic, looking at vulnerabilities in prepaid Android carrier devices, so five years later, just showing that the issue still persists. We all work for Quokka, which was previously known as Kryptowire and was jump-started in 2011 on a DARPA project; we've also worked with DHS S&T and NIST. At Quokka we do software assurance, developer integration, personal device management, firmware testing, threat feeds, and security analytics. So, prepaid smartphones: these are smartphones that you generally pay for in full at the time of purchase, along with a prepaid, fixed-term service contract, usually one to three months. These smartphones are typically on the low end, so they could be anywhere from $50 up to maybe $300 or $400. The major American carriers have a presence in the prepaid smartphone market; you can see the picture there from Walmart showing T-Mobile, AT&T, as well as Verizon.
In the prepaid market, Verizon has a large presence, since they own Total by Verizon (which used to be called Total Wireless), and they also own TracFone as well as Straight Talk. Android is developed and maintained by Google, and they make the source code available through the Android Open Source Project, also known as AOSP. Android is an open ecosystem where vendors can take a version of Android, fork it, and then modify it to provide extra hardware and software features, which gives them differentiation in the market and a competitive advantage. This is good, since it gives you something beyond plain vanilla Android, but it also requires some scrutiny from a security perspective. Some of the major Android vendors have their own security bulletins, so if there's a vulnerability in their own code which is not present in AOSP, they will list it on their security bulletin. Android is the most popular mobile OS in the world at 70.8% market share, and it has 42% in America. When you have an Android device, various entities provide software: there's AOSP, which is at the core of it, from Google; there's the chipset manufacturer and the hardware manufacturer, which add some hardware components as well as some software components; if it's a carrier device, the carrier usually puts on some software as well; there's the vendor, which does the customization; and then there are also vendor partners, which is anyone from Netflix to Facebook to some app you've never heard of that doesn't have a launcher icon. So, pre-installed software: this is necessary to make your device more functional than just a brick. Some of the software is necessary, and some of it is value-added, where the vendor thinks you will find it useful.
Pre-installed software is more trusted and has more privileges than third-party applications because it's specifically curated by the device vendor or OEM. Pre-installed applications get special access to permissions, they can run with special UIDs, and they can run system services. On Android, processes are bound by an SELinux domain as well as the rules that apply to that domain. A lot of this pre-installed software may contain insecure interfaces which third-party apps that you download can interact with, and in Android the main IPC mechanism is called an intent; you can think of this as a message with a destination as well as some embedded data. When you install Android applications, they get their own user ID as well as group ID, which are assigned at installation time, and this helps to provide a sandbox for the application's files and resources. In Android, applications are identified using a package name, which is generally in reverse domain notation, and applications get access to data and capabilities using permissions: an application declares what permissions it needs, and there are different levels of permissions. Normal permissions are granted to an application upon installation, while dangerous permissions, at least for third-party applications, require prompting the user with an accept-or-reject dialog. A user can also download an app and assign it a specific role: maybe you download an app and you want it to be the default VPN, or the default messenger, or a notification listener; this is again done through the GUI. Android apps are not monolithic; you can decompose them into individual units called application components. These are: an activity, which is essentially a user interface that the user can interact with; a service, for long-running tasks; a broadcast receiver, which registers for events, somewhat like an event listener; and a content provider,
which provides access to structured data. Application components can be started independently, they can run concurrently, and they perform dedicated tasks. All apps in Android have a manifest file, which is somewhat like a specification: it has the package name, version information, permissions the app requests, a listing of application components, and any hardware or software requirements. For an external app to interact with an application component in another app, this is dictated by three attributes in the manifest declaration of the app component: android:exported needs to be true, android:enabled can't be false, and an application component can set a permission, which is like an access permission, so for an external app to interact with it, to send it a message, it needs to possess that permission. There are various IPC mechanisms, used either directly or indirectly, and many of these are provided by Binder in Android; the ones we'll talk about today are intents, bound services, and network sockets. The threat model is a local application that has either zero or one normal-level permission, where a normal-level permission is granted upon installation, so the application, based on that zero or one permission, appears limited and constrained. We were not requiring any user interaction beyond installing and running a third-party application once. Once this application is run, it turns to its environment and interacts with insecure interfaces of co-located software on the device, generally through intents, bound services, and sockets. So, on to the carrier devices: we looked at 21 Android carrier devices, grouped by carrier and showing the vendor as well as the model. You can see TracFone is pretty well represented, because they are a pretty big player in the prepaid market.
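As an aside, the three manifest attributes just described can be checked mechanically when auditing an app. Below is a minimal Python sketch of that check; the manifest snippet and component names are invented for illustration, and real tooling would also need to handle the binary AndroidManifest.xml format and the implicit-export rules for components with intent filters.

```python
# Sketch: flag application components reachable by third-party apps,
# per the three attributes discussed above (exported, enabled, permission).
import xml.etree.ElementTree as ET

ANDROID = "http://schemas.android.com/apk/res/android"

# Hypothetical manifest fragment for illustration only.
MANIFEST = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <application>
    <receiver android:name=".FactoryResetReceiver" android:exported="true"/>
    <service android:name=".DebugService" android:exported="true"
             android:permission="com.example.PRIVILEGED"/>
    <activity android:name=".MainActivity" android:exported="false"/>
  </application>
</manifest>"""

def exposed_components(manifest_xml):
    """Return names of components an external app can reach: exported,
    not disabled, and not guarded by an access permission."""
    root = ET.fromstring(manifest_xml)
    out = []
    for tag in ("activity", "service", "receiver", "provider"):
        for comp in root.iter(tag):
            exported = comp.get(f"{{{ANDROID}}}exported") == "true"
            enabled = comp.get(f"{{{ANDROID}}}enabled") != "false"
            guarded = comp.get(f"{{{ANDROID}}}permission") is not None
            if exported and enabled and not guarded:
                out.append(comp.get(f"{{{ANDROID}}}name"))
    return out

print(exposed_components(MANIFEST))  # ['.FactoryResetReceiver']
```

Here only the receiver is flagged: the service is exported but guarded by a permission, and the activity is not exported at all.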
And here is a summary of the vulnerabilities in these devices: 86% of them leak non-resettable device identifiers to system properties; 81% of them leak the GPS coordinates to a loopback port, like a debug port, on TCP port 7000; 9% of the devices expose arbitrary file read and write as the system UID. There are also 24% of them that expose a programmatic factory reset, which essentially wipes the user's apps, the app data, as well as the settings, so if you have any data that isn't backed up or synced externally, your data is gone forever, and the user can experience data loss. There's also arbitrary AT command execution; an AT command is a command that's sent to the baseband processor and executed by the modem, and 24% of the devices were impacted. The most severe is arbitrary command execution as the system UID, and 9% of the devices were impacted. I mentioned earlier that this talk is a continuation of the talk we did at DEF CON 26, so here is a listing of those devices, with the carrier and the model, and then the vulnerabilities that were present in each. Non-resettable device identifiers are data items like the IMEI, the Wi-Fi MAC address, the Bluetooth MAC address, the serial number, and the ICCID. Back in Android 9 there was the READ_PHONE_STATE permission, which an application could request, and if the user granted it, then the application could get these non-resettable device identifiers. In Android 10 Google made some changes, so the required permission became READ_PRIVILEGED_PHONE_STATE instead of READ_PHONE_STATE, which made it so that third-party applications could no longer access these device identifiers. Pre-installed apps, however, can get the READ_PRIVILEGED_PHONE_STATE permission, access the device identifiers, and then leak them to a location which is accessible to a zero-permission third-party application, and a common leakage location we noticed was system properties.
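For context on what these identifiers look like: a 15-digit IMEI ends in a Luhn check digit, so a quick checksum test can help distinguish a genuine IMEI from an arbitrary number when triaging leaked values. A small sketch, using a widely circulated documentation example value rather than a real device identifier:

```python
def luhn_valid(digits: str) -> bool:
    """Luhn checksum as used by the 15-digit IMEI: from the right, double
    every second digit, subtract 9 from doubles above 9, sum, mod 10."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Widely used documentation example IMEI, not a real device identifier.
print(luhn_valid("490154203237518"))  # True
print(luhn_valid("490154203237519"))  # False
```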
So system properties: this is a system-wide repository of key-value pairs. You can see getprop being executed with a snippet of the output, and the getprop command can be executed by any application without requiring any permissions. SELinux can actually restrict access to certain system property keys, but not the ones we're going to talk about; those are accessible by a third-party app with zero permissions. The system properties contain version information about the build, version information about the chipset, the status of init services, and much more. Third-party applications cannot write to system properties, which is enforced by SELinux. Here are the system property key names, the non-resettable device ID that each leaks, and the devices that are impacted. As you can see, the IMEI is fairly common, as well as the Bluetooth or Wi-Fi MAC address and the serial number. Based on the previous table, there's a small code snippet executing the getprop command and then grepping on all of those patterns; it reads from the command output and writes it to the system log. As you can see, this is being run on the TracFone TCL A3X, where a zero-permission app can get the device IMEI, the ICCID, as well as the Bluetooth MAC address. Next: certain Android devices contain a system binary from MediaTek called mnld, and the vulnerability here is that whenever the GPS module is active, mnld will open a debug port on the loopback interface, TCP port 7000, that doesn't perform any authentication or authorization; for any client that connects, it just starts emitting the GPS coordinates. The attack requirement here is just a local application that has been granted the INTERNET permission.
You can see the impacted devices as well as the path; this system binary executes with the GPS user ID, and you can see the CVE there. This appeared in MediaTek's May 2023 security bulletin, and they've provided the affected chipsets. We've noticed different behavior among major Android versions for mnld: on Android 11, 12, and 13, mnld will bind to the loopback interface on port 7000 (which in hex is 0x1B58), while on Android 9 and below it will bind to any IP address, and we observed this going down to an Android 4.4.2 device, which is Android KitKat, which is quite old. At the bottom is just a source code snippet to connect to port 7000 on the loopback interface, read from the socket, and then write the output to the system log. Here is some output from connecting to the debug port. It emits NMEA sentences; we've redacted the latitude and longitude from the GPS coordinates here, and we've listed the NMEA sentences that contain the device's GPS coordinates. There are two different attack models you could use. There's a passive approach, where an application with the INTERNET permission just periodically polls port 7000 in the background to see when it's open, and whenever it's open it can get the device's GPS coordinates and record this longitudinally. There's also an active approach, where a malicious application can start Google Maps or a similar type of application; if you've used Google Maps and given it access to your location, then as soon as its launcher activity starts, it will immediately activate the GPS module, which opens TCP port 7000, which the malicious application can then connect to in the background to get the device's GPS coordinates. We also searched for the GNGGA NMEA sentence on Shodan, and there are 930 internet-facing devices emitting that NMEA string, 913 of them on port 7000. Next we're going to talk about the MMI group app.
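Before moving on, a note on the GGA output mentioned above: it encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm, so converting it to decimal degrees is straightforward. A sketch, using the classic textbook example sentence rather than any captured fix:

```python
def parse_gngga(sentence):
    """Convert a GGA sentence's ddmm.mmmm latitude and dddmm.mmmm
    longitude fields into signed decimal degrees."""
    f = sentence.split(",")
    if not f[0].endswith("GGA") or not f[2]:
        return None  # not a GGA sentence, or no fix yet
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return round(lat, 4), round(lon, 4)

# Classic example GGA sentence from NMEA documentation, not a captured fix.
print(parse_gngga("$GNGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,"))
# (48.1173, 11.5167)
```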
This is a pre-installed app that exposes various capabilities, which I will talk about on the next slide. The attack requirement is just a local app, with no permissions necessary. You can see the impacted devices listed. The package name is com.factory.mmigroup. On all the devices we looked at, it always had a version name of 2.1 and a version code of 3, it executes as the system UID, and the reserved CVE ID is shown on the slide. These are the capabilities that it exposes, by device: there's a factory reset, which as I mentioned earlier is a data wipe; it also leaks the device IMEI and the device serial number, exposes arbitrary AT command execution, and can enable wireless adapters. The workflow is: the local malicious app sends an intent to the MMI group app; the MMI group app starts and dynamically registers for various action strings; the malicious app then sends a broadcast intent using one of those action strings, with the embedded data that the MMI group app is expecting; the MMI group app then either queries some data or performs an action and writes the result to the persist.sys.ata.adb.result system property; and the malicious app then reads either the data or the result of executing the action from that same system property. At the bottom is just a small source code snippet that sends a broadcast intent which initiates a factory reset. The more interesting vulnerability here is arbitrary AT command injection. This application executes a set AT command, AT+POWER, where the actual parameter is controlled by an intent extra that an external app can send; the AT+POWER command is not actually supported on the devices we examined, and it's also not in the 3GPP standard. So, to inject our own command, we look at "AT+POWER=": that string is nine characters.
So we inject the backspace escape sequence (\b) nine times, or its Unicode equivalent, then put our own AT command, send that broadcast intent, and then check the result from the persist.sys.ata.adb.result system property. There's a source code snippet there where we inject nine backspace characters and then execute AT+CNUM, which returns the device's phone number, which is then read from system properties. Here's a demo on a T-Mobile REVVL 6 Pro 5G. Here's the attack app, showing that it has no permissions, and it's going to execute AT commands in the background: getting the device's phone number, getting the IMEI, getting the base station ID (which is a proxy for its location), getting the ICCID, getting the MEID, getting the serial number, calling 911, showing that 911 has been called, and then programmatically hanging up. Next we have another application that also exposes arbitrary AT command execution to co-located apps. The attack requirement is just a local app; it doesn't require any permissions. The package name is com.tracfone.tfstatus, and as you can see by the path, the file name is hiddenmenu.apk. This executes with the radio UID, which is the UID that the telephony framework runs with on Android. This is a very similar type of vulnerability, where it's executing a different set AT command whose parameter we can control from an intent extra. The only difference is that the fixed prefix is 11 characters long, so we just inject 11 backspace characters. There's also another approach where you can stack additional AT commands by turning that set AT command into a test AT command, by injecting a question mark and then adding additional AT commands separated by carriage returns. At the bottom, you can see that it's injecting 11 backspace characters, resetting the settings for the modem, and then calling 911 again. So we have another hidden menu app, and as you can see, these applications are not that hidden.
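The prefix-erasing trick described above is just string construction. A sketch of the payload an attacker would place in the intent extra, with AT+POWER and AT+CNUM as in the talk; the helper name is ours, not from the original PoC:

```python
# The vulnerable app builds "AT+POWER=<extra>", so the extra below first
# erases the nine-character fixed prefix with backspaces, then supplies
# the attacker's own command.
BS = "\b"  # backspace; its Unicode escape form is \u0008

def injection_extra(prefix: str, at_command: str) -> str:
    """Intent-extra value that erases `prefix` and runs `at_command`."""
    return BS * len(prefix) + at_command

extra = injection_extra("AT+POWER=", "AT+CNUM")  # AT+CNUM returns the phone number
print(len("AT+POWER="), extra.count(BS), extra.endswith("AT+CNUM"))  # 9 9 True
```

The same construction covers the 11-character prefix in the second app by passing that prefix instead.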
The vulnerability here is that it exposes a programmatic factory reset to co-located apps. Again, it's a data wipe, where a user could potentially encounter catastrophic data loss. The attack requirement is a local application that hasn't been granted any permissions. The impacted device is a Boost Mobile TCL 20 XE. The package name, as you can see, is com.tct.gcs.hiddenmenuproxy; the version name and version code are shown, and it runs as the system UID. At the bottom is essentially just a broadcast intent where you send a message to the factory reset receiver, and once it starts, it just programmatically initiates a factory reset. In Android, application components can set an access permission that external applications must possess to interact with them. Here there's a vulnerability where a component says "I'm protected by this permission," but if that permission isn't declared anywhere, either by that app, by any other pre-installed app, or by the Android framework, then a third-party app can declare that permission itself, set the access requirements for it, and then interact with any application components which are supposed to be protected by it. We observed this in the TCL screen recorder app, and the vulnerability is that it exposes arbitrary file read and write as a system-UID app to local applications. The attack requirement is just an application that declares the missing permission and then requests it. The impacted device is the TracFone and AT&T TCL 30 Z. The package name is com.tcl.screenrecorder, and we've provided the version names and version codes; it runs as the system UID. The workflow: the malicious app declares the missing permission, com.tct.smart.switchphone.permission.switch_data, in its manifest, and then also requests it. It then binds to the service that requires the access permission it just declared. After it binds to the service, it gets an IBinder, which it then embeds into a Messenger.
It then gets a Message from this Messenger, creates its own Messenger, sets it in the replyTo field of the Message with a what value of 1, and sends it to the screen recorder app; this is for bi-directional communication. Then, on that Messenger, it invokes function number four, which is essentially just a what value of 4. It provides the file it wants access to, which in this case is just /sdcard/defcon31.txt on external storage, as well as the mode, which is either read or write. The screen recorder app then responds via the Messenger that was provided in step three, and the malicious app receives the reply. In the bundle there's an extra named file descriptor, which contains a ParcelFileDescriptor that it can then put into a FileInputStream for reading or a FileOutputStream for writing, and perform whatever operation it wanted on the file that it indicated. Next, there's something called factory apps. These provide a centralized location for testing of hardware and software functionality. They can go by many different names: engineering apps, hidden menu apps, factory apps. These generally cannot be disabled or uninstalled by the user; you would need root access, or potentially an exploit, to remove or disable them. They run with the system UID, which makes them quite privileged, and generally there's no launcher activity, so the user is unlikely to actually use them. We talked about the MMI group app already; now we'll cover the FQC app as well as the Transsion factory app. The Evenwell FQC app exposes local arbitrary command execution to co-located applications. The attack requirement is just a local app with no permissions necessary, and the impacted devices are the Verizon Sharp ROUVO V and the TracFone BLU View 2. The package name is com.evenwell.fqc. You can see the vulnerable version names and version codes, the path, that it runs as the system UID, and the reserved CVE ID.
So this one has kind of an interesting workflow. The malicious app starts the show-barometer activity. The FQC app starts this activity, and when the activity comes into the foreground, as part of its onResume activity lifecycle callback, it sets a system property called persist.sys prevent power key to a value of on; if this activity moves into the background, as part of its onPause activity lifecycle callback, it sets the system property to off. To actually execute commands in the context of this app, that system property needs to have a value of on, and the FQC app cannot have a foreground activity, which it checks before executing a command. The way we achieve this is by crashing the app while that activity is in the foreground: we send a broadcast intent to the FQC broadcast receiver with a null action string, and at runtime the application does not check whether this action string is null; it operates on it and encounters a null pointer exception. This sets up the requirements: the system property has the desired value, and there are no activity components of the FQC app in the foreground, which the FQC service checks before it will execute a command. We then send it an intent with the command we want to execute in the "turn off heater" string extra. Executing commands in the context of a system-UID app is very powerful, and this is only a limited sample of the capabilities: you can grant arbitrary permissions to apps; you can install apps and then grant them permissions; you can perform a factory reset; you can record the device's screen from the background; you can drive the GUI by injecting arbitrary input events; and you can perform operations with AppOps. So here we have a demo. We have "bad app," which is going to use this vulnerability to programmatically install an app called "all perms" and then grant it all permissions available to a third-party application.
And this is on the TracFone BLU View 2. We go to the app info, showing that this application does not have any permissions. We run it, start that activity, crash it, and then interact with the FQC service in the background to install an app and grant it all the permissions available to a third-party app. There it is, "all perms"; going to its permissions, this shows the coarse view of the permissions it has, and then moving to the more granular view, you can see that it has quite a few permissions which could be abused to surveil the user. Thank you. As best we can tell, Evenwell Digitech is the company that created the app; they're a Taiwanese company, and they have a test app on Google Play. This app used to be on Nokia devices on Android N and O (Nougat and Oreo, 7 and 8), and then it was removed in Android 9 (Pie) due to a battery issue. Looking at honhai.com, there's a PDF there that shows the ownership of this Evenwell Digitech company: it's owned by Foxconn. Here's another application that exposes the same vulnerability, local arbitrary command execution as the system user, to co-located apps. Again, you just need a local app that hasn't been granted and doesn't request any permissions. The vulnerable device is the itel Vision 3 Turbo, which is an unlocked device. The package name is com.transsion.autotestfactory; you can see the file path (it's factory.apk), the version name, and the version code, and it runs as the system UID. In this workflow, the malicious app needs a file system location that's both readable and writable by the malicious app and the factory app. It takes its scoped-storage base directory and makes it globally readable, writable, and executable. It then creates a shell script in this directory with whatever commands it wants to execute, and makes it globally readable and executable. Then it creates another file.
It needs a specific file name that's expected by the factory app: runscriptfile.txt. This file essentially just runs our shell script from step two in a shell, and the app makes this file globally readable and executable too. It then interacts with a broadcast receiver component named command receiver and sends it the file path to the runscript file. The factory app, in the command receiver, reads the contents of that file, which executes the script, and writes it to shared preferences. The command receiver then starts the monkey background service, which reads from shared preferences and executes the script with system privileges. Next, we're going to talk about some ZTE-specific vulnerabilities. There are three of them that we discovered. The most severe is the first one, which is an arbitrary file write as the system user within system_server; if you're familiar with Android, this is the Android framework, which is what most applications request functionality and data from. The way this is accomplished is with a zip file that employs relative paths, using path traversal attacks. There's also another vulnerability where you can start arbitrary activity components in the context of a system-UID app, where all the intent fields are externally controlled except for the action string. And lastly, there's a vulnerability where you can delete arbitrary files, or delete directories recursively, in the context of the Settings app, which runs with the system UID. All three of these vulnerabilities require no permissions, except the last one checks that you have one of four different package names, where none of those package names are actually installed. So you can boot up Android Studio and create an application with whatever package name you want to fulfill this; it doesn't check the actual app signatures, it's just checking the package name, which obviously is not secure.
This is from ZTE's security bulletin, showing the impacted devices, up to what version is impacted, as well as the software version it was fixed in. Here's the workflow. We have a local malicious application. It creates an AndroidX FileProvider, and it just needs a URI authority that starts with com.zte.beautify, because system_server checks with String.startsWith instead of the more discriminating String.equals. So we just create an example URI authority of com.zte.beautify_potatoes. The malicious app then takes a zip file from its assets directory, within the application, and writes it to its internal storage, to a directory that is shareable by its FileProvider. It then grants system_server, which has a package name of android, read access to this zip file using the standard API from the Context class, grantUriPermission. It then broadcasts an intent with ZTE's theme-change action string, with the URI to this zip file, which employs path traversal attacks, in an extra named theme_uri; theme_id can be any arbitrary string. system_server then unpacks this zip file which we granted it read access to. It doesn't enforce any constraints on the file types or file names, it doesn't protect against path traversal attacks, and it doesn't canonicalize the path and enforce any constraints there. So, in its context, executing as the system UID as the Android framework itself, it essentially just overwrites files. In certain cases, depending on what is overwritten, a third-party application can cause a system crash, just via an uncaught exception in the system_server process, to trigger reinitialization routines, say if you overwrite another application, or overwrite the screen lock database with a copy that does not require any PIN or pattern. Here are just some of the use cases. You can uninstall apps.
You can overwrite applications, you can overwrite app libraries, and you can overwrite the screen lock, which will work once, after you perform a system reboot. At the bottom you can see the crafted zip file; it's called notevil.zip, and as you can see it's using relative paths which move up the file system and then overwrite the locksettings database as well as its journal file. Here we have a video from the ZTE Axon 40 showing that we have a screen lock; this is filmed with another phone, just because we have to do a system crash and want to show it in one take. We just showed that we have some pictures there. This is the attack application, showing that it has no permissions. Now we start an arbitrary component and perform a phone call (we don't have service on this device). We delete the user's photos. Then we're going to start WhatsApp, showing it's the official, legitimate version of WhatsApp. Then we go back to the PoC app. We overwrite the screen lock database as well as WhatsApp with a repackaged version, and cause a system crash. Now it's going to boot up, showing that the device no longer has a screen lock; there was no screen lock there. We open up the photos: the user's photos are gone. And now we run WhatsApp, which is our own version, which is just a silly version; but if we wanted to be malicious, we could have a repackaged version of WhatsApp that appears to be WhatsApp but gets the user's data surreptitiously and sends it off. So, to summarize, looking at some of these vulnerabilities and their root causes: there's definitely some failure to perform access control at application component boundaries, at interfaces, and on network sockets. Also, when you are unzipping a zip file, you certainly want to canonicalize the path and enforce some constraints; in Android 14, for applications that target Android 14, there are platform defenses which prevent zip entries escaping via relative paths.
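The canonicalize-and-check advice can be made concrete. Here is a sketch in Python of an extraction routine that rejects entries like the ones in the crafted archive above; a hardened Android implementation would do the equivalent with File.getCanonicalPath before writing each entry.

```python
import os, tempfile, zipfile

def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract only if every entry's canonicalized target stays inside
    dest_dir, rejecting '../'-style traversal before writing anything."""
    dest = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            target = os.path.realpath(os.path.join(dest, name))
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"traversal attempt: {name}")
        zf.extractall(dest)

# Demo: an archive with a traversal entry is rejected outright.
with tempfile.TemporaryDirectory() as tmp:
    bad = os.path.join(tmp, "notevil.zip")
    with zipfile.ZipFile(bad, "w") as zf:
        zf.writestr("../../locksettings.db", b"overwritten")
    try:
        safe_extract(bad, os.path.join(tmp, "out"))
        print("extracted")
    except ValueError:
        print("rejected")  # rejected
```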
And if you try to protect your application component with a permission and this permission isn't declared anywhere, it obviously doesn't provide any protection. In many cases we saw a lack of a whitelist, which could at least constrain what a caller could do and limit the impact, instead of having it be completely unbounded. Also, relying on just the package name of an application doesn't provide you with any real security; as I mentioned earlier, you can use Android Studio and create an arbitrary app with whatever package name you want. And just some suggestions. Vendors, looking at these vulnerabilities, could use some sort of proactive scanning of devices before they actually reach the user, and also maybe some sort of certification showing that the device has undergone a security analysis. Enterprises need to constantly monitor and regularly scan assets in order to enforce security policies. As a user, if you see some of these pre-installed applications, you may want to disable them; and if there is a pre-installed application that's vulnerable to arbitrary command injection, you can execute a command to have the application disable itself, to kind of close that vulnerability. The way we find these vulnerabilities is a firmware pipeline where you can take a firmware image or a set of applications, run it through some static analysis engines looking for violations using vulnerability patterns, apply some filters, and then create a PoC. And after introducing all these vulnerabilities: Quokka has an application called Q-Scout which you can download for free and run on your device to see if your device is impacted by any of these vulnerabilities. The QR code takes you to the Google Play link to download the application.
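The undeclared-permission root cause mentioned above reduces to a simple membership check over the permissions actually declared by packages on the device. A sketch, with illustrative permission names rather than real device inventory:

```python
# A component guarded by android:permission is only protected if some
# installed package actually declares that permission; otherwise a
# third-party app can declare it itself ("permission squatting").
def squattable(guard_permission: str, declared_permissions: set) -> bool:
    """True if a third-party app could declare the permission itself
    and thereby satisfy the component's access check."""
    return guard_permission not in declared_permissions

declared = {"android.permission.INTERNET", "com.vendor.permission.DECLARED"}
print(squattable("com.vendor.permission.MISSING", declared))   # True
print(squattable("com.vendor.permission.DECLARED", declared))  # False
```

On a real device, the declared set would be built by walking every installed package's manifest for `<permission>` elements.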
And just to wrap it up: third-party applications, even though they may appear limited, can interact with their environment, looking for insecure interfaces of privileged pre-installed software. Anytime you ship some software, you should assume that people are going to spend copious amounts of time reverse engineering it, looking for software flaws. And as shown by this presentation, privileged preloaded software could use some increased security vetting. You should read our paper; it goes into much more detail and has PoC code. It's about 70 pages. You can contact us at oem@quokka.io. Try the Q-Scout app to see if your device is vulnerable, and if you would like to continue the conversation, we'll be at Yard House at the LINQ at 1:30 today. That's all for the presentation; I'd like to thank you for attending. It looks like we have two minutes, maybe; I don't know if that's enough for questions, but if there are any. [In response to audience questions:] The Blade X1 from Visible didn't have any of those vulnerabilities; it was actually part of the data set, but didn't contain any vulnerabilities. If you were limiting it to that set of devices, yes. All right, well, if there are no more questions, thank you very much for attending.