Hi, everyone. I hope you all are doing well. Welcome to the Fragments session on Plug the Vulnerabilities in Your App. I'm Shruti. About me: I work as a security manager at Appsecco, where I take care of clients and ensure they have a smooth journey with the work we do for them. Appsecco was founded in 2015, and we are an application security and cloud security specialist company. You can find us at appsecco.com. I will also be your moderator for today. About today's session: it's going to help you understand the common mistakes mobile app developers make while creating mobile apps, how those apps can be vulnerable, and how attackers perceive mobile app security. Our speakers will walk you through mobile application security and also why API design is crucial. With that, I would like to introduce our speakers for today, Riddhi and Riyaz. Say hi. Hello. Hello. OK. All right. So Riddhi works as an application security analyst at Appsecco. She also leads the null Bangalore chapter; null is India's largest open security community. She's the developer and maintainer of VyAPI, a cloud-based vulnerable hybrid Android app. Then we have Riyaz, who works as the head of security research and testing at Appsecco. He has also been actively involved with the Bangalore OWASP and null chapters for the last seven years, and he's one of the chapter leaders for the OWASP Bangalore chapter. Riyaz is also a frequent trainer and speaker at various security conferences and events around the world, including Black Hat, Nullcon, c0c0n, and OWASP AppSec. So that's about our speakers today. Also, before we start, I want to let you all know that we'll be taking questions at the end of the session. There is a Q&A section in Zoom if you're attending from Zoom, and YouTube has a chat feature where you can post all your questions. We'll be answering them at the end of the session.
So over to you, Riddhi and Riyaz. Thank you. Thank you, Shruti. Well, thanks, Shruti, for the really nice introduction to us and to the talk as well. Thank you, Riyaz. All right, so today's talk is a cumulative thing: it pulls together what we've seen as part of our application security journey back at the office, as well as what we see around the world, the kinds of issues that testers frequently find with mobile applications and the common mistakes that developers make. As Shruti mentioned, we'll take all the questions at the end, so use the Q&A section to post your questions there, and we'll try to answer as many as we can in the time we have at the end of the session. Right, so Shruti's already done the introductions. That's Riddhi, that's me. I'm wearing a hat because I'm a gray hat hacker; that's how I define myself. As I said, we'll cover what we believe the idea of security is from an attacker and a developer perspective. We'll also take a look at a bunch of weaknesses we've come across in our mobile testing journey for the many customers who have mobile applications they want us to test: the kinds of weaknesses and the general trend of security issues that we find. We'll cover things that developers do but ideally should not, and things that attackers commonly go after when testing mobile applications. We'll also talk about some generic things developers can do to harden mobile apps and their backend APIs; not to a hundred percent, but things that definitely increase the overall security of mobile apps. And I'll come to why I don't say a hundred percent shortly in the slides.
We also have some bonus content for anybody who wants to get started with mobile app security testing: what tools you would require, how to set them up, as well as some resources to get you started. And then we'll open the symbolic floor for our Q&A session. All right, so let's try and define what security is, with mobile application security in brackets. I'll leave this comic up for a couple of seconds, because there is an underlying message this comic passes across, and I'll come to that. So: security is freedom from, or resilience against, potential harm caused by others. That's essentially the Wikipedia definition. But in the digital world, harm means the loss of confidentiality, integrity, or availability: the CIA triad that a lot of security testers refer to, as well as developers who are aware of it. Depending on how a violation of the CIA triad has occurred, or a presumed violation when you're looking at security issues that haven't been exploited yet but where you base your analysis on the assumption that C, I, or A is going to be violated, what component was affected, and what the impact of the violation was (what was the attacker able to do, or in the case of a presumed violation, what could the attacker do), we can calculate the severity of the violation. As the comic I showed you earlier suggests, the most secure system is going to be completely unusable. This comes from the very popular security, usability, functionality triangle: the closer you move the black dot towards security, the less functional and usable the system becomes. And this holds true for all kinds of systems, not only applications, but any kind of system out there that has these three components.
This is why developers find middle grounds, and they still end up with bug reports that deal with extremely low, non-exploitable issues, and then there is this whole confusion about whether those need to be prioritized for fixing. But if you use this triangle to base your judgment on, you'll be able to reach a middle ground. This is another obligatory Dilbert. As I said, a completely secure system is going to be an unusable system. So coming back to the earlier comment I made about mobile application security not being a hundred percent thing: why is security hard with mobile apps? The first, foremost, and biggest reason, and this also applies to thick clients, especially desktop applications, is that the code runs in an untrusted environment. You may be a seasoned mobile developer who has written the best mobile app out there and taken care of all the security aspects of the API, the app, and everything, but the code is in fact running inside an environment you have no control over. And obviously you build it this way because it's a mobile app and you want to utilize client processing power: it's easier to create, store, and manage data as and when required, dynamic data doesn't necessarily have to be sent to the server the moment it is created, and the simplified, fast architecture allows the mobile app to interact with the APIs for state control. But because the mobile application is running in an untrusted environment, a whole slew of security issues arises. The other thing that we've noticed is that business logic and decisions are made client side to increase efficiency. For example, recently we tested an app that had a file upload feature.
To ensure that a disruption of network traffic while the file was being uploaded didn't break anything, a temporary copy of the file was made in one of the temporary directories of the mobile app. The business logic had a requirement that the file upload be successful; it was a regulatory requirement for the application. But when a user logged out, the files from all the previous users were still being stored in a common directory. That itself becomes a violation, because, believe it or not, a lot of mobile devices are shared between users. And I'm not talking only about old uncles and aunties: even departments and law enforcement agencies that we've worked with in the past have department devices that officers share based on the shift they're on. There's also a disconnect between developers and security testers as to what kinds of issues need to be prioritized, what the significance of an issue is, and whether we can learn from previous bug reports. There is definitely a disconnect, and sometimes developers do have to wear a security tester's hat, and we'll come to that at the end, so that they can understand security from the point of view of how security testers look at the mobile app and the API. A quick full disclosure: Riddhi and I are not mobile app or API developers, but we definitely understand the security implications of what happens, how you go about testing mobile apps and APIs, how they can be attacked, and what an attacker could do with the access or data. More importantly, our entire careers, and Appsecco for that matter, have been aligned with testing mobile applications for customers: the mobile applications, their backend APIs, the web apps, and the cloud-related infra. That's what we purely do.
Which is why we may not understand all the nuances of what a developer does in terms of mobile apps or APIs, but from a security standpoint, definitely yes. Most of the bugs we're going to talk about, the examples we give from a developer perspective, are things we have discovered in our real-world assessments back at the office, and some of them are really interesting case studies that people have blogged about. There are a lot of familiar bugs that you will see in the next couple of slides. We will try and skip the question of why these mistakes occur, because that would be a subjective opinion. This is not a blame-game presentation; we see these mistakes happening day in, day out, but why these mistakes in code or configuration occur is a very subjective thing. Next section: things developers do which ideally they should not, with a bunch of examples of what we've seen in the past where developers have made mistakes, and the why and how of those mistakes. Riddhi, why don't you take over? Riddhi, you're on mute. Am I audible now? Yes, yes, absolutely. Thank you, Riyaz. Hello, all. So as Riyaz mentioned, we'll be looking at a lot of real-world examples, and the section that I'm going to talk about is specifically about things that developers do that they shouldn't. The first thing is that we depend on a lot of third-party code: libraries, APIs, Stack Overflow. We know that no developer can write all the code on their own; we have to depend on third-party libraries and look into code written by someone else. But for sensitive features and sensitive functionality, we have to be very careful when we do so. On the right side, you see a picture, and it offers various options to log in.
Now, one of these had an issue which was reported recently: a zero-day was found in Sign in with Apple, and the researcher was paid a bounty running into millions of Indian rupees. Here's how the authentication works when the user tries to log in using the Sign in with Apple feature. There is a JWT sent by the Apple server, and this JWT is then passed to the third-party application that is using the Sign in with Apple feature. The third-party application then verifies with the Apple server whether this is a valid token or not. If it is found to be valid, the user is allowed to log in. What was the flaw? At this point we just need to know that an application is using the Sign in with Apple feature, which is not developed by them; it is developed by someone else, so in this case Apple becomes the third party. Using this feature, it was possible for an attacker to pass any random arbitrary email address, obtain a valid JWT, and log in as that user. User impersonation was happening here. Who was the victim? Anybody who was using the Sign in with Apple feature was vulnerable. What was the damage? A full account takeover of user accounts was possible, regardless of whether the victim had a valid Apple ID or not. Now, this is a continuation of the same issue that we just spoke about, but it targets a different aspect. First we saw that we depend on third parties without verifying whether it is safe to do so. The second one is that oftentimes we miss validating authentication. When we are using APIs, authentication and authorization play a very important role. In this particular scenario, if you see the top request, there is one parameter going in the request body, which is an email parameter, and it takes an email.
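As a contrast to the flawed flow just described, here is what proper token validation looks like in code. This is a minimal, illustrative sketch only: real Sign in with Apple tokens are RS256-signed and verified against Apple's published public keys, whereas this sketch uses HS256 with a shared secret, and every name in it is hypothetical. The point is that the backend must check the signature, audience, and expiry before trusting the email claim:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_token(claims: dict, key: bytes) -> str:
    """Build an HS256 JWT (stand-in for the identity provider)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(claims).encode())
    sig = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_token(token: str, key: bytes, audience: str) -> dict:
    """Reject the token unless signature, audience, and expiry all check out."""
    header, body, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(body))
    if claims.get("aud") != audience:
        raise ValueError("wrong audience")
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

A tampered payload, such as one with a swapped email claim, fails the signature check even though the JSON itself is perfectly well formed.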
So any user can trigger this request and pass any arbitrary email address, and if it was a valid email address, the server would generate a valid JWT. This is how the response looked; the ID token would be obtained from here. So there are basically two issues you could think of happening here. One is obviously authentication: the server is not doing proper validation, and it's just accepting any arbitrary email address. This is just one real-world example, but in other cases you might have a weak password policy, because of which I can simply guess the password and log in. Or there could be issues where your app allows both authenticated and unauthenticated access, and it might be that authentication has been missed for some screens, so an unauthenticated user can access them. Authentication issues come in different forms. So we should make a list of all the sensitive endpoints and make sure that authentication checks have been done properly for those endpoints. The third one, and I think this is very important: oftentimes authentication is mature, and we don't find many authentication issues, only in rare cases. But authorization issues are prevalent at a large scale; authorization is something we have found in almost every app that we have tested so far. Here is one example, and it's not a very typical one: Amazon Cognito. Some of you might have heard about it; for others it might be a new term. As you see on the screen, taken from the website, what it says is that Amazon Cognito handles sign-up, sign-in, and access control for you, so you don't have to write any code.
If you just use frameworks like Amplify, you can simply implement this feature for your app. On the right-hand side, you see the login page which is generated when someone uses Amazon Cognito. This is just a sample, but it will look something similar to this. Once this has been set up, users can log in and access your app as required. Now, there is a feature provided by Amazon Cognito: it supports unauthenticated identities, which allows customers to use the application without actually logging in. This is not a flaw, this is a feature, but there is a flaw which surfaces because of this feature. Looking at the overall picture: a researcher named Andres Riancho published work on this, and the link is down there in the slide. If this unauthenticated identities feature is enabled, there are chances that a person can also access your sensitive AWS services. It might be that the permissions for those AWS services have not been scoped properly, and those services can be accessed by someone who has not authenticated themselves to the app. They are unauthenticated users, but they are still valid identities as far as the app is concerned, and they are definitely not supposed to access the sensitive AWS services. This is how the overall picture looks, if you're getting confused by whatever I said so far. A user tries to use your app with Amazon Cognito and signs in, and the Amazon Cognito user pool sends a token. There is a Cognito ID which is sent; once this ID is received, it is sent to the Amazon Cognito identity pool, and you get temporary AWS credentials. If the token is valid, then using these credentials we can try to access various AWS services via a brute force method. This is a very valid scenario today, and we have found genuine issues using this approach.
The entire thing is obviously not shown here, but if you refer to the link on this page, there is a detailed description of how you can test and exploit this scenario. Using the enumerate-iam.py script, you pass whatever temporary credentials were obtained as the access key and secret key respectively, and then, just like you see on the screen, it will enumerate every service that is accessible. This list in itself is enough to prove that you can access services which you are not supposed to, and that could be reported. Another very interesting issue that we often come across: for some reason, developers miss implementing rate limiting, or even if they have implemented it, there are some misses because of which rate limiting doesn't work as expected. In this example, the screenshot is from a real project that we did recently, and you can see how the API rate limiting was implemented. Burp Intruder was run, and you can see from the serial numbers that only 15 attempts were made, but the remaining count started at 56 and then went 54, 55; it was never really reducing. In fact, by request 15 it was 58, even higher than where it started. The X-RateLimit-Remaining counter was getting reset for some reason, so this was definitely an implementation issue. What was the intended logic? I don't know, but this is definitely not secure. If the rate limit is reached, requests are supposed to be blocked, but they were never getting blocked. It's very strange: even when I was using Repeater and manually triggering the request, the counter kept increasing. So in reality, given this scenario, we can assume there was effectively no rate limiting implementation at all. It's as good as not having rate limiting here.
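For contrast, here is a minimal sketch (all names hypothetical) of a fixed-window rate limiter whose remaining count actually decreases and only resets once the window expires, emitting all three of the standard headers discussed next:

```python
import time

class FixedWindowRateLimiter:
    """Sketch: allow `limit` requests per `window` seconds, per client key."""

    def __init__(self, limit: int = 60, window: int = 60):
        self.limit, self.window = limit, window
        self.state = {}  # key -> (window_start, requests_used)

    def check(self, key: str, now: float = None):
        now = time.time() if now is None else now
        start, used = self.state.get(key, (now, 0))
        if now - start >= self.window:   # window expired: only now do we reset
            start, used = now, 0
        allowed = used < self.limit
        if allowed:
            used += 1
        self.state[key] = (start, used)
        headers = {
            "X-RateLimit-Limit": str(self.limit),
            "X-RateLimit-Remaining": str(max(self.limit - used, 0)),
            "X-RateLimit-Reset": str(int(start + self.window)),
        }
        return allowed, headers
```

The key property is that nothing other than the expiry of the window ever resets the counter, which is exactly what the implementation in the screenshot got wrong.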
Now, as per the documentation, there are three headers which are expected, and in this example there are only two: X-RateLimit-Limit and X-RateLimit-Remaining, but there is no X-RateLimit-Reset. So obviously the system doesn't know under what circumstances to reset, and something wrong was happening. If we are trying to experiment with some new feature, we should definitely read the documentation and understand how it is supposed to work, especially when it's a security feature that we are implementing, and then we should definitely test it out. There were two real-world examples around rate limiting: the Facebook hack and the Instagram hack. It's not that rate limiting was absent in these two cases; rate limiting was there, but it was not sufficient. In the first case, if you see this n parameter, the OTP was supposed to be passed here in the forgot password feature. When the OTP is received, the user is supposed to enter the valid OTP here, but because of insufficient rate limiting, an attacker was able to brute force this code. Whatever code was generated was valid for at least 10 minutes, and those 10 minutes were enough to break a fixed-length numeric code. That's how this hack happened. It was reported in time, so there wasn't much damage. The Instagram hack was a similar case. In this one too, as the write-up reads, Instagram had rate limiting, but it was not effective enough, and in this case also there was a fixed-length security code, and it was easily brute forced. Riyaz, do you want to add a couple of things here? Sure. Folks, if you've come across this Facebook bug before, the basic idea exploited by the bug bounty researcher involved the www.facebook.com domain: the same URL, the same POST request you would make to www.facebook.com.
There was rate limiting enabled there. But for beta.facebook.com, and there was another domain as well, there was no rate limiting. This essentially shows the disconnect that can exist even when an API is deployed: there might be security features that are administrative in nature, where you have something in front of the API, a middleware, that takes care of stuff like rate limiting. In this case, that rate limiting middleware was not available or applicable to beta.facebook.com. For Instagram as well: a six-digit security code appears to be safe, because it has one million possible combinations, and in real time you can't brute force that. But because the mobile endpoint for Instagram did not have rate limiting enabled, while the web one did, the attacker was able to set up multiple machines and crack the six-digit security code for under $150. So from an impact point of view and an investment point of view, the ROI was really, really high, and imagine, the attacker could have gained access to anybody's Instagram account. That's because the place where the configuration was supposed to be implemented and the place where the security control actually was were two different things. You want to take over? Yeah, thanks, Riyaz, for adding that. So the next issue is very common; I think it was very common some time back and is gradually reducing, probably because people are becoming more security aware, but we still do find it: leaving sensitive files in the app bundle. What is an APK? An APK is nothing but a bundle of all the files: assets, source code, and any supporting file that is required for the app to work. It's all bundled, and it's very easy to unzip an APK or IPA and see what is there in the bundle.
And because the app is supposed to be distributed freely, and it could be distributed in various forms, but ultimately it is either an APK for Android or an IPA for iOS, once it is out you don't have control over who receives it. Anybody who receives it can simply unzip the bundle and see what is inside. In this case, you're seeing an APK which has been unzipped: there is the AndroidManifest.xml file, and there are certain files and folders inside it. Now, whenever you unzip an Android APK, definitely look into what is there in the res folder. A lot of interesting files can sometimes be found there. For example, this one. This example is taken from the vulnerable app which I built, named VyAPI. I did not introduce this deliberately: I was testing because I had incorporated the Amazon Cognito login feature in it, and I realized that when I unzipped the APK, the AWS configuration file had been placed in the res/raw folder. This is a very sensitive file. If you remember, some time back I spoke to you about the authorization issue in Amazon Cognito, and there is a pool ID. Many of you might have questioned how someone would obtain this pool ID. This is one example: if a file has not been safeguarded properly and it has sensitive information, and somebody gets hold of it, I can easily pick this pool ID, fetch temporary AWS credentials, and then see what services I can get access to using it. There are different ways to obtain the ID, but this is one of them. Also, if you see this other file: if it was proprietary music that people were supposed to buy, not simply access, then just by unzipping the APK, I have access to it, right? So you should definitely unzip the APK or IPA and explore what files are lying there.
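Exploring a bundle this way is easy to automate, since an APK is just a zip file. A small sketch, with an in-memory fake APK standing in for a real one; the suspect paths and name fragments are illustrative assumptions, not a complete list:

```python
import io, zipfile

# Illustrative heuristics only; tune for your own codebase
SUSPECT_DIRS = ("res/raw/", "assets/")
SUSPECT_NAMES = ("awsconfiguration", "secret", "config", ".pem", ".keystore")

def find_suspect_files(apk_bytes: bytes):
    """List bundle entries that deserve a manual review."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as z:
        for name in z.namelist():
            lowered = name.lower()
            if lowered.startswith(SUSPECT_DIRS) and any(s in lowered for s in SUSPECT_NAMES):
                hits.append(name)
    return hits

# A tiny in-memory zip standing in for a real APK
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("AndroidManifest.xml", "<manifest/>")
    z.writestr("res/raw/awsconfiguration.json", '{"PoolId": "us-east-1:example"}')
    z.writestr("res/drawable/icon.png", "png-bytes")
print(find_suspect_files(buf.getvalue()))  # -> ['res/raw/awsconfiguration.json']
```

Running something like this in CI against every release build is a cheap way to catch a configuration file before it ships.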
So you should never ship sensitive files in app bundles. The next one is similar to the issue we just saw, but slightly different. It's not a full-fledged file, but sometimes we hard-code secrets in the source code, thinking that nobody will have access to it. How will someone see the source code? But it's very easy, especially in the case of Android. And secrets should definitely never be in plain text. While doing real-world security testing, we once found a hard-coded secret, and the funny part was that the secret was named 'secret'. For some time I was confused: is it actually the secret, or is it just a variable name? It took me some time to understand that no, it actually is the secret, lying there in plain text. We did report that. So be careful: never put plain-text secrets in your source code. All the examples we are talking about right now relate to data at rest. Whenever your app deals with sensitive information, for example the chat feature you see on the left side where people are talking, the data often gets stored in a SQLite database, in plain text. If your app deals with sensitive information, a lot of the time a SQLite database is used to store that data, and you need to be very careful: if it is sensitive, try not to store it in a database at all, and if you cannot avoid that, then it should definitely be encrypted. We have also seen issues in cache files, and this is a very common issue wherein caches are not cleared.
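The "secret named secret" class of mistakes can often be caught before shipping by grepping the codebase, or the decompiled output, for secret-looking assignments. A toy sketch; the pattern is a hypothetical starting point, and dedicated scanners such as gitleaks or truffleHog go much further:

```python
import re

# One hypothetical pattern: a secret-ish name assigned a quoted literal
SECRET_PATTERN = re.compile(
    r'(?i)(secret|api[_-]?key|password|token)\s*[:=]\s*["\']([^"\']{8,})["\']'
)

def scan_for_secrets(source: str):
    """Return (line number, variable hint, value) for secret-looking assignments."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for m in SECRET_PATTERN.finditer(line):
            findings.append((lineno, m.group(1), m.group(2)))
    return findings

decompiled = '''
public class Config {
    static final String SECRET = "super-plain-text-secret";
    static final String LABEL = "ok";
}
'''
print(scan_for_secrets(decompiled))  # -> [(3, 'SECRET', 'super-plain-text-secret')]
```

Even a crude check like this, wired into a pre-commit hook, would have flagged the plain-text secret before it reached the APK.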
Sometimes caches are needed for some feature to work, but when the user logs out, they should go. We have seen that even after a user has logged out, there are cache files lying around on the device which still have sensitive information, again in plain text (if they were encrypted, they would be of no use to an attacker unless they could also break the encryption). Caches can be in two forms: they could be random temporary files, or they could be in SQLite databases, or both, and a few times we have observed that the files get cleared but the database does not, or vice versa. So we have to be careful when dealing with caches and ensure that caches are cleared. We have also seen that environment checks are missed, and I think this is missed in most places. What is the main motive of an attacker? They want to reverse engineer your app so that they can understand the app logic and find business logic bypasses, or see if there are hard-coded secrets or anything. But reverse engineering is easy when you have not put any protection in place and anybody can just use the right tools and obtain the source code. Intercepting communication between the server and the app is also a main motive. If your app does sensitive transactions, if it is a banking app or a hospital app or something dealing with very sensitive information, the communication which happens with the server is very crucial for app users and very interesting for attackers, so attackers do try to intercept that communication. So, as Riyaz mentioned at the beginning, you do not have control over the environment where this app will be running.
So if you have not taken basic precautions about where your app is going to run, it might be very easy to break your app and steal information through it. As Riyaz said, it's not possible to make it totally, completely secure, but defense in layers is something we want to aim for. A few things that go missing: apps don't check whether they are being installed on a rooted or jailbroken device, and apps don't check whether they are being installed on an emulator. Once I'm able to install the app on a rooted or jailbroken device or an emulator, it's very easy to gain access to the system files. If you have not taken care of obfuscating the code, it becomes very easy to read it, so I can read whatever is there. Obfuscation obviously makes the code difficult to understand even if I have obtained the source. Encryption: we have seen what can go wrong even when encryption is in place. The things that very commonly go wrong: your encryption keys are very sensitive, so where are you storing them? Are you hard-coding them? Are they so simple that somebody can guess them or brute force them? The second thing is: are you using weak encryption algorithms which are already flawed? Using a strong encryption algorithm is recommended, and it's very important to pay attention to key management. Certificate pinning: again, we have been able to bypass SSL pinning, and the bypasses have been very easy using the right tools. Oftentimes it's not tested that thoroughly, and it's very trivial to bypass SSL pinning checks. So these are some of the things; tamper prevention, and what developers should do, I will let Riyaz talk more about.
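On the key-management point: instead of hard-coding a key, derive it. A minimal stdlib sketch using PBKDF2; on Android you would typically protect keys with the Android Keystore instead, and the parameter choices here are illustrative:

```python
import hashlib, os

def derive_key(passphrase: str, salt: bytes = None, iterations: int = 200_000):
    """Derive a 256-bit key from a passphrase. Persist the salt, never the key."""
    salt = os.urandom(16) if salt is None else salt
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)
    return key, salt

# The same passphrase plus the same salt always reproduces the same key,
# so no key material ever needs to be written to disk or shipped in the APK.
key, salt = derive_key("correct horse battery staple")
key_again, _ = derive_key("correct horse battery staple", salt=salt)
assert key == key_again
```

The salt and iteration count can be stored alongside the encrypted data; only the passphrase (or a hardware-backed key) stays secret.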
And yeah, the thing to take away here is that oftentimes developers do miss keeping in mind where their app is going to run, and because of that it becomes easier for attackers to get hold of your app, break it, steal information from the API communication, and do damage to your app. So over to you, Riyaz; I will let you add more to it. Thanks, Riddhi. All right, do you see the success kid? So, a couple of things to add to what Riddhi has covered. She's touched upon a lot of things that we see developers do, and obviously, if you know what the problem is, you can work out what the different possible solutions are. So let's take a look at some of the things that developers should be doing. We've seen cases, a couple of applications in our history, where the application is extremely well and securely built and we've had issues only with the backend APIs, or vice versa: only one of the components has been vulnerable. I have a bunch of these to cover, but essentially, things to do to improve security: perform unit testing, especially for complex apps. When I say unit tests, I mean especially for complex applications which require data input coming from multiple functions; you need to ensure that the business logic part is getting tested actively. Unit test cases allow you to achieve some level of business logic testing, and things around your authorization in particular become clearer. If your code is written in a modular way, you can identify any loophole or security issue and patch what is required instead of rebuilding the entire class; bug fixes, feature updates, and all the other things go more smoothly if your code is written modularly.
To add on to what Riddhi mentioned earlier about third-party sources: we can't live without them, right? All mobile apps will, at some point, have at least an analytics component from a third-party source. We need to be aware of the security implications of adding third-party sources, and this is not necessarily just a library or an external binary. Even code copied from Stack Overflow, for example, or taken from another GitHub or open source project, needs to be scrutinized for security issues. Verify the source, be aware of its security state, and check whether any known issues have been published for that particular binary or library before, because your circle of trust is your dog, and you obviously don't trust anybody else, right? From the environment point of view, to extend what Riddhi was saying, as I said right at the beginning of the presentation: the application resides and runs in a very untrusted environment. You don't know who's going to run the app on a rooted phone. You don't know if the app is going to run on a device shared by multiple people. You don't know if the device is running other unsafe applications that have been installed alongside it. With the lockdown, I'm sure all of you are on at least one WhatsApp group with uncles and aunties sharing images and stuff. I'm on one where, every two weeks or so, I get an APK file from one of my cousins, with the insistence that I install it on my phone, just because of funny cat videos, and there's no dearth of those on the internet. So try and have an environment detection routine built into your mobile apps, simply so that you make it difficult for novice hackers to perform runtime analysis.
If the app is able to detect that it's running in a rooted, jailbroken, emulated, or simulated environment, it makes things more difficult. There are obviously ways to bypass this, because the attacker physically has the code, and the mobile app is running in an environment they control in any case; the checks can still be bypassed, but they weed out the people who only go after low-hanging fruit, right? Tools are available out of the box, and scripts to bypass standard checks are available on the internet. Essentially, unless you add a layer of obfuscation around the function, a combination of something like Frida and Objection will bypass most jailbreak or root detection, so improve your detection rate by ensuring you have a larger list of things to detect. One of the common things developers add into their code for jailbreak detection is a check for whether Cydia.app is present or not, right? But with checkra1n, it's not necessary to have Cydia, the package manager, available on the system. You could still have root shell access to a device and be running in a jailbroken environment. So your checks need to be more exhaustive. Similarly, the default example for implementing certificate pinning, and we've seen this time and again with OkHttp, the library: if you use the CertificatePinner as it comes out of the box, the code to bypass it is available, and it's just one simple command to bypass certificate pinning for the app, right? So obviously that has to be tweaked so that it can withstand the standard bypass. Ensure your local caches, files, and SQLite storage are cleared upon user logout. More people share devices than you know, right? I mean, it's equivalent to what you would do for your web apps when a user logs out of the web application.
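As a rough sketch of what "a larger list of things to detect" means, here is the shape of such a routine, written in Python for illustration; on a real device this would live in Kotlin or Swift, and the indicator paths below are a small illustrative subset, not a complete list.

```python
import os

# Illustrative subset only; a real routine should check many more signals
# (su on PATH, writable /system, Frida server ports, suspicious packages...).
ROOT_INDICATORS = [
    "/system/bin/su", "/system/xbin/su",  # common su binary locations (Android)
    "/Applications/Cydia.app",            # classic jailbreak marker (iOS)
    "/private/var/lib/apt",               # apt present implies jailbreak (iOS)
]

def environment_looks_compromised(paths=ROOT_INDICATORS, exists=os.path.exists) -> bool:
    """Return True if any known root/jailbreak artifact is present.

    `exists` is injectable so the routine can be unit-tested off-device.
    """
    return any(exists(p) for p in paths)
```

Checking many indicators is exactly why the Cydia-only check fails under checkra1n: a single missing artifact should not defeat the whole routine.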
And once the server has confirmed that the logout is successful, any locally stored data, your cookies, your local storage, your session storage, all of that needs to be cleared, right? Similarly, when you have an API-driven mobile app, on logout and de-authentication, any data that was created for that user's session needs to be cleared up. We've had cases where we've tested mobile applications and seen data belonging to previous users of the app still residing in the sandbox. And although the app's sandbox requires root access, it may be entirely possible, if you have something like an insecure exported activity, for another app to read the SQLite database, right? That's definitely possible. Authentication and authorization checks: there's just no overstating how important this is, and the number of times we reiterate it is still not enough. Build authentication discussions into your app design. These are the kinds of tricky questions attackers try to emulate all the time: are there any race conditions? How would the application behave if two users are logged into the mobile app at the same time? What would happen if the user logs into the mobile app from a different device, right? Treat all user input with caution, because the context of the data is really important. At the login prompt, when you provide your username and your password for the mobile app, that is a credential for the application. But when it reaches the server, depending on how you treat the data, it can end up being something else, because the context has changed, right?
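The logout-cleanup point above can be sketched minimally; this is Python for illustration, and `session_dir` is a hypothetical stand-in for the app sandbox's per-session cache and database area.

```python
import os
import shutil

def clear_session_data(session_dir: str) -> None:
    """Wipe everything the session wrote locally and recreate an empty directory."""
    if os.path.isdir(session_dir):
        shutil.rmtree(session_dir)  # caches, SQLite files, tokens, temp files
    os.makedirs(session_dir, exist_ok=True)

def logout(session_dir: str) -> None:
    # 1. Ask the server to invalidate the session token first (omitted here).
    # 2. Only then clear local state, so a failed logout can be retried.
    clear_session_data(session_dir)
```

The ordering matters: invalidate server-side first, then wipe locally, so a crash mid-logout never leaves a live token with no way to retry.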
With document databases like Mongo, if you have JSON queries going up and the API on the server parses the JSON input as is, you can embed your own document query operators in the data for the server to process, because the context of the data changes, right? Offloading authentication to third-party providers is great, like we do with sign-in with Google or Facebook or Twitter or Apple, but it can have its own set of troubles, which, as recently as a couple of weeks ago, we saw with the Apple sign-in bug, right? Ensure the authentication documentation from the provider is followed, and that unnecessary storage and transmission of tokens and keys is avoided. This also points to the fact that if you have a token that is set post-authentication with a third-party provider, you need to ensure that this token doesn't get leaked to a different provider. For example, if you also have an analytics component in your mobile app and you make a request to an analytics endpoint, the token may end up reaching the analytics endpoint through the Referer header. If that is the case, the token itself could be leaked to a completely different third-party provider in your mobile app, just because of the way the components interact. Authorization can definitely be tricky, especially for multi-user, multi-role apps, and especially in apps where you are able to choose the attributes for a role. Avoid passing direct references to server-side objects to clients. And when I say direct references, I'm talking about cases where you have id=5 or id=6, where the 5 and 6 is actually a database reference, a data store reference, on the server, right?
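To illustrate the document-query injection point, a small Python sketch that rejects user-supplied JSON containing Mongo-style operator keys before it is turned into a query; the field names here are hypothetical.

```python
def contains_operator_keys(value) -> bool:
    """Recursively check user-supplied JSON for Mongo-style operator keys ($ne, $gt, ...)."""
    if isinstance(value, dict):
        return any(k.startswith("$") or contains_operator_keys(v) for k, v in value.items())
    if isinstance(value, list):
        return any(contains_operator_keys(v) for v in value)
    return False

def safe_login_filter(payload: dict) -> dict:
    """Build a query from user input only after rejecting embedded operators."""
    if contains_operator_keys(payload):
        raise ValueError("operator injection attempt")
    return {"username": payload.get("username"), "password": payload.get("password")}
```

The classic bypass payload `{"username": "admin", "password": {"$ne": ""}}` would be rejected here instead of being parsed as a query that matches any password.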
Manipulating this becomes trivial if you haven't implemented your authorization properly, which is where insecure direct object reference issues come up, right? We've seen some of our clients use UUIDs, universally unique identifiers, which makes it difficult for attackers to do guesswork and identify what the ID for a different piece of data would be. But even then, if using a UUID you're able to make an unauthorized request to a different endpoint, and that causes a state change on the server for data belonging to another user, it is still broken authorization. It's just that the attacker generally won't be able to guess the identifier; the authorization is still broken, though, right? I would recommend you start with an authorization matrix that shows exactly what features a user has access to, and then verify in code that the authorization matrix holds true. The default-deny principle should be the default for all users; follow the principle of least privilege. We have an image of what this looks like on a network and a system, but essentially, ensure that all users connect to the backend API with default-deny privileges unless they prove who they are, right? That builds security into your whole flow. And this goes, I mean, I've been parroting this for my entire career: treat all user input as evil, especially for APIs in the context of mobile apps. User data can originate in many places, and it's not restricted to form submissions the user may make; it could include anything that comes from the client, including request headers, file uploads, user agents, even the metadata of a file upload. For example, if you're uploading a PDF, the metadata of the PDF could contain malicious data, because at the backend, when your API processes the PDF file, the metadata might get consumed in a context you're not even aware of, right?
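The authorization-matrix-plus-default-deny idea above can be sketched in a few lines; the role and feature names here are made up purely for illustration.

```python
# Hypothetical role/feature names; the point is the default-deny lookup.
AUTHZ_MATRIX = {
    "admin":  {"read_profile", "edit_profile", "delete_user"},
    "member": {"read_profile", "edit_profile"},
    "guest":  {"read_profile"},
}

def is_allowed(role: str, feature: str) -> bool:
    """Default deny: unknown roles and unlisted features are both rejected."""
    return feature in AUTHZ_MATRIX.get(role, set())
```

The useful property is that forgetting to add a role or feature fails closed, not open: anything not explicitly granted is denied.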
Then there's content coming from non-human sources like databases and caches, data that was already created on the server in a different context. For example, if you have registered an account and that information already resides in a database, and that data is later pulled back from the database and used somewhere else, you could end up with second-order security issues, like second-order SQL injection or second-order XSS, right? When an error is encountered, you need to fail safely. This is something a lot of developers get wrong: the amount of information presented in the response is far too high. You need to handle your error conditions without providing too many internal details, and restrict the information the response sends. For example, if you have something simple like a get-profile call, and only five items from the response are displayed in the mobile app, but the response comes back with about 100 fields, that may incidentally disclose sensitive information you do not want the user to see in the first place. Because remember, if a mobile app can make a request to a backend API, a user can definitely make the same request using a different tool like curl or Postman, for example, right? Don't send excessive information in your responses. We're at about eight minutes to eight. We have some bonus content on how you could set up a typical mobile hacker lab; it's not exhaustive, but it should get you started. If you're going to be testing mobile apps, you definitely need rooted or jailbroken devices with you: a rooted Android device and a jailbroken iOS device to start with. Android Studio, definitely, to run emulators, to decompile APKs, to look at code in general, and to build your PoCs; that's very useful as attackers go, right?
And Xcode, similarly, if you want to look at device logs, create entitlements, or run a simulator. MobSF, I would highly recommend as one of the first tools you run on an APK or IPA you've received. It skims through the entire app and gives you visibility into what services and activities exist, whether any files contain encryption-related material or weak hashing algorithms, and whether any hard-coded secrets were detected. It gives you an analysis, and although, because it's an automated tool, there may be false positives, as with any tool out there, it gives you an entire picture of what the APK or IPA contains, and you can make an informed decision on whether something is sensitive, or a vulnerability, or not. ZAP, or insert your other favorite interception proxy, to test and attack APIs. Hopper and Ghidra for disassembly and function tracing; we find these really useful when we're looking at iOS mobile apps. We receive applications from our customers through TestFlight or Firebase's distribution system or something similar, and once we have one installed on the device, we extract the unencrypted IPA out of the device and perform static disassembly using Ghidra or Hopper. JADX, an all-time favorite, to decompile an APK to Java classes, as is the internet itself, I suppose. Frida and Objection, for both platforms, essentially to patch binaries at runtime, to evade jailbreak or root detection, and to evade cert pinning. The true power of Frida only comes when you start writing your own scripts, but Objection already ships with a lot of scripts, so you can automate a lot of those things, unless the code has incorporated the things we spoke about, including obfuscation and shrinking.
3uTools is a standard tool we use to transfer files, browse the iOS file system, and push IPAs to the device if you have an older version, right? It can also be used to extract installed apps out of the system. A couple of resources on what you could read to learn how to approach mobile security testing itself, some getting-started links. I highly recommend the OWASP Mobile Security Testing Guide. It's extremely expansive: it covers how you would reverse engineer, decompile, and disassemble, how you would patch, how you would bypass any checks that are present, what to do when you're testing APIs, all of that information nicely compacted with a beautiful contents page. Please do take a look at the Mobile Security Testing Guide. canijailbreak.com is currently flagged by Chrome as malicious, but it's technically not; it essentially tells you, for the iOS version you provide, whether an exploit or method is available to jailbreak it, essentially to gain root access to the iOS device. You can simply search through different exploit-related things, even payloads as such. XDA Developers, the Android forums, a very old but still very active collection covering anything to do with Android hacking; you'll find tools that ease your life in testing and even in development of apps. We browse it from an attacker's perspective, trying to see if there's any quicker way to fetch data out of an APK. High Altitude Hacks is from a null member, Prateek Gianchandani, an extensive collection of iOS app hacking tutorials, and I highly recommend it to anybody who wants to get started with iOS hacking. The API security book listed here is really useful in case you want to understand how to build secure APIs. A link to VyAPI is also presented here.
The tool's author is my co-speaker, Riddhi. It's a vulnerable Android app that will give you an idea of what kinds of security issues exist, especially with a cloud infrastructure at the backend. Then there's the MobSF documentation and the release candidate for the OWASP API Security Top 10. It's not complete yet, but it essentially gives you an idea of what kinds of things to look at if you're going to be testing apps with an API backend. We've come to the end of the content we had. We'll take any questions you have now; our contact information is on the screen. This was an introduction to mobile apps, testing, and security, but if you need any security consultation around mobile apps, APIs, or anything on mobile environments, we are going to be holding an office-hours consultation on the 2nd of July, from 4 to 5:55 p.m., and the link to register for that session is on the slide here. I'll be joined by my other colleagues at AppSecco as well; Akash and Abhishek will be joining that day. So if you have any questions around design or strategy, or if you're trying to understand how to build in security at the design or drawing-board phase itself, we'd be happy to answer them during the office hours on the 2nd of July. All right, Shruti.

Hey, thank you, Riyaz. Thank you so much for that detailed, informative talk on the things you should not be doing when you're developing a mobile app. Thank you so much, Riddhi. So I think we can get started with the questions, since you already mentioned next week's session.
So there is a question Shivam is asking: even if encrypted data is stored, will the attacker have access to the data?

If the encrypted data is stored where? On the mobile device? I'm assuming that's the mobile device.

I think he's referring to the point where we were talking about storing data in the app and in files, right? So I'll answer, and Riyaz, you can add on top of it. Shivam, you can correct us if our understanding is not right, but what I understand is you're asking: when we save data in the app, in a SQLite database probably, or in files, is encrypting it before storing good enough or not? Yes, that is what is recommended. Whenever you are storing data on your device, it should be encrypted. The next question then becomes how strong your encryption is. As we spoke about earlier: if you are using encryption, make sure you are protecting your encryption keys and using strong encryption algorithms, and check that you're not using any broken encryption algorithm. Riyaz, do you want to add something?

Yeah, coming to the point of encryption: it is not to be treated as a silver-bullet solution in the tech world, right? "The data is accessible and sensitive, so go ahead and encrypt it" is what we always hear. Beyond ensuring the security state of the algorithm used, you need to ask yourself the question: where are you storing the key that would decrypt the data? Because if the application encrypts the data, it requires the decrypted data at some point in time for processing, a function, whatever, right? If the application is going to decrypt the data at runtime, the key exists in memory on the device. So you have to make a trade-off around ensuring the key is available to the application at runtime.
The interesting thing you can do there is to have different keys for different users. Based on the installation itself, you could generate a key that is unique to that device, and then you encrypt the data. An attacker would still be able to get at the encrypted data, because like anything else it's still data sitting in a file, but they would not be able to read it. So you could still browse to the SQLite store and extract the SQLite file; you would just get gibberish, garbage. The app, though, definitely knows how to read it, which means the key resides on the system itself. But if the key is unique per installation, or per user in fact, then even if you have somebody else's data, you will not be able to gain access to it using your own keys, unless you obviously gain access to that user's keys. Have we got another question?

So, Riyaz, someone is asking along the same lines: where do I keep the encryption key, if at all I have to encrypt the data at rest?

Well, the configuration file is where we've seen the key reside a lot of the time, right? Or you could fetch the key from the server based on components that are unique to that installation. But again, the point of having unique keys is what needs to be stressed: if Riddhi's key is the same as my key, then she can look at my encrypted data using the same key that I have. And don't bundle your keys inside the application, don't hard-code them. I'm not talking about one static key for the whole world; I'm talking about unique keys per user.

Also, I would like to add that when we are trying to save data, we really need to ask whether it is important to store that data on the device at all. If possible, do not store data on your device; fetch it from the server whenever required. Right, Riyaz?
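One way to sketch the per-installation key idea: derive each key from an installation identifier and a secret that stays on the server, so every installation gets an unrelated key without anything being hard-coded into the app. Everything here (the secret, the identifier format) is an illustrative assumption, not the speakers' exact scheme.

```python
import hashlib
import hmac
import secrets

# Stands in for a secret held server-side, never shipped inside the app binary.
SERVER_SECRET = secrets.token_bytes(32)

def per_installation_key(installation_id: str, server_secret: bytes = SERVER_SECRET) -> bytes:
    """Derive a 256-bit key unique to one installation via HMAC-SHA256.

    Two installations get unrelated keys, so copying another user's
    ciphertext onto your device gains you nothing without their key.
    """
    return hmac.new(server_secret, installation_id.encode(), hashlib.sha256).digest()
```

The app fetches (or re-derives) its key at runtime rather than storing a single static key for the whole world.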
Yeah, so you need to quantify what kind of data it is, right? If you're storing nuclear codes, obviously you don't store them on your mobile phone. If you're saving a picture of your dog or cat on your mobile phone, it doesn't matter much whether it's encrypted or not. So quantify the kind of data you're going to be storing, and if it does need protection, store it on the server for maximum security. Again, remember that the device is an untrusted execution environment, so anything on the device can be accessed, which is why different users should have different keys. You are trying to run secure code in an untrusted environment.

All right, thank you, Riyaz and Riddhi. Shivam and Adhu, I hope that was helpful. Next we have a question from Love. He's asking: as a UX designer, are there any best practices or general guidelines we can follow when designing around inputs and errors that would help development teams ensure security? If I was too fast, I can repeat that.

No, I got the question. So I'll take a stab at this. As I said, we're not developers, let alone UI/UX developers. But essentially, what matters is the amount of information you pass to an attacker; if you constrain it, it becomes a real challenge for attackers to figure out a loophole. I'll give an example. On a login page, if you enter a username and password combination and the application tells you that the password is incorrect, the attacker infers that the username was right. Then, simply using the most common password sets, an attempt can be made to figure out the credentials for this valid user, right?
Similarly, in at least one case where we tested an application, based on the content length of the response we were able to identify whether there was a valid user on the system or not. A lot of times, when the username and password combination is sent to the server, developers tend to check whether the username is present in the database first, and only if it is do they run the password check. In both cases the app was returning what looked like the same "invalid username or password" message, but in this particular case one response said "invalid username password" and the other "invalid username password" with a full stop. That one-byte difference allowed us to infer that the username check was a separate function, and we were able to enumerate a bunch of usernames, right? So from a UI perspective, keep the information as vague yet helpful as possible. If you're giving away information around login or anything like that, minimize the amount of information an attacker can use.

All right, thank you so much for the detailed explanation. Love, I hope that answers your question. Next we have a question from Satyajit, who's asking: do we need a Mac to set up a mobile pentest lab, especially for iOS pentesting?

It would really be useful if you have a Mac, simply because Xcode is not available anywhere else. And as you progress into testing iOS applications more and more, you will want a copy of Xcode, especially because there are times when you want to create your own fake app, create an entitlement, and then use that entitlement to repack something else. But to get started, definitely not. You don't need a Mac. All you need is an iOS device, ideally jailbroken, because Windows itself has a lot of tools; people have written a lot of tools on Windows to work with iOS devices.
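The enumeration problem described above is avoided by making every failure path return the byte-identical message and do the same amount of work whether or not the username exists. A hedged Python sketch (the in-memory user store and the salting scheme are toy assumptions for illustration):

```python
import hashlib
import hmac

# Hypothetical in-memory user store; values are salted SHA-256 digests.
USERS = {"alice": hashlib.sha256(b"salt" + b"correct-horse").hexdigest()}

GENERIC_ERROR = "Invalid username or password"  # identical for every failure mode

def login(username: str, password: str) -> str:
    # Always hash the supplied password, even for unknown users, so both the
    # timing and the message are the same whether the username exists or not.
    supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    stored = USERS.get(username, hashlib.sha256(b"\x00dummy").hexdigest())
    if hmac.compare_digest(supplied, stored) and username in USERS:
        return "OK"
    return GENERIC_ERROR
```

A real backend would use a proper password hash (bcrypt, Argon2) rather than salted SHA-256; the point here is the single failure path and the constant-time comparison.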
And there are a lot of tools available on Linux as well for you to get started with iOS app testing. A Mac, at some point in time, yes, if you want to do more advanced work. One thing we find really useful is checking whether the app is leaking any information in logs. Like you have adb logcat on Android, you'd want to look at the logs generated by the iOS device, and Xcode's Devices window is a very useful place, the only place we know of, to see device logs. So at some point, yes.

All right, thank you so much. Satyajit, I hope that helped. I guess that's all we had. Folks, if you have any more questions, please feel free to post them. We'll wait a couple more minutes. All right, if there are no more questions, then I think we can wrap up the session. Thank you so much, folks, for attending, and thank you for your time. I hope this session helped you all. Thank you.