Hello, everyone. I'm Nemo. I work at Razorpay, in security, doing payments. I've also been involved in leading the UPI effort at Razorpay. That's me as a kid. You can reach out to me on Twitter or by mail if you have any queries after the talk.

This talk is about security horror stories in payments. Working at a payment gateway over the last two years or so has taught me a lot about how the industry works, and I've seen a lot of horror stories — so this is my fair share of them. You've seen this slide plenty of times yesterday, I'm sure; it's the general overview of how payment gateways work. Earlier talks covered the first part of the problem, where security is handled between the customer and the merchant, or the customer and the processor. What I'll be talking about is how the processor interacts with the remainder of the ecosystem: the banks, the gateways, the wallets, the card networks, all of these different parties.

So let's start with the first horror story. Oh, by the way, if I speak too fast — I'm known to do that — please raise your hand slightly and I'll try to go slower. I'll repeat things as well; it's a pretty short talk, so I have a lot of time.

The first horror story is about useless security. We'll take advice from our two favorite cryptographers, Alice and Bob. Alice knows a thing or two; Bob is pretty clueless. We'll use WhatsApp as an example — this conversation actually happened over an API communication layer, but I've moved it to WhatsApp to make it easier to follow. Me as Alice tried to talk to Bob: hey, do you want me to send you a message? Bob said: sure, please send me that message, except please encrypt it with my public key.
The way public-key encryption works is that you use the public key — typically drawn as a padlock — to encrypt content, which can then only be read by the person holding the actual key to that lock. So I said: sure, I'm Alice, I'll encrypt this content using your lock and send it across. I did that; I sent the encrypted text to Bob.

This is where the whole story starts. Bob responds with an encrypted text of his own. I mention: hey, I don't actually have the private key to read this. Bob responds: just decrypt it using the public key. This is the point where the horror starts, because if you've been involved in crypto at any point, you know you can't decrypt things using the public key — you need the private key for that. I told that to Bob. Bob was like: no, no, it'll work, just try it. We got on a call, figured it out, and I explained how public-key crypto works. We fixed it by changing their side and teaching them: this is my lock, please use *my* lock when sending responses back to me. That's how we reached a solution. But it hurt to know that there are industries that rely on encryption — finance being the primary one — with absolutely zero idea of how it works.

So this is my first and foremost point. If you take one thing away from this talk, let it be this: don't implement your own crypto.

Another thing you should have noticed is that I was using WhatsApp as an example. That's intentional, because WhatsApp is an end-to-end encrypted medium at this point. This entire communication was already happening over HTTPS, a secure communication layer, at which point you don't need to encrypt your messages again. It would be the equivalent of writing a secret letter in code and then sending it over WhatsApp — you don't need to, because WhatsApp already encrypts all your messages. That's why this comes under useless security. You don't need to do this.
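To make the asymmetry concrete, here's a toy, textbook-RSA sketch in Python — stdlib only, with numbers far too small for real use. This is purely to illustrate why Bob's "decrypt with the public key" can't work, not an implementation anyone should ship:

```python
# Textbook RSA with tiny primes -- purely illustrative; never use numbers
# this small (or unpadded textbook RSA at all) in production.
p, q = 61, 53
n = p * q            # 3233, the public modulus
e = 17               # public exponent  -> (e, n) is the "lock"
d = 2753             # private exponent -> (d, n) is the only matching key

message = 65                         # a message encoded as a number < n
ciphertext = pow(message, e, n)      # Alice encrypts with Bob's PUBLIC key

# Only the private exponent recovers the message:
assert pow(ciphertext, d, n) == message
# "Decrypting" with the public key, as Bob suggested, does NOT work:
assert pow(ciphertext, e, n) != message
```

The two exponents are mathematical inverses of each other modulo (p−1)(q−1); applying the public exponent twice just produces more gibberish, which is exactly what Bob was asking Alice to do.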
It just tells me how bad you are at it.

The next thing I want to tackle is confidentiality. When we talk about confidentiality, we usually talk in terms of data in transit — how you encrypt content while it's moving from one place to another. You want to make sure no aspect of the data leaks. So when you do encryption properly, you take care of things like: encrypting the same content twice shouldn't produce the same ciphertext; if you're encrypting five characters or seven characters, you want them to pad out to a fixed block length of, say, eight characters, so an attacker can't figure out the length of your original message. There are many, many nuances involved in encryption.

And this is how Bob decided to implement encryption. If you're familiar, this is AES, the Advanced Encryption Standard — the de facto standard for encrypting data at rest. Encrypting with AES: fine. The mode, CBC: okay. What you should notice is the IV, the initialization vector. It's set to four @ signs, four ampersands, four hashes, and four dollar signs: `@@@@&&&&####$$$$`.

Why does this matter? For a given message, the initialization vector decides how it gets encrypted. Cryptographers will recommend you use a randomly generated IV for every single message you encrypt. Why? Because if you use the same IV over and over again, the same message encrypts to the same ciphertext every time.

Remember how, as kids, you used to create substitution ciphers to send letters? A becomes Z, B becomes Y, and so on. Let's take it a step further: what if you made the substitution for whole words? Say Nemo becomes N, HasGeek becomes H. Every time you wanted to send a letter, wherever "Nemo" appeared, you'd replace it with an N.
Every time "HasGeek" appeared, you'd replace it with an H. That works fine on paper. Except with digital encryption, almost everyone has the ability to get text encrypted: you can send a request to the server and it will encrypt it for you. So I send the server "Nemo", it sends me back an N — now I know every N in the ciphertext is "Nemo". I get "HasGeek" encrypted, I get back H — now every H in the ciphertext is "HasGeek".

It gets worse. Normally you encrypt the whole message together, precisely so as not to leak information about individual pieces of it, like "HasGeek" and "Nemo" being recognizable individually. If you encrypted the entire thing at once, two similar messages would look vastly different. Bob instead decided to do encryption on a word-by-word basis.

Why does that matter? Let's take a real example. Say this is the encryption table, like our Nemo-becomes-N scheme: 10 becomes `$$$`, 100 becomes `$$$$`, 568 becomes `@@@`, and 999 becomes `@@@@@`. Those are the only plaintexts and ciphertexts we care about. The attacker sends a first message — looking back at the table, you can reverse it — that says: 10 bucks from account number 568. The server happily encrypts it and returns the ciphertext. Then the attacker sends a second request from a different account: 400 rupees from account number 999. The server cleanly encrypts that too. Now, because the encryption is implemented word by word, the attacker can just mix and match the pieces and create a new message that was never actually encrypted by the server — and it's a completely valid message.
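To make the mix-and-match concrete, here's a small Python sketch of a hypothetical word-by-word deterministic scheme. The per-word "cipher" below is just a keyed-hash codebook I made up as a stand-in; any scheme where the same word always yields the same block has the same flaw:

```python
import hashlib
import hmac

SERVER_KEY = b"server-secret"  # hypothetical; the attacker never learns it

def enc_word(word: str) -> str:
    # Deterministic per-word "encryption": the same word always maps to
    # the same output block. That determinism is the fatal flaw.
    return hmac.new(SERVER_KEY, word.encode(), hashlib.sha256).hexdigest()[:8]

def enc_message(msg: str) -> list:
    # Word-by-word encryption, exactly like Bob's scheme.
    return [enc_word(w) for w in msg.split()]

# The attacker gets the server to encrypt two innocent-looking messages:
c1 = enc_message("pay 10 from account 568")
c2 = enc_message("pay 400 from account 999")

# Splicing blocks from the two ciphertexts forges a valid ciphertext for a
# message the server never encrypted: "pay 400 from account 568".
forged = c2[:4] + [c1[4]]
assert forged == enc_message("pay 400 from account 568")
```

A random per-message IV (or an authenticated mode over the whole message) kills both problems at once: identical words stop producing identical blocks, and spliced ciphertexts fail authentication.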
So what have we learned? Don't implement your own crypto.

Let's move to the next story. This one is about message integrity. By message integrity we mean: you have some content, you want it to travel from one place to another, and you don't have a reliable communication layer. If you had a reliable, secure layer — like WhatsApp — you could just use that. But assuming you don't, you want some message-integrity mechanism in place to make sure the message doesn't get tampered with, like in this example. Usually this is implemented via signatures.

And this is how Bob decided to implement signatures. This is a request — not the actual request; I've picked out a few fields, there are many more, and this isn't the real format at all — but the thing to notice is that it has a few fields: mobile, amount, currency, and an identifier. This is how they decided to generate the signature. HMAC is a fairly common, well-understood, well-researched cryptographic scheme for generating signatures — a perfectly valid way to sign content for message integrity. When you compute an HMAC, it asks for two parameters: what's the message you're signing, and how do you want it signed? The "how" part is the key. You need a key shared between the two parties, and as long as both have the same key, they'll get the same signature and can verify: yes, this message was not tampered with.

Over here, the message is the actual request — the entire request you're sending to the server. For the key, however, they decided to use the request ID. This should ring some bells: the ID is present in the message, and it's being used to sign that same message.

Let me do an analogy. Say we have a letter; let's call it letter number 568. I create a seal, and at this point only I have the seal.
I number the seal as well — seal number 568. I sign the message using the seal. Then I take an envelope, put the letter in it, put the *seal* in it too, and send it across unsealed. By unsealed, I mean this was actually happening over HTTP. Which means anyone could sniff this, take out the seal, read the entire content, and re-sign it if they wanted to change any parameters. Horror stories.

One more thing to notice is how the signature was added back into the request. Usually when you sign something, you take the letter and append the signature at the bottom, right? Even when you sign a document, there's a field at the bottom for your signature. You never insert the signature *into* the document, as you can see here. Why? Because at that point you've tampered with the document, which means there's additional complexity in extracting the signature and making sure the remaining message is the same message that was actually signed. Think of it this way: if I put a plastic seal on top of a document I'm signing, and the document has a photograph, and I put the seal on top of the photograph — the photograph isn't the same anymore. So yeah, these things happen. We got it figured out later, but: horror stories.

The next one is called "should have gotten it reviewed", or "I have no idea what I'm doing". This is from a spec you can find online, where somebody decided to write down how things should be done. They say: let's do an HMAC, to once again get message integrity and make sure content isn't tampered with. The spec describes how to generate the signature. To generate it, you need the content in a predefined format, and it explains how. It boils down to taking the app ID, the mobile number, and the device ID, and computing a hash of them — SHA-256.
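Here's a sketch of why putting the key inside the message is fatal. The field names and scheme below are made up to mirror the story, not the real API:

```python
import hashlib
import hmac
import json

def sign(payload: dict) -> str:
    # Bob's scheme: the HMAC key is the request ID -- which travels
    # inside the very message it is supposed to protect.
    key = payload["request_id"].encode()
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify(payload: dict, signature: str) -> bool:
    # What the server does on receipt.
    return hmac.compare_digest(sign(payload), signature)

original = {"request_id": "568", "amount": 10, "currency": "INR"}
assert verify(original, sign(original))

# An attacker sniffing the (plain-HTTP!) request has everything needed to
# tamper and re-sign: the "secret" key is right there in the message.
tampered = dict(original, amount=100000)
assert verify(tampered, sign(tampered))  # the forgery verifies cleanly
```

The integrity check degenerates into a checksum: it detects accidental corruption but offers zero protection against a deliberate attacker.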
And then, surprisingly, generating a token and encrypting that hash with the token using AES-256. AES-256 is actually pretty strong, but that's not the problem. The problem is that you're calling this thing an HMAC — that's the official parameter name in the spec — and it is technically not an HMAC. An HMAC is a precisely defined scheme, as I said earlier, with a well-known construction: how you take the key, how you pad it, how you mix in the message. If you look at the real definition, there's no "E" anywhere, because there is no encryption step involved in an HMAC. So this isn't necessarily very vulnerable, but it's entirely nonstandard and pointless in how it's done. You don't need to encrypt content here; encryption is not authentication. If you rely on the fact that encrypted content can be decrypted as your proof of integrity, you're already flawed. What you should do instead is use a proper, standard HMAC and call it that. But that didn't happen.

Okay, I think my talk is going to run short. This is the last horror story. It's called "set it on fire and try again" — this is just gross incompetence at work.

I'll do a primer on PCI DSS first, to help you understand this one. PCI DSS is the Payment Card Industry Data Security Standard. It's how the entire industry works — the sacrosanct standard for how things should be done. It covers how you encrypt things, how your employee policies should be maintained, who should have access to servers, how you should do logging, what content is allowed to be saved and what isn't, and how you should do encryption. It's a fairly comprehensive, well-defined standard for how things should work in the industry.
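For contrast, a real HMAC over the same kind of fields needs only a pre-shared key and Python's standard library — no token, no AES, no "E" anywhere. The key and field values below are placeholders:

```python
import hashlib
import hmac

# Shared out-of-band between the two parties; never sent in the message.
SHARED_KEY = b"exchange-this-key-out-of-band"

def make_tag(message: bytes) -> str:
    # HMAC-SHA256: a keyed hash. No encryption step is involved anywhere.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

msg = b"app_id=42&mobile=9999999999&device_id=abc123"
tag = make_tag(msg)

# The receiver recomputes with the same shared key and compares in
# constant time.
assert hmac.compare_digest(tag, make_tag(msg))
# Any tampering changes the tag:
assert not hmac.compare_digest(tag, make_tag(msg + b"&amount=1"))
```

Note the constant-time comparison: naive `==` on tags can leak timing information to an attacker probing one byte at a time.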
This is a section from version 3.2 of the standard, and it covers the thing the standard cares about most: cardholder data. What are you allowed to store, and what are you not? For example, the card number, technically called a PAN — primary account number. Are you allowed to store it? Yes, but "render stored data unreadable" per Requirement 3.4, which means you must encrypt it properly if you're storing it. You're allowed to store the name, and you don't need to encrypt that. You're allowed to store the expiration date without making it unreadable — which is why, when you save a card on some website, it can show you when it expires.

Then it talks about sensitive authentication data. There's the full track data — the magstripe data, everything present on the card — which you are not allowed to store. There's CAV2, CVC2, CVV2 — all essentially the same thing, the three or four digits on the back of the card — which you are not allowed to store. And there are PIN blocks, which you're also not allowed to store.

But you do have to *use* CVVs, right? If you're a payments company, you have to use the CVV to authenticate the request and make sure the customer entered the correct one. So how do you handle it? The standard says: if you receive it, render it unrecoverable upon completion of the authorization process. When we get audited, our auditor actually verifies that the CVV is rendered unreadable and unrecoverable once the request completes — after your payment processing is done, it should not be present anywhere on your servers, and it should not leave a trace on disk anywhere.
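As a rough sketch of "render it unrecoverable", assuming a hypothetical `authorize()` call: keep the CVV in a mutable buffer and overwrite it the moment authorization completes. (Caveat: a garbage-collected runtime like CPython can still hold stray copies; lower-level code would use something like `explicit_bzero`.)

```python
def authorize(pan: str, cvv: bytearray) -> bool:
    # Hypothetical stand-in for the real authorization call to the card
    # network; the CVV is used here and nowhere else.
    return len(cvv) in (3, 4)

cvv = bytearray(b"123")          # a mutable buffer, not an immutable str
approved = authorize("4111111111111111", cvv)

# Upon completion of authorization, wipe it -- and never write it to
# disk, logs, or a database in the first place.
for i in range(len(cvv)):
    cvv[i] = 0

assert approved
assert cvv == bytearray(3)       # all zero bytes
```

The point of the `bytearray` is that Python strings are immutable, so a CVV held in a `str` can't be overwritten at all — it lingers until the garbage collector gets to it.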
In fact, when we get audited, they make sure we zero it out even from memory, which is a pretty high standard.

So this is how I found Bob's code somewhere on the internet, from one of the companies in India. You don't even need to read the code to figure it out, because Bob has left lots of helpful comments — these are not mine, they were already in there. It says: this function will save encrypted token and CVV. The function is called saveCVV as well. It's from the Android app. Interesting. But it looks fine, right? Encrypted fingerprint, encrypted CVV — of course it's fine, guys. Then we go on: getCVV. It's still returning the encrypted CVV. That's fine; maybe they don't have the ability to read it back. Oops — they do. They can decrypt. Well, just because they *can* decrypt it doesn't mean they're using it; maybe somebody wrote the code and it never ran. Nope: setCardCVV — get it from the fingerprint, decrypt it back, use it in payments.

So yeah, this is a horror story, because somebody didn't just decide to write this code. Somebody decided: let's write this, let's ship it, let's call it a feature, guys — we can do payments without CVV now, let's sell it to our clients. And I know it was sold to clients, because it's also present in iOS. I can't really read this one, but look at the log trace: it says "encryption error" — and if encrypting the CVV fails, it saves the CVV to disk anyway. Nice things, guys.

Okay. So let me talk a bit about the learnings you should take from this talk. The first and foremost — I'll keep repeating it — is don't roll your own crypto. If you're rolling your own crypto, you're doing it wrong. This is the golden rule of the cryptography field: if you're not an expert, don't do it, and get whatever you're doing reviewed.
In many cases in the payments industry, you don't actually have control over what crypto is used when interfacing with third parties. They essentially say: this is how our systems work, and this is how you must interface with them. But it's our duty, as members of the industry, to find ways to get it improved.

Another thing: don't bolt security on afterwards. This happens a lot — somebody builds a product and then tacks security on top of it. You say: okay, I've built this, now how do I add security as a feature on top? Instead, think of it as: I have a product, let's engineer it from the ground up with security taken into consideration.

Use the basics. Rely on TLS — if you have two parties talking to each other, just use TLS, make sure the channel is encrypted, and do certificate verification properly. If you have data at rest, use PGP to encrypt it. If you're hashing passwords, use bcrypt. If you're checking message integrity, use HMAC — but don't use keys that are sent in the message, guys. Use standard authentication; don't build your own authentication protocols. Just because it's hand-rolled, custom, or proprietary doesn't mean it's any more secure — it's very likely less secure.

One other recommendation: if you're doing crypto in-house — as in, the client and the server, or all communicating parties, are under your control — I would highly recommend looking at NaCl, a well-researched, well-peer-reviewed cryptographic library that's available for multiple languages. Just use it. There's also Google's Keyczar, a very similar project. What these give you, essentially, is the ability to use crypto without shooting yourself in the foot: ready-made constructions on top of the basic crypto primitives.
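The "rely on TLS and verify certificates" advice can be sketched with Python's standard library. The host name below is a placeholder; the point is that the default context refuses unverified peers:

```python
import socket
import ssl

# create_default_context() gives sane defaults: certificate validation on,
# hostname checking on, obsolete protocol versions disabled.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

def head_request(host: str, port: int = 443) -> None:
    # server_hostname enables SNI and hostname verification; a mismatched
    # or untrusted certificate raises ssl.SSLCertVerificationError.
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.1\r\nHost: " + host.encode()
                        + b"\r\nConnection: close\r\n\r\n")
            tls.recv(4096)
```

The common failure mode is the opposite of this: code that sets `verify_mode = CERT_NONE` to silence certificate errors, which reduces TLS to encryption-without-authentication — exactly the "useless security" pattern from the first story.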
It just says: encrypt content, decrypt content, generate keys, use them. It doesn't let you set the low-level parameters yourself; it ensures there's no way for you to screw up.

So yeah, those are the learnings. I'll leave you with the quote: if you're typing the letters A-E-S into your source code, you're doing it wrong. I'll open up for questions now. Thank you.

[Moderator] Thank you, Nemo. Before we go to the questions, two quick announcements. One: Ajay Ramasubramaniam, chief of Zone Startups, will be available in the lounge outside from 12 to 1 for one-on-one discussions with participants who have ideas for payment products or want feedback on their existing payment startups. Two: please check in again today for your food coupons. With that, I'll open the house to questions on what we've just heard from Nemo.

[Audience] Hey, my name is Sandesh, I work in appsec as well. Can you quickly go back to the confidentiality slide? I think I missed something there.

[Nemo] Not sure which one — the confidentiality one? Okay.

[Audience] My crypto may be a little rusty here, so correct me if I'm wrong, but the attack you spoke about applies if they use the same key — or the same key and the same IV. And when you said replay attack, did you actually mean a known-plaintext attack, or actually replaying it?

[Nemo] The problem here is that if you know a plaintext — say, "HasGeek" — then its ciphertext is always the same, because the key and the IV are both fixed. It's a replay attack in the sense that you can actually do this: you can send the same message, the entire signed message, again, and it's gladly accepted, because there's no nonce and no IV differing between requests. And because they're doing the encryption piece by piece, you can also mix and match things around.
So yes, both of those attacks are possible.

[Audience] Can you clarify? So the problem you're saying is that you can send the same message again and again and it's accepted? But that's nothing to do with—

[Nemo] Yes, that was present as an issue, but it's not the major thing here. It was actually mitigated because they were doing checks as well — which makes the whole charade of encryption useless. If you're doing broken encryption and then layering checks on top, when anyone can very easily decrypt the ciphertext back anyway, it's entirely pointless. You should just do your crypto properly.

[Audience] Okay, that's fair.

[Moderator] Any further questions? Yes, at the back.

[Audience] I understand WhatsApp messages are encrypted — that's what I learned from your example. At the same time, yesterday we heard that two-factor and multi-factor authentication are very important. But today we use SMS for OTPs as a second factor, and apps on Android can read our SMSes, and SMS is not encrypted — yet we use it for two-factor authentication. Why are SMSes not encrypted? Why is there no mandate from the private sector or the government?

[Nemo] What we need to do as an industry is step away from SMS as a second factor — that's my stand on it. One of the major pushes in this direction is that NIST, the National Institute of Standards and Technology in the US, has released guidance saying SMS is no longer considered acceptable as a second factor; if you're using it anywhere in your systems, please replace it. That's my recommendation too. There are far better ways of doing two-factor: you can use standard TOTP, or you can ship hardware tokens to your customers if it's high security. SMS is not protected to a high enough standard anymore — especially in India, over GSM.
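Standard TOTP (RFC 6238) is small enough to sketch with the Python standard library. This is a minimal illustration, not a vetted implementation — in practice you'd reach for an audited library:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # RFC 6238: the moving factor is the number of 30-second steps since
    # the Unix epoch, fed into HOTP (RFC 4226) with dynamic truncation.
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 seconds -> 8-digit code "94287082".
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

Unlike an SMS OTP, the code is derived locally on the user's device from a shared secret and the clock — nothing secret ever crosses the (unencrypted) carrier network.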
[Audience] So, you know, we talk about encryption and all of these things, but the bigger problem — I'm sure you've all seen this — is that for any one transaction online, you get 15 SMSes and 25 emails. There's no security by exception here; it's "bombard the user with SMSes" and expect them to keep track of everything. That's not really security; it's not helping anybody. Is anybody talking about this?

[Nemo] No, I don't think anyone's talking about it. One of the major reasons it happens is that there are so many parties involved. For example, your merchant is delivering your food: they want to tell you, as many times as possible, that it's been ordered, somebody's picked it up, it's on the way, it's almost there. They want to send you all these nifty notifications — except we still don't have push notifications working reliably in India, because merchants and app companies here aren't ready to rely on them. So they send an SMS as a backup.

[Audience] Yeah, but the thing is, you just have to enter your number somewhere, and then — it's not just transactions, right? Marketing SMSes come too, and there's no control over how to stop them. You buy something somewhere, and suddenly everybody has your number.

[Nemo] I've managed to block a few things, so it does work. But if you've willingly given your number to someone you're subscribed to — in the sense that you're a customer of theirs — for example, Uber will want to send you an SMS when they charge you. You can ask Uber not to, and if they refuse, use somebody else.

[Audience] The second part of the same question: because your number is tied to everything you do, and it's known to your friends, and any app on their phones is uploading your number to a server in the name of ease of use — the phone number is our least common denominator, and that's where the attacks can happen.
As a security researcher, you can see that being our Achilles heel, our single point of failure. So what do you think about it?

[Nemo] If having somebody's phone number lets me take over their accounts, I believe that company should be shut down. Having somebody's phone number shouldn't let me do all these things. Compare email: if I have your email address, what are the reasonable expectations? I'll be able to send you some spam, and you can block it — that's how the internet was built. If I have your phone number, I'll be able to send you SMSes, and you can't block those as easily.

[Moderator] Do we have any other questions? Yes.

[Audience] Coming back to standards: when you look at PCI, most PSPs comply with the PCI standard. But when you look at the banks — the banks' digital payment divisions probably go through PCI compliance, but the bank itself, as an organization, has no standards.

[Nemo] Correct.

[Audience] Don't you think we need an industry standard, if not from the RBI, requiring the banks to fix their Trojan horses? An employee can be an issue. You're not looking at threats only from a digital vector, right? There are other vectors — social engineering vectors — which are definitely possible, and I don't think PCI scales there. And there's a slight difference between a standard and a law: standards are recommendations, but when it's an RBI regulation, or a law, you need to comply. I don't see anything happening on this front.

[Nemo] Yes, that's correct. At a rough count, I believe there are fewer than five banks in India that are actually PCI DSS certified. It's actually harder for banks to get certified, because they're usually the issuers.
Think of it this way: I go to a bank branch and get a credit or debit card issued. How many people have seen that card number already? Probably a lot, because it was handed to you by the branch manager, and so on. If you call a customer helpline, they'll very likely ask you for your card number. This happens — my bank in particular has been asking me to send my debit card number over SMS in order to reset its PIN. So yeah, banks in India are mostly not PCI DSS certified. We need to at least get them certified before we think about anything better. And as you rightly pointed out, PCI DSS is not law, not regulation. It's a consortium that says: if you're doing card processing anywhere in the world, you should abide by these standards — and the banks hold *us* to that. When we go talk to banks about doing card processing with them, as an acquirer or an issuer, they say: get PCI DSS certification before we'll even talk to you. Except the banks don't hold themselves to the same standard, at least in India.

[Audience] A follow-up question to that: do you think it makes sense for the RBI to mandate something like this for banks?

[Nemo] It would be really, really hard.

[Audience] I mean, there are fewer than five banks in that set. But suppose tomorrow the RBI says all banks which undertake, or which allow other people to process, card transactions have to meet this particular security standard. Do you think it should be written into law, or would you leave it to the market?

[Nemo] I would really like that. I know Axis Bank got certified around 2008 or so, so it's doable for a large enough bank. And if it's doable, yes, we should hold our banks to a higher standard.

[Moderator] We'll take some more questions. Sure, go ahead.
[Audience] The RBI has recently mandated that its cybersecurity framework is now applicable to PPI providers as well — the wallet providers. From a readiness perspective, what would you recommend be done over and above what a typical PPI provider would already have? PCI DSS is not mandatory for PPI providers — unless, of course, they're storing card details — but if PPI providers are expected to rise to the level of the banks, what, according to you, do they have to do?

[Nemo] They should be audited, at the very least. When we talk about PCI DSS, its primary concern is whether card data is being handled properly; everything else is viewed from that angle. For wallets and PPI service providers, you'd likely look from a different angle. At the end of the day, a PPI licensee holds a record in a table somewhere that says Nemo has 2,000 bucks in their wallet. Can that number be increased or decreased by anyone? How many people have access to it? So the questions change: even if the data is unencrypted — even if my balance is known to other people — how many people have access to it, and can it be changed at will? You'd have to look at things from a balance-integrity perspective, at the very least. But yes, things should be done.

[Moderator] Any more questions? No? Cool. Thank you.

[Applause]

[Nemo] I'll upload