Yeah, hi. Hello everyone. Welcome to the panel discussion on data security challenges for fintech companies. We have a very interesting panel with us: Ashwath from Razorpay, AVS from Zeta, and Ankur from PhonePe. The format I have in mind is that the panelists will share their initial opening thoughts on the subject, then we have some specific speaking points, and then we will open it up to the audience for Q&A. To begin with, fintech is a space that has seen phenomenal growth in India in the last decade, and I think all of us here are on the front lines of cyber security. We've interacted or worked with fintech companies, and we've seen a marked evolution in their approach to cyber security: from the time when, I remember, there was no concept of a CISO at some of the largest startups even as they reached unicorn and multi-unicorn valuations; to the point when they started to say, oh, we'll do everything in-house, and they were using Apache Metron, and I'm sure many of them still are, with strong engineering teams who managed to get value from that beast of a platform; to the extent where now it's a very mature practice, security is shifting left, and there are dedicated security teams. All three panelists here are part of, or leading, the security teams at their respective organizations. There are significant investments being made in cyber security. Some of that has been driven by investor pressure, some by companies looking to list publicly, and of course a lot of it gets driven by regulatory pressure, which we will talk about shortly, specifically in the context of the CERT-In guidelines that have been released recently. With those opening remarks, I was going to go to each of you alphabetically, but I now realize that all three of you start with the letter A, so I will have to do a little more hard work. Yes, Ankur, we'll start with you.
So, any opening thoughts you have on the subject in terms of what you're doing at PhonePe, and then we'll dive in. Sure, thanks, KK. I think I'll start with the word fintech itself and dig into why this word came about. There's "tech" in the word, and that's where the whole difference is. These companies that we call fintech are basically tech-oriented companies. They call themselves tech companies, not financial organizations or banks, and when tech plays such a central role in a company dealing with financial data, security takes a much higher priority. All these companies are digital first, which means their customer interactions have to be really, really frictionless, and they spend a lot of time improving their customer experience. These are the companies you might be using for instant fund transfers, or they promote lending in 10 minutes: these kinds of crazy products where everything is very fast. And hence, when you talk about security here, when you have to do an instant funds transfer, a fraud team, or let's say a trust and safety team, has very little time to understand what the transaction is about, and they have to do a lot of work in the background without compromising the speed. So in short, I think there's a lot to handle. There's a lot of data to be taken care of, and the data can be anywhere: it might be with your third party, or now we call it fourth party, fifth party and sixth party; we don't even know about it. So I think there's a lot to do here. These are my opening thoughts. Yeah, thanks Ankur. That's an interesting insight. We'll come back to the speed and agility part and how security needs to keep step with that. We don't want the old-school banking style where you say, oh, I'll release an RFP for doing VAPT of my app. That's correct. Thank you. So Ashwath, over to you from Razorpay.
You can share some thoughts with us. Sure. Thank you, KK. Thank you, Ankur. So, a quick intro: my name is Ashwath. I work as a staff engineer on the security team at Razorpay. I head cloud and infra security, and I also help out with data security and a little bit with security monitoring. To add to what Ankur said, I think all three companies have a great responsibility on our shoulders, because there are multiple businesses that depend on our success. If something goes bad here, end customers are impacted. For example, in the case of Razorpay, a merchant will not be able to do their transactions, and with PhonePe and Zeta, a Kirana shop will not be able to make payments. So that's the impact each of these companies has. A long time ago, one of my managers at Microsoft told me this: think of an x-axis of usability and a y-axis of security. If you increase usability, then security drops. So you need to strike a fine balance between the two, where you make the system usable for both the developers and the end consumers, keep the product folks happy, and also have security in play. I think the easiest way, and we as a security team need to change our mentality here too, is to bake security in and make it super easy for developers, so that they don't have to put in additional effort. We as a security team can put in more effort to make life easy for developers, and security becomes part of the development process. That's all from me. Great, great, great. Thank you, Ashwath. AVS, over to you. Hi, good afternoon everybody. I'm Prabhakar. I'm the CISO at Zeta.
From the information security or cyber security perspective, at this scale, visibility is very important, because of the kind of data that is coming in and the kind of impact it has on the consumers, the customers, and the experience they face as individuals. We all use many of these services and applications; we know how it really hurts, and the trauma we go through, if something goes wrong. So that's very, very important. Security has come to the fore for many reasons. One is self-discipline, as a culture; that is one part of it. Second is compliance, through the regulations, and to a great extent through implementation of best practices. You achieve a lot in these areas through the organization's culture and its commitment to security implementation. These are all very important. As an industry, fintech happens to be very sensitive, because you deal with a lot of financial products and services. Otherwise, as a subject, information security or cyber security is very important across domains, irrespective of the industry, because the adversaries' tactics, tools and procedures are all changing; they are very dynamic. So in a nutshell: practically, you don't have any chance to slip, but the adversary requires only one chance to succeed. That's how it is. It's a highly imbalanced game. Yeah, asymmetric warfare, right. Great. Thank you, Prabhakar. I'll come back to you shortly. I want to start with Ashwath, since you're covering a wide range: app, cloud, infra. Can you give us some practical tips and tools? You mentioned this term, baking in security, right?
So could you give us some tips, tools, maybe some examples from your experience of how you've tried to do this at Razorpay? So I think the first and key piece is identifying all of the data sources. When we started looking at the data security problem, the first step was to identify all of the data sources, right? Data can sit in something like Slack or Google Drive; for example, we use Sumo Logic for logging, and a couple of other different places. And there are tools for this: if you pick Google Drive or Slack, there are automated tools where you just give them a pattern and they go after it. So step one: identify all of the data sources. Step two: identify all of the pieces of information which are sensitive, so that you can then draw regexes out of those. The third piece is risk ranking: whatever is your most risky data source, and whatever has the most exposure, go after those one after another. For example, at Razorpay, we started with Slack and then we went to Sumo Logic; those were the two things we were tackling. And some lessons learned here, right? With Sumo Logic, one thing we learned was that developers would like access to the complete logs, because they want to figure out, okay, who's the customer, which merchant is doing a transaction, what is the time, what is the amount, and all of those details. But at the same time, you can't expose everything, so you have to have some kind of restriction on what is exposed and what is not. And that's where, if you do it post facto, it's always tricky. What we learned was that it's best to do it at the ingestion layer: any time a log is getting written, this ingestion layer will check if it matches the regex, and either remove it, or sanitize it and just keep the last three digits or whatever, right?
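The ingestion-layer sanitization Ashwath describes can be sketched roughly as follows. This is a minimal illustration, not Razorpay's actual pipeline: the regex patterns, field formats, and digit counts kept are assumptions.

```python
import re

# Hypothetical patterns; a real deployment maintains a much larger, tuned set.
PATTERNS = {
    "card": re.compile(r"\b(\d{6})\d{6}(\d{4})\b"),       # 16-digit card number
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
}

def sanitize(line: str) -> str:
    """Run at the ingestion layer: mask card numbers down to the first six
    and last four digits, and redact emails, before the log is written."""
    line = PATTERNS["card"].sub(lambda m: m.group(1) + "XXXXXX" + m.group(2), line)
    line = PATTERNS["email"].sub("[REDACTED_EMAIL]", line)
    return line

print(sanitize("payment by 4111111111111111 from user@example.com"))
```

Doing this once at the write path, rather than cleaning logs post facto, is exactly the "spot fix vs. complete fix" distinction that comes up later in the discussion.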
So that is one thing that was a lesson learned. And at the application security level, any thoughts you want to share on how you bake security into your custom APIs, etc.? Sure. I'll just touch specifically on the logging layer. One thing we found was that we do development mostly in maybe two or three languages for the most part. So for all of these, we have written our own logging library, called the secure logging library, and we give the developers three options for sensitive data: one, you can mask it; two, you can hash it, if you really need to be able to look it up; or three, you can encrypt it. Encryption is the last option: we prefer that they remove everything or mask it, hashing if they need lookups, but encryption is the last resort. And in general, when it comes to application security, one thing we have learned is that Semgrep has worked out pretty well for us, and we are writing a lot of custom rules; we are in constant touch with the company, and SAST has worked out well for us. So this tool you mentioned, Semgrep? Yes, it's called Semgrep. There is an open source version of the tool, where you get a bunch of checks for free, and then in the enterprise version you can do more things: you can schedule scans, get enterprise support, add more rules, and so on. What we have done is make it part of the pipeline, where Semgrep runs automatically. And again, this was an iterative process: we did not turn it on in blocking mode initially, since we had to clear up all of the old debt first, and now, part by part, we are turning blocking on. Great. Thank you, Ashwath. So now, Ankur, one aspect that I wanted to explore further with you. My question to you has two parts.
One is, if you would like to add to Ashwath's points on shifting security left, embedding it more deeply into the product life cycle, please feel free to share insights on that. And you also mentioned the speed aspect, which ties into integrating security into the product life cycle as well. So can you also throw some light on how you help security keep up with that speed, because security is always seen as a laggard or a bottleneck in releasing new features or new modules? If you can throw light on both of those aspects. Sure. So in terms of fintech, and generally speaking about startups, I think this is my third startup in India in the last nine to ten years. I'll come to the DevSecOps part, but just a bit of background before I start. In the majority of companies, even now, whether or not they have their own security team, QA, product and the development teams work very well in tandem with each other. Developers understand QA is very, very important because they see QA as a sort of enabler; by enabler, I mean they don't treat them as a blocker. They know that if QA is not properly done, their functionality might break in production; the basic core functionality might not work. In terms of security, though, security teams are usually treated as blockers: they will block something, they will identify some security issue, and then the release cycle will probably have to be postponed, or something of that sort. The shift-left mechanism was meant to change that: to identify security bugs much earlier in the process. And the more you shift left, and I can see that happening successfully in the majority of companies these days, the more the speed of finding new bugs and getting them fixed earlier in the cycle improves.
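One common way to implement the "clear the old debt first, then turn blocking on part by part" rollout that Ashwath described for SAST is a baseline diff: pre-existing findings are recorded once, and the pipeline fails only on findings that are not in the baseline. This is a generic sketch of that idea, not any panelist's actual gate; the finding shape is assumed.

```python
# Illustrative shift-left CI gate: block the build only on *new* SAST
# findings, so the pre-existing debt can be burned down separately.

def new_findings(current: set, baseline: set) -> set:
    """Findings in this scan that were not in the recorded baseline."""
    return current - baseline

# Each finding here is an assumed (rule_id, file, line) triple.
baseline = {("hardcoded-secret", "pay.py", 10)}
current = {("hardcoded-secret", "pay.py", 10), ("sql-injection", "api.py", 42)}

introduced = new_findings(current, baseline)
if introduced:
    print("blocking build, new findings:", sorted(introduced))
```

As old findings get fixed they are removed from the baseline, which is how blocking mode gets turned on "part by part".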
And as rightly mentioned by Ashwath, I think Semgrep is becoming very popular these days. We are trying to explore it and have started writing custom rules of that sort as well. Our SAST and DAST are again sorted out. We have identified tooling around how to identify dependencies, and how to block if there's a dependency with vulnerabilities above a certain severity level. I think shifting left has helped us a lot. Another thing that has helped us, and I consider this part of shifting left itself, is that beyond the tooling, we started doing PRD reviews and design reviews. This is probably not part of the tooling side, but it happens even earlier, before developers have even started coding. We understand the design, we understand the PRDs, and we start figuring out and calling out any security issues. And based on these design documents, we also give them a best practices document: this is what you have to take care of while you're coding or deploying the code. So this has helped a lot in maturing security at the various companies I have worked at in the past. Superb. I'll come back to both of you with more questions on SAST and tooling at the ground level, but I wanted to shift the focus a bit to the governance and culture aspect. So Prabhakar, you have referenced this a few times in your opening remarks and the discussion we were having earlier. Could you throw some light on how you've done it, or what advice or insights you would have on how to change the culture, how to bring about more awareness across the layers, from management to product leads to developers, testers, etc.? One part is awareness, and that's very critical. So make awareness truly interactive. Second, we have to ensure that it is ingrained. How? Whenever you're developing a product, we used to talk about stated requirements and implied requirements. There's nothing called implied now; it is taken for granted.
So security is granted. Nobody will come and tell you that you should have a rich UI or this kind of feature; in the same way, security is taken for granted. You have to ensure the security requirements are taken in right from the start, apart from the functional requirements, the kind of features and USPs and all. For security you have to do it for every requirement; an application calls for some 14, 15 important requirements to protect yourself, whether from an access perspective, or to support your incident management, or your application security or mobile application security. There are various things you do: various processes, various checklists, various tools we use. So we have to tell people that these are all granted. Nobody will come and tell you why your password policy is weak, why logging is not happening, why certain fields are not captured at login, or why you have to do masking of a particular field. By design, we have to tell them: for the kind of data you get into your network, think about how you are transmitting it, how you are processing it, how you are storing it. It's a culture. You have to understand: yes, if in doubt about how to handle this field, better to protect it. It's by default. Don't even ask the question whether a particular standard is asking you to mask it, or a particular standard is asking you not to log it. Don't even get into that. Just think: when I'm coding, I'm getting this input for my processing. Should this input really be kept plain? What is the importance of this particular piece of data as part of my product? Then take a decision. Yes, it is very sensitive; I know, because we are all human beings, and by common sense we know. I get an SSN. How do I handle it? Is it sensitive or not? As individuals, because we are all coding, while coding itself, we know that. So we have to take the decision.
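Prabhakar's "if in doubt, protect it by default" stance can be expressed as a default-deny rule: a field appears in the clear only if it has been explicitly allow-listed, and anything unknown is protected. A minimal sketch; the field names and allow-list here are hypothetical.

```python
# Default-deny field handling: a field is logged in the clear only if it
# is explicitly allow-listed; anything unknown is protected by default.
PLAIN_FIELDS = {"txn_id", "timestamp", "status"}  # assumed allow-list

def classify(field: str) -> str:
    """'plain' only for known-safe fields; everything else is 'protect'."""
    return "plain" if field in PLAIN_FIELDS else "protect"

record = {"txn_id": "T123", "ssn": "123-45-6789", "status": "OK"}
handled = {k: (v if classify(k) == "plain" else "***") for k, v in record.items()}
print(handled)
```

The point of the design is that a developer never has to look up a standard to decide: a new or doubtful field is protected until someone makes the case to allow-list it.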
Of course, we provide the policies, checklists, documents, compliance requirements, a lot of things. But we have to make sure certain of these things become part of our organization culture as well. Of course, you have policy, you have procedure, you have compliance, all those things. But even otherwise, while coding itself, you know what kind of data you are taking, how you are processing it, and what kind of output you are giving. While doing your coding, you know what that piece of code is doing for you, what kind of request you are getting, and what kind of response you are giving. So if you apply a little bit of brain there, automatically a lot of things will be ingrained and a lot of things will be taken care of. We have to make sure this thought process takes hold. I agree with you, but I want to dig a bit deeper with you as to how you get this. Yeah, of course, one part is that you write the policies; you tell them these are all the sensitive data we get, identify your inventory of data, what you get and how you are processing it, how you are storing it, and what the implications are of all those things. You keep telling them. At the same time, they have to apply their own thought process around all these areas while coding. For me, coding is a craft. So you have to craft it very carefully so that there are no loose pieces. Ensure that your craft is correct and, while you are crafting, that you are taking care of all the pieces properly. The minimum guidelines are given, the minimum knowledge is provided, the policies are provided; a lot of things we provide. But ultimately, the person who is doing the work has to ensure that these inputs are grasped and implemented.
So that is where we have to give a lot of education: give a lot of examples, work with them on the policy document. Today you have a policy; suddenly a new requirement will come. Instead of waiting for the policy to catch up, they should think: yes, now I've got a new kind of data, can I handle it like this? Of course, you have a lot of gates, a lot of tooling; we do all those things. I'm purely talking from a governance and culture perspective. If an individual questions himself about the kind of data he is getting, what the piece of code he is writing does, how he is ensuring the security and privacy of whatever he has to process, and what kind of output he has to give; if that process is put into the minds of the people, whoever they are. One part is only the application development side, but there is a large chunk of people not associated with development, and they are also weak links. That is where governance and culture are very important. Great, thanks Prabhakar. So back to the other two of you now; either of you can just raise your hand and choose to answer. First, I want to go back to Ashwath's point of figuring out what data you are handling, where it is stored, and what formats it is in: the basic data discovery exercise. Any tools that you would like to recommend, or have you largely done it using homegrown scripts? I can take a stab first and then you can go next. So we evaluated Macie, AWS Macie. We are almost a complete AWS shop. So we did evaluate Macie. Macie is a managed solution by AWS where you can give it formats, and it will go through S3 buckets and a couple of other resources and tell you if there's any sensitive data.
So that is one piece that we evaluated, but it didn't really work out for us, because S3 was just one of the repositories; we had multiple other repositories. So we are still evaluating Macie. The other piece that has worked for us: at Razorpay we have a big data lake, which is not specifically on AWS; it's built by ourselves, mostly on the Apache stack. So we use this tool called Apache Ranger, which is specifically meant for this purpose: if there is any sensitive data coming in, you can either mask it or hash it or encrypt it, or you can just completely drop that particular field. So that is another tool in our toolkit. And the third piece is we had to do things ourselves; it was more trial and error, and we wrote it at the ingestion window. I hope that answers your question. Yeah, great. This is the same Sumo Logic piece where you're ingesting the logs; you wrote parsers at that stage? Correct. At the ingestion point itself, we wrote the parsers so that they would drop it right at the ingestion layer. Earlier, we used to do cleanup, and every now and then we would say, hey, personal information is there, or sensitive information is there, and we would go do the cleanup, but it would be a spot fix. We knew in the back of our heads that this was a spot fix and not a complete fix. So that's where the ingestion layer came in. Great. Thank you, Ashwath. Ankur, would you like to weigh in? Yeah, I think we've tried a similar hit-and-trial approach in the past. Mostly homegrown scripts have worked for us, and they are working well for us right now. Since we're not using any public cloud infrastructure as of now, and we have a private cloud, homegrown scripts work well for us. Great, great. My next question, again anyone feel free to step in: container security.
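The homegrown-scripts approach both panelists mention might look, in rough outline, like a walker that applies one shared sensitive-pattern set across whatever repositories it can reach. The paths, file types and patterns below are assumptions for illustration only.

```python
import re
from pathlib import Path

# Assumed pattern set; a real script covers every format the organization
# considers sensitive (card numbers, Aadhaar, PAN, emails, ...).
SENSITIVE_PATTERNS = {
    "card": re.compile(r"\b\d{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
}

def scan_text(source: str, text: str) -> list:
    """Return (source, pattern-name) hits for one document or log."""
    return [(source, label)
            for label, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

def scan_tree(root: Path) -> list:
    """Walk one repository on disk; other stores (Slack, Drive, S3) would
    be crawled via their own APIs but feed the same scan_text()."""
    hits = []
    for path in root.rglob("*.log"):
        hits.extend(scan_text(str(path), path.read_text(errors="ignore")))
    return hits

print(scan_text("app.log", "card 4111111111111111 used"))
```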
I assume most of your environments are partially or fully containerized, or at least there is some initiative towards that. So, container security: any thoughts on how you are securing your containerized environments? I'll go first, and probably Prabhakar or Ashwath can take a shot at this. What we have done till now is we use K8s, and I think most of us would be using it as of now. And we have hardened our base images. We have a lot of automation around whenever a container is formed and a deployment is happening. So our base images are sorted out, and if you have to make a change in a base image, there is a lot of approval process needed, so it becomes nearly impossible for anyone to change the core base image. That's how we are trying to sort things out. Okay. Ashwath, more specifically, how are you monitoring container security? Because you have done, I think, a fairly intense deployment of Sumo Logic. Does that cover your containers also? Right. So container security, let me break it down into three pieces. The first piece is where the container images themselves are stored: the container repository, like what Ankur was talking about. We do a scan there, and there are open source tools like Trivy which scan the container image. So that's the first piece: the base image is secured, and we also run a scan on the image itself. The second piece is the control plane of Kubernetes; we also use Kubernetes, and since we are hosted on AWS, AWS gives us some functionality around the Kubernetes control plane. This is where we run some checks. This is covered by most cloud security posture management tools, which look at the control plane of EKS, which is AWS's managed Kubernetes offering.
And then there's the third piece: the Kubernetes cluster itself. This has maybe two or three sub-parts. The first sub-part is around access; you would need access to the Kubernetes control plane, or kubectl. This is where we look at the manifest files, at the container level itself, and we also look at how the pod is configured, and so on. We also look at pieces like whether the secrets are securely managed and what kind of port access is present, across all three sub-parts. So the first sub-part is taken care of by tools like Trivy, the second sub-part by the CSPM, and for the third sub-part we are currently using an open source tool called kube-bench. This has some basic checks, but we are also exploring a few more. All right, excellent. Prabhakar, would you like to add? Yeah, again, there are two parts, not two, three or any number of parts. Before the container even runs, a lot will be taken care of as part of the CI/CD itself: making the image, and first of all hardening that image, either our own or one we subscribe to and take. That is one part. And it also goes through your CI gates, various tools that scan it before it is pushed. The second part is runtime security. Again, there are various tools you can use to detect and alert there, and there are certain controls on the cloud provider side; native tools are also available to implement those controls. On top of it, we have our own ELK stack we built, so that is part of your security monitoring: event management kind of rules based on those activities, at every layer. That is another piece. I don't want to get too much into the architectural details, but basically this is how it works: one is the CI/CD pipeline, then the runtime, and then observability, as they call it. So that's how the monitoring happens.
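The manifest-level checks Ashwath mentions (pod configuration, secrets, port access) can be sketched as simple rules over a parsed manifest. The specific rules below are common CIS-benchmark-style checks of the kind kube-bench reports, not the panelists' exact list.

```python
def check_pod(manifest: dict) -> list:
    """Flag a few common insecure pod settings from a parsed manifest
    (the dict a YAML parser or kubectl would yield)."""
    findings = []
    for c in manifest.get("spec", {}).get("containers", []):
        sec = c.get("securityContext", {})
        if sec.get("privileged"):
            findings.append(f"{c['name']}: privileged container")
        if not sec.get("runAsNonRoot"):
            findings.append(f"{c['name']}: may run as root")
        if sec.get("allowPrivilegeEscalation", True):
            findings.append(f"{c['name']}: privilege escalation allowed")
    return findings

pod = {"spec": {"containers": [{
    "name": "api",
    "securityContext": {"privileged": True},
}]}}
for f in check_pod(pod):
    print(f)
```

Note the defaults: missing `runAsNonRoot` and missing `allowPrivilegeEscalation` are treated as findings, which matches the default-deny posture discussed earlier in the session.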
And it's, you know, again, a feedback loop, and I can correct those things. That's how it runs. Yeah, since we have you, Prabhakar: one very important point you brought up in the initial conversation, and I think you're hinting at it now as well, is visibility. Can you tell us a little bit about how you've worked on improving visibility? And when you say visibility, I'm not sure exactly what you're implying, but maybe you can touch upon its different aspects: the metrics that are reported to senior management, the security team's visibility into the IT landscape, the IT estate that you have. Different aspects of this visibility, if you can throw some light on them. Yeah, basically I wanted to talk at the conceptual and policy level, for various reasons. What I mean by visibility is that we should have clarity on what we are trying to protect. That's very important. If we are very clear on what we are trying to protect, the big picture, then it's layered security. It is not one tool or one particular activity in your environment protecting the entire stuff. Once you know these are all the critical assets to protect, these are the flows, then we apply that layered security and architect our entire environment around it. We build our policies, we select our tooling; all those things happen on that basis. Then critical monitoring: because otherwise your metrics will grow to many. You can write some 30, 40 metrics, or 100 metrics also. But what are your absolutely critical metrics, or behavioral metrics? Or, you know, we talk about micro-segmentation, zero trust: what exactly are you trying to do? Certain metrics from an access perspective are very critical for you.
Certain metrics from a behavior perspective are very critical for you. Certain metrics from the system and resources side. So that is what is very critical. Everything is very tailor-made. I never suggest a lot of generic practices. Practices are good, policies are very good, but take them and look at your environment: how do I map these policies, this generic stuff, to my environment? If today I'm in healthcare, the critical environment, the critical data I have to protect, is totally different from a manufacturing setup. So that is the point I wanted to make. So take all your policies, which are all at a very generic and conceptual level; everything is good, but try to apply it to your environment and try to get visibility, okay, at the perimeter level, right from the edge to your database or to your core. Just break it into small, small chunks and blocks and see how it is working, and see where you have to put the gates or filters or alerts, hooks, whatever you call them. So yeah, I'll stop with this; I don't have to go too much further. Yeah. Ankur, anything in terms of metrics, reporting: how do you measure your success at shifting security left? So when we run all these tools we are talking about, name any tool, for containers, Trivy, or Semgrep, or any tools of this sort, there are a lot of consumers for this kind of data, or report, or vulnerability, whatever you want to call it. The consumers might be developers, the consumers might be the security team itself, or the consumers might be the leaders who want to see a general dashboard on how we are progressing in security, or a single big score for the organization: where are we right now? Let's say out of 100, we are 80 right now. How do we make it 82 or 83 next quarter?
So I think we are still working on it, but we are building a central vulnerability management dashboard where we can see everything going on across all the systems and all the tools, and probably start a scan if needed from that dashboard UI itself. And internally, based on roles, developers, leadership and security have different levels of access, where they can see exactly what they need. So that's how we are working, or trying to approach it, in that sense. Ashwath, any thoughts on metrics? I think we are along the same lines as what Ankur mentioned. The only additional point I would make is that initially it worked out well; we had one single score for the whole company, but then the CTO came back and said, hey, why don't you kind of gamify it: give me a score by BU. Then he could use that as another metric in the BU scorecard, and security would drive that measure. So getting the scorecard right was very, very important for us. We went through a couple of iterations; initially we used to do it manually, and we were supposed to get it out every month, but that was a challenge. So then we reworked the whole program, and it's now completely automated: anybody can go to the dashboard at any point, hit refresh, and it will give you the latest score. So those were some of the learnings. Great, great. So the last question that I would like to ask, and then we'll open it up for audience questions. I have lots of questions, but I want to keep some time for the audience to come in. The regulator: one is of course the RBI, and RBI indirectly through your customers. I think in the case of Prabhakar, with Zeta it is more that customers face the regulatory pressure and then pass those requirements on. Either way, whether it is direct regulation or indirect regulation, or the latest CERT-In guidelines for monitoring and reporting incidents and maintaining logs, etc.
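The per-BU score Ashwath describes could be computed along these lines; the severity weights, the 0-100 scale and the data shapes are assumptions for illustration, not Razorpay's actual formula.

```python
# Assumed severity weights for turning open findings into a 0-100 score.
WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def bu_score(open_findings: dict) -> int:
    """Start from 100 and deduct per open finding, floored at 0."""
    penalty = sum(WEIGHTS[sev] * n for sev, n in open_findings.items())
    return max(100 - penalty, 0)

def scorecard(bus: dict) -> dict:
    """Per-business-unit scores, recomputable on demand for a dashboard."""
    return {bu: bu_score(findings) for bu, findings in bus.items()}

print(scorecard({
    "payments": {"critical": 1, "medium": 3},
    "lending": {"low": 2},
}))
```

Because the score is a pure function of the currently open findings, the "hit refresh, get the latest score" behaviour falls out naturally once the finding feeds are automated.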
Any thoughts on what the key headaches or pain points are for you on the regulatory side? Ashwath, we've got you on the line, so maybe you could start with this and then the others. Sure. So I think the first thing is understanding and consuming the requirement ourselves. At Razorpay we have, like I said, a dedicated compliance department, which I alluded to earlier. But the key piece is understanding the requirement, translating it into a security requirement, and then finally taking it to the developers. That itself is tricky, and there are different regulations around, for example, the management and storage of logs. So that is also something we have to have a good grip or handle on, and that's where most of our effort goes. I don't know if that was clear. Ankur, any thoughts from you? No, I think for most of these pieces — since we're a very, very highly regulated organization, with a dedicated team of a good number of people working on audits and regulations on a regular basis — most things in the latest CERT-In regulations were already sorted out for us. But some changes, like the dedicated timeline for incident reporting, maintaining logs for the stipulated number of days, or changes around VPNs and things of that sort, did have to be made. So there were some minor changes for us, but it was not a big, big change. Okay, great. Prabhakar, from your end please, anything on the pressure side? Regulatory things — from the compliance perspective, definitely, there are a lot of requirements. But if you see, those requirements are also coming out of a lot of best practices, to help either from an incident perspective, or from a protection perspective, or from a detection perspective — various angles they're addressing. And if you see, these are mostly already in place. For example, you talked about NTP, the very first requirement — central NTP synchronization is already part of PCI and other standards.
Again, you know, longer retention of the logs — instead of three months or one year, up to five years. So for those things you have to build out your storage. But from a regulator's perspective, most of these logs are already kept for much longer periods, because most of these requirements, if you see, are a subset of the superset of regulations already issued by the bigger regulators like RBI — at least some of them, if not everything. So what happens is they're already there; we just have to go and see what is already available, and do or modify only the delta. I'm not saying all of them, but at least a few. Right. So basically you're saying that this latest circular from CERT-In has not come as a major challenge — that this is part of what you would already be doing. We have to figure out what we are doing, how much is already covered, and how much of a delta there is; but the delta per se, the point I'm making, is that because the companies operating here are mostly already on the regulated side, for them it may not be a big challenge. So now we'll just open it up to the audience. We do have one question from an audience member. Jagan wants to know if anybody is working on Objective-C and Swift with SAST and source code — I assume SCA, source code audit or source code review — what do you recommend? And then, could you also throw light on your security observability stack? Jagan is here, so maybe he can clarify what he means by observability. But maybe some of you could take the first question. I think I can take a stab, and maybe Ankur and Prabhakar, you can add on.
So for software composition analysis we use Dependabot, and for Objective-C and Swift there were some tools we could look at from a SAST perspective, but we chose to use a single tool, Semgrep, and write multiple rules on it. Because the one thing we found was that it's more difficult to maintain multiple tools. So we said, okay, we'll just put all the rules into Semgrep. And that's what we did with our infrastructure code as well. So yeah, we use Semgrep. For the observability stack, we did experiment with a couple of tools, and there's one experiment going on right now. But the one thing we realized was that going with whatever the rest of the team — the developers and DevOps — already has makes more sense, and custom-fitting there was easier for us. So we use Vajra internally, and we wrote our dashboards on Vajra as well. Thank you so much. I think for us also, for Objective-C and Swift, our custom rules on Semgrep have worked a lot better than any other tool. And as rightly mentioned by Ashwath, it's very, very difficult to maintain two or three tools, because the reporting structure changes, the common vulnerability management systems change. And for ease of use, our custom rules basically work better than any other open-source tools. Okay. Yeah, you have a couple of tools as part of your pipeline, so you can use some free, open-source tools and write your custom rules on top of them, and you can cover infrastructure as code as well. And now you have checks like source composition analysis — multiple scans you do before you go to CD, as part of CI itself, so it gives you a lot of visibility; then you do the fixing, and you can also apply gates on whether to promote to the next level or not. So that gives a lot of context.
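A custom Semgrep rule of the kind the panelists describe for Swift might look like the sketch below. The rule id, message, and pattern are made up for illustration — not one of the panelists' actual rules — and Semgrep's Swift support and exact pattern syntax should be checked against its current documentation:

```yaml
rules:
  - id: swift-secret-in-userdefaults   # hypothetical rule id
    languages: [swift]
    severity: WARNING
    message: Possible secret stored in UserDefaults; prefer the Keychain.
    patterns:
      # Flag writes to UserDefaults whose key name suggests a credential.
      - pattern: UserDefaults.standard.set($VALUE, forKey: "$KEY")
      - metavariable-regex:
          metavariable: "$KEY"
          regex: (?i).*(password|token|secret).*
```

Keeping rules like this in one repository, as a single Semgrep ruleset, is what makes the "one tool, many rules" approach easier to maintain than juggling several scanners with different report formats.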
Of course, when you talk about shift left, by the time a report comes out of CI, a lot of things would have already been filtered. Yeah, great. Great. Thank you. Jagan is happy with the answers. Salvador has asked: how do you manage data discovery on endpoints? Would anybody like to take a stab at that? I think I can. I don't know if you mean how we figure out the routes on a given endpoint. There are three pieces to it, right? The first piece is figuring out all of the subdomains under the domain — for example razorpay.com, api.razorpay.com, payroll.razorpay.com, and so on. For that, since we are kind of white-box and have access to our AWS environment, we hook it up with Route 53. The second piece is discovery of all the routes on a given endpoint. We are experimenting with a tool called Akto — it's an Indian startup — and they provide this particular feature where they can tell you all of the routes that are active. We also have hooks into our route definitions right in the given repositories. And the third piece is wherever sensitive data is flowing in our environment — that's also where the third-party tool, Akto, comes in. If I understand Salvador's question correctly, it's mainly about how you discover, for example, card numbers floating around a particular user's endpoint. Yeah, that's where Akto comes in handy for us. Ankur, you wanted to add? Yeah — okay, I think he's clarified it. I was confused about what exactly he meant by the word endpoint. So, rightly pointed out. But for us, in terms of endpoints, we've tried creating our own sort of database with all our public endpoints listed there along with all their parameters.
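The Route 53-based subdomain inventory described above could be sketched as follows. The record-set dicts follow the shape returned by AWS Route 53's `ListResourceRecordSets` API; the boto3 fetch is shown commented out since it needs live credentials, and the zone name is just an example:

```python
# Illustrative sketch: build a subdomain inventory from Route 53 record sets
# by keeping the names of A/AAAA/CNAME records under the zone.

def subdomains_from_records(record_sets, zone="razorpay.com."):
    """Extract unique subdomain names (without the trailing dot) from record sets."""
    names = set()
    for rs in record_sets:
        if rs.get("Type") in ("A", "AAAA", "CNAME") and rs.get("Name", "").endswith(zone):
            names.add(rs["Name"].rstrip("."))
    return sorted(names)

# Fetching real records might look like this (assumed usage, needs AWS access):
# import boto3
# r53 = boto3.client("route53")
# page = r53.list_resource_record_sets(HostedZoneId="Z...")
# subs = subdomains_from_records(page["ResourceRecordSets"])
```

Since the team has white-box access to its own AWS account, an inventory like this can be rebuilt on a schedule rather than relying on external subdomain enumeration.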
And if anything changes in a day, we get an alert and do an analysis of what has gone wrong, or whether we have missed something in the inventory, and then add that particular endpoint to it. That's probably what Akto also does for you, if I'm not wrong — we try to do it internally. Akto sits more on the edge, Ankur — it sits at the ALB level, where it mirrors the traffic. Excellent. Any more questions from the audience? You can just type them into the chat box. Okay, so I think those were the two questions for the moment. Oh, there is one more. Anvesha's question: what is the percentage of budget assigned to security out of the entire IT budget? Or is it a separate budget, or built in within the departments — part of a departmental budget? I think all three of you will have to answer this; it might be different for each company. Yeah, from my level — because at my level the lens is a little different — we spread the budget across. Every team has to budget for a lot of things, and part of it, whether from a tooling perspective or a resources perspective, they have to equip themselves with. So it's spread across: every team does its part, we work collaboratively, and we make those investments happen. If the development team requires certain tools, certain resources — for example threat modeling, certain training, whatever it is — we ensure they are equipped with it; similarly IT, operations. Everybody requires those things, but we drive them and tell them, this is what you have to have. So that's how we spread it across, and every function gets its piece, and together it is catered for. At least I don't practice taking the entire budget and keeping it with security; we don't do that.
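The homegrown endpoint-inventory-with-alerting idea could be sketched as a simple diff between the stored inventory and a fresh scan. The endpoint names and alert format below are hypothetical, and this is only a minimal illustration of the approach, not the team's actual tooling:

```python
# Illustrative sketch: compare yesterday's known public endpoints with
# today's observed set, and alert only on endpoints that are new.

def diff_endpoints(known, observed):
    """Return (new, removed) endpoints between the stored inventory and a fresh scan."""
    known, observed = set(known), set(observed)
    return sorted(observed - known), sorted(known - observed)

def alerts_for_new(known, observed):
    """One alert string per previously unseen endpoint."""
    new, _ = diff_endpoints(known, observed)
    return [f"ALERT: unreviewed public endpoint {e}" for e in new]
```

Run daily, this is the "if anything changes in a day, we get an alert" loop: new endpoints trigger a review, and once reviewed they are added to the inventory so they stop alerting.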
We work with them, collaborate with them, and each of us takes that piece and makes it happen. Okay, great. Ankur, Ashwath — for you, how does the budgeting process work? I'm not sure I'm equipped, or allowed, to speak to a percentage of the budget — not a percentage number, then, just the broad logic and principles. Yeah, off the top of my head, I think the team here has done a fantastic job of convincing our leaders how important security is for them and for the company, and they have been able to do this by showing them the value of it: creating security champions, scaling security without even hiring, without building a big security engineering team or governance function. So, creating value out of security — and I think that's a major win for a security team within a company, where within a certain budget they have been able to do a fantastic job. So till now we have not faced any difficulty in getting whatever is needed, because our leaders know how important security is for them. Sure. So I think there are two pieces to it, right? The first piece is compliance. Compliance is more the cost of running the business, so whatever is required from a compliance perspective has to happen — that's where the basic tools, CSPM, all of those pieces come in. Then from a security perspective, we take a top-down approach where we ask: what are the company's OKRs, and what are the two or three areas we want to do a deep dive on? So we do the deep dive and evaluate whether there are any open-source tools. Instead of going and buying a big fancy tool, paying, you know, $100,000, we try to see if we can achieve the same with cost optimization. And when we present to management, you need to have a lot of answers ready, because we do get a lot of tough questions. So yeah, that's basically how the budgeting piece works.
And there are scenarios where we might not have budgeted for a tool, but it will still get approved because it's the need of the hour. So that's basically how it works. Excellent. Alvedo has a question: does a breach help? I don't know — at least in the Indian context, I've seen breaches happen and then there is no news about them later on. I think it has helped to the extent that investors are now very serious about this. So it has helped top-down, to a certain extent, I think. Yeah, it has also, I think, spread awareness amongst peer companies. For example, when there was a breach at MobiKwik, I'm sure the management at all the wallet payment companies started to ask deeper questions — same with BigBasket. I'm just taking these recent names, but all sectors have been impacted by a breach or ransomware. SpiceJet had a ransomware incident, and I'm sure at all the airlines in India the management was suddenly asking their IT and security teams questions about that. So yeah, a breach definitely does help. Oh, we do have another question. From a threat evaluation perspective, are independent threat actors hired to stress-test the application? Yeah, actually, this is a great question. As I remember from the pre-panel discussion, Ashwath, you mentioned that you use bug bounty platforms. Can you share how your experience with them has been? And of course, if the other two panelists' organizations use them, please share. We'll start with Ashwath. So we do use HackerOne. The process was: we started really small, with a private program. We said, okay, three or four endpoints, 15 researchers. And initially we did not get any reports at all. So we were like, okay, what's going on? We worked with them, and apparently what had happened was that our bounty amount was not enough.
So then we had to up the bounty amounts, and we had to provide more clarification. Basically, what it meant for the bug bounty hunters was: I put in a lot of effort, and I'm not going to get enough money. Though we were providing money, the scope wasn't clear, so they just moved on to the next program. But it's been a long journey and I think we've learned a lot. My recommendation to others would be: start really small, start with a private program first, and figure out how you want to handle incidents. Because the minute you go public — and this was a real experience for us; I was woken up four times in the night — it will cause a bunch of 5xx's. The first thing researchers do is identify subdomains; that won't cause any noise. The second is that they'll hit all possible URLs — say there are 1,000 subdomains and 100,000 sample URLs each; that's 100,000 times that number of requests coming at you. And one IP can generate thousands of alerts. So basically you need to fine-tune your stuff — that's the point I was trying to make. Though we have said no automated scanners, they still just go at it. So yeah, the quick point is we do have HackerOne, and it has really helped us find some really good vulnerabilities. My advice: be iterative, figure out what the right bounty amount looks like, and be very specific about what is in scope and what is out of scope. Great, those are some great inputs. Thank you, Ashwath. Has anyone else here used bug bounty programs, private or public? Yeah — I think we started the HackerOne program long back, and it has matured to a large extent now. At PhonePe, we are yet to start one, but we have a responsible disclosure page where we invite researchers to report back to us. No rewards program as of now, but we will launch one soon.
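One common way to dampen the scanner noise Ashwath describes — floods of requests from a handful of IPs once a program goes public — is a sliding-window counter per source IP. The sketch below is a generic illustration of that technique, with made-up window and threshold values, not anything the panelists said they run:

```python
# Illustrative sketch: flag source IPs whose request rate in a sliding
# window exceeds a threshold, so alerting can be tuned for scanner traffic.

from collections import defaultdict, deque

class ScannerFlagger:
    def __init__(self, window_seconds=60, max_requests=100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def record(self, ip, now):
        """Record one request; return True if this IP now looks like a scanner."""
        q = self.hits[ip]
        q.append(now)
        while q and now - q[0] > self.window:  # drop hits outside the window
            q.popleft()
        return len(q) > self.max_requests
```

Routing flagged IPs to a lower-priority alert queue, instead of paging on every hit, is one way to avoid being woken up four times a night during a public launch.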
Because, having been a bug hunter in my past, I know the pain of finding a bug and not getting rewarded. So I understand that, and for sure we will start rewarding people. But I think it is not only about the reward or about them submitting a bug — it's more about working with the community and making our own product much more secure. They indirectly become part of our security team, because you'll only be able to identify security bugs at a company that runs a HackerOne-style program when you spend a lot of time on it. The first moment you come to a platform and start searching for a bug, it is nearly impossible to find one unless you know what has changed in the new release or update. There have been people who, the moment you update the app, know what feature is new and start tracking it. So I think it has gone to that extent, and people have been very, very professional about it. Excellent. Great. Thank you so much, everyone. Quickly — we have a couple of minutes — I want to summarize some of the key takeaways from today. I filled two pages of notes and I hope those in the audience did too. So, without saying who said what, because that would take more time: some of the key inputs have been that shift-left security is no longer a talking point — it is happening on the ground, and a lot of tooling is already in place. Tech startups and tech companies have led the way, and more traditional companies are now trying hard to adopt the same tools. Some specific tools were mentioned: Semgrep, of course, multiple times; homegrown scripts for data discovery; for Kubernetes, kube-bench and the AWS Kubernetes scanning functionality; hardened images; and again custom scripting for secure logging, parsing the logs as they come into your log management tool, and so on.
Evangelizing security across layers — explaining to people how important their role is, why security matters, what the impact could be if they miss even a single SQL injection or a single input validation check. And I think the biggest challenge you have all managed to overcome, to a large extent, is that the business is so dynamic, the market needs are so dynamic, speed is of such essence, and users are so finicky — including all of us — that if you over-engineer security, you'll directly and negatively impact your business, and might even be out of a job. So it's such a sensitive space to be in, where users want both: they want fraud protection, they want data protection, yet they don't want to key in a secure password. I'm sure that at the front lines, where you all are, it's much tougher than for many of us who are more comfortable providing consulting services to you. Then some of the other inputs: on data discovery, we spoke about Akto — a tool you mentioned that I had not heard of; I'll be sure to check it out. And QA being an important part, and building security into the QA process. Also, Ankur, you mentioned how you've now even started doing PRD and design reviews from a security perspective, not just the tooling part. So, great — thank you so much for all your inputs. And we're just short by a minute, so if the organizers have any closing remarks, I would request them to do that now. Thank you everyone for joining today; it was a very enlightening conversation. We have more talks coming up — a talk every Tuesday and Friday — and we're also doing a series with Razorpay, with two talks on the upcoming Wednesdays. So please join in; register at hasgeek.com slash privacy mode slash PPG. And we hope to see more of you there. Thank you for coming, everyone. Thank you. Bye-bye.