So, we do have one question, from Nithish Surana: should one report to the authorities when you come across a data leak on platforms like Shodan, Censys, or BinaryEdge, and how bad can it be? Okay, so usually what happens, especially with BinaryEdge, is that before publishing something they report it directly to the appropriate owner of the breach, if they are able to attribute it. But yes, I think that is definitely good practice: if you find a data breach, some sensitive information being leaked, and you can identify who the data belongs to, then it is good practice to contact the security team of that organization and report it as soon as possible. Also, if you feel the data belongs to a government or state-run organization, then it is good practice to reach out to the computer emergency response team (CERT) of that country and report the data breach as soon as possible. I hope that answers the question. Any other questions? No, we don't have any other questions for now. Folks, if you have any questions, please feel free to post them in the Q&A, and if you are on YouTube, you can post there as well. Sure. Even after the session, if anyone has questions, or if you watch a recording of this session and have questions about data security, feel free to reach out to us, or to me personally. I am available on Twitter, and you can reach me at Abhisheka.com. So Nithish has a follow-up comment: he says the organization doesn't have a dedicated email to report to. What do you do in such cases? I don't think there is a generic solution for that.
Personally, what I would do is look up that organization, maybe on LinkedIn or in some company directory, try to reach a senior technical authority in that organization, like the CISO or CTO, and send them a mail first saying that I think there is a data breach affecting their organization, asking whether they would like to receive the full details over mail. Once the person confirms, you go ahead and share whatever details you have. Hi Abhishek, I think the talk was really awesome. Thank you. There were a lot of things. I just got into security, it's been one full year now. Initially I started off with Shodan, and I came across a lot of databases, and I was unable to find a proper point of contact. Many times I found startups with exposed MongoDB instances and the like. So what I did was first try to reach them through Twitter; if not Twitter, then LinkedIn; then I'd just Google around to find the right person of interest and share the details. Many startups have successfully mitigated it, and they really do care about it. But in many instances, I can't even tell you, they were like, "We don't care, you do whatever you want to do." And the sad part is there are legal implications as well: somebody can easily sue me for accessing their data without authorization. So this is really a big cloud hanging over the work that has not been resolved. Yes, it's a bit of a gray area on the legal side of things. While I'm not an expert on what is and isn't allowed, I think it totally depends on the law of the specific country where the server is hosted. And most large companies are open to people reporting issues to them.
Even if they don't have a bug bounty program or a dedicated CERT, they are open to it, because by reporting to them you are basically helping them. Yes, exactly. But yeah, there is no well-defined, no standard process. Large organizations have their internal CERTs, but for startups it's not really a priority. I mean, there were a few startups I reported to and they were really happy about it. They said, "Okay, thank you so much," and I was like, "Okay, you can help me with a PentesterLab subscription, that would be very kind of you." So that was a very positive side of this. But the reality is a little different, I see. Yeah, because startups really care about their reputation. They really want to make a name for themselves; they don't want to be in the news for something bad, right? Yes. Yeah, exactly. Thank you so much. Thank you for asking. Hey, anybody else have questions for Abhishek? Please feel free to ask. We want to make this session as interactive as we can. Yes. Oh, someone is live tweeting. Awesome. I will retweet as well until the next question comes. So Abhishek's Twitter handle is tweeting a lot of follow-up answers, some of the best practices our team is aware of, so maybe you can look at Twitter and check out his handle. So, if there is no follow-up question, I think we can end the session here. I just want to reiterate that we are doing office hours next week, on 16 July. It's there on hasgeek.com. So if you are looking for a specific data security consultation, we are happy to speak to you for free. All you need to do is book a slot for Hasgeek office hours next week and we can have a chat. Arun, do you have any questions you want to ask? Oh, questions, yes, I'll be happy to answer. Arun is saying hi. Hey Arun, if you want to ask something, maybe Shruti can unmute you and you can ask me directly. I just gave you permission.
You can go ahead and ask your question. Abhishek, that was a pretty interesting topic on data breaches, really nice. So I have one query here. I have noticed a lot of people, apart from bug bounty hunters, who use a legitimate platform to report certain vulnerabilities. But I have also noticed folks, on Twitter and elsewhere, who report such breaches without being part of any platform: they directly contact the company, discuss with them, and somehow get in touch to disclose these things. But as far as I'm aware, that is not a proper channel. Ideally they should go through a proper channel, a responsible disclosure program or a platform like HackerOne, and disclose it there. When people reach out to the company directly, sometimes the company says, "I'll sue you. Why did you do such pentesting? How did you find this bug? Who asked you to find it?" In other cases they do accept it and follow up. So is that approach good enough, or should it not be done? What are your views? So, in my understanding, when I started looking at vulnerability research, finding vulnerabilities, there was no concept of bug bounties. People used to find vulnerabilities just because it was cool, and they used to report vulnerabilities directly to vendors and to the CVE authority. I think there are now multiple CVE numbering authorities; previously it was just MITRE. So the usual vulnerability disclosure process, which has been there for a long time, is basically to reach out to MITRE: if you have found a vulnerability in a software product, say Microsoft Office or the Apache web server, you can ask for a CVE and they will do the disclosure.
But if you have discovered a security issue in someone's infrastructure, then you can either reach out to them, if they have a security contact or a security team, or, if you're not comfortable, I'm not very sure but I think HackerOne has a program where they will do the outreach on your behalf; that is something you have to check. As I said to Nithish, there is no standard way of vulnerability disclosure. It has always been based on communication between the person reporting and the security team or CERT of that organization. Does that answer your question? Yes. Awesome. I feel it is more of a leap of faith. If you talk the right way, if you ask them up front, "Do you have a responsible disclosure program?" First you ask this, and then you proceed; if they don't have one, you stop there. "Do you have a responsible disclosure program, or a security team which can handle a vulnerability disclosure?" Usually people will have that. We can even ask Zainab. Zainab, do you have a security team at Hasgeek? Yes, it's called Kiran. Okay. So if you find vulnerabilities in Hasgeek's infrastructure, you reach out to kiran at hasgeek.com or ping him on Twitter, @jackerhack. Abhishek, I have one more query. Yes. For example, say there is a startup, a SaaS vendor type of company, though it could be any kind of company in the IT industry. As a startup, if they face a breach and are not aware of the particular person who performed the attack, they are not totally unaware: they know something went wrong in their company. What is the very first step they have to take? Consider that they don't have a big security team, maybe just one or two people. In such cases, what would be the initial approach a company should take as a startup?
Because a lot of startups might not have any idea about security; they are always focused on deliverables and other things with respect to their product, not the security aspect. So what approach should they follow? Yes. So if you are thinking about this, that means you are already preparing for a potential breach. Ideally, everyone should prepare for that, so some kind of incident response practice should be in place. There are incident response playbooks available, and if you feel there is a breach, you should start following one. For example, in our infrastructure, what we do is quickly quarantine the resources we feel are affected by a breach until we are able to do an investigation and identify a root cause. If that is not possible for business reasons, for example, suppose you are a SaaS company and as an internal team you suspect that your customer-facing primary application has a security vulnerability which may have been exploited; you may not be able, for business reasons, to take that application offline for forensic analysis. Then what I would say is deploy an application firewall or something similar in monitoring mode, inspect the network traffic, and figure out the root cause. Ultimately, you have to identify the root cause, identify the extent and impact of the breach, and respond to that. As an example, you can also look at the NIST Cybersecurity Framework. As part of that framework, there are phases in the information security lifecycle, including phases called Respond and Recover, which cover what happens if you face a breach: how you respond to it and how you recover from it. Does that answer the question? Yeah, sure, definitely. I'll definitely look into NIST.
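The quarantine-then-monitor decision described above can be sketched as a toy incident-response step. Everything here is a hypothetical illustration, not the speaker's actual playbook: the resource names, the `can_take_offline` rule, and the monitoring fallback are made-up examples of the trade-off between forensics and keeping a business-critical service up.

```python
# Illustrative incident-response flow for a suspected breach: quarantine
# what the business can afford to take offline; otherwise fall back to
# passive monitoring (e.g. a WAF or IDS in non-blocking mode).
# Resource names and rules below are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Incident:
    suspected_resources: list
    quarantined: list = field(default_factory=list)
    monitor_only: list = field(default_factory=list)

def respond(incident, can_take_offline):
    """Sort each suspected resource into quarantine or monitor-only."""
    for resource in incident.suspected_resources:
        if can_take_offline(resource):
            # Isolate the resource for forensic analysis.
            incident.quarantined.append(resource)
        else:
            # Business-critical: keep it up, but inspect its traffic
            # to work toward a root cause.
            incident.monitor_only.append(resource)
    return incident

# Usage: the customer-facing API must stay up; the worker can be isolated.
incident = respond(
    Incident(["payments-api", "batch-worker"]),
    can_take_offline=lambda r: r != "payments-api",
)
print(incident.quarantined)   # ['batch-worker']
print(incident.monitor_only)  # ['payments-api']
```

Either way, the goal stays the same as the speaker describes: preserve enough evidence to find the root cause, then assess the extent and impact of the breach.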
I'm not aware of that one, that they had something on incident response. But you will find plenty of incident response playbooks out there. Okay, but basically, take a startup where the management folks might not be aware of such things, the security aspect, NIST, or anything. In that case, how should they approach it? Is it possible for them to hire someone from outside? Say a brand-new startup faces this and they are not aware of security at all. Should they ideally contact a company, or go to a freelancer who can at least do the initial analysis to find the root cause, at least an incident responder they can reach out to? Which works better? Because startup companies who are not aware of security might not know about all these NIST processes and other things. So I think startups, and this is again my opinion, will take a call based on risk versus gain. If the breach is so impactful that it affects the startup in a severe way, then they should definitely go ahead and hire a professional who can help them in such a situation. Otherwise, if it's not that impactful, they can deploy some security controls. But in any case, they have to do some kind of analysis to understand the root cause. You cannot quarantine or take your entire infrastructure offline. You have to identify the entry point, what was wrong in some component of your infrastructure that may have resulted in a breach; only then can you take appropriate action. But yeah, if your customer data is breached, there is no going back: you have to follow the law of the land. In India, I think we don't have a data protection law yet.
But when the data protection law comes, if your company falls within its scope, you will have to report it to the appropriate data protection authority. So there is a technical aspect to it and a legal aspect to it. There is one more question. Sorry, does that answer the question? Yeah, Abhishek. Great, thanks. Abhishek, there's a really interesting question from Aditi. It says: what measures should journalists take when they report on data breaches? How should they verify whether a breach actually took place, especially if they get an anonymous tip or documents? Aditi, it's very difficult to actually verify whether a breach is real or not, because there have been many examples of fake data dumps; people have created fake data dumps too, right? So I don't think there is a generic way of evaluating whether a breach is real or fake, because so much PII is available across the internet. If you want PII for Indians, there have been so many breaches in the past that a lot of data is available. Someone can just take that and create a dump which looks real but may not be, right? So I don't know of a generic way to investigate. Maybe security experts can do some correlation based on the type of data and the organization it is attributed to; some kind of correlation may be possible, but again, I don't think there is a generic way. Yeah, Aditi, go ahead and ask. Yes, thanks Shruti, thanks for unmuting me. So Abhishek, I was wondering: to verify whether a data dump is real or not, whether it's the result of a leak or a breach, is it a good idea to get in touch with a few people whose names are listed there, whose email addresses or phone numbers are there, and try to verify the information? Maybe just randomly pick a few; for instance, in journalism, when we report, we usually need corroboration from two to three sources.
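The kind of correlation the speaker alludes to can be illustrated with a toy check: does the dump's data actually look like it belongs to the organization it is attributed to? The field names, sample records, and threshold below are all hypothetical; real verification would combine many more signals (record formats, timestamps, overlap with earlier breaches, confirmation from affected users).

```python
# Toy attribution check for a claimed data dump: what fraction of records
# plausibly belong to the organization the dump is attributed to?
# Field names, records, and the 0.5 threshold are illustrative only.

def domain_match_ratio(records, org_domain):
    """Fraction of records whose email is on the organization's domain."""
    emails = [r.get("email", "") for r in records]
    hits = sum(1 for e in emails if e.lower().endswith("@" + org_domain))
    return hits / len(emails) if emails else 0.0

dump = [
    {"email": "alice@example.com"},
    {"email": "bob@example.com"},
    {"email": "mallory@unrelated.net"},  # stray record weakens attribution
]

ratio = domain_match_ratio(dump, "example.com")
print(round(ratio, 2))  # 0.67
```

A high ratio is only weak evidence, exactly for the reason the speaker gives: recycled PII from older breaches can make a fabricated dump look consistent.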
So maybe pick three records and get in touch with them, or would that be an unethical use of PII? I am not sure; I don't know whether there are legal concerns there. In the absence of the PDP bill, there is nothing illegal per se. No, an organization may claim that you have misused their private data. Okay. But again, this is a very gray topic, because you are doing it without any malicious intent. Right. But again, this is very, very technical in terms of law; I don't think I am the right person to answer this. And also, just adding on to what Nithish asked earlier about whom to get in touch with in case of a data breach, or if you have knowledge of one: CERT-In, the Indian CERT, often doesn't acknowledge mails sent to it. So what's the process there? How do we ensure that we are actually doing our due diligence in letting them know that there has been a data breach, that we are about to report on it, and can they please plug that loophole first? Again, I think it totally depends on their internal policy. I am not sure whether there is a restriction on the legal side, but I'll share something with you. Google has a security research team called Project Zero. They do extensive research on very high-value products, say Google Chrome or Internet Explorer, and mobile platforms like iOS; I think it was their researchers who found some critical vulnerabilities affecting iOS and Android. They have an internal policy of publicly disclosing technical details of security vulnerabilities 90 days after informing the appropriate vendor. In most cases, unless it is a very, very high-impact issue, for example some iOS issue, or an issue in a TLS library which affects all browsers, as far as I know they follow that 90-day public release policy. Okay, because at least the Indian CERT doesn't seem to have that kind of policy.
And I've spoken to a few people who have tried to inform CERT-In about certain issues: they never got a response, never got an acknowledgement, and the issue doesn't get resolved. And, as one of the previous participants also said, a lot of startups don't respond, a lot of companies don't respond, or there's no way to get in touch with them vis-à-vis vulnerabilities, breaches, or supposed breaches. So then it becomes a gray area for us as well while reporting. Yeah, without a well-defined law, I think it will remain a gray area. Okay. And while reporting, do you think it's a good idea for the PII to be taken offline first before publishing, or should it be reported simultaneously? Because that's something we've been grappling with. So again, I will draw an analogy here. Usually, if I find a vulnerability in a piece of software, say Google Chrome or some other popular software, the ethical practice is to report it to the vendor, wait for a patch to be publicly available, give a sufficient window for all users to upgrade to a patched, fixed version, and then go ahead with a full technical disclosure. Okay. And in the case of a data leak or a data breach, what kind of window would we look at? Because often, until that information goes public through the media, people don't act on it. Yes, exactly. That is why in software vulnerability research, full disclosure, where, as I said, after a sufficient window you disclose everything that is known, all the technical details, has always acted as a deterrent that pushes software vendors to act rather than ignore security vulnerabilities. Maybe the same will apply to data breaches as well. Right. Okay. Thanks Abhishek. Thanks, Shruti. Thanks. Happy to help. So, anybody else who has questions? Ravi has a question. You can unmute him. Yup. Hi, Ravi. Hi Abhishek. Just adding on to the larger discussion.
So even when someone is trying to publish a technical disclosure, we need consent from the company which has been affected, right? See, unfortunately, there is no legally defined way, so it makes sense not to offend the company who has suffered the data breach. So yes, I would personally get permission from the company first before disclosing. Okay, because in one case this happened: we found a vulnerability in one of the big companies, we reported it to them, and they fixed it. And then when we said they should publish it, and this actually happened twice with big companies, they said, "No, we don't want it published." Even when we told them they should go ahead and publish a release on their website saying that customer data was affected, they kind of denied it. So I think in that case we are more or less out of options, especially if they are giving you a bug bounty reward as well. Oh, see, if they are giving you a bug bounty reward, then they are purchasing the rights to that research from you; you cannot do anything anyway. Yeah, without their permission. Yes, without their permission you cannot do anything, because as part of the bug bounty agreement I think you are basically assigning the research to them. Yeah, this happened twice; in one case we were not offered the reward, in the second we were. But from a security researcher's perspective, we actually wanted it to be published, because it is their customers who have been affected; someone else might have exploited it. Right. Is there a way we can convince them, to say that they should go about it, that it is not just a bad thing that hits their reputation but a good thing that they come out and say it? No, there is no way unless the Personal Data Protection Bill comes into place. I think Aditi will be more familiar with the legal side of things.
I mean, vis-à-vis personal data breaches, there is, or there will be, a responsibility to report to the data protection authority, but that is very limited. The bill unfortunately does not really address responsible disclosure, nor does it take into account that these disclosures may be made by third parties, such as ethical hackers or security researchers, and where liability or responsibility would lie in such a case. As per the bill, companies have to report the breach to the data protection authority, but they don't have to list it on their own website, publicly acknowledge it, or inform their users either, which is a concern, to be honest. So, in general, there have been very few examples of software vendors actually suing or taking legal measures against security researchers. At least software vendors in the West mostly do not do that, although there was an example of Cisco doing it once, in the early days, when someone found a serious vulnerability in Cisco routers and was going to present it at a conference. Cisco took the legal route to stop that talk, but that is a corner case; usually it does not happen. Software vendors usually appreciate people reporting vulnerabilities directly, because otherwise, if researchers choose to sell their research to the underground, the impact will be massive. So software companies are usually nice about it. But unfortunately, that is not true for data breaches, breaches of data belonging to some organization. Just adding to the previous discussion: suppose you really want to publish your research, you can simply redact whatever is there. For instance, if you are submitting a bug to the US Department of Defense via HackerOne, then after the report is closed and resolved, you can request disclosure.
There, they will simply redact every IP, every URL, everything apart from what actually happened. So that can be a way to approach the company, publish your research, and, you know, make a name for yourself. Just to add: technically, someone who has the required skill can often still figure things out from that redacted information. But yeah, what you are saying makes sense; at least the publishing will happen. There are many bug bounty hunters and researchers who go to Medium and blog about it after getting permission. Yeah, once you have permission, then definitely you can do that. Yes, but then you will again have to promise them not to tell anyone. Yeah, bug bounty is a different thing, and it is based on policy, not law. Absolutely. Does anybody have any more questions? Cool, those were some interesting questions; initially there were no questions at all. I know, right? Okay, all right, if nobody has any questions, then I think we can wrap up. Yes. Yeah, if you want to ask more questions, we can wait for a minute or two. There is a question on YouTube. Can someone read out the question, please? Yeah, so it's from somebody named VJD: "I'm part of a startup, one guy who takes care of security of all natures, be it appsec, cloud sec, containers. One-man security. Suggestions?" So he is mostly asking for suggestions. Hey Shruti, I can't hear you. Can you hear me? Yes. Yeah, so he is asking for suggestions; currently he is trying to implement SAST. Yes, so depending on the technology, there are a bunch of SAST tools. You can look at owasp.org, the Open Web Application Security Project; there you will find references to a whole range of SAST security tools which you can integrate into your pipeline, so your code is security-scanned as part of the pipeline.
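Integrating a SAST tool into a pipeline usually comes down to a gate: parse the tool's report and fail the build if findings exceed a severity threshold. The sketch below is a hypothetical illustration; the JSON shape is made up, not the output format of any particular tool (real tools like Semgrep or Bandit each have their own report schemas).

```python
# Illustrative CI gate for SAST findings: fail the build when any finding
# meets or exceeds a severity threshold. The report format here is a
# made-up example, not a real tool's schema.

import json

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_fail_build(report_json, threshold="high"):
    """Return True if any finding is at or above the threshold severity."""
    findings = json.loads(report_json)["findings"]
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[f["severity"]] >= limit for f in findings)

report = json.dumps({
    "findings": [
        {"rule": "hardcoded-secret", "severity": "high"},
        {"rule": "debug-enabled", "severity": "low"},
    ]
})

print(should_fail_build(report))                        # True: high finding
print(should_fail_build(report, threshold="critical"))  # False
```

In a real pipeline this function would sit after the scanner step, with a nonzero exit code when it returns True, so the build stops before merge.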
Yeah, just to add to that, he is also asking if you have any suggestions on threat modeling. Well, yes, I think threat modeling is definitely very important. As I said as part of my talk, threat modeling is how I start my security program: it gives me visibility into my applications and infrastructure, and it allows me to see which potential threats are applicable to me. Any security tool that I use or any security process that I bring in is basically there to mitigate those threats, right? I will not do anything without a threat model, because if I just follow some random best practice, I may end up mitigating things which are not even applicable to me. So I would definitely start with a threat model. For application security, again, you can refer to the OWASP threat modeling practices. Also, if you want to bring continuous threat modeling into your development process, you can look at Mozilla's Rapid Risk Assessment methodology as well. Does that answer your question? He says that, as a beginner, he just knows STRIDE from Microsoft, but without the basics or DFDs it's difficult. So even OWASP threat modeling can use the STRIDE methodology. You have to create DFDs, data flow diagrams, and on each DFD you apply STRIDE as a framework for threat enumeration. So STRIDE is one of the frameworks for threat enumeration.
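The DFD-plus-STRIDE workflow described above can be sketched as a toy enumerator: model the system as typed elements, then list the STRIDE categories that apply to each element type. The per-type mapping and the sample diagram below are simplified illustrations, not an authoritative methodology.

```python
# Toy STRIDE enumeration over a tiny data flow diagram (DFD).
# STRIDE: Spoofing, Tampering, Repudiation, Information disclosure,
# Denial of service, Elevation of privilege.
# The element-type-to-threat mapping is a simplified illustration.

STRIDE_BY_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data_store": ["Tampering", "Information disclosure",
                   "Denial of service"],
    "data_flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
}

def enumerate_threats(dfd):
    """Return (element name, threat) pairs for each DFD element."""
    return [(name, threat)
            for name, kind in dfd
            for threat in STRIDE_BY_ELEMENT[kind]]

# Hypothetical DFD for a small web app.
dfd = [
    ("user", "external_entity"),
    ("web app", "process"),
    ("user -> web app", "data_flow"),
    ("orders db", "data_store"),
]

threats = enumerate_threats(dfd)
print(len(threats))  # 2 + 6 + 3 + 3 = 14 candidate threats to triage
```

Each generated pair is only a candidate threat; the actual modeling work is deciding which candidates are realistic for the system and what mitigations, if any, they warrant.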