Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer of DATAVERSITY. We would like to thank you for joining this DATAVERSITY webinar, Discovering and De-Risking Sensitive Data, sponsored today by OneTrust. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we'll be collecting them through the Q&A panel. And if you'd like to chat with us or with each other, we certainly encourage you to do so. To access the Q&A or the chat panel, you'll find the icons for those features in the bottom middle of your screen. Just to note, the Zoom chat defaults to sending to just the panelists, but you may absolutely change that to network with everyone. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar. Now let me introduce our speaker for today, Phil Donas. Phil is a solutions engineer at OneTrust, the trust intelligence platform, unlocking every company's potential to do good for people and the planet. OneTrust connects data, teams, and workflows across trust domains, including privacy, GRC, ethics, and ESG, so companies big and small can collaborate seamlessly and put trust at the center of their operations and culture. In his role, Phil supports the OneTrust privacy and data governance cloud, where he advises companies on how to transform privacy compliance into trusted and ethical data use. And with that, I will give the floor to Phil to get today's webinar started. Hi, thank you for having me. All right, so let's go ahead and dive right into it. I'll be able to speak a little bit more about my background and some of my experiences, both as a data governance professional and a certified privacy professional, as well as a solutions engineer with OneTrust.
But today we are going to be talking about one of the most important topics we've seen for a multitude of different businesses, especially given the regulatory environment we're working in: discovering and de-risking sensitive data. Now, I understand that de-risking can sound like an endeavor all its own, and as for the definition of sensitive data, we know there's complexity in what is considered sensitive and what's not. We're going to spend this session diving into all of those topics and then provide strategies that will help your organization address these concerns. All right, let's get on with it. So the agenda: we're going to start by talking about how things have changed with regard to data and security. We're going to dive into a little bit of the history and some of the metrics around what enterprises and organizations are dealing with when it comes to data today. Then we're going to talk about some of the cross-functional challenges and initiatives, a lot of which are drawn not only from my own interactions with our customers, but also from the entities and organizations I've worked and networked with at various university events and other data governance professional events. And then, number three, we're going to focus on the steps to help protect these organizations and their data. One of the most important things I've learned in my time is that protection works two ways. It's a step to, one, ensure the business does not end up in the news, but two, ensure the security and privacy of the people we serve, whether that's our employees or our customers, whether they are consumers or, if you're B2B, the individuals with business relationships. We need to understand the ways we can protect that relationship going forward.
So let's go ahead and get started with that. The first thing to note is today's data landscape. It's very apparent; you hear it in the news: there's just so much data. Almost every other month we've seen or heard of a data privacy violation and enforcement action, a data breach, or we get the notification we're unhappy to see, where our data has potentially been leaked or found as part of a dark web search. We need to recognize that data is growing and continuing to grow at an unprecedented rate. The velocity is very interesting. Going back to 2020, as we were transitioning (and remember, COVID only really had a true impact in the United States and the world once entire countries began shutting down), even before that, roughly 2.5 quintillion bytes of data were created every day. That is such an astronomical number when you think about trying to address even a single instance of a data element belonging to an individual, and to understand that a single individual could be producing on the order of a gigabyte of data per day. When it comes to volume, we're looking at around 200 zettabytes. Yes, that's a lot of zeros, but that's the amount of data projected to be stored in the cloud, across a vast network of systems: information related to an individual, a transaction, a data element, a historical Wikipedia article, whatever it may be. We know this information is going to be stored in a cloud setting, and it's going to be very easily accessible, whether to those who have the proper credentials, or to those who don't and have instead used brute force or more nuanced methods to access it. That's a very large attack surface that we just haven't seen before. And while we've been looking at these metrics for a long time at OneTrust, I would not be surprised if that projection is already ready to double.
Finally, we're going to talk about the variety. When it comes to the types of data and the places data can be stored, there are over 110 SaaS applications in use at many of the organizations we work with, and many of them hold different data types and structures. They may have been used in different formats; you may have gone through multiple versions or multiple deployments of the same application. You also may have used one vendor previously and switched to another, and as part of that migration effort you've essentially duplicated the amount of data stored across those two systems. So there is a large amount of data, and that's a big part of why we need to be focused on it at this point. Now, let's dive into the specifics of how all of this information poses a risk to the business, and how I've seen organizations try to respond, or be caught on the back foot, when it comes to recognizing that a risk or an incident has occurred. The first point is that, as data governance professionals, and as everyone here at DATAVERSITY probably knows, data is a valuable asset. We need to begin to treat it as if it were cash, similar to how banks treat cash. Banks not only maintain an inventory of how much money sits within an account; they have protections, they know who should have access to it, and they have multiple ways to validate whether someone has access to it. They have methods to store cash, to preserve it, to keep track of its records, and to govern its utilization. A bank has that level of security around a single dollar. We need that same awareness of a single data element in order for us to minimize risk completely.
Now, in essentially the same way that a bank may have a single dollar sitting in an ATM or within an account, there are going to be bad actors trying to access or steal it. So, knowing this is a risk, and knowing the types of protections we have even in our own banking apps, do we leverage those same protections for our own internal data, for our own business data, for the information that drives our organizations? This is not specific to business-to-consumer companies. We know that many B2B organizations leverage personal data or information that includes transaction information. Maybe there is account information that may be leveraged. Maybe there's some internal information that is highly critical to your operations. We want to be aware of all the different things that may be of interest to a bad actor, and we want to make sure we are implementing the proper ways to protect them. Let's talk about the places where this data is stored. Previously, of course, and some of the organizations I've worked with started off as software companies this way, we've heard the old stories: they had a server rack in a room with access to the internet via their internet service provider, and that's essentially where web applications lived for a long time. Now, we're looking at an entire architecture, an entire world, in which organizations are no longer talking about individual servers; they're talking about data oceans. It's no longer data lakes, it's data oceans: massive repositories that live decentralized across cloud applications.
We know that what we've now done is taken our single dollar bill, or a single data element, and persisted copies of it in multiple different places: for easy access, for security purposes, for recoverability purposes. But what we've basically said is, you know what, I'll take my money and, instead of keeping it all under my mattress, I'll put some in my mailbox and some under the floor mat, and we'll just hope no one comes around to any of those three places. We need to recognize that the benefits of leveraging this new cloud technology now pose additional concerns for us, so we want to be highly aware of that as well. Thirdly, new privacy and data security regulations are being introduced. As if it weren't difficult enough to worry about these concerns, we now have a regulatory space that is still developing. It's really still in its infancy relative to the financial regulation that exists around cash, for example. If you are someone in the financial services industry, you are very familiar with the regulations that govern the retention of specific transaction records, exactly how you are to communicate them, and what is expected of an organization when it comes to securing assets. We are looking at a world in which those regulations for privacy and for data are just coming into effect, and we're going through the process of growth, seeing organizations that are brand-new startups having to deal with these for the first time. A lot of technology has been built in this space; OneTrust was built in this space. The point is, we need to account for this as an additional risk that puts pressure on the business, and we want to make sure we take it into consideration as well. Finally, the data footprint. We've spoken on this before.
The data footprint can include data that simply no longer serves any value to the organization. There's a cost to retaining that type of data. Then there's duplicate data. We understand duplication for accessibility and convenience, but if duplicate data is sitting on multiple data stores and not all of it is going to be utilized, what kind of risk are we creating for ourselves by keeping those duplicate data elements around? And then siloed data. This combines with many different problems we know from governance organizations. We don't want to end up with shadow IT, where there are duplicate data systems, copies of data that maybe at one point served a particular purpose or a particular initiative, and now serve no use at all. We may not keep track of them; we may not even recognize they exist. So we want to be aware of that as well. And then finally, no one wants to end up in the news. No organization wants to end up in the news, let alone for a regulatory fine or for a mandatory report regarding an incident or breach. Knowing this, we want to help organizations begin to address these concerns in a way that fits the modern age. One last thing that has changed: the way of doing business is evolving. Before, data was owned by the enterprise. It was clearly understood that the data you collected was yours. Outside of that, there was no getting it back; there was no asking the company to delete that data. Once you had it, it was yours. That's no longer the case. Personal data belonging to your employees or your customers is now owned by the individual. It's no longer, oh, I've collected this person's information, I can do what I please.
No, you have to gather consent from the individual before you use their information. You have to provide a method for them to request access to their data. You need to provide a mechanism whereby, if the individual requests deletion of their data, or exercises the right to be forgotten, that request is executed within a regulatory time period or deadline. Recognizing this change in the way of doing business, along with the amount of data we potentially hold on these individuals, it can seem like an overwhelming problem. So the strategies we're going to talk about today are focused on addressing those concerns, making sure we can move forward in a consistent but also a revolutionary way, so that the business is ready to address these concerns and risks around data and able to execute on the promises we've made to the shareholders and stakeholders of the organizations we represent and support. Visibility and validation: these are the critical methods we're going to use to solve these issues. Let's talk about the current state of some of the organizations I've worked with, seen in the past, or been a part of previously. We ask: does this seem familiar? At the current moment, we see that initiatives around understanding data risk, retention policies, access requirements, and methodologies like zero trust are all similar initiatives carried out across many different silos.
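To make the deadline point concrete, here is a minimal sketch of tracking rights-request due dates. The response windows are illustrative assumptions; the actual deadline depends on the specific law and on any permitted extensions.

```python
from datetime import date, timedelta

# Illustrative response windows in days; real deadlines depend on the
# law in question and on any extensions it allows (assumption).
RESPONSE_WINDOWS = {"GDPR": 30, "CCPA": 45}

def response_due(received: date, regulation: str) -> date:
    """Date by which a deletion or access request must be fulfilled."""
    return received + timedelta(days=RESPONSE_WINDOWS[regulation])

def is_overdue(received: date, regulation: str, today: date) -> bool:
    """True if the request has passed its regulatory deadline."""
    return today > response_due(received, regulation)
```

A tracker like this is only useful if every system holding the person's data can actually act within the window, which is why discovery comes first.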
Just because a privacy office or team may consist primarily of, say, a general counsel, a chief privacy officer, or individuals who sit more on the legal side of things does not mean they're not aware of, or concerned about, operations that fall under IT risk management with security teams. Think about what kinds of data an internal employee may have access to; what the retention periods are for certain data elements stored in various SaaS applications; whether we have proper mechanisms to verify a person's identity before we provide them access to data, or to verify the reason a person should be able to access certain data elements. Privacy is very interested in those same concerns. And from the space I come from, data governance, these are all challenges we run into when trying to enable businesses to do more with data. Data governance, whether it's wanting to understand what kind of data an application is going to use, or for what purpose, again aligns very closely with privacy and with the goals of security. Data governance may be trying to understand the metadata: less about what is contained within the rows, and more about the purpose, the access, the volume of data we have, and the controls or automations we need to ensure this data has the quality to serve the business in the way it's intended to. We want to be aware of these similar initiatives across all of these silos. And essentially, the short answer, as you can probably see, is that we want to begin unifying these three silos as a way to provide a holistic approach to addressing these data concerns. Here's another challenge when it comes to these silos.
The silos incentivize a lack of visibility, because we're all looking at the same three-sided object from our own perspectives, and we each see only one side. Data governance may not see behind the walls security is looking at; security may not see behind the walls privacy is looking at. So we want to make sure we're not creating ineffective, inefficient, duplicative initiatives, and we want to change that. Visibility and validation are the key. Let's talk about what that may look like for these different functions. For security, it's being able to identify and de-risk what is considered sensitive data, and to monitor the associated controls and risks. For IT, you want to make sure resources are utilized properly. Let's say that, as part of both privacy and security work, we've identified a terabyte of data that probably doesn't serve any value. That's an easy win for IT: we can probably save some money by getting rid of unnecessary data that will not provide any additional value. That's something IT will need to lean on security, privacy, and governance for, to say: hey, if my goal is to provide a solid infrastructure that is readily available to the business, minimizing our utilization is going to save us money, effort, and time. So that's very critical. Now, I'm going to take a quick second as I go along here to answer a few questions that have come up. When it comes to privacy, complying with global privacy laws is one of the primary concerns for the customers I work with directly at OneTrust.
What we end up talking to a lot of organizations about is the requirements for reporting to a regulator: how we use data, what we're doing to protect data, where it's stored, and whether it's compliant with the applicable data protection laws. That's where you have a record of processing activities, or ROPA for short. I would not recommend that any privacy professional try to generate a record of processing activities solely by themselves, without involving sub-organizations like data governance, IT, and security. It simply would not provide a complete and holistic picture; it would capture only a sliver of the whole. So it's important to note that when it comes to visibility of information, all of these groups look at data differently, and they are all looking to ensure the organization is taken care of across these four domains. Now, the best way to facilitate this is, one, to break down the silos, literally the walls, the cubicles, the arrangement where privacy sits on one floor and security on another floor in another department, and to give everyone the ability to look at data the same way. Centralizing the architecture for how the entire business addresses these concerns is going to be completely critical.
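To illustrate why a ROPA needs input from every silo, here is a minimal sketch of one entry. The field names are assumptions for illustration; the required contents of a real ROPA are defined by GDPR Article 30.

```python
from dataclasses import dataclass, field

@dataclass
class RopaEntry:
    """One row of a record of processing activities (field names are
    illustrative; GDPR Article 30 lists the required contents)."""
    activity: str                                          # e.g. "Email marketing"
    purpose: str = ""                                      # privacy: why it's processed
    data_categories: list = field(default_factory=list)    # governance: what data
    systems: list = field(default_factory=list)            # IT: where it lives
    security_measures: list = field(default_factory=list)  # security: how it's protected

    def is_complete(self) -> bool:
        # A meaningful entry needs input from every silo, not just legal.
        return bool(self.purpose and self.data_categories
                    and self.systems and self.security_measures)
```

A privacy office working alone can typically fill in only the purpose; the other fields live behind the walls of IT, governance, and security.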
One of the easiest ways to do that is to leverage a platform focused on automation and intelligence, able to address the concerns of privacy, security, governance, and IT while still considering consumers and their rights; your stakeholders and their expectations of what your organization is to accomplish; and the data protection authorities and attorneys general (if you're looking at the United States) who are watching businesses to ensure they are doing right by consumers and keeping up regulatory compliance. It should also easily incorporate the requirements of your IT standards and frameworks, whether that's NIST, ISO, or an audit tied to those initiatives. Having all of that live in a centralized location helps you understand the why, the where, and the how, and that's where you get a complete picture of all of those different requirements. On the additional questions I'm seeing here: are there ISO standards with best practices for sensitive data? Absolutely, and having those specific requirements already within the centralized platform is one of the best ways to help with this process. I've had conversations where the first time some of these different teams ever meet is actually on a call with a vendor. And in that moment they realize: hey, we're actually concerned about the same things, but you may be speaking about them through the lens of "I need to be certified," or "I need to have the following controls as part of this standard and framework," or "I need to be compliant with the following data protection regulation." This is where you'll see those silos broken down, helping you easily make sense of all this information.
So, what can I do to get started with this? For me, the immediate concern, and I see some questions out there about how discovery plays into this, is that through the advancements made in AI and in data discovery and classification, we can leverage tools that will help us identify what type of data we have. Not all tools are going to provide the most context, but the "what" is the first thing you need in order to reach agreement on how you're going to approach this data going forward, and on the specific controls and risks you've identified as a result. Leveraging a data discovery tool that intelligently classifies data helps you understand where your data is, what it is, potentially who it belongs to, and potentially which privacy regulations, and which controls, frameworks, and standards from the information security standpoint, may apply. It shows how those different items may reference the same data element, and how different retention and policy requirements may affect it. You want that repository ready to go so your organization can then effectively protect and govern that data. Once you've discovered a particular data element, let's say some of the more highly sensitive data elements under a California privacy law: first name, last name, Social Security number, email, date of birth, some biometric data. Let's say we've decided that one of the ways to best serve our organization is to ensure that privacy isn't the only voice in the room when you're trying to communicate: hey, this is sensitive under this privacy law, we shouldn't use it at all, or let's just get rid of it.
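The classification step can be sketched very simply. These are toy patterns for illustration only; real discovery tools use far more robust detection (validation checksums, surrounding context, ML-based classification) than bare regexes.

```python
import re

# Toy detection patterns (assumption: US-style formats).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def classify(value: str) -> set:
    """Return the set of sensitive data types detected in a stored value."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(value)}
```

Even this crude version shows the point: classification tells you *what* you have, but not yet whose it is or why you hold it, which is where the later context questions come in.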
That's not going to work. We want to make sure privacy is embedded as part of the operations of the teams who use that data. So privacy would need to be part of marketing operations, to ensure that whatever data is collected from the consumer serves a clear purpose, serves that purpose for the time period needed to facilitate the sale or transaction, and is not used for any other purposes beyond that. Embedding privacy in that operation is going to help. Say we have more sensitive information, like the Social Security number in our example. We're going to need information security to properly address who has access to that data. Who should be able to view it? Are there masking policies that need to be implemented? Are there remediation actions that need to be taken on a regular basis, something we'll have to repeat on a weekly, monthly, quarterly, or yearly basis, whatever the case may be? We want to help implement those specific controls, and we want to make sure we're doing it effectively. And then finally, enablement. We want to enable all of these teams so that compliance can happen at scale for the organization, and data can be used responsibly. I can't count how many times there are conversations where marketing walks in and says, I want to use all this data for these many purposes, and privacy, IT, and data governance say: no, no, no, you can't do that. Contrast that with teams that already have a shared understanding of the requirements, where marketing can say: hey, we're going to use this data for our next email campaign, or hey, we're going to use this data because we collected the consumer's consent and they've allowed us to send them communications, say, under this law, for a year.
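Masking is one of the simpler controls to picture. This is a sketch of one possible masking policy, keeping only the last four digits of an identifier; who may bypass such a policy, and for which fields, would be defined by your security team.

```python
def mask_identifier(value: str, visible: int = 4) -> str:
    """Mask all but the last `visible` digits, preserving formatting.

    A sketch of one possible masking policy (assumption), not a
    description of any particular product's implementation.
    """
    total = sum(ch.isdigit() for ch in value)
    seen = 0
    out = []
    for ch in value:
        if ch.isdigit():
            seen += 1
            # Keep only the trailing `visible` digits; mask the rest.
            out.append(ch if seen > total - visible else "*")
        else:
            out.append(ch)  # preserve dashes, spaces, etc.
    return "".join(out)
```

The appeal of masking over deletion is exactly what the retention discussion later covers: the value stays usable for matching and support workflows without exposing the raw identifier.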
When you have that visibility into the other teams, you no longer have to jump in to say, no, you can't do that, no, you shouldn't use this data. I've been in those circumstances, communicating with those different teams, where everyone in the privacy office asks, why doesn't anyone like us? Well, it's because oftentimes you become the office of no. IT and security, if you've been there, end up being the office of no as well: no, you can't do that. When we enable these teams to communicate and to have visibility into all that data from the get-go, it becomes a lot easier to start saying go instead of no. You end up saying: go forth with your marketing campaign, but implement these following requirements and you're good to go. Go forth and release this new data product, and also make sure you're applying the following security controls. That is the enablement we need in order to effectively reduce the risk of having and holding onto this data, while also making sure the organization's purpose and goals are not burdened by it. So let's talk about some of the key questions. What data do I have? Looking at the chat, a few individuals are asking, hey, what kind of data do I have? Honestly, this question is very popular and is one of the more difficult ones. I see a discussion mentioning some data discovery tools. There is tooling out there to help you understand exactly what kind of data elements you have. Is it a first name? A last name? An address? An identifier for an individual? Or is it a business data point, a KPI, whatever that item may be? But the question of what data do I have can't just be what data elements do I have.
It's important to note that "what data do I have," especially under the various privacy laws, has to be interpreted with a bit more context than just the data element. How is that data supposed to be used if it's a first name and last name, and does it belong to an employee versus a consumer versus a vendor versus a business partner? So "what data do I have" needs to also include "whose data do I have" in order for us to completely address this concern. Where is it? Very simply, where is that data being stored? We understand that many privacy regulations impose restrictions or requirements governing the transfer of data between controllers and processors located in different countries. Cross-border data transfer mechanisms exist, but they are highly regulated, and even when you are participating in some cross-border transfers, sometimes there are requirements to notify consumers that this is happening, or to provide a privacy notice that details how it is happening. The "where is it" is going to be critical, and so is the "where is it" in relation to the systems your team is leveraging. This is where visibility into a processing activity is so important. In conversations I've had, many customers say, well, we've identified where all of our data is from a technical standpoint, and typically that is the hardest thing for any organization to do, until they recognize that there has been a process, running for years, of putting copies of data into file cabinets, boxes, and paper storage formats. So the "where is it" is not always going to be a technical database, even if for the majority of data it is.
We need to be able to recognize where that data may have originated, whether it was a paper record before it became a digital one, or a digital asset that later became a paper record. Having that visibility is going to be critical. What is the purpose for having the data? This is not legal advice, but one of the things we've seen is that having a clear purpose communicated to a consumer about how their data is to be used will help protect your organization in the long run, as long as you're communicating how that data is to be used and not exceeding that. So when a customer signs up for a newsletter and understands that they're providing their email, potentially their address, and some other information to participate in that newsletter, making sure the entire organization knows that this data was collected for this purpose, and using it solely for that purpose, is how we maintain relationships. It's how you ensure you can continue to deliver a positive brand experience. Understanding the purpose for having the data is critical, and for a variety of organizations, having it documented is highly important as well. How effective are my data controls and policies? This is a big question I've worked on directly with customers, where sometimes the challenge is not solely in the regulation or in the presence of the data, but in the business itself. What are the policies we would like to implement as an organization to effectively protect this data while ensuring it also produces the value we hope to gain from it, and how do we ensure a steady influx of that data to support business processes? And what are our requirements for retaining this data, so that after it's no longer valuable it does not incur additional cost?
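The purpose-limitation idea in the newsletter example can be sketched as a simple lookup. The registry and names here are hypothetical; in practice this mapping would come from your consent-management records.

```python
# Hypothetical purpose registry keyed by data subject (assumption);
# real systems would source this from consent-management records.
CONSENTED_PURPOSES = {
    "jane@example.com": {"newsletter"},
}

def use_allowed(subject: str, purpose: str) -> bool:
    """Purpose limitation: data may only be used for the purpose(s)
    the subject was told about when it was collected."""
    return purpose in CONSENTED_PURPOSES.get(subject, set())
```

When a check like this is visible to marketing up front, the "office of no" conversation turns into a self-service go/no-go answer.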
Answering that question is going to take the team: our privacy, IT, and governance professionals working together to ensure we're implementing proper policies. And these policies can take a variety of forms. Sometimes it's data masking, as a means of preventing an individual from actively utilizing certain bits of information, limiting its usability without having to completely remove it, perhaps for specific legal purposes. What are our automated retention policies on, for example, emails? Have we implemented them? What's the retention period for that information? How are we identifying information that may be important for future litigation, or for the preservation of records regarding a contract? Being able to understand how effective your data controls and policies are is going to help, and being able to document their effectiveness within the same platform you're all using is going to be helpful too. And then finally, what is the most impactful fix? Awesome, great discussions here. So, absolutely, there's no magic wand when it comes to addressing all of these concerns. While technology is going to be very capable, it's going to take the individuals, the organization as a whole, to prioritize a strategy for how best to solve this issue. If there were a simple technology that could solve all of our problems, we would not be in this position. Some people say ChatGPT; I'm certain it is not. But one thing to recognize is that we're going to have to keep looking at these requirements, interpret them for what is best for the organizations we work with, and continue to respond in an agile manner to this changing landscape. Because even with the onset of technologies like ChatGPT, as you might highlight, there are going to be some additional risks that come with them.
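An age-based retention policy with a legal-hold exception, like the email example above, can be sketched in a few lines. The record schema (a `created` date and an optional `legal_hold` flag) is an assumption for this sketch.

```python
from datetime import date, timedelta

def apply_retention(records, retention_days, today):
    """Split records into (keep, delete) by age, honoring legal holds.

    Records are dicts with a `created` date and an optional
    `legal_hold` flag; this schema is an assumption for the sketch.
    """
    cutoff = today - timedelta(days=retention_days)
    keep, delete = [], []
    for record in records:
        # Held or still-fresh records are retained; everything else expires.
        if record.get("legal_hold") or record["created"] >= cutoff:
            keep.append(record)
        else:
            delete.append(record)
    return keep, delete
```

The legal-hold branch is the part teams most often get wrong, which is why litigation and contract-preservation requirements need to be encoded alongside the retention period rather than handled ad hoc.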
And of course, in our Q&A, we can definitely talk a little more about that. Awesome. So let's talk about prioritization when it comes to reducing these risks. The first step is, of course, to locate and inventory sensitive data. Start from the basics. This is what we work on with our customers all the time. It can be overwhelming to try to understand the purpose for data, to understand where data is moving, and how all of these relationships exist. But in reality, the first foundation is: what data sources contain my most high-risk data? What are the data elements that, if stolen, a bad actor would be able to monetize for themselves, use against our consumers, or use to cause harm or endanger the organization or the brand? So locating and inventorying that sensitive data is going to be highly critical. The question of "do I have sensitive data stored in systems outside of governance policy?" requires a discovery initiative. That discovery can be led with automation as a means of addressing the vast majority of data elements. And then it's going to require careful assessment with individuals, the people who actively use that data or who govern those data sources, communicating with them to identify whether they have covered all their bases when it comes to the data within their source assets. Number two, we want to help automate remediation. I've been a part of maybe three or four remediation exercises in my career, all of which took over six months minimum, and that was just for my portion of the remediation process. Remediation is difficult: effectively addressing all of the requirements of a remediation. Whether that may be, one: if data is over this age, don't do anything to it, just delete it.
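A toy version of that automation-led discovery pass might look like the sketch below. The detection patterns and sample rows are illustrative assumptions; a real classifier covers far more element types and uses validation beyond regular expressions.

```python
import re

# Illustrative detection patterns; real discovery tooling goes far beyond regex.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_source(rows):
    """Return the set of sensitive-data categories found in free-text rows."""
    found = set()
    for row in rows:
        for label, pattern in PATTERNS.items():
            if pattern.search(row):
                found.add(label)
    return found

rows = ["note: reach me at ana@example.com", "ssn on file: 123-45-6789"]
print(scan_source(rows))
```

The point of the sketch is the workflow: automation flags candidate sources, and humans who govern those sources confirm whether the hits are real and whether anything was missed.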
Two: if data is over this age, look into this database and cross-reference to see if this data element is related to us for another reason, and so on. Automating that step is going to be highly critical to ensure that your attack surface is reduced. The processes and policies you need to reduce that risk will often come from the strategy of your data governance organization, as well as the requirements from InfoSec and privacy. Automatically apply and enforce them at scale across all of your data. Even better, leverage a tool that allows you to implement those policies directly in those assets, right? If you can both scan that information and apply a specific retention policy or enforcement action on that data source, you're going to end up leveraging a level of automation that really helps organizations minimize the attack surface from both internal and external threats. Number three, report progress and risks to stakeholders. I want to update this to be more clear: as opposed to "communicate risk reduction and process improvements to the business," I want to say this is a step that needs to be treated as educational. We need to educate the business about the risk reduction and the process improvements, about our activities, in a way that makes it understandable how we are taking part in revenue generation or enabling the priorities of the orgs we work with, right? As professionals, we can get bogged down in these requirements. I've spoken to a few different teams where their privacy office just says, "you need to do this, and don't ask me why." That's not an effective means of communicating what you're trying to accomplish and how this is going to help.
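Those age-based remediation rules can be sketched as a tiny rule engine. This is an assumption-laden toy, not any vendor's implementation; the retention period and field names (`created`, `legal_hold`) are made up for illustration.

```python
from datetime import date

RETENTION_DAYS = 365 * 3  # hypothetical three-year retention period

def remediate(records, today=None):
    """Split records into keep/delete piles: records past retention are
    deleted unless a legal hold requires preserving them."""
    today = today or date.today()
    keep, delete = [], []
    for rec in records:
        expired = (today - rec["created"]).days > RETENTION_DAYS
        if expired and not rec.get("legal_hold"):
            delete.append(rec)
        else:
            keep.append(rec)
    return keep, delete

records = [
    {"id": 1, "created": date(2018, 1, 1)},                      # expired
    {"id": 2, "created": date(2018, 1, 1), "legal_hold": True},  # preserved
    {"id": 3, "created": date(2023, 1, 1)},                      # still fresh
]
keep, delete = remediate(records, today=date(2023, 6, 1))
print([r["id"] for r in delete])  # → [1]
```

The cross-referencing step Phil mentions would slot in where `legal_hold` is checked: before deleting, confirm the element isn't retained for another documented reason.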
We want to educate our business partners, the people participating in these initiatives, on what we're doing and why we're doing it, so that they can clearly understand how their lives are going to be improved as a result, whether that is, again, increased revenue, or optimizing and streamlining processes where they may have a lot of manual intervention or a lot of risk. We want to effectively educate the business, and we want to make sure we educate our consumers. A privacy notice can't read like a legal paper that no one can really understand. We see that as not being effective for garnering trust when it comes to B2C consumers, and we also see that when you have a privacy notice that is clear and simple to understand, it improves the relationship going forward. We want to effectively educate our consumers as well, as soon as we've been able to identify: hey, this is what we need your data for, this is what we're going to use it for, this is how we're going to protect it, and these are the options you have to update or opt out. Effectively communicating and educating the individual on that is going to make this entire process a lot easier. Let's talk more specifically about some of those processes and how to implement them. Technical control validation: detecting at-risk data and initiating controls like encryption, masking, and access restrictions. Leverage, again, the same capability you used to discover that data to initiate those controls, encryption, masking, and access, within the same platform, especially at the data source level. Not at an object level that sits within a massive integration requiring a lot of upkeep and lift, but something that can easily say: you know what, here's a policy for this particular data type; implement this policy, and have that system help you execute it whenever we've identified that data.
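That "policy per data type" idea can be sketched as a simple lookup that dispatches a control action once discovery has labeled a field. The labels and actions below are hypothetical placeholders, not actual platform policy names.

```python
# Hypothetical mapping from a discovered classification to a control action.
POLICY = {
    "ssn": "encrypt",
    "credit_card": "mask",
    "email": "restrict_access",
}

def controls_for(classifications):
    """Return the control actions a data source should receive,
    given the classifications discovery found in it."""
    return sorted({POLICY[c] for c in classifications if c in POLICY})

print(controls_for(["email", "ssn", "loyalty_id"]))
```

The design point is that the mapping lives in one place: when privacy or InfoSec changes a requirement, the policy table changes once and every enforcement point picks it up.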
That's going to add a lot of value. It's going to help you address the data element where it lives, making sure that you're not interrupting business processes any further by having to leverage additional technologies that may be more invasive. Retention orchestration: document retention schedules. Being able to enforce deletion in downstream systems is, of course, going to be the most effective way to minimize that attack surface and ensure that the systems your organization is using are, one, not burdened by having to maintain large databases of data that may not be useful; two, producing reports that remain accurate and relevant, so that duplicate data is no longer an issue; and three, set up so that when these retention schedules and policies are properly executed, the downstream systems and the individuals using them can expect a clean environment to work with, and are aware of how these policies may affect their jobs. So it's important to have proper retention orchestration there. And then finally, and this really comes from the introduction of privacy requirements, data usage requests. Honestly, the best way to facilitate these is to automate them. There are organizations that have solely manual processes around data subject access requests. That is a difficult thing to have to do; it requires a lot of manpower. If you haven't already broken down the silos across privacy, IT, security, and data governance, then having to facilitate a DSAR request on your own, with just access to a ton of data sources, is not fun. I've had to facilitate one myself. It is certainly not fun. So leveraging automation here is going to be critical. Again, your data discovery capability already knows where data elements live; it already knows where to look if you give it a first name, last name, and an email to identify an individual.
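The DSAR lookup described above, give the system a name and an email and have it find matching records across sources, might be sketched like this. The source names and matching rules are illustrative assumptions; real fulfillment also handles fuzzy matching, identity verification, and exemptions.

```python
def find_subject_records(sources, first, last, email):
    """Search every registered data source for rows matching the
    identifiers supplied in a data subject access request."""
    matches = {}
    for source_name, rows in sources.items():
        hits = [
            row for row in rows
            if row.get("email") == email
            or (row.get("first") == first and row.get("last") == last)
        ]
        if hits:
            matches[source_name] = hits
    return matches

sources = {
    "crm": [{"first": "Ana", "last": "Lopez", "email": "ana@example.com"}],
    "billing": [{"email": "ana@example.com", "card_last4": "1234"}],
    "newsletter": [{"email": "bob@example.com"}],
}
result = find_subject_records(sources, "Ana", "Lopez", "ana@example.com")
print(sorted(result))  # → ['billing', 'crm']
```

Once the matches are located, the same index can drive access, rectification, or deletion, which is the reuse Phil describes next.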
Leveraging that same architecture, that same capability, to then automatically fulfill the request, whether it is an access request, a rectification request, or a deletion request, is how we're going to save time. And of course, if it is a deletion request, it encourages the consumer to consider reinvigorating that relationship with the organization. Helping to effectively protect an individual's rights is one of the strongest arguments for doing business with any organization. So being able to handle data usage requests through automated access is going to be highly critical. What other activities exist? Data retention, we've spoken on that a lot. Access control, masking, encryption, cloud migration, of course regulatory compliance, and lastly, unexpected data. Data discovery will be able to help with unexpected data: the places where you've identified data that your organization may not have been aware it had. For many organizations, while you may feel like you have an understanding of your processes and of what is stored within your data sources, it's the unexpected data, the accidental collection, or collection without really recognizing where that data was going, that forms an attack surface. If data is going into a black box and you don't know what comes out of it, being able to identify the unexpected data is going to help cap the opportunity for a bad actor to take advantage of it. So this is how we're going to be best aware of all those different requirements. Now, with that said, we've spoken a lot about some of the capabilities that we recommend for addressing this new and changing landscape. For some additional information, you can visit onetrust.com or email me; I have my email here. And now we're going to take the time to open up the Q&A portion. Thank you so much for this great presentation.
And just to answer the most commonly asked question: a reminder, I will send a follow-up email for this webinar by end of day Thursday with links to the slides and links to the recording. You tackled some of these already along the way, but I wanted to bring up this question that came in. And if you have questions for Phil, feel free to submit them in the Q&A panel. This question came in, but I want to ask it again because the debate continues: what is sensitive data, and is age sensitive data? And I want to take it even a little bit further; I heard a question earlier about whether a photo of somebody is. I'm going to go ahead and say yes, but it depends. And this is where the silos need to break down, because the definition of sensitive data is simply going to change for each business and organization. The places you operate are governed under different rules, and those different jurisdictions have different rules on what is considered sensitive data. This is why it's so important that your teams are involved as part of this initiative, because yes, under certain laws a photo is considered sensitive data, right? A social security number may be considered sensitive data. Biometric data might be considered sensitive data. And in other places, those things are not even close to being considered sensitive, right? So it's important to know and understand what those requirements are. If you look at the OneTrust platform, we're able to help with that, when it comes to helping you understand what is sensitive under the different requirements you may have. But again, it's going to change for every single organization, and it's important to have a process; simply assessing that, that initial assessment, will help organizations in a lot of ways. Thank you. So what do you think about classifying the data into several different buckets from a sensitivity perspective? That's a good question.
So classifying that data, once we've identified its nature, whether it's sensitive personal data and so forth, the classification is going to have to serve the business, right? For some of the B2C organizations I've worked with, their classifications of that sensitive data may follow questions like: is this data related to marketing activities? Is this data related to a contractual obligation you may have to that consumer? What's required for marketing is not what's going to be required to process a credit card, for example, right? So being able to classify that data is going to be critical, as well as having the business be in agreement on what level of risk storing and using that data carries. I give my email out all the time, so I would not classify my email as more sensitive than my personal credit card or my personal phone number, right? Depending on the organization, you're going to have those specific requirements. And that's where, as part of the assessment or discovery period, a lot of these distinctions come out, places where we're able to say: well, you know what, email may be a lower sensitivity for our organization, because we use it all the time, we freely collect it, and people freely give that information. However, if that email address is also associated with credit card and billing information, then we may have a different conversation. We may need to think about how we score that particular presence of data. Thank you. And can you talk a little bit about the tools and processes to discover sensitive data? Yeah, so data discovery, primarily, if you think about it, is a way for you to identify the presence of a data element.
I will say this: OneTrust is pioneering some of the ways in which we understand that email, for example, as someone mentioned in the chat, would be considered sensitive personal data. An email in particular could be, of course, in this case it contains my name, but when it's tied to other relevant information that may personally identify me, such as my address or my credit card info, it may need to be treated a little more carefully. Data discovery not only has the ability to help you understand those particular data elements, but also helps you understand combinations, right? What does it mean to find an email address in a lone table, not tied to anything, versus finding an email address attached to my billing info? Data discovery will be able to help you identify that, provide context for how that information should be treated, and surface some of the recommendations or policies that may apply to this data. And from there, if you have a data discovery solution that is also integrated as part of your ecosystem, it can help automatically implement the controls. So once we find that data, we're able to identify and make sense of the policy it falls under, and then immediately provide the mechanism for us to protect that information. Data discovery goes a long way with that, as well as the ability to, again, map that discovered data to the regulations, the requirements, and the risks that you may identify as pertinent to the organization. Mapping that data back lets you say not only did I discover that my email sits within this particular data source, but I can now put it into context: oh, that email came from this marketing campaign, or came from this web form, or was collected from this call center, right? And make sense of how that data got here and where it's now headed.
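That combination point, an email in isolation versus an email sitting next to billing data, can be sketched as a context-aware score. The weights and the escalation rule below are assumptions for illustration only, not how any particular product scores risk.

```python
# Illustrative base scores per discovered element type.
BASE_SCORES = {"email": 1, "address": 2, "card_number": 3}

def sensitivity_score(elements):
    """Score a table by its discovered elements, escalating when an
    identifier (email) co-occurs with billing data (card_number)."""
    score = sum(BASE_SCORES.get(e, 0) for e in elements)
    if "email" in elements and "card_number" in elements:
        score += 3  # the combination is riskier than the parts alone
    return score

print(sensitivity_score({"email"}))                 # lone email: low
print(sensitivity_score({"email", "card_number"}))  # combined: escalated
```

The useful property is that the score depends on what co-occurs, so the same email field can be low risk in a newsletter table and high risk in a billing table.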
After that contract is done, this email is going directly to our archive server, and then after that, it's going to be deleted, right? Having that full picture is going to be super critical, and that's where data discovery can also help in that initiative. Perfect. So, okay, Phil, I have to ask the big question: AI. Yes. You know, even here at the conference, so many people are asking about this. So in what ways can AI impact our governance strategy? What strategies exist to help mitigate risk in this space? I don't want to oversimplify it, but it similarly boils down to: we need to know what data is used in AI, how it is to be leveraged within it, and what the potential is for that data to be accessed from that AI model. I can probably plug in information related to any publicly known individual, Elon Musk for example, into ChatGPT and get a complete written overview of his background and any other information; whether it's correct or not is up to, you know, the engineers over at OpenAI, but that information could be sensitive, could be personal in nature. Did Elon consent to that? Well, I mean, he's a public figure, so it's different. But what if a lesser-known individual's information made it out there, or was included in that particular model? The important questions are: what data is being utilized within these models? What is the process to determine what data is going to be included and what data is going to be excluded? What is the risk of processing that data to build a model, and then what is the risk of that data potentially being extracted from the model through adversarial machine learning methods, ways to trick the machine into revealing information? All of those risks exist, right?
So we want to have an inventory of how we're going to be using AI, what kind of data is going to be utilized to seed that AI, and for what purpose, because we want to clearly communicate to any individual, especially if you're using AI to facilitate certain business functions, for what purpose you are using that information and how that information is supposed to benefit the consumer. We want to have that in one location, in one place, because while there aren't many AI regulations or requirements just yet, we see them on the horizon. And we understand that regulatory bodies are starting to talk about AI, as they have within the States as well as Europe; there are already definitions that people use to allude to AI, such as the automated processing or automated decision-making clauses that exist within GDPR, which dictate that an individual must be able to consent to that, right? People are beginning to recognize and see: oh, that includes AI, that includes our use of AI. So it's going to have to start with maintaining an inventory and then knowing what we're doing with that data. So very true. Yeah, we've heard that from a lot of different companies. Some are even questioning whether they want to start at all because of the risks involved. So, Phil, we've got about four minutes left. What are the best ways to integrate privacy and security functions into the data governance strategy? I would simply open the door to them. One of the first things, even before you leverage technology, or even before you begin the process of building a data catalog and providing them access to it: open the door, right? And highlight and align on your goals with those other teams. Help them recognize you're not there to do their jobs.
They're not there to do yours; instead, we are all working towards the same ultimate goal, which is to enable our organizations to use the data that we want to, right? So opening the door is one of the first steps. The second is obviously ensuring that we have communicated our strategy. Once we've gotten together and built a strategy and an idea of what our risks are, we want to effectively communicate that strategy to our higher-ups, helping them connect the dots on why this initiative is going to end up saving them money, if the case is about cost, or helping them enable access to more revenue, if you're able to produce technology that can collect more data because people are consenting to provide more, because they feel there's transparency around the data that is provided, and so they're more inclined to share whatever that may end up being. Those are typically the first steps, even before we begin to talk about some of the more tactical measures, such as leveraging data discovery. Without that buy-in, without that agreement to want to work together, an initiative could be stood up, the technology could be stood up, the capability could be there, but I haven't yet seen an organization that lacked that buy-in and was still successful with it. So it's really important that that first step of opening the door and getting executive buy-in happens; it's going to be critical. Very cool indeed. And we've got another question coming in, and I'm going to slip this in because I know it's a quick answer, and I already know the answer to it, which I find very exciting. I understand that OneTrust is focusing on the US's CCPA and Europe's GDPR regulations. Will it be easy for you to include other requirements and other laws around the world? Absolutely. Absolutely. And they're already there.
One of the things that OneTrust really does well is that we've embedded our own regulatory intelligence tool in the platform. So if there's a privacy law out there, there is a checklist, there is a pop-up, there is an action available for every customer to address that particular privacy law's concerns. And we even have laws that aren't active yet but are signed. So if you want to begin to understand your requirements under US state-based privacy laws, such as Utah's, which is going to be live towards the end of this year, as well as the many other states that are looking to go live next year, you can begin doing that right now within your OneTrust platform. And you can expect us to keep that up to date as this entire environment keeps changing. Ah, perfect. Well, thank you so much for this great presentation, and thanks to OneTrust for sponsoring today's webinar. Again, just a reminder to everybody: I will send a follow-up email by end of day Thursday with links to the slides and links to the recording. Thanks to all of our attendees for being so engaged. I really appreciate it. Hope you all have a great day. Phil, thank you so much. Awesome. Thank you so much. Thank you, Shannon. Thanks, y'all. Hope you have a great day.