Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DataVercity. We would like to thank you for joining this DataVercity webinar, Three Steps to Achieving Data Intelligence, sponsored today by OneTrust. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. We will be collecting questions via the Q&A panel, or if you'd like to tweet, we encourage you to share highlights or questions on Twitter using the hashtag DataVercity. And if you'd like to chat with us or with each other, we certainly encourage you to do so. To open the Q&A or chat panels, you will find their icons in the bottom middle of your screen. Note that the Zoom chat defaults to sending only to the hosts and panelists, but you may absolutely change that and network with everyone. As always, we will send a follow-up email within two business days containing links to the slides and the recording of the session, plus any additional information requested throughout the webinar.

Now, let me introduce our speaker for today, Nick Brandrith. Nick serves as a data governance evangelist at OneTrust. In his role, Nick advises companies throughout their data governance implementations, helping them establish processes that support operations and align with their enterprise objectives, including better knowledge of and insight into their data landscape, ensuring data meets compliance and policy requirements, and enabling greater use of that data. Throughout his career, Nick has evangelized cutting-edge technologies and implemented modern solutions to resolve challenges more effectively, and his pursuit of innovation has helped establish the companies he has worked for as leaders in their space. And with that, I will give the floor to Nick to get today's webinar started.

Hi, thanks. I appreciate the time and I appreciate everybody jumping on today. Maybe I'll start things off by saying happy holidays. I can't believe it's already December of 2021; the year sure did fly by fast. I'm super excited about the content and the conversation I'm going to share with you today.

Before we dig in, from an agenda perspective we're going to touch on three areas. The first step we're going to focus on is the data landscape: looking at data and the way organizations are leveraging data today. This is a topic that's very near and dear to my heart, one I've been focused on for about five years now, where there are a lot of challenges we see organizations bumping into, and many of them are ones they're not even really looking at yet. From there, we'll move into the keys organizations need in order to bridge that gap, build out data intelligence, and get themselves on the journey to what I would call true data intelligence. And then finally, at the end, we'll cap off with how OneTrust can help organizations solve some of these problems.

So, digging in and starting at the beginning: data truly is now the competitive advantage that every organization is looking to leverage in order to improve business functions, improve brand loyalty, gain more customers, and so on. And there are a couple of key things coming out of this, the first being the public's attitude toward sharing data.
Even though we see breach after breach, and we've heard a lot about privacy regulations over the last few years, the public wants organizations to provide them with better services and, because of that, is willing to share data. I'll touch on this in a bit more depth in a minute. On the flip side of the coin, you've seen organizations, and I think Apple is a great example of this, that right at the beginning took privacy and said, hey, we're actually going to make the message around data privacy a competitive advantage.

So there's a lot happening when it comes to data within an organization, and the ask that the business has for data is quite diverse across the organization. Organizations, and the business in general, are asking to consume more and more data, and they want to consume this data, very simply, to make data-driven decisions that help the business operate better. Maybe that's looking at how to launch a new product in a specific geographic region, or how to get more predictability around forecasting models. Personalization from a marketing perspective is massively important, and all of us, regardless of our demographic, have felt that change over the last three or four years, whether you want to call it the Amazon effect or something else: being able to get the products we want in front of us and delivered to our doorstep within a couple of days, simply by ordering them from a handheld device, has become something we're accustomed to and that we expect from businesses. Businesses are trying to optimize processes as much as possible, from a logistics standpoint and from an internal operational standpoint. And of course there's the customer experience that comes along with that as well: when we do business with a particular company, we want that awesome level of customer service. Technology is really driving this, and data is really helping organizations drive it.

But with this dynamic ask for data comes the need for technologies, and most CTOs and most organizations are going through a massive transformation right now, starting to leverage a lot of technologies that give them the ability to deliver these very strategic initiatives to the business. And this in and of itself creates a problem. I like to call it the modern-day data problem. If you think about all the massive innovation and awesome, cool technology that's come out in the data space over the last decade or so, an organization's ability to consume, transmit, and analyze data has grown exponentially, and thus the volume of data that an organization can leverage, needs, and collects has grown exponentially as well. Not only that, but data has moved from sitting in an on-prem Oracle database, or a legacy SMB file share, or SharePoint sitting in on-prem data centers, up into the cloud. And there's a combination of this happening within most organizations. Some are cloud native, but most have a bit of this dynamic going on: I'm moving data from Oracle up into AWS or into Azure. I'm starting to leverage unstructured data sources out in the cloud. Maybe I have IoT devices streaming data into things like S3 buckets. And on top of all that, we're leveraging SaaS technologies.
Salesforce has been around for a long time, and many organizations are utilizing it or some other SaaS-based CRM. Many organizations are leveraging things like Workday and the SaaS-based HR types of systems. And then you have the king of the hill when it comes to this modern-day data problem: these large data repositories, data warehouses, and really data lakes, if you will. Just yesterday, Snowflake came out with very, very positive earnings, and that's a testament to how much technology like that is being adopted out in these environments. Not only are organizations adopting things like Snowflake, they're adopting things like Databricks, or BigQuery sitting in GCP, and they're moving data out to those environments in massive volume and sharing data back and forth with business partners.

And this starts to create a problem for organizations, in that being able to rely on your understanding of your data becomes extremely difficult when you have all of these dynamics: massive volumes of data, a wide variety of data, and the velocity of data that we see in these environments. The whole purpose behind all this, going back to that first slide, is to give the business access to the data to make data-driven decisions. However, there are a lot of risks associated with it, and I'll touch on this in more depth. One of the big risks we see right away is that the definition of sensitive data, of sensitive personal information, has changed and evolved. We've moved away from the NIST PII standards, where we're looking at something like a first name, a last name, or a credit card number, which we can find with lookups or Luhn checks and things like that (there's a quick sketch of a Luhn check below). It's moved into data that could potentially infer behavior. We'll touch on this in more depth in a minute.

Looking at that complexity and that evolving data risk: if you think back to 2016 and Cambridge Analytica and what came out of that, the reality is that public sentiment has shifted dramatically. We do want to give organizations our data to use in ways that are positive and produce a meaningful impact for us. However, there is an expectation from the public that the organization is treating that data with the appropriate fiduciary responsibility, and that it is not either (a) misusing that data in some way or (b) making some type of biased decision about us because of the type of data it's using and the decision-making process it's using it for. On top of that, if you think back a good ten years or so, hacktivism was pretty predominant; you heard about it in the news a lot. You would see hacktivists defacing websites, DDoSing companies, taking down their services, that type of thing. Hacktivism is still very real and very alive. And if you think about how somebody can truly impact a business they're targeting from a hacktivist perspective, it's all about the data. If I can breach a company and post data online in the public view, and there is data in that data set that falls outside what the company's privacy policy statement says, that company is in a lot of trouble.
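As a quick aside on those Luhn checks: here is a minimal, generic sketch in Python of the checksum that traditional scanners run to spot candidate credit card numbers. It illustrates the technique only; it is not any particular vendor's implementation.

```python
def luhn_check(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum,
    the classic structural test for candidate credit card numbers."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if not 13 <= len(digits) <= 19:  # payment card numbers are 13-19 digits
        return False
    total = 0
    # Walk from the rightmost (check) digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

print(luhn_check("4111 1111 1111 1111"))  # True -- a well-known test number
print(luhn_check("4111 1111 1111 1112"))  # False -- fails the checksum
```

The point in context: a check like this only finds data that looks like a known pattern, which is exactly why the newer, inference-based definitions of sensitive data are so much harder to detect.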
If I poison data in a data set that's having analytics done around it, I'm potentially causing a lot of problems for that algorithm, and basically all of that work has to get thrown out and started over. What ends up happening with organizations, when we start looking at these risks, data regulatory risks, cyber risks, other types of regulatory risks, is that it starts to create a business problem. You see organizations take a very risk-averse approach, where an imbalance starts to happen and the knee-jerk reaction is to lock down the data. You see this very frequently, especially when we're talking with people deploying things like Snowflake, where data has moved out to these big data lakehouse platforms and they're trying to enable access, but the business is reticent, on the risk side, about giving access to that data, because they're not really sure what's there and they don't want to overexpose the business to risk. What organizations need to do is find the balance where they can move from that risk-averse posture to a more risk-aware posture that enables the business to drive value from the data and to be more agile in their usage of it. And data intelligence really is built from this concept of unifying disciplines across the organization under a unified governance strategy so that they can create this balance.

So let's take a look at some of the keys to bridging that gap: how do we move that teeter-totter from imbalance to balance? These are some initial steps that organizations have to take, and have to make sure they're nailing, in order to get to the point where they can drive what they view as data intelligence: I want to use the best-quality data, I want to use it more effectively in how I'm making business decisions, and all of those types of things. It starts with three keys, and these are really advantages organizations can gain in their data governance strategies and programs just by understanding these three principles and putting them into practice.

The first one is truly knowing your data, and I'll speak to this in more detail in a second. The second is understanding that data holistically: not just what data do I have, but what business purpose do I have for processing this data, and what does this data mean to the business? It's interesting: I was on three different conversations yesterday with various organizations, and in each one of them, different business units had different definitions of what the data is and how to identify it. So being able to understand that data holistically across the organization is extremely critical. And then finally, unifying tools and processes. What we've known as governance, privacy, and security for the last fifteen years or so has been very siloed, and there's a need to unify those tools and processes. Let's start with truly knowing your data and how it fits into the overall strategy organizations need to take.
If you think about the way most organizations go about trying to understand their data now, they're typically doing it in a couple of different ways. They're either relying on legacy technologies, which, going back to that earlier slide, don't account for the volume, the variety, and the velocity of the big data problem they're dealing with now, or they're still relying on metadata and doing metadata scanning. And the reality is that metadata scanning cannot solve this problem effectively. Organizations need to use technology built on the same big data computing concepts, leveraging things like machine learning, to do deeper data discovery across structured data sources, across unstructured data sources, whether on-prem or in the cloud, and across semi-structured data sources as well, because that's where a lot of data is moving. And it needs to go broader than that PII definition: whether that's product data that might exist within the organization, or behavioral data, or data that can infer religion; I've got a really good example coming up that digs into why this is so problematic. And then there's company data: if you're a company growing by M&A, those projects typically have some type of code name on them. Do I have a document with non-public material information that could expose something to somebody who shouldn't have access to it, sitting in the clear in an S3 bucket or in OneDrive somewhere?

So, on why this whole concept of going deeper than metadata is so critical, let me give you a really quick example. Free-form text fields and open text boxes are the bane of this problem. They're needed, but they create massive issues. The example you see here, from Salesforce, is a customer record for a wealth management advisor. Think about what a wealth management advisor does: they help you plan for life's events, they help you plan financially for goals you're trying to accomplish in the future. They know tax information, they know who you're donating money to, these types of things. In this particular example, you'll see there's a partner name, an Alcoholics Anonymous volunteer note, upcoming vacations, a reminder to send a Hanukkah card, and an end-of-year Democratic Party donation they're going to make, obviously as a tax write-off as well. What you have happening here, in this description field, is that if I'm just relying on the metadata, I do not know this data is here. I'm a financial services institution that has health-related data, behavioral data, data that infers sexual preference, and data that infers political affiliation and religion, all sitting in one free-text field; the sketch below shows, in miniature, why content-level scanning catches what metadata scanning can't.
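A minimal sketch of that contrast, with entirely hypothetical field contents and patterns: a production discovery tool would use trained models and context rather than a keyword list, but even this toy content-level scan surfaces categories that the column's metadata (a generic "Description" field) never could.

```python
import re

# Hypothetical category-to-pattern map for illustration only; real data
# discovery relies on machine learning classifiers, not hand-built keywords.
SENSITIVE_PATTERNS = {
    "health / behavioral":   re.compile(r"alcoholics anonymous", re.I),
    "religion":              re.compile(r"hanukkah|christmas|ramadan", re.I),
    "political affiliation": re.compile(r"(democratic|republican) party", re.I),
}

def classify_free_text(text: str) -> list[str]:
    """Return sensitive-data categories inferred from a free-form field."""
    return [category for category, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A toy version of the wealth-advisor description field from the example.
description = ("Partner name on file. Alcoholics Anonymous volunteer. "
               "Reminder: send Hanukkah card. End-of-year Democratic Party "
               "donation planned as a tax write-off.")
print(classify_free_text(description))
# ['health / behavioral', 'religion', 'political affiliation']
```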
And if I'm moving this data around, maybe into an area where I'm trying to decide how to create better products for this particular customer, or to make a lending decision about them, I am potentially introducing bias. So it becomes extremely problematic if we're relying solely on metadata and not leveraging machine learning and the types of technologies that can help us with these trickier, more complex pieces of data.

That moves us very nicely into the second key: understanding the data holistically. Data can mean various different things to an organization based on a regulation, based on an attribute on its own, or based on attributes taken together. One of the key things every organization needs to do is accurately classify data for the business as a whole, but also from a regulatory and a security standpoint. A great example I like to use here, and one of my favorite customer types to talk to about solving this problem, is clinical research institutes, because what they do is so beneficial to the greater community as a whole. It's truly an awesome thing. They are leveraging data, sharing it collectively between various research parties, to solve health problems and save lives. However, personal health data is governed in many different ways. Just consider the difference between health-related data and mental health-related data: mental health-related data needs to be treated in a specific way. So having the ability to go in and accurately classify, tag, and categorize this type of data becomes extremely critical, so that an organization can truly understand: What data do I have? Where is it? What business purpose do I have for processing that data? Do I have data that I have no business purpose for processing? Where is data that's potentially breaking my privacy policy? If my privacy policy says I'm only collecting certain data for certain consumers, do I have data living outside of those bounds?

And then finally, the third pillar here is unifying tools and processes. This is where we see a lot of organizations that have not only multiple different technologies out in their environments, but multiple stakeholders who each own these various solutions and operate autonomously; sometimes there's some cross-function happening, but they're still running very autonomously. The reality is that governance can't be done in a silo anymore. Organizations need to operationalize and understand privacy and how they're handling it. Am I doing privacy impact assessments? How can I automate that? Am I still living in Excel spreadsheets? How am I building out my records of processing activity?
How am I executing on consumer rights requests, if I'm an organization getting a lot of them? From a security perspective, we've got risk management out there, along with all the other controls we're leveraging to help protect data, like encryption or masking technologies, and we've put policies in place that say, hey, certain types of data shouldn't be retained beyond a certain period of time, and certain types of data shouldn't be in certain assets out in the environment where there's more access and exposure. And then of course you've got the data catalog and the traditional metadata management perspective: the data stewards out there curating the best data and stewarding the data within the organization, so the organization has good-quality data and understands where that data is and where everything resides.

But take CPRA as a great example. If I'm trying to solve these very complex privacy problems, like understanding how I'm processing new data or standing up a new process for data, while enabling access to that data, while understanding where the data is, while securing the data, and doing it all separately, it creates bureaucracy. It slows down the process of giving access to the data and enabling agile use of it. It also creates a lot of gaps, in that the organization doesn't know whether it has all its ducks in a row, so to speak. So really, what needs to be understood is that privacy, security, and governance are all part of the same continuum, and these disciplines can't happen in silos anymore, and the technology can't happen in silos anymore either. It all needs to be part of the same continuum, both from a business-alignment perspective and from a technology-alignment and automation perspective. That gives an organization the ability to understand: What data do I have? Where is it? What business purpose do I have for processing it? What risks does that data bring to the organization? Am I mitigating those risks with policies, encryption policies, maybe retention policies? And, even more important, can I empirically verify that the policies, processes, and tools I have in place to protect the data and minimize its risk to the organization are effective?

A great example that clearly illustrates how most organizations struggle with this capability is that use case of enabling access to things like Snowflake and these big data lakehouses: I've pumped in a bunch of data, but I'm not comfortable giving access to it because I don't empirically understand it, and I don't know whether it's breaking some of the core data protection policies I have in place. Let me give you a really good example of why this is so important and where it all ties together. Let's look at processing data for a new business purpose. The business wants to use customer data for a purpose it wasn't initially collected for. This raises a couple of different issues.
It's a policy issue and a legal issue, but it's also a technology issue. The first thing that needs to happen is the business requesting a privacy impact assessment; since it is a new business purpose, one needs to be triggered, and the triggering should be automated. After that's checked, the RoPA, the records of processing activity, needs to be checked for a legal basis for processing that data. Depending on the legal basis, either policies may need to get updated, if the processing isn't covered by a legitimate business purpose, or you may need to go out and get updated consent from the consumer because of that new use of the data. Then, once that PIA is approved and all those updates have been made, access can be given to the data, and I know that, number one, I'm not overexposing the organization to too much risk by doing this. Number two, going back to that imbalance slide: one of the reasons there's such an imbalance is not just that we're locking down the data; it's also that we've had to introduce so much process into granting access to that data and following through on these regulatory requirements, and we haven't automated it in any way at all. Automating this process under a unified tool set lets you alleviate the drag you're putting on that part of the process, in terms of giving access to the data. (There's a small sketch of this gated flow below.)

So, just wrapping up here, this is really important for establishing that trust fabric. Going back to that very first slide: the consumer, and I'm one of them, wants companies to leverage the data. I want better products put in front of me. I want better services put in front of me. I want to make better use of my time. I want to be more efficient. I want to spend less. I want my data to be leveraged in a way that is good for me. But I also need to trust that the company is in fact acting with the appropriate fiduciary responsibility in the way it uses data: that it's not being misused, not being misshared where it could be weaponized against me, not being overexposed as a potential cyber risk, all of those types of things. In order to establish that trust fabric with the consumer and drive data intelligence, you've got to start at key number one: know your data. What data do I have, and where is it? Then number two, understand the data holistically: what business purpose do I have for processing this data? Do I have data that I don't have a business purpose to process? And then finally, unify those tools and processes across the organization, so you can answer the questions around what risk this data brings to the business. What new risk am I introducing by bringing in new data, or a new business purpose for processing existing data? What policies do I have in place to protect that data, retention policies, minimization policies, my privacy policy that says this is the type of data I collect on the consumer, and are those policies effective?
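Pulling together the gated flow just described, here is a minimal sketch; every name in it is hypothetical, and a real implementation would call out to PIA, RoPA, and consent systems rather than the in-memory stand-ins used here. The point is simply that each regulatory step can become an automated gate in front of the access grant instead of a manual spreadsheet exercise.

```python
from dataclasses import dataclass

@dataclass
class PurposeRequest:
    dataset: str
    purpose: str  # the new business purpose

def pia_approved(request: PurposeRequest) -> bool:
    # Placeholder: in practice this would trigger an automated privacy
    # impact assessment workflow and return its approval status.
    return True

def grant_access(request: PurposeRequest,
                 ropa: dict,       # (dataset, purpose) -> legal basis
                 consents: set) -> bool:
    """Gate data access behind the privacy checks described above."""
    # 1. A new business purpose automatically triggers a PIA.
    if not pia_approved(request):
        return False
    # 2. Check the records of processing activity for a legal basis.
    key = (request.dataset, request.purpose)
    if ropa.get(key) is None:
        # 3. No legal basis on file: block until policies are updated or
        #    fresh consent is collected from the consumer for this use.
        if key not in consents:
            return False
    # 4. Only now is access to the data granted.
    return True

ropa = {("customers", "fraud detection"): "legitimate interest"}
consents = set()  # no updated consent collected yet
print(grant_access(PurposeRequest("customers", "fraud detection"), ropa, consents))  # True
print(grant_access(PurposeRequest("customers", "marketing"), ropa, consents))        # False
```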
Can I validate the effectiveness of those controls? Do I know that I don't have restricted data sitting in the clear in S3 buckets, or in my big data lakehouse platform where I do want to enable more access to the data? If I have data breaking retention policies, do I know it? If I need to process data for a new business purpose, do I know what that means? And can I move from a posture of being risk-averse, a drag on the organization's ability to use data, to a posture of risk awareness and business enablement, which is really what data intelligence should be for the organization?

And now I'm just going to wrap up really quickly with how OneTrust helps from this perspective, and I thank everybody for their time today. OneTrust is the fastest-growing software company in history, and there's a big reason for this: what we've gone out and done is pioneer this space called trust. It's very evident that the market has taken notice and that what we're doing is really resonating with the marketplace as a whole. We're growing extremely fast, we're very well funded, we have tens of thousands of customers around the world, and we're a very innovative organization from a technology perspective. What we've built is what we call a trust platform. This trust platform enables organizations to act in an era of trust, and to prove they're acting in it, both to internal stakeholders, which is really critical (key executives and the board, which is always asking questions every time there's a new headline, as well as the employees themselves; employees want to know they work for a good company, and with CPRA and its changes, employees are also now in scope for that California privacy initiative), and to external stakeholders (us, the consumers, the public, the street, anybody for whom brand loyalty and the meaning of that brand is important). Trust becomes an essential element they have to both act on and prove.

What this platform does is give organizations the ability to handle things like third-party risk management. Environmental, social, and governance is becoming a very hot topic: what are organizations doing with their carbon footprint and those types of things? How do I manage consent and preferences across the organization? And of course, what was really the tip of the spear for us: operationalizing privacy. As well as: how do I build out an actual source-of-truth catalog? How do I automatically build out a data dictionary, creating a source of truth that I can both manage data risk against and use to enable access to the best-quality data to solve a particular problem or answer a particular question for the organization? It all hinges on our data discovery technology as well. This is all built on a single code base so it interacts well together, which lets us work in a single, non-siloed way across the organization. And from a governance and data intelligence perspective, what we really deliver is the ability to automate a lot of your privacy program, but also to deliver on a lot of the key governance initiatives you're trying to accomplish, whether that's enabling use of the data or mitigating risk to the data.
And it all starts with scanning and enriching the data, the actual data itself. We'll go out and scan the actual data in your environment, tag that data, and generate metadata. That metadata is what we then use to automatically build out the data dictionary, apply the various terminologies, create the data inventories, and generate that metadata catalog of all the data you have out in the environment. From there you're able to apply the business semantics you need across the various parts of the organization, which really helps you nail down classification and understand what that data means to different parts of the business. And then finally, we help you better govern that data: understanding the risk from a regulatory perspective, and not just helping you implement policy, but helping you validate that those policies are effective, which is really critical. Am I able to tell if I've got data sitting out in the clear that should be encrypted? Am I breaking a retention policy? What this does is help create a knowledge graph: by leveraging those same big data computing technologies and concepts, we build out a knowledge graph that gives organizations flexibility and speed when they search for data, and gives them the ability to do data policy enforcement. If we find data that breaks your retention policy, you can initiate action to remediate it, whether that's with other tools you have in the environment or with native capabilities in your existing technologies, which really helps you manage risk to the data. Then there's reporting and analytics around all of that, which lets you look at trends and understand what your data footprint looks like, whether you have data drift, whether you have data in parts of the organization where you shouldn't. And finally, as a natural byproduct of all that, we help you execute and automate privacy rights requests, which greatly reduces the resource impact, not only from a people perspective but from a data asset perspective as well. And with that, Shannon, that wraps up my content for today.

Nick, thank you so much for this great presentation, and thanks to OneTrust, as always, for sponsoring another great webinar. Just to answer the most common last-minute question: as a reminder, I will send a follow-up email by end of day Monday with links to the slides and the recording, along with anything else requested. Thank you for your questions for Nick; feel free to send them in via the Q&A portion of your screen. I'm going to dive into one here that came in early. So, Nick: with all these data sources and potential threats, where do you suggest one start in identifying and addressing risk?

Wow, that's a good loaded question. From a best-practice perspective, what I've seen over the course of the last five years or so of working with organizations is prioritization. We know we have to go out and look at a lot of different sources in the environment, because I definitely have these legacy SMB file shares sitting on-prem with legacy data in them; we know it, we just don't know it empirically.
I've got things like S3 buckets that have basically become the kitchen junk drawer: it was supposed to be just our keys, and now it's full of 27 Starbucks gift cards and we don't know how much is on each one of them. So they become very problematic. What we say, and see as being effective, is: prioritize those data sources. Look at the data sources where you're trying to enable a lot of access to the data quickly, like your data lakehouses. What are your most critical applications, where you know you have a lot of sensitive personal information? And with structured sources, people sometimes get a very false sense of security, thinking, oh, we're already scanning it; but going back to my open-text-field example, it is very, very frequent that you will find values in some of those columns that you're not expecting to find. So go to those core sources first and understand your data there, where it is specifically in those sources. And then, the important question organizations have to ask themselves when solving this problem, without getting stuck in too much analysis around it (and this is why it's important to go to those high-priority sources first), is: what am I going to do if I find breaks in certain data policies? Every organization has some type of data protection policy in place. What am I going to do if we find breaks to it? What does that remediation workflow look like? Once you figure that out, you can iterate. It's a very agile approach that lets you continuously roll it out into other parts of the organization, and it's much more effective.

So: can OneTrust conduct the identification of personal health records to be used for analytics, ensuring that re-identification exposure is not possible? Can you speak to what OneTrust does for that?

Yeah, I love this question, actually. A lot of organizations have gone through some type of pseudonymization or anonymization process with their data. And I think the famous study here was a US census study done at Carnegie Mellon years ago. The famous example is age, gender, ZIP. If you have those three classifiers together, your ability to re-identify someone is something like 87%: you can re-identify roughly 87% of the US population using just those three identifiers. Now think about health data, especially things like rare disease codes. Let's say we're doing analytics around a rare eye disease; this is actually a real-world example. ZIP code on its own is public information, who cares about ZIP code? But if all of a sudden I've got ZIP code and a rare disease code together, my ability to re-identify that person is really, really high. So what we can do is use a lot of contextual awareness to look for those types of identifiers, classifiers, we call them, together. And if we find that, we flag it.
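A minimal sketch of that kind of contextual check, with hypothetical column names and combinations: individually innocuous classifiers get flagged when they co-occur in the same data set.

```python
# Hypothetical quasi-identifier combinations; each attribute may be harmless
# on its own, but together they can make records re-identifiable.
RISKY_COMBINATIONS = [
    {"age", "gender", "zip_code"},       # the classic ~87% example
    {"zip_code", "rare_disease_code"},   # the rare-disease scenario above
]

def reidentification_flags(columns: set[str]) -> list[set[str]]:
    """Return every risky quasi-identifier combination present together."""
    return [combo for combo in RISKY_COMBINATIONS if combo <= columns]

dataset_columns = {"record_id", "zip_code", "rare_disease_code", "visit_date"}
for combo in reidentification_flags(dataset_columns):
    # A real pipeline would go further and initiate masking of one of the
    # attributes via the warehouse's native mechanism, as described next.
    print("Re-identification risk, consider masking one of:", sorted(combo))
```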
And if that's sitting in something like Snowflake, or a different technology where we can leverage the native mechanisms within that technology, we can initiate masking of the attribute that you say shouldn't be exposed there. The same thing applies from a HIPAA-record perspective: if certain identifiers are in the clear, making a record re-identifiable, you can have a policy in place and we would catch it and flag it: hey, this is an XYZ-related personal health record, and it should be following this particular type of policy, go out and fix it. We'd not only be able to catch it, we'd also be able to give you the intelligence as to what policy it's breaking and what the remediation is, and if we have the mechanism to do it, we can automate some of that remediation. I love that question. Great question.

So, Nick, does OneTrust help identify data rot, to reduce regulatory compliance and security concerns or issues?

By data rot I'm assuming you mean bad data, obsolete data, that type of thing, and that's right down the alley of one of the main use cases we always see. Going back to, I don't know why I keep picking on these on-prem SMB file shares, but a lot of organizations still have these legacy on-prem file shares, or legacy SharePoint sitting on-prem, and they're migrating it up to the cloud. Why would you just migrate all that data to the cloud if you know you've probably got some problematic hotspots of data within that source? Being able to look at that data and say, hey, we've got data that's older than seven years, it breaks retention policy, it shouldn't even get migrated. Or maybe I already have data out in the cloud and I'm moving to GCP, moving this data into BigQuery: being able to understand what the data in that source is, before it's moved into other sources, becomes a very hot topic. So again, going back to the basic fundamentals of understanding my data: I'm discovering and identifying the data that's in those particular sources, and I understand, hey, okay, maybe this is GLBA-related data, so I do need to keep it even if there's a right-to-be-deleted request, for example, under CPRA or CCPA. Being able to identify and understand that: we help handle all of those types of concerns. And then, more than anything, especially having spent so many years in information security, it's understanding the data I have out in the environment and whether I'm protecting the right sources in the right way. To solve those types of problems, we give visibility into all of those types of issues and can help automate a lot of the remediation around them as well.

Well, if you have any additional questions for Nick, I'll give you a minute here to submit anything. Oh, here we go: what is OneTrust's typical data governance project lifecycle?

Sure. This is an interesting question, and a loaded one also.
I have an ITIL-related background, and a typical deployment gets up and running fairly quickly. Going back to the example I gave for the first question: you can set up scans and that type of thing on your priority data sources, build out that part of your program, and decide what you're going to do with that data. Typically, organizations, if they're very organized, can start knocking those things out in six to nine to twelve weeks for that first core group. And from there, while you're doing that, you're able to apply policy, understand the quality of the data, understand a lot of these types of things. There are two things that happen in that step that are really, really critical. The first is that you're creating a kind of blueprint that helps you take that agile approach to the rest of your deployment. And the second, which I would say is probably even more important, is one of the keys two people told me years ago, and it's very, very true: successful data governance programs, while very strategic in nature, take quick tactical wins in order to show quick value, get quicker buy-in from the rest of the business, and show that growth to the rest of the business. The deployment approach we take, that first iterative pass, helps get that tactical win really quickly. It depends on the sources you're covering and the types of policies you're putting in place, but you're looking at six to twelve weeks from the beginning.

Now, project lifecycle, that's the interesting part of the question. I would say, and take OneTrust out of it, that a data governance project lifecycle is continuous, and it's continuous because organizations want to continuously improve. They're continuously using data in new ways, they're evolving their technologies, they're leveraging different types of technologies, which means their policies are changing. But the core important factor is that once you have that foundation layer established with OneTrust, helping you operationalize and automate privacy, and helping you from a governance perspective as well, that continuous improvement becomes a lot easier and a lot faster, because you've already created a baseline that you're controlling drift against.

Oh, we had a follow-up to that. Oh, just a thanks: that was a great answer to the question. I'll give y'all a minute to add any additional questions. Anything, Nick, you want to add while we give everyone a moment?

No, just thank you again to everybody in attendance today. I really appreciate the opportunity to speak with each and every one of you. We have a lot of awesome thought leadership and technology resources available on our website, and I would just ask that you reach out to us and take a look at those. And again, I wish each and every one of you the best of holidays and a great December.
Nick, thank you so much, and again, thanks to OneTrust, as always, for sponsoring today's webinar and providing another great session, and congratulations on the growth y'all are having; it's just incredible. And thanks to all the attendees for being engaged in everything we do. Again, as a reminder, I will send a follow-up email by end of day Monday with links to the slides and the recording for this webinar. And let me just echo Nick's sentiments and say happy holidays; I hope y'all have a great day. Thanks, everybody.