Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer of Dataversity. We'd like to thank you for joining this Dataversity webinar, Rethinking Trust and Data, sponsored by OneTrust. And I'm excited to report that I'm coming to you live from Sandy Agel at one of our brand-new in-person conferences. It's very exciting to be back in person. We have another OneTrust session going on at the same time, as I was showing Shane earlier. I just love it. It's a great day to talk about privacy, security, and governance, both digitally and in person.

Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A panel, or if you like to tweet, we encourage you to share your questions on Twitter using the hashtag #dataversity. And if you'd like to chat with us or with each other, we certainly encourage you to do so; you will find the icons for the Q&A and chat panels in the bottom middle of your screen. Just note that the Zoom chat defaults to sending only to the hosts and panelists, but you may absolutely change that to network with everyone. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar.

Now let me introduce our speaker for today, Shane Wiggins. Shane serves as a product line manager at OneTrust, the trust intelligence platform, unlocking every company's value and potential to thrive by doing what's good for people and the planet. OneTrust connects privacy, GRC, ethics, and ESG teams, data, and processes, so all companies can collaborate seamlessly and put trust at the center of their operations and culture. In this role, Shane supports OneTrust data governance, including the data discovery, data catalog, and AI governance product lines. And with that, I will turn it over to Shane to get today's webinar started.

Hello and welcome. Thank you, Shannon. Good afternoon, good morning, and good evening to all of you; appreciate you joining. So to kick us off: the explosive growth of data, and the value it creates, really demands that data professionals improve the privacy program in order to build, demonstrate, and maintain trust. The era of fine print, pre-ticked boxes, and data hoarding has passed, and strong collaboration among privacy, marketing, and ethics teams is required to drive trustworthy data-driven practices. So in today's session, we'll consider the prominent trends we're seeing in trusted data and how you can refine your program to account for these trends and ultimately build trust.

Shannon, I very much appreciate the introduction. I do want to thank you all for joining, and certainly want to thank the Dataversity team for hosting the session and for the planning that went into it. As Shannon mentioned, I am the product line manager at OneTrust for our data governance offering, which covers a couple of product lines: our data discovery engine, our data catalog, and our AI governance product lines. Before OneTrust, I spent some time with GE Centra and also in the IoT startup space. I hold a Bachelor of Science in Industrial and Systems Engineering from the University of Florida; Go Gators.
I am CIPP/E and CIPM certified, as well as a LEED Green Associate and Six Sigma Green Belt. With that supply chain background, I definitely tend to take a supply chain mindset to data and the overall flow of data. So let's get to it.

In today's session, we'll start by reviewing the latest industry trends, both macro and micro; trust isn't just in our name, it's within everything we do. We'll then consider four prominent areas to focus on when refining our programs: embedding privacy, managing preferences, sustainability, and the ethical use of artificial intelligence. And at the end, we'll hand it back over to Shannon, and hopefully we'll have some time for questions as well.

So let's kick off by considering the latest industry trends. Firstly, what's driving trust? I think there are three prominent drivers to consider. First is societal change, by which I mean pressure from individuals, groups, and society as a whole to respect privacy. Today, data subjects have high expectations about how their data is used and are concerned with organizations' missions and beliefs beyond just the products or services they provide. Your people, your employees, your investors: they want to trust you, and ultimately will only interact with you if they trust you as an organization.

Next is the omnipresent, constantly evolving regulatory landscape. Whether in the privacy realm, ethics, or security, these regulations all require organizations to be trusted in different areas; they also provide a tangible sense of urgency driving these initiatives and clarity on what needs to be built.

And then finally, falling in line on the product side, is the technology. The plethora of powerful tools now available to us continues to knock down barriers to entry every single day. Focusing on emerging technology, these tools really do require a new approach. In many cases the technology is outpacing the legislation, and this may require us to take responsibility and self-regulate at times. For our organizations there is considerable opportunity; however, technology can also be the greatest source of risk.

And that's where we see trust as a competitive advantage. It's not all negative or challenging: trust is starting to reap rewards for organizations that have invested in their programs, and effectiveness and transparency as part of a privacy program is a significant competitive advantage over your peers. Just consider the statistics being displayed: data subjects who trust a brand are more willing, more loyal, and prepared to pay more. This shouldn't be too surprising if you consider your own relationships, whether with your peers, your colleagues, your team, or even someone who services your vehicle. Where there is trust, you tend to share more, commit more, and spend more at the end of the day.

So what steps are you taking, and moreover, what steps should we take to re-establish or drive trust? And what's holding us back? Moving forward, there are going to be new elements we must incorporate, and we need to refine existing processes to account for the societal, regulatory, and technology changes we see happening day in and day out.
This doesn't necessarily mean the old processes your business has undertaken are now invalid; it just means they need to be re-evaluated in light of the rapidly changing landscape we live in today. As an example, encouraging data stewardship in your organization to improve data is still going to be fundamentally important; however, the practices for improving data will continue to evolve.

So we'll take a look at these four emerging areas of focus in a bit more detail. On the privacy side, we'll consider embedding privacy into our operations. On governance, we'll consider how to balance data subjects' choice with delivering strong business value. From there, we'll consider two emerging areas we are still learning quite a bit about. The first is sustainability and how data and privacy directly impact our environmental programs. This is particularly interesting because we mostly see technology as an environmental enabler, with things like paper and travel reduction, but there's another side where data has a negative environmental impact, and we'll get into that as well. And finally, ethics and the ethical application of artificial intelligence. This is one area where a couple of our teams are heavily focused, given its evolution over the past three to five years, looking at how we can use new tools effectively and responsibly. So a lot to get into today; very excited about it. We'll go ahead and kick off with embedding privacy.

So what do we mean by this, and what's the value? At a high level, we mean building privacy into our business-as-usual processes, and also trying to switch our mindset from privacy as a barrier to privacy as an enabler. As many of us know, the main driver behind the need to embed privacy is the seemingly relentless trend of privacy regulations. There are three things to consider in this space: the current regulations, the emerging regulations, and the evolution of both. Privacy never stands still, and if we want to be compliant, neither can we. It tends to be said that if you're merely compliant, you're not moving fast enough.

When we speak to new hires at OneTrust about the privacy industry and get them onboarded into the domain, one of the things I love to highlight, which makes it both exciting and challenging, is that it's constantly changing. And that's one of the greatest things about the privacy space; it's not all bad or challenging news. More and more, as baselines and other privacy legislations are established, there are variations and nuances, and we're not necessarily reinventing the wheel every time. But it is important to evaluate how you adopt a new regulation or, more importantly, adapt to changes in an existing one.

Gartner has noted that by the end of 2023, about 18 months from today, 75% of the world's population will have their data protected by some form of privacy law. Just think about that for a second: that's approximately 6 billion people whose data will be protected, and about half of those 6 billion live in India and China. This gives you an idea of the scale of governance that's going to be needed.
If your organization works with personal data in any of these markets, it's important to be ready to adapt and support the regulations we see coming forward. Continuing on that topic, GDPR seems to be the international baseline, and for this reason there is considerable overlap between regulations. So what we really need to focus on are the nuances between them. In this example, we see the differences in retention and data minimization requirements between the GDPR and the CCPA; in this case, the GDPR is stricter on both counts. Organizations can be strategic in their approach: you may choose to be fully compliant with the common elements of several standards and tolerate some of the outliers, or you may choose to become compliant with the legislation of a market you wish to enter, which shows respect to that specific region and the consumers who reside within it.

So where is the opportunity, if all we are seeing is an expanding regulatory landscape? The opportunity is to leverage the privacy requirements to build loyalty and, at the end of the day, demonstrate trust. These are things we have to do anyway, so why not turn them into an advantage? A good analogy is environmental features in cars. Many are compulsory, but rather than merely comply, the manufacturers and OEMs market them; they're out there pitching how clean and recyclable their vehicles are, ultimately turning a requirement into a broader opportunity. The change is to drive privacy programs from being focused on compliance to being focused on enabling the business and taking it a step further.

So how do we start to address these challenges? First, we need to ensure we are working with real-time information. Privacy teams need a complete and current view; manual methods and legacy, stale data are just not acceptable today. Second, privacy teams need data to be intelligently identified in context. Context is so important in the use of data, and with it, any risks or violations can be flagged automatically by the system; automation will hopefully continue to drive that. Third, we need to take that context and use it to carry out certain actions, whether that's reporting a privacy incident or fulfilling a data subject access request. And finally, privacy must be embedded into the data operations: mitigating a risk or applying a control should be done automatically, not left to an individual to do manually. (A minimal sketch of this kind of automated check follows at the end of this section.)

Building on the concept of return on investment: the blue line in this graphic represents the top ten most trusted companies, and we can see that over the course of 2020 these organizations truly outperformed their peers with regard to shareholder returns. This is a real example of trust driving value for data subjects and the business, and it indicates that data subjects are prepared to pay a premium for trust and are more willing to share their data with a trusted organization. And of course, with trust comes loyalty: they may also advocate for their trusted brands to their peers, their families, their network, and so on.
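To make those four steps concrete, here is the minimal sketch referenced above of an automated, embedded privacy check: data is identified with context (a category and a purpose), and retention violations are flagged by the system rather than left to an individual. The record layout, the retention limits, and the flag_violations helper are all illustrative assumptions, not any particular product's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative retention limits per data category, in days.
RETENTION_LIMITS = {"email": 365, "purchase_history": 730}

@dataclass
class Record:
    category: str       # what the data is, e.g. "email"
    purpose: str        # context: why the data is held
    collected_on: date

def flag_violations(records: list[Record], today: date) -> list[str]:
    """Flag records held past their retention limit automatically,
    rather than leaving the check to an individual."""
    findings = []
    for r in records:
        limit = RETENTION_LIMITS.get(r.category)
        if limit and today - r.collected_on > timedelta(days=limit):
            findings.append(
                f"{r.category} held for '{r.purpose}' exceeds the "
                f"{limit}-day limit; schedule deletion or archival"
            )
    return findings

print(flag_violations(
    [Record("email", "newsletter", date(2020, 1, 1))], date(2022, 6, 1)))
```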
One last point before we shift into how organizations are taking the next step with their privacy programs: look at how trust fundamentally changes commercial outcomes. This is why we're seeing organizations begin to make more significant investments, not only in what we refer to as record keeping (your records of processing, your Article 30 data map) but in trust foundationally, as an organization. So I really appreciate this slide and the value it represents going forward.

Now let's focus on balancing choice with value. How can we deliver value to our data subjects while respecting their choices, and walk the fine line between personalization and intrusion? Let's start by considering how personalization is perceived from the data subject's perspective. These perspectives were captured by Forrester, and each takes a unique angle. We see some familiar trends here: advertisements following you around the internet, and the targeting algorithm making assumptions and then targeting you with incorrect or insensitive ads. This leads to individual concerns over the security of the data organizations hold, and crosses into the demand for control over personalization. These experiences all affect the data subject's trust in the brands they engage with. If a brand virtually stalks you, makes incorrect assumptions, or presents totally inappropriate or insensitive ads, that will significantly damage the trust relationship; trust takes a lot of time and effort to build but can be lost in an instant.

This personalization is, of course, based on the collection of personal data. And where are we collecting personal data? Pretty much everywhere. This is an escalating problem, because more and more of this data is protected by legislation. So, holistically, we need to refine our approach to the collection, use, storage, and ultimately retention of the data. We also need to be mindful of whether any categories of data we're capturing are sensitive or biometric, as these require special care. And at any point in time, we must be able to present, rectify, or delete that data should the data subject demand it. As you can see, managing this web of data is certainly a challenge.

So that's the bad news: we have angry, upset data subjects and a data management lion we ultimately have to tame. But flipping that on its head, where are the positives? One is the emerging shift to zero-party data. Zero-party data is captured directly from the individual data subject, not indirectly through tools such as cookies, and it usually refers to communication preferences, such as the mailing lists the data subject wants to be on. So a lot of control ultimately shifts from the enterprise to the individual consumer. When considering zero-party data, the marketing publication Adweek noted, as quoted here, that data that comes from customers themselves is almost by definition the most valuable tool you have, and you don't have to pay a social media company to get it. They go on to note that smart marketers should turn to first-party data first.
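As a concrete illustration, here is a minimal sketch of what a zero-party preference record might look like once captured through a preference center. The field names and structure are illustrative assumptions, not any particular platform's schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ZeroPartyPreferences:
    """Preferences stated directly by the data subject,
    not inferred from cookies or tracking."""
    subject_id: str
    mailing_lists: list[str]        # lists the subject asked to join
    share_with_third_parties: bool  # an explicit choice, never a default
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

prefs = ZeroPartyPreferences(
    subject_id="subj-001",
    mailing_lists=["new-arrivals"],
    share_with_third_parties=False,
)
print(prefs)
```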
In fact, in some respects the data is free, other than the setup costs of your banners and preference centers. And this is where we start to see a return on investment, and how privacy can lead to profitability: you invest in some technology, and in return you get valuable data you previously would have had to pay for. This is certainly a new concept and sometimes hard to digest. It's like asking your wife or partner what she would like for her birthday, or at least laying out some options, rather than guessing and getting it horribly wrong, which I'm sure no one has ever done.

Building on the concept of zero-party data, let's consider an example. Here we see the Adidas website. When you navigate to the site, it immediately requests zero-party data from the user in exchange for a 20% discount. It's a trade, and potentially a win-win. Are you getting better quality data? Probably. And the data subject is getting something in return. A wonderful example to take and run with.

So the data subject has provided us with zero-party data; where do we go from here? Two things need to happen next: first, collecting the data, and then managing the data you've collected on an ongoing basis. On managing that data, we have the shift-left concept. By that, I mean classifying and rationalizing data at inception, rather than accumulating large amounts of data and then trying to figure out what to do with it. In this image, the explorer is climbing the data mountain, and the task becomes more complex and demanding the higher he climbs. It's the same with managing a growing amount of data. If we classify and rationalize our data at the bottom of the mountain, as opposed to the peak, we have far more opportunities: we can reduce the amount of personal information collected, and thus reduce the risk it presents. We also obtain more value from the data, because early on we know exactly what it is, why we have it, and what legislation it is subject to. Your business may have data that sits partway up the mountain, or you may be starting your journey at the bottom; either way, it's important to recognize the challenge and implement solutions such as data discovery to help your broader enterprise and your data teams tackle it.

So let's look at a practical example. Assume we've captured the data subject's consent: we have their preferences, and they've consented to their data being shared with third-party marketing partners. Because we've got the house in order and captured and rationalized this data when it was collected, we know exactly where it is stored. There comes a point when the data subject changes their mind. They return to our website, navigate to the preference center, and withdraw their consent for data to be shared with third-party marketing partners. What happens next? The first step is for the user's consent record to be updated: the option to share with third-party marketing partners is now marked as withdrawn. From there, the system initiates the required downstream actions.
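Sketched below, under illustrative names rather than any specific vendor's API, is what those downstream actions might look like: the consent record is updated, and the change fans out to each connected system that processes data for that purpose.

```python
from datetime import datetime, timezone

# Consent ledger keyed by (subject, purpose); values: "granted"/"withdrawn".
consent_ledger = {("subj-001", "third_party_marketing"): "granted"}

# Downstream systems processing data under each purpose (illustrative).
downstream = {"third_party_marketing": ["crm", "ad_platform", "partner_feed"]}

def withdraw(subject_id: str, purpose: str) -> None:
    """Update the consent record, then propagate the change downstream."""
    consent_ledger[(subject_id, purpose)] = "withdrawn"
    stamp = datetime.now(timezone.utc).isoformat()
    for system in downstream[purpose]:
        # A real deployment would call each system's own API here;
        # this sketch just logs the action it would take.
        print(f"[{stamp}] {system}: stop sharing data for {subject_id}")

withdraw("subj-001", "third_party_marketing")
```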
In practice, the system connects to each system where the data is processed and propagates the consent change needed to fulfill the request. The data that was shared with the third party is no longer shared, and consequently that organization will no longer be privy to it. Job done, and this can happen instantaneously. Equally, if the data subject changes their mind again in a few weeks' time, the change can be reversed. By having such a system in place, we are able to meet our privacy obligations and build trust with the data subject. And we can start to build trust with our third-party marketing partners, because they can be confident that the data they are privy to is data they are actually permitted to use, reducing the risk on their side as well.

To conclude this topic, let's look at a few statistics from recent reports produced by Adobe. 81% of data subjects noted that having choices about how companies use their data is important to them. 69% advised that they would stop buying if companies used their data without permission. 72% noted that relevant content delivered at the right time boosted their trust. And finally, 84% expressed that keeping data safe and providing transparency and controls helps to regain lost trust.

Next, let's consider sustainability: how do we achieve sustainable data practices? Of the different topics to cover, this area is very interesting. There's a lot still being worked out as far as what counts as responsible data usage; we're familiar with the privacy side of things and hopefully continue to get educated, but on the sustainability side there's still a lot more to consider. When we hear about the digital era and digitalization overall, the move from manual processes to digital forms to automation, we tend to see it as good news. We've gone from file shares to SharePoint to Office 365, and so on, and each change was a drive toward efficiency that took advantage of the economies of scale that came with it. Our data was backed up, it was more secure, and we could work collaboratively anytime, anywhere. So far, so positive. Yet few organizations consider the environmental consequences, the primary one being energy consumption. Data centers use a lot of energy, and more data and more services mean more energy.

Consider this graph, with energy use on the y-axis and time on the x-axis, and three lines: best case, expected, and worst case. What's concerning is that all of these scenarios demonstrate exponential growth, just over different timelines. In the expected scenario, between now and 2030 energy demand increases by three to four times. Data centers are currently responsible for about two percent of the world's greenhouse gas emissions, and this number is very likely to continue to grow as the demand for digital services grows. Businesses that work with large amounts of data or provide software services will use data centers, and now is the ideal time to take a snapshot and start to consider more sustainable data practices going forward. So what steps can we take to reduce the impact?
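As the next part explains, the first step is capturing a baseline. The sketch below is a deliberately crude, back-of-the-envelope version for stored data; the emission factor and dataset sizes are invented for illustration, so substitute your own provider's published figures.

```python
# Back-of-the-envelope storage footprint. The emission factor is an
# assumed placeholder; real figures vary widely by data center.
KG_CO2E_PER_TB_YEAR = 10.0

datasets_tb = {                          # terabytes held, by rough category
    "active": 40.0,
    "backups": 120.0,
    "redundant_obsolete_trivial": 90.0,  # the "ROT" data discussed below
}

total_tb = sum(datasets_tb.values())
footprint = total_tb * KG_CO2E_PER_TB_YEAR
rot_share = datasets_tb["redundant_obsolete_trivial"] / total_tb

print(f"baseline: {footprint:,.0f} kg CO2e/year across {total_tb:.0f} TB")
print(f"deleting ROT data alone removes {rot_share:.0%} of that baseline")
```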
As in any environmental audit, whether it's reducing the impact of air travel or heating a workspace, we start by capturing a baseline. We use that baseline to determine the carbon footprint, and at that point we start to consider ways to reduce it. Where the footprint can't be reduced, we can identify opportunities to offset it. The same approach applies to data: look for areas where we can reduce the overall environmental impact, and offset where necessary. And when you look at data, particularly redundant, obsolete, and trivial (ROT) data, the same type of approach applies there as well.

Are there simpler ways to reduce the overall environmental impact? One of the simplest is to reduce the data itself. I'll say that again, because it's not always the most popular idea out there: less data means less storage and less environmental impact. And the gains aren't just environmental; there are financial and risk gains tied to it. With less data, you reduce storage costs and your overall exposure to risk. This starts with discovery and an audit: why do we have this data, and how long do we intend to keep it? How many of us have paid for storage, only to realize it was a waste of money, or that the cost of the storage quickly surpassed the value of the data itself? We must not be blind to the data footprint. If we're storing data, we need to be able to justify why; if we can't, we need to consider archiving it, reducing its footprint, or deleting it altogether.

When considering carbon offsetting, we also need to be aware of so-called greenwashing: marketing sustainability while not actually practicing it. If you're looking to offset your carbon production, always look for an organization accredited to one of these standards. That way, you can be confident your carbon credits are being invested responsibly and trustworthily.

All right. To conclude, let's look at a really hot topic in the privacy space: AI and the ethical implications of its use. To start, let's consider one of the emerging concerns: bias. Artificial intelligence is a learning technology, and thus it is very much influenced by the data sets it analyzes; the output is likewise influenced by the strength of the algorithm it was designed to use. We all remember: bad data in equals bad data out. This slide highlights some examples of bias resulting from AI analyzing certain data. Especially in the context of AI, I strongly feel the data work tends to be much more underappreciated than the model work itself. I'm a firm believer that the devil is really in the data, and getting it right is fundamental to establishing a trustworthy, responsible AI program.

Some points to highlight here are gender bias and racial bias. You may be thinking: how can a machine be sexist or racist? It's just an algorithm. Well, either there's a flaw in the algorithm, or that's a behavior learned over time. What's more concerning is that artificial intelligence can be manipulated: if you flood the system with enough biased data, the system may start to believe it's true.
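Here is a toy sketch of that flooding effect, using a deliberately trivial "model" (a running mean of survey scores) to show how sheer volume, not truth, can move what a system learns. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Genuine signal: 10,000 honest survey scores centered near 6 out of 10.
honest = rng.normal(6.0, 1.0, size=10_000)
print(f"learned score from honest data: {honest.mean():.2f}")

# Flood the system with 100,000 coordinated low scores.
flood = np.full(100_000, 1.0)
poisoned = np.concatenate([honest, flood])
print(f"learned score after flooding:  {poisoned.mean():.2f}")
# The "model" now believes the flooded value: volume, not truth,
# moved the result from roughly 6.0 down to roughly 1.5.
```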
Scaling that up: imagine an AI system learning from survey results; if we submit 100,000 coordinated surveys, then depending on the sample size, that could very well manipulate the results. And there's a concerning statistic from Gartner that through 2030, 85% of all AI projects will provide false results for the reasons we've just described. No one wants AI to disappear; we all want it to continue to improve our day-to-day, and there's so much potential in it. The positive news is that we can be aware of this, and with early awareness we can recognize it and start to address it. Specific to AI and how the overall flow decomposes, the good thing is that, for the most part, the dataset a model is trained on can be modified, and with enough effort invested, datasets can become largely unbiased.

So our artificial intelligence is potentially racist and sexist; we're off to a pretty tough start. What can we do to address that? Given the pace of AI, self-regulation is one option: making sure we do our due diligence and properly audit our data. There are also data protection frameworks that focus on this space. On this slide, we see the European Union's trustworthy AI guidelines and the ICO's AI framework. Both define an approach to using artificial intelligence and account for assessing impact and managing risk. The graphics on this slide show our evaluation of each standard, based on how our legal experts analyzed the scope of each framework across six parameters. The European standard scored high in all areas, and the ICO standard also did very well.

Three parameters I'd like to discuss in more detail are impact assessments, risk management, and explaining decisions. For impact assessments, we investigated whether the framework recommended privacy impact assessments or similar, and whether it established when they should be carried out, which is positive. For risk management, we investigated the risk management processes, such as when and how to address certain risks. And for explaining decisions, we want the framework to cover challenges to automated decision-making, which is sometimes very difficult to explain. If you're on the receiving end of a decision, you want to ensure you can get the transparency you deserve as to why that decision was made. One example we use is visiting a healthcare provider: when a diagnosis is delivered, you're able to ask questions and have a dialogue with the provider to understand how that decision came about. The same thing has to be in place for any type of AI system. Implementing a good standard, such as the EU's or the ICO's (or one of several more that are growing in impact), and doing your own analysis will go a long way toward ensuring AI delivers good results in a trustworthy way that isn't unduly influenced by bias.

Beyond frameworks, AI legislation is coming; it's only a matter of time. In Europe, we can certainly expect to see local laws and potentially even an EU-wide law, and I think the same is going to happen here in the United States.
Looking at what's in the pipeline: from the EU, there's already a draft regulation, and the United Kingdom has proposed changes to its data regime. The UK is going its own way in some respects but still holds the same privacy principles as Europe. Considering the Americas, the USA is also introducing legislation, and we're starting to see enforcement of it too. As of January 2023, New York City will ban the use of automated decision-making tools to screen employees unless those tools have undergone a bias audit. So it is here, it is coming, and it is growing in scope.

So finally, how do we take our privacy principles and relate them to the responsible use of AI? We really see data privacy and AI as intertwined. Considering purpose limitation: data collected for one purpose may not be used for another purpose without obtaining additional user consent. In one instance, a lender introduced bias into a loan approval algorithm because there was an inherent bias in using individuals' postcodes in the calculations. Considering transparency: organizations must demonstrate that the data selected to build their models was evaluated for harm and that bias was mitigated to the degree possible. And considering risk assessments: embedding risk assessments into the product lifecycle of an AI project can be used to assess risks and document appropriate actions.

One thing that is unique, that we're still trying to get our heads around, and that I think is worth thinking about as you start to deploy AI, is the data deletion conundrum. Through the lens of AI, exact data deletion effectively means retraining the model from scratch, and doing that requires taking the algorithm offline for retraining, which costs real money and real time. Furthermore, even after an organization deletes the data associated with a given individual, information about that individual may persist in predictions made by the machine learning models trained on the deleted data. So while the first plausible solution is exact deletion, really you're trying to get to a point where you can reproduce the model as if those deleted points had been omitted from the training set to begin with. There are a lot of great studies out there; Stanford and a few other universities are looking into this, and there's a new metric they're evaluating for data removal from models called the feature injection test, or FIT. Not to get too technical, but what this metric captures is how well we can remove the model's knowledge of a sensitive, highly predictive feature present in the data. (A toy sketch of the idea follows at the end of this section.) So there's a ton of great work being done out there, and I'm excited about how we continue to embed privacy into the responsible use of artificial intelligence.

OneTrust is an organization that has tried to pioneer the trust software platform over time, unifying and operationalizing privacy, governance, ethics, and the environment. We deliver to a portfolio of over 12,000 clients across all verticals. We have about 3,000 employees, around 40% of whom work in R&D; innovation and investment are certainly critical in this market. And we're very fortunate to have the broader OneTrust community and ecosystem, with around 20,000 members continuing to collaborate and drive the future of the market every single day.
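Before wrapping up, here is the toy sketch promised above for the feature-injection idea. It assumes scikit-learn is available, implements "exact deletion" as retraining from scratch, and simplifies heavily; it is an illustration of the concept, not the published FIT protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1,000 records, 5 ordinary noise features, random binary labels.
X = rng.normal(size=(1_000, 5))
y = rng.integers(0, 2, size=1_000)

# Inject a synthetic, highly predictive feature for 100 records slated
# for deletion: there it encodes the label exactly; elsewhere it is noise.
to_delete = np.zeros(1_000, dtype=bool)
to_delete[rng.choice(1_000, size=100, replace=False)] = True
injected = rng.normal(size=1_000)
injected[to_delete] = y[to_delete] * 2.0 - 1.0
X_full = np.column_stack([X, injected])

# Before deletion, the model picks up on the injected feature.
before = LogisticRegression().fit(X_full, y)
print(f"injected-feature weight before deletion: {before.coef_[0, -1]:.2f}")

# "Exact deletion": retrain from scratch without the deleted records.
keep = ~to_delete
after = LogisticRegression().fit(X_full[keep], y[keep])
print(f"injected-feature weight after deletion:  {after.coef_[0, -1]:.2f}")
# A weight near zero after retraining suggests the model's knowledge of
# the deleted records' predictive feature really was removed.
```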
So when considering OneTrust: we deliver trust as a unified cloud application across the four pillars we discussed today: privacy, security assurance, ethics, and sustainability. And with OneTrust, we leverage the data as much as we can to help you, as an organization, maintain compliance, and also to utilize the data to benefit your organization. With that, we'll go ahead and wrap up. I very much appreciate y'all's attention, and I'll hand it back over to Shannon for the next steps in the session.

Shane, thank you so much for this great presentation. There have been a lot of great comments in the chat throughout, and a lot of thought-provoking ideas and information. If you have questions for Shane, feel free to submit them in the Q&A portion of your screen. And just a reminder to everyone, to answer the most common question: I will send a follow-up email for this webinar by end of day Thursday, with links to the slides and links to the recording.

So, diving in here, Shane: is CCPA interchangeable with CPRA?

Good question. You could look at it that way to an extent, but the way you should really look at it is that the CPRA replaces and amends several parts of the existing CCPA. It was passed in late 2020, and the new act comes into effect here in 2023. That's how to look at the difference.

Lots of new privacy policies coming into effect, as you were saying. So, Shane: what processes and techniques should a company that is developing or licensing an AI technology follow to ensure it is behaving in an ethical and trustworthy manner?

Yes, good question. First and foremost, for an enterprise that's exploring the use of artificial intelligence or has already deployed AI applications, the first step is creating an AI registry. This enables the enterprise to provide a 360-degree view for all stakeholders of where AI is used. It's often very difficult to track across the enterprise: different departments leverage AI in different forms, and AI can be sourced from different vendors or even developed in-house. We're seeing AI use become widespread in many institutions, and it's decentralized across the enterprise, which makes it difficult for risk managers to track. So that would be the first thing.

After that is understanding the data. You've got to be able to ask about the data; you have to question the data. With that, start to focus on the data generation process and the selection of, and control over, data sources and inputs.

It's hard for me to show with my hands, but there's also this concept of an ethical matrix, and it's a really good foundational baseline to start with. It helps you understand who the stakeholders are and gets those stakeholders involved in constructing the matrix. How it works is that the rows of the matrix are the stakeholders and the columns are specific concerns. You go through each cell of the matrix and try to decide: are these stakeholders at high risk for this concern, and how wrong could it potentially go? Once you have that defined, you can look at all the concerns they might have, whether fairness, transparency, false positives, or false negatives.
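To make that concrete, here is a minimal sketch of such a matrix in code. The stakeholders, concerns, and risk ratings are all illustrative assumptions you would replace with what your own stakeholders tell you.

```python
# Ethical matrix sketch: rows are stakeholders, columns are concerns,
# cells hold a rough risk rating agreed on with those stakeholders.
concerns = ["fairness", "transparency", "false_positives", "false_negatives"]

matrix = {
    "customer":       {"fairness": "high",   "transparency": "high",
                       "false_positives": "medium", "false_negatives": "high"},
    "data_scientist": {"fairness": "medium", "transparency": "medium",
                       "false_positives": "low",    "false_negatives": "low"},
    "c_suite":        {"fairness": "medium", "transparency": "low",
                       "false_positives": "high",   "false_negatives": "medium"},
}

# Walk each cell and surface the pairings that need attention first.
for stakeholder, row in matrix.items():
    for concern in concerns:
        if row[concern] == "high":
            print(f"high risk: {stakeholder} / {concern}")
```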
Considering these holistically, you can then start to balance the concerns as well. So those are a couple of ways, if you're getting going, to establish the processes and techniques for behaving in that trustworthy manner.

I love it. And along this line, Shane, you alluded to organizations taking on responsibilities, and you're dealing with clients asking for your expertise. Do you find corporations actually are interested in listening to or discussing social responsibilities, like reducing bias?

I'd say it's certainly growing in scope. One key part of this is that AI, and the use of AI, is increasingly being driven from the leadership side of the house: the ask comes down to engineering as "we need to automate XYZ" or "we need to inject AI into certain customer service processes" or "how can we leverage AI to help with this customer experience?" With that type of approach, you certainly need to bring leadership along on the journey and help them understand that the impacts are really significant. The more that awareness grows, the better off the enterprise will be, because with AI it's very easy to take the high road and look at everything it can do; it takes a really good leadership team to step back and understand what it can't do as well.

Indeed, data ethics is becoming more and more popular as a keyword. When you talked about the stakeholder-concern grid, what is an example of a stakeholder?

Yes, great question, Lori. It could vary. For example, one could be the C-suite. Another stakeholder could be the marketing team, or the data science team, or the specific customer, or the data engineer responsible for the data set itself. So those are a few examples, and they vary by position, but they are all impacted in some form or fashion. By building out this matrix, you actually get exposure to stakeholders you may not have thought would be impacted but actually are at the end of the day.

Sorry, I was talking into the mute button there; the common issue of these days. So, Shane, why do you think there's a need to regulate AI?

Yeah, I see it a couple of ways. Just a few years ago, data analysis took a specialized team, advanced computing, and lots of data. Today, there are really good platforms and open source technology, which make AI easier to adopt, and on top of that, there are big data sets that just didn't exist a few years ago. The advent of affordable, prevalent, high-compute hardware, coupled with parallel processing and emerging platforms bringing cloud-based infrastructure, has made it much easier to deploy AI solutions with good enough performance for real-world applications. Look at Azure, AWS, or GCP: if you have a subscription, you can get up and running very quickly. What used to be run in specialized labs with access to supercomputers can now be deployed to the cloud at a fraction of the cost and much more easily. The digital era and the transformation have certainly helped drive that.
Microsoft's CEO, during the pandemic, looked at what was going on and said that roughly two months of the pandemic were equivalent to about two years' worth of digital transformation. So these past couple of years have been really significant, and AI has played a big part. And as I mentioned a little earlier, there's pressure to embed, optimize, and deploy proofs of concept; a lot of pressure is coming from the business as well. This accelerated AI development and adoption has not, I believe, been matched by the same surge in education and awareness of its risks, which is helping to drive a lot of the regulation we're seeing come out.

Perfect. And we got a couple of additional questions tied to the discussion earlier. To what degree do you think organizations make the effort to assess unconscious bias, internally with staff and externally with customers and partners? Any thoughts on exercises that could be conducted, even subliminally?

Andrew, I think it can happen in a couple of ways, and that's where involvement from multiple parties is really important: not only do you get different viewpoints, but understanding can be level-set across those individuals. Even internally at OneTrust, I always ask how we can get the data science team better exposure to the operational context of the model they're building. So it's important to keep a diverse team involved in your overall AI governance program and to continue opening up the set of stakeholders included in those discussions. To date, with the whole concept of biased algorithms and biased data, we're seeing more and more examples happen in real time, in production and out in the field, and I'm hoping this drives organizations to make the additional effort to evaluate these things as early and as often as possible.

I love it. So I think we have time for at least one more question. Would you suggest implementing AI ethics within the company from the top down? How would you implement a program like that?

I think it depends on your organization. Ideally, you'd have it coming from the top down, but also have it established fundamentally at the data science and data engineering level. When we look at certain frameworks out there, there's a component that needs to be captured and covered by, say, the compliance side of the house or the leadership level, but there's also a level of technical detail that's very important in the scope of AI ethics and requires more of a bottom-up approach. When you look at the people closest to the specialty itself, the data scientists you're working with and the data engineers responsible for the data feeding the models, it's critical that they're involved. What we've seen is that a lot of organizations will try to use a second or third line of defense to answer questions, get the documentation in place, and gather the metrics to quantify and qualify themselves as an AI-governed institution, but what happens is you end up having to go back to that first line anyway.
So while top-down is certainly very important for the enterprise overall, it's also very important to get that bottom-up approach, because of the expertise and knowledge those teams have, day in and day out, about the components being used for AI.

Shane, the subtext of that question, and a question we get so often about data governance in general (and Kelsey, you read my mind in the chat there): any recommendations on how to get buy-in from stakeholders in leadership who see governance as not needed, or who have already washed their hands of it because it takes time?

Sure. I think it's about looking at real-world examples and how one negative incident out in the field could have a significant impact on your enterprise and the brand itself. All it takes is one faulty mistake. It's very easy to deploy, but when an AI model deployment goes wrong, as in that example where you have to take the model offline, the ramifications are extremely significant. Those are important things to surface very early in the process, because they're going to affect the bottom line in more than one way.

Well, Shane, thank you so much. This has been a fantastic presentation, very much appreciated as always, and thanks to OneTrust for sponsoring today's webinar. That is all the questions we have for today, and all the time. Again, just a reminder: I will send a follow-up email to all registrants by end of day Thursday, with links to the slides and links to the recording. Shane, again, thank you so much. Thanks, everybody, for your time today and for being so engaged. Have a great day.