Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Officer of DataVersity. We'd like to thank you for joining this DataVersity webinar, Better Data Governance for Responsible AI, sponsored today by Google Cloud. Just a couple of points to get us started. Due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we'll be collecting them via the Q&A panel. And if you'd like to chat with us or with each other, we certainly encourage you to do so. Just a note: Zoom defaults the chat to send to just the panelists, but you may absolutely change that to network with everyone. To find the Q&A or the chat panels, look for those icons in the bottom middle of your screen. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and any additional information requested throughout the webinar.

Now let me introduce our speaker for today, Shuba Law. Shuba is a Senior Program Manager and Data Governance Lead for AI Machine Learning at Google Cloud. She is an established leader, experienced in driving large-scale, cross-functional risk and compliance initiatives, including Data Governance for Google Cloud AI Machine Learning and Alphabet-wide programs for regulatory compliance, standards compliance, risk and controls, and internal audit. She has a background in software engineering and a passion for accelerating business through compliance and risk-aware decision-making. And with that, I'll give the floor to Shuba to get today's webinar started.

Hello and welcome. Thank you so much, Shannon. I'm really excited to talk about some of the offerings that Google has in the areas of data governance and responsible AI, our approach to responsible AI and data governance, and to share some of the learnings that we've collected based on our experience with data governance and responsible AI. So without further ado, let's dive right in.

I'm going to start with some questions that we have been hearing our customers ask us regarding AI data use, especially for generative AI in recent times, but also data generally as it's relevant to AI. Some concerns that we have heard from our customers run along these lines: well, data is my differentiator, and certainly that's true, so how do I protect my intellectual property when I am using data to customize foundational models, or even in other situations where I am using data in the context of AI, even if it's not specifically for generative AI purposes, which is where foundational models tend to play a role? Customers are also very concerned about ensuring that their data is secure and compliant, especially as they are using data to train models for AI and other purposes, and, really, who has access to my data? That's an important question that we hear as well. Then, of course, we hear questions around cost effectiveness, not necessarily related to data, but still a concern that our customers express to us. And, of course, questions around how do I ensure that in the space of AI ML, we are actually mitigating harms. I wanted to start with these questions because we take a very holistic approach to data governance here at Google.
The first thing that I want to point out is that data governance and responsible AI are both pillars of a larger enterprise readiness offering that we have here at Google. Enterprise readiness is actually a really broad topic, one that we've uncovered through our conversations with customers like you. It covers many topics, and we've boiled it down to four pillars: data governance and privacy, security and compliance, which is a big one, reliability and sustainability, and safety and responsibility. In today's talk, I'll be taking us through two of these pillars: the data governance and privacy pillar, and the safety and responsibility pillar.

Before getting into that, I want to start with a little bit more about enterprise readiness and why it's important. We take it very seriously here at Google, and it's really at the core of our approach to generative AI and AI in general. So, you might ask, why should I care about enterprise readiness? Why is it important? The key thing to remember is that at Google, through our efforts in the area of enterprise readiness, we want to ensure that we are protecting your data, as in customer data; that your data is only used in ways that you intend for it to be used, especially in the case of AI ML development; and that the data we get from our customers is secure and our applications are safe from threats. Generally speaking, enterprise readiness is critical to helping our customers build high quality, secure machine learning applications.

With that, let me dive into the area of data governance and privacy as it pertains to AI ML, starting with how we enable our customers with their data governance and privacy efforts. The first and most important thing to remember about this offering is that we really want to make sure our customers know that their data is their data. So once again: your data is your data. In the context of generative AI, your data includes input prompts, model output, and training data, and we believe that all of these are part of your data and your IP. We take lots of measures to ensure that we do not use our customers' data to train our own models, and that we process customer data in accordance with their instructions. And we go a step beyond: we were the first in the industry to publish the AI ML privacy commitment, which outlines our belief that our customers, meaning you, should have the highest level of security and control over your data and how it's used for AI ML.

Now, I would be remiss if I didn't talk about privacy and security in the context of data governance. We have instituted lots and lots of best practices around privacy to ensure that we are in compliance with regulations like the GDPR, and we have extensive controls to protect customer data from other customers, users, attackers, and any kind of unauthorized access by Google employees when that data is being used for AI ML development. So let's take a closer look at how data, your data especially, is used for generative AI.
The first question that I get from customers is: what happens to the prompts that I send? Prompts are the inputs that are provided to the generative AI, from which a response is generated. That's typically how the prompts are used, and both the input and the output are always encrypted in transit.

Next, I often hear from customers the question of how do I customize my models when I'm using AI, and how does my data get used in the process of customizing the models. The answer is yes, our customers are enabled to customize their generative AI applications through model tuning. When you do model tuning, you actually train an adapter model. An adapter model is a model that works alongside the foundation model. It is trained on the customer's data, but importantly, it is only accessible to the specific customer who trained it. Adapter models are fully controlled by the customer, including who can access them and when to delete them. Not only do you control your data used for adapter model training, but your input data is secured at every step of the way as well. It's also important to note that we do not use customer data, logs, or any of that additional information to train our foundation models by default.
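To make the adapter idea more tangible, here is a conceptual sketch in Python. To be clear, this is not Google's implementation: it is a generic low-rank adapter pattern, and every name, dimension, and number is invented for illustration. What it shows is the property described above: the foundation model's weights stay frozen and shared, while the small adapter is the only trained piece, and the only piece the customer owns.

```python
# Conceptual sketch of the adapter pattern (not Google's implementation):
# a frozen, shared foundation model plus a small customer-owned adapter.
import numpy as np

rng = np.random.default_rng(0)

class FrozenFoundationModel:
    """Stand-in for the shared foundation model; its weights never change."""
    def __init__(self, dim: int):
        self.w = rng.normal(size=(dim, dim))

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x @ self.w

class Adapter:
    """Customer-owned low-rank adapter: the only part that gets trained."""
    def __init__(self, dim: int, rank: int = 4):
        self.a = rng.normal(scale=0.01, size=(dim, rank))
        self.b = np.zeros((rank, dim))

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x @ self.a @ self.b

def tuned_forward(base: FrozenFoundationModel, adapter: Adapter, x: np.ndarray):
    # The served output combines the frozen base model with the adapter;
    # deleting the adapter removes all trace of the customer's tuning.
    return base.forward(x) + adapter.forward(x)

base = FrozenFoundationModel(dim=8)
adapter = Adapter(dim=8)          # trained only on this customer's data
x = rng.normal(size=(1, 8))
print(tuned_forward(base, adapter, x).shape)  # (1, 8)
```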
Moving on, I'm going to quickly touch upon effective data governance and our approach to it. Effective data governance and privacy requires a multifaceted approach that starts with a clear set of principles, plus privacy-first design, tools, and processes to ensure adherence to those principles. What we've done is develop policies internally regarding the use of data for AI ML development. These policies address both the use and the handling of data for AI ML. They also ensure that data classifications are available and that guidance is provided to our teams about how to classify their data, and about what kinds of information about data is important to retain, so that information can then be used to conduct our reviews for data governance purposes. And they ensure that we have the appropriate set of controls to protect any data, including customer data, during model development and inference.

Effective data governance also requires prioritizing data privacy and security. We build the strongest security technologies into AI products so that we can ensure protection of this data and appropriate controls as data is used for AI ML purposes. Our approach also leverages Google Cloud's privacy experience and incorporates privacy by design principles, so that AI products and services are designed with privacy safeguards right from the very beginning. We offer tooling and solutions like Cloud DLP, the data loss prevention tool, which ensures that we can redact data as it's used for AI ML or any purpose for that matter. We also offer content filters and recitation checks that enable us to improve our own data governance and privacy efforts, and we offer these same tools to our customers so they too can use them to improve theirs. We also provide transparency for customer data usage and assist enterprises in their data protection impact assessment efforts with our DPIA resource center and documentation. And we've achieved many privacy and security certifications, such as ISO 27001, and compliance attestations from various independent auditors who've assessed our practices, giving you the confidence that you need around our data governance approach and our practices.

So when setting up your own data governance program, I want to leave you with three actions that you can take right now as leaders.

Number one, record and manage your data provenance. This is really important because it sets the foundation for data governance for AI ML. As you know, models are heavily dependent on their data for their performance. It's therefore very important to know which data sets have gone into training which versions of the models. Keeping model lineage and versioning straight, as well as specific versioning of the data sets that have gone into training those particular checkpoints, is absolutely key; there's a small sketch of this idea after these three actions. It is the foundation on top of which your data governance for AI ML programs can be built. My understanding would be that even without AI ML, you already have some of your data mapping and data flows understood for your organization, but for AI ML purposes, you also want to start tracking data provenance as data is used to develop models.

The next thing I would focus on is establishing policies for data use, classification, and handling. This is key as well. How would you classify the different kinds of data that your organization uses? Is it user data? Is it customer data? Or is it some other kind of data? And when you classify these data sets into various classifications, what are the approved use cases for those data sets? Who is going to have access to those data sets when they're used for AI ML? Providing guidance to your engineering teams and product teams is key, and establishing policies allows you to publish your stance on how this data should or could be used, as well as how it should be handled. Now, one step is to publish policies, and the next is to uphold these policies through actual AI governance procedures. So setting up procedures, reviews, and processes that ensure this data is used in ways that uphold the policy is equally important.

Lastly, I would focus on taking a very risk-based approach to managing data, especially as it's used for AI ML. As you know, in AI ML there are large data sets that can be used to train models. When you are using these data sets, it's really important to comply with the policy, but also, especially when there is a huge amount of scale and a lot of different use cases, to have some way of prioritizing the use cases that are potentially the highest risk, mitigating those risks, and then aligning with leadership's risk tolerance levels on going forward with a use case or not. Having a risk-based approach ensures that data use and handling are done in alignment with leadership's risk appetite as well.
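To make that first action concrete, here is a minimal, illustrative sketch of a provenance record in Python. All names are hypothetical, and a real program would normally use a metadata store or an ML metadata service rather than an in-memory dictionary, but the governance question it answers is the same: exactly which dataset versions, with which classifications, trained which checkpoint.

```python
# Minimal, illustrative data provenance record: which dataset versions
# produced which model checkpoint. All names here are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class DatasetVersion:
    name: str            # e.g. "support_tickets"
    version: str         # e.g. "2024-01-15"
    classification: str  # ties into the policy action: e.g. "customer_data"

@dataclass
class TrainingRun:
    model_name: str
    checkpoint: str
    datasets: list[DatasetVersion]
    trained_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Provenance index: checkpoint -> the exact dataset versions behind it.
provenance: dict[str, TrainingRun] = {}

def record_run(run: TrainingRun) -> None:
    provenance[run.checkpoint] = run

def datasets_for_checkpoint(checkpoint: str) -> list[DatasetVersion]:
    """Answer the governance question: what data trained this checkpoint?"""
    return provenance[checkpoint].datasets

record_run(TrainingRun(
    model_name="ticket-classifier",
    checkpoint="ckpt-0042",
    datasets=[DatasetVersion("support_tickets", "2024-01-15", "customer_data")],
))
print(datasets_for_checkpoint("ckpt-0042"))
```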
Moving on to the fourth pillar of enterprise readiness, which is safety and responsibility. I'll start with a little bit of history. Five years ago, we were one of the first companies to publish AI principles. These principles were crafted with input from customers and partners, and they're at the core of how we design and operate our generative AI products. We work with our customers to collectively achieve the intended benefits and avoid unintended potential harms of these amazing technologies. We operationalize these principles in three ways: processes to review products and use cases, tooling that's available both to our internal teams and to our customers and partners, and third, industry-leading research and best practices.

So let's take a look at an example of our review process. When we conduct a responsible AI review, we follow three steps. First, we identify potential harms. Then we assess the risk levels. And last, we develop mitigation plans. One of our customers wanted to use image analysis to help categorize images in a way that would help flag content that might be offensive to users, enable better editorial decisions, and enable better ads targeting. We helped the customer identify potential harms both in their business rules and in potential bias in the image classifiers. We assessed the risk levels, which led to a prioritized list of mitigations, including establishing fairness testing for sensitive categories and subgroups. Ultimately, all of this resulted in a better experience for their users and a higher confidence in the implementation. Conducting these responsible AI assessments is a skill that can be acquired in many ways. It's different than standard decision making. And we welcome the opportunity to engage with you to conduct one or more responsible AI assessments, to help you launch better products and to impart some of our experience onto your organizations.

Tooling is also a key area of leadership for us. We were one of the first to deploy automated adversarial testing, and we are consistently leading the industry on content filters and checkers. Let's take a closer look at some of these tools, starting with safety filters. Here's an example of output from our safety filters: we classify across 15 potentially harmful categories and provide a confidence score of zero to one, so that you can set safety filters that are appropriate for your use case. Recitation checkers are another important tool. They help ensure that the outputs from your generative AI applications do not replicate existing content. Our state-of-the-art recitation checkers work across text, images, and video, and run with the low latency that's needed for modern applications.
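To illustrate how per-category confidence scores from a safety filter might be used, here is a minimal sketch of threshold-based filtering in Python. The category names, scores, and thresholds are all invented for illustration; in a real system, the scores would come from the safety metadata returned alongside each model response.

```python
# Minimal sketch: apply per-category safety thresholds to a model response.
# Category names, scores, and thresholds are hypothetical.
SAFETY_THRESHOLDS = {
    "harassment": 0.5,
    "hate_speech": 0.5,
    "medical_advice": 0.8,   # a use case might tolerate more here
}

def passes_safety(scores: dict[str, float]) -> bool:
    """Return True only if every scored category is under its threshold."""
    return all(
        score < SAFETY_THRESHOLDS.get(category, 0.5)  # default threshold 0.5
        for category, score in scores.items()
    )

response_scores = {"harassment": 0.12, "hate_speech": 0.03, "medical_advice": 0.65}
if passes_safety(response_scores):
    print("Serve the response to the user.")
else:
    print("Block or regenerate the response.")
```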
At Google Cloud, we continue to invest in best-in-class processes and tooling for responsible AI. We tap into the best thinking and innovation from our Google research teams and also work with third parties from academia, government, and industry to ensure that we are bringing you the best, most comprehensive solutions for ensuring your applications minimize potential unintended harms and maximize the intended benefits for your users.

I want to leave you with three actions in the responsible AI area that you can take right now as leaders. The first is to be an active and visible sponsor for responsible AI. Responsible AI is important for ensuring that the applications we are creating are going to be useful and truly achieve the intended benefit. There have been a lot of concerns and questions around data quality as data is used for different kinds of AI applications. One of the key things to pay attention to is ensuring that our data is fully representative, free of bias, and prepared very carefully as it is used to develop models. As we use data to develop models, there needs to be an understanding that is shared and consistent across the organization, in order to ensure that this data is used responsibly as we develop AI ML and create applications using it.

The second is to establish AI governance processes if you've not already done so. What I'd like to point out here is that at Google, we have processes through which we conduct reviews like the one we were talking about earlier in this talk, and these enable us to ensure that the applications we are developing live up to the principles we have set forth for developing AI in a responsible way. So oftentimes it is helpful not only to publish these principles, or the internal policies we have around data governance, but also to have teams who can conduct reviews and provide guidance on a use-case-by-use-case basis, to ensure that these principles and policies are actually being implemented and upheld as AI is being developed.

And lastly, build responsible AI capacity and capabilities across teams and throughout the organization. It is really important to have teams that work cross-functionally to make these principles and policies a reality. It's not only an effort of, say, the compliance teams; it needs to be a very joint, collaborative effort between your privacy teams, security teams, legal teams, and of course product teams, consisting of product management, engineering, and of course data practitioners like yourselves. Together, as a collaborative effort, they can ensure that when designing these kinds of applications, we consider all aspects of both the use and handling of data and the resultant products that will be created, in terms of models and how they'll function, as a collective picture before we make decisions on how to move forward. So it's extremely important to bring together a set of cross-functional folks who can collectively provide input and help make decisions that drive towards better, more responsible products and towards handling and using data in responsible ways.

In closing, I'd like to ensure that you are aware that enterprise readiness is a topic we take very seriously at Google, especially at Google Cloud, and that our generative AI products have been built with all four of these foundational pillars in mind, to give you the confidence that your data is only being used in ways that you intend, and that your applications are safe from threats and always available to deliver the value that your users are expecting from you. Thank you. And I'm happy to open it up to questions.

Shuba, thank you for this presentation. If you have questions for Shuba, feel free to submit them in the Q&A portion of your screen. And just to answer the most commonly asked question, a reminder: I will send a follow-up email by end of day Monday with links to the slides and links to the recording of this session. So, Shuba, can you elaborate on enterprise readiness? What does it refer to exactly from a technology and security perspective?
Yeah, so enterprise readiness is really a collection of various different areas that need to come together to ensure that our customers are able to rely on the offerings we have here at Google to develop applications that are high quality and secure. What that means is that there are offerings already available through the Google Cloud infrastructure, including the security offerings that we have, the data governance work that we already do and the assurances we provide to our customers, and the responsible AI work that we do, and our customers can leverage all of these existing capabilities when they're building their applications without having to reinvent the wheel or create them themselves. So there is already a rich offering of these kinds of services, which enables our customers to go off and focus on what is really important, which is how to use their data to train their models for their specific use case, without having to worry too much about the underlying infrastructure that is already in place to enable you to develop these applications responsibly. These same offerings will also enable you to get a jumpstart on your own data governance, privacy, and responsible AI initiatives, because the tooling being offered, whether it's DLP or the various recitation checkers and so on, is not only leveraged by us to develop our own models, but is also offered to our customers so that they too can take advantage of these tools as part of their data governance and responsible AI efforts.

Perfect. And what are the steps to establish data governance?

So the steps we talked about are these. You must start with a really good understanding of what your data sets are, what the flows are, where they exist, and such. Now, this is not new to AI ML; this is actually a standard practice that's required anyway for your own privacy compliance and such. But for AI ML, the unique and interesting twist is that in addition to your data mapping, you also want to ensure that you have a very good understanding of how data is being used and which versions of data sets are being used to develop which model. So tracking your model lineage and your data provenance is, I would say, the first step: really understanding which data sets, and which versions of those data sets, are being used to train which checkpoint. That's the unique and new thing for AI ML that you might have to do in addition to your already existing data governance efforts, which include data mapping.

On top of that, as a company or as an organization, you really need to decide what policies and guidance you want to put in place with regards to the use and handling of data. Some of this may already be in place for general data governance efforts, but for AI ML specifically, there might be some unique nuances and twists that come about as a result of using data to train models, and these need to be addressed in the policies and frameworks that you put in place specifically for AI ML use and handling of data. So that's the next step.
And of course, in order to uphold these policies, you would then need to put in place procedures and processes that ensure appropriate practices are followed to enable compliance with those policies. Once you've gotten that going, you would want to start monitoring data use and data handling: thinking about whether use is compliant with existing policies, whether the policies need to expand in scope or address some of the new use cases coming your way, and then continually evolving the policies you've put in place through an iterative approach, as well as continuing to evolve your data provenance efforts. As many of us know, this is a fairly nascent space; lots and lots of new things are happening in AI ML. So it's quite natural to expect that your policies will evolve, especially with changing business needs and with the way the space itself is evolving, and your procedures will need to evolve accordingly as well. So I would start with these steps and then continually iterate on the approach.

Perfect. Thank you. So, Shuba, how do data governance, AI governance, IT governance, and corporate governance intersect and/or complement each other?

Yeah, so there is a set of organizations in any company who will be going forward with data governance related efforts. My experience has been that when you set policy, it is very important to ensure that any policy being created or published gets approvals and alignment from all relevant stakeholders, and usually this needs to be a very broad set of stakeholders from various areas and functions within the company, so that any policy being created is aligned with existing policies, whether in the area of security or privacy or some other organization, and so that all your policies together form a web that collectively provides guidance to end users. You definitely want to avoid any conflicts between policies, and you want to ensure that policies actually complement each other. So if there is a policy that addresses privacy, then the security policy should jibe well with it, and they should point to each other. Same thing with your data governance policy. And typically what happens in large organizations is that you might have an overarching, company-wide policy, but that policy makes room for smaller divisions in the company to come up with their own policies that are unique to those divisions and speak more to the specific product or functional needs of each division. But those policies always need to be in alignment with the larger overarching policy as well. So I think the key message is that not only do you want your compliance teams collaborating with each other and collectively leveraging each other's work as they conduct their specific reviews, but their policies should all be in alignment with each other as well.
And that truly is the building block, the first step you want to address as you're setting your policies, to ensure that your efforts in the data governance space, and in the larger governance space, are all aligned with each other.

Thank you. So many great questions coming in; feel free to put them in the Q&A portion. So, Shuba, I agree that a key objective is to ensure that data utilized in AI models is free of bias. What methodology or best practices do you recommend to achieve this goal?

Yeah, so ensuring that data is free of bias is something that our responsible AI team looks into. It is done with a really good understanding of the data, and with tooling and testing to ensure that we are really looking at the data and making it as representative as possible. It is covered as part of our reviews of data sets as they are used to train models. And a lot of this work is based on research that has enabled us to develop the tooling and processes necessary to eliminate bias from our data sets and from the models that result from those data sets.

Thank you. And what are the differences between machine learning governance practices versus generative AI governance practices?

Yeah. So with machine learning in the past, we have dealt a lot with classification-type models, predictive models for certain use cases and such. And those are areas where we've looked at various different risks as well. But with generative AI, the big difference that I see is that generative AI tends to be very heavy in its use of data to train models. We're talking about large data sets that are used to train these models, which then can actually generate results. So the risk that these models present is quite high from two aspects.

Number one, the output of these generative models can be quite imaginative, well outside the regular parameters that you might set for a classification model. An example would be a model that is specifically going to classify certain text into a certain category and provide you that category: its output is pretty limited in range, to the categories that the classification model is able to output. Compare that to a generative model, which has a much wider range of outputs from which it can select, because it's generative, a range you may not even be able to predict in advance. That huge range gives a lot of freedom to the model to provide you an output that can come from anywhere within it. What that typically means is that you have to take additional measures to ensure that these outputs are free of any kind of recitation of the data the models were trained on, and put other kinds of controls in place so that these outputs do not create unintended consequences beyond what the model was intended for. So that's one. And on the input side, these models take huge amounts of data to train, even the foundational models. So one has to be really careful to look at these data sets, understand all the risks that come with them, and ensure that those risks are mitigated as we are training these models. So both on the input side and on the output side, there is a larger risk with generative AI models than there is with regular machine learning models, and one has to account for both the legal and the business risks when thinking about generative AI models.
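As a toy illustration of that output-range difference, consider the following Python sketch. No real models are involved; the classifier stand-in and the output-space bound are invented purely to make the comparison concrete.

```python
# Toy illustration (no real models): a classifier's output space is a fixed,
# reviewable label set, while a generative model's output space is vast.
CATEGORIES = ["billing", "technical", "account"]  # hypothetical label set

def classify(text: str) -> str:
    """Stand-in classifier: every possible output is one of CATEGORIES."""
    return CATEGORIES[hash(text) % len(CATEGORIES)]

def generative_output_space(vocab_size: int = 32_000, max_tokens: int = 256) -> int:
    """Upper bound on distinct outputs a generative model could emit."""
    return vocab_size ** max_tokens

print(f"classifier outputs to review: {len(CATEGORIES)}")
print(f"digits in generative output count: {len(str(generative_output_space()))}")
# The classifier's 3 outputs can be audited one by one; the generative model's
# roughly 10^1150 possibilities cannot, which is why output-side controls
# (safety filters, recitation checks) are applied at serving time instead.
```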
Great information here. Lots of questions coming in. We've got about 10 minutes left to get through as many questions as we can. So, Shuba, let's say I've deployed an LLM foundation model, which obviously is pre-trained. What technique would you recommend to manage model bias and model drift?

So I am not an expert in the specifics of model drift and model bias, because this is something that our responsible AI team handles. But I do know that both of these are areas that we look at at Google for our own models, and they usually involve reviews of these models and appropriate monitoring to ensure that the models are both unbiased and free from drift.

And how do you handle the people and culture element of data governance? What are the learnings on how to best implement AI?

So I think the key here is that the culture of our teams needs to be just as inclusive and as aware of the kinds of issues that we've had with models and AI in the past, to ensure that we are able to model the kind of diversity and representativeness we are looking for in our development teams as much as possible. Our teams here, from a responsible AI perspective, are very carefully watching various happenings in the world; they're aware of what's happening outside of Google as well, and they look to incorporate all the learnings we have regarding the issues that come up, all kinds of fairness and bias issues as well as social harm types of issues, to ensure that when we are developing our models, we are comprehending these and taking measures to avoid them in the functionality of our models. And not only that, but we also help our customers develop models in ways that work as they were intended. So it's always an effort of ensuring that our ethicists are involved in the development of our models, ensuring that their input is heard and implemented, and that ultimately our models are free of these kinds of biases or other issues, and in alignment with our own leadership's interest in ensuring that these models behave the way they're supposed to behave.

And what are the steps to become a data governance practitioner from a technical point of view: programming languages, any specialization in data or cloud storage?

So I would say that it is a field that is quite diverse in the kinds of people who I've seen become data governance practitioners. I have found that some kind of technical background is quite helpful. In my case, for instance, having written some software myself in the past has been helpful in understanding how the lifecycle works and how data is used in the development of these models. So I think that is helpful. And then I would say a good amount of risk and compliance background is also very useful. So between having some technical knowledge of the field itself and a risk and controls background, I've found that most of the folks I work with have some combination of those two types of skills from a data governance standpoint.
Now, in the responsible AI realm, I have seen many more folks who have backgrounds in ethics and such. But generally speaking, I would say that the ability to manage risk and compliance is evident in the work that we do, and a good skill to have as you think about a career in data governance.

I love it. So what are the three most important things employees should know before using and relying on enterprise AI?

Well, I'd say that the key things anybody should be aware of as they look to use their data sets to train models are these. Number one, have a really, really good understanding of the data itself: make sure that you understand how the data is structured and how representative it is, and that you've done some kind of data visualization and such to ensure that you yourself have a fairly good understanding of the data. Secondly, as you're using Google's enterprise readiness offerings to train your models, make sure that you are aware of the different kinds of offerings that are already there and that you're using them appropriately. For example, having access to a data loss prevention tool so that you can redact PII from data sets as you start to use them for model training, and being able to use these tools to appropriately shape both your inputs and your outputs, is, I would say, paramount (see the sketch below). And lastly, as you are training and developing these models, really understand the needs of your organization. What is the specific use case, and what does it really warrant? Does it warrant a custom model? Does it warrant regular model tuning? Does it warrant just a foundation model that you can feed your prompts to? Really understanding what the architecture should be, what types of models you need to create or use out of the box, and how to deploy them, making use of all the offerings that are already there, helps you ensure that you are developing models that, one, use data in a manner that is comfortable to you and in line with your policies, and two, function the way you intend them to function.
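As one example of that redaction step, here is a minimal sketch of de-identifying text with the Cloud DLP API, assuming the google-cloud-dlp Python client library. The project ID is a placeholder, and the info types shown are just an illustrative subset of what the service can detect.

```python
# Minimal sketch: redact PII from text before it is used for model training,
# using the Cloud DLP (Data Loss Prevention) API. Project ID is a placeholder.
from google.cloud import dlp_v2

def redact_pii(project_id: str, text: str) -> str:
    """Replace detected PII in `text` with the name of its info type."""
    client = dlp_v2.DlpServiceClient()
    response = client.deidentify_content(
        request={
            "parent": f"projects/{project_id}",
            # Look for a few common identifier types (illustrative subset).
            "inspect_config": {
                "info_types": [
                    {"name": "EMAIL_ADDRESS"},
                    {"name": "PHONE_NUMBER"},
                    {"name": "PERSON_NAME"},
                ]
            },
            # Replace each finding with its info-type name, e.g. [EMAIL_ADDRESS].
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value

print(redact_pii("my-project", "Contact Jane Doe at jane@example.com"))
```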
Perfect. And I'm going to ask for just the elevator pitch here. We've got just two minutes left, but I'm going to slip one more question in. Since generative AI, or any other AI subset, could generate stunning outcomes, I expect you might cover whether auditing is the most important way to make sure enterprises do not take the outcome as is.

You cut out there for a moment, Shannon. Do you mind repeating the question?

Sure. Since generative AI, or any other AI subset, could generate stunning outcomes, I expect you might cover whether auditing is the most important way to make sure enterprises do not take the outcome as is.

Yeah, so I think what we're talking about here is the different kinds of checks we have to ensure that your outputs are being filtered. We talked about safety filters that, depending on the thresholds you set, can keep your outputs free of certain kinds of content. Certain kinds of things that you don't want to be output can be controlled with those kinds of safety and recitation filters and such. And the other thing you're talking about, with respect to auditing: typically what happens is that there are standards that we as Google would also comply with, and in order to meet those standards, we go through our own external audits to ensure that we have developed our own practices and processes in compliance with the standards and certifications we certify to, to provide assurance to customers like yourself that we've met certain criteria and standards for privacy and security for our product offerings as well.

Thank you so much for this, and thanks to all of our attendees for all the great questions and for being so engaged. Again, a reminder: I will send a follow-up email by end of day Thursday for this webinar, with links to the slides and links to the recording. Thank you all. Thanks, Shuba. Thank you. Hope you all have a great day.