Hello everyone, and welcome to our next EDW session, called "A Case Study: Standing Up a Data Governance Program and Data Capabilities During a Global Pandemic," which is presented by Jenny Schultz, data governance director at KPMG US, and Karam Mahmoud, director of data management at KPMG US as well. All audience members are muted during these sessions, so please submit your questions in the Q&A window on the right-hand side of the screen, and our speakers will respond to as many questions as possible at the end of the talk. Please note that there is a linked form at the bottom of the page titled EDW Conference Session Survey. This is where you can submit session feedback, and we encourage you to do so. So now let's begin our presentation. Thank you, and welcome, Jenny and Karam. Thank you for that. Good morning from Arlington, Virginia, where it is in the mid-60s and full of pollen in the air. My name is Jenny Schultz. You heard I'm the director of data governance at KPMG. I started here a year ago, at the height of the global pandemic, and I'm joined by Karam. Karam, do you want to say a little more about yourself? Yeah, hello everyone. Good morning also from Virginia. I've been with KPMG about two years now. I'm a director in our data services practice. Jenny and I are peers, and we're happy to take you through our case study today. Thank you. So welcome. Thanks for joining. So I'm going to start by talking a little bit about our data journey, how we began, kind of what we've done to stand up our data governance program. And then I'll turn it over to Karam to talk about the data capabilities that are needed to support our data program here at KPMG. And then of course we'll have time for Q&A at the end. So what is KPMG? We are a group of global firms that provide tax, audit, and advisory services to many types of clients. Karam and I are focused on the United States member firm, and it is large.
There are over 30,000 people and, as you can imagine, lots of data. So where did we start our data journey? It started back in 2018, when we spent quite a bit of time interviewing dozens of stakeholders to understand what their challenges with data were. Where were we falling short, either taking too long to meet a client deliverable, where maybe we could have done something better, or just in how we work better internally to the firm? And this slide is the output of what we learned from all those stakeholder interviews. So why do we need a data program at KPMG? First, growth, right? We want to be able to increase speed to market. We want to beat our competitors, you know, get more clients, and also potentially provide additional services to our existing clients, right? We want to grow and expand our services and products. And let's talk about, you know, reducing costs. You know, how can we automate all of the data that we have here at KPMG? KPMG has been around over a hundred years, in a few different forms, but in total over a hundred years. And you can imagine that our technology journey started early, and we've been collecting lots of data for decades, you know, way before the terms data governance and data capabilities and all these wonderful terms that you all are immersed in. And we needed a way to, you know, automate some of those cleansing and standardizing activities so that we could spend the time actually providing insights, either to our clients or to help us make better decisions internally. And we also wanted to increase the transparency of the data we have and, you know, make sure that folks can get what they need when they need it to do their jobs, right? Enhancing that self-service. And lastly, risk management, right? We don't want to under-control our data or over-control our data, right? So how do we master our data so that we have good data quality here at the firm?
You know, our data is organized. We know what needs to be retained. As you can imagine, you know, we're here not only to provide value through our data, right, and to think about data as an asset, but also about, you know, again, managing risk. It's kind of that defensive position. We are a highly regulated firm, and we have a lot of external bodies whose needs we have to meet. So we want to make sure, again, that our data quality is fit for use and that we are only exposing the data to those who need to know. So this is what we heard from our stakeholders in 2018. And that led to the creation of the department that Karam and I are in. It's called Data Strategy and Operations. And it led to, you know, the hiring of a chief data officer, and determining what we needed to do to help solve these problems and meet the needs of our stakeholders. So this is a view of our data strategy at a glance, here on one slide, and I'm going to start from the bottom up. So again, I talked about our group as Data Strategy and Operations, and we are here to, you know, implement the data strategy, build a data supply chain from source to target, or consumption, and everything in between that needs to happen, right? Governance, processes, tools, technologies, and, as you can see kind of in the middle here, all the things we're working on to help our stakeholders meet their needs. We've got a literacy program. How do we make sure that we talk the same language? Data governance, which is my bread and butter, right? Making sure that we are treating data as an asset and doing all those wonderful things like, you know, checking for quality and metadata and confidentiality rules. Again, you heard me say we're a highly regulated industry.
How do we make sure that we are protecting the data we have? We have a lot of data about our clients, and we're making sure that we're complying with all the laws and regulations out there about our data. Data quality accelerators, right? How do we enhance the completeness and accuracy of our data? Again, making sure that our quality is fit for purpose. Creating a metadata source of truth. So again, tools, technologies, and processes around enhancing the transparency of our data. Mastering our data. Again, we have a lot of data and it's all over. So how do we make sure that there's one golden record, so that folks know where to get the best data from? We buy a lot of data at KPMG. So we have a group dedicated to making sure that we comply with those vendor contracts and use that data the way it should be used. And then we have a group also focused on building those things that we need at the center, right? So an enterprise data warehouse, a data lake, to again provide that one-stop shop for those who need the data. And you can see here, moving up from that, the data supply chain, again, from sourcing, to storing that data, using it for advanced insights and analytics, and then consuming and reporting off that data. So our goal is to make this kind of a seamless data supply chain that's standardized, consistent, and as efficient as possible, again so that folks can get the data they need to generate those insights for themselves or for... And again, you already heard me talk about why we're doing this, right? Increase speed to market, protect the firm, decrease costs, and make sure that we're managing our risks appropriately. So I view a data program, or a data governance program, as kind of a culture change initiative, right? This is, yes, somewhat of a data problem, but it's really about culture change. How do we get the firm thinking about data differently? And so we've done this through a couple of ways here.
We have training that we deliver, again, to teach everybody: what is data governance and management? What are the best practices in the industry? What do they mean? We might be doing them here, but we might call them something different. How do we get everybody speaking the same language and understanding what these principles are? How do we know who our supporters are? Who our detractors are, right? Understanding who our stakeholders are. Again, I said I started a year ago. I have met with everyone and anyone who would accept my meeting invite, and just like this, right? Through a computer, on a laptop, with video, just understanding who they are, what their needs are. Are they going to work with us and help us, you know, change the culture of the firm around data? Or are they going to be that "well, I've always done it this way" person, right? So we really want to focus on those early adopters who will work with us to, you know, again, get these things implemented throughout the firm. So we have targeted communications that go out. We started a newsletter. Again, we are so large. So we have to get creative about how we communicate with folks. You know, one-on-one meetings are the best. But, you know, I can't meet with 30,000 people one-on-one on a monthly basis. So we've gotten creative with our presence as well. And we have working groups and a data council, which I'll talk about on the next slide. But it's really about, you know, collaborating as much as we can, understanding what folks need, how we can help them, and making sure that the things that we do and accomplish for the firm, we can map them to their business priorities and show them: hey, we've got this new thing, this new data capability or new tool implemented, and here's how it can help you. And then, you know, measuring progress, showing progress, making sure we're transparent at every single stage of everything we're doing goes a long way, right?
It builds adoption. And we've also been working with folks, you know, to be their arms and legs, right? So some of these data concepts are new to the firm. And so we are educating them, again, through large efforts like training, but also in a more kind of intimate setting, like talking to them about their area, their data asset. You know, let's talk through what it means to adopt these data governance and management principles for your area. And here is a view of kind of who we work with in the data governance space. So I mentioned the data council. We set up a data council, again, with leaders across the firm. So we've got, you know, profit centers, right, in our advisory, tax, and audit businesses. And then we also have groups like finance and HR. And so we have a group that gets together monthly to talk about: where is the data strategy? You know, how are we doing? You know, where do we need to focus next? Our current effort right now: we just did a data capability assessment. So we're working on getting those results together, and then we're going to be showing them to the data council and saying, okay, help us make a decision about where you want to focus, right? Resources are not unlimited. So we've got to prioritize. Like, what can we work on that can provide the biggest bang for the buck at the firm? And then we also have a data governance working group. We meet every other week. And that is, I would say, kind of more boots on the ground. They are kind of in the same areas as the data council members, but we use them more from an operational perspective. Like, how can we implement these things in the firm? And, you know, they'll be the ones doing that, right? They also make recommendations to the data council on decisions that need to be made. And we have a federated data governance model. So again, we are large. And my team talks about the what, right?
What are the rules of the road about data for the firm? And the federated data officers, who are most likely some of the same folks in the data council, are there to say, okay, here's how we're going to implement this thing in our function, and here's how it'll be most meaningful to our people. Because every business line is different. Everyone kind of has a different mindset in this area here at KPMG. And then, you know, the data owners, the data leads, data stewards, data consumers: we are identifying them, you know, fast and furiously, and, you know, educating them on who they are and what their roles and responsibilities are. And again, I consider that they are really helping us adopt and execute what needs to get done to implement these data governance and management best practices. And you'll see over on the top right, we have some little icons about how we work together now that we are in a global pandemic. And, you know, we do have a lot of meetings with folks, but we've been trying to condense meeting times, you know, give people a break, time to step away and get a glass of water in between meetings. So we're doing our part to make sure that you're not, you know, virtually running from meeting to meeting to meeting. And the data governance standard. So how did we develop and publish the data governance standard? I could have gone in a room, you know, put a bunch of words in Microsoft Word, and said, okay, here are the rules of the road around data. And I could have, you know, put a gavel down and said, here are the rules, have a nice day. We didn't do that. We took kind of a shell of what good, industry-standard data governance and management best practices are, put them on a piece of paper, and then we went through them with the data governance working group, word by word, line by line.
Yes, it was tedious, but we heard the concerns and the questions of the group, and they felt bought in because they helped create this document with us. And then that document was also reviewed and ratified by the data council as well. And it took, you know, a fair amount of time, at least six months, if not longer, right, to go through this process, because there was some education that had to come with this process. Right, like, why do we need a section that talks about metadata in this document? And so we had a lot of: okay, well, what is this going to mean to me? How is this going to work? So a lot of, you know, whiteboarding sessions about what this document should say and how it should say it. And so, again, what's worked well is creating things with this group. And then we did the same thing with the training, right? So once we got the data governance standard with those, again, industry guiding principles together, we created a training deck and we piloted it with those data governance working group members, right? Asked them for feedback. Tell us what you need to understand more of, so that we make sure we meet your needs. And so now we're training on a quarterly basis and, you know, getting the word out about kind of the basics of data governance and management. All right. Now I'm going to turn it over to Karam. So I've talked to you about our data journey, our data governance program, how we've set it up, who our stakeholders are, how we work together. And Karam will tell you about the supporting tools and technologies that help make this, again, a robust data program. Yeah. Thank you, Jenny. So Jenny talked about the overarching data governance structure that we have put in place. What we also wanted to do from the center, right?
Again, coming from the CDO office, we wanted to set up some enterprise data services that we can provide as a foundational capability to our stakeholders. And again, our stakeholders are the different functions of KPMG, right? So, as Jenny spoke about, our functions such as audit, tax, and advisory. So I'll go around this slide. We have four services that we want to talk to you about today, which we'll zoom in on in the next 20 minutes or so. So I'll start on the far right top, right? So metadata management is at the heart and the foundation of everything, right? So once we have identified the data owners and a lot of the roles that Jenny talked about, we want to make sure that we understand the business context of the data, right? And this is where we define the business context of the data as well as the technical context, right? So we get the business context and we tie it to the technical information to get a full picture of where our data resides and what it means, right? And the value that we're adding to our stakeholders is that we're making our stakeholders and our professionals in the field data intelligent, right? We're giving them more awareness of where the data is. And another feature that we provide is that, you know, they're able to get their hands on the data. We provide a provisioning service as well, where they can get their hands on the data from the metadata management tool. If you come over to the left side on the top: data quality, which is a natural next step after gathering metadata. So once you have identified your critical data elements, we want to make sure we're able to measure and monitor the quality of data in our firm, right? And a lot of the data that we deal with is internal data, business operations data, but it's also our client data that we ingest into the four walls of KPMG, and we want to make sure that it's protected and the quality upon transfer is correct.
So the value we provide here to the firm is the data health and also the exception monitoring, the root cause analysis, the remediation of the data issues. So that's, in essence, data quality, and we'll zoom into each one of these in a second. If you move over to the right side bottom, we're talking about master data management here. So Jenny hit upon that a little bit, right? So you have disparate sources across the firm for some of your master data, right? So your client entities, your contacts, your employee data, your vendor data: these are all master data sets that should not have multiple versions of the truth. So master data management attempts to combine this data to create golden records and becomes a single provisioning source for the firm. So we have done that, and we have provided customer 360 views to the rest of the firm. And if you come to the left side, last but not least is the policy engine. We're calling it a policy engine, but essentially what it is is data privacy and protection. So if you've collected the metadata, you're measuring data quality and health, you're creating a single version of the truth from a golden-records perspective, then the last part is that you want to protect your data. So if our professionals are trying to use the data for whatever purpose, right, it could be their client engagements, it could be a research assignment, we want to make sure that they use the data according to the policies that govern the data. So as you know, KPMG is a large firm. We have client contracts, we have vendor contracts, we have our own risk policies, our CISO policies, and we have laws and regulations that we have to abide by. All of those can be very cumbersome for a person who wants to just use the data. So what the policy engine does is that it provides transparency into what the data-governing rules are and how you can get your hands on the data while staying in bounds with the rules. So I've given you an overview of the four services.
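To make the golden-record idea concrete, here is a minimal sketch in Python. It is illustrative only: the match key, the survivorship rule, and the field names are assumptions for this example, not how KPMG's Informatica MDM solution is actually configured (real MDM tools use fuzzy matching and configurable survivorship/trust rules).

```python
# Toy match-merge: group duplicate customer records by a deterministic
# match key, then "survive" the most complete values into one golden
# record per group. Field names and rules are illustrative assumptions.

def match_key(record):
    # Naive deterministic match rule: normalized name + postal code.
    return (record["name"].strip().lower(), record.get("zip", ""))

def merge(records):
    # Survivorship: for each attribute, keep the first non-empty value.
    golden = {}
    for rec in records:
        for field, value in rec.items():
            if value and not golden.get(field):
                golden[field] = value
    return golden

def build_golden_records(source_records):
    # Group matching records, then merge each group into a golden record.
    groups = {}
    for rec in source_records:
        groups.setdefault(match_key(rec), []).append(rec)
    return [merge(group) for group in groups.values()]
```

So two CRM rows for "Acme Corp " and "acme corp" at the same ZIP collapse into a single record that keeps the non-empty phone number: one version of the truth, provisioned from one place.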
What I'd like to do now is zoom into each one of them and kind of talk about how we progressed them through the pandemic. Thank you, Jenny. So the first service we're going to talk about is master data management. As you know, the biggest foundation, I would say, of a successful master data management program is building trust with your stakeholders. Because essentially what you're doing is you're going to different stakeholders saying, well, you know what? Instead of you maintaining data in your world, I'm going to centralize the data, and you're going to come to this centralized spot to maintain the data. And as you can imagine, in a global pandemic, it's very hard to have those face-to-face conversations where you can build the trust, right? And, you know, we don't all have our cameras on all the time. I mean, I give kudos to Jenny. She has her camera on 99% of the time, right? But not everyone has their camera on all the time, right? But the way we built this trust during the pandemic is that we had to do, I mean, myself, I had to personally do a lot of, I would say, pre-meets and post-meets, right? So you can imagine a lot of those hallway conversations before you go into a meeting room, right? Or after you come out of the meeting room, you debrief for like two, three minutes, right? So those actually turned into, and Jenny talked about condensed meetings, right, those turned into five-minute, ten-minute meeting invites on the calendar. So that's how we pivoted to building trust with our stakeholders as we wanted to bring them on board with the master data management solution. And if I were to just talk a little bit about what we have on this slide in the middle: I mean, you guys have all probably heard "people, process, technology." So again, no-brainer here. It's built on people, process, and technology, as well as data domain. So we made sure that we did not start off with systems or applications, right? We started off with domain. So: we want to fix our customer data.
We want to fix our contacts data or entity data, right? So that's where we started. And, you know, we interviewed folks. We talked about: who are all the people involved? What are the current technologies? What are the current processes? And the way we turned that into value was that, well, a lot of the inefficient processes will go away, right? A lot of the folks who are doing non-value-added work can now do value-added work, right? Which is post match-merge, the deduplication process of MDM. So we were able to convey this value to our stakeholders in the pandemic, doing those pre- and post-meetings, having a lot of virtual sessions, and we were able to deliver value. And something I didn't note earlier: around all these data services, we're heavily an Informatica-slash-Collibra shop here. So we're using an Informatica solution for our MDM solution here, and it's working great for us. And the other point I want to mention is that we're maturing, right? So we are in, I would say, year two of our foundational capability, and now we're maturing to some of the next steps, right? So we're introducing new technologies. All right. So we'll move on to the next service. So the next service is data quality. So again, what is data quality? We touched upon this already, but if you look at the slide here, we want to make sure that we put data quality in a fit-for-purpose model, right? So what you're looking at is our framework, which says that, you know what, in any data supply chain, from acquisition all the way to storage, all the way to consumption, we want to make sure that we hit five types of data quality controls, which you see at the bottom here, right? Data capture and data entry rules, data at-rest rules, data movement rules, data reconciliation rules, and anomaly detection, right? And the reason I mention fit-for-purpose is because that became very crucial for us when we were implementing data quality one by one across the firm, right?
Because we wanted to make sure that the riskier data, right, or data that has a lot of consumers, right, or that has already-identified issues, got heavy data quality, versus something, you know, which wasn't hitting any crucial reporting, right? Or, you know, where the data wasn't that large or didn't have too many existing data quality processes on it. So that helped us implement data quality across a wide area of systems and applications rather than zooming in on one system or application. The other point I'd like to mention here is that standardization is very important, right? So you see on the left-bottom side that we have DQ dimensions. So we at KPMG, in our data strategy, have aligned ourselves with the EDM Council. Now, if you go out into the industry, there may be about three, four, five data quality frameworks or sets of dimensions out there. And at the end of the day, most of them say one and the same thing, right? But the point is that you have to pin yourself to one of the frameworks and its dimensions and be consistent about it. So we have picked the EDM Council framework, which is pretty comprehensive, and we're happy with it. And those are the dimensions that we cover when we implement data quality. Again, so how did we pivot in the pandemic while implementing data quality in our environment? So I would say, of course, the pre-meets and post-meets were important here as well. But in data quality, what became more important for us was virtual sessions, because there is a lot to demo here, right? Because with data quality, the way you develop the rules and the way you come out with the data quality scorecards and the dashboards, there are a lot of visualizations involved. So here, the way we pivoted was that we had a lot of virtual sessions, a lot of demos. We had memorized our demos and spiel by the end of the day, right? But we were doing a lot of roadshows, some one-on-one demos, some large demos.
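To make the rules-and-scorecard setup concrete, here is a minimal sketch: named rules, each tagged with a quality dimension, run against a dataset and rolled up into a per-dimension score. The dimension names echo the kind of dimensions the EDM Council framework covers, but the specific rules, fields, and scoring here are invented for illustration, not KPMG's actual rule set.

```python
# Toy data quality scorecard: run named rules (each tagged with a DQ
# dimension) against rows and roll results up into a percentage score
# per dimension. Rules and fields are illustrative assumptions.

def completeness_client_id(row):
    # Completeness check: client_id must be populated.
    return bool(row.get("client_id"))

def validity_zip(row):
    # Validity check: ZIP code must be all digits.
    return str(row.get("zip", "")).isdigit()

RULES = [
    ("Completeness", completeness_client_id),
    ("Validity", validity_zip),
]

def scorecard(rows):
    # Percentage of rows passing each rule, keyed by dimension.
    scores = {}
    for dimension, rule in RULES:
        passed = sum(1 for row in rows if rule(row))
        scores[dimension] = round(100 * passed / len(rows), 1)
    return scores
```

The fit-for-purpose point then becomes a configuration decision: critical, heavily consumed data gets many such rules at every control point, while low-risk data gets only a few.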
And that was very helpful to get the capability understood out there. I think we can move on to the next service here. All right, so the next service is the metadata marketplace. So we have coined the term internally; we call it the metadata marketplace. But essentially, this is the metadata management solution we talked about on the original slide. And like I said, this is the foundation of all the services, right? So we have a metadata management solution. It's built on Collibra. And it provides the business context for the data, right? So we're able to define data sets, logical containers. We're able to define business elements, the enterprise definitions. We're able to identify high-level classifications of data. We're able to identify critical data elements as well as non-critical data elements. So all that is done in the metadata marketplace, which becomes the foundation for implementing data quality or master data or the policy engine that we'll talk about in a second. Again, this is one of those tools that required visualization, right? Because this is a self-service tool that's available to our entire KPMG US practice. So we did a lot of virtual sessions here. We did a lot of demos. And something creative that we did is that we recorded short videos, right? How-to videos. How do you filter? How do you navigate? How do you search, right? And then we put those short videos on our metadata marketplace training videos page. So that helped alleviate a lot of the one-off questions that we would get. And the last point I'll make on this is that we've also provisioned this tool to request access to the data. So again, some of the data can be accessed, or at least the request form can be initiated, from our metadata marketplace tool. I'm getting some questions here on the left. So let me cover this last slide, and I think what we'll do is tackle the questions towards the end, because we have about 10 to 15 minutes at the end.
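The metadata-first approach described here, catalog the business context, flag the critical data elements, and let those flags drive downstream services, can be sketched as a simple registry. The field names and the criticality logic below are illustrative assumptions, not the Collibra operating model.

```python
# Minimal metadata registry: business context per data element, with a
# criticality flag that downstream services (such as data quality) can
# key off. Field names are illustrative, not the Collibra model.

catalog = {}

def register_element(name, definition, classification, critical=False):
    catalog[name] = {
        "definition": definition,          # enterprise business definition
        "classification": classification,  # e.g. confidentiality level
        "critical": critical,              # critical data element (CDE)?
    }

def critical_elements():
    # Fit-for-purpose: the heavier controls go only to the CDEs.
    return sorted(n for n, meta in catalog.items() if meta["critical"])
```

Tying the business definition and classification to each element is what lets one catalog entry answer both "what does this mean?" and "how should this be controlled?" for every consumer of the data.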
So the last service I'd like to talk about is the policy engine. Again, the internal term that we have coined for this solution is "policy engine," but in essence what it is is a data privacy and protection solution. So as I talked about, right, data can be governed through the contracts that we have with the owners of the data: it could be vendors, it could be clients. It could be our internal risk policy, our CISO policies, our risk framework, right? Or it could be laws and regulations. It could be GDPR, CCPA, or our favorite these days, the tax Section 7216 law. So what we have done in this policy engine solution is that we have brought all those different policies, if you will, and we have centralized them into very easy-to-read and easy-to-process rules. So the rules are on data access, the rules are on data usage, the rules are on data storage, right? That contractors cannot access this data. Or you cannot take this data offshore, right, to your offshore team. You cannot use this data beyond the original engagement team, or something like that. So what we have done is that we have centralized the rules and we have then applied the solution to different applications at KPMG. And what we're able to do is we're able to classify and tag our data with easy policy rules. So our professionals, before they get their hands on the data, can see: can I share this data with my offshore team? Can I use this in my data lake for secondary analysis, right? So that has helped our professionals tremendously in provisioning the data and also in where they store the data. So another use case for this is that when we store our data, we have different environments, you know, highly secured, medium, or less secure. So we're able to use these policy rules to inform our application owners in which area of the application they should store the data. So that has helped us a lot.
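The core mechanic, tag each dataset with the policies that govern it, then answer "can I do X with this data?" before provisioning, can be sketched as a tag lookup. The tag names, restrictions, and uses below are invented for illustration; the real solution stitches together multiple Informatica products rather than a lookup table.

```python
# Toy policy engine: each dataset carries policy tags, and a requested
# use is allowed only if no tag on the dataset restricts that use.
# Tags, restrictions, and use names are illustrative assumptions.

RESTRICTIONS = {
    "client-contract": {"offshore_access", "secondary_analysis"},
    "tax-7216": {"secondary_analysis"},
    "pii": {"offshore_access"},
}

def is_allowed(dataset_tags, requested_use):
    # Deny if any tag's restriction set contains the requested use.
    return all(requested_use not in RESTRICTIONS.get(tag, set())
               for tag in dataset_tags)
```

So a professional asking "can I share this with my offshore team?" gets an answer derived from every contract, regulation, and internal policy attached to the dataset, without having to read any of them.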
And again, the policy engine, I would say, is one of the very unique solutions where we had to actually stitch together multiple industry products from Informatica and build a solution that works for us. And we are in year one of this journey, and we're going to continue to elaborate on this. The point I'll make on this solution from a pandemic-pivot perspective is that since this was a large solution which required a very large cross-function, cross-discipline team, we had some turnover, right? So I don't know about the folks on the session today and what you experienced, but during COVID we experienced some turnover, because people were moving locations; folks were more fluid in their destinations. So one point I have there at the top is getting personal, right? In order to keep the team engaged, keep the morale high. While of course a lot of that came from the top, from our CDO and everything, we made sure that we spent time at the beginning and end of our meetings to get to know people, right? We had a lot of new folks joining the team, joining the firm directly onto the project. So we got personal with everyone on Zoom, right? On Teams. We were asking family questions, house project questions, health questions. So a lot of that helped us retain, I would say, talent and keep up the morale on some of these large projects. I think, Jenny, we have one last slide where we summarize our pandemic strategy. So my throat is dry, so I'll turn it over to you. Thanks. Yeah, I mean, just to echo what you've heard from Karam: working, you know, 100% virtually has its challenges. And when you're standing up a new program, it's even more difficult to build relationships with stakeholders, you know, to Karam's point. Not everyone is on video. You can't interpret their nonverbals. So I try to be on video as much as possible in hopes that others will do it too. So I can get to, again, see their face and know: are they scowling? Are they smiling?
But again, you know, Karam and I spent a ton of time, you know, having those virtual coffee chats with our stakeholders, our teams, our peers, our leaders. I mean, just up, down, sideways, as many as we could. Again, to get to know people, get personal, you know, because work and life have just become kind of, you know, one big blended activity now, right? Who does laundry while they're on the phone? Who folds clothes? It's just part of kind of our daily life now. And again, working virtually, having those meetings with our stakeholders to help, you know, solution things together has been really important, right? Let's brainstorm, let's figure this out together, right? Having that mindset, and not just kind of dictating things to people, has been helpful. And again, you already heard me talk about condensing meeting times: you know, 30 minutes for that touch point, or could you have 15 minutes? Do you really need an hour, or is it something you could cover in 45, right? So I'm constantly challenging myself and my team to think about: okay, yes, we need to be more intentional about communicating with each other, but can we reduce the amount of time? Because there's just meeting fatigue, right? And staring at a screen all day, you know, by the end of the day, I'm sure you're tired, like I'm tired. So how do we, you know, get the job done, but think about others at the same time? And, you know, Karam talked about, you know, building trust: say what you're gonna do, and do what you say you're gonna do, right? Help build that trust, you know, that community. You know, follow through on anything that you've promised. You know, again, meet the people, really understand your stakeholders' needs, ask questions, right?
And having those coffee chats, and sometimes there were happy hours too, there still are, having those one-on-one talks. Sometimes people don't feel comfortable bringing things up in larger meetings; they're more introverted, or they want to take something away and think about it. And again, it really does help: smile, turn on your camera. I know you're not always camera ready, I'm not always camera ready, but when I am, I try to do this as much as I can. So we call these our pandemic pivots, and they have really helped us build relationships, get things done, enhance trust, and move our data strategy forward. So with that, I will open it up for the question part of our session.

Thank you both so much. We've got a lot of great questions coming in. We'll get to as many as possible here in the time left, so I'm just going to dive in. Thank you for the helpful presentation. Our data governance program was launched this last January, just four months ago. You spoke about training programs. Can you send us some information about your training programs?

Yeah, I'll have to check to see what I'm allowed to share and what I'm not allowed to share. But yes, we took our data governance standard and said, okay, what are the main topic areas? Quality, metadata, classification, and so on, and we went through all those things and really tried to break it down for people. And by role, too: what does a data owner do? What does a data steward do? So, yeah, let me check and see what I'm allowed to share.

I love it. And may I recommend people connect with you in the SpotMe app to follow up on that? Is that okay? Sounds good. Awesome. I love it. So how do you provide KPIs on data governance to provide business justification for budgeting and project planning?
This is always a tough one. It's always a work in progress, right? Especially when you're in build mode, which is where we still are from my perspective. So here's what we did with our data council. We run on a fiscal year, September to September. At the beginning of every fiscal year, we say, okay, data council, here are the things that we think we should work on. We create a view into a roadmap and ask, do you think these are the things that will help you the most? Move your function forward, or help you with an enterprise service, whatever it is. And we get their buy-in before we even start working on things, because there's no point in us delivering something that no one's going to use or care about. And then we track progress against the items on those roadmaps. Governance, again, is difficult to show value on quantitatively. But eventually, once you're past that build mode, you can show: hey, here are how many critical data elements have been identified, and how many have controls on them, or have metadata and lineage on them. So it's easier after that build phase. During build, which is where we are, it's: what's the status? What's the progress of this initiative? Are you green, yellow, red, 90% complete, whatever it is.

Awesome. Thank you. Have you dealt with archived data, particularly the metadata, and the decision of how far back to load data?

Oh, gosh, because I mentioned how old we are. We're taking a risk-based approach. We are not loading every single piece of data and governing every single piece of data. We are starting with what data is critical to the firm. And that is a conversation with our data governance working group members, our data council members: which data assets are you leveraging for your most important uses? And that's where we're starting, right?
There's no way we will ever ingest and govern every single piece of data. So that's how we tackle it.

How do you handle the debate on data privacy? Any experience with that?

Can you say more about the debate? Which debate? I feel like there are a lot of debates.

Well, what's the most common debate you've had?

It's always the balance. You heard Karam say there are lots of rules, lots of laws, lots of regulations. How do you balance making sure that we are complying with those while also trying to generate as much value as you can out of the data? It's usually a case-by-case basis. And that's why Karam and team are working hard on that policy engine: to create a framework so that it's not always case by case. We don't have to come to the well every single time users ask, can we use this data for this purpose? Instead, let's create a standard set of rules, as much as we can, and say, okay, can you use this data for this purpose? Yes or no? Or yes, with some stipulations. And Karam, I'm just assuming that you're letting Jenny answer; I don't want to be cutting you off. We have tons of questions; I've got a lot of juicy ones after that as well.

Yeah, I agree with what Jenny said. Not knowing exactly what debate we're talking about, I'm good with that answer.

All right. So then, do you adopt tools prior to the start of your journey, or along the way?

I can take this one. So again, strategically, not right at the beginning. We always start with some sort of data strategy around our services. Whether it was MDM, the metadata marketplace, or data quality, we always start with a data strategy that defines what the service is, what the problem statement is, and what the solution is, which is tool-agnostic. And then we do a vendor landscape. We do a product evaluation.
So we have pretty rigid processes, I would say. Even when we know some of the top marketplace leaders, we still like to prove things out through those evaluations, because again, we are a heavily regulated firm, and we always want to make sure that we have the strategy and the evaluations documented before we jump into any tools. So we do our homework, basically. We don't decide on tools right at the beginning; it's part of our strategy.

I love it. And have you done some work yourselves to showcase the significance of true and trusted data?

I'm guessing this is related to MDM. So again, part of our data strategy is some level of data profiling, and showing the impact and the extent of the problem. As I talked about with MDM, there's a huge component of buy-in and trust. What we did in the beginning was pick a few problem areas and actually profile the data. We pinpointed where the issues were and then showcased: if you clean this data or de-dupe it, what are some of the advantages? So yeah, we did do a lot of the work ourselves. And even once we deploy these services, I don't know if the question is geared towards that, but we stay involved. My team does a lot of the initial false-positive reviews, whether it's data quality or MDM. We facilitate with the data owners and the data stewards in diving into the data, approving de-duplication results, doing root cause analysis, all of that.

And I'll just jump in on any of these questions; I don't want to cut anybody off, so just let me know. Moving from question to question: what tool did you say you used for metadata?

Collibra. Collibra, yes.

Okay. Do you think KPMG has entered the era of AI and machine learning?

I can start with that, Jenny, if you want. So as we said, we are, I would say, two years now into our data strategy, right?
The Chief Data Officer's data strategy. So now we're getting to a point where we're embarking on those, right? We always had that vision for AI and machine learning in our data strategy, so the tools that we procured and the architectures that we set up, we set up with that in mind, knowing it was coming. And now that we've deployed some of the foundational capabilities, we're maturing our services with a lot of algorithms and models that help us, whether in data quality, nudging us towards data quality rules that we may have missed, or towards match-merge rules that we may have missed from an MDM perspective. So we're definitely embarking on that now that we have deployed the basic foundational capabilities.

All right. We have time for just a few more questions here, so I'll sneak in as many as I can. Is KPMG now using cloud computing?

KPMG is a hybrid, multi-cloud strategy firm. So yes, we have a lot of solutions on the cloud, a lot of data lakes on the cloud. But the way we're set up, because of our client base, we will always have a need for some on-prem data. That's why our solutions are hybrid: we're able to support a lot of the cloud platforms, multi-cloud, as well as on-prem solutions. So yeah, we are set up for that.

In regards to the policy engine, is the policy from the engine automatically applied, or is it up to the user to actually follow the policy? For example, is it just a reference, or does it enforce the rules? Do you have a sneak preview or example?

Yeah. Good question. So it's actually both. We have the ability for a data owner to fill out a survey and manually classify the data. So I would say it's semi-automatic, where we need some input from the data owner in terms of what they know about the data. But the policy rules are already ingested in our library.
So based on that survey, we can automatically run the responses against the policy rules and the logic we have, and then it automatically applies the policy rules to the data set. Again, we're applying it to the metadata: at the data set level and sometimes at the data attribute level. We're also building a capability where multiple applications can make an automated service call to the policy engine. So this is a centralized utility, set up as a service for other KPMG applications, where they can call the policy engine and classify their data. The manual part is that the policy rules have to be hydrated in our library. So there are some automated and some manual steps there.

Right. Well, that brings us to the end of our session. Thank you, Jenny and Karam, for that wonderful presentation. We just want to note again that there is a linked form at the bottom of the page called EDW Conference Session Survey; this is where you can submit feedback for today's session. That wraps up the session. You are welcome to continue networking with each other within the SpotMe app as we take a quick break between sessions. We look forward to seeing you then. Again, Jenny and Karam, thank you so much. Thank you. Thank you.