Here we go. Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DATAVERSITY. We'd like to thank you for joining this DATAVERSITY webinar, Data Management Meets Human Management: Why Words Matter, sponsored today by Alation. Just a couple of points to get us started. Due to the large number of people attending these sessions, you will be muted during the webinar. For questions, we will be collecting them through the Q&A panel; you'll find the icon in the bottom middle of your screen. Or if you like to tweet, we encourage you to share highlights or questions on Twitter using the hashtag #Dataversity. And if you'd like to chat with us or with each other, we certainly encourage you to do so. Again, to access the Q&A or the chat panel, you will find the icons for those features in the bottom middle of your screen. And as always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and additional information requested throughout the webinar. Now let me introduce our full panel of speakers today: Greg Swiger, Carissa Tull, Bob Seiner, and Mitesh Shah. Greg is the Vice President of Enterprise Data at Fifth Third Bank. He is passionate about creating data-driven cultures within complex working environments. Greg has spent the last eight years working in data and analytics in various industries, including credit, customer service, retail, and banking. Joining Greg from Fifth Third Bank is Carissa Tull, who is the VP of Enterprise Data Program Governance and Strategy. Bob is the President and Principal of KIK Consulting and Educational Services and the publisher of The Data Administration Newsletter, TDAN.com. Many of you may know him already from his monthly webinar series with us, Real-World Data Governance. Bob specializes in non-invasive data governance, data stewardship, and metadata management solutions.
Mitesh is Alation's Vice President of Product Marketing and Analyst Relations, helping bring the Alation data catalog to market. Mitesh has over 20 years of experience spanning a number of roles, including DBA, information security lead, and data scientist. And with that, I will give the floor to Greg to get today's webinar started. Hello and welcome. All right, thank you. I appreciate that intro. I think you just heard just about everything up on the screen, but a little bit about me professionally: I started off in retail, working for Macy's Credit and Customer Service, and then in the past year or so I transitioned into banking with Fifth Third Bank. One thing I want to point out at the top of the hour is that there are a lot of data management experts in this bank. I am not one of them. The reason I find that important to say is that I am well versed in data management, but I am in no way, shape, or form an expert. I actually brought one of the experts with me today, Carissa Tull, and she'll be talking a little later about how we made some of these concepts more fun. A little bit more about me personally: I've got two daughters, I live here in Cincinnati, and my two daughters are big Disney fans, so as a byproduct, I watch a lot of Frozen and Moana. In the rest of my spare time, I try to convince my wife to go on hikes. I'm usually fairly unsuccessful on that front. So just a little bit about me. Next slide. Now we're going to talk about the agenda today. I want to start off by introducing Fifth Third Bank, what we do, and how we look at data; go into the current landscape and what my role looks like; then transition into some of the obstacles that we faced and how we overcame them; and talk a little more about why specific words matter in data governance, or what we like to call it here at Fifth Third Bank, data management.
And then lastly, how do you get people to change through a change management framework? Next slide. All right, a little bit about Fifth Third Bank. We're essentially a bancorp with diversified financial services, headquartered in Cincinnati, Ohio. Fifth Third was established in 1858, and as of December 31, 2020, the company had access to over 52,000 fee-free ATMs and was managing over $434 billion in assets under care. The way that we look at the bank is we divide it into four main business units: branch banking, or as we like to call it, retail; commercial banking; consumer lending; and wealth and asset management. You'll see that we're primarily a regional bank throughout the Midwest and Southeast, but with that said, we have 1,134 full-service banking centers throughout the U.S., along with a couple of regional footprints across the continental U.S. Next slide. All right, when we talk about our data platform ecosystem today, we divide our data capabilities into different areas of expertise. As you look down the far left side of the slide, you'll see interact, abstract, process, store, move, deploy, and manage. My role tends to focus on the manage and interact portions, where data starts and ends, and the way that we divide that up is through a data marketplace. The beginning is data curation into Alation, which we leverage as a marketplace, and then when people go to interact with that data, the first place they typically search is Alation as well. Most of my career has been around data consumption, and management and interaction are just two areas of data consumption. In addition to that, you'll see some of the other technologies that we deploy. But one of the big things that I like to point out here is that we have a very divided ecosystem. We have a lot of old technology and a lot of new technology.
Bridging the gaps between those has been one of our big challenges, and it's one of the things that we're constantly thinking through. But the thing that I like to point out here is start and end: the first step in the data marketplace is always Alation. And so that's really the big draw here. Next slide. The next thing I want to talk about is why data management. Data management is fairly prominent in the banking community for a variety of reasons. Obviously, when you're dealing with the Fed, regulatory reporting, and things of that nature, data management is a much higher priority than it was in some of my previous roles. But taking a little history lesson, we're going to go back in the way-back machine and look at where this all started. The Basel Committee was initially named the Committee on Banking Regulations and Supervisory Practices. It was established to enhance financial stability by improving the quality of banking supervision worldwide and to serve as a forum for regular cooperation among its members on banking supervisory matters across the globe. Taking the things that were established by the Basel Committee on Banking Supervision, the EDM Council was essentially created to help people understand how to execute on the Basel Committee's direction. The EDM Council is a global association created to elevate the practice of data management as a business and operational priority. Taking those strategies and actually putting them into practice is really what the EDM Council is based around. The council is a leading advocate for the development and implementation of data standards, best practices, and comprehensive training and certification programs. One of the most important things that comes from the EDM Council is what we call DCAM. Go to the next slide, please. DCAM is the Data Management Capability Assessment Model.
What this is, is an assessment model that looks at your company independently and gives it rankings from one through six: one, not initiated; two, conceptual; three, developmental; four, defined; five, achieved; and six, enhanced. The difference between three, developmental, and four, defined, is what we like to call crossing the capability chasm. That tends to be challenging for most companies throughout the industry. One of the things that we always like to say here is that whenever you think about a holistic data strategy, you can break it down into three components: there's people, there's process, and there's tools. Crossing the capability chasm from three to four can typically be attributed to processes and tools. Going from four to five, however, is equally challenging, and that's when you have to focus on people, the human management element. What we're going to talk about a little later is why words matter in human management, because the way that people perceive the message frequently determines whether or not the message is successful, or whether it's implemented practically throughout the organization. As you can see on the right here, the EDM Council published 2020 benchmark component scores. You'll see in orange financial services tiers two and three, some of the larger ones, and then all financial services is the blue line. As you go around the outside from one through seven, you'll see some of the different criteria that the DCAM scoring model is based on. I can't disclose exactly where our scores are, but one thing I will say is: imagine there's a bigger heptagon there, and that's kind of where Fifth Third sits today.
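The three-to-four versus four-to-five distinction described above can be sketched as a tiny lookup. This is purely illustrative: the six level names and the people/process/tools framing come from the talk, but the function itself is a hypothetical helper, not an official DCAM artifact.

```python
from enum import IntEnum

# The six DCAM maturity levels as named in the talk. The "capability
# chasm" sits between DEVELOPMENTAL (3) and DEFINED (4).
class DcamLevel(IntEnum):
    NOT_INITIATED = 1
    CONCEPTUAL = 2
    DEVELOPMENTAL = 3
    DEFINED = 4
    ACHIEVED = 5
    ENHANCED = 6

def primary_focus(current: DcamLevel, target: DcamLevel) -> str:
    """Which leg of the people/process/tools strategy dominates the jump,
    per the speaker's framing: reaching level 4 is mostly process and
    tools; going beyond it is mostly people (human management)."""
    if current >= target:
        return "already there"
    if target <= DcamLevel.DEFINED:
        return "process and tools"
    return "people"

print(primary_focus(DcamLevel.DEVELOPMENTAL, DcamLevel.DEFINED))  # process and tools
print(primary_focus(DcamLevel.DEFINED, DcamLevel.ACHIEVED))       # people
```

A rough heuristic, of course; a real assessment scores each of the model's component criteria separately, as the heptagon chart on the slide suggests.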
So we're industry leading in most of these criteria, which is to say that we've done a lot of really great work in regard to developing, defining, and even starting to achieve some of these different scores. Next slide, please. As I referenced earlier, crossing that capability chasm is challenging across all industries, and it's been one of the most difficult things we've had to do over the last few years. What I've tried to do here is bucket the challenges into a couple of different categories. Starting with awareness: ultimately, the bank did not know how well we were performing against data management best practices. We didn't know whether we were good, bad, or undefined. The strategy we followed was to create a centralized data management program to develop the organization's data literacy based on DCAM principles. And as I said before, we're building the data strategy around three things: people, process, and tools. Next, desire: the bank did not have a data-driven culture. Our big strategy here, and what we're going to talk about for the rest of this presentation, was to make data fun, focusing on Alation adoption and curation, ultimately directing people back to a marketplace. Knowledge was another thing. Many data consumers have not had the formal training required to unleash analytical capabilities. What we did here was develop a data management training curriculum and provide the right processes and tools for people to unleash data capabilities. For a lot of our consumers who aren't exactly data literate, that means having no-code or low-code options that allow them to leverage data. Ability: data consumers do not always have the skills required to leverage data.
So what we did is create a scalable framework called Bei Dati, which is Italian for beautiful data or good data, to execute data management best practices across the bank, which you're going to hear a little more about. And then lastly, reinforcement: consumers did not understand the value of adopting new tools and technologies to leverage data. To address this, we consistently demonstrated business value and leveraged change management techniques that we'll talk about in a little bit as well. In summary, this is a lot of what we struggled with initially when we were talking about crossing that capability chasm. And again, the thing to draw from this is that going from three to four is process and tools, and four to five is human management; that's the people portion. Next slide. All right. When we talk about how we went full scale with data management, what we were trying to do was create a scalable and efficient approach for the implementation of data management best practices. Starting on the far left, we were in the ideation portion; we were learning to crawl. Most of our data was broken into different categories for risk aggregation, which again goes back to the Basel Committee. We established a centralized data management program, focused on defining our policies, standards, and data handling best practices, and measured progress and quality through things like Alation adoption, data technical lineage, and business quality lineage. We're now effectively in the scaling portion. What we've done here is roll out most of our data management practices within IT. In order to do EDO, or enterprise data office, change management for existing data, we have a tabletop exercise where we have to go through certain criteria in order to put data into production. And that's primarily with our existing data sets.
Additionally, we have robust data management integration with tabletop reviews and measurement of existing, new, and enterprise data sets. Lastly, where we want to go is a fully federated model where we're truly living agility in IT releases, with data management scaled throughout the entire bank. It's in every single line of business; it's ingrained in the work that they do. Ultimately, data management will be viewed as an accelerator and not just boxes to check along the way to production. Next slide. Whenever we talk about scaling data management, I think the first place to start is the risk aggregation exercise. We essentially looked at all of our critical data assets throughout the bank and bucketed them into three different data tiers. Starting at the top is enterprise-designated data. You'll see on the right-hand side all of the various data management capabilities that must be applied to enterprise-designated data, which essentially is data used for regulatory reporting. It requires the highest amount of data management rigor, the highest standard of care. You have to have things like service level agreements, a data dictionary applied to the data, data quality rules, a business glossary, technical lineage, consumption lineage, and lastly, business process lineage. Ultimately, you'll see that it gets the highest amount of control because it has the highest amount of business impact. As I said before, this is things like regulatory reporting, financial reporting that goes to Wall Street, things that are leveraged across the bank. So it ultimately gets the highest amount of data management rigor applied to it. Next down the list, you'll see managed data. This is typically the various enterprise reports that go out and that people make operational decisions on; however, it's typically not published externally.
You'll see that there are things like access permissions, service agreements, data dictionary, quality, glossary, and technical lineage that must be applied to this tier, but you don't have some of the higher-end data management rigor like consumption lineage and business process lineage. In addition to that, data stewards, for instance, aren't assigned to this tier. And then lastly, there's the other data, or exploratory data. This is basically just data where we need some amount of access permissions, and we want to have it cataloged in a data dictionary just so people can search it, but ultimately we don't have to apply all the various levels of data management capabilities to it. So, high level, this is really how we talk about data tiering in the bank, and ultimately it's the standard of care that we apply to all of our critical data assets. With that, what I want to do now is pass it off to Carissa Tull, and she's going to talk a little bit about how we took this somewhat technical concept and applied it to the rest of the operation so people could grasp the concept, but also understand how to execute our vision. And so with that, I'm going to pass it over to Carissa, and she's going to talk a little bit about Bei Dati. Thanks, Greg. We can go ahead and move on to the next slide. I know Greg talked about Bei Dati on a previous slide, and he mentioned that it actually translates from Italian to beautiful or good data. And I guess that depends on how your Google translator works on a certain day. But what Bei Dati actually is, is an execution plan for data management in a visual and fun format. When we were in the ideation stage, we developed our standards; we had policies; we had procedures. But when you go to communicate those, a lot of people's eyes kind of glaze over; they don't like to hear about standards and policies. So we're changing the narrative with this execution plan.
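The tiering scheme Greg walks through maps naturally onto a config-style table: each tier carries a set of required controls, and a data set meets its standard of care only when every control is in place. Here is a minimal sketch of that idea; the control names are paraphrased from the talk, and this is an illustration, not Fifth Third's actual schema.

```python
# Three data tiers and the controls the speakers attach to each.
# Enterprise-designated data gets the full set; managed data drops the
# consumption and business-process lineage; other/exploratory data
# needs only access permissions and a searchable dictionary entry.
TIER_CONTROLS = {
    "enterprise_designated": {
        "access_permissions", "service_level_agreements", "data_dictionary",
        "data_quality_rules", "business_glossary", "technical_lineage",
        "consumption_lineage", "business_process_lineage",
    },
    "managed": {
        "access_permissions", "service_level_agreements", "data_dictionary",
        "data_quality_rules", "business_glossary", "technical_lineage",
    },
    "other": {
        "access_permissions", "data_dictionary",
    },
}

def missing_controls(tier: str, applied: set) -> set:
    """Controls still required before a data set meets its tier's standard of care."""
    return TIER_CONTROLS[tier] - applied

# e.g. a managed data set that so far has only access permissions:
print(sorted(missing_controls("managed", {"access_permissions"})))
```

Encoding the tiers as data rather than prose is what makes the later tabletop reviews checkable: the review becomes "is `missing_controls` empty for this asset?"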
Bei Dati is our own data management pizzeria. We have a couple of different options through which we can execute our data management capabilities and controls. Do you want to go ahead and go to the next slide for me? When we look at a plain pizza, we're aligning this to our existing data within the bank. We have created some custom fields within Alation where we require access permissions; our service agreements, which is just data availability; and then, of course, our data dictionary, so our titles and our descriptions. You can see we have some fun little flavors in here. We have 200 calories because it's the lowest level of effort, and then we also show where it applies to the different data sets. So we move on to the next slide and the next pizza we have available: specialty. This is where we start to align our data management capabilities to newly created data sets. You can see that we still have our basic foundation of access permissions, service agreements, and data dictionary, but we start to look a little deeper into data quality and technical lineage. We're piling a few more toppings onto our pizza and layering things in. Next slide, please. Lastly is our enterprise data, our business-critical data. This is our supreme pizza. This is where we're throwing everything on. We have our base again: our access permissions, our service agreements, our data dictionary. But we're also layering in that business glossary and the business term alignment. And this is where data quality gets a lot more robust. We look at movement controls, we look at data quality rules, we look at our complete data quality control environment at this stage. And then for lineage, not only are we tracking and documenting technical lineage, but we're ensuring that we have alignment to business process, and we're also documenting the consumption lineage for critical data assets aligned to these enterprise data sets.
And then lastly, obviously, because it's enterprise data and needs a higher standard of care, we put that stamp of approval on it with certification and a QA process afterwards to ensure that the control environment is sufficient for that level of data. And I think I'll turn it back to Greg. Yeah. One of the things to take away from this: can we actually go back one slide real quick? What we constantly talked about was, how could we get people to adopt data management? And we talked about maybe throwing a pizza party for a specific squad or a line of business or a team if they started to adopt data management practices. Step one in that is obviously Alation. What I originally wanted to do with the data tiers was talk about them as silver, gold, and platinum, but our chief data officer gave me a hard time about that; he said it sounded like an airline loyalty program. And so Carissa's brainchild was to go back to the table and say, all right, not everybody loves data governance or data management, but everybody loves pizza. That was the brainchild of where this got started and how we started this whole conversation. So I have to give it to Carissa and the team. They really did a great job here in terms of making something that is fairly technical in nature more digestible (that's a pun, if you're keeping track), but ultimately more consumable by the masses. And so they did a great job here. Well, here's the thing. If you think about it, a lot of people, and I know I'm one of them: if you go buy a new piece of furniture and you rip open that box and pull out that direction pamphlet, how many of you actually just look at the pictures to figure out what you're supposed to do, as opposed to reading the standard, reading the policy, reading the procedure?
To Greg's point, we wanted something fun and way more digestible for people to latch onto, to enable that human change management to apply these practices. Yeah. So it started with potential pizza parties and made its way into a policy that is based on pizza. Next slide, please. In a lot of the conversations that we were having with some of our stakeholders, IT analysts, data engineers, and so on, we realized that centralized, traditional data governance had somewhat of a bad brand. So what we started to do was think about key words that mattered, and then we tried to change the conversation around those words, because ultimately the way these words are perceived determines whether or not this is successful within the organization. Take something as simple as a governance subject matter expert. We rebranded that as a data management maven, federated into each of our agile squads to drive data management best practices. The gravity of something like "subject matter expert" seems a little aggressive, and it wasn't easy to get people to sign up for that conversation. Governance controls is the same thing; we're rebranding those as data handling best practices. Ultimately, how are you handling your data to make sure you can leverage it to best drive change in the organization? Data stewards: we're rebranding that as a data stewardship meeting, where we sit down with some of our stakeholders, line-of-business analysts, or data owners, and we talk to them about what needs to be done and what standard of care they need to apply. It was a little laborious to get them to sign up to be a data steward in title, but having a stewardship meeting seems much less threatening. With Alation, we talked about cataloging data. We're changing that to curation. You're loading the marketplace.
You're curating content into the data marketplace to be leveraged down the line. And this is something that Bob actually brought up to me that I loved: rather than assigning people work, assigning people tasks, you're recognizing them for the work that they're already doing today. They're already acting as data stewards; they're already acting as data curators. You're recognizing them and formally starting to give them a framework, a tool chest, to execute with. And then going into the reinforcement portion, the last slide: if you think about everything we've talked about today, change is really, really difficult. It's almost impossible to talk about a successful data management rollout at scale without talking through change management as a formal process. The change management procedure that we've deployed just recently is ADKAR. Ultimately, it goes from left to right, and you'll see: do consumers have awareness? Do they have the desire? Do they have the knowledge and ability? And are you reinforcing it? I would say that we're pretty firmly between the enablement zone and the engagement zone, where we're starting to drive significant change. Just in the last year or two, our Alation usage statistics have grown exponentially, and with 25,000 people in the bank, we're hoping to get all of those people into Alation one day, so they can go to Alation as step one in the data marketplace. So with that, I'm going to hand it off to Bob. And I appreciate everybody listening. All right. Thank you very much. I just want to reiterate some of the things that Greg talked about, because when he started talking about the EDM Council, he started talking about the rigor that they needed for the enterprise data, for the managed data, for the other data. There's a lot of work that needs to be done there. Yet we need to engage people; we need to activate people in the organization.
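Prosci's ADKAR model, which Greg references, treats its five stages as sequential: a change effort stalls at the first stage that isn't yet satisfied, and that stage is where attention should go next. A minimal sketch of that idea (purely illustrative; the assessment dictionary is a made-up input, not a Prosci artifact):

```python
# ADKAR stages in order. Per the model, the first unmet stage is the
# current barrier point where change management effort should focus.
ADKAR_STAGES = ["awareness", "desire", "knowledge", "ability", "reinforcement"]

def next_barrier(assessment):
    """Return the first ADKAR stage not yet satisfied, or None if all are met."""
    for stage in ADKAR_STAGES:
        if not assessment.get(stage, False):
            return stage
    return None

# e.g. consumers are aware and willing but lack formal training:
print(next_barrier({"awareness": True, "desire": True}))  # knowledge
```

That ordering is why the earlier obstacle slide walked through awareness, desire, knowledge, ability, and reinforcement in exactly that sequence.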
So you need to find a way to make that connection with people. And the stuff that Carissa talked about with the pizzas, the plain, the specialty, the supreme: I saw a couple of people say in the chat, that's kind of cool; that's kind of a different way to look at things. And you know what? Even the ADKAR model that Greg talked about at the end, that's all based on human behavior, right? The whole awareness, desire, knowledge sequence. Those are personal traits, things that people need to adjust to and adopt, basically, to further the way that they're able to govern and store data in the organization. So I always ask at the beginning, when we're talking about this: is it really data governance, or is it human governance, or people governance? Because we have to ask ourselves the question: what exactly are we governing? Are we governing the data, or are we governing people's behavior associated with the data? A friend of mine told me that we should call this people governance, because that's really what it is. It's: how do you get people to take those technical concepts, as I think Greg talked about, and the standard of care being applied to data, and make it personal, make it reasonable to them, and help them understand that they play a big role in making certain that the data is the way it needs to be. The data is going to do what we tell it to do. Let's find a way to relate to people that makes it easier for them to understand. Okay, let's get to the next slide. All right, I talk a lot about non-invasive data governance and the idea that there's already governance taking place in your organization. Think of it this way: we're not starting from zero. We're starting from a point where there are people that already have responsibility for defining data in the organization, and people with responsibility for producing and using that data.
In fact, I say that everybody in the organization is potentially a data steward, and we need to make certain our program can activate those people. We really need to get over that fact and realize that it's not just a handful of people. I know that Greg mentioned the use of the word assigned versus the word recognized. If we assign somebody something, if you get assigned to be a data steward, it immediately feels over and above what you're presently doing. But if we recognize that you use data a certain way, and we provide you with assistance, we provide you with a catalog to get to the information about the data and that marketplace that Greg talked about, then it's not going to feel as though it's something that's brand new to you. So we want to keep in mind the idea that everybody in the organization is a data steward. And if we can get them to recognize that they define, produce, and use data, and that it needs to be done more formally, that's going to take our program a great step forward as well. I always start with the premise that you're already governing data; you're just doing it very informally. So formalize what you're presently doing. And if you can even use things like your catalog to get stewards actively engaged in the definition of the data and the standards for the data, that's also going to help move your program forward quite a bit. Formalizing that accountability and getting those people active are really, as I put it, two sides of the same coin. We can leverage things like our catalog to help us recognize the appropriate people and get them engaged in using the catalog the way that Greg spoke about. They've added a certain number of users, but they hope at some point that everybody will use the marketplace they're providing through Alation to access the data. All right, let's go to the next slide. All right, and I'm known to say this quite a bit.
So I have those core talking points: everybody is a data steward, and get over it. You know, I'd really love the idea of being able to market, with Alation, some pixie dust that we can sprinkle over the organization to have the data governed and have the stewards recognized and actively engaged. The fact is, it's not going to happen, so I guess we can stop trying to develop that. The data is not going to govern itself. It requires people, people that are actively defining and producing and using data already as part of their jobs. If we can formalize how they do that, and we can leverage tools in our environment to help us do that, we will start to activate people to govern data in our organization. Oftentimes, that central face of your data becomes that marketplace, becomes that catalog. And if you can get people to actively go in there to find the information they need and to provide additional content to the tool, that's where it really gets activated within the organization. But the fact is, this won't happen on its own. This won't happen without some resolute effort to do these types of things. So I always say that there are already people in your organization defining, producing, and using data. If we can help them do it better, we're going to govern our data better. We're going to have better stewardship of data in our organization. The data won't govern itself. Let's go to the next slide. At the same time, the metadata will not govern itself. And you know what? I don't say that everybody in the organization is a metadata steward, but there are people in the organization that have responsibility for defining what metadata you need to provide the appropriate context for the data, for producing that metadata, and for using that metadata. There are people in the organization that are stewards of the metadata as well.
So we need to recognize them and let them know, again, that a positive connotation goes along with it. Where are we going to recognize them? Where are we going to record that information? With the data, with the information about the data, in the catalog; that would be the appropriate place. And we're going to formalize accountability for the metadata the same way that we formalize accountability for the data. That's going to activate people. I always suggest that if we formalize accountability for something, which means that it's part of how somebody is reviewed or part of their plan, they seem to take it much more seriously. So we can take things people are already doing, for example, using sensitive data: if we help them understand that there are rules associated with how they can use that sensitive data, it's going to help them more formally protect it. So again, the data catalog is going to be the place where we activate people and formalize accountability for governing the data, but also for governing the metadata. And we're going to recognize those people in the organization that are stewards of the metadata, as well as the people that are stewards of the data. Next slide, please. When we started talking about this subject of data management meets human management, the first thing that popped into my mind was: how do they overlap? Where do they have things that are really their own, and where is the overlap? Immediately, I was able to sketch out that when it comes to data management, we're really focusing on best practice. We're focusing on the infrastructure around the definition of the data, the production of the data, and the usage of the data. I've challenged organizations to come up with another action in there besides definition, production, and usage, and pretty much everything falls under one of those three.
So if we can implement best practice around how we're defining data, whether that's in a data modeling tool or however you're doing it in your organization, or implement best practice around data production or usage, that's going to be more the responsibility of the folks in data management. Their focus is on delivering projects on time, delivering them within budget, and making certain that they follow their project schedule. Where it meets human management is the part where we get people interested in really playing the role of the stewards, making this interesting to them: formalizing their accountability for how they're defining, producing, and using data, and making certain that there's governance around the data and around the metadata in the organization. Where they tend to overlap is back at the way I define data governance to begin with: the execution and enforcement of authority over the management of data. You've got to get the people to use the tools appropriately in order to govern the data, to execute and enforce authority. You're going to do it through stewardship, which is going to be engaging and activating people through the catalog. And the most important thing about making that connection with people in your organization is making data governance fun, or at least not making it a burden, something that feels foreign to them and over and above what they're presently doing. So I was really happy to hear Greg and Carissa talk about the approach they used to humanize all those technical concepts that Greg started speaking about, and how they went about doing that. I thought that was a really good example of how an organization found an issue and put a solution to the problem. Next slide, please.
And so what I want to wrap up with is that typically when you're implementing data governance, you're going to do it through stewardship. You're going to do it through the people of your organization. You need to find a way to connect with those people, number one, but you also need to find a way to activate them, even giving them that central tool as the starting point for data within the organization. I believe Greg talked about people starting to look for data with Alation. Make certain that you can activate people that way. So implementing an effective data governance program really requires that you get people involved, that you find the appropriate messaging for them, and that you provide them with the tools they need to leverage the data as a truly valued asset. And one of the things I'm finding with a lot of organizations is that their data catalogs and the things they're building are really the tools they're using to make their data governance less passive and more active: getting people involved day to day in using the tool, filling in the gaps of knowledge and understanding in the tool, and making it an available resource to the organization. So again, thanks for having me on this webinar. I think I'm going to turn it over to Matesh now. Well, thank you, Bob. And thank you, Greg and Carissa, for the great overview. I'll wrap up here with a few concluding thoughts before going into Q&A. If you're listening in, you're probably thinking to yourself that some of the terms and concepts we're using here, and that we've described, are not typically associated with data governance.
Concepts like words matter, right, the namesake of this webinar, where you've heard from Greg that just shifting from calling people data stewards to actually bringing them to stewardship meetings has made them more successful in data management and data governance. And the idea of recognizing folks instead of assigning them more tasks. And of course the use of analogies, which you saw with pizza, and how Carissa described the crust and the sauce and the cheese as a useful way for folks to remember what they need to be doing with data and the data catalog in particular. So what's going on here? Well, one thing these have in common, as Bob so astutely pointed out, is that people are at the core of data governance. More specifically, it is people's behaviors that are really being governed as part of the data governance program. So it makes perfect sense that these motivational techniques that you see surrounding the people circle here have given Greg and his team great success when it comes to data management and data governance. Because ultimately, this is human management; you can call it a challenge, but I prefer to think of it as a human management opportunity. With that in mind, you might be thinking to yourself, well, does the tool actually matter at all? If this is a human management opportunity, does the catalog or the governance tool itself, does the software, matter? And I would say, of course, that yes, it does, and not just if you're going from a three to a four in the capability model, or from a four to a five. The tool certainly does matter, and that's because not all software and not all tools take that people-first approach. Alation does. Alation takes that people-first approach to data governance. So I'll conclude here with just a few examples of how we do that in the product itself. The first is with policies and quality flags.
These are terms that I think most folks who have been working in data governance are familiar with, but just to give some examples: by policies, we mean things like using and handling PII data to make sure that you're compliant with GDPR and CCPA. By quality, we mean fitness for use of the data, whether the data you're using is correct or appropriate to use. We actually surface that in the user's workflow, and you can see some screenshots of that on the left-hand side, with lineage at the top left, and of course with Compose, a unique capability in Alation, an intelligent SQL editor where policies and quality flags are surfaced as folks are using the data, as folks are querying the data. All of this is surfaced in the user's workflow. Contrast that with the typical approach you might see, which is a person having to go to emails, documents stored on a shared drive, word of mouth, or even different tools, and you can see how that sort of thing falls apart. There's so much friction in the process that it's hard to ultimately get to trusted data and to stay compliant. So that's piece one: removing the friction and the barriers to actually remaining compliant and following these policies. Two is the business glossary, the terms that you're storing here and associating with the underlying physical data objects. With Alation, we automatically suggest business glossary terms based on what we're seeing and how you've been using the data in Alation itself, and we automatically associate those business terms with the underlying physical data objects with the click of a button. So automation is a big piece of this; it's all really automated. Of course, you as humans have the opportunity to reject any of those suggestions you'd like, but it is automated, and that helps scale your governance efforts.
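To make the term-suggestion idea concrete, here is a minimal illustrative sketch. Alation's actual suggestion logic is proprietary and draws on usage signals; this toy version only shows the general shape of the problem, matching glossary terms to physical column names by normalized string similarity. All names and the threshold are invented for illustration.

```python
# Illustrative sketch only, not Alation's implementation: suggest business
# glossary terms for a physical column by fuzzy-matching normalized names.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase a name and turn underscores into spaces for comparison."""
    return name.lower().replace("_", " ").strip()

def suggest_terms(column: str, glossary: list[str], threshold: float = 0.6):
    """Return (term, score) pairs whose similarity clears the threshold, best first."""
    col = normalize(column)
    scored = [(term, SequenceMatcher(None, col, normalize(term)).ratio())
              for term in glossary]
    return sorted([(t, round(s, 2)) for t, s in scored if s >= threshold],
                  key=lambda ts: ts[1], reverse=True)

glossary = ["Customer ID", "Account Balance", "Transaction Date"]
print(suggest_terms("cust_id", glossary))
print(suggest_terms("acct_balance", glossary))
```

A real system would go well beyond name similarity, which is precisely why the automation Matesh describes, driven by observed usage, scales where manual tagging cannot.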
Contrast that with another approach: you've got potentially thousands or tens of thousands of terms in your organization and millions or tens of millions of data objects, and doing that manually is an impossible undertaking no matter how many stewards or data mavens you have. And the third here is the stewardship dashboard. You can see the orange progress bars here in the screenshot. This is really useful in two ways. First, it helps prioritize curation efforts in the data catalog, showing the stewards and others how much progress they've been making in stewarding the data. The other piece is that it gives folks a feedback mechanism to show them that they're actually making progress. That is really important when it comes to motivational techniques and human management, because the alternative is a steward sitting in a hamster wheel with no idea how much progress they're making towards actually curating the data. So that third piece is really about the feedback mechanism: showing people that they are in fact making progress, and bringing them back to the catalog to continue making that progress and describing data so that others can make use of it. So with that, look, I think you've heard a great deal here from Greg and Carissa on how they're using Alation for data governance, in their case data management, at Fifth Third Bank. They join a chorus of other customers of ours that are successfully using Alation to address data governance, and I use the term successfully very conscientiously here, because we've heard time and again how folks have been using alternative tools, different tools, and certainly different techniques to deal with data governance and have been failing in the past. But with Alation and this people-first approach, folks are seeing a great amount of success.
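The progress bars on the stewardship dashboard that Matesh describes can be pictured as a simple completeness calculation. This is a hypothetical sketch, not Alation's implementation: the field names, object shapes, and sample steward usernames are all invented to illustrate the feedback-mechanism idea.

```python
# Hypothetical sketch (not Alation's code): compute a "percent curated"
# figure from catalog objects and the fields stewards are expected to fill.
REQUIRED_FIELDS = ("description", "steward", "business_term")

def curation_progress(objects: list[dict]) -> float:
    """Fraction of required fields populated across all catalog objects."""
    total = len(objects) * len(REQUIRED_FIELDS)
    filled = sum(1 for obj in objects for f in REQUIRED_FIELDS if obj.get(f))
    return filled / total if total else 0.0

catalog = [
    {"name": "dim_customer", "description": "Customer master", "steward": "ctull",
     "business_term": "Customer"},
    {"name": "stg_txn_raw", "description": None, "steward": "gswiger",
     "business_term": None},
]
print(f"{curation_progress(catalog):.0%} curated")
```

Surfacing a number like this is what turns curation from an invisible hamster wheel into visible, motivating progress.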
Lastly, I showed a few screenshots here and some examples of how we address data governance within Alation. If you'd like to learn more, we'd encourage you to attend our weekly demo; the next one on data governance in particular is happening just two weeks from now. Just sign up at alation.com/dg-demo, a useful URL. If you'd like to learn more in a white paper format, we have the link at the bottom. So with that, I will turn it over to Shannon for Q&A. Matesh, thank you so much. And thank you, everybody, for this fantastic presentation. Lots of great questions coming in. If you have questions for any of our speakers, you may submit them in the Q&A section, which you can find via the icon in the bottom middle of your screen. And if you see a question that's already been asked that you were going to ask, you can just click that little thumbs up; I see lots of you using that already. And to answer the most commonly asked question, just a reminder: I will send a follow-up email to all registrants by end of day Monday with links to the slides, links to the recording, and anything else requested throughout. So diving in here, Greg, back to your presentation. You mentioned you're using Snowflake, IBM, and AWS at Fifth Third. What are your data management challenges with data sitting in different environments across a hybrid cloud like that, and how do you address those challenges? Yeah, sure. So ultimately, when you're sitting in multiple environments, you're going to have to take a slightly customized approach to each one. The other challenge we've seen is that different integrations with tools sometimes work and sometimes don't, primarily when you talk about IBM. But said a little differently, I think what we're trying to do is develop a protocol or a practice that we can apply to anything; even if you're doing technical lineage in Excel, it's a place to start.
But ultimately, I think using Alation as the starting point, no matter what the back-end ecosystem is, allows you to start to get some consistency across environments and apply consistent data management capabilities. So that's what we're primarily using Alation for: the starting point, regardless of source system or connection. Does that make sense? Yep, sorry, I was just sitting here on mute. My apologies. So, lots of people here wanting to know: have you written your own core banking applications or used packages like Hogan? I think it's kind of a blend. When you look back at our ecosystem slide, there are definitely some custom banking applications that were homegrown and developed throughout, but in regards to packages, I can't speak to that. No worries. So how does Alation present business process and consumption lineage? Are there manual processes that require manual stitching? Yeah, so I'm going to try to answer this to the best of my ability, and then I might defer to Carissa, the Alation expert in the group. Essentially, what we try to do here to account for consumption lineage is develop custom fields and then put that into Alation. There are other things you can attach to Alation, things like Manta, that make it a little easier. I also see a question in regards to IGC down the list; I would say that we primarily use IGC for technical lineage. Obviously there are things you can do to try to add lineage to Alation, but ultimately we try to do as much of that as we can within Alation. Good answer. I would like to elaborate just a little bit. So for us, as you saw on one of the diagrams, we require business process and consumption lineage at that enterprise data level. The process we have for enterprise data includes a scoping and prioritization effort before we actually designate a particular dataset as enterprise data.
A lot of the time in our scoping and prioritization, we will align critical data assets to a business process within the bank, and then we will take those critical data assets and deconstruct them down to the physical data element level. So during that scoping and prioritization process is where we can actually identify those processes. And then, as Greg mentioned, we do have custom fields, so that when we are curating that specific dataset, we already have that in our back pocket to place into the system. And the only thing I was going to add to that was: boy, doesn't that sound like fun? I mean, the stuff that you're talking about. You have to make it interesting for people, because all of those things you were just talking about are really important, and they add value to the way that people understand the data. And if you can't relate to them about it, well, there's a lot of work there. It's great that you have a plan and that you're following a plan that's working for you. I think that's the whole idea of formalizing accountability. The way that you described it just made it sound like so much fun. Yeah, Bob, one of the things we've done in the past is something we called Alation bounty hunters, where we literally had data engineers go out and find critical data assets that didn't have consumption lineage or technical lineage and didn't have data quality rules applied, and they essentially went through and curated that content. And then we made it a competition; we essentially gamified it to make sure that we could have as many people as we could in Alation curating the data for consumers. So we try at just about every turn to make it fun and engaging. And that's important. I mean, gamification: people have started to use that term in terms of data governance.
To make it a challenge, to have an award, to make it personal to people, a friendly challenge, you know, it never hurts. Although it's harder to do, I think, when you're all remote versus having everybody on site. But that could be another conversation for another day. Matesh, anything you wanted to add on how Alation handles business process and consumption lineage? No, I think that was well responded to here. The only thing I would add is that, as Greg and Carissa alluded to, we have a rich set of APIs that allow you to integrate with partners like Manta. But beyond that, just out of the box, we are actually able to, quote unquote, investigate and process query logs to understand how data might be transformed within business intelligence tools and specific data sources. So there are a lot of rich capabilities when it comes to lineage with Alation. I love it. Lots of great questions coming in here. So, can you give some insight into the inception and size of the team, as well as the roles and responsibilities, that have ushered your data governance practice forward? Yeah, absolutely. So as I said before, there's essentially a centralized data management program, which is, I would say, ten-ish people, and they're primarily focused on developing the policies and procedures of data management in accordance with DCAM and BCBS 239. With that being said, what we've tried to do over the last year or two is start to focus more on rolling those standards and practices out within the enterprise data office, with technical data engineers, so we focus much more on executing those policies and procedures. And so we have what we refer to as a data management maven who sits in each one of the agile squads. We call them squads here; they're basically just agile teams, scrum teams.
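The query-log approach to lineage that Matesh mentioned a moment ago can be sketched, very loosely, as scanning SQL statements for target and source tables to infer "flows from" edges. This is an invented toy for illustration only; real lineage parsers, including whatever Alation uses, handle vastly more SQL than a single regex can.

```python
# Toy illustration of query-log lineage: infer (source, target) edges
# from one INSERT ... SELECT statement. Real parsers handle far more SQL.
import re

def lineage_edges(sql: str) -> set[tuple[str, str]]:
    """Return (source, target) table pairs inferred from a single statement."""
    target = re.search(r"insert\s+into\s+([\w.]+)", sql, re.I)
    sources = re.findall(r"(?:from|join)\s+([\w.]+)", sql, re.I)
    if not target:
        return set()
    return {(src, target.group(1)) for src in sources}

sql = """INSERT INTO mart.daily_sales
         SELECT * FROM stg.orders o JOIN stg.customers c ON o.cust_id = c.id"""
print(sorted(lineage_edges(sql)))
```

Run over thousands of logged queries, edges like these accumulate into the lineage graphs shown in catalog screenshots, without anyone stitching them by hand.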
And so there's effectively a data management maven who sits on each one of them to help federate those best practices and make sure that we're abiding by data management best practices. That's where we are today. I think long term, what we want to do is pick what we're referring to as citizen data engineers, or power users, or business data users, throughout the organization in every one of the key areas, to continue to federate data management as a practice. And we're kind of using IT as the test run before we go full scale, if that makes sense. It does indeed. Thank you very much. So, Matesh, I think this question is for you: there's a lot of experience here with for-profit organizations, but I don't see anything for nonprofits, for example around data for the impact of our programs. Have you ever worked with any nonprofit on their data governance? We have worked with nonprofit organizations as well. I don't have the list off hand, but I'm happy to follow up with you offline on that. Yes, we have worked with universities as well as nonprofit organizations, and they're making very effective use of the Alation data catalog. I love it. So, I've heard Alation described as crowdsourced data governance. Would you agree? Can you hear me? Yes. I think that was meant for Matesh, but this is Greg; I'll take this. Effectively, you can have people upvote or downvote whether or not a specific search result was helpful, and there's almost a search engine optimization aspect to it. So it ultimately does work as a crowdsourced data marketplace. I'll leave the rest for Matesh since he has it. Yeah, sorry, I thought that was for you; I think it's better left to you at Fifth Third. But yes, I'll just double down on this. I think there was another question on differentiators with Alation and who our key competitors are.
Without going into names here, I'll mention that one of our differentiators is around collaboration, where with Alation you actually get that collaboration as a result of a great user experience that is useful not only to really technical data scientists but all the way to non-technical business users. And so we have this notion at Alation that the more the catalog is used, the better it gets, and the better it gets, the more it's used. That is a key part of our value proposition, because the more people are using the catalog, the more people are curating it, and it simply becomes better for everybody. So the upvote and downvote mechanisms are one example of that, but there are plenty of others where we're really relying on and investigating these human signals, how people are using the data, to understand its trustworthiness and whether it can and should be used. So for a lot of you data governance fanatics out there, I do think these are crowdsourced capabilities, but there are some pretty great controls built into the system, where you can set certain permissions for certain fields as well. So there is an option for a little more oversight, and that's beneficial for us as well. I love it. So, a question here for Greg and Carissa: does Fifth Third do automated lineage, and if so, with what tool? Yeah, we do automated technical lineage, and I think Greg mentioned earlier that we use IGC for that. And can you describe further Alation adoption within the organization? How much penetration do you have, what are some bottlenecks, and how have you addressed them? Yeah, so I would say that we just got enterprise licensing this year, and up to this point, we've had to limit the number of users we could scale with. So I think we're probably around the 500 mark, if I had to guess. As I said before, there are 20,000 to 25,000 people in the bank, and virtually all of them use data.
And so I think the long-term thought process is that if someone needs to search for something, whether it's a Tableau dashboard or a specific connection via an ETL tool, ultimately all of that will go through Alation at some point. In terms of obstacles, I would say making sure that you're curating practical content into Alation. The search functionality is great, but if the data you're loading in there is garbage, it's going to be garbage in, garbage out. So it's been our prerogative to make sure that we're loading quality data, that people have the ability to search for it, and that when they search for it, it's actually applicable to what they're trying to do. I think that's been the biggest obstacle up to this point. As for bottlenecks, I would say knowledge. Basically everything we were referencing before with change management, all of those are obstacles or bottlenecks that we need to get over. Perfect. Does Alation have business process approvals for providing data access to users? I guess I will take that. From a workflow perspective, we offer a few things. One is an agile approval mechanism where, if you're making edits to a catalog page, for example, they can be sent to the appropriate steward for approval, so there is some workflow capability there. We also have tie-ins via APIs into business process management tools like ServiceNow, JIRA, and others. And within Alation, the story there is actually going to get a whole lot better in just a few months from a workflow perspective. And Matesh, does Alation integrate with data modeling tools such as erwin or ER/Studio to pull business metadata, along with taxonomy and ontology, from enterprise data models?
Yeah, I think the underlying answer to this, as with all such questions, is that we have a very powerful set of APIs that can integrate with a full suite of tools across the data ecosystem and landscape. So that would be my response there. What specifically the integration is with erwin and ER/Studio, we can get back to you on, but I would bet that it's certainly possible. I love it. And I think we have time for one more question here. Bob, I certainly know that you know the answer to this, or at least have an opinion on it: are data governance and data management different things? Well, I guess that depends on who you ask. If you ask the folks at DAMA International, in their framework for data management, data governance is called out as a separate item, and it's right in the middle of the DAMA wheel, so to speak. Let's put it this way: data governance is really focused on people and their behavior. If you manage all the other aspects of data management, you know, metadata management, data movement, and you don't manage the people aspect, you're going to be making a big mistake. So the real difference between data governance and data management is that data governance focuses on the people: activating the people, but also helping them understand their role with the data. The rest of the data management knowledge areas, as DAMA calls them, don't address that, so you need data governance for it. They're different. I love it. Anybody else want to add to that, Matesh or Greg? I think Bob nailed it. What we noticed internally at Fifth Third is, as I said during the presentation, that data governance had kind of an antiquated connotation. And I should say that I'm not entirely sure that's aligned with the industry, because I think Bob and I have had some good debates about this in the past.
And I think ultimately it comes down to how people perceive it, right? So what data management means here at Fifth Third Bank is: how do you properly handle and leverage the data using best practices? Data governance was kind of seen as antiquated control, and I think that's why we decided to do the rebranding. But yeah, that's what I would add. Couldn't agree more, if I could chime in. Go ahead. Yeah, couldn't agree more. Ultimately, do what's best for your organization. As Greg has alluded to, they have shifted from using the term data governance to data management, and that's worked for them. If that works for other organizations, I say do it, even if it's potentially a departure from industry norms and standards; do what's right for your organization. I back that 100%. I mean, do what is going to be right for your organization. A lot of organizations do data governance and don't like the term data governance, so they use something else to describe what they're doing. To me, it still falls under the discipline of governance, and that can be a piece of data management. I agree with you guys, but just do what is right for your organization. Well, thank you all for this great presentation, and thanks to our attendees for being so engaged and for all the great questions, but I'm afraid that is all the time we have slotted for this webinar. Again, just a reminder: I will send a follow-up email to all registrants by end of day Monday with links to the slides and links to the recording, along with additional contact information so you can learn more about Alation. And thanks to Alation for sponsoring and helping to make these webinars happen. Really appreciate it, as always. I hope everybody has a great day. Thanks, everybody. Thanks, guys. Thank you. Thanks, everyone.