My name is Yash Bhagde, and today I will primarily be speaking on catalogs, governance, and data democratization. In the course of this presentation I will also speak about how we have enjoyed some success implementing Alation as our primary data catalog tool at Oportun.

Moving on to the next slide. Before getting into the specifics of the presentation, here is the agenda at a high level. I will begin by introducing my organization and provide a brief introduction of myself. I'll talk about Oportun's data governance journey, and about Alation's bottom-up approach to data cataloging and governance. I'll also cover an example of how we were able to catalog, in Alation, a key piece of our financial regulatory reporting: the reporting that we perform for the US Securities and Exchange Commission. I'll talk about Alation users at Oportun and the benefits of using Alation at Oportun. And last but not least, I'll open it up for questions and answers. If we could reserve the last five minutes for Q&A, that would work perfectly.

In terms of my organization: Oportun is a mission-driven lending company based in San Carlos, California. We offer affordable and inclusive lending products to customers. Approximately 100 million people in the United States either have no credit score or have been mis-scored because of limited credit history. Our goal at Oportun is to provide affordable and inclusive lending products to these customers, through our AI-driven models and the data touch points we have collected along the way, so they can fund emergency expenses or large purchases. Oportun has been a CDFI, a community development financial institution certified by the US Department of the Treasury, since 2009. Lastly, I'd like to share some numbers with you to give you a sense of the scale of our business.
We have originated more than 4.5 million loans, with over 11 billion dollars in total loan amount disbursed and over two billion dollars in fees and interest saved. And approximately 945,000 customers who had no FICO score have now started establishing a credit history.

In terms of my introduction: I'm a data governance, data quality, and data analytics professional with 11 years of professional experience, primarily in financial services, of which the last eight years have been in data governance roles. Prior to my current role as data governance manager at Oportun, I served in data governance roles at other organizations, including American Family Insurance, SunTrust Bank, and BMO Harris Bank. Since I began my tenure at Oportun in late 2019, I've been primarily responsible for the successful implementation of Alation as our data catalog tool, and over the last few years I've been able to establish myself, at least within my organization, as the resident Alation expert.

In terms of Oportun's data governance journey: as we continue to expand our footprint to more states in the US, we see exponential growth for us in the coming years. But with that comes the realization that we can't continue to depend on the tribal knowledge of a few village elders, and that we need to capture all of this institutional knowledge in a centrally available, enterprise-wide data repository. With this goal in mind, we evaluated several tools in late 2019, and based on our evaluation we felt Alation was best suited for our data cataloging needs. So we introduced Alation in a pre-production environment in late 2019, and in a production environment in early 2020. Since then, we have seen rapid growth in the number of catalog users, which is perfect, because our goal is to create a robust and vibrant community of catalog users who enrich the data catalog in a bottom-up, crowd-sourced way.
Alation has helped us improve data literacy with the help of business terms, metric terms, and a business process glossary, and to expose metadata for our data sources. We've also been able to capture institutional knowledge, and this knowledge is constantly maturing with the help of a bottom-up, crowd-sourced approach to metadata management. I'll cover some examples of that in the next few slides. And last but not least, Alation has helped us surface top users who can be identified for stewardship and subject matter expertise roles. And with the help of built-in data curation dashboards and Alation Analytics visualizations, we are able to see in real time our curation progress and how the platform is being adopted across the enterprise.

In terms of our mission statement and how Alation supports it: our mission is to inspire confidence in our stakeholders that data is an enterprise asset, and that this enterprise asset is available, governed, cataloged, trustworthy, and protected. If we can inspire that confidence, our stakeholders can continue to focus on their business needs, while having the confidence that, one, a framework exists to govern this enterprise asset, and two, a tool such as Alation is available to help us make sure this metadata is available, cataloged, and governed.

On to the next topic on the agenda: Alation's bottom-up approach to data cataloging and governance. Alation has helped our data-savvy decision makers, analysts, and data consumers find and share data for more accurate insights. You can onboard a variety of data sources in Alation, including AWS, Snowflake, Redshift, and MySQL, among others. And once these data sources have been onboarded, you can set automated ingestion jobs to run on a set cadence.
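To illustrate the cadence idea, here is a minimal Python sketch, not Alation code: the job names and intervals below are hypothetical placeholders for the kind of per-source schedule described above.

```python
from datetime import datetime, timedelta

# Hypothetical cadences for the three ingestion job types mentioned in the
# talk; the real intervals are configured inside Alation, not shown here.
JOB_CADENCES = {
    "metadata_extraction": timedelta(days=1),
    "data_profiling": timedelta(days=7),
    "query_log_ingestion": timedelta(days=1),
}

def next_run(job: str, last_run: datetime) -> datetime:
    """Return when a job is next due, given its last completed run."""
    return last_run + JOB_CADENCES[job]

last = datetime(2020, 11, 1, 2, 0)
print(next_run("data_profiling", last))  # 2020-11-08 02:00:00
```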
As an example, on each of the data sources we have exposed in Alation, we have an automated metadata extraction job, a data profiling job, and a query log ingestion job that run on a set cadence.

Next, I'd like to draw your attention to the screenshot on the right-hand side of the page. That's an example of our Alation home page. You'll see that there's a very prominent search box in the middle of the page, and its placement is intentional: Alation wants to encourage catalog users to begin their data exploration journey with search. You can type in natural language, and it does partial keyword search as well. In this particular example, you'll see that I'm searching for the customer product dimension. As I enter my search criteria, it starts surfacing results by scanning all the data objects: schemas, tables, columns, published queries, and catalog articles. For the sake of this demo, I'm interested in the customer product dimension table, so I click on it, and it navigates me to the object overview page for the customer product dimension.

Now, a couple of things I'd like to point out on this page that support Alation's bottom-up approach to data cataloging. One is the concept of trust flags. The color scheme used for trust flags is similar to traffic lights: green means endorsed, or go; orange means warning, or caution; and red means stop, meaning a deprecation notice has been placed on that data object. More than one catalog user can endorse or deprecate an object. The higher the endorsement count, the more confidence you should have that this is the legitimate data object; conversely, a higher count of warnings or deprecations should, at bare minimum, give you some caution. Another feature I'd like to point out is the concept of popularity.
You might recall from an earlier slide that you can schedule automated ingestion jobs, one of which is a query log ingestion job. Based on all the queries that have been written against this data object, Alation calculates the popularity of the columns. In this particular example we only have 10 columns, but imagine a table with 100 or 200 columns. Now the concept of sorting by popularity becomes really relevant, because if you want to focus your data curation efforts, you should start with the columns with the highest popularity to get the most metaphorical bang for your buck.

Lastly, I'd like to talk about Alation's machine learning algorithm, called Ali. The robotic heads you see in the center of the page are where Alation's machine learning algorithm has tried to guess the title. Green means it has guessed with a high degree of confidence and a catalog user has confirmed it. Orange means Ali has guessed with a medium degree of confidence and a catalog user needs to confirm or discard the guess. And red means it has a low degree of confidence. The takeaway I want you to take from this slide is that there is a human and machine learning collaboration working in tandem to capture and enrich the data catalog.

We talked about this color scheme on the previous slide. On this particular slide, the takeaway is that Alation makes all of this metadata available in one easy-to-understand, interactive, and intuitive interface, right at your fingertips. There are no separate navigations within the tool: if I want to see lineage, I don't have to navigate somewhere else in the tool, and if I want to see the most popular filters used on this data object, I don't have to navigate somewhere else either. All of this metadata is available at your fingertips, as you'll see as I continue to scroll through.
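The query-log-based popularity ranking described a moment ago can be approximated with a short sketch. This is not Alation's actual algorithm, just a naive illustration that counts how often known column names appear in query text; the queries and column names are made up.

```python
from collections import Counter
import re

def column_popularity(queries, columns):
    """Rank known column names by how many queries mention them.

    A deliberately naive stand-in for a query-log-based popularity
    score; the real scoring logic is internal to Alation.
    """
    counts = Counter()
    for query in queries:
        # Tokenize the query text so column names match as whole words.
        tokens = set(re.findall(r"\w+", query.lower()))
        for col in columns:
            if col.lower() in tokens:
                counts[col] += 1
    return counts.most_common()  # most popular columns first

# Illustrative query log and column list.
queries = [
    "SELECT customer_id, product_code FROM dim_customer_product",
    "SELECT customer_id FROM dim_customer_product WHERE product_code = 'A1'",
    "SELECT start_date FROM dim_customer_product",
]
cols = ["customer_id", "product_code", "start_date", "end_date"]
print(column_popularity(queries, cols))
# [('customer_id', 2), ('product_code', 2), ('start_date', 1)]
```

Sorting curation work by a ranking like this is what lets you start with the highest-traffic columns on a wide table.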
So, in addition to showing me the columns and a sample of a few records, I can see the most popular filters on this data object, sorted by frequency, and the most popular joins on this table, sorted by frequency. I can see all the queries my peers have published for this data object, along with metadata such as who authored each query, when it was last executed, and the number of times it was executed. And last but not least, Alation helps us capture both upstream and downstream lineage, and makes this lineage available in both graphical and tabular form. Where we find a lot of value in this is that our analysts and developers can now see in real time what the impact of their changes will be on downstream data objects. So again, the takeaway from this slide is that all of this metadata is available in one interactive and intuitive interface, right at the data object level.

To continue the theme of Alation's ground-up, crowd-sourced approach to data cataloging: for any data object, you can download the data dictionary. Going back to my example, this table has 10 columns, and let's say you're trying to capture titles, descriptions, and custom fields. Now imagine a table with 100 or 200 columns, and you can begin to see why this becomes a laborious process. One way to alleviate this in Alation is to download the dictionary for that data object and then work with your stewards and subject matter experts to capture titles, descriptions, and custom fields. Once that CSV file has been socialized and is ready, you can use the import data dictionary wizard, drag and drop your CSV file, and in a quick, expedient manner you have now captured titles, descriptions, and custom fields for that data object.
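The export-edit-import round trip above can be sketched in Python. The CSV layout here is a hypothetical minimal one; a real Alation dictionary export carries more fields, but the workflow is the same.

```python
import csv
import io

# Hypothetical minimal dictionary layout; a real export has more fields.
FIELDS = ["column_name", "title", "description"]

def export_dictionary(columns):
    """Write an empty data-dictionary template for stewards to fill in."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for col in columns:
        writer.writerow({"column_name": col, "title": "", "description": ""})
    return buf.getvalue()

def import_dictionary(csv_text):
    """Read the completed dictionary back as {column: {title, description}}."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["column_name"]: {"title": row["title"],
                                 "description": row["description"]}
            for row in reader}

# Stewards would edit the CSV in a spreadsheet; here we patch it in place.
template = export_dictionary(["cust_id", "prod_cd"])
filled = template.replace("cust_id,,", "cust_id,Customer ID,Surrogate key")
print(import_dictionary(filled)["cust_id"]["title"])  # Customer ID
```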
Now that we've covered some of the features in Alation and how we leverage them at Oportun, I want to give you an example of cataloging, in Alation, a key piece of our financial regulatory reporting with real-world consequences: the reporting that we submit to the US Securities and Exchange Commission. Oportun is a publicly traded company, and as with any publicly traded company, we have certain obligations to furnish business and financial data to the US Securities and Exchange Commission. In addition, we also have an obligation to furnish this data to our shareholders through earnings decks, presentations, and earnings calls. With that audience in mind, it is absolutely critical, one, to get these metrics accurate, and two, to catalog the certified descriptions and certified logic for these metrics in an enterprise-wide metadata repository.

What you see on the screen is an example of the landing page we created for our SEC metrics. A couple of things I'd like to highlight on the page. At the top you see a box with an i and an x in it; that tells you this page has limited visibility. The SEC team had requested that we limit visibility to this catalog article, and you have the ability to do that in the catalog. Also, briefly touching on the concept of agile approval: you can set reviewers for catalog articles, which makes sure there are no unauthorized edits to a catalog article. Additionally, when you see that the director of the SEC reporting team has approved this catalog article, it should immediately inspire some confidence in you that these metrics have been vetted, reviewed, and approved, and that this is the certified logic to calculate a particular metric. A couple of other things I want to quickly point out on this page.
The body: Alation catalog articles allow you a lot of flexibility in terms of what you put in the body of the article. You can be as descriptive as you want; you can capture audience, frequency, and purpose. Also, quickly touching on the concept of templates: for each catalog article you can set an underlying custom template, and for each custom template you can define which custom fields should show up. Here you'll see I've selected the business process template, and as a result, among other custom fields, I'm capturing cataloging status and process type. Glossaries: we have more than one glossary in our catalog, and this is just saying that any catalog article with the business process template will show up in our business process glossary. Tags: tags help you club like items together. One of the tags that we use is SEC reporting, and we can tag data objects, schemas, tables, columns, published queries, and catalog articles with this tag. Tags are searchable, so on the home page, when I search for this particular tag and navigate to it, I can see all the data objects that have been tagged with SEC reporting.

To continue this example, I want to show you one of the metrics we report to the SEC, the metric on line number eight. On the individual child article for that metric, which is embedded on the parent landing page, we have captured a certified description for this metric. You'll also see on the screen that yellow banner, which tells you that any edit to this catalog article needs the reviewer's approval; this is to prevent unauthorized edits. We touched on reviewers, glossaries, and tags on the previous slide. Now let's go to the certified logic to calculate this particular metric.
You can publish your logic to calculate this metric and embed it in your catalog article. When I click on it, it brings me to this page, where we have captured the certified SQL query to calculate this metric. For sensitivity, we had to blur the SQL, but the takeaway on this slide is that we have parameterized this certified published query to accept input parameters at runtime. This does a couple of things for us. One, the query is more dynamic and can run for a variety of date-time intervals. But it also improves the self-service ability of this certified logic: we have coached our SEC team members to self-serve some of their needs by entering input parameters at runtime. You can also share authorship, which limits the authorship of this certified logic to the authorized authors. We've touched on tags already.

One thing I want to bring to your attention is the concept of the run form. Most of our catalog users tend to be very SQL savvy, but we have some catalog users who are not, and for those users we have found a lot of success using the run form. When I click on run form, it brings me to an interactive interface where the user is prompted for only two parameters at runtime. It hides all of the SQL complexity from our less SQL-savvy catalog users. They enter a disbursement greater-than date and a less-than date, hit run, and it produces the output. It also tracks the historical runs, so I can see what was reported one month ago or 60 days ago, and it captures metadata for the number of executions and when the query was last executed. Also, going back to the concept of trust flags: you can endorse this published query, and the higher the endorsement count, the more confidence people have that this is the certified logic for this particular metric.
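The run-form idea, a fixed certified query whose only user inputs are the runtime parameters, can be sketched with SQLite as a stand-in for the warehouse. The table, columns, and figures below are illustrative, not Oportun data.

```python
import sqlite3

# SQLite stands in for the warehouse; the data here is made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE disbursements (amount REAL, disbursed_on TEXT)")
conn.executemany("INSERT INTO disbursements VALUES (?, ?)",
                 [(500.0, "2021-01-05"), (750.0, "2021-02-10"),
                  (300.0, "2021-03-15")])

# The SQL stays fixed and certified; users supply only the two dates,
# just as the run form prompts for them at runtime.
CERTIFIED_SQL = """
    SELECT COUNT(*), SUM(amount)
    FROM disbursements
    WHERE disbursed_on > :start_date AND disbursed_on < :end_date
"""

def run_form(start_date, end_date):
    """Execute the certified query with user-supplied date parameters."""
    return conn.execute(CERTIFIED_SQL,
                        {"start_date": start_date,
                         "end_date": end_date}).fetchone()

print(run_form("2021-01-01", "2021-03-01"))  # (2, 1250.0)
```

Binding the dates as named parameters, rather than splicing them into the SQL string, is also what keeps a user-facing form like this safe from injection.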
In terms of Alation users at Oportun and the benefits we have been able to realize using Alation: one, we now have a governed, self-service model that we can emulate for other state regulatory and compliance requests. As we continue to grow our footprint, we'll have to satisfy more state regulatory requests, and we now have a self-serviceable model to emulate. Two, an overhead reduction of more than 50 percent, which means we can increase our opportunities to partner with the financial reporting team on other metrics and move on to more valuable ones. We can also emulate this self-service model for other areas within the organization: marketing, sales, human resources. We have something tangible to show, with demonstrated success that can be emulated for other business areas. It helps us improve data literacy, and it gives us tangible wins we can showcase to our senior management.

This is the final slide of the presentation. To give you a sense of the adoption of Alation at Oportun: we have 273 registered catalog users, of which 166 were unique active visitors for the month of October. We have seen 75% growth in our unique active visitors in the first 10 months of the year, and a 150% increase in Compose query executions. Compose is a SQL editor tool that's built right into Alation, and what sets it apart from a standalone SQL editor or a standalone data cataloging tool is that in one view you can see not only the metadata that has been ingested for a data object, but you can also query those data objects. That kind of marriage, I believe, in my experience, is unique to Alation. Our top 10 Compose users have authored 3,200 queries. All of these metrics are available on the Alation Analytics dashboard, and what you see on the right-hand side of the screen is an example of two of those built-in visualizations.
Alation is adding to these visualizations in upcoming releases, but the takeaway on this slide is that, with the built-in curation dashboards and the Alation Analytics visualizations, we have seen success in the adoption of Alation at Oportun, and we can showcase these wins to the senior leadership team. So I'll stop the presentation there. I believe I have four and a half minutes, so I can open it up for questions at this point. Any questions? Yes? Sure.

So, some of those tools were evaluated, and those decisions were made before I joined the organization, but I can talk about what really encouraged us to go with Alation. One, the Compose tool that's built right into Alation, and that unique marriage of being able to query data objects and write certified logic in the same view as the metadata catalog; that, we thought, was unique to Alation. Also the crowd-sourced aspect of data cataloging, in addition to Alation's machine learning algorithm and the automated jobs that are constantly ingesting new metadata on a set cadence, and the ability to pull all of that into catalog articles and add more context around the metadata that has been captured. We felt at the time that this was, again, unique to Alation. So those are a couple of the reasons why we selected Alation. Any other questions?

So, I didn't show you some of the other metrics, but in some cases those reports are produced in Birst, which is our BI reporting tool, and we can hyperlink those in Alation as well. The goal is to capture a certified description and some context, but yes, some of those metrics are reported on our BI platform. Other questions I can answer? Sure. Same here, thank you. Yeah, typically we find the most usage of Alation Compose tends to be among analysts and people on the data management and BI teams.
I mean, there are certainly other tools in the marketplace that also do this, but with Alation, our usage tends to be mostly among analysts. Any other questions? So, if I'm understanding this correctly, and I apologize if I'm mistaking your question: yes, you can onboard a variety of data sources, chief among them AWS, Snowflake, Redshift, and MySQL; there are others as well. You can ingest metadata from a variety of different data sources. What we have seen in Alation Compose is that when we're trying to pull this into a query, you can't do a join across data sources, but you can ingest metadata from more than one data source. Any other questions? Thank you, thank you everyone for joining the session.