Hello, everyone. Thank you for spending your happy hour with me today. My name is Lauren Maffeo. I am a service designer at Steampunk and the author of a book called Designing Data Governance from the Ground Up, which was recently published by the Pragmatic Programmers. I'm here to talk to you today about why you can't do the latest trend in data, which is data mesh architecture, without a backbone of governance.

When I give this talk, I open with a cheeky intro slide featuring a visual from Jordan Tigani. It shows how, by X year, there will be Y amount of data, a volume so huge we can't even fathom it. And that, in and of itself, is a good thing, because data is the new oil, we're told. Just having all of this data in your possession is enough to make it valuable and give you a competitive edge, right?

We all know that's actually not the case. More companies are becoming less data-driven as the volume of data they consume and produce increases. That really boils down to the fact that change is hard. Data governance is not a technical problem; it is a cultural one. Everybody wants to change everyone else, but no one really wants to change themselves or consider that they might be the problem.

The good news is that data mesh, when done well, can ease some of these problems. If you work in data, you've probably heard about data mesh. It promotes a data-as-products mindset, which spreads ownership of data across specific domains, each managed independently but within the same architecture.

You're looking here at an example of data mesh at JP Morgan as implemented on AWS. In this graphic, the entries in the catalog are maintained by the same processes that move data into each respective data lake, so the catalog always reflects what is in the lake at that moment. When someone at JP Morgan needs data, the catalog allows colleagues or the respective data stewards to discover and request the data they need more easily.
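To make that catalog-sync pattern concrete, here is a minimal sketch in Python. This is not JP Morgan's actual implementation; the class names, the `ingest` function, and the dataset names are invented for illustration, with a plain in-memory dictionary standing in for a real catalog service (such as AWS Glue).

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CatalogEntry:
    dataset: str
    domain: str
    location: str
    updated_at: datetime

class DataCatalog:
    """In-memory stand-in for a shared catalog service."""

    def __init__(self) -> None:
        self._entries: dict[str, CatalogEntry] = {}

    def upsert(self, entry: CatalogEntry) -> None:
        self._entries[entry.dataset] = entry

    def discover(self, domain: str) -> list[CatalogEntry]:
        """Let colleagues find every dataset registered for a domain."""
        return [e for e in self._entries.values() if e.domain == domain]

def ingest(catalog: DataCatalog, dataset: str, domain: str, location: str) -> None:
    # A real pipeline would copy the data into the domain's lake here.
    # The catalog update happens in the same process that moves the data,
    # so the catalog always reflects what is in the lake at that moment.
    catalog.upsert(CatalogEntry(dataset, domain, location,
                                datetime.now(timezone.utc)))

catalog = DataCatalog()
ingest(catalog, "card_transactions", "payments",
       "s3://payments-lake/card_transactions/")
print([e.dataset for e in catalog.discover("payments")])  # ['card_transactions']
```

The key design choice is that no separate team curates the catalog by hand: registration rides along with ingestion, which is what keeps discovery trustworthy.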
Each lake is curated by a team that understands the data in their respective domain, and that team can help facilitate rapid, authoritative decisions by the right decision makers. So, done well, it's a more open-source way to manage your data in one architectural environment. That's pretty cool, right? But there is a catch: if you don't invest in building a data-driven culture, your data mesh architecture is going to fail.

If we are going to reap the rewards of data mesh, we have to co-create governance with our cross-functional teams. You just heard about how security cannot be done in a silo, how it cannot be done just by the CISO, and it is the same with data. There is too much data in the world for one team or one person to manage it. So if we are going to do the latest and greatest in data architecture, we have to involve everyone, including the experts of those data domains.

And this really starts by doing three key things. The first is to define what your big data domains are, and you can do that by answering four key questions. What are your organization's key areas of interest? Which problem spaces does your organization address? How is your organization architected as is? And who leads each problem space with its associated data?

That last point is especially important because the next step is to appoint domain-specific data stewards, and these are not necessarily the people you might think of first. They are going to be the leaders of each respective data domain, which doesn't necessarily mean they hold technical roles. Your VP of marketing is not going to start building your ETL pipelines, but they are going to start defining what data quality looks like for marketing data: in other words, what makes that marketing data fit for use and consumption before it is ingested into ETL pipelines.

And then finally, you want to define data quality.
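As a toy illustration of those three steps, here is one way steward-defined quality conditions might be automated at the point of ingestion. The domains, steward titles, and rules below are my own invented placeholders, not examples from the talk: the point is only that a non-technical steward defines the conditions, and the pipeline enforces them.

```python
from typing import Callable

# Each domain's steward (a domain leader, not necessarily an engineer)
# defines what "fit for use" means for their data. These entries are
# illustrative placeholders.
DOMAINS: dict[str, dict] = {
    "marketing": {
        "steward": "VP of Marketing",
        "rules": [
            ("email present", lambda r: bool(r.get("email"))),
            ("consent recorded", lambda r: r.get("consent") in ("opt_in", "opt_out")),
        ],
    },
    "finance": {
        "steward": "Controller",
        "rules": [
            ("amount is non-negative", lambda r: r.get("amount", -1) >= 0),
        ],
    },
}

def fit_for_use(domain: str, record: dict) -> tuple[bool, list[str]]:
    """Run the steward-defined checks before a record enters the pipeline.

    Returns (success, names_of_failed_conditions).
    """
    failed = [name for name, rule in DOMAINS[domain]["rules"] if not rule(record)]
    return (not failed, failed)

ok, failures = fit_for_use("marketing",
                           {"email": "a@example.com", "consent": "opt_in"})
print(ok, failures)   # True []
ok, failures = fit_for_use("marketing", {"email": ""})
print(ok, failures)   # False ['email present', 'consent recorded']
```

The steward never touches the ETL code; they own the contents of the rules table, while the engineering team owns the machinery that runs it.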
Here we have four respective data domains with their component names, goals, success conditions, and failure conditions. This is a very simplified version of defining data quality, of spelling out what makes data fit for use. But no matter where you work or what else you do, you are going to have to define what successful data looks like and what failed data looks like, because you have to automate those standards within your processes. So this is a great place to start.

Done well, data mesh lets teams access, develop, and manage data autonomously, but there's a catch: it only works if you've done the hard work to create a data-driven culture and automate that culture's standards into your technical processes.

So thank you for coming. I will be hosting a BoF session on implementing data mesh architecture this Thursday at 6 p.m., and I would love to see you there, or have you read the book to learn more. Thank you.