Okay, thank you. So yes, we are helping them stand up AWS SageMaker for machine learning models, but we are also advocating that they consider moving toward a data mesh architecture, because we're in the middle of redesigning what that architecture will look like. I have been in the tech space for ten years total, and I've been contributing to open source communities for five. I actually got networked into open source at the Open Source Summit North America in Vancouver five years ago, so I'm really happy to be back with you all, because this is my favorite tech conference and I always love coming. When I give this presentation I always start with a cheeky slide from Jordan Tigani showing an upward curve of data: by year X there will be Y amount of data, and that is inherently a good thing, because there's inherent value in having and ingesting a lot of data, so if you possess that and you're on that trajectory, you're winning. But you're probably in this room because you know that's not the case, and that even as the volume of data grows astronomically, we do not yet have the governance foundation to manage it. These statistics come from one of many surveys. You can see that data volumes among respondents grew by 63% per month on average. The next stat really stuck out to me: the mean number of data sources per organization is 400, meaning you are ingesting and producing data from 400 sources. Most will migrate to the cloud over the next two years if they haven't already; that's not a surprise, especially to anybody in this room. But nine out of ten executives surveyed said that it is hard for them to prepare data for analytics. We often think about data from the back end and the architecture, but ultimately that data is meant to be consumed.
It's meant to be consumed by the right audiences in the right format, and that is still really difficult to do, ultimately because change is hard. This is not so much a technical transformation as a cultural transformation, and if you don't have a data-driven culture it is very difficult to implement anything in the data mesh sphere. These statistics are from a separate survey: one in four respondents said their organizations are data driven. In the survey the year before, 38% said they were, which means the number of organizations who say they're data driven is going down while the volume of data produced goes up. Cultural hardships were consistently named as a bigger barrier than technical blockers, and if you work in machine learning you have probably heard the statistic that only about 13% of models ever make it to production, so a lot of time, money, and talent is being wasted on these efforts. Ultimately I think it comes down to the fact that everybody wants to do these grand things in tech. We all want to use the latest frameworks and build these really cool models, but at the end of the day it's really hard to do the work of looking at our organizations, seeing how they're structured, and implementing that change, because it's always everybody else's problem: everybody wants to change the world, but nobody wants to change themselves. The good news is that I do believe, based on what I've seen and worked on, that federated data governance can help solve these challenges. It's a way of automating your governance standards in order to decentralize your data. I will pause on this slide because I want us to start by defining some terms. When we talk about data governance, we're really talking about a strategy to manage the people, processes, and tools related to how your organization stores its data and its associated metadata.
It's really a set of standards that govern how your organization collects, stores, processes, and destroys data; if you have a data destruction policy, that is data governance. Your governance should be scaled to improve service delivery and automated within your architecture, and that last part is really important, because I think people still conceive of governance as a theoretical entity that is separate from the technology. I always tell people: if you remember one thing from this talk, I want you to remember to automate your standards, because you want to embed those standards within your architecture. That brings us to data mesh. It's a concept that was introduced by Zhamak Dehghani, then of Thoughtworks, about four years ago. It promotes a data-as-products mindset, which is very different from the top-down data-as-a-service model that we're more familiar with today. In the data mesh model, data is available within self-service architecture according to domains, so you have owners of data within their respective domains, all within the same architecture. The domains are basically managed as their own microservices in the environment, but they are kept in the same environment, which feeds into the same catalog, and that's where people and existing services hook in and consume the data. Data mesh also ensures that data is formatted, stored, and discovered against equal standards, and those standards are meant to be set by the data stewards who own the decisions about data in their respective domains. Although I will say, something I'm curious to hear from you all about is this concept of ownership: what does it mean? And then when we are in the data mesh, we have federated data governance, which is where your governance standards are automated throughout the architecture.
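As a concrete illustration of "automate your standards," here is one hedged sketch: a check that every data product carries the metadata a governance policy requires before it is admitted to a catalog. The field names and rules are hypothetical examples, not from the talk.

```python
# Minimal policy-as-code sketch: validate that a data product's metadata
# meets a (hypothetical) governance standard before catalog admission.

REQUIRED_FIELDS = {"owner", "domain", "retention_days", "classification"}
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential"}

def validate_product(metadata):
    """Return a list of governance violations (empty list = compliant)."""
    violations = [f"missing field: {f}" for f in REQUIRED_FIELDS - metadata.keys()]
    cls = metadata.get("classification")
    if cls is not None and cls not in ALLOWED_CLASSIFICATIONS:
        violations.append(f"unknown classification: {cls}")
    if metadata.get("retention_days", 0) <= 0:
        violations.append("retention_days must be positive (data destruction policy)")
    return violations

product = {"owner": "sales-team", "domain": "sales",
           "retention_days": 365, "classification": "internal"}
print(validate_product(product))  # [] -> compliant
```

A check like this could run in a CI pipeline or at ingestion time, which is what embedding the standard in the architecture, rather than in a policy document, looks like in practice.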
This is where you guide domain-specific data stewards through the process of building and managing their own data domains as products, and it also defines the parameters for how you want everybody to use, build, define, and access the data products in the data mesh. So there is a lot here, and what I wanted to do with this session is have an honest conversation about people's experiences with data mesh and with data governance, because data mesh was introduced in 2019; in its current iteration the concept is less than four years old. As someone said to me recently, if you meet anybody who says they are an expert in data mesh, they are full of it, because nobody is an expert at this. It is still very young and new. I think the architecture holds a lot of potential, and it's a great way to bring your data governance to life, but we're in the earliest stages of this, and that has really hit home for me during this conference. If you went to OpenSSF Day, there was so much talk of how the open source security community is uniting to build standards, secure supply chains, and processes to keep information and data out of the wrong hands, and I'm just not hearing those same conversations about data, about doing the same thing with open data. And when you think about the trajectory of even generative AI, where we were four months ago versus where we are now, the use of it is astronomical, but we're really in the infancy of figuring out what data governance and standards look like even within the open source community. So with that, and keeping this slide up, I will turn it over to you to hear your thoughts about what this looks like in your organization and what questions you have.
Again, I know a good bit about this, but I do not call myself an expert given how much of it is in its infancy. I do want to hear from all of you about your experiences, and maybe not even with data mesh: what has it been like setting up your lakehouse, what have your biggest challenges been, what is it like to set up your ETL pipelines? Those are the types of things I would love to discuss, especially because a second edition of the book is not out of the question, so I wanted this to be a listening tour of sorts: to hear from practitioners what they're struggling with, what they would like to focus on, and what they'd like the community to focus on. I can pass the mic around to help people hear.

So I guess I have a question. When you say managing products according to domains, what do you mean by a domain? Is it an industry domain, or what do you mean by the word?

Yeah, that's a great question. Domains in this case are the key areas of data your business uses; it's basically how your organization characterizes data. You would have a domain of sales data, and then subdomains of lead generation, inbound leads, outbound leads, and so on. In the longer version of this presentation I talk about building a business map, where you not only define what your core data domains are, you then attach the subdomains that roll up underneath them. So when we talk about domains, we're talking about the key areas of data that your business ingests, and figuring out what that looks like. But that in and of itself is one of the first steps you would take on a data governance journey, and it's not easy to figure out.
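The domain and subdomain "business map" just described can be sketched as a simple nested structure. The domain names, steward names, and lookup helper below are hypothetical examples, not from any particular organization:

```python
# A toy "business map": core data domains with the subdomains that
# roll up underneath them, each owned by a named data steward.

business_map = {
    "sales": {
        "steward": "alice",
        "subdomains": ["lead_generation", "inbound_leads", "outbound_leads"],
    },
    "marketing": {
        "steward": "bob",
        "subdomains": ["campaigns", "web_analytics"],
    },
    "legal": {
        "steward": "carol",
        "subdomains": ["contracts", "compliance"],
    },
}

def find_domain(subdomain):
    """Look up which core domain a subdomain rolls up to (None if unmapped)."""
    for domain, info in business_map.items():
        if subdomain in info["subdomains"]:
            return domain
    return None

print(find_domain("inbound_leads"))  # sales
```

Even a map this small makes the retroactive work visible: every existing data set has to be slotted under exactly one domain, and anything `find_domain` can't place is an ownership gap to resolve.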
It might even be easier if you were starting from scratch, in theory, but think about all of the data you already have; think about the data that is already ingested if you work with models. You end up having to do a good amount of retroactive work to categorize and tag data the right way, let alone defining what those domains are. But yes, they're the key areas, so sales, legal, marketing, and products would be four domains.

When you're talking about generative AI and data governance, do you see data governance being generated by AI sometime in the future? Any ideas about the future, and maybe some qualms with that?

Yeah, so on a relevant note: I was actually talking to my cousin, who runs a small business. He was saying that he uses ChatGPT to come up with ideas for blog posts, because he runs a blog but doesn't have a ton of time to write the posts himself, and he definitely doesn't have a lot of time to do a ton of research about topics, so he uses ChatGPT to figure out what he should be writing about, and he finds that really helpful. In theory, I can see someone at an organization asking ChatGPT, "Which data governance standards should I implement?" Now, there are a lot of challenges with that, and a few jump out at me. One is that some people and I were just talking about ChatGPT and the fact that its data actually stops in December 2021. Not a lot of people know that; if anybody is going to know it, it's you, but it's not the latest and greatest data, and OpenAI is upfront about that in their FAQs. You are consuming data from a finite source.
The other thing to keep in mind is that there are general guidelines that I think can serve anybody well, regardless of where they work. Part of the selling point of the book I wrote is that no matter what type of organization you work in, and whatever data lake you have or don't have, the governance for it and setting it up will look different, but these are the core six steps you need to start your data governance journey. Now, in theory, that information is on Google Books, it's on the web, so if it ever got scraped it could in theory go into ChatGPT, and in two to three years ChatGPT could tell people, "You should do these six things." Having said that, the answers for what your data domains are and who your stewards should be are all going to be up to your organization, and there's a lot of nuance there that ChatGPT cannot account for. So what I would say is that there is potential to use it as a starting point, but you definitely need to be aware of its limitations. I also don't think I need to tell anybody in this room not to put proprietary code or anything like that into it, because then OpenAI owns it unless you are using the paid version, so that's something else to consider. So it could be a starting point, but we need to be aware of the limitations of something like ChatGPT. And I will say, even among data practitioners there's a real lack of data literacy, and of knowledge about how to manage data, and I think we are seeing the consequences of that. The use of tools like ChatGPT is already astronomical, but there are very few guardrails, and they're basically being built now. It's not an unsolvable problem, but it's something we have to address.

Somebody had questions before me. Yeah, I don't have a question.
I just wanted to mention that I'm involved with an open source query engine, Trino, and it is nicely set up for this kind of data mesh. But typically I find there's a lot of high-level idea talk and not really any "this is what I actually did," even when I talk to people who use Trino to implement this at enterprises, where you can create data products. Federation is one thing, but how federated are you? Are you running multiple clusters of the same software, like Trino, or are you going to run all sorts of different software, maybe a Spark cluster and others? Then the whole federation and standardization of access goes out the window. It's all very theoretical, and I don't really get much "this is what we actually did and how it works." I'm not sure if you have any real-world examples that get down to the detail of what's actually happening.

I actually do. This is the longer version of a talk I typically give on this subject, and there is a case study in here for exactly that reason. I'll skip over some of this for the sake of time to get to the case study, although I am happy to share the deck with anybody. What you're looking at here is an example of data mesh at JP Morgan. We're looking at architecture built on Amazon Web Services; this case study is from the AWS blog. They are using the AWS Glue catalog to make the data visible, Lake Formation to securely share the data, and Athena to make interactive queries. What you're looking at is a data mesh as a network of distributed nodes, linked together to ensure the data is secure and discoverable. Each data product's lake is managed by product owners, who implicitly understand the data in their domain and can make decisions about who accesses it.
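As a hedged sketch of the pattern just described (this is not JP Morgan's actual code; the database, table, and bucket names are made up), a producer domain might register a data product in the AWS Glue Data Catalog so consumers can discover it and query it with Athena. The boto3 calls are shown commented out so the catalog payloads can be inspected without AWS credentials:

```python
# Sketch: register a data product in the AWS Glue Data Catalog so it is
# discoverable by consumers in the mesh. All names are illustrative.

database_input = {
    "Name": "sales_domain",
    "Description": "Data products owned by the sales domain stewards",
}

table_input = {
    "Name": "inbound_leads",
    # Governance metadata travels with the catalog entry itself.
    "Parameters": {"owner": "sales-stewards", "classification": "internal"},
    "StorageDescriptor": {
        "Location": "s3://example-sales-lake/inbound_leads/",
        "Columns": [
            {"Name": "lead_id", "Type": "string"},
            {"Name": "created_at", "Type": "timestamp"},
        ],
    },
}

# With credentials configured, the producer would publish with:
#   import boto3
#   glue = boto3.client("glue", region_name="us-east-1")
#   glue.create_database(DatabaseInput=database_input)
#   glue.create_table(DatabaseName="sales_domain", TableInput=table_input)
# Consumers then discover entries via glue.get_tables(DatabaseName=...)
# and query them with Athena; Lake Formation grants gate who can read what.
```

The design point is that the catalog entry, not the data copy, is the unit of sharing: the lake stays with its product owner, while discovery and access control live in the shared catalog layer.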
This is another slide from the same blog post. What you're looking at here is a mesh catalog alongside an enterprise data catalog, and in this example the catalog entries are maintained by the processes that move the data to the lakes, so the catalog always reflects what is in the lake. Unlike something like ChatGPT, which has a finite cutoff for the data it ingested, this offers real-time information, so when someone at JP Morgan needs the data, the catalog allows them to discover and request it. And this is what it looks like at a large company with quite a good budget to spend, which is another challenge I have found helping clients implement this work. It is not cheap, and that is a big challenge for companies operating on a really tight budget, because this requires not only a lot of money but also a lot of colleagues: you need enough people who are able to serve as data stewards while doing their day jobs. There are examples in the wild of this being implemented. Another company doing data mesh is called Brainly. Brainly is more of a startup, and they're blogging in real time about their data mesh journey, so I would encourage you to look them up as well, because they're committed to trying to stand up a data mesh. They've said, "We are going to try our best, we're not sure how this is going to go, but we're going to document it." So there are some use cases out there, but they are few and far between, and that's what I was hoping to get out of this session: to talk to practitioners and see what you are doing and what this means to you.

They're public, and they're large, and as to your point, they're all over the place.
There are data sets that different governments have, that NASA has, all these different observation data sets, and one of the questions that comes up as we sit there wrangling them is: what's right? These are all public data sets, and yet we do analysis on them. For agriculture, for example, the USDA famously created these field boundaries and then decided it's not a public data set for public use, except that the Foundation is now doing that: we're going to be publishing field boundaries for the whole planet. Now you have field boundaries, and they're open and free. Is there a claim of privacy? And if there is a claim of privacy, why? All the data is derived from public data sets. It is all attributable to public data sets. It is all paid for by the global data commons. Why is there a claim, and if there isn't a claim, why is there nervousness around data that's inherently geospatial?

Yeah, that's interesting. In terms of why there's nervousness around it, I think that's a complicated question. I think it comes down to the government, the U.S. government in this case: they collect a lot of very sensitive data that would need PII masking, so I think they get jumpy about some of that PII data ending up on the web. That's just a theory, but it's an interesting case study in terms of taking what's out there and the data set not having an owner.

Yeah, so PII is a very clear case, when something is personally identifiable. But if something is already public, whether it's personally identifiable or not, it's already a public data set, and it's not up to the U.S. government; this is global. Some of these data sets are U.N.-based; the U.N. puts them out. If they're derived from open data sets, and particularly for federated data sets, since meshes, to your point, are all about cataloging them in different ways, I just don't know how to even approach the question of why there is any claim of privacy on that. It's already a public data set.

I don't see how there is, to be honest. If it is a data set under a Creative Commons license, I would say it's fair game, but you bring up a good point about changes in data. A challenge with data is that it's based on history: for instance, if data was collected before a certain law was established, that is not always accounted for. Similarly, you can have a Creative Commons data set and then someone can decide, "oh no, that's not open source anymore," but if it was already there and you used it when it was open source, I don't think they have much of a leg to stand on. I think it's probably more out of fear than anything.

I do have a question. Going back to what you were saying about not everything going into the data lake: is the issue about ingesting? Are you talking about moving from a warehouse to a lake, and how it's not as simple as just hooking the lake up to the original source so the data flows in? Or where do you think the data should live?

If you have a well-working warehouse with nicely structured, highly performant, executable, and accessible data, why move it into a lake just so you can access it in the mesh? Your mesh technology by default should be able to access all the data sources, right? That's what the whole federated use case is about.
Otherwise, if we put it all into the lakehouse and call it the central lakehouse, the federation is gone, and it's just a new label for the old warehouse: new technology, but the same centralized stuff. I thought that's what we want to avoid, right? You have the whole performance issue and all that other stuff.

Yeah, so when I talk to clients about this, the biggest issues they have with warehouses are twofold: the warehouses cannot hold the volume of data they have and don't have enough processing power for all of it, and they also need to house and work with different types of data, structured, semi-structured, and unstructured. Those are the big reasons for moving out of a warehouse. But your point stands, and I guess it goes back to this: I'm a service designer, so the first thing I do is talk to users of systems to figure out what their limitations are. If the warehouse in theory works, that's not necessarily a reason to switch, but what we find is that the volume of data is large and keeps growing, the different types of data keep growing, and so a lot of people feel they have outgrown their warehouse, and that's where this concept comes in.

Yeah, and I totally understand. That's a different thing, though, right? There will always be systems that are just better off where they are than in a lake.
Yeah. For smaller organizations that don't have lots of data coming in but want to be more strategic, maybe forming a data cooperative with other organizations that do similar work, where everybody has their own proprietary data but sharing and pooling that data would raise the tide for all boats: would a data mesh concept be a fit for a cooperative between all of these different organizations? I hope I'm explaining it well.

No, I think that's possible. What you're describing sounds to me like clear domains with subdomains. You would have this cooperative of, let's say, six partners; they would each have their own domain and subdomains categorized; you would have at least six data stewards, each managing the data in their respective domains; they'd form something like a technical advisory council where they co-create the standards for that data; and then ultimately, in theory at least, you could add it all into an environment like this, united under that one architecture. Now, the biggest question that leads me to is: who owns the architecture? A co-op is an interesting case study because it needs a home. I think about the Linux Foundation and how it is the home for so many open source projects; presumably the Linux Foundation owns a lot of the architecture and the technical systems those projects run on. So that's the biggest question I have. I actually think forming a data cooperative could be a really good use case; the biggest thing people would have to figure out, along with all of the governance, is who owns the architecture, who owns the data mesh. That would be the biggest thing to suss out. But in this space of open tech and
open source, there are well-established homes for projects like that. For data cooperatives, not that I know of. I will say, with startups this concept can be easy to get freaked out by, mostly because of the cost and just not having the staff. When you talk about data domains having data stewards, that implies you have a minimum of six to eight people, and if you're in a startup you're probably doing three jobs, and this is even more work, so it's easy to get a little overwhelmed by it. The big advantage startups have is that if you have a relatively small amount of data, you actually start with less technical debt, because you don't have to do all that retroactive tagging. I can't tell you how many projects I've worked on where you get in there and the problem you're solving actually isn't that hard, but the volume of data and content that you have to go back in and tag, categorize, and create taxonomies for, that volume of unchecked data, is so high. That's a huge barrier for enterprises that a lot of startups don't have. So in terms of starting from scratch, startups are actually in a good place when it comes to governance, because you as the leader get to define what quality looks like from the beginning. It doesn't have to never change, but you can start and build with that governance in mind. And honestly, maybe data mesh is not for you; it is not for everyone. Even the book that I wrote, and this talk is about data mesh architecture, the book touches on it but is much more focused on building a governance program, because ultimately you have to decide which architecture and which system is right for you and your organization, and it might not be data mesh.

Have you come across any machine-assisted tools for that characterization and, if that's a word, "intelligization"
of data stores?

That's a good question. We're always looking for automated tagging and things like that, and I do not know of any in particular; tagging at scale used to be, and still is, quite expensive, though there are tools to do it. I have another presentation on AI that I give where I link to some open source data sets and things like that. I know there is a product called LIME, an open source classifier out of the University of Washington; it's a Python toolkit, and it helps detect bias in AI. That's not quite what you're describing, because the tagging issue is paramount, but I will say that in the five years since I started giving that presentation, more products have come to market to address tagging and categorizing data the right way. Still, that technology, to my knowledge, is not where I would like it to be, let me put it that way. The fact that I can't name any tools off the top of my head is a challenge, and it shows there's a need for this.

So this may already exist in open source, and I'm ashamed to say I don't know, but why not create some sort of pointer management system? There are largely either tagged data sets, meaning things that are not schematic or relational, and then there are relational data sets. Even just traditional relational databases have schemas and permissions, very simple things, right? So my question is: why isn't there a tool that allows you to publish the schema and permissions dynamically to a set of users, where there's some sort of federated user identity? You essentially publish that dynamically, you can change it any time you want, and by doing so you're not really copying your database, you're not moving the data from one place to another, you're simply
enabling access to your data store dynamically, and the person who's trying to access it not only has access but also has a schema. That doesn't work for non-relational databases, which are now increasingly popular, but you're also dealing with legacy issues, and a lot of databases are relational. Any insight on that?

Honestly, that's a great point, and I do not know of anything off the top of my head that does this. I was talking to somebody today who's in the business; they used to work for Azure and are now creating an open data warehouse to facilitate, I think, pretty close to what you're talking about. That startup just raised a seed round and will probably try to raise a Series A in the summer, and I said to him, I would love to learn more about this, because I think it's really important but I don't know of anybody else doing that work. So my takeaway was that we're at the very beginning of defining what that could look like, but that's a perfect example of how, in theory at least, the architecture could be built. And I'd be curious to hear: at the Linux Foundation, what are you doing in open data? How is it managed there, and what's your toolkit?
Yeah, so I can't speak for the 799 other projects at the Linux Foundation, but I can speak to my project; there are 800-plus of these projects now, a ridiculously big number. As I mentioned, we deal with a lot of geospatial public data sets, and they're quite voluminous, but they're very well structured. They have a lot of structure, and that structure can be exploited to ensure you actually publish structure; that's where we got the idea of publishing the schema. We don't store the geospatial data sets in a schema, simply because they have enhanced structure; we just use their structure as a schema. A lot of these products came about because people have massive log files, which is not the data we deal with; we deal with very structured geospatial data sets. But I think the Apache Hadoop people, or the PySpark people, or the Parquet people might have a strong understanding of, or need for, what you're describing. From my perspective, these federated databases are all over the place, and what people really need is a way to access them. They're supposed to be available and free, but none of them are accessible; there are some old formats from World War II, and there are hourly updated data sets that are not in a readable format. So there's all this opportunity for us to create transformations and ETLs on geospatial data sets, and that's what we end up doing a lot.

I will say, my first reaction to hearing that you're dealing with such structured data is that you're lucky in a way, because structured data tends to be the easiest to work with, precisely because it has a structure. It has all of the information that is missing from unstructured or semi-structured data; it's tagged more effectively. And the problem tends to be
that a lot of this could work really well if everything were structured, if everything had a schema, but so much data doesn't have one, and then how do you manage that?

Yeah, so that's where tagging comes in. What we've been using is JSON-based tags, some way to attach metadata structure to existing structured databases, but frankly that is a new area for us. For us, we are wrestling with data ownership. Like I asked earlier, these are all public data sets, but they're extremely identifiable, and that makes them problematic in some ways, because they weren't easily accessible before. They've been in the public domain for decades and nobody cared; now suddenly they're going to be available, and I can tell exactly what you're doing on your farm, and I can figure out who the farm owner is in just five or six clicks. These have always been public data sets, but they have not been accessible, and everybody was comfortable with that. Now they're not comfortable, because the data is becoming accessible.

Yeah, exactly, and I don't think you can stop it. It's like ChatGPT: you can't put the genie back in the bottle; once the genie is out, it's out.

On that note, when I think about the big challenges I have with clients around governance, ownership is always the biggest roadblock, because in many cases, especially with something new, people see it as high risk and low reward. People don't want to own anything, and nobody wants to be responsible for making those decisions about data, so that is really where we encounter blockers. The other thing my clients struggle with is "we'll do that later" syndrome. We do human-centered design at Steampunk; we are supposed to build
and design and ship everything with specific personas and users in mind. There was a really great presentation today about personas for MLOps, and that's the sort of work I do as a designer. But what I constantly hear is: we'll do design later, we'll do it after we ship, we'll do data governance later. So those are the two things that really become blockers for me when I'm working with clients. We can talk about architecture, about lakehouses versus warehouses, but that's its own conversation; ultimately, the culture you have drives the decisions you make about the technology. That, coupled with the issue of ownership, of who owns what and who wants to own what, those are my biggest challenges. So I'm curious if other people feel the same way.

On a similar note, I work for a utility and energy company where a lot of data is highly regulated but in theory should be public. One of our departments was, I think, really lucky to steal its leader from my team; he went from DevOps to the data team and rebranded it all as AI. Even though they weren't doing AI yet three years ago, he said, all right, guys, we're just rebranding ourselves as AI; that's what this will be used for eventually, so that's just what we're going to do. And that was the key that really turned the heads of a lot of the C-suite, in a great way. It was the same thing, just with different words; it was a branding issue, and they were able to get buy-in: they got cybersecurity resources, they got legal resources for data privacy. I think it also helped that it's a highly regulated industry. It was pretty much a genius move on his part to say, I know what my engineers need and I know what the C-suites want, and to just flip the switch. I think it has worked out really well so far, knock on wood, but I think it was
definitely a conversation and terminology problem for that team initially. As someone who also designs in highly regulated industries, I am always stunned by the degree to which a word or term can either kill your idea or move it forward. Take even the concept of user interviews: if you do a user interview with people who work in the federal government, they say, why do you want to talk to me? What are you going to ask me? What are you going to ask my people? You can't talk to this person. A user interview is a very standard term in design, but they have this idea in their heads that it's something very different. Whereas if you say, do you want to have a talk about your geospatial data with NRCS, it gets a very different reaction, and I'm always struck by that. I can't overstate how important it is to speak the language of the people you're with and really figure out what's going to resonate with them. For better or worse, everybody jumped on that AI train without knowing much about how to operate it or what they were doing, but the reality is that every company feels pressured to use it, and depending on the audience that can be a huge selling point. So I have found over and over again that the way I frame something, even when it's the same thing, can be the difference between having it move forward and having it slip backward, and it's wild to me the degree to which that happens. Whereas if he had gone and said, hey, can I get budget for PII protection as part of cybersecurity, they probably would have said, no, why do you need that? Yes, and I think the governance space is really a great place for that, especially when you have so many global teams and you can all agree on some terminology
together and positively frame certain things. Yes, and setting up the data catalog is its own thing, because everybody assumes, you know, a date means the same thing everywhere, when it's actually used differently, and that becomes a problem too. Again, if you don't have shared definitions for data, then that data means different things in different contexts, and you need to have the conversation about which definition is correct, or whether the definition really should be different in one domain versus another. These are not easy conversations, and it's less about whether there's a right or wrong answer; but until you have those conversations, a lot of this stuff just stays blocked. I think that perspective, that approach of getting buy-in, matters, because that's the other thing, and the book talks about this: ultimately you need sponsorship. The whole idea behind the book is that governance is too big for one person or one team, and I don't see anything getting productively done without buy-in from a C-suite sponsor. The book also talks about how to find the right person, because if they are not on board, it will only go so far.

So, we are at 6:45, and I think we're done. I'm happy to stay for another few minutes, and then I will be going over to the aquarium; hopefully you all are too. I'm also around tomorrow, so if you'd like to chat more or purchase the book, I would love to talk to you. Thank you for coming.