Hello and welcome. My name is Shannon Kemp and I'm the executive editor of DATAVERSITY. We'd like to thank you for joining this DATAVERSITY webinar, Initiating a Customer MDM Data Governance Program, sponsored today by erwin. Just a couple of points to get us started. Due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we'll be collecting them via the Q&A in the bottom right-hand corner of your screen. Or if you like to tweet, we encourage you to share your questions via Twitter using the hashtag #DATAVERSITY. As always, we will send a follow-up email within two business days containing links to the slides and the recording of this session and any additional information requested throughout the webinar. Now let me introduce our speaker for today, Danny Sandwell, senior director of product marketing at erwin. With more than 25 years' experience in the IT industry, Danny has been with erwin for 16 years. His industry experience includes various roles in data administration, database design, business intelligence, metadata management and application development. Danny is responsible for the strategy, messaging and strategic alliances for erwin modeling. And with that, I will give the floor to Danny to get today's webinar started. Hey, thanks, Shannon. And thanks again to everyone that's joined us today. Thank you for taking the time. We really appreciate your interest in the topic. I'd also like to thank DATAVERSITY for the opportunity to present and share some of the success that we've had with our customers, or at least that our customers have had with us. And finally, I do have to give a shout-out to Shannon to thank her for that fine, fine music that we got to listen to while we were waiting. So let's get into it. What are we going to talk about today? Really it's about underpinning your customer MDM and your data governance initiative using erwin modeling.
And this is based on a case study that we have of real customer experience. When we look at, hang on a second here, I'm going to get these slides to move. I know I am. I apologize, there we go. So if we look at the agenda, we'll do a quick overview of master data management: look at what's going on in that space, why people are so interested in it, and some of the challenges and success stories that are out there. Then we'll look at the case study that we have for today's presentation, understand why master data management was important to this organization and how they came to look to erwin and a data discovery and analysis pilot using our technology to underpin and assure the success of that initiative moving forward. We'll go into some of the details of what they did using our technology to achieve this pilot and really kick-start the road to successful MDM, and then go through a summary of benefits followed by Q&A. So if we look at master data management, I've got a strange habit of starting every presentation with a Wikipedia entry. I thought it was a good overview. But really when we look at it, this is about mastering key elements and domains of your data so that you can have business agility through integration of strategic and shared information across the organization. There are a lot of ways that people have done master data management and continue to look to do master data management. A couple of them are listed here, but as we get further along and more capabilities emerge in the areas of data virtualization and cloud offerings in this space, we're seeing a lot of choices out there in terms of how to achieve this mastery of key critical data such as customer, product, and vendor (there are a lot of other domains) so that organizations can achieve the agility that they're looking for and become that true data-driven organization. What are they trying to get out of it?
Agile analytics, real-time analytics to drive their business, both from a strategic planning perspective but also through sharper business execution and sharper and smarter operations in their business. Really understanding their customers so that the customer can be at the center of everything that they do, because they understand strategically that's how they will be successful and grow as a business. When we put this in context, MDM doesn't stand alone. Although it delivers direct value to the business in terms of delivering quality and accessible data, it really is one of the key pillars of successful data governance within an organization. You'll see that metadata management, data quality, information life cycle: there are lots of elements that go into a successful data governance initiative. Master data management is a key foundation for being successful in data governance moving forward, so there are a lot of drivers for why people go and look at master data management as a solution for their organization. There are also a lot of key people in the organization that are looking to master data management to drive larger and bigger initiatives on that road to data centricity. We did a lot of research and worked with a lot of customers, this customer and many more, trying to understand the common keys to MDM success moving forward. Through that research (I actually pulled these pretty much verbatim from a white paper by a man I respect in this industry, David Loshin) you'll notice that the keys to success really are not skewed toward a technology perspective. For MDM to be successful there's technology involved, and technology may be the solution, but in order to get the right technology and have that technology implemented properly there are some key things that are common across successful MDM implementations, and the first is early consumer engagement.
So who's using the data, what is the business purpose of that data, and talking to those people, understanding those expectations and requirements for the shared information. Because again, it's not just a technical question of how do we get information there; it's how is that information used, and what are the expectations that come along with that information when we deliver it through a master data management solution. Data governance: master data management is a key pillar, but data governance around master data management is equally important. You have to get the right amount of control and visibility into your master data management before you push it out there and actually have it start impacting different business processes within the organization, and to get to that point, metadata collaboration is key. Prior to exposing that view, you have to make sure that people understand that data, and that comes through metadata collaboration: looking at the terms, definitions, and usage from a business perspective, which may be wide and varying across an application landscape, and understanding where those things are similar, where those things are different, potentially synchronizing them where applicable, and if not, rationalizing them, and then making sure that the end result is harmonized to the business. Because again, the business doesn't care what's going on underneath. They want data that serves their purposes in their terms, data that they can understand and actively participate with; basically, they want to trust the data that they have in their organization. So if we look at the case study, and I'll be honest up front, the names have been changed (in fact, we're not using names) to protect the innocent, as have the models, and I can guarantee that no data modelers or data sources were injured in making this presentation. So when we look at the organization that we're talking about today, it's a very classic, classic scenario.
It's a traditional brick-and-mortar business that started out basically selling their wares in a very traditional way. They are looking at, and their industry is demanding, that they start to grow that online presence and open up additional routes to market in order to be competitive and to satisfy the customer need. There are multiple lines of business in this organization. They deliver both products and services, and they have a very large and very varied customer base. So you have individual consumers and consumer-type products, but they also have contracts with large corporations, large entities, both private and government, to deliver their products and services. And quite frankly, their revenues are stagnating. They're not growing at the rate that they need to be successful as an organization. And along with that, not surprisingly, they've got declining customer satisfaction and retention metrics, because their customers are changing just like the nature of their business, and those customers have new and different demands and expectations in terms of the experience that they have with any organization. It's as simple as this: when I call up as a customer, they don't know who I am. They don't know the extent of my business. They don't know what they've offered me that I've turned down, or they don't know what I'm interested in when I speak to them, and it becomes a very siloed business and a very siloed experience for the customer, and today's customer just will not be satisfied with that. So there's significant impact in terms of maintaining their revenues and the ability to take advantage of those relationships and move them forward into a position of growth. Their data management infrastructure, again, is classic: based on the length of time they've been in business, it spans the mainframe to the cloud.
They've got lots of business systems, and the backbone is based on some very traditional technologies and deployment methods, but they're looking to new deployment methods and new capabilities that can be delivered in different ways to really enable their business and make sure that they have what they need to meet the goals they're trying to achieve. They had early MDM aspirations, and it was a failed MDM initiative. So early on, again looking at things from a technology perspective, they saw MDM as something that would enable and solve some problems for them, but they didn't necessarily approach it in the right way, and the net-net for them, as for a lot of organizations, was that round one didn't work out the way they'd hoped, and that just makes round two even more difficult. One other key element to highlight: they didn't have a data modeling practice in place. There were a couple of people that would use data models in a limited way, but there was no formal data modeling practice. There wasn't a tool in place or a process in place to integrate data modeling into the overall initiative to ensure its success. So what were the contributing factors to their initial MDM failure? If I can put it in a nutshell, they basically looked at it from a technology perspective. They saw it as a technology or a technical problem that needed to be overcome. They saw information across a wide landscape, and they needed that to be integrated and rationalized. So instead of looking at what the business needed from MDM and really trying to get control of and govern that initiative from the outset with a business perspective, they really looked at it as just another piece of technology that they could bolt in, and the problem would be solved. And what they found out was that they just couldn't support it, right? They didn't understand what they needed. They didn't understand what it was supposed to do.
So it wasn't surprising when they implemented something that it didn't meet their needs. And then when they looked at what they'd implemented, they understood that they didn't have the buy-in or the sponsorship to maintain it and grow the resources that were needed in order to make this a reality within their organization. So if we can point to anything, it was, again, lack of business sponsorship and lack of business collaboration and accountability in the overall process that really stymied this thing before it started. So as this organization starts over, they're looking at it from a very different perspective. Now they're looking at their goals and driving those from the business. Technology is secondary. Technology is not what's going to drive this initiative. It's going to be: what are the business requirements, and how can they best meet the business requirements? And not just the requirements today, but the requirements for the future moving forward. So they want increased customer satisfaction and retention. They want to capitalize on those relationships. And they want to get new customers, understand who they've been successful with so that they can get more like that and get a better yield from those customers moving forward. So really it's about a 360-degree view of and connection with their customer through the data that they have in the organization, the data that runs the organization and the business from day to day. So what are they going to do? They want to deliver an MDM platform that's going to make sure that key domains of data are consistent, that they are of high quality, and that they're trusted. They want to promote this, and promote data as the go-to way of doing business and of driving the business, through governance and by increasing the data fluency of all the stakeholders in the organization.
They want to make sure that they have an agile, effective analytics platform, and then drive operational excellence through data and effective and efficient integration of that data. And this is the foundation from which they went forward with the new approach to MDM. So this takes us to the pilot that they started, looking at data discovery, analysis, and governance. Why did they decide to take a model-driven approach? You know, this use case is a great illustration of what we've been telling people about data modeling for years. Data modeling is much more than a way to automate the generation of data definition language, or DDL. There's a lot of value in a data model that goes well beyond just saving time on IT-centric tasks or database- or data-asset-centric tasks. It really is that place where all the stakeholders can come together in an environment that's visual, to allow them to understand and break down what are generally very complex topics into consumable and addressable bites. It speaks to all of these folks in their language, with their perspective, in the context of how they view data, because we understand that a business analyst looks at data very differently than a DBA does. And presenting data in one way is not going to satisfy either of them. So that ability of a model to present the same information in multiple contexts allows people to become much more invested in the process of managing that data and leveraging that data for strategic advantage moving forward. And then there are the other elements of it: the ability to start to standardize and govern data within your organization, lots of capabilities to compare and analyze and collaborate. But at the end of the day, the real key reason why people look to it is because it is a low-risk, apples-to-apples environment. There are lots of different types of data out there: big data, traditional data, cloud data, structured and unstructured. There are lots of different viewpoints.
And giving them one place to see all this, understand it, collaborate together, and develop the acumen and knowledge required to be a valuable member of the team, but also in an environment where you're not going to break anything, right? You're not in a production environment. You're not potentially moving things around for scenario analysis. You're doing this in a place that's disconnected, but then easily reconnected to drive change for the business moving forward. And when we look at that, really, we have top-to-bottom integration that breaks down silos in data management, right? You have taxonomy: looking at the business terminology, describing data in business terms. Conceptual models: that business alignment, how does data interact with the day-to-day running, the processes, the organization, the services that are available out there in the business, and what are the rules behind that? What are the business rules, and how do we ensure that when we manage data, we don't just do it from a technical perspective, but implement those business rules so that the technology serves the business rather than fighting against it? Logical models and physical models: how is the data deployed? Again, understanding all of this conceptual and terminology work is great, but if we don't know where it's deployed and how it's deployed, how are we going to manage and transform and synthesize those business requirements into actual technical deployments that work for the business? And then configuration models: how does this stuff hang together? How is data integrated? What feeds data? What consumes data? How is data transformed across all of those steps and those different elements of the enterprise that take and manage data in some way for the business?
And having an integrated view, the combination of these views together, is really the key to not just breaking down technology silos, as depicted in the slide, but breaking down organizational silos between different people that have a common goal but aren't sure how they can work together effectively in order to achieve it. So when we look at the pilot, we look at customer as their first domain, and this was one of the key elements: not trying to boil the ocean. They wanted to take one domain, master that domain, and then through that mastery establish a reusable facility and a repeatable process for doing it for the other key domains, so that, again, they will get it right out of the gate and do it in the most effective and efficient way possible for the business. In terms of the key steps at a high level: first, document your as-is data landscape. It still amazes me how many folks don't really understand the data that they have in their organization, how it's configured, how it's integrated, and how it feeds the business processes. So understanding that as-is data landscape is job one, because how can you build something to transform that landscape if you don't know what it is? Second is making sure that you enable the stakeholders with awareness and collaborative analysis tools, ways that they can work together in a single place and effectively collaborate with all of the brethren that they have across all of the different organizational silos. Then take that knowledge and specify the MDM to-be architecture, because you can take that and draw the picture of what you need before you go out and start finding out where you can get it. A lot of problems come from engaging in MDM initiatives and engaging with MDM vendors without the real ammunition in the gun to understand specifically what you need, what they're going to provide, and what it's going to take for them to prove that they can provide it.
So this gives you the ability to do build-versus-buy analysis. And if you do choose to build, then you can specify what needs to be built and really scope that project appropriately. If you're going to buy, then again, you have a clear set of requirements, and an architecture behind them, that the vendor would then need to speak to and satisfy in order to win the business. Finally, establish data governance at program initiation. Don't wait until it's built to say, okay, how does this fit and how do we control it? Make sure we understand the existing policies and procedures that apply so that they're supported out of the gate. And understand how we need to support that so that we can iterate on it over time as that MDM program expands and changes based on emerging business requirements. So I think I've talked, which is my style, talked to some of this before, but really it's about capturing the structures, standardizing them, defining them, documenting those points of integration, and starting to understand, identify, and work on the differences that exist in your as-is landscape. How do we enable the awareness and collaborative analysis? Well, we publish it in a place where they can get it: web-based self-discovery for both technical and non-technical roles, in a visual manner where they can see not just the visualization of any individual data source, but the visualization of how this hangs together in a metadata configuration or a data architecture. And then enable those managed feedback loops between the organizational silos. Give them a place to share things and get feedback, and do that under control so that we capture all of that tribal knowledge that's out there, and really look at managing the life cycle of the MDM from the get-go. How do we specify the to-be architecture? Well, you define your master record requirements based on the analysis of what data you have. So what are the data sources? Who are the consumers of that data?
Capturing all that together, ensure that you've defined the semantic and data flow integration to get a full picture for harmonization. And then feed that into scoping documents, understand the impact of change so that the scoping is complete and comprehensive, and then be able to start taking that and understanding the phases: how am I going to get from point A to point B? Because it's generally not one step. There are a couple of steps to do that, and you can derive all of that easily using a modeling tool from the existing things that you documented in phase one. And then, finally, establish that governance foundation. Through this process, capture and integrate business vocabulary and terminology. Make sure that you have clear lineage and impact analysis capabilities so people can understand and easily get answers to their questions so that they can participate. And then make sure that through that process you specify and align ownership and accountability, ensure that you're attached to and respecting all the policies and procedures that exist, and then identify the new ones that you'll need to instantiate as you move to this next stage in your data architecture. So let's get down to the tools and the steps. When we look at discovering, standardizing, and documenting relevant data, it basically starts with reverse engineering your data sources. erwin has and provides support to reverse engineer all manner of data sources, whether they're traditional DBMSs, big data sources, unstructured data sources, anything going back even to things like COBOL copybooks and things that might be hanging around from your legacy. You can have access to that. You can have it read the structure from the catalog and then bring it into this visual environment. But there's a lot more to it than that. And some of the features and capabilities that are listed under there are the ones that this organization really found valuable in terms of accelerating their analysis.
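To make the catalog-read idea concrete, here is a minimal Python sketch of what "reverse engineering into a model" amounts to, using SQLite's catalog as a stand-in for any DBMS catalog. The `reverse_engineer` helper and the model-dict shape are hypothetical illustrations, not erwin's implementation or API:

```python
import sqlite3

def reverse_engineer(conn):
    """Read table and column metadata out of a SQLite catalog into a
    simple model dict: {table: {column: datatype}}. Illustrative
    stand-in for reading any DBMS catalog (assumption, not erwin code)."""
    model = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        # PRAGMA table_info rows are (cid, name, type, notnull, default, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        model[table] = {name: dtype for _, name, dtype, *_ in cols}
    return model

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cust_id INTEGER NOT NULL, name TEXT)")
print(reverse_engineer(conn))
```

The point of the sketch is only that the structure already sits in the catalog; a modeling tool reads it once and turns it into an editable, visual artifact rather than something each analyst rediscovers by hand.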
The ability to either define and implement naming standards upon that reverse engineering, or to actually derive your naming standards and then finalize them from what you already have sitting out in a catalog. Same with data types, same with domains. The more you can standardize how things are described and how things are defined in your organization, the easier it's going to be for anybody to move from one area, one data source, to another and provide value in that process. Because the more they look the same, the easier they are to understand. And we all have a relational mind: once you understand them, this is like that, and if this is like that, then what I did with that is what I'll do with this. Other things that really made it valuable to them: the ability to import definitions and edit them in bulk. They had a lot of definitions that were sitting out in different systems, a lot of them in Excel spreadsheets, and we were able to grab those quickly and pull them into the data model so that people could start to work on them and collaborate on refining them in this optimized and visual environment. Also layout features within the tool, and active standard templates: the ability to figure out which common data elements needed to be in every model, define them in one place, and have them become actively bound to the models, so that if you manage them centrally, they would be reflected in all of the different modeling collateral that you were creating to support the initiative. Once they had the models in place, they leveraged Complete Compare. Complete Compare is our capability to do basically a left-to-right-side compare of data models: a model to a model, a model to a data source, or a data source to a data source.
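The naming-standards idea described above can be sketched in a few lines: a glossary maps physical-name abbreviations to business words, and logical names are derived mechanically from physical ones. The glossary contents and the `logical_name` helper are hypothetical examples, not erwin's naming-standards feature itself:

```python
# Hypothetical abbreviation glossary; a real naming standard would be
# derived from, or imposed on, the catalogs being reverse engineered.
GLOSSARY = {"CUST": "Customer", "NBR": "Number", "ADDR": "Address", "DT": "Date"}

def logical_name(physical, glossary):
    """Expand a physical name like CUST_ADDR_DT into 'Customer Address Date';
    unknown parts are just capitalized."""
    return " ".join(glossary.get(part, part.capitalize())
                    for part in physical.upper().split("_"))

print(logical_name("CUST_NBR", GLOSSARY))   # e.g. maps to 'Customer Number'
```

Running the same derivation in reverse (business words back to standard abbreviations) is what makes every model's physical names look the same, which is exactly the "this is like that" effect described above.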
And really start to identify the gaps, identify the differences, analyze them, and then, where appropriate, synchronize the metadata through Complete Compare if it made sense. But equally as important was being able to mark and document those differences, because sometimes there's a reason why things need to be different. There may be a different business purpose, a different perspective on elements in a data model, and you don't always want them synchronized, but at the same time you don't want them popping up in your face and bothering you every time you run a Complete Compare. So just that ability to mark and document differences, why they're being left in that fashion, and what the remediation would be outside of actually synchronizing the metadata, was a key benefit to them as they went forward in their analysis. And the ability to push out reports of individual compares to share with people that weren't necessarily using erwin Data Modeler, and publish them out into a collaborative environment, was very valuable, because it's not always the model itself. Sometimes it's the steps in the process, and being able to capture those steps and show people what was done and why it was done is very, very valuable. Next, they centralized the models for data governance and metadata configuration. They did this using a product that we have called erwin Data Governance. It's a web-based centralized repository that harvests erwin models; quite frankly, it harvests data models from any source, so all of the leading providers as well as a lot of the legacy CASE tools and other tools that have been out there. They pulled in all of the data models related to their customer MDM initiative. So they broke it down and pulled them in. They derived an initial business glossary based on the definitions that they had in their data warehouse, because those were pretty strong definitions.
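The left-to-right compare with "accepted differences" described above can be sketched as a small diff over two model dicts. This is a minimal illustration of the idea, under the assumption that a model is `{table: {column: datatype}}`; it is not the Complete Compare implementation:

```python
def compare_models(left, right, accepted=frozenset()):
    """Return (table, column, left_type, right_type) tuples for every
    structural difference between two models, skipping any (table, column)
    pair that has been marked and documented as an accepted difference."""
    diffs = []
    for table in sorted(set(left) | set(right)):
        lcols, rcols = left.get(table, {}), right.get(table, {})
        for col in sorted(set(lcols) | set(rcols)):
            if lcols.get(col) != rcols.get(col) and (table, col) not in accepted:
                diffs.append((table, col, lcols.get(col), rcols.get(col)))
    return diffs
```

The `accepted` set is the interesting part: a documented, deliberate difference stops "popping up in your face" on every run, while everything undocumented still surfaces for analysis or synchronization.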
And then they built a glossary and started to publish it out to their stakeholders, iterating through refining those definitions and categorizing them in a standardized glossary. Mappings between the models and mappings between the elements in the models were done very easily through drag and drop, so that they could start to build the configurations of data models: how data hung together, how it flowed through the organization, and how the semantic definitions and standards that they had created applied to all of these varied and disparate data sources. Again, it's really building a full architecture, both from a real data perspective and also from an equally important metadata perspective. And once that was done, their stakeholders had all of these tools available so that they could go out and discover data sources, educate themselves on what exists out there, what the business purpose is, and what the technical implementation was; visualize those data sources through the web, in their own terms, on their own time; understand how these things hung together; and then drill down and leverage the mappings in these configurations so that they could understand, first, the lineage of key data elements in the MDM architecture, where they came from, and what kind of transformations they went through. And then, more importantly, as they start to pull out other systems that may have been managing the master data or doing a quasi-MDM in the current, as-is architecture, understanding the impact on everything downstream. If I change this, what do I need to change downstream? Because the last thing you want is the first view of your MDM to be something broken downstream that people didn't account for. This gives you the capability, in a very strong visual and detailed way, of understanding the impact of the changes that I plan to make and am proposing when we put in this new technology, this new capability, within our data management architecture.
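The "if I change this, what breaks downstream" question is, at its core, a graph traversal over the data-flow mappings. Here is a minimal sketch of that idea (the `downstream_impact` helper and the flow tuples are hypothetical, not the product's lineage engine):

```python
from collections import deque

def downstream_impact(mappings, changed):
    """mappings: (source, target) data-flow pairs, e.g. system- or
    column-level lineage captured in the metadata configuration.
    Return everything fed, directly or indirectly, by the changed element."""
    graph = {}
    for src, dst in mappings:
        graph.setdefault(src, []).append(dst)
    impacted, queue = set(), deque([changed])
    while queue:                       # breadth-first walk of the flow graph
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in impacted:
                impacted.add(nxt)
                queue.append(nxt)
    return impacted
```

Reversing the edge direction in the same walk gives lineage (where did this element come from) instead of impact (what does this element feed), which is why one set of mappings answers both questions.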
Finally, they used our model derivation to take elements across all of the different models that they had documented in phase one, take the best of that, rationalize it, and pull out their customer MDM model. And as with all of the models, they ensured that they had a conceptual model, a logical model, and a physical model. And then, of course, as it was sitting in the data governance tool, they had their integration model, or their configuration model. So now what they had was a conceptual model that they could validate with the business, to ensure that it had all of the elements that needed to be there and that it fit the business requirements appropriately. They had a logical model where they could see the true business rules around this data in action for their proposed MDM solution. They had a physical model, so depending on which way they chose to go forward, whether it was a build or a buy: if they were building, they could actually deploy the MDM model in the technology of their choice, whether through straight DDL generation in the tool or through our multitude of bridges that will take all of the information and metadata stored in an erwin data model and share and deploy it in all of the other metadata repositories and downstream technologies and tools that need to be in sync and governed in the same way. And if they were going to buy MDM technology, they could push this out as well. But most important, what they had was a true blueprint of their MDM to-be architecture that they could use to start driving out the documents, in terms of scope and requirements, to go to vendors and to really do a true analysis of the best solution for their requirements and their needs as an organization from an MDM perspective.
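The "straight DDL generation" step is worth a tiny illustration: once the physical model exists as structured metadata, emitting CREATE TABLE statements is a mechanical walk over it. This sketch assumes the same hypothetical model-dict shape used earlier, now with nullability; it is not erwin's forward-engineering templates:

```python
def generate_ddl(model):
    """model: {table: [(column, datatype, nullable), ...]}.
    Emit one CREATE TABLE statement per table from the physical model."""
    statements = []
    for table, columns in model.items():
        cols = ",\n".join(
            f"    {name} {dtype}" + ("" if nullable else " NOT NULL")
            for name, dtype, nullable in columns)
        statements.append(f"CREATE TABLE {table} (\n{cols}\n);")
    return "\n\n".join(statements)

print(generate_ddl({"customer": [("cust_id", "INTEGER", False),
                                 ("name", "VARCHAR(100)", True)]}))
```

In a real tool the same model metadata also feeds the bridges mentioned above, so the model, not hand-written SQL, stays the single source of truth for what gets deployed.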
This was extremely valuable in elevating the confidence of the business to go forward with the budget and the buy-in to get to the next step, which is starting to visualize, understand, and analyze how you're going to implement the MDM that you've specified from a business perspective using erwin. So, the benefits. It accelerated and enhanced their analysis cycles. There's a lot of information out there about how much time and how much money in any given data management initiative is spent just doing basic data analysis to understand the as-is. They found that they were able to cut that time and expense in half, based on the accelerators that came with using a data modeling approach versus the traditional approaches that they had, which were going out, doing analysis through interviews with as much metadata as they could squeeze out of any given technology, and then generally using Excel to try to bind these things together. So it was those analysis cycles that cut the time in half, based on their estimates and their results. As more of a soft benefit, they found that the interactions and the willingness of stakeholders across all of the silos to engage together, and the level of effectiveness in those interactions, increased significantly. So you had people that weren't used to being part of an initiative like this feeling much more comfortable and equipped to be a valuable part of that team. So the interactions, and the value that they got out of those interactions, accelerated significantly. Specifying their MDM requirements was a byproduct of all the work that they did, a natural byproduct. And having that analysis work done in an environment that could turn it into a specification of the to-be was very, very valuable to them.
Because now, in an environment they could leverage to get as detailed or as conceptual as needed, the same place where they did all the analysis was the place where they drove out those requirements and could reveal them to all of the different people they needed as they moved on to the next step of the journey. Next, instituting accountability for proposed MDM elements and processes: the business was much more confident and committed to the initiative based on what they could see in terms of control, accountability, and ownership of MDM from all perspectives of the business. And finally, what they'd done was create a repeatable process and a reusable facility, so that as they get to stage two, stage three, stage four, and start going through the other domains they wanted to master, they have an environment they can build on. So the value won't just be realized in today's initiative to get control of customer; it's going to be repeatable and reusable. The net result is a facility in which they can deploy governance across their entire data management and business data consumption infrastructure, plus knowledge of what went right and what went wrong, so they can get more and more efficient and effective at ensuring that the discovery and analysis required for any major data management initiative will be successful going forward.
And in the use case I've just described, you could really take MDM out and put in data warehousing, data lakes, data migration, legacy enhancement, any of the other use cases. Anything that has an impact on data management in your organization can easily reuse this approach, because those same steps will be the underpinning for all of those initiatives, and creating that foundation provides a centralized source of truth from an architecture, governance, and metadata perspective that will drive value in any initiative concerning data, and strategic data, in your organization.

Before we get to the Q&A, I did want to give a shout-out to one of the new and exciting offerings Irwin now brings to the market. For those of you who aren't aware, Irwin became a privately owned independent company last April. All of the Irwin technologies and the Irwin brand have moved, and we're now a very focused and agile company concentrating on architecture and governance. As the first step in fulfilling that vision and expanding our capabilities for our customers, we acquired a model-driven enterprise architecture tool delivered on a SaaS platform. So now, beyond the deep dive we have in data modeling, data architecture, and data governance, we have a model-driven solution that lets you integrate that very detailed view of data into information architecture, business architecture, application and portfolio architecture, infrastructure design, and security architecture environments, so you fully get the big picture. If you look at something like MDM or data governance, most of those initiatives are not just pure technology; there's process integration, and there's organizational integration, as we've talked about.
Now we have a solution that can give you that full perspective and make sure all of your key data management initiatives are well aligned not just from a data management perspective, but from an overall enterprise architecture and organizational efficiency perspective, driving innovation in a much more agile way going forward. The first instantiation of this is our Irwin Cloud Core, a cloud-based, SaaS-delivered data modeling and enterprise architecture bundle where you can really start to get your arms around what's going on in your organization, drive all of those strategic initiatives, make sure they're well aligned to business strategy, and take advantage of the innovation that's just waiting to happen in your organization. With that, I will take us to the Q&A.

Danny, thank you so much. Just a reminder, and to answer one of the most popular questions: I will be sending a follow-up email by end of day Thursday with links to the slides, the recording of the session, and anything else requested throughout. I'm diving right into the questions here, Danny. We get a lot of questions that are specific to industries. Is there a specific approach for different industries, or is it all the same? This particular question, from Minar Posse, is about heavy chemical industries.

It's funny, there's always a struggle here, because we have a solution that crosses all of the industry verticals, and our value proposition in terms of what we can bring is, I would say, fairly similar across them. Obviously, every industry has different and unique needs and challenges, but the real beauty of using a technology like Irwin is that it's agile and adaptable, right? It's not a solution for finance, it's not a solution for government, it's not a solution for petrochemical.
It's a solution for data management, and data management and data centricity bring value to an organization no matter what that organization is. So it is the same use case and the same value proposition, because no matter what your business is, I believe that leveraging data as a strategic asset, and leveraging the intelligence and the answers to questions that exist in the data you have and use every day to run your business, is the key to transforming and growing any business, no matter what the vertical.

Next question: were there existing business processes that changed, or were business processes required to change, as part of this method?

Well, the first business process that changed was leveraging modeling tools versus their traditional analysis foundation. Understanding how to use the tools and what the tools could do had an impact on a lot of the data analysis processes that were in place. In terms of business processes external to the organization, the consumers of customer data, it hadn't gotten that far in the pilot that we ran. But what they did come to understand was what was going to change in the master data around customer and what the implications for those processes were, and that's where they're now looking at taking the Agilier product to really do that analysis and define business processes for today and for the future. So they anticipate there will be some impact, but the real goal of the analysis was to harmonize all of the requirements so as to minimize how much the business needed to change because of what was done with master data management. Business processes should be changed for business purposes and for business advantage.
We really promote and enable organizations to get away from technologies, or specific approaches to managing data, dictating business processes, because that's just wrong-headed. You don't want to have to do things in your business simply because that's what the technology specifies. So what they did get was the true impact of this new technology and this new approach to managing and deploying customer data across all of their systems, and from there they could do the analysis to understand what the impact on business processes might be. The overarching goal was to let the business change its processes while making sure there was an agile foundation and backbone in the data architecture that could support changes driven by real business drivers and value to the business, rather than by technological requirements coming from a data management infrastructure.

Speaking of your clients: how long did it take to complete the pilot for the first domain for this client? What was the first domain, and approximately how long would it take to repeat the process for a second domain? And how many entities were defined for the first domain?

Okay. So the first domain was customer, and it's the only domain they've attacked thus far, so we don't yet have results on the benefits of the repeatable process. It took them basically a month to get everything documented, reverse engineered, and in place to start publishing for true collaboration, and basically three months to conclude the analysis and have the facility stood up with the information required for them to take the next step, which is starting to look at what their master data management solution would be based on the analysis that was performed.
So it was a relatively quick process. In terms of entities reverse engineered, I'm stretching my memory here, which is getting tougher and tougher all the time, but I believe we were looking at around 400 entities of information analyzed across multiple systems, and they broke that down to somewhere in the vicinity of 80 entities to really support customer from a centralized perspective.

So from your experience, and in your opinion, Danny: what are the best fully integrated MDM tools today for centralizing management and monitoring of the MDM data model across different MDM user types?

I would not say I'm an expert in that, and I'm definitely not in a position to do an overview of the technologies that are out there. What my experience tells me is that there are different approaches to MDM, and different organizations have very different requirements. Those may be domain-specific requirements, or they may simply be a reality of how they do business in terms of how they've built their IT infrastructure. There are a lot of technologies out there; some are considered legacy at this point on the road to MDM perfection, there are new cloud offerings, and, as we said at the beginning, there are a lot of different approaches.
So, unfortunately, there isn't one answer; there are a lot of good providers with a lot of good messages. My answer would be: make sure you're well armed with your requirements, and that those requirements truly come from a business perspective, so that when you talk to these different vendors you have a good idea of which approach is going to work best in the infrastructure you have and with your vision moving forward, and so that any technology you implement is agile and can iterate on those requirements and satisfy the needs of today, tomorrow, and the day after that. A lot of organizations are also looking to the cloud to lessen the overhead and impact of maintaining this infrastructure themselves, and that's a decision every organization has to make: where does it sit on the initial path, and what is the risk of deploying one way or the other? My only answer to that question is that there's a right MDM solution for everybody. The trick, when you're going to make that choice, is to make sure you have as much information as possible and are as fully armed as you need to be going into that battle, because everybody's going to tell you a story and tell you they've got the right MDM for you. You have to understand, and have a high degree of confidence in, what you need, from a technical perspective but even more importantly from a business perspective, so you can enter those conversations, negotiations, and deliberations as well armed as possible, with as clear a picture of your needs as possible. Then you can be an active and intelligent player in the conversation, and people will be forced to show how they're going to support you today, tomorrow, and the day after that.
We see this next question quite a bit, especially as companies take on big data: what role do NoSQL databases play in an MDM solution? Are they required? In what situations might they be better than traditional databases?

Well, MDM, by its very definition, is about master data, and the business doesn't care where the data is stored. Right now, my experience with a lot of what people call big data solutions is that they don't yet contain much of what we would consider master data, especially on the operational side. Some of those technologies may be appropriate for storage; people are going with data lakes and augmenting or innovating on their data warehousing approach. But again, the technology underneath is really not what I focus on; it's the business value of the initiative and how it impacts the bottom line of the business. So I'm probably not the best person to talk about the differences between big data or NoSQL technologies. It really comes back to understanding your requirements. NoSQL technology is great for managing different types of data, data that doesn't necessarily lend itself to the structure of traditional business data. But a lot of the targets for MDM are traditional business data, like product, customer, and vendor, and it's important to be able to implement rules associated with managing that data, which is exactly what relational DBMS technology has been very strong at. So understanding your requirements and your infrastructure as it sits today will guide you in terms of which technology should underpin it, and different vendors will use different technologies depending on what they want to deliver in an MDM solution.
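To illustrate that last point about rules, here is a small sketch, using SQLite purely as a stand-in for any relational engine, of the kind of master-data rule enforcement that relational DBMSs give you for free; the `customer` table and its columns are invented for this example.

```python
import sqlite3

# Illustrative only: a relational engine enforcing master-data rules
# (NOT NULL, UNIQUE) declaratively. Table and columns are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        full_name   TEXT NOT NULL,
        email       TEXT UNIQUE
    )
""")
conn.execute("INSERT INTO customer VALUES (1, 'Ada Lovelace', 'ada@example.com')")

# A duplicate email violates the UNIQUE rule and is rejected by the engine
# itself, rather than relying on application code to catch it.
try:
    conn.execute("INSERT INTO customer VALUES (2, 'A. Lovelace', 'ada@example.com')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False

print("duplicate allowed?", duplicate_allowed)  # False
```

This declarative, engine-enforced rule checking is the strength being referred to: for structured domains like customer, the rules live with the data, not scattered across applications.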
My game is not really to look and say one is better than the other. My game is to understand the business requirements and make sure that whichever one people are proposing to you is going to meet those requirements and not impede them in any way.

Does Irwin have a metadata management tool, or is there a best way to manage your metadata through Irwin?

Yeah, that's a great question. We talked about two products in this presentation. The first is Irwin Data Modeler, the active data modeling tool, the one that goes out and creates models, syncs them up with data sources, and gives you all of those different layers and the ability to describe and standardize. The second tool I talked about is what we call Irwin Data Governance. That is a metadata management tool, and much more than a metadata management tool, because it has governance constructs and capabilities layered on top; but it has a metadata foundation, and that foundation is metadata driven out of Irwin models as well as additional metadata we manage within that environment. So Irwin Data Governance is a metadata management tool, a data governance tool, and a data architecture tool combined. There's also the element of managing the modeling process itself: we have Irwin Data Modeler Workgroup Edition, which has a modeling repository. Just as you would have an operational system and an analytics system in your traditional business, we look at it much the same way from an Irwin perspective: Irwin Data Governance is the metadata repository for analytics, publication, and planning, while Irwin Data Modeler Workgroup Edition is your operational data modeling environment.
It handles managing the life cycle of a specific model in your organization: version control, allowing people to work on models and reconciling conflicts, and controlling who can change those models using Irwin Data Modeler, who has access to them, and what level of access they have. So again, it's a distributed repository approach: we have a repository fine-tuned for the data modelers, the people actively creating and maintaining data models, and then we harvest from that into our warehouse of metadata, models, and configurations, where we apply metadata management and governance in the data governance tool.

I love it. And that puts us right at the top of the hour; I'm afraid that's all we have time for today. Thank you so much for this great presentation, Danny, and thanks to our attendees for being so engaged, with all the great questions coming in. I'll remind you that I will be sending out a follow-up email by end of day Thursday with links to the slides and the recording. And there are a couple of questions in here, Danny, along the lines of "How can I get a POC for the new Irwin product?", so I'll include that in the follow-up email as well; make sure you look for it. Thank you, everyone, and I hope everyone has a fantastic day.

Thank you, Shannon. Thank you, everybody.