Hello and welcome. My name is Shannon Kemp and I'm the Chief Digital Manager of DATAVERSITY. We'd like to thank you for joining this DATAVERSITY webinar, Integrate ERP and CRM Metadata into Enterprise Data Models, sponsored today by IDERA. Just a couple of points to get us started. Due to the large number of people that attend these sessions, you will be muted during the webinar. For questions, we will be collecting them via the Q&A in the bottom right-hand corner of your screen. Or if you'd like to tweet, we encourage you to share highlights or questions via Twitter using hashtag DATAVERSITY. As always, we will send a follow-up email within two business days containing links to the slides, the recording of the session, and additional information requested throughout the webinar. Now, let me introduce our speakers for today, Ron Huizenga and Nick Porter. Ron has over 30 years of business experience across many different industries, including manufacturing, retail, healthcare, and transportation. His hands-on consulting experience with large-scale data development engagements brings practical, real-world insight to enterprise data architecture, business architecture, and governance initiatives. Nick is the technical director of Silwood Technology Limited, the originators of Safyr, the metadata exploration tool for large application packages such as SAP, Salesforce, Microsoft Dynamics, and Oracle EBS. He has over 35 years of IT experience, the last 25 in the information architecture space, providing tools and consulting to support projects needing an understanding of data structures for ERPs. And with that, I will give it over to Ron to get today's webinar started. Hi, Shannon, and thank you for that introduction. Hopefully this demo won't go south because I got that Purple Peep Leader song stuck in my head, but we'll go from there.
What Nick and I want to talk about and demonstrate to you today is how you can actually harvest and utilize ERP and CRM metadata in your data models. A lot of people think about data modeling in terms of designing and then generating DDL for your applications, but using data models to really understand your data landscape is extremely important as well, and that means being able to reverse engineer existing data stores. What we're going to talk about in particular today is how you can actually harvest that metadata out of those ERP solutions into your data models and stitch it into many of your enterprise models as well. Again, we'll talk about harvesting that data. Nick is going to show you how to do that with Safyr for the ERPs, and then we'll create the data models. And then what I'm going to do is take that a little further and talk about how we can incorporate some of that into our enterprise data architecture concepts by expanding on those models and also building in things like governance properties, and expanding through data flows and lineage and things like that. And then, of course, we'll also have time to answer questions at the end of the session. In terms of any environment, whether you're using CRM systems, ERPs, or your existing databases, you really want to find a way to model and govern your data. Data models are really your visual way of getting through and figuring out what's in your environment. And there are a few basic steps that you need to go through to be able to do that. First of all, you need to know where that data is, you need to identify what the data stores are, and you need a method of reverse engineering them. Once you've done that, you then need to apply an extra layer, or basically interpret it a little bit, in terms of really identifying what it is.
Quite often there will be a lot of cryptic names and things like that, so you need a way to actually translate that into meaningful names. So that's using things like naming standards, and Nick's going to show you some things around actually extracting data out of the metadata catalogs in the ERPs, et cetera. Another thing that we want to do is make sure that we are identifying entity instances. And what I mean by that is that throughout our organization, we may have a lot of different data stores that represent different aspects of the same concept, like customer, so we want to have a mechanism of linking all of those together, so we know where we're storing data across our organization about these key concepts that are important to us. Another thing that's important, as data moves through the organization, is where did a particular piece of data come from? So we'll talk about things like visual data lineage. Something else we'd want to do to augment that is actually showing how data is used in our business process models in the organization as well. And then of course putting more meaning against it, utilizing data dictionaries, tying things into business glossaries, to really apply that business meaning to what's going on. And then ultimately, how do we govern this data, which is extremely important today? So what I'm going to talk about is how we actually apply master data management principles to it, and I'm going to talk about things called attachments and enterprise data dictionaries that will allow us to do that. We'll talk about security classifications and also things like regulatory policies, so we can build the governance around all these different types of data. Now, in terms of identifying where the data stores are, we would typically reverse engineer using the data modeling tool to pull the information in.
But when you do that, you're going to get some things that are a little more cryptic than you're used to. And this is where Safyr really comes in, to not only pull in those data structures, but also the meaning of those data structures as well. So with that, I'm actually going to turn it over to Nick, and he'll talk about how we actually start to do that first step with Safyr. Thanks very much, Ron. Hello, everybody. So just really to amplify a little on what Ron's already started to talk about. We were really talking there about accessing the data models, the data structures if you like, in these large application packages. And these days, most large companies will have one or more of these systems in their data landscape. So why would you want to do this? Why would you want to be able to get data models out of these systems? Obviously, if you've got something like SAP or PeopleSoft or Oracle eBusiness Suite, that's going to contain a lot of very important business information that you would want to be able to understand and to govern, just like you do any other data source within your organization. We often hear people say, well, something like SAP, it's just a black box, and we don't really understand what that system is about. Well, it's full of really important business information, and there's really no reason why you shouldn't be able to understand that system just like any other of your data sources. And data professionals like yourselves should be able to do your job on any of the data sources within your enterprise. And obviously data modeling is a very good way to understand the contents of these large application systems. Ron, can you move to the next, please? Now, there are some problems in doing that. As Ron has already mentioned, you could take ER Studio and point it at a relational database and reverse engineer it.
It does a very good job of that, so you could attach it to an Oracle, SQL Server, or DB2 database, something like that, and reverse engineer from the system catalog. Now, could you do that with a package like SAP? In theory, you could. But what you will find is that in the case of a package like SAP, the database system catalog does not store what you might call useful metadata. I'm going to talk about the packages we can work with in a minute or two, and each of them varies a little bit in how they name tables and columns, but something like SAP in particular has very cryptic table and column names. It's very hard, just by looking at the physical table and column names, to understand the structure of the system. And also very importantly, none of these packages use primary or foreign key constraints. Even if you can work out what the tables store, then working out how they are related, working out how, for example, to join, say, customer to order, that's not performed by database constraints. Each of these packages enforces referential integrity in the application layer. The relationship definition is done by the application rather than the database itself. Now, these packages are also very large in terms of metadata. SAP is probably the biggest. A typical SAP system is around about 90,000 tables and around about a million attributes. And of course, they are storing information about a very wide range of functionality in an organization. So even if you could reverse engineer the physical database with ER Studio, you would have a data model of, in the case of SAP, around about 90,000 tables, with no relationships and no really meaningful table and attribute names. Now, some of the other packages are not quite so bad as that. Something like PeopleSoft is typically around 20,000 tables.
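To make that catalog-versus-dictionary point concrete, here is a minimal Python sketch of the kind of name translation involved. The KNA1 pairing matches the table discussed later in this session; the other entries and the helper function are purely illustrative, hard-coded stand-ins for what a tool like Safyr reads out of SAP's own data dictionary layer rather than the database system catalog.

```python
# Hypothetical, hard-coded sample of SAP-style dictionary metadata.
# A real tool reads descriptive names from SAP's data dictionary;
# the database system catalog only exposes the cryptic physical names.
CRYPTIC_TABLES = {
    "KNA1": "General Data in Customer Master",
    "KNB1": "Customer Master (Company Code)",
    "VBAK": "Sales Document: Header Data",
}

def describe(table_name: str) -> str:
    """Translate a cryptic physical table name into its descriptive name."""
    return CRYPTIC_TABLES.get(table_name, "<no description available>")

print(describe("KNA1"))  # the customer master table from the demo
```

The point of the sketch is simply that without the dictionary layer, all you would see is "KNA1".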
Some of the other systems we're going to talk about, like Salesforce, are much, much smaller. A typical Salesforce system is only around a few hundred tables. Another aspect to consider is customization. Now, despite the fact that there are something like 90,000 tables in an SAP system, most customers add their own objects. They'll add their own tables or add attributes to existing tables. So you also have to take account of the fact that these systems will have some level of customization. And it's also very common, especially with large organizations, to have multiple instances. I can think of one of our customers in the health industry who had in excess of 50 different instances of Salesforce, all different. Ron, could we have the next slide, please? So this is really what Safyr is about. It's about providing a solution to understand that complex set of metadata. It's a piece of software that makes ERP and CRM metadata easy to access, understand, and share. And the idea of the product is that data professionals like yourselves should be able to explore the metadata in these large packages, just like you can other systems. And you shouldn't have to be an expert in something like SAP or JD Edwards or PeopleSoft in order to do that. The way that Safyr does that is by accessing the data dictionary of each of these systems to get the metadata. So whereas ER Studio will reverse engineer a database by looking at the system catalog, the Oracle system catalog or the SQL Server system catalog, we're doing something functionally similar, but by looking at the proprietary data dictionary layer of each of these systems. Now, one important point to get across: I think when people first start to learn about Safyr, they imagine it's just a pipe, that you plug one end into something like SAP, you plug the other end into ER Studio, and you pump metadata across.
Now, it's more complex than that because of the size of these systems. Because they are so large, a lot of the functionality of the product is about helping you to do what we call scoping, and that is to subset this large amount of metadata into more manageable chunks. I'm going to give you a little flavor of that when we look at the product itself. So my last slide before we look at the product: this diagram kind of sums up what we do. The systems that we can work with are on the left. So pretty much any release of SAP; the Oracle-owned applications, so Oracle eBusiness Suite, PeopleSoft, Siebel, JD Edwards; Salesforce and Force.com-based applications; and Microsoft Dynamics AX. And we do have a capability we call ETL for metadata, which allows us to get at systems that aren't on that list, but I'm not really going to talk about that today. So in each of those cases, for those packages, we do this effectively reverse engineering of the metadata, pulling it back from the data dictionary layer of each of these systems, and then we load it into what we call the Safyr repository. The Safyr repository is typically a database schema. Most customers will use something like Oracle or SQL Server to store the metadata we pull out of these systems. And then this scoping work that I mentioned earlier takes place, where you can start to slice and dice and subset the results before you then generate a model into ER Studio. And the kinds of projects where Safyr gets used are the sorts of projects shown on the right there. I always tend to think of it as anything with the word data in the project title. So data governance, master data management, data modeling, any of those kinds of data-related projects where you want to understand data from any of the systems on the left of this diagram is where Safyr gets used. So if we can now switch into looking at the product. So Monica, if you could make me presenter now.
Thank you. I'm just going to share my screen. What I'm going to do is show you, in the first instance, an SAP system. This is an SAP system that's already been reverse engineered by Safyr into the Safyr environment. If I were going to do that reverse engineering, there is a wizard to help you connect through to the source system. Obviously the method for connecting varies by which package we're looking at. The way we do it for SAP is a little bit different to the way we do it for something like PeopleSoft, which is also different from the way we do it for something like Salesforce. But in each case we're reverse engineering from the data dictionary. And once we've done that, what we call the scoping tools come into play to allow us to start looking at the result. So let's start with this one. This gives us a list of all the tables that we've reverse engineered from the SAP system. This is a fairly typical SAP system in its scope. If you look to the end of this list here, you can see that in our particular system there are just short of 100,000 base tables. And if we were to look into the database system catalog, these are the kinds of table names that we would find; pretty hard to understand what each of those is about based upon that name. And then this column gives us a descriptive name, which we've brought in from SAP, which gives you much more of a clue about what each of those tables is for. I'm going to talk about these other columns in a moment. So Ron mentioned the idea of looking for customer. One of the most important concepts, obviously, in any of these packages would be customer. So let's do a wildcard search on customer master. We're going to look for any table that's got the string customer master in the descriptive name. And you can see this has now brought the nearly 100,000 tables down to 58.
And one of the pieces of information we also extract when we pull the metadata out is a row count, so we can see how much data is in each table. Now, this is just our own SAP sandbox. We only use SAP for development, so it's got relatively small amounts of data in this particular system. But you can see a number of the tables have got no data in them. And one of the other methods we've got for doing the scoping work is to say, well, I only want to see the tables that actually contain data. So if I wanted to, I could subset that to tables that have got the string customer master in the description and also contain data. And then we have these columns here which show us the number of relationships. So this top one, customer master data rebate processing: no child tables, nine parent tables. In other words, it gets foreign keys from nine tables and gives its primary key to no other tables in the system. And what I can do is rank the result set by number of relationships. Sorry, we have a request: is there any way you can zoom in a bit? Sure. I've actually got it on an external monitor, so I'll switch over. Tell me if that's better. Is there a way to maximize the screen in the model view there? Not in terms of character size at the moment, no, not without me going and changing the settings. That's fine. All right, sorry about that.
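The scoping steps just demonstrated, a wildcard match on the descriptive name combined with a row-count filter, can be sketched in a few lines of Python. The table list here is an invented sample (only the KNA1 description comes from the demo); Safyr applies the same idea to the full set of nearly 100,000 tables.

```python
# Illustrative scoping filter: find tables whose descriptive name contains
# a search term and which actually hold data. Sample rows are made up.
SAMPLE = [
    {"name": "KNA1", "description": "General Data in Customer Master", "rows": 1200},
    {"name": "KONV", "description": "Conditions (Transaction Data)", "rows": 0},
    {"name": "KNVV", "description": "Customer Master Sales Data", "rows": 300},
    {"name": "KNB5", "description": "Customer Master Dunning Data", "rows": 0},
]

def scope(tables, term, require_data=True):
    """Subset a table list by descriptive-name wildcard and optional row count."""
    term = term.lower()
    hits = [t for t in tables if term in t["description"].lower()]
    if require_data:
        hits = [t for t in hits if t["rows"] > 0]
    return hits

matches = scope(SAMPLE, "customer master")
print([t["name"] for t in matches])
```

Dropping the empty tables is exactly the checkbox shown in the demo: the description match alone finds three tables, but only two of them contain data.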
So by sorting by number of relationships, the table that's come to the top has got 847 child tables. And this is a table called General Data in Customer Master. If you went and asked anybody in the SAP team, they'd say, oh, KNA1, which is its physical name, is the main table for storing customer master information in SAP. And if we double-click on that table, now you can see the columns which make up KNA1. Again, you can see the fairly cryptic column names; not really easy to understand what the contents of each of those is based upon that name. And then you've got this descriptive name for each of the attributes. You probably also see that as I click on each attribute, we get an attribute definition. Again, all this information has come from the SAP system itself. SAP have done a pretty good job, really, of documenting their model, so you do tend to get quite comprehensive documentation about each object type. So this looks like an important customer table. And what I can do now is look at how it's related to other tables in the system. I can open up this panel, which shows us the 51 parent tables and the 847 child tables down the side here. And then, with this little checkbox, I can filter out any of those related tables that have got no data in them. So having found customer master, there are now 125 child-related tables that have actually got data in them. And there are things like purchase document header, for instance. As you'd expect, when you enter a purchase document, you need to tell it which customer it's for. And then there's a whole series of other customer master tables, which store additional customer master information, things like dunning data and bank details and transaction figures. So here's an opportunity now to start recording some of this information, and we can use what we call a subject area to do that.
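The ranking trick used to surface KNA1, sorting tables by how many child tables they feed, is simple to sketch. The relationship counts below are sample values (only KNA1's 847 reflects the demo); the helper is an illustration of the technique, not anything from the product.

```python
# Rank candidate "hub" tables by number of child relationships.
# A table with many children hands its primary key to many other tables,
# which is a strong hint that it is a central master-data table.
REL_COUNTS = {"KNA1": 847, "KNB1": 112, "KNVV": 95, "KNB5": 3}

def rank_by_children(rel_counts):
    """Return (table, child_count) pairs, most-referenced first."""
    return sorted(rel_counts.items(), key=lambda kv: kv[1], reverse=True)

print(rank_by_children(REL_COUNTS)[0])  # the main customer master table
```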
A subject area is just a folder where you can group together items of interest. I've got a series of already-built subject areas; there are things like the billing tables and claim bundle and consolidation, and each of these tables belongs to a subject area. I'm going to make my own, a new one called demo. Then I'm going to take KNA1, the customer master table, and drag and drop it into the subject area. And then I'm going to take some of these other customer master tables and add those to the subject area as well. This subject area is also the mechanism by which we export into ER Studio. So what I'm going to do is take that set of tables and generate a file format that we can then load into ER Studio. It's just generating the model now. And if I switch across into ER Studio, I can import that file, if I can find it. It'll now process the file, and I get a data model based upon that set of objects that I exported. Now, I'm not going to go through this too much, because Ron's going to show you this in a bit more detail. But here, for instance, is General Data in Customer Master; that's the table we started off with. It's a fully populated data model based upon the set of objects that we discovered within Safyr. Obviously it's using the business names. We have the physical names for each object as well, but it's using the business names, which makes it much easier to understand than if we were just using the technical names. So I'm just going to briefly show you that it's not just for SAP. I'll switch into looking at a Salesforce system now. The Salesforce system is much smaller; this particular system is only 400 tables. But the way that you go about exploring it, the way you go about slicing and dicing it into subsets, is exactly the same as it is for SAP, and the creation of subject areas and the mechanism for exporting into ER Studio are exactly the same.
So you only really need to understand how Safyr works once to be able to use it with any of the package types that it works with. And that's as much as I really wanted to show you. Obviously there are a lot more features; I've just given you a flavor, really, of what the product can do. So Monica, I think we'll now pass back to Ron, and Ron's going to show you in a little bit more detail what you can do with these models. Thank you, Nick. I'm just sharing my screen now, and we're coming up to an ER Studio view here. So what I'm going to do is start by going back to that enterprise modeling concept, and then I'll talk about these models in general in a few minutes. What I've got here is just a very quick representation of a few data structures that might comprise something like an enterprise logical model. And what I'm really interested in is documentation. So if you look at something like the customer that I have in this model, up at the top, what you'll see, the way I've set my display options here, is that I'm showing the customer in this top right entity. I've got the definition there from my data dictionary. And I've also got a number of things showing, such as different classifications. These are called attachments, or some people may call them metadata extensions as well. But it really allows me to go through and classify this type of information. So if I just go into that, I'll bring up the editor in a second here and show you a few concepts about that particular metadata extension. Again, they're called attachments because we can attach them to any concept in the models themselves. So on this one for the customer, I've set up different categorizations, such as master data class: whether it's master, reference, or transactional data. In terms of data importance, what's the business value? Again, the customer tends to be extremely important in most businesses.
So in terms of business value, we put high, because that also has an implication for data quality: we really want to make sure that we're looking after the data. There are other things such as retention policies, whether the data originated internally or externally to the organization, and some other measures like volatility and who the data quality stewards are. You're limited only by your own creativity. You can define as many of these constructs as you want and attach them to the different concepts in the model. The way we actually do this is through our enterprise data dictionary. So I'm just going to go to a data dictionary tab right now, and I'm using an ER Studio environment that's actually using enterprise data dictionaries. If I look in this folder here, these are my governance attachments, and as you can see, I've got business value, data origin, and master data class, as examples. I'm just going to go into the master data class. There's the name for it. I can put in a detailed description of what it's for, but I can also have my list of values, including my default value and everything else. The beauty of this is I create this once in my enterprise dictionary, and then I can use it for any model that I check into my repository. So now we're sharing these constructs across all of our models. And that allows me to do things like show me all my master data tables, or all my reference data tables, across my models when I go through and do these types of linking. Another thing that's very important, and I'm just going to show you this now, is the security properties that you see on this particular customer master. If I go over here to the security information, I also have things that I can define here. It's another manifestation of these attachment, or metadata, constructs, but I can set privacy levels like highly confidential, security impacts, and those types of things.
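The attachment idea, a named governance property with a bound list of values and a default, can be modeled in a few lines. The property name and value list below mirror the master data class discussed in the talk; the class itself is a hypothetical sketch, not ER Studio's actual attachment implementation.

```python
# A rough sketch of an "attachment" / metadata extension: a named
# governance property with an allowed-value list and a default value.
class Attachment:
    def __init__(self, name, allowed, default):
        assert default in allowed, "default must be one of the allowed values"
        self.name = name
        self.allowed = allowed
        self.default = default

    def validate(self, value):
        """Accept only values from the attachment's value list."""
        if value not in self.allowed:
            raise ValueError(f"{value!r} is not a legal {self.name}")
        return value

# Defined once in the enterprise data dictionary, reused by every model.
master_data_class = Attachment(
    "Master Data Class",
    ["Master", "Reference", "Transactional"],
    "Transactional",
)

print(master_data_class.validate("Master"))
```

Defining the value list once and validating against it everywhere is what makes cross-model queries like "show me all my master data tables" reliable.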
And again, it can be virtually any data type, list of values, and those types of things. The beauty is that we're actually showing this directly on the models themselves. What I'm going to do now is just take a quick look through a couple of these other things. I'm going to start with the SAP customer. This is what Nick had actually brought in. And when I look at this, I can see the same data models that he brought in. I've just laid it out a little bit differently, but I'm seeing all this information that came in from SAP through the Safyr tool. As you'll see, I have all my referential integrity here as well, and that's because part of what Safyr does is create those links. So those come in as referential constraints in my model. I have the relationships shown, which is extremely important when I'm knitting together this type of enterprise model. I'm just going to go back to this tab here. This is the main model here. What I can also do is go to another viewpoint of it. So now I've gone to a different view that's actually showing the actual names of the columns, the data types, and those types of things. I'm not going to drill into it right now, but if I were to drill in and look at the tables, the columns, and those types of things, I would see all the same types of information that Nick was showing in terms of the documentation that was in there, the definitions and everything else. All of that has come out of Safyr into your ER Studio data model. So it really lets you build up that very rich specification of what's there. Okay, now I'm going to go to the Salesforce model and show you the same type of thing here. Now, the Salesforce model I've done a little bit differently, just to show you the same type of thing. I'm showing it in the same type of format as I did with that enterprise model. So in Salesforce, there's not something called customer.
It's actually called account, but it's an equivalent concept, so we're showing that. And what I've done is I've actually gone through, and for each of the tables in this particular subject area, I've provided the same types of classification information as I did in that enterprise model I was working in earlier. So again, I'm really building out details around how I manage and govern all these different types of data across these different types of models. I'm going back to the enterprise model very quickly here, because there's something else I really want to show you, something that's extremely powerful: the where-used capability. What I've got here is a concept of customer on this model, but what I've also got is something called universal mappings. What I'm able to do with my repository is link like constructs together. So I've gone through and taken this customer from my enterprise model and linked it to equivalent constructs in other models. From an AdventureWorks type of model, I've got things like that, and if I go into things like the SAP customer, I can see the logical tables that are there. Those are all linked in, and this is an extremely powerful capability, because it lets me see where all these entity instances actually originate across the organization. Just to put that in a bit of context for you, I'm going to go here and show you this conceptually. So what I've done is I've taken this enterprise model that I've shown you, and through universal mappings I'm linking across all my different implementation models. So for instance, when you're talking about customer, you see I've got customer in one model. In another implementation model, it might be called client. In another one, it might be called customers. And as we saw in Salesforce, it's called account, and those types of things.
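Universal mappings can be pictured as a lookup from an enterprise concept to its equivalent construct in each implementation model. The concept and synonym names below follow the talk (customer, client, customers, account, KNA1); the dictionary structure and helper are an illustration, not ER Studio's internal representation.

```python
# Universal mappings sketched as: enterprise concept -> {model: construct}.
# Model and construct names follow the examples given in the session.
UNIVERSAL_MAPPINGS = {
    "Customer": {
        "Enterprise Logical": "Customer",
        "AdventureWorks": "Customers",
        "CRM": "Client",
        "Salesforce": "Account",
        "SAP": "KNA1 (General Data in Customer Master)",
    }
}

def where_used(concept):
    """Answer 'where is this concept stored?' across all mapped models."""
    return sorted(UNIVERSAL_MAPPINGS.get(concept, {}).items())

for model, construct in where_used("Customer"):
    print(f"{model}: {construct}")
```

The payoff is exactly the where-used query from the demo: one question, answered across every model regardless of what the entity happens to be called locally.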
It doesn't matter that the names are different. I have this mechanism that links them all together, which is very important, because now you can do things like a where-used query to say, where is all my customer information in my organization, which is extremely powerful. What we can also do with ER Studio, of course, as we're building this up and linking these constructs across all of our different models, is export all this information through a report out of ER Studio Team Server as well. So now what I've done is the same type of thing. I've got all these different models listed down the left side of my spreadsheet, I've got the entity names from the different model areas that I'm looking at, and I've got them tagged as to whether they're master, reference, or transactional data, and I can build that up. And the nice thing is I can take this, like I've done here, into something like Excel and build myself pivot tables. Here I'm looking at kind of a model-first view. I could look at it from a data type perspective first, and I could actually slice and dice it and get a real view of how I've classified all of this metadata across my organization. Okay, what I want to show you as well, and I'm going to go through this very quickly, is the same type of information from other models that Nick has done with PeopleSoft. So here's the basic type of information out of PeopleSoft, again, that he was talking about. He also mentioned Dynamics AX, and I've been able to go through and do the same types of classifications on these different models from these different platforms. So it really lets you tie together that enterprise picture of what all this information is. Now what I want to do is go through and actually show you a couple of things about how we can build some more information around it.
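The exported spreadsheet and pivot table can be imitated with a small stand-in: rows of (model, entity, master data class) rolled up into counts per model and classification. The rows here are invented for illustration; the real export carries every entity from every published model.

```python
# A tiny stand-in for the Team Server export: count entities per
# (model, master-data-class) pair, the same slice the pivot table gives.
from collections import Counter

ROWS = [
    ("Enterprise", "Customer", "Master"),
    ("SAP", "KNA1", "Master"),
    ("Salesforce", "Account", "Master"),
    ("SAP", "VBAK", "Transactional"),
]

def pivot(rows):
    """Roll the export rows up into (model, class) -> entity count."""
    return Counter((model, cls) for model, _entity, cls in rows)

for (model, cls), n in sorted(pivot(ROWS).items()):
    print(f"{model} / {cls}: {n}")
```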
When people think of data lineage, quite often they think of things like: when I'm taking information out of my source data stores, loading it into a staging area, and then moving it into my data warehouse, I'd like to see the sources and targets and the transformations that occur. However, lineage is more than that. It's actually mapping the data's journey through the organization. So even though we're not doing ETL in this respect, we can actually use lineage models in ER Studio to show how information comes through and transitions between different systems or applications as well. What I've done here is a very simple example. Assuming that we start with our prospecting in Salesforce, here's the account, or one of the account tables from Salesforce that we were talking about, that we brought in with Safyr. I'm going to do some basic transformation mapping on that information, and I'm going to load it into my accounting system, which in this example happens to be PeopleSoft. So I can now document and design what my interfaces look like to actually move this data throughout my organization. And by building up these lineage diagrams, you're building up that chain of custody of the information throughout your entire organization. Now I'm going to switch gears a little bit and go into our Team Server application, which is basically our portal to the world. This is actually my model explorer, so I can see the same types of information when I publish these models out to Team Server. As an example, I'll take Dynamics AX. I can actually see the thumbnails of the models as I'm going through. So I'm just going to click on one of these, and the same model that I saw in the data modeling tool is going to show up in my interactive viewer here, if my server actually wakes up. Just bear with me a second here.
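A lineage model is, at heart, a directed graph of source-to-target hops, and tracing where a piece of data came from is a walk backwards through that graph. The Salesforce-to-PeopleSoft flow below follows the example just described; the staging node name and the helper function are illustrative assumptions.

```python
# A minimal lineage edge list for the Salesforce -> PeopleSoft flow,
# with a hypothetical staging hop in between.
EDGES = [
    ("Salesforce.Account", "Staging.Customer"),
    ("Staging.Customer", "PeopleSoft.Customer"),
]

def upstream(target, edges):
    """Return every source that feeds `target`, directly or indirectly."""
    direct = [s for s, t in edges if t == target]
    found = []
    for s in direct:
        found.append(s)
        found.extend(upstream(s, edges))
    return found

print(upstream("PeopleSoft.Customer", EDGES))
```

Walking the edges backwards from any target is what gives you the chain of custody: every system a value passed through on its way there.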
I think it may have gone to sleep while I was doing this in the background. So we'll skip that for a second, and I'll go into my glossaries, because there are a couple of other things that I want to show you here. I want to go back to that same concept, so I'm going to click on the model terms first. I'm going to do the same thing Nick did and search for customer. So I've got customer, which is a business term here. I'm going to click on customer, and what I've got is the business definition from my business glossary for this customer term. But what I'm also able to do is associate these terms with my other constructs in the models. So just like I linked model constructs together for customer, I can now associate these business terms with them as well. If I click on this Related ER Objects tab, here's where I'm seeing every place I've related this business term of customer back to those entities in my different models. So I'm seeing it from a number of models in addition to the ones that Nick brought in. I have e-commerce models, those types of things. I have a data warehouse where I've done it. As I start to go down, here in the middle is the one from the PeopleSoft table, and you'll actually see the definitions that came in from PeopleSoft through Sapphire as well. I see the same thing down here on the logical models, and if I expand this out — I'm just going to increase the number of entries per page — you start to see all kinds of things. So in SAP, where customer was broken up across all those different sub-tables or sub-types that were brought together on those tables, I'm able to see all of these here as well, and they're all linked together, tied to that particular term, which is extremely important to be able to do. Now, there are other things that I can do here as well.
So now I'm going to go into the customer table. I'm actually going to go back up here, back to my model. Bear with me here. Whoops, I lost my... back into my model explorer. I'm going to go back into this enterprise model again, and I'm going to click on the customer table itself. So again, you can see that I've got other things going on here as well. This is actually retrieving all that metadata from that customer master that I built. And of course, this is my portal. What I see as well is that I've got security properties there, and I have alerts turned on. So I can see that working with customer data may have a high security impact and that it's also highly sensitive data. The same properties that I saw in the data model itself, I'm seeing here. So I've got things like the definitions that have come across, my full list of attributes that have come across, and the attachments that I showed before are actually here as well. All of that information is showing, and of course the security properties and the universal mappings. So again, all those links that I was able to show previously, I can show them here as well. From this customer, I can see that I actually have customer linked out to the entities in all these other models. And you're even seeing the physical names that Nick was talking about in terms of the physical constructs. Another aspect of this is that I can take that same type of information and extract all of my universal mappings out. So here's another thing that I've done to give me an enterprise perspective: I've got the different types of models and the different kinds of concepts from my enterprise model across the top axis, and then I'm showing where each is implemented in all the other models in this environment as well.
So again, it gives me a very rich specification of all the linked concepts across the models that I'm tracking in my environment. Of course, in a large enterprise environment this can get very large, so you want to slice and dice it, but all that metadata is there at your fingertips as you get a handle on managing your organization's data. Other things that we want to do here: I've got customer here, and I also want to look at related terms for my customer. Again, this is all tied through my data dictionaries. I see that related business term that I was talking about earlier, so that's tied in, but I'm seeing other things as well. Our data dictionaries are actually built up so that you can not only have definitions, but also things like governance policies and everything else. So what I've got here is something like right of access, which is actually a related term — or a related governance policy — for this customer concept. If I click through on that, it tells me that it's an entity type of policy, it tells me the status of it, and this is the actual definition of the policy as well. And I can go above and beyond this. I could see other related terms. Which related glossaries did it come from? Well, this actually came from my GDPR data policies, which of course roll up into my overall data governance policies glossary as well. So by linking through all these constructs, I'm able to build up a very rich specification that really helps me define, manage, classify, and understand the data, and actually attach governance to it as well. And the reason we're able to do this goes back to the beginning: identifying those data sources, then bringing them through and tying all of these different constructs together. With that, we're at the end of the formal presentation. So we will open it up to questions.
Ron and Nick, thank you so much for this great presentation. Just to answer the most commonly asked question coming in: a reminder that I will send a follow-up email for this presentation by the end of Thursday with links to the slides, links to the recording, and anything else requested throughout. There were a lot of great questions coming in for both demonstrations here. So, how does a Sapphire model get to DA? That's the first question for you, Nick. That's Data Architect — ER Studio Data Architect, presumably. So we generate a file format called ERX, which you can then import into ER Studio. Actually, I did do that as part of the demo; forgive me if it was a bit small. You generate a file format called ERX from Sapphire, and then there's an import feature on the ER Studio menu system where you just point it at the ERX file and it loads it in and populates the model. Ron, this one came in with your demo: is it a reasonable exercise to try and attach this type of categorization to 10,000-plus tables in a package, or just the important ones? I can speak to that. Yeah, basically, that kind of falls under the boil-the-ocean category, right? What you want to do is find the information that's important to you. That's why I start with an enterprise type of model, with the constructs that are important to the organization, and I start linking through from there. Typically, a good way to start is with your reference and master data constructs, because they're the ones that are actually used in all your business transactions and the ones for which you want to make sure you have really good, high-quality data. So I would definitely start by categorizing all my master and reference data constructs.
Now, the nice thing, in terms of the way you set up attachments, is that you can set up default values; there are ways to manipulate that with the attachments editor to categorize things very quickly in the tool as well. But I would start with those higher-order concepts first. What version of ER Studio is needed to combine the two? From an ER Studio perspective, all of our versions have been able to import ERX files. So even ER Studio versions that are a few years old will still pull in the ERX files. Of course, you get more and more capability with the later releases in terms of features that are added, but you can bring them into virtually any version of ER Studio. Does Sapphire work with software-as-a-service applications where you cannot access the database directly? Salesforce, yes, and Dynamics. In the case of Salesforce and Microsoft Dynamics AX, we're using metadata APIs to get to the system. But we don't do all available packages. So SuccessFactors, something like that, we don't do at the moment. It's the list of package types that I showed you on the architecture slide. Ron, this is a very generic question, and there's not a lot of information here, but: how much time is needed to implement ER Studio in a telecom company? That's an interesting question. It really depends on the complexity of your environment. You can be up and running very quickly, but then of course the real work is actually finding all your data sources, reverse engineering your databases, and, if you're doing ERPs, harvesting them with Sapphire. So it's really a matter of how well organized your project is and whether you know where that actual data resides. If you know where the data resides, then it's really a matter of going through and pulling this information in. To give you an example of that, before I was a product manager, I was a consultant.
So a lot of my work in recent years was going into organizations, finding out where their databases and data stores were, reverse engineering them, and then starting to knit that information together into what the enterprise landscape looks like. You would be amazed at how much work a seasoned data modeler can do within a couple of weeks once you're up and running. Typically when I went in, I would have the walls papered with data models after a couple of weeks, really getting into the nitty-gritty of tying all these information constructs together. A similar question for you, Nick: how long would it take a data modeler to learn to use Sapphire, and what does it cost per user? Well, maybe we don't need to get into cost today; we can certainly include the cost in the follow-up. But how long does it take to learn? Yeah, it's a fairly straightforward product to get familiar with. Just to give you some idea, there is a standard training course, which is a day, and that will give you really everything that you need to get familiar with the product. I think if you're a data modeler, and if you're used to using a tool like ER Studio, you're going to find it very straightforward to use Sapphire. If you're familiar with data modeling concepts, that's going to give you a very good head start. And how long does it take Sapphire to import a package? Yeah, that's another good question. SAP, being the biggest, takes the longest. I would say typically about an hour to reverse engineer a typical SAP system. Something like a standard Salesforce system will take literally two or three minutes, because it's only a few hundred tables. PeopleSoft, JD Edwards, Oracle E-Business Suite — those kinds of systems may be 20 minutes to half an hour to pull back a full system. SAP is by far the longest.
We know our systems are fairly fast for development purposes, but I tend to say between an hour and two hours is what we would advise customers for a full SAP system. And how much time does it take to reverse engineer Oracle EBS, for instance? Yeah, Oracle EBS, 20 minutes to half an hour. Nice. A lot of questions about time — that is obviously a very important thing in the world. So how does it work with MDM tools? I'm not quite sure what that means. Master Data Management tools. Correct. I can take that one if you want. Yeah, sure. Okay. Typically, your master data management tools would be something separate from something like a Sapphire bridge, because you're actually building your master data stores. For your MDM tools, you would actually want to have that master data model, so it would probably want to be based on something like that enterprise model concept that I was showing. Now, that said, what you can do is export metadata from ER Studio to pull into MDM tools. Another thing you can do, in something like the Team Server that I was showing, is set up glossaries for your reference data. So if your MDM tool has exposed URLs for the different constructs, then you can build yourself a glossary of the master data for the different constructs and link it back to the data constructs it applies to through those hyperlinks. What are the reporting and export options for Sapphire? So obviously we can export the model into ER Studio, like we showed. There are capabilities to also export the metadata into Excel, so you can break the model down into a series of sheets in an Excel spreadsheet; that's very popular with our users. We can export the model into CSV and XML.
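The Excel and CSV exports Nick describes amount to flattening the model into rows, one file or sheet per object type. As a toy illustration only — not Sapphire's actual output format, and the table and column names are just examples — writing table and column metadata to CSV might look like:

```python
import csv
import io

# Hypothetical model fragment: tables mapped to their columns
model = {
    "KNA1": ["MANDT", "KUNNR", "NAME1"],
    "ACCOUNT": ["Id", "Name"],
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["table", "column"])  # header row
for table, columns in model.items():
    for col in columns:
        writer.writerow([table, col])

print(buf.getvalue())
```

A flat file like this is exactly what makes the spreadsheet pivoting shown earlier in the session possible.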
But on the whole, I'd say we rely upon the reporting capabilities of a product like ER Studio to do the advanced reporting; we're not trying to replicate the kind of capabilities you have with something like ER Studio. And how accurate is the reverse engineered model when there are no primary or foreign keys? Is reverse engineering driven by actual data or by column and table comments? Okay, so that's a good question. For nearly all of the packages that we support, there are primary and foreign key constraints in the package data dictionary. Something like SAP, for example, has a very full definition of primary and foreign key constraints in its data dictionary — not in the system catalog, in its data dictionary. And that applies to most of the systems that we can reverse engineer. Now, there are a couple of exceptions. One of them is JD Edwards. JD Edwards doesn't have relationship definitions, but it does have a series of what they call business views, which we use to infer relationships. And PeopleSoft is similar. PeopleSoft does have relationship definitions in its data dictionary, but there are not really enough of them, so we use an inference process to get those as well. And you can control that inference process; you can decide how to generate some of those relationships. But for the vast majority of the packages — SAP, E-Business Suite, Salesforce, Dynamics, Siebel — there is a full definition of relationships within the data dictionary itself. And what steps would you take when the next version of, say, SAP comes out? So SAP's data dictionary doesn't really change; the structure of their meta-meta model doesn't really change from release to release. Where that's a little bit different is with SAP Business Warehouse. SAP has a separate instance called BW, which is used for business reporting, and the meta model, if you like, for that does tend to change from release to release.
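The inference idea Nick mentions for PeopleSoft and JD Edwards can be illustrated with a deliberately simplified rule: if a non-key column in one table has the same name as the single-column primary key of another table, propose a relationship between them. Sapphire's actual inference process is configurable and more sophisticated; this sketch, with invented PeopleSoft-style table names, only shows the general shape:

```python
# Hypothetical data dictionary fragment: table -> primary key + columns
tables = {
    "PS_CUSTOMER": {"pk": ["CUST_ID"], "columns": ["CUST_ID", "NAME1"]},
    "PS_ORDER": {"pk": ["ORDER_ID"], "columns": ["ORDER_ID", "CUST_ID", "AMT"]},
}

def infer_relationships(tables):
    """Propose child -> parent links where a non-key column matches
    another table's single-column primary key by name."""
    rels = []
    for child, cdef in tables.items():
        for col in cdef["columns"]:
            if col in cdef["pk"]:
                continue  # skip the table's own key columns
            for parent, pdef in tables.items():
                if parent != child and pdef["pk"] == [col]:
                    rels.append((child, col, parent))
    return rels

print(infer_relationships(tables))  # proposes PS_ORDER.CUST_ID -> PS_CUSTOMER
```

A name-matching rule like this over-proposes in real systems, which is why, as Nick says, the tool lets you control how relationships are generated.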
So that's part of our job, if you like: to keep up with the latest versions of BW itself. But for the core transaction system, obviously the content of the data dictionary changes from release to release; the actual way it's stored, though, and the place we're going to get that metadata from — the structure of that doesn't really change on a release-by-release basis. And I'll expand on that as well, Nick, because the follow-on question is: let's assume I've reverse engineered from SAP, and now I've put in a new release and have more metadata in one of these subject-area views that I'd previously brought into a data model. What you can do is export again out of that same SAP area. That will give you a standalone data model. And now you can use the compare-and-merge capabilities within ER Studio to compare that new model to the existing model that you have, and then you can update the new constructs in place through that compare-and-merge process. That's a good point. All righty. Well, I think we can squeeze in one more quick question here. Let me just look. So what happens if the underlying ERP data structure is updated? I think that's what Ron has just referred to. So you can refresh the contents of the Sapphire repository; you effectively re-extract the metadata, and you get an updated version of the Sapphire repository contents. If you do that, those subject areas that I showed you are maintained, so you wouldn't destroy the subject areas, if you like. And then you can use the comparison capabilities of ER Studio to identify change. Sapphire also has some comparison capabilities of its own, so you can compare two entire instances of something like SAP to identify differences. All righty. Well, thank you so much. Ron and Nick, thank you for this great presentation and this great information.
And thanks to our attendees for being so engaged in everything that we do and asking all these great questions. Great. And just a reminder that I will send a follow-up email by end of day Thursday with links to the slides and links to the recording. I'll make sure Ron's information is in there as well, and that's being shown right now. Thanks to everybody. I hope you all have a great day. Again, Ron and Nick, thank you so much, and thanks, IDERA. You're very welcome. Thank you very much. Thanks, everybody.